In this challenge, a new synthetic dataset of spontaneous micro-expressions based on CASME II [1] and SAMM [2] will be provided. Because existing micro-expression datasets are limited in size, we take advantage of a synthesis algorithm and generate data by manipulating conventional facial expression datasets. With traditional facial expressions described on a continuous manifold, the synthesized data will be image sequences, each a complete simulation of the process eliciting a micro-expression.
*The copyright of the original pictures remains with the author teams of CASME II and SAMM.
[1] Yan W J, Li X, Wang S J, et al. CASME II: An improved spontaneous micro-expression database and the baseline evaluation. PLoS ONE, 2014, 9(1).
[2] A. K. Davison, C. Lansley, N. Costen, K. Tan and M. H. Yap, "SAMM: A Spontaneous Micro-Facial Movement Dataset," IEEE Transactions on Affective Computing, vol. 9, no. 1, pp. 116-129, Jan.-March 2018, doi: 10.1109/TAFFC.2016.2573832.
For the recognition task, the dataset is labeled with three facial expression categories: positive, negative, and surprise, following the mainstream MER challenge protocol. The provided dataset will include both a training and a testing part.
The evaluation criterion will be the F1-score (or F1-measure). Unlike typical accuracy (i.e., the number of correctly recognized samples divided by the total number of samples), the F1-score is the harmonic mean of precision and recall, with a best value of 1 and a worst value of 0.
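As an illustrative sketch only (not the official evaluation script), the F1-score for a single class can be computed from raw counts of true positives, false positives, and false negatives; note that true negatives never enter the formula:

```python
# Illustrative sketch, not the official challenge evaluation code.
# F1 is the harmonic mean of precision and recall; TN is never used.

def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 40 true positives, 10 false positives, 10 false negatives
# precision = 0.8, recall = 0.8, so F1 = 0.8
print(f1_score(40, 10, 10))
```

For the three-category setting (positive, negative, surprise), the per-class F1-scores would typically be averaged to produce a single score, though the exact averaging scheme follows the challenge's own definition.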

Since False Positives (FP) and False Negatives (FN) are crucial in recognition tasks targeting micro-expressions, the F1-score, which disregards True Negatives (TN) in its calculation, is more representative of a model's accuracy and robustness, and thus provides a fairer standard for the challenge. The metrics mentioned can be calculated as follows: