Objective measures

What is facial coding?

Facial coding is a method used to analyze and quantify facial behavior when people are exposed to stimuli such as advertisements or websites, or engaged in tasks such as usability testing. Humans are social beings who communicate intentions, emotions, and goals through both verbal and nonverbal behavior. An important distinction is that nonverbal behavior, such as facial expressions, is more innate and automatic than verbal behavior: we can react without having to explicitly learn how to do it or think about it. It is often estimated that up to two-thirds of human communication is nonverbal. Thus, insights into nonverbal signals can provide valuable information that complements self-report measures and verbal feedback.

Humans intuitively interpret facial and vocal cues, but for scientific research, this interpretation traditionally required manual, frame-by-frame annotation of video recordings. Researchers either labeled prototypical emotional expressions or coded the activation of Facial Action Units (AUs), which represent specific facial muscle movements as defined by the Facial Action Coding System (FACS). With advances in computer vision and artificial intelligence, it has become possible to automatically analyze facial behavior at scale. Today, the term facial coding is commonly used to refer to this automated facial expression analysis.

Facial action coding system
Technology

How does facial coding work?

Our face analysis models are trained on large collections of public and self-collected datasets covering a wide range of people and behaviors. These datasets are labelled by the general public or by experts (certified FACS coders), and the labels are then used to train AI models to classify facial behavior. We published one of the first papers on automated expression classification, and FaceReader was the first tool for automated facial expression analysis on the market in 2008. Newer models rely on convolutional neural networks (CNNs), a class of deep learning algorithms well suited to image and video analysis. Over the years, we have continued to adapt our models to the latest knowledge in the field of affective computing. As a result, our models achieve state-of-the-art accuracy in identifying facial expressions and Action Units.
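To make the idea concrete, here is a toy sketch of the two building blocks a CNN-style classifier combines: convolving small filters over an image and turning the resulting scores into class probabilities. This is not FaceReader's actual architecture; the 5x5 "face crop", the hand-set filters, and the expression names are invented purely for illustration (real models learn thousands of filters from labelled data).

```python
import math

def conv2d(image, kernel):
    """Slide a small kernel over a grayscale image (valid padding, ReLU)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            acc = sum(image[i + di][j + dj] * kernel[di][dj]
                      for di in range(kh) for dj in range(kw))
            row.append(max(acc, 0.0))  # ReLU non-linearity
        out.append(row)
    return out

def softmax(scores):
    """Convert raw scores into probabilities that sum to 1."""
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Toy 5x5 "face crop" and two hand-set 3x3 filters standing in for learned ones.
face = [[0.1, 0.2, 0.8, 0.2, 0.1],
        [0.1, 0.9, 0.9, 0.9, 0.1],
        [0.1, 0.2, 0.8, 0.2, 0.1],
        [0.1, 0.1, 0.1, 0.1, 0.1],
        [0.6, 0.7, 0.8, 0.7, 0.6]]
filters = {
    "smile_like": [[0, 0, 0], [1, 1, 1], [0, 0, 0]],  # horizontal-edge detector
    "frown_like": [[0, 1, 0], [0, 1, 0], [0, 1, 0]],  # vertical-edge detector
}

# Global average pooling of each feature map gives one score per "expression".
scores = []
for name, kernel in filters.items():
    fmap = conv2d(face, kernel)
    scores.append(sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0])))
probs = softmax(scores)
print({name: round(p, 3) for name, p in zip(filters, probs)})
```

In a real network, many such convolution layers are stacked and the filter values are learned from the labelled datasets described above, rather than set by hand.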

FaceReader Online uses AI models to learn facial expressions from large data sets

Scientific basis and validation

Facial coding is grounded in strong and extensive scientific literature. Although emotion researchers may differ in their theoretical perspectives, the study of facial expressions has been a rich and active field for decades (see more on FaceReader and different theories on emotion).

The core technology behind FaceReader Online and the FaceReader Desktop software is similar. FaceReader Online is optimized for scalable, remote research, while FaceReader Desktop supports more detailed and controlled laboratory studies. FaceReader is unique in its extensive use and validation by both internal and independent academic researchers. You can find detailed information on validation studies here.

The most recent version of FaceReader used in FaceReader Online shows near-perfect accuracy on controlled, generic validation datasets. For FACS-based coding, the system reaches the accuracy levels required for human coder certification, with F1 scores around 80%. In these validation studies, algorithmic classifications are compared against ground-truth labels assigned to thousands of images. Performance on datasets collected in more natural, real-world conditions typically shows greater variability and lower accuracy, as expected. Still, under these conditions, FaceReader consistently performs among the best facial coding systems.
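The F1 score used in these validations is the harmonic mean of precision (how many detected activations were real) and recall (how many real activations were detected). A minimal sketch, using made-up frame-level labels for a single Action Unit:

```python
def f1_score(predicted, actual):
    """F1 for one Action Unit: harmonic mean of precision and recall."""
    tp = sum(p and a for p, a in zip(predicted, actual))          # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))      # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))      # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Hypothetical per-frame labels (True = AU active) for algorithm vs. human coder.
pred   = [True, True, False, True, False, False, True, False]
actual = [True, False, False, True, False, True, True, False]
print(round(f1_score(pred, actual), 2))  # → 0.75
```

Because F1 penalizes both missed activations and false alarms, it is a stricter measure than plain percent agreement, which is easily inflated when an Action Unit is inactive in most frames.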

Emotional storytelling & user experience

Advertising and UX applications

Beyond scientific research, which includes fields such as social psychology, neuroscience, and developmental science, facial coding is widely used in ad testing and UX research. In advertising, emotional storytelling plays a central role. Many successful ads rely on humor or emotional engagement to capture attention and drive impact. Facial coding allows researchers to pinpoint the exact moments or scenes that elicit smiles, laughter, or surprise. These expressions of happiness and positive affect have been shown to predict ad effectiveness and purchase intent.

In UX research, negative expressions (or the absence of negative expressions!) are often most informative. For example, a frown may indicate that information is difficult to process or that a task requires high cognitive effort. In quantitative studies, facial coding can highlight problematic steps in user flow; in qualitative studies, it can reveal the precise moment users struggle. When combined with eye tracking, facial coding enables even deeper insights into attention and user experience. For real-world examples, take a look at how some of our clients use FaceReader Online in our case studies.

Advertising testing and User experience testing with FaceReader Online

FAQ

Frequently asked facial coding questions

Below you will find answers to frequently asked questions about facial coding.

How accurate is facial coding?

The simple answer is 99%, as determined by the latest validation on controlled datasets of prototypical expressions. The more complex answer is that accuracy depends on many factors: the version of the software, the data it is tested on (posed versus natural data), the quality of the data, and the specific expressions being analyzed. For example, performance is higher for happy expressions than for less frequent negative expressions, and higher on controlled datasets than on “in-the-wild” data.

How accurate is automated FACS coding?

For FACS coding, there are standardized tests required for certification. Our FACS coding models have been validated to match FACS-certified human coders, achieving an F1 score of approximately 80% when annotating facial Action Units.

Is facial coding the same as facial EMG?

No. Facial EMG is recorded by placing electrodes on the face to measure contractions of the facial muscles, and these muscles can contract without producing visible movement. Imagine tensing your arms at your sides, or preparing to move them: the muscles activate, yet the arms stay perfectly still. Facial EMG can detect such subtle preparatory or non-visible activations, whereas facial coding measures visible movements and expressions. The two are therefore distinct signals, each serving its own purpose. Note that facial EMG requires a lab setting and special equipment, making it suitable only for specific research questions.

Is facial coding suitable for my research?

If your goal is to gain relevant insights into how people react to certain stimuli, the answer is probably yes! Most importantly, you should expect your stimuli to elicit a real reaction. Stimuli that provoke only very subtle or neutral reactions will not produce much expressive behavior and are better studied with lab-based methods or surveys. Facial coding is especially valuable for capturing in-the-moment reactions.

How do I get the best results from an online study?

Read this blog on how to get the most out of your online research; here we highlight some of the key points. Clear instructions greatly improve data quality: ask participants to sit centered and face the screen directly, ensure good frontal lighting (no backlighting), and minimize head movement. Also make sure participants do not place their hands in front of their face; for example, they should not be eating or drinking during the session.

Some variability in lighting or expressiveness is common. To account for this, filter out low-quality data and ensure a sufficiently large sample size to increase the signal-to-noise ratio.
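In practice, such filtering can be as simple as dropping recordings below a quality cut-off before aggregating. A minimal sketch with invented numbers; the 0.5 threshold and the quality/intensity values are assumptions, not FaceReader defaults:

```python
# Each record: (quality score 0-1 reported by the face model,
#              mean "happy" expression intensity 0-1 for that participant).
recordings = [
    (0.95, 0.62), (0.90, 0.55), (0.20, 0.05),  # third: poor lighting, drop it
    (0.85, 0.48), (0.10, 0.90), (0.92, 0.58),  # fifth: face barely found, drop it
]

QUALITY_THRESHOLD = 0.5  # assumed cut-off; tune for your own data

# Keep only recordings the model could analyze reliably, then aggregate.
kept = [happy for quality, happy in recordings if quality >= QUALITY_THRESHOLD]
mean_happy = sum(kept) / len(kept)
print(f"kept {len(kept)}/{len(recordings)} recordings, mean happy = {mean_happy:.3f}")
```

Filtering first and averaging over a sufficiently large remaining sample is what raises the signal-to-noise ratio: low-quality recordings contribute mostly noise, so excluding them sharpens the aggregate signal.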

Does facial coding measure how someone really feels?

Facial coding does not directly measure how a participant consciously labels their feelings, but it provides objective insight into their emotional responses as they occur.

Emotion is a multi-component process involving subjective experience, physiological responses, and observable behavior in response to internal or external stimuli. While self-report captures how participants reflect on and describe their feelings, facial coding measures expressive behavior – often automatically and in real time.

Importantly, facial responses may align with or differ from self-reported feelings. This convergence or divergence is highly informative, as it can reveal moments of implicit emotional engagement, hesitation, or conflict that participants may not consciously notice or report. When combined with self-report, facial coding provides a more complete and nuanced understanding of someone’s experience.

Are facial expressions universal?

This is a fascinating question that has divided and inspired researchers for decades! There is no single agreed-upon answer, although most researchers support the idea of some universal or innate patterns; how many there are and how distinct they are is where they disagree. For example, researchers who argue that emotions are psychologically constructed and not universal still often propose innate forms of core affect along arousal and valence dimensions. It is also widely accepted that there is cultural variation both in expressions and in how they are interpreted. The late Paul Ekman, the legendary founder of Basic Emotion Theory, likewise highlighted the importance of cultural differences.

Overall, while there is no full consensus, the general answer is that expressive behavior reflects a combination of universal patterns, cultural variation, and individual differences. This interesting paper illustrates this nicely, showing that the degree of universality differs across emotions.
