Behavioral science

The path to human insights

FaceReader Online brings the advanced analysis capabilities of the FaceReader ecosystem to the cloud, combining data-driven computer vision and machine learning models with scientific research methodologies anchored in emotion theory and human behavioral research.

Data visualization
Affective computing
Emotion aware

Affective computing

At the heart of the FaceReader ecosystem lies affective computing – the intersection of artificial intelligence, psychology, and cognitive science that enables machines to measure human emotions and behavior. In FaceReader, this is the scientific foundation that powers facial analysis and interpretation. Through joint analysis of facial cues, gestures, expressions, and eye movement patterns, FaceReader Online translates complex emotional signals into actionable insights, embodying the cutting edge of this empathic technology.

Objective measures

Facial Action Coding System (FACS)

FACS provides an objective measure of facial expressions in FaceReader Online, breaking down visible expressions into individual components based on facial muscle movements, known as Action Units (AUs). This granular analysis allows for an unbiased, precise representation of facial expressions, transforming subtle changes into quantifiable data. By employing FACS, FaceReader Online ensures a robust and scientific approach to decoding emotions, providing a solid objective foundation for its affective computing capabilities.
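
To make this concrete, the sketch below shows the kind of per-frame data FACS coding produces. The AU numbers and names follow the FACS standard; the values and the simple combination rule are illustrative, not FaceReader Online's internal logic.

    # Illustrative sketch: per-frame FACS Action Unit (AU) intensities.
    # AU numbers and names follow the FACS standard; the values and the rule
    # below are hypothetical, not FaceReader Online's internal logic.
    frame_aus = {
        "AU01": 0.05,  # inner brow raiser
        "AU04": 0.02,  # brow lowerer
        "AU06": 0.71,  # cheek raiser
        "AU12": 0.83,  # lip corner puller
    }

    def is_duchenne_smile(aus, threshold=0.5):
        """AU6 (cheek raiser) together with AU12 (lip corner puller) is the
        classic FACS signature of an enjoyment ("Duchenne") smile."""
        return aus.get("AU06", 0.0) >= threshold and aus.get("AU12", 0.0) >= threshold

    print(is_duchenne_smile(frame_aus))  # True for this frame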

Facial action coding system
Universal emotions

Basic emotions and theories of emotion

The concept of basic emotions, central to the most popular theories of emotion, suggests that there are universal emotions with common expressions across different cultures. FaceReader Online incorporates this theory, recognizing and analyzing these foundational emotions.

It builds upon these concepts of basic emotions and FACS Action Units to operationalize Russell's valence/arousal model. FaceReader Online maps key emotional signals from expressions and AUs onto the circumplex dimensions of valence (pleasant-unpleasant) and arousal (active-inactive), providing a rich, multi-dimensional understanding of affective states.
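
As a rough illustration of projecting categorical emotion scores onto these two dimensions, consider the sketch below. The valence convention (the positive emotion minus the strongest negative one) and the neutrality-based arousal stand-in are common simplifications, shown here as examples rather than FaceReader Online's exact formulas.

    # Illustrative projection of categorical emotion intensities onto
    # Russell's circumplex. These formulas are common simplifications,
    # not FaceReader Online's exact computation.
    NEGATIVE = ("sad", "angry", "scared", "disgusted")

    def valence(emotions):
        """Pleasant-unpleasant axis in [-1, 1]."""
        return emotions.get("happy", 0.0) - max(emotions.get(e, 0.0) for e in NEGATIVE)

    def arousal(emotions):
        """Active-inactive axis: high activation = low neutrality (a simple
        stand-in; arousal can also be derived from AU activation)."""
        return 1.0 - emotions.get("neutral", 0.0)

    scores = {"happy": 0.10, "sad": 0.55, "angry": 0.05,
              "scared": 0.02, "disgusted": 0.03, "neutral": 0.20}
    print(f"valence {valence(scores):+.2f}, arousal {arousal(scores):.2f}")
    # valence -0.45, arousal 0.80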
Basic emotions vs. circumplex model

It also offers the flexibility to accommodate alternative theories of emotion, enabling users to apply the software within diverse theoretical frameworks and research contexts, broadening its applicability and relevance across various fields of study.

Read more about alternative theories of emotion
Theorists on emotion
Representative theorists on emotions grouped into 4 major perspectives (from Gross & Barrett, 2011).
Eye tracking
Focus and attention

Eye tracking

Eye tracking within FaceReader Online enhances the understanding of user engagement and attention by identifying when, where, and at what individuals look when presented with stimuli. Further analysis of eye movement patterns and blink rates provides insights into cognitive processes. This eye tracking capability is crucial for pinpointing key stimuli that evoke emotional or cognitive responses, enabling researchers and marketers to discern which aspects of their content or products capture attention and resonate emotionally with the audience.
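
At its simplest, answering "where do people look?" means testing each gaze sample against defined areas of interest (AOIs) on the screen. A minimal sketch, with hypothetical AOI names and coordinates:

    # Minimal sketch: mapping gaze points to areas of interest (AOIs).
    # AOI names and pixel coordinates are hypothetical examples.
    AOIS = {                        # (left, top, width, height) in pixels
        "logo":   (40,  20,  200, 80),
        "banner": (0,  120, 1280, 200),
    }

    def aoi_hit(x, y):
        """Return the name of the AOI containing gaze point (x, y), or None."""
        for name, (left, top, w, h) in AOIS.items():
            if left <= x < left + w and top <= y < top + h:
                return name
        return None

    print(aoi_hit(100, 50))   # "logo"
    print(aoi_hit(640, 400))  # None (outside all AOIs)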
Dual analysis

Intersection of emotional response and gaze tracking

Eye tracking provides objective data on where and how long individuals look at specific elements, while emotional response analysis deciphers the affective impact of what they see. This synergy allows researchers to not only ascertain which aspects of a stimulus draw attention but also understand the emotional reactions they evoke.

With FaceReader Online, this dual analysis facilitates deeper insights into user engagement, content effectiveness, and consumer preferences. The integration of these two powerful tools is not just a technical achievement but also reflects an established research paradigm that emphasizes a multi-modal approach to understanding human emotions and behaviors, giving researchers a sophisticated, evidence-based framework to inform their studies and strategies.
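
One way to picture the fusion is joining time-stamped gaze data with time-stamped emotion data and aggregating the emotional response per area of interest. The record layout below is an assumption for illustration, not FaceReader Online's actual export format.

    # Sketch: fusing gaze and emotion streams into per-AOI emotional response.
    # The record layout is assumed for illustration, not FaceReader Online's
    # actual export format.
    from collections import defaultdict
    from statistics import mean

    # Aligned samples on the same clock: (time_s, aoi) and (time_s, valence).
    gaze    = [(0.0, "banner"), (0.1, "banner"), (0.2, "logo"), (0.3, "logo")]
    valence = [(0.0, 0.2),      (0.1, 0.4),      (0.2, -0.1),   (0.3, -0.3)]

    by_aoi = defaultdict(list)
    for (t_g, aoi), (t_v, v) in zip(gaze, valence):
        if aoi is not None:
            by_aoi[aoi].append(v)

    for aoi, values in by_aoi.items():
        print(f"{aoi}: mean valence {mean(values):+.2f} over {len(values)} samples")
    # banner: mean valence +0.30 over 2 samples
    # logo: mean valence -0.20 over 2 samples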
Gaze heatmap and areas
Predictive models

Validated core AI models

At the heart of the technology stack behind the FaceReader ecosystem lie our facial analysis AI models. These custom models are deep neural networks that incorporate the latest architectural advancements tailored to webcam-based facial analysis, producing efficient and robust computer vision algorithms that maximize accuracy in less-than-ideal real-world recording conditions.
Model – Face key points: creates a 3D model of the face and detects over 500 keypoints
Model – Face analysis: used to analyze the state of the face
Model – Gaze tracking: models the eyes and derives gaze angles
Model – Vital signs: used to analyze the state of the face
Algorithm – Heart rate: advanced algorithm to calculate the heart rate from changes in redness
Algorithm – Breathing rate: advanced algorithm to calculate the breathing rate
Classification – Action units: classifies over 20 facial Action Units
Algorithm – Visual attention: determines visual attention based on Action Units and gaze
Classification – Characteristics: used to analyze the state of the face
Classification – Basic emotions: classifies the 6 basic universal emotions
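
As context for the heart rate entry above: estimating pulse from subtle changes in skin redness is known as remote photoplethysmography (rPPG). The sketch below shows the general idea – band-limited spectral analysis of a per-frame skin color signal – and is not FaceReader Online's production algorithm.

    # Schematic sketch of remote photoplethysmography (rPPG): recovering
    # heart rate from frame-to-frame changes in facial redness. General
    # technique only, not FaceReader Online's production algorithm.
    import numpy as np

    def heart_rate_bpm(redness, fps):
        """redness: mean red-channel value of the facial skin region per frame."""
        signal = redness - np.mean(redness)            # remove the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)         # plausible pulse: 42-240 bpm
        peak = freqs[band][np.argmax(spectrum[band])]  # strongest in-band frequency
        return peak * 60.0                             # Hz -> beats per minute

    fps, seconds, pulse_hz = 30, 10, 1.2               # simulate a 72 bpm pulse
    t = np.arange(fps * seconds) / fps
    redness = 100 + 0.5 * np.sin(2 * np.pi * pulse_hz * t) + 0.1 * np.random.randn(t.size)
    print(f"{heart_rate_bpm(redness, fps):.0f} bpm")   # ~72 bpm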

Our face analysis models are trained on large collections of public and self-collected datasets and achieve state-of-the-art accuracy in identifying facial expressions and detecting facial Action Unit activation (the minute facial muscle movements defined under FACS). These models have been validated to match FACS-certified human coders with an F1-score of ~80% in annotating facial Action Units, and over 90% accuracy in identifying facial expressions corresponding to the basic emotions.
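
For reference, the F1-score used in that comparison balances precision and recall over per-frame AU annotations. A minimal sketch with toy labels (not validation data):

    # Sketch: F1-score comparing model AU activations with human coder
    # annotations. The labels below are toy data, not validation results.
    def f1_score(predicted, reference):
        """predicted/reference: 0/1 activations for one AU, frame by frame."""
        tp = sum(p and r for p, r in zip(predicted, reference))
        fp = sum(p and not r for p, r in zip(predicted, reference))
        fn = sum(not p and r for p, r in zip(predicted, reference))
        return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

    model = [1, 1, 0, 1, 0, 0, 1, 0]   # model's activation of one AU per frame
    coder = [1, 0, 0, 1, 0, 1, 1, 0]   # FACS-certified coder's annotation
    print(f"F1 = {f1_score(model, coder):.2f}")  # F1 = 0.75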

Facial analysis in FaceReader Online is supplemented by its gaze tracking algorithm. This deep learning-based model computes the direction of the user's gaze using only webcam images, and has been validated to produce results within an error of ~5°. By hybridizing 3D geometry with machine learning-based calibration procedures, the gaze tracking model can determine where the user is looking on the screen with an error of ~2.5 cm – typically sufficient to resolve banners and buttons on a website – with minimal calibration effort. Furthermore, eye movements are analyzed to distinguish between fixations and saccades, which are indicative of attention and cognitive load.
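
Separating fixations from saccades is commonly done with a velocity threshold (the I-VT scheme): gaze samples moving faster than a threshold in degrees per second count as saccades, the rest as fixations. A minimal sketch of that general scheme, not FaceReader Online's exact implementation:

    # Minimal sketch of velocity-threshold (I-VT) fixation/saccade
    # classification. Threshold and data are illustrative; this is the
    # general scheme, not FaceReader Online's exact implementation.
    def classify_samples(angles_deg, fps, threshold_deg_per_s=30.0):
        """angles_deg: gaze direction per frame in degrees (1-D for simplicity)."""
        labels = ["fixation"]                  # first sample has no velocity
        for prev, cur in zip(angles_deg, angles_deg[1:]):
            velocity = abs(cur - prev) * fps   # degrees per second
            labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
        return labels

    gaze = [10.0, 10.1, 10.1, 14.0, 18.2, 18.3, 18.3]  # a rapid jump mid-sequence
    print(classify_samples(gaze, fps=30))
    # ['fixation', 'fixation', 'fixation', 'saccade', 'saccade', 'fixation', 'fixation']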
Datasets

Trained on high quality data

Our AI models are only as good as the data they learn from. FaceReader Online's models are trained on an expansive collection of high quality facial image data, curated from both self-collected proprietary datasets and leading public datasets. A key strength of this training data is its meticulous facial expression and Action Unit labels, coded by multiple FACS-certified facial coders for every data sample. The datasets also span a varied range of real-world conditions and diverse demographics, ensuring minimal bias in our models.
Media example
Expose your participants to items from your media library. Conduct a thorough assessment of your messaging and creative designs to evaluate their effectiveness and identify areas for improvement. Use this process to reliably forecast an advertisement's potential to engage and captivate the audience's attention.
Question example
Invite participants to respond to a query, capturing their facial expressions during the presentation of the question. This technique can be applied within a structured questionnaire or blended with additional stimuli to gather precise data and insights. Employing this strategy allows for a deeper understanding and nuanced analysis of participants’ perspectives on the discussed topic, offering valuable and intricate feedback.
Website example
Conduct an interactive session where participants are requested to explore a designated website. This engagement is aimed at meticulously monitoring their navigation patterns and interactions with the site’s content and features. By analyzing these interactions, you can gain invaluable insights into user behavior and preferences, thereby identifying key areas for enhancement to optimize the website’s design and functionality.

Request a free trial

Get your free whitepaper

Get your free example report