The path to human insights
FaceReader Online brings the advanced analysis capabilities of the FaceReader ecosystem to the cloud, combining data-driven computer vision and machine learning models with scientific research methodologies anchored in emotion theory and human behavioral research.
Affective computing
At the heart of the FaceReader ecosystem lies affective computing – the intersection of artificial intelligence, psychology, and cognitive science – which enables machines to measure human emotions and behavior. In FaceReader, this is the scientific foundation that powers facial analysis and interpretation. Through joint analysis of facial cues, gestures, expressions, and eye movement patterns, FaceReader Online translates complex emotional signals into actionable insights, embodying the cutting edge of this empathic technology.
Facial Action Coding System (FACS)
FACS provides an objective measure of facial expressions in FaceReader Online, breaking down visible expressions into individual components based on facial muscle movements, known as Action Units (AUs). This granular analysis allows for an unbiased, precise representation of facial expressions, transforming subtle changes into quantifiable data. By employing FACS, FaceReader Online ensures a robust, scientific approach to decoding emotions and provides a solid objective foundation for its affective computing capabilities.
Basic emotions and theories of emotion
FaceReader Online builds on these concepts – basic emotions and FACS Action Units – to operationalize Russell's Valence/Arousal model. It maps key emotional signals from expressions and AUs onto the circumplex dimensions of Valence (pleasant–unpleasant) and Arousal (active–inactive), thereby providing a rich, multi-dimensional understanding of affective states.
It also offers the flexibility to accommodate alternative theories of emotion, enabling users to apply the software within diverse theoretical frameworks and research contexts and broadening its applicability across various fields of study.
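As an illustration of the circumplex mapping, the sketch below projects a set of basic-emotion intensities onto Valence/Arousal coordinates. The coordinate values are rough approximations of where each emotion sits on Russell's circumplex; they are assumptions for demonstration, not FaceReader Online's calibrated parameters:

```python
# Illustrative circumplex coordinates (valence, arousal) per basic
# emotion. Values are approximate placements on Russell's model,
# not FaceReader Online's calibrated mapping.
CIRCUMPLEX = {
    "happy":     ( 0.9,  0.4),
    "sad":       (-0.7, -0.4),
    "angry":     (-0.8,  0.7),
    "surprised": ( 0.1,  0.9),
    "scared":    (-0.6,  0.8),
    "disgusted": (-0.7,  0.2),
}

def to_valence_arousal(intensities: dict[str, float]) -> tuple[float, float]:
    """Intensity-weighted average of the emotion coordinates."""
    total = sum(intensities.values())
    if total == 0:
        return (0.0, 0.0)  # neutral face: origin of the circumplex
    v = sum(CIRCUMPLEX[e][0] * w for e, w in intensities.items()) / total
    a = sum(CIRCUMPLEX[e][1] * w for e, w in intensities.items()) / total
    return (v, a)

# A mostly happy, slightly surprised face lands in the
# pleasant/active quadrant.
print(to_valence_arousal({"happy": 0.8, "surprised": 0.2}))
```

The appeal of this representation is that it degrades gracefully for blended or ambiguous expressions, which rarely fit a single basic-emotion label.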
Eye tracking
Intersection of Emotional Response and Gaze Tracking
In FaceReader Online, the combined analysis of emotional responses and gaze facilitates deeper insights into user engagement, content effectiveness, and consumer preferences. The integration of these two tools is not just a technical achievement but a reflection of an established research paradigm that emphasizes a multi-modal approach to understanding human emotions and behaviors, giving researchers a sophisticated, evidence-based framework to inform their studies and strategies.
Validated core AI models
- Creates a 3D model of the face and detects over 500 keypoints
- Models the eyes and derives gaze angles
- Advanced algorithm to calculate the heart rate from changes in facial redness
- Advanced algorithm to calculate the breathing rate
- Classifies over 20 facial Action Units
- Classifies the 6 basic universal emotions
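The heart-rate entry above refers to remote photoplethysmography (rPPG): subtle, pulse-synchronous color changes in the face are extracted from webcam frames and converted to beats per minute. The sketch below shows the core idea on a synthetic redness signal – finding the dominant frequency in a plausible heart-rate band – and is not FaceReader Online's actual algorithm:

```python
import numpy as np

# Minimal rPPG-style sketch: estimate heart rate from a time series of
# mean facial redness per frame. A synthetic sine wave stands in for
# real webcam data; this is not FaceReader Online's algorithm.

def estimate_heart_rate(redness: np.ndarray, fps: float) -> float:
    """Return the dominant frequency in the 0.7-3.0 Hz band, in BPM."""
    signal = redness - redness.mean()            # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    band = (freqs >= 0.7) & (freqs <= 3.0)       # plausible range: 42-180 BPM
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

fps = 30.0
t = np.arange(0, 20, 1.0 / fps)                       # 20 s of frames
redness = 0.5 + 0.01 * np.sin(2 * np.pi * 1.2 * t)    # simulated 72 BPM pulse
print(round(estimate_heart_rate(redness, fps)))       # 72
```

Real signals are far noisier (motion, lighting), which is why production algorithms add face tracking, region selection, and temporal filtering on top of this basic frequency analysis.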
Facial analysis in FaceReader Online is supplemented by its gaze-tracking algorithm. This deep learning-based model computes the direction of the user's gaze using only webcam images, and has been validated to produce results within an error of ~5°. By combining 3D geometry with machine learning-based calibration, the model can determine where on the screen the user is looking with an error of ~2.5 cm – typically sufficient to resolve banners and buttons on a website – with minimal calibration effort. Furthermore, the pattern of eye movements is analyzed to distinguish fixations from saccades, which are indicative of attention and cognitive load.
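A common way to separate fixations from saccades is velocity-threshold identification (I-VT): intervals where the gaze angle changes faster than a threshold are labeled saccades, the rest fixations. The sketch below uses the textbook 30°/s threshold on a 1D angle series; the threshold and simplified geometry are assumptions, not FaceReader Online's parameters:

```python
# Illustrative velocity-threshold (I-VT) classification of gaze samples.
# The 30 deg/s threshold is a common textbook value, and the 1D angle
# series is a simplification; not FaceReader Online's implementation.

def classify_samples(angles_deg, fps, threshold_deg_per_s=30.0):
    """Label each inter-sample interval as 'fixation' or 'saccade'."""
    labels = []
    for prev, curr in zip(angles_deg, angles_deg[1:]):
        velocity = abs(curr - prev) * fps  # deg/s between consecutive frames
        labels.append("saccade" if velocity > threshold_deg_per_s else "fixation")
    return labels

# Gaze held steady, then a rapid 5-degree jump, then steady again.
samples = [10.0, 10.05, 10.1, 15.1, 15.12, 15.15]
print(classify_samples(samples, fps=60.0))
```

Consecutive fixation-labeled intervals are then merged into fixation events, whose durations and locations feed attention and cognitive-load metrics.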