State of the art machine learning

FaceReader Online analyses facial expressions using the FaceReader engine, a software program for automated facial analysis. It has proven its value at more than 300 sites worldwide. FaceReader technology has been refined over the past 20 years, with scientific research serving as input for the software development plans. The analysis and related services run on the Microsoft Azure cloud platform.
Using state-of-the-art machine learning techniques, the FaceReader engine is capable of recognizing the basic facial expressions (Neutral, Happy, Sad, Angry, Surprised, Scared, Disgusted) as well as a set of the 20 most commonly used Facial Action Units. FaceReader Online directly provides the basic facial expressions; to obtain Action Unit output, the videos must be reanalyzed in FaceReader.

Facial expression analysis

FaceReader has been trained to classify expressions into one of the following categories:
  • Neutral
  • Happy
  • Sad
  • Angry
  • Surprised
  • Scared
  • Disgusted
These emotional categories have been described by Ekman [1] as the basic or universal emotions. Additionally, FaceReader can classify Valence, a measure of the positive or negative attitude of your participants towards your content, and Arousal, which indicates the level of excitement your content induces in your participants.
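To make the valence measure concrete, here is a minimal Python sketch. It assumes a simple convention (valence as the intensity of Happy minus the intensity of the strongest negative expression) and is an illustration of the idea, not FaceReader's exact production formula:

```python
# Sketch: deriving valence from expression intensities (each in [0, 1]).
# The convention used here (Happy minus the strongest negative expression)
# is an assumption for illustration, not the verified production formula.
def valence(intensities):
    negatives = ("Sad", "Angry", "Scared", "Disgusted")
    return intensities["Happy"] - max(intensities[e] for e in negatives)

sample = {"Neutral": 0.2, "Happy": 0.7, "Sad": 0.1, "Angry": 0.05,
          "Surprised": 0.1, "Scared": 0.02, "Disgusted": 0.03}
print(round(valence(sample), 2))  # 0.6
```

A value near +1 indicates a strongly positive attitude, a value near -1 a strongly negative one.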

The FaceReader engine has been in development for about 20 years, improving recognition accuracy year over year. Please visit the product description page at VicarVision or Noldus if you wish to read more about FaceReader. Below you will find a description of the different steps of the algorithm.
Face Detection

Find the face in the image

The first step of our face analysis system consists of accurately finding the location and size of faces in arbitrary scenes, under varying lighting conditions and against complex backgrounds. Face detection, combined with eye detection, gives us a solid starting point for the subsequent facial modeling and expression analysis. FaceReader uses the popular Viola-Jones algorithm [5] to detect the presence of a face.
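FaceReader's detector itself is proprietary, but the core trick that makes Viola-Jones fast is easy to sketch: the integral image lets the sum over any rectangle, and hence any Haar-like feature, be computed in constant time. A minimal Python/NumPy illustration:

```python
import numpy as np

def integral_image(img):
    """ii[y, x] holds the sum of all pixels above and to the
    left of (y, x), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, top, left, height, width):
    """Sum of a rectangle via four lookups into the integral image,
    independent of the rectangle's size."""
    bottom, right = top + height - 1, left + width - 1
    total = ii[bottom, right]
    if top > 0:
        total -= ii[top - 1, right]
    if left > 0:
        total -= ii[bottom, left - 1]
    if top > 0 and left > 0:
        total += ii[top - 1, left - 1]
    return total

def haar_two_rect(ii, top, left, height, width):
    """A two-rectangle Haar-like feature: difference between the sums
    of two adjacent rectangles, evaluated in constant time."""
    half = width // 2
    return (rect_sum(ii, top, left, height, half)
            - rect_sum(ii, top, left + half, height, half))

img = np.arange(16, dtype=np.float64).reshape(4, 4)
ii = integral_image(img)
print(rect_sum(ii, 1, 1, 2, 2))  # 30.0, same as img[1:3, 1:3].sum()
```

Viola-Jones evaluates thousands of such features per window and chains weak classifiers into a boosted cascade that rejects non-face regions early.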
Face Modeling

Build a detailed model of the face

The next step is an accurate modeling of the face, using an algorithmic approach based on the Active Appearance Model described by Cootes and Taylor [6]. The model is trained with a database of annotated images. It describes over 500 key points in the face, as well as the facial texture of the area spanned by these points. The key points include (A) the points that enclose the face (the part of the face that FaceReader analyzes) and (B) points in the face that are easily recognizable (lips, eyebrows, nose and eyes). The texture is important because it provides extra information about the state of the face: the key points describe the global position and the shape of the face, but give no information about, for example, the presence of wrinkles or the shape of the eyebrows. These are important cues for classifying the facial expressions.
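The shape part of an Active Appearance Model can be sketched in a few lines of NumPy: landmark shapes from a training set are reduced with PCA, so any face shape is approximated as a mean shape plus a weighted sum of variation modes. The data below is synthetic and the model far smaller than FaceReader's 500-point model; it only illustrates the principle:

```python
import numpy as np

# Toy training set: each row is a flattened (x, y) landmark shape with
# 3 landmarks. A real AAM is trained on hundreds of annotated faces.
rng = np.random.default_rng(0)
mean_true = np.array([0.0, 0.0, 1.0, 0.0, 0.5, 1.0])
shapes = mean_true + 0.05 * rng.standard_normal((50, 6))

# PCA: a new shape is approximated as  s ~ s_mean + P @ b,
# where the columns of P are the main modes of shape variation.
s_mean = shapes.mean(axis=0)
U, S, Vt = np.linalg.svd(shapes - s_mean, full_matrices=False)
P = Vt[:2].T  # keep the two strongest modes

def project(shape):
    """Shape parameters b for a given landmark vector."""
    return P.T @ (shape - s_mean)

def reconstruct(b):
    """Landmark vector implied by shape parameters b."""
    return s_mean + P @ b

s = shapes[0]
err = np.linalg.norm(reconstruct(project(s)) - s)
print(err)  # small residual: two modes already explain most variation
```

Fitting the model to a new image amounts to searching for the parameters b (plus pose and texture parameters) that best explain the observed pixels.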
Face Classification

Extract information from the face

Using the face model and the input image, the facial expression is classified by a state-of-the-art deep neural network [7]. Over 10,000 manually annotated images were used as training material.
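FaceReader's network architecture is described in [7]; the final step of any such classifier, mapping learned features to the seven expression categories, can be sketched as a softmax layer. The weights below are random stand-ins, not trained values:

```python
import numpy as np

EXPRESSIONS = ["Neutral", "Happy", "Sad", "Angry",
               "Surprised", "Scared", "Disgusted"]

def softmax(z):
    z = z - z.max()            # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

# Hypothetical final layer: 16 face-model features -> 7 class scores.
# In a trained network these weights are learned from annotated images.
rng = np.random.default_rng(1)
W = 0.1 * rng.standard_normal((7, 16))
b = np.zeros(7)

features = rng.standard_normal(16)   # stand-in for extracted features
probs = softmax(W @ features + b)    # one probability per expression
label = EXPRESSIONS[int(np.argmax(probs))]
print(label)
```

The output is a probability per expression category; the reported expression is simply the one with the highest probability.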
White paper

The FaceReader methodology explained

Would you like to learn more about facial expression analysis using FaceReader software? Download our free white paper for a detailed explanation of the methodology.
Microsoft Azure

Cloud architecture

All FaceReader Online processes run on the reliable Microsoft Azure cloud platform. Running the analysis in the cloud brings a number of advantages for our customers:
  • Rapid scaling – Processing capacity scales to deal with sudden bursts of demand; even the recordings of thousands of participants can be analyzed within minutes.
  • High reliability and availability – Microsoft guarantees an uptime of over 99.9%.
  • Geo-redundancy – Servers are located in different geographical regions. This further improves availability and provides better connectivity (ping/bandwidth) for users all over the world.
  • Maintainability – You always work with the latest version of our software.

  1. Ekman, P. (1970). Universal facial expressions of emotion. California Mental Health Research Digest, 8, 151-158.
  2. Van Kuilenburg, H.; Wiering, M; Den Uyl, M.J. (2005). A Model Based Method for Automatic Facial Expression Recognition. Proceedings of the 16th European Conference on Machine Learning, Porto, Portugal, 2005, pp. 194-205, Springer-Verlag GmbH.
  3. Den Uyl, M.J.; Van Kuilenburg, H. (2005). The FaceReader: Online facial expression recognition. Proceedings of Measuring Behavior 2005, Wageningen, The Netherlands, August 30 – September 2, 2005, pp. 589-590.
  4. Van Kuilenburg, H.; Den Uyl, M.J.; Israël, M.L.; Ivan, P. (2008). Advances in face and gesture analysis. Proceedings of Measuring Behavior 2008, Maastricht, The Netherlands, August 26-29, 2008, pp. 371-372.
  5. Viola, P.; Jones, M. (2001). Rapid object detection using a boosted cascade of simple features. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, U.S.A., December 8-14, 2001.
  6. Cootes, T.; Taylor, C. (2000). Statistical models of appearance for computer vision. Technical report, University of Manchester, Wolfson Image Analysis Unit, Imaging Science and Biomedical Engineering.
  7. Gudi, A. (2015). Recognizing semantic features in faces using deep learning. arXiv preprint arXiv:1512.00743.
