Scientific publications
Read about the research that supports the FaceReader Ecosystem
Over the past 20+ years, our facial coding platform and its embedded technologies have been both the subject of and the preferred instrument for numerous peer-reviewed scientific studies. Below we present a comprehensive overview of the literature that has emerged from these studies, highlighting and validating the cutting-edge technology of FaceReader Online.
2022
3 citations
Proximally Sensitive Error for Anomaly Detection and Feature Learning
A. Gudi, F. Büttner, J. van Gemert
Mean squared error (MSE) is widely used to measure differences between multi-dimensional entities, including images. However, MSE lacks local sensitivity, as it does not consider the spatial arrangement of pixel differences, which is crucial for structured data such as images. Such spatial arrangements carry information about the source of the differences; an error function that incorporates the location of errors can therefore offer a more meaningful distance measure. We introduce Proximally Sensitive Error (PSE), suggesting that emphasizing regions in the error measure can highlight semantic differences between images over syntactic or random deviations. We demonstrate that this emphasis can be leveraged for anomaly or occlusion detection. Additionally, we explore its utility as a loss function to help models focus on learning representations of semantic objects instead of minimizing syntactic reconstruction noise.
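The core idea, making spatially clustered differences count for more than isolated pixel deviations, can be illustrated with a toy sketch. This is an illustration of the concept only, not the authors' exact PSE formulation: the multiplicative weighting by a Gaussian-smoothed error map and the `sigma` parameter are assumptions chosen for demonstration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def proximally_sensitive_error(x, y, sigma=2.0):
    """Toy spatially sensitive error between two images.

    The squared-error map is weighted by a Gaussian-smoothed copy of
    itself, so errors surrounded by other errors (structured, semantic
    differences) are amplified, while isolated noise pixels are not.
    """
    err = (x - y) ** 2
    weight = gaussian_filter(err, sigma=sigma)  # local error density
    return float((err * weight).sum())
```

Comparing a contiguous 5x5 block of differing pixels against 25 scattered differing pixels, plain MSE scores both identically, while this spatially sensitive score ranks the clustered difference higher.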
2021
14 citations
Complex website tasks increase the expression anger measured with FaceReader Online
L. Talen and T.E. den Uyl
To stand out among the wide variety of websites that exist, a website should offer users a good experience. Earlier research found that a good experience influences important user-behavior statistics, such as continued use of the website. Complexity appears to play a role in the usability of websites, and a blockage or delay in reaching a goal leads to negative feelings such as frustration. In this study, FaceReader Online, a tool that measures facial expressions via the internet, was used to measure the effect of the complexity of website design and website tasks on the facial expression of anger. Because the expression scores had a low intensity, we calculated a metric for the peak expression. The results indicated that in the more complex tasks the facial expression of anger was higher. These results suggest that the automatically detected facial expression of anger could be used to measure usability aspects of a website.
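The abstract does not spell out how the peak-expression metric is defined; a minimal sketch of one plausible variant, the mean of the top fraction of per-frame intensities, could look like this. The `top_frac` parameter and the aggregation choice are assumptions for illustration, not the paper's definition.

```python
import numpy as np

def peak_expression(scores, top_frac=0.1):
    """Summarize a low-intensity expression trace by its peaks.

    scores: per-frame expression intensities in [0, 1].
    Returns the mean of the highest `top_frac` fraction of frames,
    so a brief burst of anger is not diluted by long neutral stretches.
    """
    scores = np.asarray(scores, dtype=float)
    k = max(1, int(len(scores) * top_frac))
    return float(np.sort(scores)[-k:].mean())
```

On a mostly flat trace with a brief spike, this returns roughly the spike intensity, whereas the plain mean would wash it out.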
2021
26 citations
Efficiency in real-time webcam gaze tracking
A. Gudi, X. Li and J. van Gemert
Efficiency and ease of use are essential for practical applications of camera-based eye/gaze tracking. Gaze tracking involves estimating where a person is looking on a screen based on face images from a computer-facing camera. In this paper, we investigate two complementary forms of efficiency in gaze tracking: 1. the computational efficiency of the system, which is dominated by the inference speed of a CNN predicting gaze vectors; 2. the usability efficiency, which is determined by the tediousness of the mandatory calibration of the gaze vector to a computer screen. To do so, we evaluate the computational speed/accuracy trade-off for the CNN and the calibration effort/accuracy trade-off for screen calibration. For the CNN, we evaluate full-face, two-eyes, and single-eye input. For screen calibration, we measure the number of calibration points needed and evaluate three types of calibration: 1. pure geometry, 2. pure machine learning, and 3. hybrid geometric regression. Results suggest that single-eye input and geometric regression calibration achieve the best trade-off.
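The screen-calibration step maps predicted gaze vectors to on-screen coordinates using a handful of known calibration points. A minimal sketch of the pure-regression flavor, here a least-squares affine fit from (yaw, pitch) angles to pixel coordinates, might look as follows; the affine form and the angle parameterization are simplifying assumptions, not the paper's exact models.

```python
import numpy as np

def fit_screen_calibration(gaze_vecs, screen_pts):
    """Least-squares affine map from gaze angles to screen coordinates.

    gaze_vecs: (n, 2) array of (yaw, pitch) predictions collected while
    the user fixates known targets; screen_pts: (n, 2) target pixels.
    Returns a (3, 2) weight matrix including a bias row.
    """
    G = np.hstack([gaze_vecs, np.ones((len(gaze_vecs), 1))])  # add bias column
    W, *_ = np.linalg.lstsq(G, screen_pts, rcond=None)
    return W

def apply_calibration(W, gaze_vec):
    """Map a single (yaw, pitch) prediction to a screen point."""
    return np.append(gaze_vec, 1.0) @ W
```

An affine map has only six parameters, which is one reason regression-style calibration can get away with very few calibration points.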
2020
45 citations
Real-Time Webcam Heart-Rate and Variability Estimation with Clean Ground Truth for Evaluation
A. Gudi, M. Bittner and J. van Gemert
Remote photoplethysmography (rPPG) enables heart rate (HR) estimation using a camera by detecting skin reflectance changes associated with blood volume variations. Beyond HR, heart rate variability (HRV), the fine fluctuations between heartbeats, offers insights into physiological and psychological states but requires precise heartbeat timing. This study introduces an efficient, real-time rPPG pipeline with novel filtering and motion suppression techniques that not only estimate HR but also extract pulse waveforms to accurately time heartbeats and measure HRV. The unsupervised method operates in real time without rPPG-specific training. Additionally, the authors present VicarPPG 2, a new multi-modal video dataset designed to evaluate rPPG algorithms for HR and HRV estimation. The method is validated across various conditions using a comprehensive range of public and self-recorded datasets, demonstrating state-of-the-art results and providing insights into unique aspects of rPPG analysis. Furthermore, CleanerPPG, a collection of human-verified ground truth peak/heartbeat annotations for existing rPPG datasets, is introduced to enhance the accuracy, standardization, and fairness of future rPPG algorithm evaluations.
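The front half of such a pipeline, turning a raw skin-color trace into an HR estimate via the dominant spectral peak in the plausible heart-rate band, can be sketched as follows. This is a simplified illustration rather than the paper's pipeline: the band limits and the plain FFT peak-picking are assumptions.

```python
import numpy as np

def estimate_hr_bpm(trace, fs, lo_hz=0.7, hi_hz=4.0):
    """Estimate heart rate (bpm) from a raw rPPG trace.

    trace: per-frame skin-color samples (e.g. mean green value over a
    face region); fs: camera frame rate in Hz. The dominant frequency
    in the 0.7-4 Hz band (42-240 bpm) is taken as the pulse frequency.
    """
    x = np.asarray(trace, dtype=float)
    x = x - x.mean()                              # remove DC offset
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    band = (freqs >= lo_hz) & (freqs <= hi_hz)    # plausible HR range
    return 60.0 * freqs[band][np.argmax(power[band])]
```

HRV, by contrast, cannot be read off the spectrum this way: it needs the timing of individual beats, which is why the pipeline described above extracts the pulse waveform itself.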
2019
36 citations
Efficient Real-Time Camera Based Estimation of Heart Rate and Its Variability
A. Gudi, M. Bittner, R. Lochmans and J. van Gemert
Remote photoplethysmography (rPPG) utilizes a camera to estimate a person's heart rate (HR). Beyond HR, heart rate variability (HRV) offers insights into physiological and psychological conditions by measuring fluctuations between heartbeats. Accurate HRV assessment requires precise heartbeat timing. This paper introduces an efficient real-time rPPG pipeline featuring novel filtering and motion suppression techniques that enhance HR estimation accuracy and extract pulse waveforms for HRV measurement. The method operates in real time without the need for rPPG-specific training. Validation on a self-recorded dataset under ideal lab conditions and two public datasets with realistic scenarios demonstrates state-of-the-art performance.
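Once beat times have been extracted from the pulse waveform, standard HRV statistics follow directly from the inter-beat intervals. RMSSD (root mean square of successive differences) is a common one; this is the generic textbook formula, not anything specific to this paper's pipeline.

```python
import numpy as np

def hrv_rmssd(peak_times):
    """RMSSD in milliseconds from heartbeat timestamps (seconds).

    Computes inter-beat intervals (IBIs) from consecutive peak times,
    then the root mean square of successive IBI differences, a standard
    short-term HRV measure.
    """
    ibi_ms = np.diff(np.asarray(peak_times, dtype=float)) * 1000.0
    return float(np.sqrt(np.mean(np.diff(ibi_ms) ** 2)))
```

A perfectly regular pulse yields an RMSSD of zero; beat-to-beat alternation shows up directly in the score, which is why precise heartbeat timing matters so much for HRV.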
2018
35 citations
Conveying facial expressions to blind and visually impaired persons through a wearable vibrotactile device
H.P. Buimer, M. Bittner, T. Kostelijk, T.M. van der Geest, A. Nemri, R. J. A. van Wezel and Y. Zhao
In face-to-face social interactions, blind and visually impaired persons (VIPs) lack access to nonverbal cues such as facial expressions, body posture, and gestures, which may impair interpersonal communication. In this study, a wearable sensory substitution device (SSD) consisting of a head-mounted camera and a haptic belt was evaluated to determine whether vibrotactile cues around the waist can convey facial expressions to users and whether such a device is desired by VIPs in daily living situations. Ten VIPs and ten sighted persons (SPs) participated in the study, in which validated sets of pictures, silent videos, and videos with audio of facial expressions were presented to the participants. A control measurement was first performed to determine how accurately participants could identify facial expressions while relying on their functional senses. After a short training, participants were asked to determine facial expressions while wearing the emotion feedback system. VIPs using the device showed significant improvements in their ability to determine which facial expressions were shown. A significant increase in accuracy of 44.4% was found across all types of stimuli when comparing the scores of the control and supported phases. The greatest improvements achieved with the support of the SSD were found for silent stimuli. SPs also showed consistent, though not statistically significant, improvements while supported. Overall, our study shows that vibrotactile cues are well suited to conveying facial expressions to VIPs in real time. Participants became skilled with the device after a short training session. Further testing and development of the SSD are required to improve its accuracy and aesthetics for potential daily use.