Scientific publications

Read about the research that supports the FaceReader Ecosystem

Over the past 20+ years, our facial coding platform and its embedded technologies have been both the subject of and the preferred instrument for numerous peer-reviewed scientific studies. Below we present a comprehensive overview of the literature that has emerged from these studies, highlighting and validating the technology behind FaceReader Online.
2007
27 citations
Affective multimodal mirror: sensing and eliciting laughter
W. A. Melder, K. P. Truong, M. den Uyl, D. A. van Leeuwen, M. A. Neerincx, L. R. Roos, B. S. Plum
In this paper, we present a multimodal affective mirror that senses and elicits laughter. Currently, the mirror contains a vocal and a facial affect-sensing module, a component that fuses the output of these two modules into a user-state assessment, a user-state transition model, and a component that presents audiovisual affective feedback intended to keep or bring the user into the intended state. Interaction with this intelligent interface involves a full cyclic process of sensing, interpreting, reacting, sensing, and interpreting. The intention of the mirror is to evoke positive emotions: to make people laugh and to intensify that laughter. First user-experience tests showed that users exhibit cooperative behavior, resulting in mutual user-mirror action-reaction cycles. Most users enjoyed interacting with the mirror and reported an immersive, highly positive experience.
2007
3 citations
Visual Alphabets: Video Classification by End Users
M. Israël, E.L. van den Broek, P. van der Putten, M.J. den Uyl
The work presented here introduces a real-time automatic scene classifier within content-based video retrieval. In our envisioned approach, end users like documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification consists of a two-stage procedure. First, small image fragments called patches are classified. Second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification. The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, as an alternative to letting an image processing expert determine features a priori. The end user or domain expert thus builds a visual alphabet that can be used to describe the image in features that are relevant for the task at hand. We present results for experiments on a variety of patch and image classes. The scene classifier approach has been successfully applied to other domains of video content analysis, such as content-based video retrieval in television archives, automated sewer inspection, and porn filtering.
2007
14 citations
Distance Measures for Gabor Jets-Based Face Authentication: A Comparative Evaluation
D. González-Jiménez, M. Bicego, J.W.H. Tangelder, B.A.M. Schouten, O. Ambekar, J.L. Alba-Castro, E. Grosso, M. Tistarelli
Local Gabor features have been widely used in face recognition systems. Once the sets of jets have been extracted from the two faces to be compared, a proper measure of similarity between corresponding features should be chosen. For instance, in the well-known Elastic Bunch Graph Matching approach and other Gabor-based face recognition systems, the cosine distance was used as a measure. In this paper, we provide an empirical evaluation of seven distance measures for comparison, using a recently introduced face recognition system based on Shape Driven Gabor Jets. Moreover, we evaluate different normalization factors that are used to pre-process the jets. Experimental results on the BANCA database suggest that the concrete type of normalization applied to jets is a critical factor, and that some combinations of normalization and distance achieve better performance than the classical cosine measure for jet comparison.
2007
84 citations
Using Emotion in Games: Emotional Flowers
R. Bernhaupt, A. Boldt, T. Mirlacher, D. Wilfinger, M. Tscheligi
The “Emotional Flowers” game utilizes players’ facial expressions to control the growth of a flower, aiming to elicit emotional reactions such as happiness and surprise. Multiple players can participate simultaneously, with their flowers displayed on a public ambient display, influencing both individual emotions and social interactions. This paper presents the design, implementation, and evaluation of the game.
2006
612 citations
How to capture the heart? Reviewing 20 years of emotion measurement in advertising
K. Poels, S. Dewitte
In recent decades, emotions have become a significant research focus across behavioral sciences, particularly in advertising. However, the literature on measuring emotions in advertising lacks clarity. This article aims to update the various methods used for measuring emotions in advertising, discussing their validity and applicability. It also examines the relationship between emotions and traditional measures of advertising effectiveness, offering recommendations for using different methods and suggesting directions for future research.
2006
15 citations
Learning a Sparse Representation from Multiple Still Images for On-Line Face Recognition in an Unconstrained Environment
J.W.H. Tangelder, B.A.M. Schouten
In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution, and the detected faces will not be precisely aligned. This paper presents a new approach to on-line face identification from multiple still images obtained under such unconstrained conditions. Our method learns a sparse representation of the most discriminative descriptors of the detected face images according to their classification accuracies. On-line face recognition is supported using a single descriptor of a face image as a query. We apply our method to our newly introduced BHG descriptor, the SIFT descriptor, and the LBP descriptor, which offer limited robustness against illumination, pose, and alignment errors. Our experimental results on a video face database of pairs of unconstrained low-resolution video clips of ten subjects show that our method achieves a recognition rate of 94% with a sparse representation containing 10% of all available data, at a false acceptance rate of 4%.