Scientific publications

Read about the research that supports the FaceReader Ecosystem

Over the past 20+ years, our facial coding platform and its embedded technologies have been both the subject of and the preferred instrument for numerous peer-reviewed scientific studies. Below we present a comprehensive overview of the literature that has emerged from these studies, highlighting and validating the cutting-edge technology of FaceReader Online.
2006
15 citations
Learning a Sparse Representation from Multiple Still Images for On-Line Face Recognition in an Unconstrained Environment
J.W.H. Tangelder, B.A.M. Schouten
In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution. Moreover, the detected faces will not be precisely aligned. This paper presents a new approach to on-line face identification from multiple still images obtained under such unconstrained conditions. Our method learns a sparse representation of the most discriminative descriptors of the detected face images according to their classification accuracies. On-line face recognition is supported using a single descriptor of a face image as a query. We apply our method to our newly introduced BHG descriptor, the SIFT descriptor, and the LBP descriptor, which obtain limited robustness against illumination, pose and alignment errors. Our experimental results using a video face database of pairs of unconstrained low resolution video clips of ten subjects show that our method achieves a recognition rate of 94% with a sparse representation containing 10% of all available data, at a false acceptance rate of 4%.
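The query step described above lends itself to a compact illustration. The sketch below assumes descriptors are fixed-length vectors, and substitutes random subsampling and cosine-similarity matching for the paper's accuracy-based selection of discriminative descriptors; the BHG, SIFT, and LBP descriptors themselves are not reproduced here.

```python
import numpy as np

def build_gallery(descriptors_per_subject, keep_fraction=0.1, rng=None):
    """Keep a sparse subset of each subject's descriptors.

    The paper selects the most discriminative descriptors by their
    classification accuracy; as a stand-in, this sketch samples a
    random fraction (an assumption, not the original criterion).
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    gallery, labels = [], []
    for label, descs in descriptors_per_subject.items():
        descs = np.asarray(descs, dtype=float)
        n_keep = max(1, int(len(descs) * keep_fraction))
        idx = rng.choice(len(descs), size=n_keep, replace=False)
        gallery.append(descs[idx])
        labels += [label] * n_keep
    return np.vstack(gallery), np.array(labels)

def identify(query, gallery, labels, threshold=0.8):
    """Match one query descriptor against the sparse gallery by cosine
    similarity; reject as unknown below the acceptance threshold."""
    q = np.asarray(query, dtype=float)
    q = q / np.linalg.norm(q)
    g = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    sims = g @ q
    best = int(np.argmax(sims))
    return labels[best] if sims[best] >= threshold else None
```

The acceptance threshold trades recognition rate against false acceptances, mirroring the 94%/4% operating point reported in the abstract.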
2005
179 citations
A Model Based Method for Automatic Facial Expression Recognition
H. van Kuilenburg, M. Wiering and M. den Uyl
Automatic facial expression recognition is a research topic with interesting applications in the fields of human-computer interaction, psychology, and product marketing. The classification accuracy for an automatic system that uses static images as input is, however, largely limited by the image quality, lighting conditions, and the orientation of the depicted face. These problems can be partially overcome by using a holistic model-based approach called the Active Appearance Model. A system will be described that can classify expressions from one of the emotional categories joy, anger, sadness, surprise, fear, and disgust with remarkable accuracy. It is also able to detect smaller, local facial features based on minimal muscular movements described by the Facial Action Coding System. Finally, we show how the system can be used for expression analysis and synthesis.
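The classification stage can be illustrated in a few lines. The sketch below assumes the Active Appearance Model fit has already produced a parameter vector for each face, and substitutes a simple nearest-class-mean rule for the trained classifier the system actually uses; the model fitting itself is beyond a short example.

```python
import numpy as np

EMOTIONS = ["joy", "anger", "sadness", "surprise", "fear", "disgust"]

class ExpressionClassifier:
    """Nearest-class-mean classifier over AAM parameter vectors.

    A class-mean rule is used here only to keep the sketch
    self-contained; it is not the classifier from the paper.
    """

    def fit(self, X, y):
        X, y = np.asarray(X, dtype=float), np.asarray(y)
        # One prototype vector per emotional category seen in training.
        self.means_ = {e: X[y == e].mean(axis=0) for e in np.unique(y)}
        return self

    def predict(self, x):
        # Assign the category whose prototype is closest in parameter space.
        x = np.asarray(x, dtype=float)
        return min(self.means_, key=lambda e: np.linalg.norm(x - self.means_[e]))
```

In practice the model parameters encode both shape and texture variation, so even this crude rule separates expressions whose parameter clusters are distinct.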
2005
335 citations
The FaceReader: Online facial expression recognition
M.J. den Uyl and H. Van Kuilenburg
This paper describes the FaceReader system, presented at Measuring Behavior 2005, which accurately analyzes facial expressions and features in real-time. The system achieves 89% accuracy in recognizing emotional expressions and can classify various other facial features. The authors discuss the system’s capabilities and the technology employed to achieve its performance.
2004
5 citations
Real time automatic scene classification
M. Israël, E.L. van den Broek, P. van der Putten, M.J. den Uyl
This work, part of the EU VICAR and SCOFI projects, aimed to develop a real-time video indexing, classification, annotation, and retrieval system. The authors introduced a generic approach for visual scene recognition using “typed patches”—groups of adjacent pixels characterized by local pixel distribution, brightness, and color. Each patch is described using an HSI color histogram and texture features. A fixed grid overlays the image, segmenting each cell into patches categorized by a classifier. Frequency vectors of these classified patches are concatenated to represent the entire image. Testing on eight scene categories from the Corel database showed 87.5% accuracy in patch classification and 73.8% in scene classification. The method’s advantages include low computational complexity and versatility for image classification, segmentation, and matching. However, manual classification of training patches is a drawback, prompting the development of algorithms for automatic extraction of relevant patch types. The approach was implemented in the VICAR project’s video indexing system for the Netherlands Institute for Sound and Vision and in the SCOFI project’s real-time Internet pornography filter, achieving 92% accuracy with minimal overblocking and underblocking.
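The patch pipeline described above can be sketched briefly. In the sketch below a plain intensity histogram stands in for the paper's HSI color histogram plus texture features, and a nearest-centroid rule stands in for its trained patch classifier; both substitutions are assumptions made to keep the example self-contained.

```python
import numpy as np

def patch_features(patch, bins=4):
    """Describe a patch by a normalized intensity histogram
    (a stand-in for the paper's HSI histogram and texture features)."""
    hist, _ = np.histogram(patch, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def grid_patches(image, cell=8):
    """Overlay a fixed grid and cut the image into cell x cell patches."""
    h, w = image.shape[:2]
    return [image[r:r + cell, c:c + cell]
            for r in range(0, h - cell + 1, cell)
            for c in range(0, w - cell + 1, cell)]

def scene_vector(image, patch_centroids, cell=8):
    """Stage one: assign each patch to its nearest patch-type centroid.
    The output frequency vector of patch types is the input to the
    stage-two scene classifier."""
    counts = np.zeros(len(patch_centroids))
    for patch in grid_patches(image, cell):
        f = patch_features(patch)
        counts[np.argmin([np.linalg.norm(f - c) for c in patch_centroids])] += 1
    return counts / counts.sum()
```

A second classifier trained on these frequency vectors then assigns the scene label, mirroring the two-stage procedure used in both the VICAR and SCOFI applications.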
2004
1808 citations
A survey of content based 3D shape retrieval methods
J.W.H. Tangelder, R.C. Veltkamp
Recent developments in techniques for modeling, digitizing and visualizing 3D shapes have led to an explosion in the number of available 3D models on the Internet and in domain-specific databases. This has led to the development of 3D shape retrieval systems that, given a query object, retrieve similar 3D objects. For visualization, 3D shapes are often represented as a surface, in particular polygonal meshes, for example in VRML format. Often these models contain holes and intersecting polygons, are not manifold, and do not enclose a volume unambiguously. By contrast, 3D volume models, such as solid models produced by CAD systems or voxel models, enclose a volume properly. This paper surveys the literature on methods for content based 3D retrieval, taking into account the applicability to surface models as well as to volume models. The methods are evaluated with respect to several requirements of content based 3D shape retrieval, such as: (1) shape representation requirements, (2) properties of dissimilarity measures, (3) efficiency, (4) discrimination abilities, (5) ability to perform partial matching, (6) robustness, and (7) necessity of pose normalization. Finally, the advantages and limitations of the various approaches in content based 3D shape retrieval are discussed.
2004
29 citations
Automating the Construction of Scene Classifiers for Content-Based Video Retrieval
M. Israël, E.L. van den Broek, P. van der Putten, M.J. den Uyl
This paper introduces a real-time automatic scene classifier within content-based video retrieval. In the proposed approach, end users like documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification consists of a two-stage procedure: first, small image fragments called patches are classified; second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification. The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, serving as an alternative to having an image processing expert determine features a priori. The paper presents results from experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.