
Publication Tag: Computer Vision

An overview of all publications that have the tag you selected.

2022
3 citations
Proximally Sensitive Error for Anomaly Detection and Feature Learning
A. Gudi, F. Büttner, J. van Gemert
Mean squared error (MSE) is widely used to measure differences between multi-dimensional entities, including images. However, MSE lacks local sensitivity: it does not consider the spatial arrangement of pixel differences, which is crucial for structured data such as images. Such spatial arrangements carry information about the source of the differences; an error function that incorporates the location of errors can therefore offer a more meaningful distance measure. We introduce Proximally Sensitive Error (PSE), suggesting that emphasizing regions in the error measure can highlight semantic differences between images over syntactic or random deviations. We demonstrate that this emphasis can be leveraged for anomaly or occlusion detection. Additionally, we explore its utility as a loss function to help models focus on learning representations of semantic objects instead of minimizing syntactic reconstruction noise.
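To make the idea concrete, here is a minimal sketch of a spatially sensitive error in the spirit described above: the squared-error map is weighted by a local box average so that clustered (proximal) differences count more than scattered ones. This is an illustrative toy, not the authors' exact PSE formulation; the function names and the box-filter weighting are assumptions.

```python
import numpy as np

def local_mean(x, k=3):
    # Simple k-by-k box filter via edge padding and shifted sums (no SciPy).
    p = k // 2
    xp = np.pad(x, p, mode="edge")
    out = np.zeros_like(x, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += xp[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / (k * k)

def proximal_error(a, b, k=3):
    """Hypothetical spatially sensitive error: square the per-pixel
    difference, weight it by the locally averaged error so clustered
    differences are emphasized over isolated ones, then take the mean."""
    e = (a.astype(float) - b.astype(float)) ** 2
    return float((e * local_mean(e, k)).mean())
```

Two images with the same MSE can then score differently: a 2x2 block of differing pixels (a plausible semantic change) yields a larger value than the same four differences scattered across the image (noise-like deviations).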
2017
41 citations
Computerised analysis of facial emotion expression in eating disorders
Leppanen, Dapelo, Davies, Lang, Treasure, Tchanturia
Problems with social-emotional processing are known to be an important contributor to the development and maintenance of eating disorders (EDs). Diminished facial communication of emotion has been frequently reported in individuals with anorexia nervosa (AN). Less is known about facial expressivity in bulimia nervosa (BN) and in people who have recovered from AN (RecAN). This study aimed to pilot the use of computerised facial expression analysis software to investigate emotion expression across the ED spectrum and recovery in a large sample of participants. 297 participants with AN, BN, RecAN, and healthy controls were recruited. Participants watched film clips designed to elicit happy or sad emotions, and facial expressions were then analysed using FaceReader. The findings mirrored those from previous work, showing that healthy control and RecAN participants expressed significantly more positive emotions during the positive clip than the AN group. There were no differences in emotion expression during the sad film clip. These findings support the use of computerised methods to analyse emotion expression in EDs. They also suggest that reduced positive emotion expression is likely to be associated with the acute stage of AN illness, with individuals with BN showing an intermediate profile.
2018
33 citations
Emotional expressions by sports teams: An analysis of World Cup soccer player portraits
Hopfensitz & Mantilla
Emotion display serves as an incentive or deterrent for others in many social interactions. We study the portrayal of anger and happiness, two emotions associated with dominance, and their relationship to team performance in a high-stakes environment. We analyze 4,318 pictures of players from 304 participating teams in twelve editions of the FIFA Soccer World Cup and use automated face-reading to evaluate the display of anger and happiness. We observe that the display of both anger and happiness is positively correlated with team performance in the World Cup. Teams whose players display more anger, an emotion associated with competitiveness, concede fewer goals. Teams whose players display more happiness, an emotion associated with confidence, score more goals. We show that this result is driven by less than half the players in a team.
2015
30 citations
Association between facial expression and PTSD symptoms among young children exposed to the Great East Japan Earthquake: a pilot study
Fujiwara
“Emotional numbing” is a symptom of post-traumatic stress disorder (PTSD) characterized by a loss of interest in usually enjoyable activities, feeling detached from others, and an inability to express a full range of emotions. Emotional numbing is usually assessed through self-report, and is particularly difficult to ascertain among young children. We conducted a pilot study to explore the use of facial expression ratings in response to a comedy video clip to assess emotional reactivity among preschool children directly exposed to the Great East Japan Earthquake. This study included 23 child participants. Child PTSD symptoms were measured using a modified version of the Parent’s Report of the Child’s Reaction to Stress scale. Children were filmed while watching a 2-min video compilation of natural scenes followed by a 2-min video clip from a television comedy. Children’s facial expressions were processed using Noldus FaceReader software, which implements the Facial Action Coding System (FACS). We investigated the association between PTSD symptom scores and facial emotion reactivity using linear regression analysis. Children with higher PTSD symptom scores showed a significantly greater proportion of neutral facial expressions, controlling for sex, age, and baseline facial expression. This pilot study suggests that facial emotion reactivity, measured using facial expression recognition software, has the potential to index emotional numbing in young children. It adds to the emerging literature on using experimental psychopathology methods to characterize children’s reactions to disasters.
2013
54 citations
Computerized facial analysis for understanding constricted/blunted affect: initial feasibility, reliability, and validity data
Cohen, Morrison, Callaway
This study explores the feasibility, reliability, and validity of using computerized facial analysis to assess constricted or blunted affect in individuals. The authors employed automated facial expression recognition technology to analyze facial movements and expressions, aiming to provide objective measures of affective flattening. The results indicate that this method is both reliable and valid, suggesting its potential utility in clinical settings for evaluating affective disorders.
2016
41 citations
Comparison of selected off-the-shelf solutions for emotion recognition based on facial expressions
Brodny, Kolakowska, Landowska, Szwoch, Wrobel
The paper examines the accuracy of emotion recognition from facial expressions by evaluating two off-the-shelf solutions: FaceReader by Noldus and Xpress Engine by QuantumLab. The study reveals that recognition accuracies vary between photo and video inputs, suggesting that solutions should be tailored to the specific application domain.
2004
5 citations
Real time automatic scene classification
M. Israël, E.L. van den Broek, P. van der Putten, M.J. den Uyl
This work, part of the EU VICAR and SCOFI projects, aimed to develop a real-time video indexing, classification, annotation, and retrieval system. The authors introduced a generic approach for visual scene recognition using “typed patches”—groups of adjacent pixels characterized by local pixel distribution, brightness, and color. Each patch is described using an HSI color histogram and texture features. A fixed grid overlays the image, segmenting each cell into patches categorized by a classifier. Frequency vectors of these classified patches are concatenated to represent the entire image. Testing on eight scene categories from the Corel database showed 87.5% accuracy in patch classification and 73.8% in scene classification. The method’s advantages include low computational complexity and versatility for image classification, segmentation, and matching. However, manual classification of training patches is a drawback, prompting the development of algorithms for automatic extraction of relevant patch types. The approach was implemented in the VICAR project’s video indexing system for the Netherlands Institute for Sound and Vision and in the SCOFI project’s real-time Internet pornography filter, achieving 92% accuracy with minimal overblocking and underblocking.
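The descriptor described above can be sketched in a few lines: overlay a fixed grid, assign each cell a "patch type," and summarize the image as the normalized frequency vector of those types. In this toy version the patch type is just quantized mean brightness, standing in for the paper's trained patch classifier over HSI histograms and texture features; the function name and grid size are assumptions.

```python
import numpy as np

def scene_descriptor(image, grid=(4, 4), n_types=4):
    """Toy patch-frequency descriptor: divide a grayscale image in [0, 1)
    into grid cells, label each cell with a quantized-brightness 'patch
    type' (a stand-in for a learned patch classifier), and return the
    normalized frequency vector of types over the whole image."""
    h, w = image.shape
    gh, gw = grid
    counts = np.zeros(n_types)
    for i in range(gh):
        for j in range(gw):
            cell = image[i * h // gh:(i + 1) * h // gh,
                         j * w // gw:(j + 1) * w // gw]
            t = min(int(cell.mean() * n_types), n_types - 1)  # quantize mean
            counts[t] += 1
    return counts / counts.sum()
```

A second classifier trained on such frequency vectors would then perform the global scene classification, mirroring the two-stage pipeline the paper describes.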
2004
29 citations
Automating the Construction of Scene Classifiers for Content-Based Video Retrieval
M. Israël, E.L. van den Broek, P. van der Putten, M.J. den Uyl
This paper introduces a real-time automatic scene classifier within content-based video retrieval. In the proposed approach, end users such as documentalists, not image processing experts, build classifiers interactively by simply indicating positive examples of a scene. Classification is a two-stage procedure: first, small image fragments called patches are classified; second, frequency vectors of these patch classifications are fed into a second classifier for global scene classification. The first-stage classifiers can be seen as a set of highly specialized, learned feature detectors, serving as an alternative to having an image processing expert determine features a priori. The paper presents results from experiments on a variety of patch and image classes. The scene classifier has been used successfully within television archives and for Internet porn filtering.
2006
15 citations
Learning a Sparse Representation from Multiple Still Images for On-Line Face Recognition in an Unconstrained Environment
J.W.H. Tangelder, B.A.M. Schouten
In a real-world environment a face detector can be applied to extract multiple face images from multiple video streams without constraints on pose and illumination. The extracted face images will have varying image quality and resolution. Moreover, the detected faces will not be precisely aligned. This paper presents a new approach to on-line face identification from multiple still images obtained under such unconstrained conditions. Our method learns a sparse representation of the most discriminative descriptors of the detected face images according to their classification accuracies. On-line face recognition is supported using a single descriptor of a face image as a query. We apply our method to our newly introduced BHG descriptor, the SIFT descriptor, and the LBP descriptor, which obtain limited robustness against illumination, pose, and alignment errors. Our experimental results, using a video face database of pairs of unconstrained low-resolution video clips of ten subjects, show that our method achieves a recognition rate of 94% with a sparse representation containing 10% of all available data, at a false acceptance rate of 4%.
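The selection-plus-query idea can be illustrated with a small sketch: keep only the best-scoring fraction of gallery descriptors, then identify a single query descriptor by nearest neighbour. The scoring array here is given as input (the paper ranks descriptors by their classification accuracies); the function names and the plain Euclidean matching are assumptions for illustration.

```python
import numpy as np

def sparse_gallery(descriptors, labels, scores, keep_frac=0.1):
    """Keep only the top-scoring fraction of gallery descriptors
    (scores stand in for the per-descriptor classification accuracies
    the paper uses). Returns the retained descriptors and labels."""
    k = max(1, int(len(descriptors) * keep_frac))
    idx = np.argsort(scores)[::-1][:k]  # highest scores first
    return descriptors[idx], labels[idx]

def identify(query, gallery, gallery_labels):
    """Identify a single query descriptor by nearest neighbour
    against the sparse gallery."""
    d = np.linalg.norm(gallery - query, axis=1)
    return gallery_labels[int(np.argmin(d))]
```

Keeping roughly 10% of the descriptors, as in the reported experiments, shrinks both memory and per-query matching cost by an order of magnitude.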
2007
14 citations
Distance Measures for Gabor Jets-Based Face Authentication: A Comparative Evaluation
D. González-Jiménez, M. Bicego, J.W.H. Tangelder, B.A.M. Schouten, O. Ambekar, J.L. Alba-Castro, E. Grosso, M. Tistarelli
Local Gabor features have been widely used in face recognition systems. Once the sets of jets have been extracted from the two faces to be compared, a proper measure of similarity between corresponding features must be chosen. For instance, in the well-known Elastic Bunch Graph Matching approach and other Gabor-based face recognition systems, the cosine distance was used as the measure. In this paper, we provide an empirical evaluation of seven distance measures for comparison, using a recently introduced face recognition system based on Shape Driven Gabor Jets. Moreover, we evaluate different normalization factors that are used to pre-process the jets. Experimental results on the BANCA database suggest that the concrete type of normalization applied to the jets is a critical factor, and that some combinations of normalization and distance achieve better performance than the classical cosine measure for jet comparison.
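As a small illustration of why normalization and distance interact, here is the classical cosine measure alongside Euclidean distance on unit-normalized jets; the latter is one plausible normalization/distance combination, not necessarily among the seven the paper evaluates, and the function names are assumptions.

```python
import numpy as np

def cosine_similarity(j1, j2):
    """Classical cosine measure between two jets
    (vectors of Gabor filter-response magnitudes)."""
    return float(np.dot(j1, j2) / (np.linalg.norm(j1) * np.linalg.norm(j2)))

def normalized_l2(j1, j2):
    """Euclidean distance after unit-norm normalization of each jet:
    one example of a normalization/distance combination."""
    a = j1 / np.linalg.norm(j1)
    b = j2 / np.linalg.norm(j2)
    return float(np.linalg.norm(a - b))
```

For this particular pair, the two are monotonically related (||a - b||^2 = 2 - 2 cos for unit vectors), so they rank matches identically; other normalizations, such as statistics-based ones, genuinely change the ranking, which is why the choice matters empirically.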
