
    Understanding critical factors in gender recognition

    Gender classification is a task of paramount importance in face recognition research, and it is potentially useful in a large set of applications. In this paper we investigate the gender classification problem through an extended empirical analysis on the Face Recognition Grand Challenge version 2.0 dataset (FRGC2.0). We propose challenging experimental protocols over the dimensions of FRGC2.0 – i.e., subject, facial expression, race, and controlled or uncontrolled environment. We evaluate our protocols with respect to several classification algorithms, processing different types of features such as Gabor and LBP. Our results show that gender classification is independent of factors such as the race of the subject, facial expressions, and variations in controlled illumination conditions. We also report that Gabor features appear to be more robust than LBPs in uncontrolled environments.
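The LBP-plus-classifier pipeline the abstract refers to can be sketched in a few lines. This is an illustrative stand-in, not the paper's protocol: FRGC2.0 requires a license, so random arrays and toy labels replace real face crops, and a plain linear SVM stands in for the unspecified classification algorithms.

```python
# Sketch: uniform LBP histograms as face features, fed to a linear SVM.
# Images and gender labels are synthetic placeholders for FRGC2.0 data.
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, P=8, R=1.0):
    """Uniform LBP codes pooled into a normalized histogram (P + 2 bins)."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2))
    return hist / hist.sum()

rng = np.random.default_rng(0)
images = (rng.random((40, 64, 64)) * 255).astype("uint8")  # stand-in crops
labels = rng.integers(0, 2, size=40)                       # toy gender labels

X = np.array([lbp_histogram(im) for im in images])
clf = SVC(kernel="linear").fit(X, labels)
print(clf.predict(X[:5]))
```

In a real protocol the histogram would typically be computed per image block and concatenated, which preserves spatial layout that a single global histogram discards.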

    Brief mindfulness training enhances cognitive control in socioemotional contexts: Behavioral and neural evidence.

    In social contexts, the dynamic nature of others' emotions places unique demands on attention and emotion regulation. Mindfulness, characterized by heightened and receptive moment-to-moment attending, may be well suited to meet these demands. In particular, mindfulness may support more effective cognitive control in social situations via efficient deployment of top-down attention. To test this, a randomized controlled study examined effects of mindfulness training (MT) on behavioral and neural (event-related potential [ERP]) responses during an emotional go/no-go task that tested cognitive control in the context of emotional facial expressions that tend to elicit approach or avoidance behavior. Participants (N = 66) were randomly assigned to four brief (20 min) MT sessions or to structurally equivalent book-learning control sessions. Relative to the control group, MT led to improved discrimination of facial expressions, as indexed by d-prime, as well as more efficient cognitive control, as indexed by response time and accuracy, particularly for those evidencing poorer discrimination and cognitive control at baseline. MT also produced better monitoring of conflict between behavioral goals and prepotent response tendencies, as indexed by larger No-Go N200 ERP amplitudes, particularly for those with smaller No-Go amplitudes at baseline. Overall, findings are consistent with MT's potential to enhance deployment of early top-down attention to better meet the unique cognitive and emotional demands of socioemotional contexts, particularly for those with greater opportunity for change. Findings also suggest that early top-down attention deployment could be a cognitive mechanism corresponding to the present-oriented attention commonly used to explain the regulatory benefits of mindfulness more broadly.
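The discrimination index the abstract reports, d-prime, has a standard closed form from signal detection theory: d' = z(hit rate) − z(false-alarm rate), where z is the inverse normal CDF. A minimal sketch with invented trial counts (the study's actual data are not reproduced here):

```python
# d-prime for a go/no-go task: discrimination of go vs. no-go trials.
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    hit_rate = hits / (hits + misses)
    fa_rate = false_alarms / (false_alarms + correct_rejections)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# A participant who hits on 45/50 go trials and false-alarms on 5/50 no-go trials:
print(round(d_prime(45, 5, 5, 45), 3))  # -> 2.563
```

In practice a correction (e.g. log-linear) is applied when hit or false-alarm rates reach 0 or 1, since z is undefined at those extremes; that step is omitted here.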

    Water Resources Review - February 1993 Vol 6 No 1

    Subspace-Based Holistic Registration for Low-Resolution Facial Images

    Subspace-based holistic registration is introduced as an alternative to landmark-based face registration, which performs poorly on low-resolution images such as those obtained in camera surveillance applications. The proposed registration method finds the alignment by maximizing the similarity score between a probe and a gallery image. We use a novel probabilistic framework for both user-independent and user-specific face registration. The similarity is calculated using the probability that the face image is correctly aligned in a face subspace; additionally, we take into account the probability that the face is misaligned, based on the residual error in the dimensions perpendicular to the face subspace. We perform extensive experiments on the FRGCv2 database to evaluate the impact that the face registration methods have on face recognition. Subspace-based holistic registration on low-resolution images can improve face recognition compared with landmark-based registration on high-resolution images. The performance of the tested face recognition methods after subspace-based holistic registration on a low-resolution version of the FRGC database is similar to that after manual registration.
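The core idea of scoring an alignment by its residual outside a face subspace can be sketched with plain PCA. This is a simplification of the paper's probabilistic framework: the full method combines in-subspace likelihood with the perpendicular residual, whereas the toy score below uses only the residual energy ("distance from feature space"), and the subspace is built from random stand-in data.

```python
# Sketch: rate candidate alignments (crops) by how little energy falls
# outside a PCA face subspace; the best-scoring crop wins.
import numpy as np

rng = np.random.default_rng(1)
D, K = 256, 16                              # pixel dim, subspace dim

# Toy orthonormal face subspace from random "training faces".
train = rng.normal(size=(100, D))
mean = train.mean(axis=0)
_, _, Vt = np.linalg.svd(train - mean, full_matrices=False)
basis = Vt[:K]                              # K x D, orthonormal rows

def alignment_score(crop):
    """Higher is better: small residual perpendicular to the face subspace."""
    centered = crop - mean
    coeffs = basis @ centered               # in-subspace coordinates
    residual = centered - basis.T @ coeffs  # component outside the subspace
    return -np.sum(residual**2)             # negative residual energy

# Choose the best of several candidate alignments of a probe image.
candidates = rng.normal(size=(5, D))
best = max(range(5), key=lambda i: alignment_score(candidates[i]))
print(best)
```

A crop that lies exactly in the subspace gets a score of zero, the maximum; holistic registration would search over translations/scales of the probe rather than arbitrary candidates as here.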

    Conducting Research with Community Groups

    Nurse scientists are increasingly recognizing the necessity of conducting research with community groups to effectively address complex health problems and successfully translate scientific advancements into the community. While several barriers to conducting research with community groups exist, community-based participatory research (CBPR) has the potential to mitigate these barriers. CBPR has been employed in programs of research that respond in culturally sensitive ways to identified community needs and thereby address current health disparities. This manuscript presents case studies that demonstrate how CBPR principles guided the development of (a) a healthy body weight program for urban, underserved African-American women; (b) a reproductive health educational intervention for urban, low-income, underserved, ethnically diverse women; and (c) a pilot anxiety/depression intervention for urban, low-income, underserved, ethnically diverse women. These case studies illustrate the potential of CBPR as an orientation to research that can be employed effectively in non-research-intensive academic environments.

    Enriched Long-term Recurrent Convolutional Network for Facial Micro-Expression Recognition

    Facial micro-expression (ME) recognition has posed a huge challenge to researchers owing to its subtlety in motion and the limited size of available databases. Recently, handcrafted techniques have achieved superior performance in micro-expression recognition, but at the cost of domain specificity and cumbersome parametric tuning. In this paper, we propose an Enriched Long-term Recurrent Convolutional Network (ELRCN) that first encodes each micro-expression frame into a feature vector through CNN module(s), then predicts the micro-expression by passing the feature vector through a Long Short-term Memory (LSTM) module. The framework contains two different network variants: (1) channel-wise stacking of input data for spatial enrichment, and (2) feature-wise stacking of features for temporal enrichment. We demonstrate that the proposed approach is able to achieve reasonably good performance without data augmentation. In addition, we also present ablation studies conducted on the framework and visualizations of what the CNN "sees" when predicting the micro-expression classes. Comment: Published in Micro-Expression Grand Challenge 2018, Workshop of 13th IEEE Facial & Gesture 201
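The encode-then-recur design described above can be sketched schematically in PyTorch. The layer sizes, class count, and input resolution below are placeholders, not the published ELRCN configuration (which builds on pretrained CNN modules and adds the channel-wise and feature-wise enrichment variants).

```python
# Schematic CNN -> LSTM pipeline: a small CNN encodes each frame into a
# feature vector, an LSTM consumes the frame sequence, and a linear head
# emits class logits from the final hidden state.
import torch
import torch.nn as nn

class CnnLstmClassifier(nn.Module):
    def __init__(self, n_classes=5, feat_dim=64, hidden=32):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, feat_dim), nn.ReLU(),
        )
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, clips):                  # clips: (B, T, 1, H, W)
        b, t = clips.shape[:2]
        feats = self.cnn(clips.flatten(0, 1))  # encode every frame
        feats = feats.view(b, t, -1)           # back to (B, T, feat_dim)
        _, (h_n, _) = self.lstm(feats)
        return self.head(h_n[-1])              # logits from last hidden state

model = CnnLstmClassifier()
logits = model(torch.randn(2, 10, 1, 28, 28))  # 2 clips of 10 frames each
print(logits.shape)
```

Flattening batch and time before the CNN lets every frame share the same encoder weights, which is the property that makes this family of models tractable on small micro-expression datasets.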

    CHORUS Deliverable 4.3: Report from CHORUS workshops on national initiatives and metadata

    Minutes of the following workshops: • National Initiatives on Multimedia Content Description and Retrieval, Geneva, October 10th, 2007. • Metadata in Audio-Visual/Multimedia Production and Archiving, Munich, IRT, 21st–22nd November 2007.

    Workshop in Geneva, 10/10/2007. This highly successful workshop was organised in cooperation with the European Commission. The event brought together the technical, administrative and financial representatives of the various national initiatives, which have been established recently in some European countries to support research and technical development in the area of audio-visual content processing, indexing and searching for the next-generation Internet using semantic technologies, and which may lead to an internet-based knowledge infrastructure. The objective of this workshop was to provide a platform for mutual information and exchange between these initiatives, the European Commission and the participants. Top speakers were present from each of the national initiatives. There was time for discussions with the audience and amongst the European national initiatives. The challenges, commonalities, difficulties, targeted/expected impact, success criteria, etc. were tackled. This workshop addressed how these national initiatives could work together and benefit from each other.

    Workshop in Munich, 21st–22nd November 2007. Numerous EU and national research projects are working on the automatic or semi-automatic generation of descriptive and functional metadata derived from analysing audio-visual content. The owners of AV archives and production facilities are eagerly awaiting such methods, which would help them better exploit their assets. Hand in hand with the digitization of analogue archives and the archiving of digital AV material, metadata should be generated at as high a semantic level as possible, preferably fully automatically. All users of metadata rely on a certain metadata model. All AV/multimedia search engines, developed or under current development, would have to respect some compatibility or compliance with the metadata models in use. The purpose of this workshop was to draw attention to the specific problem of metadata models in the context of (semi-)automatic multimedia search.

    High-level feature detection from video in TRECVid: a 5-year retrospective of achievements

    Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, on matching one video frame against others using low-level characteristics like colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the presence or absence of a high-level or semantic feature within a video clip or shot. By utilizing dozens, hundreds or even thousands of such semantic features we can support many kinds of content-based video navigation. Critically, however, this depends on being able to determine whether each feature is or is not present in a video clip. The last 5 years have seen much progress in the development of techniques to determine the presence of semantic features within video. This progress can be tracked in the annual TRECVid benchmarking activity, where dozens of research groups measure the effectiveness of their techniques on common data using an open, metrics-based approach. In this chapter we summarise the work done on the TRECVid high-level feature task, showing the progress made year-on-year. This provides a fairly comprehensive statement on where the state-of-the-art is regarding this important task, not just for one research group or for one approach, but across the spectrum. We then use this past and on-going work as a basis for highlighting the trends that are emerging in this area, and the questions which remain to be addressed before we can achieve large-scale, fast and reliable high-level feature detection on video.
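The task structure described above, score every shot for one semantic feature, rank the shots, and summarize retrieval quality, can be illustrated with a toy detector. The features, labels, and logistic-regression model below are placeholders, and scikit-learn's average precision stands in for TRECVid's pooled-judgment variant (inferred AP).

```python
# Toy high-level feature detection: per-concept binary classifier scores
# shots, shots are ranked by score, quality is summarized by average precision.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score

rng = np.random.default_rng(0)
X_train, y_train = rng.normal(size=(200, 32)), rng.integers(0, 2, 200)
X_test, y_test = rng.normal(size=(100, 32)), rng.integers(0, 2, 100)

detector = LogisticRegression(max_iter=1000).fit(X_train, y_train)
scores = detector.predict_proba(X_test)[:, 1]   # one score per shot
ap = average_precision_score(y_test, scores)
ranked = np.argsort(-scores)                    # shot ranking for this concept
print(round(ap, 3), ranked[:5])
```

A full system trains one such detector per semantic feature, which is why scaling to hundreds or thousands of features is largely a matter of feature extraction cost rather than classifier design.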