
    Facial Point Detection using Boosted Regression and Graph Models

    Finding fiducial facial points in any frame of a video showing rich naturalistic facial behaviour is an unsolved problem. Yet this is a crucial step both for geometric-feature-based facial expression analysis and for methods that use appearance-based features extracted at fiducial facial point locations. In this paper we present a method based on a combination of Support Vector Regression and Markov Random Fields that drastically reduces the time needed to search for a point’s location and increases the accuracy and robustness of the algorithm. Using Markov Random Fields allows us to constrain the search space by exploiting the constellations that facial points can form. The regressors, on the other hand, learn a mapping between the appearance of the area surrounding a point and the position of that point, which makes detection of the points very fast and can make the algorithm robust to variations in appearance due to facial expression and moderate changes in head pose. The proposed point detection algorithm was tested on 1855 images; the results show that we outperform current state-of-the-art point detectors.
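
    A minimal sketch of the regression half of this idea, assuming scikit-learn's SVR (the Markov Random Field constellation constraint is not shown): each regressor maps the appearance of a patch to the offset from the patch centre to the true point, so a rough initial guess can be corrected by a single prediction. Every name and parameter below is an illustrative assumption, not the authors' implementation.

        # Sketch only: per-point offset regression with SVR; patch size and
        # sampling radius are assumed values, not taken from the paper.
        import numpy as np
        from sklearn.svm import SVR

        def patch(img, x, y, half=8):
            # Flatten the (2*half x 2*half) appearance window around (x, y).
            return img[y - half:y + half, x - half:x + half].ravel().astype(float)

        def train_regressors(images, points):
            # One SVR per coordinate: patch appearance -> offset to the true point.
            X, dx, dy = [], [], []
            rng = np.random.default_rng(0)
            for img, (px, py) in zip(images, points):
                for _ in range(20):                    # perturbed training patches
                    ox, oy = rng.integers(-5, 6, size=2)
                    X.append(patch(img, px + ox, py + oy))
                    dx.append(-ox)                     # offsets pointing back
                    dy.append(-oy)                     # to the true location
            rx, ry = SVR(), SVR()
            rx.fit(X, dx)
            ry.fit(X, dy)
            return rx, ry

        def predict_point(img, rx, ry, x0, y0):
            # Correct a rough initial guess with the learned offsets.
            f = patch(img, x0, y0).reshape(1, -1)
            return x0 + rx.predict(f)[0], y0 + ry.predict(f)[0]

    In the paper, the Markov Random Field then restricts which joint configurations of such per-point predictions count as a plausible facial constellation.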

    Automatic facial analysis for objective assessment of facial paralysis

    Facial paralysis is a condition causing decreased movement on one side of the face. A quantitative, objective and reliable assessment system would be an invaluable tool for clinicians treating patients with this condition. This paper presents an approach based on the automatic analysis of patient video data. Facial feature localization and facial movement detection methods are discussed. An algorithm is presented to process the optical flow data to obtain the motion features in the relevant facial regions. Three classification methods are applied to provide quantitative evaluations of regional facial nerve function and of overall facial nerve function based on the House-Brackmann Scale. Experiments show the Radial Basis Function (RBF) neural network to have superior performance.
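
    The optical-flow step could look roughly like the sketch below, using OpenCV's Farneback dense flow; the facial region boxes and the choice of mean flow magnitude as the motion feature are assumptions for illustration, not the paper's exact features.

        # Sketch only: dense optical flow summarized per facial region.
        # The region boxes below are illustrative, not clinical landmarks.
        import cv2
        import numpy as np

        REGIONS = {
            "forehead": (slice(10, 60), slice(40, 200)),
            "eyes":     (slice(60, 120), slice(40, 200)),
            "mouth":    (slice(160, 230), slice(60, 180)),
        }

        def motion_features(prev_gray, next_gray):
            # Farneback dense flow between consecutive grayscale frames,
            # then the mean flow magnitude inside each region.
            flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                                0.5, 3, 15, 3, 5, 1.2, 0)
            mag = np.linalg.norm(flow, axis=2)
            return np.array([mag[box].mean() for box in REGIONS.values()])

    Per-frame features of this kind, aggregated over a video, would then feed the classifiers the paper compares for predicting the House-Brackmann grade.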

    Source extraction and photometry for the far-infrared and sub-millimeter continuum in the presence of complex backgrounds

    (Abridged) We present a new method for detecting and measuring compact sources in conditions of intense, and highly variable, fore/background. While the most commonly used packages carry out source detection on the signal image, our proposed method builds from the measured image a "curvature" image by double differentiation in four different directions. In this way, point-like as well as resolved, yet relatively compact, objects are easily revealed, while the more slowly varying fore/background is greatly diminished. Candidate sources are then identified by looking for pixels where the curvature exceeds, in absolute terms, a given threshold; the methodology easily allows us to pinpoint breakpoints in the source brightness profile and then derive reliable guesses for the sources' extent. Identified peaks are fit with 2D elliptical Gaussians plus an underlying inclined planar plateau, with mild constraints on size and orientation. Mutually contaminating sources are fit with multiple Gaussians simultaneously using flexible constraints. We ran our method on simulated large-scale fields with 1000 sources of different peak flux overlaid on a realistic realization of diffuse background. We find detection rates in excess of 90% for sources with peak fluxes above the 3-sigma signal noise limit; for about 80% of the sources the recovered peak fluxes are within 30% of their input values. Comment: Accepted on A&
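
    The curvature-image idea can be sketched with plain second differences; the paper's exact differentiation scheme, aggregation over directions, and thresholding may differ, so treat this as an illustration of the principle only.

        # Sketch only: "curvature" image via double differentiation in four
        # directions (rows, columns, two diagonals).
        import numpy as np

        def second_diff(img, dy, dx):
            # Discrete second derivative along direction (dy, dx):
            # f(p + d) - 2 f(p) + f(p - d).
            fwd = np.roll(img, (-dy, -dx), axis=(0, 1))
            bwd = np.roll(img, (dy, dx), axis=(0, 1))
            return fwd - 2.0 * img + bwd

        def candidate_mask(img, thresh):
            # A compact source is a local peak, so its second derivative is
            # strongly negative in every direction, while a slowly varying
            # fore/background stays near zero; keep pixels where all four
            # directional curvatures clear the threshold.
            dirs = [(0, 1), (1, 0), (1, 1), (1, -1)]
            curv = np.stack([second_diff(img, dy, dx) for dy, dx in dirs])
            return np.all(curv < -thresh, axis=0)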

    To dash or to dawdle: verb-associated speed of motion influences eye movements during spoken sentence comprehension

    Get PDF
    In describing motion events, verbs of manner provide information about the speed of agents or objects in those events. We used eye tracking to investigate how inferences about this verb-associated speed of motion would influence the time course of attention to a visual scene that matched an event described in language. Eye movements were recorded as participants heard spoken sentences with verbs that implied a fast (“dash”) or slow (“dawdle”) movement of an agent towards a goal. These sentences were heard whilst participants concurrently looked at scenes depicting the agent and a path leading to the goal object. Our results indicate a mapping of events onto the visual scene consistent with participants mentally simulating the movement of the agent along the path towards the goal: when the verb implies a slow manner of motion, participants look more often and longer along the path to the goal; when the verb implies a fast manner of motion, participants tend to look earlier at the goal and less at the path. These results reveal that event comprehension in the presence of a visual world involves establishing and dynamically updating the locations of entities in response to linguistic descriptions of events.
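
    A common way to quantify such time-course effects is the proportion of gaze samples falling inside each interest area per time window; the sketch below assumes rectangular areas and made-up coordinates, since the paper's actual regions are not given here.

        # Sketch only: fraction of gaze samples inside rectangular interest
        # areas; coordinates and windowing are assumptions.
        import numpy as np

        def looks(gaze_xy, areas):
            # gaze_xy: (n_samples, 2) screen coordinates for one time window.
            out = {}
            for name, (x0, y0, x1, y1) in areas.items():
                inside = ((gaze_xy[:, 0] >= x0) & (gaze_xy[:, 0] <= x1) &
                          (gaze_xy[:, 1] >= y0) & (gaze_xy[:, 1] <= y1))
                out[name] = inside.mean()
            return out

        areas = {"path": (200, 300, 500, 360), "goal": (520, 280, 620, 380)}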