
    From Facial Parts Responses to Face Detection: A Deep Learning Approach

    In this paper, we propose a novel deep convolutional network (DCN) that achieves outstanding performance on FDDB, PASCAL Face, and AFW. Specifically, our method achieves a high recall rate of 90.99% on the challenging FDDB benchmark, outperforming the state-of-the-art method by a large margin of 2.91%. Importantly, we consider finding faces from a new perspective: scoring facial part responses by their spatial structure and arrangement. The scoring mechanism is carefully formulated to handle challenging cases where faces are only partially visible. This allows our network to detect faces under severe occlusion and unconstrained pose variation, the main difficulty and bottleneck of most existing face detection approaches. We show that despite the use of a DCN, our network achieves practical runtime speed. Comment: To appear in ICCV 201
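    The abstract does not give the scoring formula, but the idea of aggregating facial-part responses over their expected spatial arrangement, while letting occluded parts simply contribute nothing, can be sketched as follows. All names, the window/region parameterization, and the max-pooling aggregation are illustrative assumptions, not the paper's actual mechanism:

    ```python
    import numpy as np

    def score_window(part_maps, window, part_regions):
        """Hypothetical sketch: score a candidate face window by taking the
        peak response of each facial part (e.g. eyes, mouth) inside the
        sub-region of the window where that part is expected to appear.

        part_maps    : dict name -> 2D response map over the image
        window       : (x0, y0, x1, y1) candidate face box
        part_regions : dict name -> (fx0, fy0, fx1, fy1) expected sub-region,
                       as fractions of the window (e.g. eyes in the upper part)

        An occluded part whose map is silent contributes ~0 rather than a
        penalty, so a partially visible face can still score well on its
        visible parts.
        """
        x0, y0, x1, y1 = window
        w, h = x1 - x0, y1 - y0
        score = 0.0
        for name, (fx0, fy0, fx1, fy1) in part_regions.items():
            rx0, ry0 = int(x0 + fx0 * w), int(y0 + fy0 * h)
            rx1, ry1 = int(x0 + fx1 * w), int(y0 + fy1 * h)
            region = part_maps[name][ry0:ry1, rx0:rx1]
            if region.size:
                score += float(region.max())
        return score
    ```

    In this toy formulation, masking out one part's response map lowers the window's score without zeroing it, which mirrors the paper's stated goal of tolerating partial visibility.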

    Factor Analysis of the Milwaukee Inventory for Subtypes of Trichotillomania-Adult Version

    The Milwaukee Inventory for Subtypes of Trichotillomania-Adult Version (MIST-A; Flessner et al., 2008) measures the degree to which hair pulling in Trichotillomania (TTM) can be described as “automatic” (i.e., done without awareness and unrelated to affective states) and/or “focused” (i.e., done with awareness and to regulate affective states). Despite preliminary evidence in support of the psychometric properties of the MIST-A, emerging research suggests the original factor structure may not optimally capture TTM phenomenology. Using data from a treatment-seeking TTM sample, the current study examined the factor structure of the MIST-A via exploratory factor analysis. The resulting two-factor solution suggested the MIST-A consists of a 5-item “awareness of pulling” factor that measures the degree to which pulling is done with awareness and an 8-item “internal-regulated pulling” factor that measures the degree to which pulling is done to regulate internal stimuli (e.g., emotions, cognitions, and urges). Correlational analyses provided preliminary evidence for the validity of these derived factors. Findings from this study challenge the notions of “automatic” and “focused” pulling styles and suggest that researchers should continue to explore TTM subtypes.
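    The exploratory factor analysis behind such a two-factor solution can be illustrated with a minimal principal-axis-style sketch: eigendecompose the item correlation matrix and keep the top two components as factor loadings. The item counts (5 + 8) match the derived solution, but the data here are simulated stand-ins, not the study's responses, and the study's exact extraction and rotation choices are not given in the abstract:

    ```python
    import numpy as np

    # Simulate responses driven by two latent pulling styles
    rng = np.random.default_rng(0)
    n_resp = 500
    latent = rng.normal(size=(n_resp, 2))
    true_load = np.zeros((13, 2))   # 13 items: 5 + 8, as in the derived solution
    true_load[:5, 0] = 0.8          # items loading on factor 1 (hypothetical)
    true_load[5:, 1] = 0.8          # items loading on factor 2 (hypothetical)
    X = latent @ true_load.T + 0.4 * rng.normal(size=(n_resp, 13))

    # Factor extraction: eigendecompose the 13x13 item correlation matrix
    R = np.corrcoef(X, rowvar=False)
    evals, evecs = np.linalg.eigh(R)            # eigenvalues in ascending order
    order = np.argsort(evals)[::-1]             # largest eigenvalues first
    loadings = evecs[:, order[:2]] * np.sqrt(evals[order[:2]])
    ```

    Inspecting `loadings` recovers the simulated two-block structure: each item loads strongly on one factor and near zero on the other.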

    MoSculp: Interactive Visualization of Shape and Time

    We present a system that allows users to visualize complex human motion via 3D motion sculptures: a representation that conveys the 3D structure swept by a human body as it moves through space. Given an input video, our system computes the motion sculptures and provides a user interface for rendering them in different styles, including the options to insert the sculpture back into the original video, render it in a synthetic scene, or physically print it. To provide this end-to-end workflow, we introduce an algorithm that estimates the human's 3D geometry over time from a set of 2D images, and develop a 3D-aware image-based rendering approach that embeds the sculpture back into the scene. By automating the process, our system takes motion sculpture creation out of the realm of professional artists, and makes it applicable to a wide range of existing video material. By providing viewers with 3D information, motion sculptures reveal space-time motion information that is difficult to perceive with the naked eye, and allow viewers to interpret how different parts of the object interact over time. We validate the effectiveness of this approach with user studies, finding that our motion sculpture visualizations are significantly more informative about motion than existing stroboscopic and space-time visualization methods. Comment: UIST 2018. Project page: http://mosculp.csail.mit.edu
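    At its core, a motion sculpture is the union of the body's geometry across all frames, tagged by time so the sweep can be rendered with a temporal color gradient. A toy sketch of that accumulation step, using plain point clouds in place of the posed human meshes the real system estimates (the function name and representation are illustrative assumptions):

    ```python
    import numpy as np

    def motion_sculpture(frames):
        """Toy sketch: union per-frame body geometry into one space-time shape.

        frames : list of (N_i, 3) arrays of body surface points, one per
                 video frame (the actual system estimates a posed 3D human
                 mesh per frame rather than raw points).

        Returns the stacked points plus a per-point frame index, so a
        renderer can color the swept surface by time.
        """
        pts, times = [], []
        for t, p in enumerate(frames):
            pts.append(np.asarray(p, dtype=float))
            times.append(np.full(len(p), t))
        return np.vstack(pts), np.concatenate(times)
    ```

    Rendering the result in a synthetic scene or compositing it back into the video then only needs the combined geometry and the per-point time labels.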