
    Embodied Visual Perception Models For Human Behavior Understanding

    Many modern applications require extracting the core attributes of human behavior, such as a person's attention, intent, or skill level, from visual data. There are two main challenges related to this problem. First, we need models that can represent visual data in terms of object-level cues. Second, we need models that can infer the core behavioral attributes from the visual data. We refer to these two challenges as "learning to see" and "seeing to learn" respectively. In this PhD thesis, we have made progress towards addressing both challenges. We tackle the problem of "learning to see" by developing methods that extract object-level information directly from raw visual data. This includes two top-down contour detectors, DeepEdge and HfL, which can be used to aid high-level vision tasks such as object detection. Furthermore, we also present two semantic object segmentation methods, Boundary Neural Fields (BNFs) and Convolutional Random Walk Networks (RWNs), which integrate low-level affinity cues into the object segmentation process. We then shift our focus to video-level understanding and present a Spatiotemporal Sampling Network (STSN), which can be used for video object detection and discriminative motion feature learning. Afterwards, we transition to the second subproblem, "seeing to learn", for which we leverage first-person GoPro cameras that record what people see during a particular activity. We aim to infer core behavioral attributes such as a person's attention, intention, and skill level from such first-person data. To do so, we first propose the concept of action-objects: the objects that capture a person's conscious visual (watching a TV) or tactile (taking a cup) interactions. We then introduce two models, EgoNet and Visual-Spatial Network (VSN), which detect action-objects in supervised and unsupervised settings respectively. Afterwards, we focus on a behavior understanding task in a complex basketball activity. We present a method for evaluating players' skill level from their first-person basketball videos, as well as a model that predicts a player's future motion trajectory from a single first-person image.
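
    To illustrate the general idea of integrating low-level affinity cues into segmentation, below is a minimal random-walk label-propagation sketch in Python/NumPy. It is a generic example, not the actual BNF or RWN formulation from the thesis (which learn affinities and train end-to-end); the function name random_walk_refine and all parameter values are hypothetical.

    # Hedged sketch: random-walk refinement of per-pixel segmentation scores
    # using low-level pairwise affinities. Illustrative only.
    import numpy as np

    def random_walk_refine(scores, affinity, alpha=0.8, n_iters=10):
        """Propagate class scores over a pixel graph.

        scores:   (N, C) initial per-pixel class scores (N pixels, C classes).
        affinity: (N, N) non-negative pairwise affinities (e.g. color/edge based).
        alpha:    mixing weight between propagated and initial scores.
        """
        # Row-normalize affinities into a transition matrix.
        transition = affinity / (affinity.sum(axis=1, keepdims=True) + 1e-8)
        refined = scores.copy()
        for _ in range(n_iters):
            refined = alpha * transition @ refined + (1.0 - alpha) * scores
        return refined

    # Toy usage: 4 pixels, 2 classes, affinities favoring pixel pairs 0-1 and 2-3,
    # so the uncertain middle pixels are pulled towards their strongly-labeled neighbors.
    scores = np.array([[0.9, 0.1], [0.5, 0.5], [0.5, 0.5], [0.1, 0.9]])
    affinity = np.array([[1.0, 0.9, 0.1, 0.0],
                         [0.9, 1.0, 0.1, 0.0],
                         [0.1, 0.1, 1.0, 0.9],
                         [0.0, 0.0, 0.9, 1.0]])
    print(random_walk_refine(scores, affinity))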

    Motion Learning for Dynamic Scene Understanding

    An important goal of computer vision is to automatically understand the visual world. With the introduction of deep networks, we have seen huge progress in static image understanding. However, we live in a dynamic world, so it is far from enough to merely understand static images. Motion plays a key role in analyzing dynamic scenes and has been one of the fundamental research topics in computer vision. It has wide applications in many fields, including video analysis, socially aware robotics, and autonomous driving. In this dissertation, we study motion from two perspectives: geometric and semantic. From the geometric perspective, we aim to accurately estimate the 3D motion (or scene flow) and 3D structure of the scene. Since manually annotating motion is difficult, we propose self-supervised models for scene flow estimation from image and point cloud sequences. From the semantic perspective, we aim to understand the meanings of different motion patterns. We first show that motion benefits detecting and tracking objects in videos, and we then propose a framework to understand the intentions of agents in a scene and predict their future locations. Finally, we study the role of motion information in action recognition.
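
    To make the self-supervised idea concrete, here is a hedged PyTorch sketch of a photometric-reconstruction plus smoothness loss of the kind commonly used to train flow networks without motion labels. It is a generic 2D optical-flow example under assumed tensor shapes, not the specific scene-flow losses proposed in the dissertation; warp and self_supervised_loss are illustrative names.

    # Hedged sketch: self-supervised photometric + smoothness loss for flow learning.
    # Generic illustration only; not the exact objectives used in the dissertation.
    import torch
    import torch.nn.functional as F

    def warp(img, flow):
        """Backward-warp img (B, C, H, W) with per-pixel flow (B, 2, H, W) in pixels."""
        b, _, h, w = img.shape
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        grid = torch.stack((xs, ys), dim=0).float().to(img.device)   # (2, H, W), (x, y)
        coords = grid.unsqueeze(0) + flow                             # (B, 2, H, W)
        # Normalize coordinates to [-1, 1] as expected by grid_sample.
        coords_x = 2.0 * coords[:, 0] / (w - 1) - 1.0
        coords_y = 2.0 * coords[:, 1] / (h - 1) - 1.0
        grid_norm = torch.stack((coords_x, coords_y), dim=-1)         # (B, H, W, 2)
        return F.grid_sample(img, grid_norm, align_corners=True)

    def self_supervised_loss(frame1, frame2, flow, smooth_weight=0.1):
        """Photometric reconstruction loss plus first-order flow smoothness."""
        recon = frame2 if flow is None else warp(frame2, flow)        # frame2 warped towards frame1
        photometric = (recon - frame1).abs().mean()
        smoothness = (flow[:, :, :, 1:] - flow[:, :, :, :-1]).abs().mean() + \
                     (flow[:, :, 1:, :] - flow[:, :, :-1, :]).abs().mean()
        return photometric + smooth_weight * smoothness

    In practice, a flow network predicts flow from the two frames, the loss above is computed on unlabeled video pairs, and gradients are backpropagated through the warp, so no ground-truth motion is required.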