5 research outputs found

    Object Recognition Robust to Imperfect Depth Data

    No full text
    Abstract. In this paper, we present an adaptive data fusion model that robustly integrates depth and image-only perception. Combining dense depth measurements with images can greatly enhance the performance of many computer vision algorithms, yet degraded depth measurements (e.g., missing data) can also cause dramatic performance losses to levels below image-only algorithms. We propose a generic fusion model based on maximum likelihood estimates of fused image-depth functions for both available and missing depth data. We demonstrate its application to each step of a state-of-the-art image-only object instance recognition pipeline. The resulting approach shows increased recognition performance over alternative data fusion approaches. Despite its tremendous potential, dense depth estimation has fundamental limitations that must be addressed for robust performance. In many realistic scenes, depth sensors fail to compute depth measurements on portions of the associated color data (as shown in Fig. 1). We refer to this phenomenon …
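
    The "maximum likelihood estimates of fused image-depth functions" mentioned above can be read, roughly, as a per-hypothesis score that combines an image term with a depth term and falls back to the image term wherever depth is missing. The Python sketch below is only an illustration under that assumption; the Gaussian residual forms, parameter names, and fallback rule are assumptions for illustration, not the paper's actual formulation.

    import numpy as np

    def fused_log_likelihood(img_residuals, depth_residuals, depth_valid,
                             sigma_img=2.0, sigma_depth=0.02):
        # Image term is always available.
        ll = -0.5 * (np.asarray(img_residuals, dtype=float) / sigma_img) ** 2
        # Add the depth term only where the sensor returned a measurement,
        # so missing depth degrades gracefully to the image-only score
        # instead of corrupting it.
        depth_term = -0.5 * (np.nan_to_num(depth_residuals) / sigma_depth) ** 2
        ll = ll + np.where(depth_valid, depth_term, 0.0)
        return float(ll.sum())

    # A maximum-likelihood estimate then keeps the hypothesis with the
    # highest fused score, e.g.:
    #   best = max(hypotheses, key=lambda h: fused_log_likelihood(
    #       h.img_err, h.depth_err, h.depth_valid))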

    Lifelong Robotic Object Perception

    No full text
    In this thesis, we study the topic of Lifelong Robotic Object Perception. We propose, as a long-term goal, a framework to recognize known objects and to discover unknown objects in the environment as the robot operates, for as long as the robot operates. We build the foundations for Lifelong Robotic Object Perception by focusing our study on the two critical components of this framework: 1) how to recognize and register known objects for robotic manipulation, and 2) how to automatically discover novel objects in the environment so that we can recognize them in the future.

    Our work on Object Recognition and Pose Estimation addresses two main challenges in computer vision for robotics: robust performance in complex scenes, and low latency for real-time operation. We present MOPED, a framework for Multiple Object Pose Estimation and Detection that integrates single-image and multi-image object recognition and pose estimation in one optimized, robust, and scalable framework. We extend MOPED to leverage RGBD images using an adaptive image-depth fusion model based on maximum likelihood estimates. We incorporate this model into each stage of MOPED to achieve object recognition robust to imperfect depth data.

    In Robotic Object Discovery, we address the challenges of scalability and robustness for long-term operation. As a first step towards Lifelong Robotic Object Perception, we aim to automatically process the raw video stream of an entire workday of a robotic agent to discover novel objects. The key to achieving this goal is to incorporate non-visual information (robotic metadata) in the discovery process. We encode the natural constraints and non-visual sensory information in service robotics to make long-term object discovery feasible. We introduce an optimized implementation, HerbDisc, that processes a video stream of 6 h 20 min of challenging human environments in under 19 min and discovers 206 novel objects.

    We tailor our solutions to the sensing capabilities and requirements in service robotics, with the goal of enabling our service robot, HERB, to operate autonomously in human environments.
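
    As a rough illustration of how robotic metadata can make a full workday of video tractable, the Python sketch below gates which segment pairs ever reach expensive visual matching using cheap non-visual checks. The specific constraints (timestamps and robot poses) and thresholds are hypothetical and only stand in for the general idea of constraint-based pruning; they do not reproduce HerbDisc's actual formulation.

    import numpy as np
    from itertools import combinations

    def metadata_compatible(a, b, max_dt=60.0, max_dist=2.0):
        # Hypothetical non-visual gate: only compare segments captured
        # close together in time and in robot position.
        close_in_time = abs(a["timestamp"] - b["timestamp"]) < max_dt
        close_in_space = np.linalg.norm(
            np.asarray(a["robot_pose"]) - np.asarray(b["robot_pose"])) < max_dist
        return close_in_time and close_in_space

    def candidate_pairs(segments):
        # Without a gate, n segments require O(n^2) visual comparisons;
        # metadata keeps only the plausible pairs for visual matching.
        return [(a, b) for a, b in combinations(segments, 2)
                if metadata_compatible(a, b)]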
