
    Maximum likelihood estimation of cloud height from multi-angle satellite imagery

    We develop a new estimation technique for recovering depth-of-field from multiple stereo images. Depth-of-field is estimated by determining the shift in image location resulting from different camera viewpoints. When this shift is not divisible by the pixel width, the multiple stereo images can be combined to form a super-resolution image. By modeling this super-resolution image as a realization of a random field, one can view the recovery of depth as a likelihood estimation problem. We apply these modeling techniques to the recovery of cloud height from the multiple viewing angles provided by the MISR instrument on the Terra Satellite. Our efforts are focused on a two-layer cloud ensemble where both layers are relatively planar, the bottom layer is optically thick and textured, and the top layer is optically thin. Our results demonstrate that, with relative ease, we obtain estimates comparable to those of the M2 stereo matcher, the algorithm used in the current MISR standard product (details can be found in [IEEE Transactions on Geoscience and Remote Sensing 40 (2002) 1547--1559]). Moreover, our techniques offer the possibility of modeling all of the MISR data in a unified way for cloud height estimation. Research is underway to extend this framework to fast, high-quality global estimates of cloud height.
    Comment: Published at http://dx.doi.org/10.1214/09-AOAS243 in the Annals of Applied Statistics (http://www.imstat.org/aoas/) by the Institute of Mathematical Statistics (http://www.imstat.org)
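    The likelihood view of shift estimation described in the abstract can be sketched in a toy form: under an i.i.d. Gaussian noise model, maximizing the likelihood over integer candidate shifts reduces to minimizing the sum of squared differences between the reference image and each shifted view. This is a minimal one-view illustration under my own assumptions (function name, Gaussian noise, integer shifts), not the paper's actual multi-angle estimator.

    ```python
    import numpy as np

    def ml_shift(ref, view, candidates, sigma=1.0):
        """Return the integer pixel shift (along axis 1) that maximizes the
        Gaussian log-likelihood of `view` being `ref` displaced by that shift.

        With i.i.d. Gaussian noise, the log-likelihood of a candidate shift d
        is -SSD(ref, shift(view, -d)) / (2 * sigma^2) up to a constant, so the
        ML estimate is simply the SSD-minimizing shift.
        """
        lls = []
        for d in candidates:
            # Undo the hypothesized shift and score the residual.
            shifted = np.roll(view, -d, axis=1)
            lls.append(-np.sum((ref - shifted) ** 2) / (2.0 * sigma ** 2))
        return candidates[int(np.argmax(lls))]
    ```

    In the stereo setting, the recovered shift (disparity) maps to height through the known camera geometry; the paper extends this idea to sub-pixel shifts via a super-resolution random-field model.
    
    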

    Scenic: A Language for Scenario Specification and Scene Generation

    We propose a new probabilistic programming language for the design and analysis of perception systems, especially those based on machine learning. Specifically, we consider the problems of training a perception system to handle rare events, testing its performance under different conditions, and debugging failures. We show how a probabilistic programming language can help address these problems by specifying distributions encoding interesting types of inputs and sampling these to generate specialized training and test sets. More generally, such languages can be used for cyber-physical systems and robotics to write environment models, an essential prerequisite to any formal analysis. In this paper, we focus on systems like autonomous cars and robots, whose environment is a "scene", a configuration of physical objects and agents. We design a domain-specific language, Scenic, for describing "scenarios" that are distributions over scenes. As a probabilistic programming language, Scenic allows assigning distributions to features of the scene, as well as declaratively imposing hard and soft constraints over the scene. We develop specialized techniques for sampling from the resulting distribution, taking advantage of the structure provided by Scenic's domain-specific syntax. Finally, we apply Scenic in a case study on a convolutional neural network designed to detect cars in road images, improving its performance beyond that achieved by state-of-the-art synthetic data generation methods.
    Comment: 41 pages, 36 figures. Full version of a PLDI 2019 paper (extending UC Berkeley EECS Department Tech Report No. UCB/EECS-2018-8
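    The core idea the abstract describes — declaring distributions over scene features and imposing hard constraints on the result — can be approximated by rejection sampling. The sketch below is a hypothetical Python analogue, not actual Scenic syntax; the two-car layout, the distributions, and the separation constraint are all invented for illustration (Scenic's real sampler uses smarter, structure-aware techniques than naive rejection).

    ```python
    import random

    def sample_scene(max_tries=10_000):
        """Sample a toy two-object scene by rejection sampling.

        Features get declared distributions; a sample is kept only if it
        satisfies the hard constraints, mimicking declarative constraint
        imposition in a scenario language.
        """
        for _ in range(max_tries):
            ego = random.uniform(0.0, 100.0)          # ego position ~ Uniform
            other = random.gauss(ego + 20.0, 10.0)    # other car ~ Normal, relative to ego
            # Hard constraints: minimum separation, and stay on the road segment.
            if abs(other - ego) >= 5.0 and 0.0 <= other <= 100.0:
                return {"ego": ego, "other": other}
        raise RuntimeError("constraints too tight to satisfy by rejection")
    ```

    Sampling many such scenes and rendering them yields the kind of specialized, constraint-respecting training and test sets the paper uses to probe rare events in perception systems.
    
    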

    Sim2Real View Invariant Visual Servoing by Recurrent Control

    Humans are remarkably proficient at controlling their limbs and tools from a wide range of viewpoints and angles, even in the presence of optical distortions. In robotics, this ability is referred to as visual servoing: moving a tool or end-point to a desired location using primarily visual feedback. In this paper, we study how viewpoint-invariant visual servoing skills can be learned automatically in a robotic manipulation scenario. To this end, we train a deep recurrent controller that can automatically determine which actions move the end-point of a robotic arm to a desired object. The problem that must be solved by this controller is fundamentally ambiguous: under severe variation in viewpoint, it may be impossible to determine the actions in a single feedforward operation. Instead, our visual servoing system must use its memory of past movements to understand how the actions affect the robot motion from the current viewpoint, correcting mistakes and gradually moving closer to the target. This ability is in stark contrast to most visual servoing methods, which either assume known dynamics or require a calibration phase. We show how we can learn this recurrent controller using simulated data and a reinforcement learning objective. We then describe how the resulting model can be transferred to a real-world robot by disentangling perception from control and only adapting the visual layers. The adapted model can servo to previously unseen objects from novel viewpoints on a real-world Kuka IIWA robotic arm. For supplementary videos, see: https://fsadeghi.github.io/Sim2RealViewInvariantServo
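    The structural point in the abstract — that a single feedforward pass cannot resolve viewpoint ambiguity, so the policy must carry state across steps — can be sketched with a minimal recurrent controller in NumPy. This is a toy, untrained stand-in for the paper's deep recurrent policy; the class name, dimensions, and random weights are my assumptions, and a real system would learn these weights with reinforcement learning from images.

    ```python
    import numpy as np

    class RecurrentServo:
        """Toy recurrent controller: the hidden state integrates observations
        over time, so the action at step t can depend on the history of past
        movements, not just the current view."""

        def __init__(self, obs_dim, hid_dim, act_dim, seed=0):
            rng = np.random.default_rng(seed)
            self.W_in = rng.normal(0.0, 0.1, (hid_dim, obs_dim))
            self.W_h = rng.normal(0.0, 0.1, (hid_dim, hid_dim))
            self.W_out = rng.normal(0.0, 0.1, (act_dim, hid_dim))
            self.h = np.zeros(hid_dim)  # memory of past observations/actions

        def step(self, obs):
            # Vanilla RNN update: new state mixes the current observation
            # with the previous state, then maps to an action command.
            self.h = np.tanh(self.W_in @ obs + self.W_h @ self.h)
            return self.W_out @ self.h  # e.g. an end-effector velocity
    ```

    Because `step` is called in closed loop, the controller can in principle attribute observed motion to its own past actions and thereby infer the unknown viewpoint, which is exactly the capability the paper trains for.
    
    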

    WiseEye: next generation expandable and programmable camera trap platform for wildlife research

    Funding: The work was supported by the RCUK Digital Economy programme to the dot.rural Digital Economy Hub; award reference: EP/G066051/1. The work of S. Newey and RJI was part-funded by the Scottish Government's Rural and Environment Science and Analytical Services (RESAS). Details are published as an Open Source Toolkit in PLOS Journals at: http://dx.doi.org/10.1371/journal.pone.0169758

    User-centered design of a dynamic-autonomy remote interaction concept for manipulation-capable robots to assist elderly people in the home

    In this article, we describe the development of a human-robot interaction concept for service robots to assist elderly people in the home with physical tasks. Our approach is based on the insight that robots are not yet able to handle all tasks autonomously with sufficient reliability in the complex and heterogeneous environments of private homes. We therefore employ remote human operators to assist on tasks a robot cannot handle completely autonomously. Our development methodology was user-centric and iterative, with six user studies carried out at various stages involving a total of 241 participants. The concept is under implementation on the Care-O-bot 3 robotic platform. The main contributions of this article are (1) the results of a survey in the form of a ranking of the demands of elderly people and informal caregivers for a range of 25 robot services, (2) the results of an ethnography investigating the suitability of emergency teleassistance and telemedical centers for incorporating robotic teleassistance, and (3) a user-validated human-robot interaction concept with three user roles and three corresponding user interfaces, designed as a solution to the problem of engineering reliable service robots for home environments.