346 research outputs found

    Independent Motion Detection with Event-driven Cameras

    Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast, low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, when the camera is mounted on a moving robot, the same tracking problem becomes confounded by background clutter events caused by the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in speed of both the head and the target. Comment: 7 pages, 6 figures
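    A minimal sketch of the discrepancy test described above, assuming a corner tracker and a pre-trained ego-motion regressor already exist; the helper names (e.g. ego_motion_model.predict) and the threshold value are illustrative, not taken from the paper.

        import numpy as np

        def detect_independent_motion(corner_tracks, joint_velocities,
                                      ego_motion_model, threshold=2.0):
            """Flag tracked corners whose measured image velocity deviates from
            the velocity predicted from the robot's own motion (ego-motion).

            corner_tracks    : list of (position, measured_velocity) per tracked corner
            joint_velocities : current robot joint velocities (e.g. head/neck encoders)
            ego_motion_model : regressor learned while no independent object moves,
                               mapping (position, joint velocities) -> expected velocity
            threshold        : discrepancy above which a corner is labelled as
                               independently moving (illustrative value, pixels/s)
            """
            flags = []
            for position, measured_velocity in corner_tracks:
                predicted = ego_motion_model.predict(position, joint_velocities)
                discrepancy = np.linalg.norm(np.asarray(measured_velocity) - predicted)
                flags.append(discrepancy > threshold)
            return flags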

    A hierarchical system for a distributed representation of the peripersonal space of a humanoid robot

    Reaching a target object in an unknown and unstructured environment is easily performed by human beings. However, designing a humanoid robot that executes the same task requires the implementation of complex abilities, such as identifying the target in the visual field, estimating its spatial location, and precisely driving the motors of the arm to reach it. While research usually tackles the development of such abilities separately, in this work we integrate a number of computational models into a unified framework and demonstrate, on a humanoid torso, the feasibility of an integrated working representation of its peripersonal space. To achieve this goal, we propose a cognitive architecture that connects several models inspired by neural circuits of the visual, frontal, and posterior parietal cortices of the brain. The outcome of the integration process is a system that allows the robot to create its internal model and its representation of the surrounding space by interacting with the environment directly, through a mutual adaptation of perception and action. The robot is eventually capable of executing a set of tasks, such as recognizing, gazing at, and reaching target objects, which can run separately or cooperate to support more structured and effective behaviors.
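    A toy sketch of how such an integrated perception-to-action pipeline might be wired together; every component name and interface below is hypothetical and serves only to illustrate the flow, not the paper's architecture.

        class PeripersonalSpaceAgent:
            """Toy integration of perception, spatial estimation, gaze and reaching.
            All components are placeholders supplied by the caller."""

            def __init__(self, recognizer, localizer, gaze_controller, arm_controller):
                self.recognizer = recognizer    # finds the target in the visual field
                self.localizer = localizer      # maps image coordinates to a 3D location
                self.gaze = gaze_controller     # centers the target in the robot's view
                self.arm = arm_controller       # drives the arm motors toward the target

            def reach(self, left_image, right_image):
                target_px = self.recognizer.detect(left_image)
                if target_px is None:
                    return False                # no target found, nothing to reach for
                self.gaze.fixate(target_px)     # gazing supports spatial estimation
                target_xyz = self.localizer.triangulate(left_image, right_image, target_px)
                self.arm.move_to(target_xyz)    # reach within peripersonal space
                return True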

    Gazing at the Solar System: Capturing the Evolution of Dunes, Faults, Volcanoes, and Ice from Space

    Gazing imaging holds promise for improved understanding of surface characteristics and processes of Earth and solar system bodies. Evolution of earthquake fault zones, migration of sand dunes, and retreat of ice masses can be understood by observing changing features over time. To gaze or stare means to look steadily, intently, and with fixed attention; in imaging, this offers the ability to probe the characteristics of a target deeply, allowing retrieval of 3D structure and of changes at both fine and coarse scales. Observing surface reflectance and 3D structure from multiple perspectives allows for a more complete view of a surface than conventional remote imaging. A gaze from low Earth orbit (LEO) could last several minutes, allowing video capture of dynamic processes. Repeat passes enable monitoring on time scales of days to years. Numerous vantage points are available during a gaze (Figure 1). Features in the scene are projected into each image frame, enabling the recovery of dense 3D structure. The recovery is robust to errors in spacecraft position and attitude knowledge, because the features are observed from many different perspectives. The combination of a varying look angle and varying solar illumination allows texture and reflectance properties to be recovered and permits the separation of atmospheric effects. Applications are numerous and diverse, including, for example, glacier and ice sheet flux, sand dune migration, geohazards from earthquakes, volcanoes, landslides, rivers and floods, animal migrations, ecosystem changes, geysers on Enceladus, and ice structure on Europa. The Keck Institute for Space Studies (KISS) hosted a workshop in June of 2014 to explore opportunities and challenges of gazing imaging. The goals of the workshop were to develop and discuss the broad scientific questions that can be addressed using spaceborne gazing, specific types of targets and applications, the resolution and spectral bands needed to achieve the science objectives, and possible instrument configurations for future missions. The workshop participants found that gazing imaging offers the ability to measure morphology, composition, and reflectance simultaneously and to measure their variability over time. Gazing imaging can be applied to better understand the consequences of climate change and natural hazard processes, through the study of continuous and episodic processes in both domains.
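    A minimal sketch of the multi-view recovery mentioned above: a surface feature observed in many frames of a gaze can be triangulated by linear least squares from the rays of each view. The generic pinhole projection matrices below are an assumption for illustration, not a specific mission geometry.

        import numpy as np

        def triangulate_feature(projection_matrices, pixel_observations):
            """Linear (DLT) triangulation of one surface feature from many views.

            projection_matrices : list of 3x4 pinhole camera matrices, one per frame
            pixel_observations  : list of (u, v) image coordinates of the same feature
            Returns the 3D point in the common (e.g. Earth-fixed) frame.
            """
            rows = []
            for P, (u, v) in zip(projection_matrices, pixel_observations):
                P = np.asarray(P)
                # Each observation contributes two linear constraints on the point.
                rows.append(u * P[2] - P[0])
                rows.append(v * P[2] - P[1])
            A = np.vstack(rows)
            # Homogeneous least squares: the point is the right singular vector
            # associated with the smallest singular value.
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]

    Because every frame of a multi-minute gaze contributes two constraints, the least-squares solution averages over many viewpoints, which is the robustness to individual pose-knowledge errors that the abstract notes.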

    Novel Approach to Ocular Photoscreening

    Photoscreening is a technique typically applied in mass pediatric vision screening owing to its objective, binocular, and cost-effective nature. Through the retinal reflex image, ocular alignment and refractive status are evaluated. In the USA, this method has screened millions of preschool children in recent years. Nevertheless, the efficiency of the screening has been contentious. In this dissertation, the technique is reviewed and reexamined. Revisions of the photoscreening technique are developed to detect and quantify strabismus, refractive errors, and high-order ocular aberrations. These new optical designs overcome traditional design deficiencies in three areas. First, a Dynamic Hirschberg Test is conducted to detect strabismus. The test begins with both eyes following a moving fixation target under binocular viewing; during the test, each eye is occluded without the subject's awareness, which forces refixation in strabismic subjects and reveals latent strabismus. Photoscreening images taken under monocular viewing are used to calculate deviations from the expected binocular eye-movement path. A significant eye-movement deviation from binocular to monocular viewing indicates the presence of strabismus. Second, a novel binocular adaptive photorefraction (APR) approach is developed to characterize the retinal reflex intensity profile according to the eye's refractive state. This approach calculates the retinal reflex profile by integrating the retinal reflex intensity from a coaxial and several eccentric photorefraction images. Theoretical simulations evaluate the influence of several human factors. An experimental APR device is constructed with 21 light sources to increase the spherical refraction detection range. The additional light-source angular meridians detect astigmatism. The experimentally measured distribution is characterized by relevant parameters that describe the ocular refractive state. Last, the APR design is further applied to detect vision problems associated with high-order aberrations (e.g., cataracts, dry eye, keratoconus). A monocular prototype APR device is constructed with coaxial and eccentric light sources to acquire 13 monocular photorefraction images. Light sources projected inside and along the camera aperture improve the detection sensitivity. The acquired reflex images are then decomposed into Zernike polynomials, and the complex reflex patterns are analyzed using the Zernike coefficient magnitudes.
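    A minimal sketch of the final analysis step, fitting Zernike coefficients to a reflex intensity pattern over the pupil by least squares; the chosen low polynomial orders and the plain least-squares fit are generic assumptions, not the dissertation's exact procedure.

        import numpy as np

        def zernike_basis(rho, theta):
            """A few low-order Zernike polynomials on the unit pupil
            (piston, tilts, defocus, astigmatism, comas, spherical)."""
            return np.stack([
                np.ones_like(rho),                       # Z0: piston
                rho * np.cos(theta),                     # Z1: tilt x
                rho * np.sin(theta),                     # Z2: tilt y
                2 * rho**2 - 1,                          # Z3: defocus
                rho**2 * np.cos(2 * theta),              # Z4: astigmatism 0/90
                rho**2 * np.sin(2 * theta),              # Z5: astigmatism 45
                (3 * rho**3 - 2 * rho) * np.cos(theta),  # Z6: coma x
                (3 * rho**3 - 2 * rho) * np.sin(theta),  # Z7: coma y
                6 * rho**4 - 6 * rho**2 + 1,             # Z8: spherical aberration
            ], axis=-1)

        def fit_zernike_coefficients(reflex_image, pupil_mask):
            """Least-squares fit of Zernike coefficients to the reflex intensity
            inside the pupil; the coefficient magnitudes summarize the pattern."""
            h, w = reflex_image.shape
            ys, xs = np.mgrid[0:h, 0:w]
            # Normalize pixel coordinates to the unit disk of the pupil.
            cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
            r = max(cx, cy)
            rho = np.hypot(xs - cx, ys - cy) / r
            theta = np.arctan2(ys - cy, xs - cx)
            inside = pupil_mask & (rho <= 1.0)
            basis = zernike_basis(rho[inside], theta[inside])
            coeffs, *_ = np.linalg.lstsq(basis, reflex_image[inside], rcond=None)
            return coeffs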