
    From Constraints to Opportunities: Efficient Object Detection Learning for Humanoid Robots

    Reliable perception and efficient adaptation to novel conditions are priority skills for robots that operate in ever-changing environments. Indeed, operating autonomously in real-world scenarios requires identifying the state of the current context and acting accordingly. Moreover, the requested tasks might not be known a priori, requiring the system to update online. Robotic platforms can gather various types of perceptual information thanks to the multiple sensory modalities they are equipped with. Nonetheless, recent results in computer vision motivate a particular interest in visual perception. Specifically, in this thesis, I mainly focused on the object detection task, since it can be the basis of more sophisticated capabilities. The vast advancements brought to computer vision research by deep learning methods are appealing in a robotic setting. However, their adoption in applied domains is not straightforward, since adapting them to new tasks is strongly demanding in terms of annotated data, optimization time and computational resources. These requirements do not generally meet current robotics constraints. Nevertheless, robotic platforms, and especially humanoids, present opportunities that can be exploited. The sensors they are equipped with represent precious sources of additional information. Moreover, their embodiment in the workspace and their motion capabilities allow for a natural interaction with the environment. Motivated by these considerations, in this Ph.D. project I aimed at devising and developing solutions able to integrate the worlds of computer vision and robotics, focusing on the task of object detection. Specifically, I dedicated a large amount of effort to alleviating the requirements of state-of-the-art methods in terms of annotated data and training time, preserving their accuracy by exploiting robotics opportunities.

    Independent Motion Detection with Event-driven Cameras

    Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, a camera mounted on a moving robot is typically non-stationary, and the same tracking problem becomes confounded by background clutter events due to the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in speed of both the head and the target.
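    As a concrete illustration of the discrepancy test described above, the sketch below fits a linear model from joint velocities to expected corner image velocity on ego-motion-only data, then flags corners whose measured velocity deviates beyond a threshold. The class name, the linear form of the model, and the threshold value are illustrative assumptions, not the paper's implementation.

        # Minimal sketch of the ego-motion discrepancy test (assumptions:
        # a single linear joint-velocity -> corner-velocity model and a
        # fixed residual threshold; the paper's learned statistics may differ).
        import numpy as np

        class EgoMotionModel:
            """Predicts a corner's image velocity from robot joint velocities."""

            def __init__(self, n_joints: int):
                self.W = np.zeros((2, n_joints))  # joint velocities -> (vx, vy)

            def fit(self, joint_vels: np.ndarray, corner_vels: np.ndarray) -> None:
                # Least-squares fit on data recorded with no independently
                # moving objects: joint_vels (N, n_joints), corner_vels (N, 2).
                sol, *_ = np.linalg.lstsq(joint_vels, corner_vels, rcond=None)
                self.W = sol.T

            def predict(self, joint_vel: np.ndarray) -> np.ndarray:
                return self.W @ joint_vel

        def is_independently_moving(model: EgoMotionModel,
                                    joint_vel: np.ndarray,
                                    measured_vel: np.ndarray,
                                    thresh: float = 2.0) -> bool:
            # A corner whose measured velocity deviates from the ego-motion
            # prediction by more than `thresh` (value assumed) is attributed
            # to an independently moving object.
            residual = np.linalg.norm(measured_vel - model.predict(joint_vel))
            return residual > thresh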

    Better Vision Through Manipulation

    For the purposes of manipulation, we would like to know what parts of the environment are physically coherent ensembles - that is, which parts will move together, and which are more or less independent. It takes a great deal of experience before this judgement can be made from purely visual information. This paper develops active strategies for acquiring that experience through experimental manipulation, using tight correlations between arm motion and optic flow to detect both the arm itself and the boundaries of objects with which it comes into contact. We argue that following causal chains of events out from the robot's body into the environment allows for a very natural developmental progression of visual competence, and relate this idea to results in neuroscience.
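    As a rough sketch of how such a correlation could be exploited, the snippet below labels as "arm" those pixels whose optic-flow magnitude correlates over time with the commanded arm speed. The function name, array shapes, and the 0.8 correlation threshold are assumptions for illustration, not the paper's method.

        # Minimal sketch: segment the arm as the set of pixels whose flow
        # history correlates with the arm's motion (threshold assumed).
        import numpy as np

        def segment_arm(flow_mag: np.ndarray, arm_speed: np.ndarray,
                        thresh: float = 0.8) -> np.ndarray:
            """flow_mag: (T, H, W) optic-flow magnitude per frame;
            arm_speed: (T,) commanded arm speed.
            Returns a boolean (H, W) mask of candidate arm pixels."""
            T, H, W = flow_mag.shape
            flow = flow_mag.reshape(T, -1)
            # Pearson correlation of each pixel's flow history with arm_speed.
            flow_c = flow - flow.mean(axis=0)
            arm_c = arm_speed - arm_speed.mean()
            denom = flow_c.std(axis=0) * arm_c.std() * T
            corr = (flow_c * arm_c[:, None]).sum(axis=0) / np.maximum(denom, 1e-9)
            return (corr > thresh).reshape(H, W)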

    Tiny-YOLO distance measurement and object detection coordination system for the BarelangFC robot

    A humanoid robot called BarelangFC was designed to take part in the Kontes Robot Indonesia (KRI) competition, in the robot coordination division. In this division, each robot is expected to recognize its opponents and to pass the ball towards a team member to establish coordination between the robots. In order to achieve this team coordination, a fast and accurate system is needed to detect and estimate the other robots' positions in real time. Moreover, each robot has to estimate its team members' locations based on its camera reading, so that the ball can be passed without error. This research proposes a Tiny-YOLO deep learning method to detect the location of a team member robot and presents a real-time coordination system using a ZED camera. To support this coordination, the distance between the robots was estimated using a trigonometric equation to ensure that the robot was able to pass the ball towards another robot. To verify our method, real-time experiments were carried out using an NVIDIA Jetson Xavier NX, and the results showed that the robot could estimate the distance correctly before passing the ball towards another robot.
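    The trigonometric relation the abstract alludes to is plausibly the pinhole-camera identity d = f * H / h (physical height over image height, via similar triangles). The sketch below illustrates it; the function name, focal length, and assumed robot height are illustrative values, not BarelangFC's actual calibration.

        # Minimal sketch of monocular distance estimation from a detection
        # bounding box: tan(theta) = robot_height_m / d = bbox_height_px / f,
        # so d = focal_length_px * robot_height_m / bbox_height_px.
        # Both default values below are assumptions, not measured calibration.
        def distance_from_bbox(bbox_height_px: float,
                               robot_height_m: float = 0.6,
                               focal_length_px: float = 700.0) -> float:
            """Estimate the distance (in metres) to a detected robot."""
            return focal_length_px * robot_height_m / bbox_height_px

        # Example: a 0.6 m tall robot imaged 150 px tall is about 2.8 m away.
        print(distance_from_bbox(150.0))  # -> 2.8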