    Towards an Interactive Humanoid Companion with Visual Tracking Modalities

    The idea of robots acting as human companions is not a particularly new or original one. Since the notion of “robot” was created, the idea of robots replacing humans in dangerous, dirty and dull activities has been inseparably tied to the fantasy of human-like robots being friends and existing side by side with humans. In 1989, Engelberger (Engelberger

    Event-based Vision: A Survey

    Event cameras are bio-inspired sensors that differ from conventional frame cameras: instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of microseconds), very high dynamic range (140 dB vs. 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz), resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios that are challenging for traditional cameras, such as those demanding low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available, and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
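
    As a concrete illustration of the working principle described above, the following minimal sketch accumulates a stream of (timestamp, x, y, polarity) events into a frame-like image, one simple way to feed event data to conventional vision algorithms. The type names and sensor resolution are chosen for illustration and are not taken from any camera SDK.

```python
# Minimal sketch: accumulate signed event polarities into an image.
# Event layout (t, x, y, polarity) follows the description above;
# everything else here is an illustrative assumption.
from dataclasses import dataclass
from typing import Iterable

import numpy as np


@dataclass
class Event:
    t_us: int      # timestamp in microseconds
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 brightness increase, -1 decrease


def accumulate(events: Iterable[Event], height: int, width: int) -> np.ndarray:
    """Sum signed polarities per pixel over a time window, yielding a
    frame-like representation for downstream vision algorithms."""
    frame = np.zeros((height, width), dtype=np.int32)
    for ev in events:
        frame[ev.y, ev.x] += ev.polarity
    return frame


# Example: three synthetic events on a 4x4 sensor.
evs = [Event(10, 1, 1, +1), Event(12, 1, 1, +1), Event(15, 2, 3, -1)]
print(accumulate(evs, 4, 4))
```

    More elaborate event representations covered by the survey (time surfaces, voxel grids) follow the same pattern of aggregating sparse events into a dense tensor.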

    A motion control method for a differential drive robot based on human walking for immersive telepresence

    This thesis introduces an interface for controlling Differential Drive Robots (DDRs) for telepresence applications. Our goal is to enhance the immersive experience while reducing user discomfort when using Head Mounted Displays (HMDs) and body trackers. The robot is equipped with a 360° camera that captures the Robot Environment (RE). Users wear an HMD and use body trackers to navigate within a Local Environment (LE). Through a live video stream from the robot-mounted camera, users perceive the RE within a virtual sphere known as the Virtual Environment (VE). A proportional controller was employed to facilitate control of the robot, enabling it to replicate the movements of the user. The proposed method uses a chest tracker to control the telepresence robot and focuses on minimizing vection and rotations induced by the robot’s motion by modifying the VE, for example by rotating and translating it. Experimental results demonstrate the accuracy of the robot in reaching target positions when controlled through the body-tracker interface. They also reveal an optimal VE size that effectively reduces VR sickness and enhances the sense of presence.
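
    The abstract mentions a proportional controller that makes the robot replicate the user's movements but does not give the control law, so the following is a hedged sketch of such a controller for a differential drive robot. The gains and wheel geometry are assumed placeholder values, not the thesis's parameters.

```python
# Sketch of a proportional controller steering a differential drive
# robot toward a target offset derived from the user's tracked pose.
# Gains k_lin, k_ang and the wheel base are illustrative assumptions.
import math


def p_control(dx: float, dy: float, robot_theta: float,
              k_lin: float = 0.8, k_ang: float = 1.5):
    """Return (v, omega) driving the robot toward the offset (dx, dy)
    given its current heading robot_theta (radians)."""
    distance = math.hypot(dx, dy)
    heading_error = math.atan2(dy, dx) - robot_theta
    # Wrap the heading error to [-pi, pi] so the robot turns the short way.
    heading_error = math.atan2(math.sin(heading_error), math.cos(heading_error))
    v = k_lin * distance           # forward velocity, m/s
    omega = k_ang * heading_error  # angular velocity, rad/s
    return v, omega


def wheel_speeds(v: float, omega: float, wheel_base: float = 0.4):
    """Convert (v, omega) to left/right wheel speeds for a DDR."""
    return v - omega * wheel_base / 2, v + omega * wheel_base / 2


# Target 1 m ahead and 0.5 m to the left of a robot heading along +x.
v, w = p_control(dx=1.0, dy=0.5, robot_theta=0.0)
print(wheel_speeds(v, w))
```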

    Robot guidance using machine vision techniques in industrial environments: A comparative review

    In the factory of the future, most operations will be done by autonomous robots that need visual feedback to move around the working space avoiding obstacles, to work collaboratively with humans, to identify and locate the working parts, to complement the information provided by other sensors to improve their positioning accuracy, etc. Different vision techniques, such as photogrammetry, stereo vision, structured light, time of flight and laser triangulation, among others, are widely used for inspection and quality control processes in industry, and now also for robot guidance. Choosing which type of vision system to use is highly dependent on the parts that need to be located or measured. Thus, in this paper a comparative review of different machine vision techniques for robot guidance is presented. This work analyzes accuracy, range and weight of the sensors, safety, processing time and environmental influences. Researchers and developers can take it as background information for their future work.

    Detecting Gaze Direction Using Robot-Mounted and Mobile-Device Cameras

    Two common channels through which humans communicate are speech and gaze. Eye gaze is an important mode of communication: it allows people to better understand each other’s intentions, desires, interests, and so on. The goal of this research is to develop a framework for gaze-triggered events which can be executed on a robot and on mobile devices, and which allows us to perform experiments. We experimentally evaluate the framework and the techniques for extracting gaze direction based on a robot-mounted camera or a mobile-device camera which are implemented in the framework. We investigate the impact of light on the accuracy of gaze estimation, and also how the overall accuracy depends on user eye and head movements. Our research shows that the light intensity is important, and the placement of the light source is crucial. All the robot-mounted gaze detection modules we tested were found to be similar with regard to accuracy. The framework we developed was tested in a human-robot interaction experiment involving a job-interview scenario. The flexible structure of this scenario allowed us to test different components of the framework in varied real-world scenarios, which was very useful for progressing towards our long-term research goal of designing intuitive gaze-based interfaces for human-robot communication.
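
    As an illustration of what a framework for gaze-triggered events might expose, here is a hypothetical dispatcher that fires a callback once the estimated gaze has dwelt inside a target angular region for a set time. It assumes an upstream module (robot-mounted or mobile-device camera pipeline) already supplies gaze angles; all class names and thresholds are invented for the sketch.

```python
# Hypothetical gaze-triggered event dispatcher. Gaze angles (yaw,
# pitch, in degrees) are assumed to come from an external estimator.
import time
from typing import Callable


class GazeTrigger:
    def __init__(self, yaw_range, pitch_range, dwell_s: float,
                 callback: Callable[[], None]):
        self.yaw_range = yaw_range      # (min, max) in degrees
        self.pitch_range = pitch_range  # (min, max) in degrees
        self.dwell_s = dwell_s          # required fixation time, seconds
        self.callback = callback
        self._enter_time = None

    def update(self, yaw: float, pitch: float, now=None) -> None:
        """Feed one gaze estimate; fire the callback after a full dwell."""
        now = time.monotonic() if now is None else now
        inside = (self.yaw_range[0] <= yaw <= self.yaw_range[1]
                  and self.pitch_range[0] <= pitch <= self.pitch_range[1])
        if not inside:
            self._enter_time = None  # gaze left the region: reset
            return
        if self._enter_time is None:
            self._enter_time = now
        elif now - self._enter_time >= self.dwell_s:
            self.callback()
            self._enter_time = None  # re-arm after firing


# Fire when the user looks roughly at the robot's face for 0.5 s.
trigger = GazeTrigger((-10, 10), (-5, 15), 0.5, lambda: print("gaze event"))
trigger.update(0.0, 5.0, now=0.0)
trigger.update(1.0, 6.0, now=0.6)  # dwell satisfied -> prints "gaze event"
```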

    Robust Single Object Tracking and Following by Fusion Strategy

    Single Object Tracking methods are not yet robust enough because they may lose the target due to occlusions or changes in the target’s appearance, and it is difficult to detect automatically when they fail. To deal with these problems, we design a novel method to improve object tracking by fusing complementary types of trackers, taking advantage of each other’s strengths, with an Extended Kalman Filter to combine them in a probabilistic way. The environment perception is performed with a 3D LiDAR sensor, so we can track the object in the point cloud and also in the front-view image constructed from the point cloud. We use our tracker-fusion method in a mobile robot to follow pedestrians, also considering the dynamic obstacles in the environment in order to avoid them. We show that our method allows the robot to follow the target accurately during long experimental sessions in which the trackers independently fail, demonstrating the robustness of our tracker-fusion strategy.

    This work has been supported by the Ministry of Science and Innovation of the Spanish Government through the research project PID2021-122685OB-I00 and through the Formación del Personal Investigador (FPI) programme under Grant PRE2019-088069.
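
    The abstract does not spell out the filter equations, so the sketch below shows the general fusion pattern with a linear constant-velocity Kalman filter rather than the authors' exact EKF: each tracker that reports in a cycle contributes one sequential measurement update, weighted by its assumed noise level.

```python
# Sketch of probabilistic tracker fusion with a constant-velocity
# Kalman filter. State is [x, y, vx, vy]; both trackers report a 2D
# position. Noise magnitudes here are illustrative assumptions.
import numpy as np

dt = 0.1
F = np.array([[1, 0, dt, 0],   # constant-velocity motion model
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],    # both trackers measure position only
              [0, 1, 0, 0]], dtype=float)
Q = 0.01 * np.eye(4)           # process noise (assumed)
x = np.zeros(4)
P = np.eye(4)


def predict():
    global x, P
    x = F @ x
    P = F @ P @ F.T + Q


def update(z, meas_var):
    """One measurement update; called once per tracker that reports."""
    global x, P
    R = meas_var * np.eye(2)
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P


predict()
update(np.array([1.0, 2.0]), meas_var=0.05)  # point-cloud tracker
update(np.array([1.1, 1.9]), meas_var=0.20)  # image tracker (noisier)
print(x[:2])  # fused position estimate
```

    When one tracker fails, for example under occlusion, its update is simply skipped and the prediction step carries the target forward, which is the essence of the robustness the fusion strategy claims.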

    2D laser-based probabilistic motion tracking in urban-like environments

    All over the world, traffic injuries and fatality rates are increasing every year. The combination of negligent and imprudent drivers and adverse road and weather conditions produces tragic results with dramatic loss of life. In this scenario, the use of mobile robotics technology onboard vehicles could reduce casualties. Obstacle motion tracking is an essential ability for car-like mobile robots. However, this task is not trivial in urban environments, where a great quantity and variety of obstacles may induce the vehicle to make erroneous decisions. Unfortunately, obstacles close to its sensors frequently cause blind zones behind them where other obstacles could be hidden. In this situation, the robot may lose vital information about these obstructed obstacles, which can provoke collisions. In order to overcome this problem, an obstacle motion tracking module based only on 2D laser scan data was developed. Its main parts consist of obstacle detection, obstacle classification, and obstacle tracking algorithms. A motion detection module using scan matching was developed to improve the data quality for navigation purposes; a probabilistic grid representation of the environment was also implemented. The research was initially conducted using a MATLAB simulator that reproduces a simple 2D urban-like environment. Then the algorithms were validated using data sampled in real urban environments. On average, the results proved the usefulness of considering obstacle paths and velocities while navigating, at reasonable computational cost. This, undoubtedly, will allow future controllers to obtain better performance in highly dynamic environments.

    Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
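
    To make the probabilistic grid representation concrete, here is a minimal log-odds occupancy grid update from a single 2D laser scan. The cell size, log-odds increments, and naive ray stepping are assumptions chosen for illustration, not the thesis's implementation.

```python
# Sketch: log-odds occupancy grid updated from one 2D laser scan.
# Cells traversed by a beam get a "free" increment; the endpoint cell
# gets an "occupied" increment. All constants are assumptions.
import math

import numpy as np

CELL = 0.1                   # grid resolution, m/cell
L_OCC, L_FREE = 0.9, -0.4    # log-odds increments (assumed)
grid = np.zeros((200, 200))  # log-odds, 0 = unknown; robot at center


def update_from_scan(pose, ranges, angles, max_range=10.0):
    """pose = (x, y, theta) of the robot in the grid frame (meters, rad)."""
    px, py, th = pose
    for r, a in zip(ranges, angles):
        hit = r < max_range          # max-range returns carry no endpoint
        r = min(r, max_range)
        # Step along the beam, marking traversed cells as free.
        for s in range(int(r / CELL)):
            cx = int((px + s * CELL * math.cos(th + a)) / CELL) + 100
            cy = int((py + s * CELL * math.sin(th + a)) / CELL) + 100
            if 0 <= cx < 200 and 0 <= cy < 200:
                grid[cy, cx] += L_FREE
        if hit:                      # endpoint cell becomes more occupied
            ex = int((px + r * math.cos(th + a)) / CELL) + 100
            ey = int((py + r * math.sin(th + a)) / CELL) + 100
            if 0 <= ex < 200 and 0 <= ey < 200:
                grid[ey, ex] += L_OCC


# Single beam straight ahead hitting an obstacle at 2 m.
update_from_scan((0.0, 0.0, 0.0), ranges=[2.0], angles=[0.0])
occupancy = 1 - 1 / (1 + np.exp(grid))  # recover probabilities
print(occupancy[100, 120])              # cell ~2 m ahead of the robot
```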