914 research outputs found

    Toward Robots with Peripersonal Space Representation for Adaptive Behaviors

    The abilities to adapt and act autonomously in an unstructured, human-oriented environment are vital for the next generation of robots, which aim to cooperate safely with humans. While this adaptability is natural for humans, it remains very complex and challenging for robots. Observations and findings from psychology and neuroscience regarding the development of the human sensorimotor system can inform novel approaches to adaptive robotics. Among these is the formation of the representation of the space closely surrounding the body, the Peripersonal Space (PPS), from multisensory sources such as vision, hearing, touch, and proprioception, which facilitates human activities within their surroundings. Taking inspiration from the virtual safety margin formed by the PPS representation in humans, this thesis first constructs an equivalent model of the safety zone for each body part of the iCub humanoid robot. This PPS layer serves as a distributed collision predictor, which translates visually detected objects approaching the robot's body parts (e.g., arm, hand) into probabilities of collision between those objects and body parts. This leads to adaptive avoidance behaviors in the robot via an optimization-based reactive controller. Notably, this visual reactive control pipeline can also seamlessly incorporate tactile input to guarantee safety in both pre- and post-collision phases of physical Human-Robot Interaction (pHRI). Concurrently, the controller can also take into account multiple targets (of manipulation reaching tasks) generated by a multiple-Cartesian-point planner.
All components, namely the PPS, the multi-target motion planner (for manipulation reaching tasks), the reaching-with-avoidance controller, and the human-centred visual perception, are combined harmoniously into a hybrid control framework designed to keep the robot's interactions safe in a cluttered environment shared with human partners. Later, motivated by the development of manipulation skills in infants, in which multisensory integration is thought to play an important role, a learning framework is proposed to allow a robot to learn the processes of forming sensory representations, namely visuomotor and visuotactile, from its own motor activities in the environment. Both multisensory integration models are constructed with Deep Neural Networks (DNNs) in such a way that their outputs are represented in motor space to facilitate the robot's subsequent actions.
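    The PPS layer described above maps each visually detected object's proximity to a body part into a collision probability that the reactive controller can consume. A minimal sketch of that idea is given below; the function name, the linear distance activation, the safety-margin radius `d_max`, and the speed-dependent gain `tau` are all illustrative assumptions, not the thesis's actual learned model.

    ```python
    def collision_probability(distance, approach_speed, d_max=0.45, tau=0.1):
        """Toy PPS activation for one body part (illustrative, not the thesis model).

        distance:       object's distance to the body part, in meters
        approach_speed: speed at which the object closes in, in m/s (>= 0 means approaching)
        d_max:          assumed radius of the safety margin around the body part
        tau:            assumed gain coupling approach speed to perceived risk
        """
        # Objects outside the safety margin do not activate the PPS at all.
        if distance >= d_max:
            return 0.0
        # Activation grows linearly as the object penetrates the margin.
        base = 1.0 - distance / d_max
        # Faster approaches inflate the predicted collision risk.
        gain = 1.0 + tau * max(approach_speed, 0.0)
        return min(1.0, base * gain)
    ```

    A distributed predictor of this kind would run one such activation per monitored body part (arm, hand, forearm), and the reactive controller would treat the resulting probabilities as repulsive terms in its optimization.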

    Design and modeling of a stair climber smart mobile robot (MSRox)


    Computational intelligence approaches to robotics, automation, and control [Volume guest editors]

    No abstract available

    Computer Vision-Based Guidance of a Walking Robot in a Structured Environment

    Locomotion of a biped robot in a scenario with obstacles requires a high degree of coordination between perception and walking. This article presents the key ideas of a vision-based strategy for the guidance of walking robots in structured scenarios. Computer vision techniques are employed for reactive adaptation of step sequences, allowing a robot to step over, step upon, or walk around obstacles. Highly accurate feedback information is achieved by a combination of line-based scene analysis and real-time feature tracking. The proposed vision-based approach was evaluated in experiments with a real humanoid robot.

    Towards One Shot Learning by Imitation for Humanoid Robots

    No full text

    A Comprehensive Review on Autonomous Navigation

    The field of autonomous mobile robots has undergone dramatic advances over the past decades. Despite important milestones, several challenges remain to be addressed. Aggregating the achievements of the robotics community in survey papers is vital for keeping track of the current state of the art and the challenges that must be tackled in the future. This paper provides a comprehensive review of autonomous mobile robots, covering topics such as sensor types, mobile robot platforms, simulation tools, path planning and following, sensor fusion methods, obstacle avoidance, and SLAM. The motivation for this survey is twofold. First, the field of autonomous navigation evolves quickly, so writing survey papers regularly is crucial to keep the research community aware of its current status. Second, deep learning methods have revolutionized many fields, including autonomous navigation, so this paper also gives an appropriate treatment of the role of deep learning in that context. Future work and research gaps are also discussed.

    Multi-focal Vision and Gaze Control Improve Navigation Performance

