The Future of Humanoid Robots
This book presents state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. Humanoids are expected to change the way we interact with machines and to blend seamlessly into an environment already designed for humans. The chapters explore the future abilities of humanoid robots by presenting integrated research from various scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience and machine learning. The book is designed to be accessible and practical, with an emphasis on information useful to those working in robotics, cognitive science, artificial intelligence, computational methods and other fields directly or indirectly related to the development and use of future humanoid robots. The editor has extensive R&D experience, patents, and publications in the area of humanoid robotics, and that experience is reflected in the content of the book.
Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators
This article presents a state-of-the-art survey of the robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). It begins with an overview of robotic systems with cutting-edge sensor and actuator technologies suitable for pHRC operations, along with the intelligent assist devices employed in pHRC. Sensors, being essential components for establishing communication between a human and a robotic system, are surveyed; they supply the signals needed to drive the robotic actuators. The survey reveals that the design of new-generation collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the components that must be considered for effective pHRC. Finally, the major advances are discussed, and some research directions and future challenges are presented.
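The sensor-to-actuator pathway described above is often realized as an admittance controller, in which the measured human force drives a virtual mass-damper whose velocity is commanded to the robot. The sketch below is illustrative only; the single-axis form, the gains `M` and `D`, and the function name are assumptions, not taken from the survey.

```python
# Minimal single-axis admittance controller for pHRC (illustrative).
# The measured human force f drives a virtual mass-damper system
# M*a + D*v = f, whose velocity v is sent to the robot actuator.

def admittance_step(f, v, dt, M=2.0, D=10.0):
    """One integration step: returns the new commanded velocity."""
    a = (f - D * v) / M          # virtual mass-damper dynamics
    return v + a * dt

# Example: a sustained 5 N push converges toward v = f/D = 0.5 m/s.
v = 0.0
for _ in range(2000):            # 2 s of simulation at 1 kHz
    v = admittance_step(5.0, v, dt=0.001)
```

With damping `D`, a sustained push of `f` newtons settles at a velocity of `f/D`, which is what makes the robot feel compliant to the human collaborator.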
Servo control of an assistive robotic arm using an artificial stereo vision system and a gaze tracker
ABSTRACT
The recent increased interest in the use of serial robots to assist individuals with severe upper limb disability has brought up an important issue: the design of the right human-computer interaction (HCI). Indeed, so far, the control of assistive robotic arms (ARA) is often done using a joystick. For users who have a severe upper limb disability, this type of control is not a suitable option. In this master's thesis, a novel solution is presented to overcome this issue.
The developed solution is composed of two main components. The first one is a stereo vision system which is used to inform the ARA of the content of its workspace. It is important for the ARA to be aware of what is present in its workspace since it needs to avoid the unwanted objects while it is on its way to grasp the object of interest.
The second component is the actual HCI, where an eye tracker is used. The eye tracker was chosen since the eyes often remain functional even in patients with severe upper limb disability. However, low-cost, commercially available eye trackers are mainly designed for 2D applications with a screen, which is not intuitive for users, who must constantly watch a 2D reproduction of the scene on a screen instead of the 3D scene itself. In other words, the eye tracker needs to be made viable for use in a 3D environment without a screen, which was achieved in this master's thesis.
A stereo vision system, an eye tracker and an ARA are the main components of the developed system, named PoGARA, which is short for Point of Gaze Assistive Robotic Arm. Using PoGARA, during the tests, the user was able to reach and grasp an object in 80% of the trials, with an average time of 13.7 seconds without obstacles, 15.3 seconds with one obstacle and 16.3 seconds with two obstacles.
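Mapping a gaze-selected pixel to a 3D grasp target with a stereo camera reduces to standard triangulation on a rectified image pair. This is a generic sketch under ideal pinhole assumptions; the function and parameter names are illustrative and not taken from PoGARA.

```python
# Sketch: recovering a 3D target point from a rectified stereo pair,
# the kind of computation needed to map a gaze-selected pixel to a
# grasp target. Camera parameters below are illustrative.

def triangulate(u_left, u_right, v, f_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in metres for a pixel match (u_left, v)/(u_right, v)."""
    d = u_left - u_right                  # disparity in pixels
    if d <= 0:
        raise ValueError("non-positive disparity: point at or beyond infinity")
    Z = f_px * baseline_m / d             # depth from similar triangles
    X = (u_left - cx) * Z / f_px          # back-project into the camera frame
    Y = (v - cy) * Z / f_px
    return X, Y, Z

# Example: f = 700 px, 6 cm baseline, 30 px disparity -> Z = 1.4 m.
X, Y, Z = triangulate(350, 320, 240, f_px=700.0, baseline_m=0.06, cx=320.0, cy=240.0)
```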
Cable-driven parallel robot for transoral laser phonosurgery
Transoral laser phonosurgery (TLP) is a common surgical procedure in otolaryngology.
Currently, two techniques are commonly used: free beam and fibre delivery. For free beam
delivery, in combination with laser scanning techniques, accurate laser pattern scanning can
be achieved. However, a line-of-sight to the target is required. A suspension laryngoscope is
adopted to create a straight working channel for the scanning laser beam, which could
introduce lesions in the patient, and its manipulability and ergonomics are poor. For the fibre
delivery approach, a flexible fibre is used to transmit the laser beam, and the distal tip of the
laser fibre can be manipulated by a flexible robotic tool, avoiding the line-of-sight limitation.
However, the laser scanning function is currently lost in this approach, and its performance is
inferior to that of the laser scanning technique in the free beam approach.
A novel cable-driven parallel robot (CDPR), LaryngoTORS, has been developed for TLP.
Its curved laryngeal blade removes the need for a straight suspension laryngoscope, which is
expected to be less traumatic for the patient. Semi-autonomous free-path scanning can be
executed with high precision and high repeatability of the free path. The performance has
been verified in various bench and ex vivo tests. The technical
feasibility of the LaryngoTORS robot for TLP was considered and evaluated in this thesis.
The LaryngoTORS robot has demonstrated the potential to offer an acceptable and feasible
solution to be used in real-world clinical applications of TLP.
Furthermore, the LaryngoTORS robot can be combined with fibre-based optical biopsy
techniques. Experiments with probe-based confocal laser endomicroscopy (pCLE) and
hyperspectral fibre-optic sensing were performed. The LaryngoTORS robot demonstrates the
potential to be utilised for fibre-based optical biopsy of the larynx.
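For a cable-driven parallel robot such as LaryngoTORS, the basic inverse kinematics is geometric: each cable length is the distance from its fixed base anchor to its attachment point on the end-effector. The planar example below is a generic illustration with made-up geometry, not the LaryngoTORS design.

```python
import math

# Sketch of the standard inverse kinematics for cable-driven parallel
# robots (CDPRs): each cable length is the distance from its fixed base
# anchor a_i to the end-effector position p. The square frame below is
# an arbitrary planar example, not the LaryngoTORS geometry.

def cable_lengths(p, anchors):
    """p: end-effector position (x, y); anchors: list of base anchor points."""
    return [math.hypot(ax - p[0], ay - p[1]) for ax, ay in anchors]

# Square frame with 1 m sides; an effector at the centre sits
# 1/sqrt(2) m from every corner anchor.
anchors = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
lengths = cable_lengths((0.5, 0.5), anchors)
```

Commanding the winches to these lengths positions the effector; in a real CDPR the attachment offsets and platform rotation also enter the distance computation.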
An analysis on controlling humanoid robot arm using Robot Operating System (ROS)
Humanoid robots are extensively discussed today. The movement and manipulation tasks of humanoid robots are examined based on the mobility of platforms and the control of the arm. This project describes a robotic arm that is analogous to a human arm. Important parameters to be considered are reachability, stability and manipulability.
This thesis aims at adapting a humanoid robot arm to perform movement operations for various purposes. The proposed robot arm has 3 motors on the left arm and 3 motors on the right arm, for a total of 6 motors. Obstacle-aware operation is achieved with sensors such as an ultrasonic sensor. A BeagleBone Black, an open-source Linux-based controller board, acts as the main controller for the entire system. Research is also presented on implementing the robotic arm on the Robot Operating System (ROS) platform. ROS is preferred since it is modular, provides simple and easy-to-use development tools, offers good hardware support, and packages many ready-made algorithms.
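The ultrasonic-sensor-based operation described above can be reduced to a small decision function that maps a range reading to a motor command. The thresholds, command vocabulary, and function name below are assumptions for illustration; BeagleBone Black GPIO and motor-driver I/O are omitted.

```python
# Illustrative decision logic for ultrasonic-guided arm motion. Hardware
# I/O (BeagleBone Black GPIO, motor drivers) is omitted; the thresholds
# and command vocabulary are assumptions, not taken from the thesis.

STOP_CM = 10.0   # halt if an object is closer than this
SLOW_CM = 30.0   # reduce speed inside this range

def arm_command(distance_cm):
    """Map an ultrasonic range reading to a (command, duty-cycle) pair."""
    if distance_cm < STOP_CM:
        return ("stop", 0.0)
    if distance_cm < SLOW_CM:
        return ("move", 0.3)   # reduced duty cycle near obstacles
    return ("move", 1.0)       # full speed in free space
```

In a ROS setup, a node would publish the resulting command on a topic that the motor-driver node subscribes to, keeping the sensing and actuation modules decoupled.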
Toward Robots with Peripersonal Space Representation for Adaptive Behaviors
The abilities to adapt and act autonomously in an unstructured and
human-oriented environment are vital for the next generation of robots,
which aim to cooperate safely with humans. While this adaptability is
natural and feasible for humans, it is still very complex and challenging
for robots. Observations and findings from psychology and neuroscience
regarding the development of the human sensorimotor system can inform
the development of novel approaches to adaptive robotics.
Among these is the formation of the representation of the space closely
surrounding the body, the peripersonal space (PPS), from multisensory
sources such as vision, hearing, touch and proprioception, which helps to
facilitate human activities within their surroundings.
Taking inspiration from the virtual safety margin formed by the PPS representation
in humans, this thesis first constructs an equivalent model of the
safety zone for each body part of the iCub humanoid robot. This PPS layer
serves as a distributed collision predictor, which translates visually detected
objects approaching a robot's body parts (e.g., arm, hand) into the probabilities
of a collision between those objects and body parts. This leads to
adaptive avoidance behaviors in the robot via an optimization-based reactive
controller. Notably, this visual reactive control pipeline can also seamlessly
incorporate tactile input to guarantee safety in both pre- and post-collision
phases in physical Human-Robot Interaction (pHRI). Concurrently, the controller
is also able to take into account multiple targets (of manipulation reaching
tasks) generated by a multiple-Cartesian-point planner. All components,
namely the PPS, the multi-target motion planner (for manipulation reaching
tasks), the reaching-with-avoidance controller and the human-centred
visual perception, are combined harmoniously to form a hybrid control
framework designed to provide safety for the robot's interactions in a
cluttered environment shared with human partners.
Later, motivated by the development of manipulation skills in infants, in
which the multisensory integration is thought to play an important role, a
learning framework is proposed to allow a robot to learn the processes of
forming sensory representations, namely visuomotor and visuotactile, from
their own motor activities in the environment. Both multisensory integration
models are constructed with Deep Neural Networks (DNNs) in such a
way that their outputs are represented in motor space to facilitate the robot's
subsequent actions.
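The PPS layer's translation of approaching objects into collision probabilities, and the reactive controller it feeds, can be sketched as a distance- and velocity-dependent activation that scales a repulsive velocity. The logistic form and all parameters below are illustrative assumptions, not the iCub implementation.

```python
import math

# Sketch of a peripersonal-space (PPS) style collision predictor: an
# object's distance to a body part and its approach speed are mapped to
# a collision probability, which then scales a repulsive velocity for a
# reactive controller. All parameters are illustrative assumptions.

def collision_probability(distance_m, approach_speed_mps,
                          d_half=0.2, steepness=20.0, tau=0.5):
    """Logistic falloff with distance, boosted for short times to contact."""
    base = 1.0 / (1.0 + math.exp(steepness * (distance_m - d_half)))
    if approach_speed_mps > 0:                 # object moving toward the body
        ttc = distance_m / approach_speed_mps  # time to contact
        base = min(1.0, base * (1.0 + tau / max(ttc, 1e-6)))
    return base

def repulsive_velocity(p_collision, direction, v_max=0.5):
    """Avoidance velocity away from the object, scaled by the probability."""
    return tuple(v_max * p_collision * c for c in direction)
```

A reactive controller would add this repulsive term to the task-space reaching velocity, so avoidance emerges only when the PPS activation is high.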
User Intent Detection and Control of a Soft Poly-Limb
abstract: This work presents the integration of user intent detection and control in the development of the fluid-driven, wearable, continuum Soft Poly-Limb (SPL). The SPL utilizes the numerous traits of soft robotics in a novel approach to providing safe and compliant mobile manipulation assistance to healthy and impaired users. This wearable system equips the user with an additional limb made of soft materials that can be controlled to produce complex three-dimensional motion in space, like its biological counterparts with hydrostatic muscles. Similar to an elephant trunk, the SPL is able to manipulate objects using various end effectors, such as suction adhesion or a soft grasper, and can also wrap its entire length around objects for manipulation. User control of the limb is demonstrated using multiple user intent detection modalities. Further, the performance of the SPL is studied by testing its capability to interact safely and closely with a user through a spatial mobility test. Finally, the limb's ability to assist the user is explored through multitasking scenarios and pick-and-place tests with varying mounting locations of the arm around the user's body. The results of these assessments demonstrate the SPL's ability to safely interact with the user while exhibiting promising performance in assisting with a wide variety of tasks, in both work and general living scenarios.
Masters Thesis, Biomedical Engineering, 201
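One common intent-detection modality for commanding a wearable limb is thresholding a rectified, smoothed muscle-activity envelope. The sketch below is a generic illustration of that idea; the window size, threshold, and function names are assumptions, not the SPL's actual detection pipeline.

```python
# Generic sketch of threshold-based user intent detection, one common
# modality for commanding a wearable limb. The signal stands in for a
# rectified, smoothed EMG envelope; the threshold rule and parameters
# are assumptions, not the SPL's actual pipeline.

def moving_average(signal, window=5):
    """Causal smoothing of a rectified sensor signal."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        out.append(sum(signal[lo:i + 1]) / (i + 1 - lo))
    return out

def detect_intent(signal, threshold=0.5, window=5):
    """True once the smoothed envelope crosses the calibrated threshold."""
    envelope = moving_average([abs(s) for s in signal], window)
    return any(v > threshold for v in envelope)
```

The smoothing window rejects brief spikes, so only a sustained muscle activation is interpreted as a command to the limb.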