
    The Future of Humanoid Robots

    This book provides state-of-the-art scientific and engineering research findings and developments in the field of humanoid robotics and its applications. Humanoids are expected to change the way we interact with machines and to blend seamlessly into environments already designed for humans. The book contains chapters that aim to discover the future abilities of humanoid robots by presenting integrated research from a variety of scientific and engineering fields, such as locomotion, perception, adaptive behavior, human-robot interaction, neuroscience, and machine learning. It is designed to be accessible and practical, with an emphasis on information useful to those working in robotics, cognitive science, artificial intelligence, computational methods, and other fields directly or indirectly related to the development and use of future humanoid robots. The editor has extensive R&D experience, patents, and publications in humanoid robotics, and that experience is reflected in the content of the book.

    Physical human-robot collaboration: Robotic systems, learning methods, collaborative strategies, sensors, and actuators

    This article presents a state-of-the-art survey of the robotic systems, sensors, actuators, and collaborative strategies for physical human-robot collaboration (pHRC). It starts with an overview of robotic systems with cutting-edge sensor and actuator technologies suitable for pHRC operations, along with the intelligent assist devices employed in pHRC. Sensors, among the essential components for establishing communication between a human and a robotic system, are then surveyed; they supply the signals needed to drive the robotic actuators. The survey reveals that the design of new-generation collaborative robots and other intelligent robotic systems has paved the way for sophisticated learning techniques and control algorithms to be deployed in pHRC, and it identifies the components that must be considered for effective pHRC. Finally, the major advances are discussed, and open research directions and future challenges are presented.
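    As a concrete illustration of the kind of collaborative strategy such surveys cover, the sketch below implements a textbook admittance controller, in which a measured human interaction force is mapped to a commanded robot velocity through virtual mass-damper dynamics. This is a generic example under assumed parameters, not taken from the article; read_force and send_velocity are hypothetical device-interface stubs.

```python
# Minimal 1-DoF admittance-control sketch: a measured human interaction
# force is turned into a commanded robot velocity via the virtual
# dynamics M*a + D*v = F. Illustrative only; read_force() and
# send_velocity() are hypothetical device-interface stubs.

M = 2.0     # virtual mass [kg] (assumed)
D = 10.0    # virtual damping [N*s/m] (assumed)
DT = 0.002  # control period [s] (assumed)

def admittance_step(force: float, velocity: float) -> float:
    """Integrate the virtual dynamics one step and return the new velocity."""
    accel = (force - D * velocity) / M
    return velocity + accel * DT

def control_loop(read_force, send_velocity, steps: int = 5000) -> None:
    """Run the admittance loop: sense force, update velocity, command it."""
    v = 0.0
    for _ in range(steps):
        v = admittance_step(read_force(), v)
        send_velocity(v)
```

    Lower virtual mass and damping make the robot feel lighter to push; higher values make it steadier. This trade-off is one of the collaborative-strategy design choices the survey discusses.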

    A Framework for Interactive Teaching of Virtual Borders to Mobile Robots

    The increasing number of robots in home environments leads to an emerging coexistence between humans and robots. Robots undertake common tasks and support residents in their everyday life. People appreciate the presence of robots in their environment as long as they keep control over them. One important aspect is the control of a robot's workspace. We therefore introduce virtual borders to precisely and flexibly define the workspace of mobile robots. First, we propose a novel framework that allows a person to interactively restrict a mobile robot's workspace. To demonstrate the validity of this framework, a concrete implementation based on visual markers is presented. Afterwards, the mobile robot is capable of performing its tasks while respecting the new virtual borders. The approach is accurate, flexible, and less time-consuming than explicit robot programming. Hence, even non-experts are able to teach virtual borders to their robots, which is especially interesting in domains like vacuuming or service robots in home environments.
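    The paper's marker-based implementation is not reproduced here, but the runtime effect of a taught virtual border can be sketched as a point-in-polygon check on candidate navigation goals. The border polygon and goal below are invented for illustration, and ray casting is a standard technique rather than the authors' stated method.

```python
# Sketch of enforcing a taught virtual border: a navigation goal is
# accepted only if it lies inside the border polygon (standard
# ray-casting test). The polygon and goal are invented examples,
# not taken from the paper.

def inside_border(x: float, y: float, border: list[tuple[float, float]]) -> bool:
    """Ray-casting point-in-polygon test against the virtual border."""
    inside = False
    n = len(border)
    for i in range(n):
        x1, y1 = border[i]
        x2, y2 = border[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

border = [(0.0, 0.0), (4.0, 0.0), (4.0, 3.0), (0.0, 3.0)]  # taught workspace
goal = (2.0, 1.5)
if inside_border(*goal, border):
    print("goal accepted")  # forward to the navigation stack
else:
    print("goal rejected: outside the virtual border")
```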

    Supporting active and healthy aging with advanced robotics integrated in smart environment

    Technological advances in the robotics and ICT fields represent an effective way to address specific societal problems and to support ageing and independent living. One of the key factors for these technologies is the integration of service robotics to optimise social services and improve the quality of life of the elderly population. This chapter aims to underline the barriers in the state of the art, and the authors present concrete experiences of overcoming these barriers, gained at the RoboTown Living Lab of Scuola Superiore Sant'Anna within past and current projects. They analyse and discuss the results in order to give recommendations based on their experiences. Furthermore, this work highlights the trend of development from stand-alone solutions to cloud-computing architectures, and describes future research directions.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMIs) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people.

    Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain-activity sensors, electromyographic (EMG) muscular-activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture-recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera-based or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time.

    This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous IMU configurations to derive body kinematics in real time, converting arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern-recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis (LDA) and Support Vector Machines (SVMs). Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification; in real-time tests, an accuracy of 95.6% was achieved for 5-gesture classification.
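    As an illustration of the two classifiers the abstract names, the sketch below trains LDA and SVM models with scikit-learn. The synthetic feature matrix stands in for windowed MMG features; the feature count, window labels, and preprocessing are assumptions for the sketch, not the thesis's actual pipeline.

```python
# Illustrative 12-gesture classification with the two classifiers named
# in the abstract (LDA and SVM), via scikit-learn. Random data replaces
# windowed MMG features, so scores here sit near chance; the feature
# count (e.g. 4 features x 6 MMG channels) is an assumption.

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_windows, n_features, n_gestures = 1200, 24, 12
X = rng.normal(size=(n_windows, n_features))        # stand-in MMG features
y = rng.integers(0, n_gestures, size=n_windows)     # synthetic gesture labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf", C=1.0))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", clf.score(X_te, y_te))
```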
    It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis establishes that arm pose also changes the measured signal. It introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both sources of interference. Additionally, an improvement in orientation estimation and a new orientation-estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment.

    Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering that pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture-control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
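    For readers unfamiliar with IMU orientation estimation, the sketch below shows a standard complementary filter for a single pitch angle. It is offered only as background on the class of technique involved; it is not the new orientation-estimation algorithm the thesis proposes, whose details are not given in this abstract.

```python
# Standard complementary filter for IMU pitch: blend the integrated
# gyroscope rate (accurate short-term) with the accelerometer's gravity
# estimate (drift-free long-term). Background illustration only; NOT
# the thesis's proposed algorithm.

import math

ALPHA = 0.98  # gyro/accelerometer blend weight (assumed)
DT = 0.01     # sample period [s] (assumed)

def pitch_from_accel(ax: float, ay: float, az: float) -> float:
    """Pitch angle [rad] implied by the measured gravity direction."""
    return math.atan2(-ax, math.hypot(ay, az))

def complementary_step(pitch: float, gyro_y: float,
                       ax: float, ay: float, az: float) -> float:
    """One filter update: integrate gyro, correct toward the accel estimate."""
    return ALPHA * (pitch + gyro_y * DT) + (1.0 - ALPHA) * pitch_from_accel(ax, ay, az)
```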

    System Identification of Bipedal Locomotion in Robots and Humans

    The ability to perform a healthy walking gait can be compromised in numerous cases by gait-disorder-related pathologies, which can lead to partial or complete mobility loss and affect patients' quality of life. Wearable exoskeletons and active prostheses have been considered a key means of remedying this mobility loss. The control of such devices presents numerous challenges that are yet to be addressed. As opposed to fixed-trajectory control, real-time adaptive reference generation is likely to provide the wearer with more intent-driven control over the powered device. We propose a novel gait pattern generator for the control of such devices that takes advantage of the inter-joint coordination in human gait. Our proposed method puts the user in the control loop, as it maps the motion of healthy limbs to that of the affected one. To design such a control strategy, it is critical to understand the dynamics behind bipedal walking. We begin by studying the simple compass-gait walker and examine the well-known virtual constraints method of controlling bipedal robots as applied to the compass gait. In addition, we provide both the mechanical and control design of an affordable research platform for bipedal dynamic walking. We then extend the concept of virtual constraints to human locomotion, investigating the accuracy of predicting lower-limb joint angular positions and velocities from the motion of the other limbs. Data from nine healthy subjects performing specific locomotion tasks were collected and are made available online. Successful prediction of the hip, knee, and ankle joints was achieved in different scenarios. It was also found that the motion of the cane alone carries sufficient information to predict good trajectories for the lower limb in stair ascent, and better estimates were obtained using additional information from arm joints. We also explored the prediction of knee and ankle trajectories from the motion of the hip joints.
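    The inter-joint coordination idea of predicting one joint's trajectory from the motion of others can be illustrated with a simple regression. The sketch below fits ridge regression to synthetic hip, knee, and ankle angles; the linear model and the data are assumptions for illustration, not the thesis's estimator or its nine-subject dataset.

```python
# Sketch of the abstract's inter-joint prediction idea: regress knee and
# ankle angles on hip-joint angles. Synthetic gait-like signals replace
# the real dataset; the ridge model is an assumed stand-in estimator.

import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
t = np.linspace(0.0, 20.0, 2000)
hip = np.column_stack([np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])  # hip angles
# Knee/ankle as a noisy linear image of the hip motion (synthetic coupling).
knee_ankle = hip @ rng.normal(size=(2, 2)) + 0.05 * rng.normal(size=(2000, 2))

X_tr, X_te, y_tr, y_te = train_test_split(hip, knee_ankle,
                                          test_size=0.25, random_state=1)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("R^2 on held-out gait windows:", model.score(X_te, y_te))
```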