Research regarding development and application of tactile sensing for robots
Degree system: new; report no. Ko 3063; degree type: Doctor of Engineering; conferral date: 2010/2/25; Waseda University degree record no. Shin 532
Center of Pressure Feedback for Controlling the Walking Stability of Bipedal Robots Using a Fuzzy Logic Controller
This paper presents sensor-based stable walking for bipedal robots using force-sensitive resistor (FSR) sensors. To achieve walking stability on uneven terrain, FSR sensors are used as feedback to evaluate the stability of the bipedal robot instead of measuring the center of pressure (CoP) directly. In this work, the CoP computed from four FSR sensors placed on each foot-pad is used to evaluate walking stability: the robot's CoP position provides an indication of walk stability. The CoP position information is further evaluated by a fuzzy logic controller (FLC) to generate appropriate offset angles that restore a stable posture. Moreover, an FLC designed around CoP stability regions and a stable compliance control scheme are introduced. Finally, the performance of the proposed methods was verified on an 18-degrees-of-freedom (DOF) kid-size bipedal robot.
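The CoP evaluation described above reduces to a force-weighted average of the sensor positions. The following is a minimal sketch of that computation; the sensor layout, foot-pad dimensions, and force values are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: estimating the center of pressure (CoP) from four
# FSR readings placed at known positions on a foot-pad.

def center_of_pressure(forces, positions):
    """Force-weighted average of sensor positions -> (x, y) CoP in metres."""
    total = sum(forces)
    if total == 0:
        return None  # foot not in contact
    x = sum(f * p[0] for f, p in zip(forces, positions)) / total
    y = sum(f * p[1] for f, p in zip(forces, positions)) / total
    return (x, y)

# Four FSRs at the corners of an assumed 10 cm x 16 cm foot-pad (metres).
POSITIONS = [(-0.05, 0.08), (0.05, 0.08), (-0.05, -0.08), (0.05, -0.08)]

# Equal left/right load but a heavier heel load: the CoP stays on the
# centre line in x and shifts toward the heel in y.
cop = center_of_pressure([2.0, 2.0, 6.0, 6.0], POSITIONS)
```

An FLC such as the one in the paper would then map this CoP position (e.g., which stability region it falls in) to corrective offset angles for the joints.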
Overcoming barriers and increasing independence: service robots for elderly and disabled people
This paper discusses the potential for service robots to overcome barriers and increase the independence of elderly and disabled people. It includes a brief overview of the existing uses of service robots by disabled and elderly people, surveys advances in technology that will make new uses possible, and provides suggestions for some of these new applications. The paper also considers the design and other conditions to be met for user acceptance, discusses the complementarity of assistive service robots and personal assistance, and considers the types of applications and users for which service robots are and are not suitable.
Whole-Body Teleoperation of Humanoid Robots
This thesis aims to investigate systems and tools for teleoperating a humanoid robot. Robot teleoperation is crucial for sending and controlling robots in environments that are dangerous or inaccessible for humans (e.g., disaster-response scenarios, contaminated environments, or extraterrestrial sites). The term teleoperation most commonly refers to direct and continuous control of a robot. In this case, the human operator guides the motion of the robot with her/his own physical motion or through some physical input device. One of the main challenges is to control the robot in a way that guarantees its dynamic balance while trying to follow the human references. In addition, the human operator needs some feedback about the state of the robot and its work site through remote sensors in order to comprehend the situation or feel physically present at the site, producing effective robot behaviors. Complications arise when the communication network is non-ideal. In this case, the commands from human to robot, together with the feedback from robot to human, can be delayed. These delays can be very disturbing for the human operator, who cannot teleoperate their robot avatar in an effective way.

Another crucial point to consider when setting up a teleoperation system is the large number of parameters that have to be tuned to effectively control the teleoperated robots. Machine-learning approaches and stochastic optimizers can be used to automate the learning of some of the parameters.

In this thesis, we propose a teleoperation system that has been tested on the humanoid robot iCub. We used an inertial-technology-based motion-capture suit as the input device to control the humanoid and a virtual reality headset connected to the robot cameras to get some visual feedback. We first translated the human movements into equivalent robot ones by developing a motion-retargeting approach that achieves human-likeness while trying to ensure the feasibility of the transferred motion. We then implemented a whole-body controller to enable the robot to track the retargeted human motion. The controller was later optimized in simulation to achieve good tracking of the whole-body reference movements by recurring to a multi-objective stochastic optimizer, which allowed us to find robust solutions working on the real robot in a few trials.

To teleoperate walking motions, we implemented a higher-level teleoperation mode in which the user can use a joystick to send reference commands to the robot. We integrated this setting into the teleoperation system, which allows the user to switch between the two different modes.

A major problem preventing the deployment of such systems in real applications is the presence of communication delays between the human input and the feedback from the robot: even a few hundred milliseconds of delay can irremediably disturb the operator, let alone a few seconds. To overcome these delays, we introduced a system in which a humanoid robot executes commands before it actually receives them, so that the visual feedback appears to be synchronized to the operator, whereas the robot executed the commands in the past. To do so, the robot continuously predicts future commands by querying a machine-learning model that is trained on past trajectories and conditioned on the last received commands.
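The delay-compensation idea (the robot keeps executing predicted commands so the operator's feedback appears synchronized) can be illustrated with a toy predictor. A simple linear extrapolator stands in here for the learned model the thesis describes; the class and method names are illustrative assumptions:

```python
# Toy sketch of delay compensation by command prediction: the robot
# extrapolates future operator commands from the (delayed) ones it has
# already received, instead of waiting for them to arrive.

from collections import deque

class CommandPredictor:
    def __init__(self, history_len=10):
        # Keep only the most recent commands.
        self.history = deque(maxlen=history_len)

    def observe(self, command):
        """Store the latest (possibly delayed) operator command."""
        self.history.append(command)

    def predict(self, steps_ahead):
        """Extrapolate a future command from the last observed velocity."""
        if len(self.history) < 2:
            return self.history[-1] if self.history else 0.0
        v = self.history[-1] - self.history[-2]  # per-step change
        return self.history[-1] + v * steps_ahead

pred = CommandPredictor()
for c in [0.0, 0.1, 0.2, 0.3]:  # commands received with network delay
    pred.observe(c)
# With ~2 control steps of delay, the robot executes pred.predict(2)
# now, so its camera feedback reaches the operator roughly in sync.
```

In the actual system, the extrapolator would be replaced by a model trained on past trajectories and conditioned on the last received commands, as the abstract describes.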
Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control
As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains like elderly care, collaborative manufacturing, and collaborative manipulation are considered the need of the hour, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will consist of a blended workforce with a symbiotic relationship between human and robot partners working collaboratively. This thesis attempts to address some of the research challenges in enabling human-robot collaboration. In particular, the holistic perception of a human partner, continuously communicating his intentions and needs in real time to a robot partner, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. On the other hand, leveraging assistance from a human partner can lead to improved human-robot collaboration. In this direction, we attempt to methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.
Legged Robots for Object Manipulation: A Review
Legged robots can have a unique role in manipulating objects in dynamic,
human-centric, or otherwise inaccessible environments. Although most legged
robotics research to date typically focuses on traversing these challenging
environments, many legged platform demonstrations have also included "moving an
object" as a way of doing tangible work. Legged robots can be designed to
manipulate a particular type of object (e.g., a cardboard box, a soccer ball,
or a larger piece of furniture), by themselves or collaboratively. The
objective of this review is to collect and learn from these examples, to both
organize the work done so far in the community and highlight interesting open
avenues for future work. This review categorizes existing works into four main
manipulation methods: object interactions without grasping, manipulation with
walking legs, dedicated non-locomotive arms, and legged teams. Each method has
different design and autonomy features, which are illustrated by available
examples in the literature. Based on a few simplifying assumptions, we further
provide quantitative comparisons for the range of possible relative sizes of
the manipulated object with respect to the robot. Taken together, these
examples suggest new directions for research in legged robot manipulation, such
as multifunctional limbs, terrain modeling, or learning-based control, to
support a number of new deployments in challenging indoor/outdoor scenarios in
warehouses/construction sites, preserved natural areas, and especially for home
robotics.

Comment: Preprint of the paper submitted to Frontiers in Mechanical Engineering.
Research on a Social-Support Humanoid that Walks Alongside the Elderly
University of Tsukuba, 201
Towards a Smart Semi-Active Prosthetic Leg: Preliminary Assessment and Testing
This paper presents the development of a semi-active prosthetic knee, which can work in both active and passive modes based on the energy required during the gait cycle of various activities of daily living (ADLs). The prosthetic limb is equipped with various sensors to measure the kinematic and kinetic parameters of both limbs. The prosthetic knee is designed to be back-drivable in passive mode, offering a potential for energy regeneration when there is negative energy across the knee joint. A preliminary test has been performed on a transfemoral amputee in passive mode to provide some insight into the amputee/prosthesis interaction and performance with the designed prosthetic knee.
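The regeneration condition mentioned above has a simple form: the knee absorbs (and a back-drivable joint could potentially recover) energy whenever instantaneous joint power P = tau * omega is negative. A minimal sketch, with purely illustrative torque and velocity samples:

```python
# Hypothetical sketch: detecting gait phases where a back-drivable knee
# does negative work (P = torque * angular velocity < 0), i.e., where
# energy regeneration could occur. Sample values are illustrative only.

def joint_power(torque, velocity):
    """Instantaneous mechanical power at the joint (W)."""
    return torque * velocity

def regeneration_phases(torques, velocities):
    """Indices of samples where the joint absorbs energy (P < 0)."""
    return [i for i, (t, w) in enumerate(zip(torques, velocities))
            if joint_power(t, w) < 0]

taus   = [10.0, 5.0, -8.0, -3.0]  # knee torque samples, N*m
omegas = [1.0, -0.5, 0.5, -0.2]   # knee angular velocity samples, rad/s
phases = regeneration_phases(taus, omegas)  # -> [1, 2]
```

A semi-active controller could switch the knee to its passive, back-drivable mode during exactly these samples and to active mode elsewhere.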
Development of a Robotic Touch Foot Sensor for a 2D Walking Robot for Studying Rough-Terrain Locomotion
Many researchers have developed biped walking robots with excellent techniques and advanced technologies. However, ordinary locomotion is executed on even terrain such as flat surfaces: much research focuses on image processing, programming techniques, and newly proposed devices, but not much work addresses proper sensors for robotic feet. For robot gaits on unknown ground, one of the most significant requirements is that the robot assess its balance by itself. A new sensor was studied for the robotic foot in order to allow walking on rough ground by sensing variations in the pressure profile on the foot. The primary purpose of this study is to provide a proper foot sensor for Jaywalker, a robot developed by the Intelligent Systems and Automation Laboratory that provides a good platform for the study of rough-terrain walking. The sensor to be developed for the robot must be applicable to any structure, flexible, and able to withstand impulsive forces, as well as having a reasonable manufacturing cost. The main concept of this new type of sensor is a special application and design of inductive touch sensors. The Propeller microprocessor plays an important role in the control of the new sensor. The results of this study indicate that the new inductive sensor can be useful as a robotic foot sensor for the study of rough-terrain walking.
Do robots outperform humans in human-centered domains?
The incessant progress of robotic technology and the rationalization of human manpower induce high expectations in society, but also resentment and even fear. In this paper, we present a quantitative normalized comparison of performance to shed light on the pressing question, "How close is the current state of humanoid robotics to outperforming humans in their typical functions (e.g., locomotion, manipulation) and their underlying structures (e.g., actuators/muscles) in human-centered domains?" This is the most comprehensive comparison in the literature so far. Most state-of-the-art robotic structures required for visual, tactile, or vestibular perception outperform human structures at the cost of slightly higher mass and volume. Electromagnetic and fluidic actuation outperform human muscles w.r.t. speed, endurance, force density, and power density, excluding components for energy storage and conversion. Artificial joints and links can compete with the human skeleton. In contrast, the comparison of locomotion functions shows that robots are trailing behind in energy efficiency, operational time, and transportation costs. Robots are capable of obstacle negotiation, object manipulation, swimming, playing soccer, and vehicle operation. Despite the impressive advances of humanoid robots in the last two decades, current robots do not yet reach the dexterity and versatility needed to cope with more complex manipulation and locomotion tasks (e.g., in confined spaces). We conclude that state-of-the-art humanoid robotics is far from matching the dexterity and versatility of human beings. Despite the outperforming technical structures, robot functions are inferior to human ones, even with tethered robots that could place heavy auxiliary components off-board. The persistent advances in robotics let us anticipate that this gap will diminish.
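One common normalized locomotion metric in such comparisons is the dimensionless cost of transport, CoT = P / (m * g * v), which makes platforms of different mass and speed directly comparable. A hedged sketch; the power, mass, and speed figures below are illustrative placeholders, not values from the paper:

```python
# Illustrative sketch: the dimensionless cost of transport (CoT), a
# standard normalized metric for comparing locomotion energy efficiency
# across humans and robots of different sizes.

G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w, mass_kg, speed_ms):
    """Energy cost of moving unit weight over unit distance (dimensionless)."""
    return power_w / (mass_kg * G * speed_ms)

# Placeholder figures: a lighter, faster walker vs. a heavier-powered robot.
human = cost_of_transport(power_w=300.0, mass_kg=70.0, speed_ms=1.4)
robot = cost_of_transport(power_w=600.0, mass_kg=50.0, speed_ms=1.0)
# A lower CoT means more efficient locomotion; here the robot's is higher.
```

Normalizing by weight times speed is what lets a single number capture the "robots trail behind in energy efficiency" comparison across very different platforms.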
- …