
    Adaptive, fast walking in a biped robot under neuronal control and learning

    Human walking is a dynamic, partly self-stabilizing process relying on the interaction of the biomechanical design with its neuronal control. The coordination of this process is a very difficult problem, and it has been suggested that it involves a hierarchy of levels, where the lower ones, e.g., interactions between muscles and the spinal cord, are largely autonomous, and where higher-level control (e.g., cortical) arises only pointwise, as needed. This requires an architecture of several nested sensorimotor loops in which the walking process provides feedback signals to the walker's sensory systems, which can be used to coordinate its movements. To complicate the situation, at a maximal walking speed of more than four leg-lengths per second, the cycle period available to coordinate all these loops is rather short. In this study we present a planar biped robot that uses the design principle of nested loops to combine the self-stabilizing properties of its biomechanical design with several levels of neuronal control. Specifically, we show how to adapt control by including online learning mechanisms based on simulated synaptic plasticity. This robot can walk at high speed (>3.0 leg-lengths/s), self-adapting to minor disturbances and reacting robustly to abruptly induced gait changes. At the same time, it can learn to walk on different terrains, requiring only a few learning experiences. This study shows that the tight coupling of physical with neuronal control, guided by sensory feedback from the walking pattern itself and combined with synaptic learning, may be a way forward to better understanding and solving coordination problems in other complex motor tasks.
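The online learning the abstract describes (simulated synaptic plasticity adapting control from sensory feedback) can be sketched as a correlation-based weight update; the signal names, learning rate, and toy trace below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of correlation-based synaptic plasticity for gait
# adaptation. "predictive" stands for an early sensory cue and
# "reflex" for a late error signal; both names are hypothetical.

def adapt_reflex_gain(predictive, reflex, w, lr=0.01):
    """Grow the synaptic weight w of a predictive input while it
    co-occurs with the reflexive error signal, so the predictive
    pathway learns to pre-empt the error."""
    dw = lr * predictive * reflex  # Hebbian-style correlation term
    return w + dw

# Toy run: a constant predictive cue paired with a decaying error.
w = 0.0
for e in [1.0, 0.8, 0.5, 0.2, 0.0]:
    w = adapt_reflex_gain(1.0, e, w)
print(round(w, 3))  # → 0.025; growth stops once the error vanishes
```

Once the error signal stays at zero, the weight freezes, which is the desired "few learning experiences" behaviour: adaptation happens only while the gait is actually failing.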

    Review of Anthropomorphic Head Stabilisation and Verticality Estimation in Robots

    In many walking, running, flying, and swimming animals, including mammals, reptiles, and birds, the vestibular system plays a central role in verticality estimation and is often associated with a head-stabilisation (in rotation) behaviour. Head stabilisation, in turn, subserves gaze stabilisation, postural control, visual-vestibular information fusion, and spatial awareness via the active establishment of a quasi-inertial frame of reference. Head stabilisation helps animals cope with the computational consequences of angular movements that complicate the reliable estimation of the vertical direction. We suggest that this strategy could also benefit free-moving robotic systems, such as locomoting humanoid robots, which are typically equipped with inertial measurement units. Free-moving robotic systems could gain the full benefits of inertial measurements if the measurement units were placed on independently orientable platforms, such as human-like heads. We illustrate these benefits by analysing recent humanoid robot designs and control approaches.
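Verticality estimation from an IMU of the kind discussed above is commonly done by fusing gyroscope integration with the accelerometer's gravity reading; the complementary filter below is a generic sketch with assumed sensor names, not a method taken from the reviewed robots.

```python
import math

# Illustrative complementary filter for pitch (verticality) estimation
# from a body-mounted IMU. All parameter values are assumptions.

def complementary_pitch(pitch, gyro_rate, accel_x, accel_z, dt, alpha=0.98):
    """Fuse the fast-but-drifting gyro integral with the slow-but-
    absolute gravity direction from the accelerometer."""
    pitch_gyro = pitch + gyro_rate * dt        # integrate angular rate
    pitch_acc = math.atan2(accel_x, accel_z)   # gravity direction
    return alpha * pitch_gyro + (1 - alpha) * pitch_acc

# With the sensor held still (zero rate, gravity along z), a wrong
# initial estimate decays toward the accelerometer's zero-pitch reading.
p = 0.5
for _ in range(200):
    p = complementary_pitch(p, 0.0, 0.0, 9.81, dt=0.01)
print(p < 0.01)  # → True
```

An independently stabilised head, as the review argues, keeps `accel_x`/`accel_z` dominated by gravity rather than by locomotion-induced accelerations, which is exactly when this kind of fusion is most reliable.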

    Enabling Force Sensing During Ground Locomotion: A Bio-Inspired, Multi-Axis, Composite Force Sensor Using Discrete Pressure Mapping

    This paper presents a new force sensor design approach that maps the local sampling of pressure inside a composite polymeric footpad to forces in three axes, designed for running robots. Conventional multiaxis force sensors made of heavy metallic materials tend to be too bulky and heavy to be fitted in the feet of legged robots, and vulnerable to inertial noise upon high acceleration. To satisfy the requirements for high-speed running, which include mitigating high impact forces, protecting the sensors from ground collision, and enhancing traction, these stiff sensors should be paired with additional layers of durable, soft materials; but this also degrades the integrity of the foot structure. The proposed foot sensor is manufactured as a monolithic, composite structure composed of an array of barometric pressure sensors completely embedded in a protective polyurethane rubber layer. This composite architecture allows the layers to provide compliance and traction for foot collision while the deformation and the sampled pressure distribution of the structure can be mapped into a three-axis force measurement. Normal and shear forces can be measured upon contact with the ground, which causes the footpad to deform and change the readings of the individual pressure sensors in the array. A one-time training process using an artificial neural network is all that is necessary to relate the normal and shear forces with the multiaxis foot sensor output. The results show that the sensor can predict normal forces in the Z-axis up to 300 N with a root mean squared error of 0.66% and up to 80 N in the X- and Y-axis. The experimental results demonstrate a proof-of-concept for a lightweight, low-cost, yet robust footpad sensor suitable for use in legged robots undergoing ground locomotion.
    Funding: United States Defense Advanced Research Projects Agency, Maximum Mobility and Manipulation (M3) Program; Singapore Agency for Science, Technology and Research.
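The one-time calibration idea above — learn a map from the pressure-sensor array to (Fx, Fy, Fz) — can be illustrated with a linear least-squares fit as a simplified stand-in for the paper's artificial neural network; the taxel count, data, and "true" map below are synthetic assumptions.

```python
import numpy as np

# Simplified stand-in for the footpad calibration: fit a linear map
# from an 8-taxel pressure vector to a 3-axis force vector. The paper
# trains a neural network; the calibration workflow is analogous.

rng = np.random.default_rng(0)
W_true = rng.normal(size=(3, 8))          # unknown pressure-to-force map
P = rng.uniform(0, 1, size=(500, 8))      # training pressure samples
F = P @ W_true.T                          # corresponding ground-truth forces

# One-time training: solve F ≈ P @ W_fit in the least-squares sense.
W_fit, *_ = np.linalg.lstsq(P, F, rcond=None)

# Deployment: convert a fresh pressure reading into a force estimate.
p_new = rng.uniform(0, 1, size=8)
f_pred = p_new @ W_fit
f_true = W_true @ p_new
print(np.allclose(f_pred, f_true))  # → True (noiseless linear data)
```

With real sensor data the relation is nonlinear and noisy, which is why the authors use a neural network instead of a linear solve, but the train-once / evaluate-per-footfall structure is the same.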

    Embedding runtime verification post-deployment for real-time health management of safety-critical systems

    As cyber-physical systems increase in both complexity and criticality, formal methods have gained traction for design-time verification of safety properties. A lightweight formal method, runtime verification (RV), embeds checks necessary for safety-critical system health management; however, these techniques have been slow to appear in practice despite repeated calls by both industry and academia to leverage them. Additionally, the state-of-the-art in RV lacks a best practice approach when a deployed system requires increased flexibility due to a change in mission, or in response to an emergent condition not accounted for at design time. Human-robot interaction necessitates stringent safety guarantees to protect humans sharing the workspace, particularly in hazardous environments. For example, Robonaut2 (R2) developed an emergent fault while deployed to the International Space Station. Possibly-inaccurate actuator readings trigger the R2 safety system, preventing further motion of a joint until a ground-control operator determines the root-cause and initiates proper corrective action. Operator time is scarce and expensive; when waiting, R2 is an obstacle instead of an asset. We adapt the Realizable, Responsive, Unobtrusive Unit (R2U2) RV framework for resource-constrained environments. We retrofit the R2 motor controller, embedding R2U2 within the remaining resources of the Field-Programmable Gate Array (FPGA) controlling the joint actuator. We add online, stream-based, real-time system health monitoring in a provably unobtrusive way that does not interfere with the control of the joint. We design and embed formal temporal logic specifications that disambiguate the emergent faults and enable automated corrective actions. We overview the challenges and techniques for formally specifying behaviors of an existing command and data bus. We present our specification debugging, validation, and refinement steps. 
We demonstrate success in the Robonaut2 case study, then detail effective techniques and lessons learned from adding RV with real-time fault disambiguation under the constraints of a deployed system.
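The kind of stream-based temporal-logic monitoring described above can be sketched with a bounded-response property — "every fault flag is answered by a corrective action within k steps" — as a hypothetical, simplified stand-in for the R2U2 specifications (the property, trace, and bound are illustrative, not from the R2 deployment).

```python
from collections import deque

# Hypothetical online monitor for the bounded-response property
# G(fault -> F[0,k] corrective): a fault obligation must be
# discharged by a corrective action within k steps.

def monitor(stream, k=3):
    """Yield one verdict per step: False once any fault goes
    unanswered for k or more steps."""
    pending = deque()                       # timestamps of open obligations
    for t, (fault, corrective) in enumerate(stream):
        if corrective:
            pending.clear()                 # all open obligations discharged
        if fault:
            pending.append(t)               # new obligation opened
        # Violation iff the oldest obligation is at least k steps old.
        yield not (pending and t - pending[0] >= k)

trace = [(True, False), (False, False), (False, True), (True, False),
         (False, False), (False, False), (False, False)]
print(list(monitor(trace)))
# → [True, True, True, True, True, True, False]
# The fault at t=3 is never answered, so the verdict flips at t=6.
```

Like R2U2's stream semantics, the monitor consumes one observation per step and emits a verdict immediately, so it can run alongside the controller rather than post hoc.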

    Humanoid Robots

    For many years, humans have tried in many ways to recreate the complex mechanisms that form the human body. This task is extremely complicated, and the results are not yet fully satisfactory. However, with growing technological advances grounded in theoretical and experimental research, we have managed, to some extent, to copy or imitate some systems of the human body. This research aims not only to create humanoid robots, most of them autonomous systems, but also to deepen our knowledge of the systems that form the human body, with possible applications in rehabilitation technology, drawing together studies related not only to robotics but also to biomechanics, biomimetics, and cybernetics, among other areas. This book presents a series of studies inspired by this ideal, carried out by researchers worldwide, that analyse and discuss diverse subjects related to humanoid robots. The contributions explore aspects of robotic hands, learning, language, vision, and locomotion.