
    Stanford Aerospace Robotics Laboratory research overview

    Over the last ten years, the Stanford Aerospace Robotics Laboratory (ARL) has developed a hardware facility in which a number of space robotics issues have been, and continue to be, addressed. This paper reviews two of the current ARL research areas: navigation and control of free-flying space robots, and modelling and control of extremely flexible space structures. The ARL has designed and built several semi-autonomous free-flying robots that perform numerous tasks in a zero-gravity, drag-free, two-dimensional environment. It is envisioned that future generations of these robots will be part of a human-robot team, in which the robots will operate under the task-level commands of astronauts. To make this possible, the ARL has developed a graphical user interface (GUI) with an intuitive object-level motion-direction capability. Using this interface, the ARL has demonstrated autonomous navigation, intercept and capture of moving and spinning objects, object transport, multiple-robot cooperative manipulation, and simple assemblies from both free-flying and fixed bases. The ARL has also built a number of experimental test beds on which the modelling and control of flexible manipulators has been studied. Early ARL experiments in this arena demonstrated for the first time the capability to control the end-point position of both single-link and multi-link flexible manipulators using end-point sensing. Building on these accomplishments, the ARL has been able to control payloads with unknown dynamics at the end of a flexible manipulator, and to achieve high-performance control of a multi-link flexible manipulator.
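
    The end-point sensing result can be illustrated with a toy model. The sketch below is only an illustration, not the ARL controller: it treats a flexible link as a rigid hub and a tip mass coupled by a torsional spring, applies the proportional action to the measured tip (end-point) angle, and adds damping at the hub; all parameters and gains are invented for the example.

        import numpy as np

        J_hub, J_tip = 0.05, 0.02     # hub and tip inertias (kg m^2), assumed
        k_flex, c_flex = 2.0, 0.05    # torsional stiffness / damping of the link, assumed
        Kp, Kd = 1.0, 0.6             # gain on the tip angle, damping on the hub velocity
        dt, T = 1e-3, 5.0
        theta_ref = 1.0               # desired end-point (tip) angle in rad

        q = np.zeros(4)               # state: [theta_hub, theta_tip, w_hub, w_tip]
        for _ in range(int(T / dt)):
            th_h, th_t, w_h, w_t = q
            # End-point sensing: proportional action on the measured tip angle,
            # with colocated damping on the hub velocity for stability.
            tau = Kp * (theta_ref - th_t) - Kd * w_h
            spring = k_flex * (th_h - th_t) + c_flex * (w_h - w_t)
            q = q + dt * np.array([w_h, w_t, (tau - spring) / J_hub, spring / J_tip])

        print(f"tip angle after {T} s: {q[1]:.3f} rad (target {theta_ref})")

    Closing the loop on the tip measurement rather than the joint angle is what removes the end-point bias that a deflected link would otherwise introduce under load.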

    Learning Task Constraints from Demonstration for Hybrid Force/Position Control

    We present a novel method for learning hybrid force/position control from demonstration. We learn a dynamic constraint frame aligned to the direction of desired force using Cartesian Dynamic Movement Primitives. In contrast to approaches that utilize a fixed constraint frame, our approach easily accommodates tasks with rapidly changing task constraints over time. We activate only one degree of freedom for force control at any given time, ensuring motion is always possible orthogonal to the direction of desired force. Since we utilize demonstrated forces to learn the constraint frame, we are able to compensate for forces not detected by methods that learn only from the demonstrated kinematic motion, such as frictional forces between the end-effector and the contact surface. We additionally propose novel extensions to the Dynamic Movement Primitive (DMP) framework that encourage robust transition from free-space motion to in-contact motion in spite of environment uncertainty. We incorporate force feedback and a dynamically shifting goal to reduce forces applied to the environment and retain stable contact while enabling force control. Our methods exhibit low impact forces on contact and low steady-state tracking error. Comment: Under review.
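
    As a rough illustration of the hybrid force/position idea, the sketch below activates a single force-controlled degree of freedom along a unit vector n and servos position only in the orthogonal complement; in the paper, n would come from the learned, time-varying constraint frame, whereas here the function, gains, and signals are all invented for the example.

        import numpy as np

        def hybrid_command(x, dx, x_des, f_meas, f_des_mag, n,
                           Kp=300.0, Kd=30.0, Kf=0.8):
            """One force-controlled DOF along unit vector n; motion control elsewhere."""
            n = n / np.linalg.norm(n)
            Sf = np.outer(n, n)        # selects the force-controlled direction
            Sp = np.eye(3) - Sf        # selects the motion-controlled subspace
            f_motion = Sp @ (Kp * (x_des - x) - Kd * dx)   # PD servo off-axis
            f_err = f_des_mag - float(n @ f_meas)          # scalar force error along n
            f_force = n * (f_des_mag + Kf * f_err)         # feedforward plus P force term
            return f_motion + f_force

        # Example: press with about 5 N along -z while tracking a target in the x-y plane
        cmd = hybrid_command(x=np.zeros(3), dx=np.zeros(3),
                             x_des=np.array([0.05, 0.0, 0.0]),
                             f_meas=np.array([0.0, 0.0, -4.0]),
                             f_des_mag=-5.0,
                             n=np.array([0.0, 0.0, 1.0]))
        print(cmd)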

    Robot Impedance Control and Passivity Analysis with Inner Torque and Velocity Feedback Loops

    Impedance control is a well-established technique to control interaction forces in robotics. However, real implementations of impedance control with an inner loop may suffer from several limitations. Although common practice in designing nested control systems is to maximize the bandwidth of the inner loop to improve tracking performance, it may not be the most suitable approach when a certain range of impedance parameters has to be rendered. In particular, it turns out that the viable range of stable stiffness and damping values can be strongly affected by the bandwidth of the inner control loops (e.g. a torque loop) as well as by the filtering and sampling frequency. This paper provides an extensive analysis of how these aspects influence the stability region of impedance parameters as well as the passivity of the system. This will be supported by both simulations and experimental data. Moreover, a methodology for designing joint impedance controllers based on an inner torque loop and a positive velocity feedback loop will be presented. The goal of the velocity feedback is to increase (given the constraints to preserve stability) the bandwidth of the torque loop without the need for a complex controller. Comment: 14 pages, Control Theory and Technology (2016).
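
    The nested structure described here can be sketched in a few lines: an outer joint impedance law produces a torque reference, an inner PI torque loop tracks it, and a positive velocity feedback term is added on top. The class below is only an illustration of that structure under assumed gains and a simple discrete-time form, not the paper's design.

        class JointImpedanceWithInnerTorqueLoop:
            """Outer joint impedance law feeding an inner torque loop augmented
            with positive velocity feedback (illustrative gains and structure)."""

            def __init__(self, K=50.0, D=2.0, Kp_t=1.5, Ki_t=20.0, Kv=0.3, dt=1e-3):
                self.K, self.D = K, D              # rendered stiffness and damping
                self.Kp_t, self.Ki_t = Kp_t, Ki_t  # inner torque-loop PI gains
                self.Kv = Kv                       # positive velocity feedback gain
                self.dt = dt
                self._int = 0.0                    # integral of the torque error

            def update(self, q, dq, q_des, tau_meas):
                tau_ref = self.K * (q_des - q) - self.D * dq   # outer impedance law
                e_tau = tau_ref - tau_meas                     # inner-loop torque error
                self._int += self.Ki_t * e_tau * self.dt
                # The +Kv*dq term is the positive velocity feedback intended to widen
                # the effective torque-loop bandwidth without a more complex controller.
                return tau_ref + self.Kp_t * e_tau + self._int + self.Kv * dq

        ctrl = JointImpedanceWithInnerTorqueLoop()
        print(ctrl.update(q=0.10, dq=0.0, q_des=0.15, tau_meas=2.0))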

    On Neuromechanical Approaches for the Study of Biological Grasp and Manipulation

    Biological and robotic grasp and manipulation are undeniably similar at the level of mechanical task performance. However, their underlying fundamental biological vs. engineering mechanisms are, by definition, dramatically different and can even be antithetical. Even our approach to each is diametrically opposite: inductive science for the study of biological systems vs. engineering synthesis for the design and construction of robotic systems. The past 20 years have seen several conceptual advances in both fields and the quest to unify them. Chief among them is the reluctant recognition that their underlying fundamental mechanisms may actually share limited common ground, while exhibiting many fundamental differences. This recognition is particularly liberating because it allows us to resolve and move beyond multiple paradoxes and contradictions that arose from the initial reasonable assumption of a large common ground. Here, we begin by introducing the perspective of neuromechanics, which emphasizes that real-world behavior emerges from the intimate interactions among the physical structure of the system, the mechanical requirements of a task, the feasible neural control actions to produce it, and the ability of the neuromuscular system to adapt through interactions with the environment. This allows us to articulate a succinct overview of a few salient conceptual paradoxes and contradictions regarding under-determined vs. over-determined mechanics, under- vs. over-actuated control, prescribed vs. emergent function, learning vs. implementation vs. adaptation, prescriptive vs. descriptive synergies, and optimal vs. habitual performance. We conclude by presenting open questions and suggesting directions for future research. We hope this frank assessment of the state of the art will encourage and guide these communities to continue to interact and make progress in these important areas.

    Time Scale Hierarchies in the Functional Organization of Complex Behaviors

    Traditional approaches to cognitive modelling generally portray cognitive events in terms of ‘discrete’ states (point attractor dynamics) rather than in terms of processes, thereby neglecting the time structure of cognition. In contrast, more recent approaches explicitly address this temporal dimension, but typically provide no entry points into cognitive categorization of events and experiences. With the aim to incorporate both these aspects, we propose a framework for functional architectures. Our approach is grounded in the notion that arbitrarily complex (human) behaviour is decomposable into functional modes (elementary units), which we conceptualize as low-dimensional dynamical objects (structured flows on manifolds). The ensemble of modes at an agent’s disposal constitutes his/her functional repertoire. The modes may be subjected to additional dynamics (termed operational signals), in particular, instantaneous inputs, and a mechanism that sequentially selects a mode so that it temporarily dominates the functional dynamics. The inputs and selection mechanisms act on faster and slower time scales than that inherent to the modes, respectively. The dynamics across the three time scales are coupled via feedback, rendering the entire architecture autonomous. We illustrate the functional architecture in the context of serial behaviour, namely cursive handwriting. Subsequently, we investigate the possibility of recovering the contributions of functional modes and operational signals from the output, which appears to be possible only when examining the output phase flow (i.e., not from trajectories in phase space or time).
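
    The time scale hierarchy can be caricatured in a few lines of code. The toy below is only an illustration of the general idea, not the paper's model: a slowly drifting selection variable shifts a two-dimensional state from one invented functional mode to another, while a brief input pulse acts on the fast time scale.

        import numpy as np

        def mode_A(x):                    # functional mode 1: flow toward the point (1, 0)
            return np.array([1.0, 0.0]) - x

        def mode_B(x):                    # functional mode 2: cyclic flow around the origin
            return np.array([-x[1], x[0]])

        dt = 1e-3
        x = np.array([0.2, 0.0])          # fast state (the observable output dynamics)
        s = 0.0                           # slow selection variable: 0 = mode A, 1 = mode B
        tau_slow = 3.0                    # selection evolves more slowly than the modes

        for k in range(int(10.0 / dt)):
            t = k * dt
            u = 0.5 if 4.00 < t < 4.05 else 0.0            # brief fast input pulse
            s += dt / tau_slow * (1.0 - s)                 # slow drift of the selection
            flow = (1.0 - s) * mode_A(x) + s * mode_B(x)   # current mixture of mode flows
            x = x + dt * (flow + np.array([u, 0.0]))

        print(f"selection s = {s:.2f}, state x = {x}")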

    The connected brain: Causality, models and intrinsic dynamics

    Recently, there have been several concerted international efforts - the BRAIN initiative, European Human Brain Project and the Human Connectome Project, to name a few - that hope to revolutionize our understanding of the connected brain. Over the past two decades, functional neuroimaging has emerged as the predominant technique in systems neuroscience. This is foreshadowed by an ever-increasing number of publications on functional connectivity, causal modeling, connectomics, and multivariate analyses of distributed patterns of brain responses. In this article, we summarize pedagogically the (deep) history of brain mapping. We will highlight the theoretical advances made in the (dynamic) causal modelling of brain function - that may have escaped the wider audience of this article - and provide a brief overview of recent developments and interesting clinical applications. We hope that this article will engage the signal processing community by showcasing the inherently multidisciplinary nature of this important topic and the intriguing questions that are being addressed.
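
    For readers unfamiliar with dynamic causal modelling, its neuronal state equation is commonly written in the standard bilinear form below (generic DCM notation, not anything specific to this article):

        \dot{x}(t) = \Bigl( A + \sum_j u_j(t)\, B^{(j)} \Bigr) x(t) + C\, u(t)

    Here A encodes the intrinsic coupling among regions, each B^{(j)} captures how experimental input u_j modulates that coupling, and C captures the direct driving effect of the inputs; the hidden states x are mapped to measured responses through an observation (e.g. hemodynamic) model.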

    A versatile biomimetic controller for contact tooling and haptic exploration

    This article presents a versatile controller that enables various contact tooling tasks with minimal prior knowledge of the tooled surface. The controller is derived from results of neuroscience studies that investigated the neural mechanisms utilized by humans to control and learn complex interactions with the environment. We demonstrate here the versatility of this controller in simulations of cutting, drilling and surface exploration tasks, which would normally require different control paradigms. We also present results on the exploration of an unknown surface with a 7-DOF manipulator, where the robot builds a 3D map of the surface profile and texture while applying a constant force during motion. Our controller provides a unified control framework encompassing behaviors expected from different specialized control paradigms such as position control, force control and impedance control.
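
    One ingredient of such biomimetic schemes, error-driven adaptation of a feedforward force with a slow forgetting term, can be sketched in one dimension. The surface model, gains, and adaptation rates below are assumptions chosen only so that this toy example settles near the target force; it is not the article's controller.

        # Press a unit mass against an unknown spring-like surface while the
        # feedforward force F adapts from the contact-force error.
        m, d = 1.0, 20.0                 # end-effector mass and commanded damping
        k_env, c_env = 1000.0, 10.0      # unknown surface stiffness and damping
        alpha, gamma = 20.0, 0.1         # adaptation rate and forgetting rate
        f_des, dt = 5.0, 1e-3            # desired contact force (N) and time step (s)

        x, dx, F = 0.0, 0.0, 0.0         # penetration, velocity, adapted feedforward
        for _ in range(int(5.0 / dt)):
            f_contact = k_env * x + c_env * dx       # force returned by the surface
            ddx = (F - d * dx - f_contact) / m       # feedforward plus damping command
            # Grow F with the force error; slowly forget when the error vanishes.
            F += dt * (alpha * (f_des - f_contact) - gamma * F)
            dx += dt * ddx
            x += dt * dx
        print(f"contact force {k_env * x:.2f} N (target {f_des} N)")

    The forgetting term keeps the adapted effort small, in the spirit of the minimal-effort behaviour such human-inspired adaptation laws aim to reproduce.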

    Autonomy Infused Teleoperation with Application to BCI Manipulation

    Robot teleoperation systems face a common set of challenges including latency, low-dimensional user commands, and asymmetric control inputs. User control with Brain-Computer Interfaces (BCIs) exacerbates these problems through especially noisy and erratic low-dimensional motion commands due to the difficulty in decoding neural activity. We introduce a general framework to address these challenges through a combination of computer vision, user intent inference, and arbitration between the human input and autonomous control schemes. Adjustable levels of assistance allow the system to balance the operator's capabilities and feelings of comfort and control while compensating for a task's difficulty. We present experimental results demonstrating significant performance improvement using the shared-control assistance framework on adapted rehabilitation benchmarks with two subjects implanted with intracortical brain-computer interfaces controlling a seven-degree-of-freedom robotic manipulator as a prosthetic. Our results further indicate that shared assistance mitigates perceived user difficulty and even enables successful performance on previously infeasible tasks. We showcase the extensibility of our architecture with applications to quality-of-life tasks such as opening a door, pouring liquids from containers, and manipulation with novel objects in densely cluttered environments.
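
    A minimal form of the arbitration idea is linear blending between the user's command and an autonomous go-to-goal policy, weighted by the confidence of the intent estimate. The sketch below illustrates that general scheme only; the function, the fixed assistance cap, and all quantities are assumptions rather than the paper's system.

        import numpy as np

        def arbitrate(u_user, ee_pos, goal, confidence, max_assist=0.8):
            """Blend user and autonomous commands; confidence in [0, 1] reflects
            how certain the intent inference is about the inferred goal."""
            u_auto = goal - ee_pos                       # simple go-to-goal velocity
            speed = np.linalg.norm(u_auto)
            if speed > 1e-6:
                u_auto = u_auto / speed * min(speed, 1.0)  # cap autonomous speed
            alpha = max_assist * confidence              # adjustable level of assistance
            return (1.0 - alpha) * u_user + alpha * u_auto

        # Example: noisy low-dimensional user input, fairly confident intent estimate
        cmd = arbitrate(u_user=np.array([0.3, -0.1, 0.0]),
                        ee_pos=np.array([0.0, 0.0, 0.2]),
                        goal=np.array([0.5, 0.0, 0.0]),
                        confidence=0.7)
        print(cmd)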