44 research outputs found

    Pump it up : computer animation of a biomechanically based model of muscle using the finite element method

    Get PDF
    Thesis (Ph. D.)--Massachusetts Institute of Technology, Dept. of Architecture, 1992. Includes bibliographical references (leaves 175-179). By David Tzu-Wei Chen. Ph.D.

    Animation, Simulation, and Control of Soft Characters using Layered Representations and Simplified Physics-based Methods

    Get PDF
    Realistic behavior of computer-generated characters is key to bringing virtual environments, computer games, and other interactive applications to life. The plausibility of a virtual scene is strongly influenced by the way objects move around and interact with each other. Traditionally, actions are limited to motion-capture-driven or pre-scripted motion of the characters. Physics enhances the sense of realism: physical simulation is required to make objects act as expected in real life. To make gaming and virtual environments truly immersive, it is crucial to simulate the response of characters to collisions and to produce secondary effects such as skin wrinkling and muscle bulging. Unfortunately, existing techniques generally cannot achieve these effects in real time, do not address the coupled response of a character's skeleton and skin to collisions, nor do they support artistic control. In this dissertation, I present interactive algorithms that enable physical simulation of deformable characters with high surface detail and support for intuitive deformation control. I propose a novel unified framework for real-time modeling of soft objects with skeletal deformations and surface deformation due to contact, and their interplay, for object surfaces with up to tens of thousands of degrees of freedom. I make use of layered models to reduce computational complexity. I introduce dynamic deformation textures, which map three-dimensional deformations in the deformable skin layer to a two-dimensional domain for extremely efficient parallel computation of the dynamic elasticity equations and optimized hierarchical collision detection. I also enhance layered models with responsive contact handling to support the interplay between skeletal motion and surface contact and the resulting two-way coupling effects. Finally, I present dynamic morph targets, which enable intuitive control of dynamic skin deformations at run-time by simply sculpting pose-specific surface shapes. The resulting framework enables real-time and directable simulation of soft articulated characters with frictional contact response, capturing the interplay between skeletal dynamics and complex, non-linear skin deformations.
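
    As a rough illustration of the layered idea described above, the sketch below keeps a skin layer on a regular two-dimensional grid (a crude stand-in for the dynamic deformation textures) whose texels are updated in parallel with an explicit spring-damper step toward a skeleton-driven rest surface, plus a penalty response against a ground plane. Resolutions, constants, and the base-surface function are illustrative assumptions, not the dissertation's implementation.

        # Minimal sketch (assumed, not the dissertation's code): a deformable skin layer
        # stored on a 2D grid of texels, each holding a 3D position relative to a
        # skeleton-driven rest surface, advanced with explicit spring-damper dynamics
        # and a simple penalty contact against the ground plane z = 0.
        import numpy as np

        H, W = 64, 64                   # texture resolution (illustrative)
        dt, mass = 1e-3, 0.01           # time step (s) and per-texel mass (kg)
        k_skin, damping = 400.0, 0.5    # pull toward the skeletal layer, velocity damping
        k_contact = 2000.0              # penalty stiffness against the ground plane
        gravity = np.array([0.0, 0.0, -9.81])

        def skeleton_rest_positions(t):
            """Rest surface driven by the rigid skeletal layer (here a moving bulge)."""
            u, v = np.meshgrid(np.linspace(0, 1, W), np.linspace(0, 1, H))
            z = 0.01 + 0.02 * np.sin(2 * np.pi * (u + t))
            return np.stack([u, v, z], axis=-1)

        pos = skeleton_rest_positions(0.0)      # per-texel skin position
        vel = np.zeros((H, W, 3))               # per-texel skin velocity

        def step(t):
            global pos, vel
            rest = skeleton_rest_positions(t)
            # Elastic pull of the skin layer toward the skeleton-driven rest shape,
            # plus velocity damping and gravity.
            force = -k_skin * (pos - rest) - damping * vel + mass * gravity
            # Penalty contact force wherever the skin penetrates the ground plane.
            penetration = np.minimum(pos[..., 2], 0.0)
            force[..., 2] += -k_contact * penetration
            vel += dt * force / mass            # all texels updated in parallel
            pos += dt * vel
            return pos                          # deformed skin surface for rendering

        for i in range(1000):
            surface = step(i * dt)

    Because every texel is updated through array-wide operations, the same loop maps directly onto data-parallel hardware, which is the main point of moving the skin dynamics into a two-dimensional domain.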

    Multi-finger grasps in a dynamic environment

    Get PDF
    Most current state-of-the-art haptic devices render only a single force; however, almost all human grasps are characterised by multiple forces and torques applied by the fingers and palms of the hand to the object. In this chapter we begin by considering the different types of grasp and then consider the physics of rigid objects that is needed for correct haptic rendering. We then describe an algorithm to represent the forces associated with grasping in a natural manner. The power of the algorithm is that it considers only the capabilities of the haptic device and requires no model of the hand, and thus applies to most practical grasp types. The technique is sufficiently general that it would also apply to multi-hand interactions, and hence to collaborative interactions where several people interact with the same rigid object. Key concepts in friction and rigid body dynamics are discussed and applied to the problem of rendering multiple forces, allowing the person to choose their grasp on a virtual object and perceive the resulting movement via the forces in a natural way. The algorithm also generalises well to support the computation of multi-body physics.
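
    The rigid-body concepts the chapter builds on can be sketched roughly as follows: each fingertip force is clamped to a Coulomb friction cone, the forces and their torques about the centre of mass are summed into a single wrench, and the object's state is advanced with a basic Newton-Euler step. The contact data, friction coefficient, and integration details below are illustrative assumptions, not the chapter's algorithm.

        # Illustrative sketch (assumed, not the chapter's rendering algorithm): several
        # fingertip contact forces on a rigid object are clamped to a Coulomb friction
        # cone, summed into a net wrench about the centre of mass, and used to advance
        # the object's linear and angular velocity (gyroscopic terms omitted for brevity).
        import numpy as np

        mass = 0.5                                  # kg, illustrative object
        inertia = np.diag([1e-3, 1e-3, 1e-3])       # body inertia tensor (kg m^2)
        mu = 0.8                                    # Coulomb friction coefficient
        gravity = np.array([0.0, 0.0, -9.81])

        def clamp_to_friction_cone(force, normal):
            """Limit the tangential part of a contact force to mu times the normal load."""
            fn = np.dot(force, normal)
            if fn <= 0.0:                           # a contact can only push, never pull
                return np.zeros(3)
            ft = force - fn * normal
            ft_mag = np.linalg.norm(ft)
            if ft_mag > mu * fn:                    # sliding: saturate the tangential force
                ft *= (mu * fn) / ft_mag
            return fn * normal + ft

        def net_wrench(contacts, com):
            """Sum all contact forces and their torques about the centre of mass."""
            force = mass * gravity
            torque = np.zeros(3)
            for point, raw_force, normal in contacts:
                f = clamp_to_friction_cone(raw_force, normal)
                force += f
                torque += np.cross(point - com, f)
            return force, torque

        def step(com, v, omega, contacts, dt=1e-3):
            """One explicit Newton-Euler step of the grasped rigid object."""
            f, tau = net_wrench(contacts, com)
            v = v + dt * f / mass
            omega = omega + dt * np.linalg.solve(inertia, tau)
            com = com + dt * v
            return com, v, omega

        # Three fingertips squeezing a small object; each entry is (point, force, normal).
        contacts = [
            (np.array([0.02, 0.0, 0.0]), np.array([-1.0, 0.0, 3.0]), np.array([-1.0, 0.0, 0.0])),
            (np.array([-0.02, 0.0, 0.0]), np.array([1.0, 0.0, 3.0]), np.array([1.0, 0.0, 0.0])),
            (np.array([0.0, -0.02, 0.0]), np.array([0.0, 1.0, 3.0]), np.array([0.0, 1.0, 0.0])),
        ]
        com, v, omega = np.zeros(3), np.zeros(3), np.zeros(3)
        com, v, omega = step(com, v, omega, contacts)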

    Instantaneous Momentum-Based Control of Floating Base Systems

    Get PDF
    In the last two decades, a growing number of robotic applications such as autonomous drones, wheeled robots and industrial manipulators started to be employed in several human environments. However, these machines often possess limited locomotion and/or manipulation capabilities, thus reducing the number of achievable tasks and increasing the complexity of robot-environment interaction. Augmenting robots' locomotion and manipulation abilities is a fundamental research topic, with a view to enhancing robots' participation in complex tasks involving safe interaction and cooperation with humans. To this purpose, humanoid robots, aerial manipulators and the novel design of flying humanoid robots are among the most promising platforms researchers are studying in the attempt to remove the existing technological barriers. These robots are often modeled as floating base systems, having lost the assumption -- typical of fixed base robots -- of one link always being attached to the ground. On the robot control side, contact force regulation has proven to be fundamental for the execution of interaction tasks. Contact forces can be influenced by directly controlling the robot's momentum rate of change, and this fact gives rise to several momentum-based control strategies. Nevertheless, the effective design of force and torque controllers remains a complex challenge. The variability of the sensor load during interaction, the inaccuracy of force/torque sensing technology and the inherent nonlinearities of robot models are only a few of the complexities impairing efficient robot force control. This research project focuses on the design of balancing and flight controllers for floating base robots interacting with the surrounding environment. More specifically, the research builds upon the state of the art of momentum-based controllers and is applied to three robotic platforms: the humanoid robot iCub, the aerial manipulator OTHex and the jet-powered humanoid robot iRonCub. The project reinforces the existing literature with both theoretical and experimental results, aimed at achieving high robot performance and improved stability and robustness in the presence of different physical robot-environment interactions.
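
    The core relation behind momentum-based control, given here in its standard textbook form rather than the specific controllers developed in this thesis, links the rate of change of the robot's centroidal momentum to gravity and the external contact wrenches; the controller then selects the contact wrenches so that the momentum tracks a reference (symbols below are generic and chosen for illustration):

        \dot{h} =
        \begin{bmatrix} \dot{h}_{\ell} \\ \dot{h}_{\omega} \end{bmatrix}
        =
        \begin{bmatrix} m\,g \\ 0 \end{bmatrix}
        + \sum_{i=1}^{n_c}
        \begin{bmatrix} f_i \\ (p_i - x_{\mathrm{com}}) \times f_i + \tau_i \end{bmatrix},
        \qquad
        \dot{h}^{*} = \dot{h}^{d} + K_p\,(h^{d} - h)

    Here h_l and h_w are the linear and angular momentum, m the total mass, g the gravity vector, and (f_i, tau_i) the force and moment applied at the i-th contact point p_i. Choosing the contact wrenches so that the momentum rate of change equals the reference \dot{h}^{*} is what allows contact forces to be regulated through the momentum, as described above.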

    Patient-specific simulation for autonomous surgery

    Get PDF
    An Autonomous Robotic Surgical System (ARSS) has to interact with the complex anatomical environment, which is deforming and whose properties are often uncertain. Within this context, an ARSS can benefit from the availability of a patient-specific simulation of the anatomy. For example, simulation can provide a safe and controlled environment for the design, testing and validation of the autonomous capabilities. Moreover, it can be used to generate large amounts of patient-specific data that can be exploited to learn models and/or tasks. The aim of this Thesis is to investigate the different ways in which simulation can support an ARSS and to propose solutions that favor its adoption in robotic surgery. We first address all the phases needed to create such a simulation, from model choice in the pre-operative phase based on the available knowledge to its intra-operative update to compensate for inaccurate parametrization. We propose to rely on deep neural networks trained with synthetic data both to generate a patient-specific model and to design a strategy to update the model parametrization directly from intra-operative sensor data. Afterwards, we test how simulation can assist the ARSS, both for task learning and during task execution. We show that simulation can be used to efficiently train approaches that require multiple interactions with the environment, compensating for the risk of acquiring data from real surgical robotic systems. Finally, we propose a modular framework for autonomous surgery that includes deliberative functions to handle real anatomical environments with uncertain parameters. The integration of a personalized simulation proves fundamental both for optimal task planning and to enhance and monitor real execution. The contributions presented in this Thesis have the potential to introduce significant step changes in the development and actual performance of autonomous robotic surgical systems, bringing them closer to applicability in real clinical conditions.
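
    To make the synthetic-data idea above concrete, the toy sketch below replaces the deep networks and the patient-specific model with a one-parameter forward model and an ordinary least-squares regressor: synthetic displacement fields are generated for sampled tissue parameters, a regressor is fitted to recover the parameter, and the estimate is then used to update the simulation from (simulated) intra-operative measurements. Every model, constant, and variable name here is an illustrative assumption, not the thesis's pipeline.

        # Toy sketch (assumed): recover a tissue parameter from observed surface
        # displacements using only synthetic training data, then update the simulation
        # intra-operatively. Ordinary least squares stands in for the deep networks
        # described in the thesis; the one-parameter "simulation" is purely illustrative.
        import numpy as np

        rng = np.random.default_rng(0)
        probe_force = 1.0                       # N, fixed probing force in the toy model

        def simulate_displacements(stiffness, n_points=16):
            """Toy forward model: softer tissue gives larger displacements under the probe."""
            depth = np.linspace(0.0, 1.0, n_points)
            return (probe_force / stiffness) * np.exp(-3.0 * depth)

        # 1) Pre-operative phase: build a synthetic training set over plausible stiffnesses.
        stiff_train = rng.uniform(0.5, 5.0, size=500)
        X = np.stack([simulate_displacements(k) + 1e-3 * rng.standard_normal(16)
                      for k in stiff_train])
        y = 1.0 / stiff_train                   # regress compliance, which is linear in X

        # 2) Fit the regressor (least squares with a bias term, standing in for a network).
        A = np.hstack([X, np.ones((X.shape[0], 1))])
        w, *_ = np.linalg.lstsq(A, y, rcond=None)

        # 3) Intra-operative phase: estimate the parameter from sensed displacements and
        #    update the patient-specific simulation with the recovered stiffness.
        true_stiffness = 2.3
        sensed = simulate_displacements(true_stiffness) + 1e-3 * rng.standard_normal(16)
        estimated_stiffness = 1.0 / float(np.append(sensed, 1.0) @ w)
        updated_model = simulate_displacements(estimated_stiffness)
        print(f"true stiffness {true_stiffness:.2f}, estimated {estimated_stiffness:.2f}")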

    TMTDyn: A Matlab package for modeling and control of hybrid rigid–continuum robots based on discretized lumped systems and reduced-order models

    Get PDF
    A reliable, accurate, and yet simple dynamic model is important for analyzing, designing, and controlling hybrid rigid–continuum robots. Such models should be fast, as simple as possible, and user-friendly to be widely accepted by the ever-growing robotics research community. In this study, we introduce two new modeling methods for continuum manipulators: a general reduced-order model (ROM) and a discretized model with absolute states and Euler–Bernoulli beam segments (EBA). In addition, a new formulation is presented for a recently introduced discretized model based on Euler–Bernoulli beam segments and relative states (EBR). We implement these models in a Matlab software package, named TMTDyn, to develop a modeling tool for hybrid rigid–continuum systems. The package features a new high-level language (HLL) text-based interface, a CAD-file import module, automatic formation of the system equation of motion (EOM) for different modeling and control tasks, an implementation of Matlab C-mex functionality for improved performance, and modules for static and linear modal analysis of a hybrid system. The underlying theory and software package are validated against experimental results for (i) the dynamics of a continuum appendage, and (ii) the general deformation of a fabric sleeve worn by a rigid link pendulum. A comparison shows the higher simulation accuracy (8–14% normalized error) and numerical robustness of the ROM model for a system with a small number of states, and the computational efficiency of the EBA model, whose near real-time performance makes it suitable for large systems. The challenges and necessary modules to further automate the design and analysis of hybrid systems with a large number of states are briefly discussed.
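
    For a flavour of what a discretized lumped-system model of a continuum appendage looks like, the sketch below approximates the appendage as a chain of lumped point masses with stretching springs and a discrete bending penalty, clamped at the base and sagging under gravity. This is a deliberately much simpler stand-in than the ROM/EBR/EBA formulations and does not use TMTDyn or its HLL interface; all constants are illustrative.

        # Much-simplified sketch (assumed): a continuum appendage discretized into lumped
        # point masses joined by stretching springs, with a discrete bending penalty
        # standing in for Euler-Bernoulli stiffness. Not the TMTDyn formulation or API.
        import numpy as np

        n, length = 12, 0.3                 # number of lumped nodes, appendage length (m)
        l0 = length / (n - 1)               # segment rest length (m)
        mass = 0.05 / n                     # lumped mass per node (kg)
        k_stretch, k_bend = 200.0, 0.02     # axial and bending stiffness (illustrative)
        damping, dt = 0.02, 2e-5            # viscous damping and time step
        g = np.array([0.0, -9.81])          # planar model, gravity along -y

        pos = np.stack([np.linspace(0.0, length, n), np.zeros(n)], axis=1)  # horizontal
        vel = np.zeros_like(pos)

        def forces(pos, vel):
            f = np.tile(mass * g, (n, 1)) - damping * vel
            # Stretching springs between consecutive nodes.
            d = pos[1:] - pos[:-1]
            dist = np.linalg.norm(d, axis=1, keepdims=True)
            fs = k_stretch * (dist - l0) * d / dist
            f[:-1] += fs
            f[1:] -= fs
            # Discrete bending penalty with energy 0.5*k_bend*|p[i-1]-2p[i]+p[i+1]|^2.
            b = pos[:-2] - 2.0 * pos[1:-1] + pos[2:]
            f[:-2] -= k_bend * b
            f[1:-1] += 2.0 * k_bend * b
            f[2:] -= k_bend * b
            return f

        for _ in range(200_000):            # let the appendage sag under gravity (~4 s)
            vel += dt * forces(pos, vel) / mass
            pos += dt * vel
            pos[0] = [0.0, 0.0]             # clamp the base node
            vel[0] = [0.0, 0.0]
        tip_position = pos[-1]

    A package such as TMTDyn automates precisely the step hand-coded here: assembling the equations of motion for chains of such elements from a high-level description of the hybrid system.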

    From Underactuation to Quasi‐Full Actuation: A Unifying Control Framework for Rigid and Elastic Joint Robot

    Get PDF
    The quest for animal-like performance in robots has driven the integration of elastic elements in their drive trains, sparking a revolution in robot design. Elastic robots can store and release potential energy, providing distinct advantages over traditional robots, such as enhanced safety in human-robot interaction, resilience to mechanical shocks, improved energy efficiency in cyclic tasks, and dynamic motion capabilities. Exploiting their full potential, however, necessitates novel control methods. This thesis advances the field of nonlinear control for underactuated systems and utilizes the results to push the boundaries of motion and interaction performance of elastic robots. Through real-life experiments and applications, the proposed controllers demonstrate that compliant robots hold promise as groundbreaking robotic technology. To achieve these objectives, we first derive a simultaneous phase space and input transformation that enables a specific class of underactuated Lagrangian systems to be treated as if fully actuated. These systems can be represented as the interconnection of actuated and underactuated subsystems, with the kinetic energy of each subsystem depending only on its own velocity. Elastic robots are typical representatives. We refer to the transformed system as quasi-fully actuated due to weak constraints on the new inputs. Fundamental aspects of the transforming equations are that 1) the same Lagrangian function characterizes both the original and transformed systems, 2) the transformed system establishes a passive mapping between inputs and outputs, and 3) the solutions of both systems are in a one-to-one correspondence, describing the same physical reality. This correspondence allows us to study and control the behavior of the quasi-fully actuated system instead of the underactuated one. Thus, this approach unifies the control design for rigid and elastic joint robots, enabling the direct application of control results inherited from the fully-actuated case while ensuring closed-loop system stability and passivity. Unlike existing methods, the quasi-full actuation concept does not rely on inner control loops or the neglect and cancellation of dynamics. Notably, as joint stiffness values approach infinity, the control equivalent of a rigid robot is recovered. Building upon the quasi-full actuation concept, we extend energy-based control schemes such as energy shaping and damping injection, Euler-Lagrange controllers, and impedance control. Moreover, we introduce Elastic Structure Preserving (ESP) control, a passivity-based control scheme designed for robots with elastic or viscoelastic joints, guided by the principle of "do as little as possible". The underlying hope is that reducing the system shaping, i.e., having the closed-loop dynamics match the robot's intrinsic structure in some way, will afford high performance with little control effort. By minimizing the system shaping, we obtain low-gain designs, which are favorable concerning robustness and facilitate the emergence of natural motions. A comparison with state-of-the-art controllers highlights the minimalistic nature of ESP control. Additionally, we present a synthesis method, based on purely geometric arguments, for achieving time-optimal rest-to-rest motions of an elastic joint with bounded input. Finally, we showcase the remarkable performance and robustness of the proposed ESP controllers on DLR David, an anthropomorphic robot implemented with variable impedance actuators.
    Experimental evidence reveals that ESP designs enable safe and compliant interaction with the environment and rigid-robot-level accuracy in free motion. Additionally, we introduce a control framework that allows DLR David to perform commercially relevant tasks, such as pick-and-place, teleoperation, hammer drilling into a concrete block, and unloading a dishwasher. The successful execution of these tasks provides compelling evidence that compliant robots have a promising future in commercial applications.
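
    For context, the reduced model of a robot with elastic joints that this line of work typically starts from is the standard flexible-joint (Spong) form; the thesis's specific phase space and input transformation is not reproduced here, and the symbols below are the usual ones:

        M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} + g(q) = K\,(\theta - q) + \tau_{\mathrm{ext}}
        B\,\ddot{\theta} + K\,(\theta - q) = \tau_{m}

    with q the link positions, \theta the motor positions, K the (diagonal) joint stiffness, B the motor inertia, and \tau_m the motor torques. Only the motor side is directly actuated, so the link dynamics are underactuated; as the stiffness K grows unbounded, \theta converges to q and the rigid-joint model is recovered, which matches the limiting behavior mentioned in the abstract.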

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    Get PDF
    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the winner of the Best Paper in the CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced by the well-known research groups, commercial companies and scientific organizations that continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.