
    Foot-Controlled Supernumerary Robotic Arm: Control Methods and Human Abilities

    Supernumerary robotic limbs (SRLs) are extra robotic appendages that assist a user with various tasks. A central challenge with SRLs is how to operate them effectively. One solution is to teleoperate the arm with the foot, freeing the person's hands for other tasks. However, unlike hand interfaces, it is not known how to create effective foot control for robotic teleoperation. A foot interface was developed for an experiment comparing position and rate control with the foot; position control proved more effective than rate control for 2D positioning tasks. Even with an effective control strategy, it was unknown whether a person can control a robot with the foot while simultaneously using both arms. A second experiment shows that humans can operate an SRL with the foot while performing a task with both hands.
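
The position vs. rate distinction above can be sketched in a few lines. This is a minimal illustration with hypothetical gains, not the paper's interface: position control maps foot displacement directly to a robot position, while rate control treats it as a velocity command, so a held foot offset produces steady drift.

```python
def position_control(foot_pos, gain=2.0):
    """Robot position mirrors foot displacement (scaled). Gain is illustrative."""
    return gain * foot_pos

def rate_control(foot_pos, robot_pos, dt, gain=0.5):
    """Foot displacement sets robot velocity; integrate over time."""
    return robot_pos + gain * foot_pos * dt

# With position control the robot tracks the foot directly:
assert position_control(0.1) == 0.2

# With rate control, holding the foot at +0.1 m for 1 s (dt = 0.01 s)
# drifts the robot by gain * 0.1 * 1 s = 0.05 m:
p = 0.0
for _ in range(100):
    p = rate_control(0.1, p, dt=0.01)
print(round(p, 3))
```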

    Supernumerary Robotic Arm for Three-Handed Surgical Application: Behavioral Study and Design of Human-Machine Interface

    From surgical to industrial manipulation, operators need assistance for tasks requiring more than two hands. Teamwork can be a source of errors and inefficiency, especially if the assistant is a novice or unfamiliar with the main operator. The need for assistance becomes problematic when human resources are lacking, e.g., in emergency surgical cases late at night. Our objective is to improve the surgeon's autonomy and dexterity with a robotic arm under their own control. Although a number of robotic instrument holders have been developed, the best way to control such devices remains an open question: no behavioral study has been conducted on the best control strategy and human performance in three-handed tasks. On the basis of a literature review, we selected the foot for commanding the third arm. A series of experiments in virtual environments was conducted to study the feasibility of this choice. The first experiment compared performance in the same task using two or three hands; results show that three-handed manipulation is preferred to two-handed manipulation in demanding tasks. The second experiment investigated the types of tasks suited to three-handed manipulation and users' learning curves. Moving the hands and a foot simultaneously in opposite directions was perceived as difficult compared to a more active task with freedom in choosing limb coordination. Limbs were moved in parallel rather than serially, performance improved within a few minutes of practice, and the sense of ownership improved steadily during the experiment. Two further experiments addressed handling the endoscope in laparoscopic surgery, with surgeons and medical students as participants. Residents had a more positive attitude towards foot usage and performed better than more experienced surgeons, suggesting that the best period for surgeons to train with a foot-controlled robotic arm is during their residency.
A realistic virtual abdominal cavity was developed for the last experiment. This had a positive influence on participants' performance and emphasizes the importance of a familiar context for training such a "three-handed surgery". Finally, two different foot interfaces were developed to investigate the most intuitive strategy for commanding the third arm: one controls the robotic arm through the foot's translation or rotation (isotonic interface), the other through force or torque (isometric interface). An experimental behavioral study compared the two devices. Isometric rate control was preferred to isotonic position control owing to its lower physical burden and the higher movement accuracy of the robot, and the proposed isometric rate-control device could intuitively command four DoFs of a slave robotic arm. This thesis is a first step in the systematic investigation of three-handed manipulation: two biological hands plus a foot-controlled robotic assistant. The findings suggest high potential for using the foot to become more autonomous in surgery as well as other fields; users can learn the control paradigm in a short time with little mental and physical burden. We expect the developed foot interfaces to form the basis of more intuitive control interfaces, and we believe foot-controlled robotic arms will become common in surgical as well as industrial applications.
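
The isometric interface's force-to-motion mapping can be illustrated with a minimal rate-control law. The deadband, gain and saturation values below are hypothetical placeholders, not the thesis's parameters: foot force beyond a deadband commands a robot velocity, capped at a maximum.

```python
def isometric_rate(force, deadband=1.0, gain=0.02, v_max=0.1):
    """Map foot force (N) to commanded robot velocity (m/s).

    Forces inside the deadband are ignored (resting foot weight);
    beyond it, velocity grows linearly and saturates at v_max.
    All constants are illustrative.
    """
    if abs(force) <= deadband:
        return 0.0
    v = gain * (abs(force) - deadband)
    return min(v, v_max) * (1 if force > 0 else -1)

print(isometric_rate(0.5))    # inside deadband: no motion
print(isometric_rate(3.0))    # proportional response
print(isometric_rate(100.0))  # saturated at v_max
```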

    Kinisi: A Platform for Autonomizing Off-Road Vehicles

    This project proposed a modular system to autonomize off-road vehicles while retaining full manual operability. The MQP team designed and developed a Level 3 autonomous vehicle prototype using an SAE Baja vehicle outfitted with actuators and exteroceptive sensors. By the end of the project, the vehicle had a drive-by-wire system and could localize itself using its sensors, generate a map of its surroundings, and plan a path to follow a desired trajectory. Given a map, the vehicle could traverse a series of obstacles in an enclosed environment. The long-term goal is to make the software system modular and real-time, so the vehicle can autonomously navigate off-road terrain to rescue and aid a distressed individual.
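
A grid-based planner of the kind used in such map-then-plan pipelines can be sketched with standard A* search. This is a generic illustration on a toy occupancy grid, not the team's actual planner.

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (0 = free, 1 = obstacle).

    Returns the shortest path as a list of (row, col) cells, or None.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    frontier = [(h(start), 0, start, None)]
    came_from, cost = {}, {start: 0}
    while frontier:
        _, g, node, parent = heapq.heappop(frontier)
        if node in came_from:          # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:               # reconstruct path back to start
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < cost.get((nr, nc), float("inf")):
                    cost[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None

# A wall in the middle row forces a detour around the right side:
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))
```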

    Bringing a Humanoid Robot Closer to Human Versatility : Hard Realtime Software Architecture and Deep Learning Based Tactile Sensing

    For centuries, it has been a vision of man to create humanoid robots, i.e., machines that not only resemble the shape of the human body but have similar capabilities, especially in dextrously manipulating their environment. Only in recent years has it become possible to build actual humanoid robots with many degrees of freedom (DOF) and torque-controlled joints, a prerequisite for acting sensitively in the world. In this thesis, we extend DLR's advanced mobile torque-controlled humanoid robot Agile Justin in two important directions to get closer to human versatility. First, we enable Agile Justin, originally built as a research platform for dextrous mobile manipulation, to also execute complex dynamic manipulation tasks, demonstrated with the challenging task of catching up to two simultaneously thrown balls with its hands. Second, we equip Agile Justin with highly developed, deep learning based tactile sensing capabilities that are critical for dextrous fine manipulation, demonstrated with the delicate task of identifying an object's material simply by gently sweeping a fingertip over its surface. Key to realizing complex dynamic manipulation tasks is a software framework that supports a component-based system architecture to cope with the complexity and the parallel, distributed computational demands of deep sensor-perception-planning-action loops -- under tight timing constraints. This thesis presents the communication layer of our aRDx (agile robot development -- next generation) software framework, which provides hard realtime determinism and optimal transport of data packets: zero-copy for intra- and inter-process communication and copy-once for distributed communication. In implementing the challenging ball-catching application on Agile Justin, we take full advantage of aRDx's performance and advanced features such as channel synchronization.
Besides the challenging visual ball tracking, which uses only onboard sensing while everything is moving, and the automatic, self-contained calibration procedure that provides the necessary precision, the major contribution is the unified generation of the reaching motion for the arms: catch point selection, motion planning and joint interpolation are subsumed in one nonlinear constrained optimization problem, solved in realtime, which allows for different catch behaviors. For the highly sensitive task of tactile material classification with a flexible pressure-sensitive skin on Agile Justin's fingertip, we present our deep convolutional network architecture TactNet-II. Its input is the raw 16,000-dimensional, complex and noisy spatio-temporal tactile signal generated when sweeping over an object's surface. For comparison, we performed a thorough human-performance experiment with 15 subjects, which shows that Agile Justin reaches superhuman performance in the high-level material classification task (which material is it?) as well as in the low-level material differentiation task (are two materials the same?). To increase the sample efficiency of TactNet-II, we adapt state-of-the-art deep end-to-end transfer learning to tactile material classification, leading to an up to 15-fold reduction in the number of training samples needed. The presented methods led to six publication awards and award-finalist nominations and to international media coverage, and also worked robustly at many trade fairs and lab demos.
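
The overall shape of a tactile-classification pipeline like the one described can be sketched with a toy 1D convolution over the spatio-temporal signal. Everything below (layer sizes, random weights, five material classes) is illustrative and is not the actual TactNet-II architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, b):
    """Valid 1D convolution over time with ReLU: x (C_in, T), w (C_out, C_in, K)."""
    c_out, c_in, k = w.shape
    t_out = x.shape[1] - k + 1
    y = np.zeros((c_out, t_out))
    for t in range(t_out):
        y[:, t] = np.tensordot(w, x[:, t:t + k], axes=([1, 2], [0, 1])) + b
    return np.maximum(y, 0.0)

# Toy tactile sweep: 16 taxels x 1000 timesteps = 16,000 raw values
x = rng.standard_normal((16, 1000))
w1, b1 = rng.standard_normal((8, 16, 9)) * 0.1, np.zeros(8)
h = conv1d(x, w1, b1)          # (8, 992) temporal feature maps
f = h.mean(axis=1)             # global average pooling -> (8,) feature vector
W, b = rng.standard_normal((5, 8)) * 0.1, np.zeros(5)
logits = W @ f + b             # scores over 5 hypothetical material classes
pred = int(np.argmax(logits))
print(h.shape, pred)
```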

    Robotics 2010

    Without a doubt, robotics has made incredible progress over the last decades. The vision of developing, designing and creating technical systems that help humans achieve hard and complex tasks has led to an incredible variety of solutions. Few technical fields exhibit more interdisciplinary interconnections than robotics, a consequence of the highly complex challenges posed by robotic systems, especially the requirement of intelligent and autonomous operation. This book tries to give an insight into the evolutionary process taking place in robotics, providing articles that cover a wide range of this exciting area. The progress of technical challenges and concepts may illuminate the relationship between developments that seem completely different at first sight. Robotics remains an exciting scientific and engineering field, and the community looks ahead optimistically to future challenges and new developments.

    Actuators and sensors for application in agricultural robots: A review

    In recent years, with the rapid development of science and technology, agricultural robots have gradually begun to replace humans in various agricultural operations, changing traditional production methods: labor input is reduced and production efficiency improved, which contributes to the development of smart agriculture. This paper reviews the core technologies used by agricultural robots in non-structured environments, covering the technological progress of drive systems, control strategies, end-effectors, robotic arms, environmental perception, and other related systems. The review shows that in a non-structured agricultural environment, by using cameras, light detection and ranging (LiDAR), ultrasonic and satellite navigation equipment, and by integrating sensing, transmission, control, and operation, different types of actuators can be designed and developed to drive the advance of agricultural robots and meet the delicate and complex requirements of agricultural products as operational objects, achieving better productivity and standardization of agriculture. In summary, agricultural production is developing toward a data-driven, standardized, and unmanned approach, with smart agriculture supported by actuator-driven agricultural robots. The paper concludes with a summary of the main existing technologies and challenges in developing actuators for agricultural robots, and an outlook on the primary development directions of agricultural robots in the near future.

    Sensors for Robotic Hands: A Survey of State of the Art

    Recent decades have seen significant progress in the field of artificial hands. Most surveys that try to capture the latest developments in this field focus on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. To present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot-hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and hints at future directions in sensor development for artificial hands.

    Robocatch: Design and Making of a Hand-Held Spillage-Free Specimen Retrieval Robot for Laparoscopic Surgery

    Specimen retrieval is an important step in laparoscopy, a minimally invasive surgical procedure performed to diagnose and treat a myriad of medical pathologies in fields ranging from gynecology to oncology. Specimen retrieval bags (SRBs) are used to facilitate this task while minimizing contamination of neighboring tissues and port-sites in the abdominal cavity. This manual procedure requires multiple ports, creating traffic from the simultaneous operation of multiple instruments in a limited shared workspace. Its skill-demanding nature makes it time-consuming, leading to surgeon fatigue and operational inefficiency. This thesis presents the design and making of RoboCatch, a novel hand-held robot that aids a surgeon in performing spillage-free retrieval of operative specimens in laparoscopic surgery. The proposed design significantly modifies and extends the conventional instruments currently used for the retrieval task: the core instrumentation of RoboCatch comprises a webbed three-fingered grasper and atraumatic forceps that sit concentrically, in a folded configuration, inside a trocar. The specimen retrieval task is achieved in six stages: 1) the trocar is introduced into the surgical site through an instrument port, 2) the three webbed fingers slide out of the tube and simultaneously unfold in an umbrella-like fashion, 3) the forceps slide toward, and grasp, the excised specimen, 4) the forceps retract the grasped specimen into the center of the surrounding grasper, 5) the grasper closes to securely contain the specimen, and 6) the grasper, along with the contained specimen, is manually removed from the abdominal cavity. The resulting reduction in the number of active ports reduces port-site obstruction and increases the procedure's efficiency.
The design process began by acquiring crucial parameters from surgeons and creating a design table, which informed the CAD modeling of the robot structure and the selection of actuation units and fabrication material. The robot prototype was first examined in CAD simulation and then fabricated using an Objet30 Prime 3D printer. Physical validation experiments verified the functionality of the robot's mechanisms, and specimen retrieval experiments with porcine meat samples tested the feasibility of the proposed design. Experimental results showed that the robot could retrieve specimens ranging in mass from 1 gram to 50 grams. The making of RoboCatch represents a significant step toward advancing the frontiers of hand-held robots for specimen retrieval in minimally invasive surgery.
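
The six retrieval stages can be modeled as a simple sequential state machine. The stage names below are paraphrases of the stages listed above, not identifiers from the thesis:

```python
from enum import Enum, auto

class Stage(Enum):
    """Hypothetical labels for the six retrieval stages, plus a terminal state."""
    INSERT_TROCAR = auto()     # 1) trocar enters through an instrument port
    UNFOLD_GRASPER = auto()    # 2) webbed fingers unfold, umbrella-like
    GRASP_SPECIMEN = auto()    # 3) forceps grasp the excised specimen
    RETRACT_FORCEPS = auto()   # 4) specimen retracted into the grasper
    CLOSE_GRASPER = auto()     # 5) grasper closes to contain the specimen
    REMOVE = auto()            # 6) grasper and specimen removed manually
    DONE = auto()

ORDER = list(Stage)

def advance(stage):
    """Advance to the next retrieval stage (DONE is terminal)."""
    i = ORDER.index(stage)
    return ORDER[min(i + 1, len(ORDER) - 1)]

s = Stage.INSERT_TROCAR
trace = [s]
while s is not Stage.DONE:
    s = advance(s)
    trace.append(s)
print([t.name for t in trace])
```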

    Dyadic collaborative manipulation formalism for optimizing human-robot teaming

    Dyadic Collaborative Manipulation (DcM) is a term we use for a team of two individuals, the agent and the partner, jointly manipulating an object. The two individuals partner together to form a distributed system, augmenting their manipulation abilities. Effective collaboration during joint action depends on: (i) the breadth of the agent's action repertoire, (ii) the level of model acquaintance between the two individuals, (iii) the ability to adapt one's own actions online to the actions of the partner, and (iv) the ability to estimate the partner's intentions and goals. Key to successfully completing co-manipulation tasks with changing goals is the agent's ability to change grasp-holds, especially when co-manipulating large objects. Hence, in this work we developed a Trajectory Optimization (TO) method that enhances the action repertoire of robotic agents by enabling them to plan and execute hybrid motions, i.e. motions that include discrete contact transitions, continuous trajectories and force profiles. The effectiveness of the TO method is investigated numerically and in simulation, in a number of manipulation scenarios with both a single and a bimanual robot. Transitions from free motion to contact are a challenging problem in robotics, in part due to their hybrid nature, and disregarding the effects of impacts at the motion planning level often results in intractable impulsive contact forces. To address this challenge, we introduce an impact-aware multi-mode TO method that combines hybrid dynamics and hybrid control in a coherent fashion. A key concept in our approach is the incorporation of an explicit contact force transmission model into the TO method, which allows the simultaneous optimization of contact forces, contact timings, continuous motion trajectories and compliance while satisfying task constraints.
To demonstrate the benefits of our method, we compared it against standard compliance control and an impact-agnostic TO method in physical simulations, and we experimentally validated it with a robot manipulator on the task of halting a large-momentum object. Further, we propose a principled formalism for the joint planning problem in DcM scenarios and solve it holistically via model-based optimization, representing the human's behavior as task-space forces. Finding the partner-aware contact points, forces and the respective timing of grasp-hold changes is carried out by a TO method using non-linear programming. Using simulations, we investigate the optimization method's capability in terms of robot policy changes (trajectories, timings, grasp-holds) in response to potential changes of the collaborative partner's policies. We also realized, in hardware, effective co-manipulation of a large object by the human and the robot, including grasp changes as well as optimal dyadic interactions to realize the joint task. To address the online adaptation of joint motion plans in dyads, we propose an efficient bilevel formulation that combines graph search with trajectory optimization, enabling robotic agents to adapt their policy on the fly as the dyadic task changes. This method is the first to empower agents with the ability to plan online in hybrid spaces, optimizing over discrete contact locations, contact sequence patterns, continuous trajectories, and force profiles for co-manipulation tasks, which is particularly important in large-object co-manipulation tasks that require on-the-fly plan adaptation. We demonstrate in simulation and in robot experiments the efficacy of the bilevel optimization by investigating the effect of robot policy changes in response to real-time alterations of the goal.
This thesis provides insight into joint manipulation setups performed by human-robot teams. In particular, it studies computational models of joint action and exploits the uncharted hybrid action space, which is especially relevant in general manipulation and co-manipulation tasks. It contributes towards a framework for DcM capable of planning motions in the contact-force space, realizing these motions while considering impacts and joint action relations, and adapting these motion plans on the fly with respect to changes of the co-manipulation goals.
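
The flavor of direct trajectory optimization used throughout such work can be illustrated on a toy problem: a 1D path with fixed endpoints chosen to minimize the sum of squared discrete accelerations, posed as linear least squares. This is a pedagogical sketch, far simpler than the hybrid, contact-aware formulations described above:

```python
import numpy as np

def min_accel_trajectory(x0, xT, n=11):
    """Find the n-point 1D path with fixed endpoints that minimizes the sum
    of squared discrete accelerations, posed as a linear least-squares problem.

    The interior points are the decision variables; the known endpoints are
    moved to the right-hand side.
    """
    m = n - 2
    A = np.zeros((m, m))
    b = np.zeros(m)
    for i in range(m):              # discrete acceleration at interior node i+1
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        else:
            b[i] -= x0              # endpoint term moves to the RHS
        if i < m - 1:
            A[i, i + 1] = 1.0
        else:
            b[i] -= xT
    x_int, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.concatenate(([x0], x_int, [xT]))

x = min_accel_trajectory(0.0, 1.0)
print(np.round(x, 3))  # the straight line 0.0, 0.1, ..., 1.0
```

With only the endpoint positions constrained, the minimum-acceleration path is the straight line; richer TO formulations add dynamics, contact and force constraints on top of exactly this kind of transcription.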

    Development of an Underground Mine Scout Robot

    Despite increased safety and improved technology in the mining industry, fatal disasters still occur. Robots have the potential to be an invaluable resource for search and rescue teams when scouting dangerous or difficult situations, yet existing underground mine search and rescue robots have demonstrated limited success. The two primary concerns identified in the literature are unreliable locomotion systems and a lack of consideration for the underground mine environment. HADES, an underground mine disaster scout, addresses these issues with a unique chassis and novel locomotion. A system-level design is carried out, addressing the difficulties of underground mine environments. To operate in an explosive atmosphere, a purge and pressurisation system is applied to a fibreglass chassis, with intrinsic safety incorporated into the sensor design. To prevent dust, dirt and water from damaging the electronics, ingress protection is provided through sealing. The chassis is invertible, with a low centre of gravity and a roll-axis pivot; in combination with spoked wheels, this design allows traversal of the debris and rubble of a disaster site. Electrochemical gas sensors are incorporated, along with RGB-D cameras, two-way audio and various other environment sensors. A communication system combining a tether and a mesh network is designed, with wireless nodes to increase wireless range and reliability. Electronic hardware and software control are implemented to produce an operational scout robot. HADES is 0.7 × 0.6 × 0.4 m, with a sealed IP65 chassis. The locomotion system is robust and effective, able to traverse most debris and rubble, as tested on the university grounds and at a clean landfill. Bottoming out is the only problem encountered, and it can be avoided by approaching obstacles correctly. The motor drive system can drive HADES at walking speed (1.4 m/s) and provides more torque than traction allows.
Six lithium-polymer batteries enable 2 hours 28 minutes of continuous operation. At 20 kg and ~$7000, HADES is a portable, inexpensive scout robot for underground mine disasters.
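
For reference, the stated runtime and top speed imply a maximum traverse distance. This is a back-of-envelope calculation derived from the figures above, not a measured result:

```python
# Derived from the stated specs: 2 h 28 min runtime, 1.4 m/s walking speed.
runtime_s = 2 * 3600 + 28 * 60        # 8880 s of continuous operation
speed_ms = 1.4                        # m/s, walking speed
range_km = speed_ms * runtime_s / 1000
print(runtime_s, round(range_km, 1))  # ~12.4 km if driven at full speed
```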