
    Image-Based Flexible Endoscope Steering

    Manually steering the tip of a flexible endoscope through an endoluminal path relies on the physician's dexterity and experience. In this paper we present the realization of a robotic flexible endoscope steering system that uses the endoscopic images to control the tip orientation towards the direction of the lumen. Two image-based control algorithms are investigated: one based on optical flow and the other on image intensity. Both are evaluated in simulations in which the endoscope was steered through the lumen; the RMS distance to the lumen center was less than 25% of the lumen width. An experimental setup was built using a standard flexible endoscope, and the image-based control algorithms were used to actuate the wheels of the endoscope for tip steering. Experiments were conducted in an anatomical model to simulate gastroscopy. The image intensity-based algorithm was capable of accurately steering the endoscope tip through an endoluminal path from the mouth to the duodenum. Compared to manual control, the robotically steered endoscope performed 68% better in terms of keeping the lumen centered in the image.
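
    As a rough illustration of the intensity-based strategy described above, the sketch below steers toward the centroid of the darkest image region, on the assumption that the lumen appears dark because distant tissue reflects little light. The percentile threshold, gain, and wheel interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

K_P = 0.005  # proportional gain, pixel error -> wheel command (assumed)

def lumen_center(gray: np.ndarray, percentile: float = 5.0) -> np.ndarray:
    """Centroid of the darkest pixels, taken as the lumen direction."""
    threshold = np.percentile(gray, percentile)
    ys, xs = np.nonzero(gray <= threshold)
    return np.array([xs.mean(), ys.mean()])

def steering_command(gray: np.ndarray) -> np.ndarray:
    """Proportional command driving the image center toward the lumen."""
    h, w = gray.shape
    error = lumen_center(gray) - np.array([w / 2.0, h / 2.0])
    return K_P * error  # hypothetical (pan, tilt) wheel velocities

# Synthetic frame: bright tissue with a dark "lumen" up and to the right.
frame = np.full((480, 640), 200.0)
frame[60:260, 380:580] = 10.0
print(steering_command(frame))  # positive x, negative y: steer toward it
```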

    A study on selecting the method of constructing the information to be exchanged in unlimited-workspace bilateral teleoperation

    In this paper, a study on selecting the mapping method for information exchange in unlimited-workspace teleoperation is presented. Although in most bilateral teleoperation systems the master sends motion demands and receives interaction forces from the slave, in this study the information exchanged over the communication line between master and slave is chosen to be force in both directions. This approach is expected to ease navigation when a limited-workspace master system is used to control an unlimited-workspace slave system. As the unlimited-workspace slave system, a virtual flying scalpel is used, and human skin is modeled to represent the environment around the slave. Two methods of constructing the force information, in other words of mapping information between the two systems, are developed and evaluated via user studies. Only one of them turned out to provide acceptable results in the selected unlimited-workspace teleoperation task; experimental results for that method are presented.
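
    A minimal sketch of the underlying idea, force in both directions, is given below: the operator's force on the limited-workspace master commands a slave velocity (so the reachable slave workspace is unbounded), and the environment force at the slave is reflected back. The gains, the admittance-type slave model, and the toy wall environment are assumptions; the paper's two specific mapping methods are not reproduced here.

```python
import numpy as np

K_V = 0.02   # (m/s)/N: operator force to slave velocity command (assumed)
K_F = 0.5    # scale of the environment force reflected back (assumed)
B = 50.0     # N·s/m: admittance damping of the virtual slave (assumed)
DT = 0.001   # s, control period

def teleop_step(f_operator, x_slave, environment_force):
    """One cycle: force flows in both directions across the channel."""
    f_env = np.asarray(environment_force(x_slave))
    v = K_V * np.asarray(f_operator) + f_env / B   # admittance-type slave
    x_slave = x_slave + v * DT                     # workspace is unbounded
    return x_slave, K_F * f_env                    # force sent to the master

def wall(x, k=2000.0):
    """Toy environment: a stiff virtual wall at x = 0.1 m."""
    depth = max(0.0, x[0] - 0.1)
    return np.array([-k * depth, 0.0, 0.0])

x = np.zeros(3)
for _ in range(10000):                             # push with 5 N for 10 s
    x, f_fb = teleop_step([5.0, 0.0, 0.0], x, wall)
print(x, f_fb)  # slave rests just inside the wall; ~2.5 N reflected
```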

    Vision based virtual fixture generation for teleoperated robotic manipulation

    In this paper we present a vision-based system for online virtual fixture generation suitable for manipulation tasks using remotely controlled robots. The system uses a stereo camera pair that provides accurate pose estimates of parts in the robot's surroundings via feature detection algorithms. The proposed approach allows fast adaptation of the teleoperation system to different manipulation tasks without the need for tedious reimplementation of virtual constraints. Our main goal is to improve the efficiency of bilateral teleoperation systems by reducing the effort the human operator spends programming the system. With this method, virtual guidance does not need to be programmed a priori; it can instead be generated dynamically on the fly and updated at any time, making the system suitable for unstructured environments. In addition, the methodology is easily adaptable to any teleoperation system, since it is independent of the master/slave robots used. To validate our approach we performed a series of experiments in an emulated industrial scenario and show that a generic telemanipulation task can be accomplished easily without affecting the transparency of the system.
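
    The sketch below illustrates one common form such an online-generated fixture can take, assuming the stereo pipeline already returns a grasp point in robot coordinates: a line fixture from an approach point to the estimated grasp point, with a stiffness pulling the tool onto the line while motion along it stays free. The geometry and stiffness are assumptions, not the paper's fixture definition.

```python
import numpy as np

K_FIX = 400.0  # N/m, stiffness pulling the tool onto the fixture (assumed)

def line_fixture_force(tool_pos, anchor, grasp_point):
    """Attract the tool toward the line from `anchor` to `grasp_point`,
    leaving motion along the line unconstrained."""
    d = grasp_point - anchor
    d = d / np.linalg.norm(d)
    r = tool_pos - anchor
    closest = anchor + np.dot(r, d) * d  # projection onto the fixture line
    return K_FIX * (closest - tool_pos)  # purely lateral restoring force

# The fixture is regenerated whenever vision updates the pose estimate.
anchor = np.array([0.0, 0.0, 0.3])   # approach start (assumed)
grasp = np.array([0.5, 0.2, 0.1])    # grasp point from the stereo system
tool = np.array([0.25, 0.05, 0.25])
print(line_fixture_force(tool, anchor, grasp))
```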

    Enhancing bilateral teleoperation using camera-based online virtual fixtures generation

    In this paper we present an interactive system that enhances bilateral teleoperation through online virtual fixture generation and task switching. This is achieved using a stereo camera system that provides accurate information about the robot's surroundings and the tasks to be performed in them. The proposed approach aims to improve the performance of bilateral teleoperation systems by reducing the human operator's workload and increasing both implementation and execution efficiency. With our method, virtual guidance does not need to be programmed a priori; it can instead be generated and updated automatically, making the system suitable for unstructured environments. We strengthen the proposed method with passivity control in order to switch safely between different tasks while teleoperating under active constraints. A series of experiments emulating real industrial scenarios shows that switches between multiple tasks can be achieved and handled passively and safely by the system.
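
    One standard way to make such task switching passive is an energy tank that gates the fixture force, sketched below under assumed tank parameters; the paper's passivity controller is more elaborate than this.

```python
import numpy as np

E_MAX = 2.0  # J, tank capacity (assumed)

class EnergyTank:
    """Gates guidance forces so that fixture switches remain passive."""

    def __init__(self, e0: float = 1.0):
        self.energy = e0  # J currently stored

    def gate(self, f_fixture, tool_velocity, dt):
        """Render the force only if the tank can pay for the energy it
        would inject this step; harvest energy when the fixture brakes."""
        p_out = float(np.dot(f_fixture, tool_velocity))  # power to the tool
        if p_out > 0.0 and self.energy < p_out * dt:
            return np.zeros_like(f_fixture)  # tank empty: mute the fixture
        self.energy = min(E_MAX, self.energy - p_out * dt)
        return np.asarray(f_fixture)

tank = EnergyTank()
v = np.array([0.1, 0.0, 0.0])
f_old = np.array([-5.0, 0.0, 0.0])  # old fixture braking the tool
f_new = np.array([8.0, 0.0, 0.0])   # new fixture suddenly pushing
print(tank.gate(f_old, v, 0.001), tank.energy)  # rendered; tank refills
print(tank.gate(f_new, v, 0.001), tank.energy)  # rendered while energy lasts
```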

    Haptic guidance for microrobotic intracellular injection

    The ability of a bio-operator to use a haptic device to manipulate a microrobot for intracellular injection offers immense benefits. One significant benefit is that the bio-operator can receive haptic guidance while performing the injection. To address this, this paper investigates the use of haptic virtual fixtures for cell injection and proposes a novel force field virtual fixture. The guidance force felt by the bio-operator is determined by force field analysis within the virtual fixture. The proposed force field virtual fixture assists the bio-operator during intracellular injection by limiting the micropipette tip's motion to a conical volume as well as recommending the desired path for optimal injection. A virtual fixture plane is also introduced to prevent the bio-operator from moving the micropipette tip beyond the deposition target inside the cell. Simulation results demonstrate the operation of the guidance system.
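
    The sketch below illustrates a force field of the kind described, under assumed geometry and gains: the tip is pushed back toward a cone whose apex sits at the deposition target, and a plane through the target blocks overshoot. It is not the paper's force-field formulation.

```python
import numpy as np

K_CONE = 300.0    # N/m, cone wall stiffness (assumed)
K_PLANE = 1000.0  # N/m, overshoot plane stiffness (assumed)
HALF_ANGLE = np.deg2rad(15.0)  # cone half-angle (assumed)

def fixture_force(tip, apex, axis):
    """Force keeping `tip` inside a cone with its apex at the deposition
    target and opening along `axis`, and behind the plane at the apex."""
    axis = axis / np.linalg.norm(axis)
    r = tip - apex
    along = np.dot(r, axis)          # depth in front of the target
    if along <= 0.0:                 # beyond the target: plane pushes back
        return -K_PLANE * along * axis
    radial_vec = r - along * axis
    radial = np.linalg.norm(radial_vec)
    max_radial = along * np.tan(HALF_ANGLE)  # cone radius at this depth
    if radial <= max_radial:
        return np.zeros(3)           # inside the cone: motion is free
    return -K_CONE * (radial - max_radial) * radial_vec / radial

apex = np.zeros(3)  # deposition target inside the cell (assumed)
print(fixture_force(np.array([0.02, 0.0, 0.05]), apex, np.array([0.0, 0.0, 1.0])))
```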

    Reinforcement Learning-based Virtual Fixtures for Teleoperation of Hydraulic Construction Machine

    Teleoperation is a crucial tool in the construction industry, as it enables operators to control machines safely from a distance. However, remotely operating these machines at the joint level using individual joysticks requires extensive training, owing to their multiple degrees of freedom, and the machine's resulting motion can only be verified after execution, which makes optimal control challenging. To address this issue, this study proposes a reinforcement learning-based approach to optimize task performance. The control policy acquired through learning is used to provide guidance on efficiently controlling and coordinating multiple joints. To evaluate the effectiveness of the proposed framework, a user study was conducted with a Brokk 170 construction machine on a typical construction task: inserting a chisel into a borehole. The framework is evaluated by comparing participants' performance with and without the virtual fixtures. The results demonstrate the proposed framework's potential to enhance teleoperation in the construction industry.
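
    The sketch below illustrates how a learned policy can drive such a virtual fixture; the policy here is a hand-written stand-in for the trained network, and the gains and target configuration are assumptions for illustration.

```python
import numpy as np

K_GUIDE = 2.0  # guidance stiffness per joint (assumed)

def policy(joint_state: np.ndarray) -> np.ndarray:
    """Stand-in for the trained policy: drive the joints toward a fixed
    chisel-insertion configuration (illustrative only)."""
    target = np.array([0.8, -0.4, 1.2, 0.0])  # assumed configuration
    return np.clip(target - joint_state, -0.2, 0.2)  # rad/s suggestions

def fixture_feedback(joint_state, operator_cmd):
    """Per-joystick guidance biasing the operator toward the policy's
    coordinated joint-velocity suggestion."""
    return K_GUIDE * (policy(joint_state) - np.asarray(operator_cmd))

q = np.array([0.5, 0.0, 1.0, 0.3])
print(fixture_feedback(q, [0.0, 0.0, 0.0, 0.0]))
```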

    Mobility Experiments With Microrobots for Minimally Invasive Intraocular Surgery

    Purpose: To investigate microrobots as an assistive tool for minimally invasive intraocular surgery and to demonstrate mobility and controllability inside the living rabbit eye. Methods: A system for wireless magnetic control of untethered microrobots was developed. Mobility and controllability of a microrobot were examined in different media, specifically vitreous, balanced salt solution (BSS), and silicone oil, through ex vivo and in vivo animal experiments. Results: The developed electromagnetic system enables precise control of magnetic microrobots over a workspace that covers the posterior eye segment, and allows rotation and translation of the microrobot in the different media (vitreous, BSS, silicone oil) inside the eye. Conclusions: Intravitreal introduction of untethered mobile microrobots can enable sutureless and precise ophthalmic procedures. Ex vivo and in vivo experiments demonstrate that microrobots can be manipulated inside the eye. Potential applications are targeted drug delivery for maculopathies such as AMD, intravenous deployment of anticoagulation agents for retinal vein occlusion (RVO), and mechanical applications such as epiretinal membrane (ERM) peeling. The technology has the potential to reduce the invasiveness of ophthalmic surgery and to assist in the treatment of a variety of ophthalmic diseases.
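
    The physics that makes such wireless control possible is the wrench on a magnetic dipole: a torque m × B that aligns the microrobot with the field and a force (m · ∇)B that pulls it along the field gradient (the two force expressions coincide with ∇(m · B) in a current-free region). The sketch below evaluates both for illustrative field values; it is not the system's electromagnet model.

```python
import numpy as np

def magnetic_wrench(m, B, grad_B):
    """Torque and force on a dipole `m` (A·m²) in a field `B` (T) with
    gradient `grad_B[i, j] = dB_i/dx_j` (T/m)."""
    torque = np.cross(m, B)  # aligns the microrobot with the field
    force = grad_B @ m       # (m · grad) B, pulls it along the gradient
    return torque, force

m = np.array([0.0, 0.0, 1e-6])      # dipole moment along +z (assumed)
B = np.array([5e-3, 0.0, 5e-3])     # 5 mT field with a lateral component
grad_B = np.diag([0.1, 0.1, -0.2])  # T/m; trace-free, so divergence-free
print(magnetic_wrench(m, B, grad_B))
```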

    Emerging Challenges in Technology-based Support for Surgical Training

    This paper stipulates several technological research and development thrusts that can assist modern approaches to simulated training for minimally invasive laparoscopic and robotic surgery. Basic tenets of such training are explained, and specific areas of research are enumerated. Specifically, augmented and mixed reality are proposed as a means of improving perceptual and clinical decision-making skills, and haptics is proposed as a mechanism not only to provide force feedback and guidance but also to convey the tactile feel of surgery in simulated training scenarios. Learning optimization is discussed as a way to fine-tune the difficulty levels of the various exercises. All of these elements can serve as the foundation for computer-based virtual coaching environments that reduce training costs and provide broader access to learning highly complex, technology-driven surgical techniques.

    Learning to perform a new movement with robotic assistance: comparison of haptic guidance and visual demonstration

    BACKGROUND: Mechanical guidance with a robotic device is a candidate technique for teaching people desired movement patterns in motor rehabilitation, surgery, and sports training, but it is unclear how effective this approach is compared to visual demonstration alone. Further, little is known about the motor learning and retention involved in either robot-mediated mechanical guidance or visual demonstration alone. METHODS: Healthy subjects (n = 20) attempted to reproduce a novel three-dimensional path after practicing it with mechanical guidance from a robot. Subjects viewed their arm as the robot guided it, so this "haptic guidance" training condition provided both somatosensory and visual input. Learning was compared to reproducing the movement after only visually observing the robot move along the path, with the hand in the lap (the "visual demonstration" training condition). Retention was assessed periodically by instructing the subjects to reproduce the path without robotic demonstration. RESULTS: Subjects improved in their ability to reproduce the path following practice in both the haptic guidance and visual demonstration training conditions, as evidenced by a 30–40% decrease in spatial error across 126 movement attempts in each condition. Performance gains were not significantly different between the two techniques, although there was a nearly significant trend for the visual demonstration condition to outperform the haptic guidance condition (p = 0.09). The 95% confidence interval of the mean difference between the techniques was at most 25% of the absolute error in the last cycle. When asked to reproduce the path repeatedly following either training condition, the subjects' performance degraded significantly over the course of a few trials. The tracing errors were not random but were consistent with a systematic evolution toward another path, as if being drawn to an "attractor path". CONCLUSION: These results indicate that both forms of robotic demonstration can improve short-term performance of a novel desired path. The availability of both haptic and visual input during the haptic guidance condition did not significantly improve performance compared to the visual input alone of the visual demonstration condition. Further, the motor system is inclined to repeat its previous mistakes after just a few movements without robotic demonstration, but these systematic errors can be reduced with periodic training.
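
    The paper's exact spatial-error metric is not given above, so as an assumed illustration the sketch below scores a reproduced path by its mean nearest-neighbor distance to the target path.

```python
import numpy as np

def spatial_error(reproduced: np.ndarray, target: np.ndarray) -> float:
    """Mean nearest-neighbor distance from `reproduced` (Nx3) points to
    the `target` (Mx3) path."""
    diffs = reproduced[:, None, :] - target[None, :, :]  # N x M x 3
    dists = np.linalg.norm(diffs, axis=2)                # N x M
    return float(dists.min(axis=1).mean())

# Toy example: a circular target path traced 5% too wide.
t = np.linspace(0.0, 2.0 * np.pi, 200)
target = np.column_stack([np.cos(t), np.sin(t), np.zeros_like(t)])
reproduced = 1.05 * target
print(spatial_error(reproduced, target))  # about 0.05
```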