
    Stereoscopic camera and viewing systems with undistorted depth presentation and reduced or eliminated erroneous acceleration and deceleration perceptions, or with perceptions produced or enhanced for special effects

    Methods for providing stereoscopic image presentation and stereoscopic camera configurations using viewing systems with converged or parallel cameras may be set up to reduce or eliminate erroneously perceived accelerations and decelerations by proper selection of parameters such as the image magnification factor, q, and the intercamera distance, 2w. For converged cameras, q is selected to satisfy Ve - qwl = 0, i.e. q = Ve/(wl), where V is the camera distance, e is half the interocular distance of an observer, w is half the intercamera distance, and l is the actual distance from the first nodal point of each camera to the convergence point; for parallel cameras, q is selected to be equal to e/w. While converged cameras cannot be set up to provide fully undistorted three-dimensional views, they can be set up to provide a linear relationship between real and apparent depth and thus minimize erroneously perceived accelerations and decelerations for three sagittal planes, x = -w, x = 0, and x = +w, which are indicated to the observer. Parallel cameras can be set up to provide fully undistorted three-dimensional views by controlling the location of the observer and by magnification and shifting of the left and right images. In addition, the teachings of this disclosure can be used to provide methods of stereoscopic image presentation and stereoscopic camera configurations that produce a nonlinear relation between perceived and real depth, and deliberately produce or enhance perceived accelerations and decelerations in order to provide special effects for entertainment, training, or educational purposes.
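
    As an illustration of the parameter selection described above, a minimal sketch that computes the magnification factor q for both camera configurations; the numerical values in the example are hypothetical, not taken from the disclosure.

```python
# Sketch: selecting the image magnification factor q, following the
# conditions stated in the abstract. All numerical values below are
# hypothetical examples.

def q_converged(V, e, w, l):
    """Converged cameras: choose q so that V*e - q*w*l = 0, i.e. q = V*e / (w*l).

    V: camera distance, e: half the observer's interocular distance,
    w: half the intercamera distance, l: distance from each camera's
    first nodal point to the convergence point.
    """
    return (V * e) / (w * l)

def q_parallel(e, w):
    """Parallel cameras: choose q = e / w."""
    return e / w

if __name__ == "__main__":
    # Hypothetical setup, distances in metres.
    V, e, w, l = 2.0, 0.032, 0.040, 2.2
    print(f"converged cameras: q = {q_converged(V, e, w, l):.3f}")
    print(f"parallel cameras:  q = {q_parallel(e, w):.3f}")
```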

    A continuum robotic platform for endoscopic non-contact laser surgery: design, control, and preclinical evaluation

    The application of laser technologies in surgical interventions has been accepted in the clinical domain due to their atraumatic properties. In addition to the manual application of fibre-guided lasers with tissue contact, non-contact transoral laser microsurgery (TLM) of laryngeal tumours has become established in ENT surgery. However, TLM requires many years of surgical training for tumour resection in order to preserve the function of adjacent organs and thus the patient's quality of life. The positioning of the microscopic laser applicator outside the patient can also impede a direct line of sight to the target area due to anatomical variability and limit the working space. Further clinical challenges include positioning the laser focus on the tissue surface, imaging, planning and performing laser ablation, and motion of the target area during surgery. This dissertation aims to address the limitations of TLM through robotic approaches and intraoperative assistance. Although a trend towards minimally invasive surgery is apparent, no highly integrated platform for endoscopic delivery of focused laser radiation is available to date. Likewise, there are no known devices that incorporate scene information from endoscopic imaging into ablation planning and execution. To focus the laser beam close to the target tissue, this work first presents miniaturised focusing optics that can be integrated into endoscopic systems. Experimental trials characterise the optical properties and the ablation performance. A robotic platform based on a variable-length continuum manipulator is realised for manipulation of the focusing optics; combined with a mechatronic actuation unit, it enables movement of the endoscopic end effector in five degrees of freedom. The kinematic modelling and control of the robot are integrated into a modular framework that is evaluated experimentally. The manipulation of focused laser radiation also requires precise adjustment of the focal position on the tissue. For this purpose, visual, haptic and visual-haptic assistance functions are presented. These support the operator during teleoperation in setting an optimal working distance. Advantages of visual-haptic assistance are demonstrated in a user study. The system performance and usability of the overall robotic system are assessed in an additional user study. Analogous to a clinical scenario, the subjects follow predefined target patterns with a laser spot; the mean positioning accuracy of the spot is 0.5 mm. Finally, methods of image-guided robot control are introduced to automate laser ablation. Experiments confirm a positive effect of the proposed automation concepts on non-contact laser surgery.
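
    The abstract does not detail the kinematic model itself. As a rough illustration, the sketch below uses the constant-curvature assumption commonly applied to continuum manipulators to map one segment's curvature, bending-plane angle and (variable) length to its tip position; this is a generic model, not the dissertation's actual framework, and all parameter values are hypothetical.

```python
import numpy as np

# Sketch: tip position of a single continuum segment under the widely used
# constant-curvature assumption. Generic illustration only; the
# dissertation's own kinematic model and parameters may differ.

def segment_tip_position(kappa, phi, length):
    """Tip position (x, y, z) of one constant-curvature segment.

    kappa  : curvature in 1/m (0 -> straight segment)
    phi    : bending-plane angle about the base z-axis, in rad
    length : arc length of the segment in m (variable for this robot)
    """
    if abs(kappa) < 1e-9:
        return np.array([0.0, 0.0, length])
    r = 1.0 / kappa                       # bending radius
    theta = kappa * length                # total bending angle
    x_plane = r * (1.0 - np.cos(theta))   # in-plane lateral offset
    z = r * np.sin(theta)                 # height along the base axis
    return np.array([x_plane * np.cos(phi), x_plane * np.sin(phi), z])

# Example: a 60 mm segment bent at 10 1/m curvature in the plane phi = 0.4 rad.
print(segment_tip_position(kappa=10.0, phi=0.4, length=0.06))
```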

    Intuitive Robot Teleoperation Based on Haptic Feedback and 3D Visualization

    Robots are required in many jobs. Jobs involving tele-operation may be very challenging and often require reaching a destination quickly and with minimal collisions. In order to succeed in these jobs, human operators are asked to tele-operate a robot manually through a user interface. The design of the user interface, and of the information provided in it, therefore becomes a critical element for the successful completion of robot tele-operation tasks. Effective and timely robot tele-navigation mainly relies on the intuitiveness provided by the interface and on the richness and presentation of the feedback given. This project investigated the use of both haptic and visual feedback in a user interface for robot tele-navigation. The aim was to overcome some of the limitations observed in state-of-the-art works, turning what is sometimes described as contrasting feedback into an added value that improves tele-navigation performance. The key issue is to combine different human sensory modalities in a coherent way and to benefit from 3-D vision too. The proposed approach was inspired by how visually impaired people use walking sticks to navigate. Haptic feedback may provide helpful input for a user to comprehend distances to surrounding obstacles and information about the obstacle distribution. This was achieved by relying entirely on on-board range sensors and by processing their readings through a simple scheme that regulates the magnitude and direction of the environmental force feedback provided to the haptic device. A specific algorithm was also used to render the distribution of very close objects and provide appropriate touch sensations. Scene visualization was provided by the system and shown to the user coherently with the haptic sensation. Different visualization configurations, from multi-viewpoint observation to 3-D visualization, were proposed and rigorously assessed through experimentation to understand the advantages of the proposed approach and the performance variations among different 3-D display technologies. Over twenty users were invited to participate in a usability study composed of two major experiments. The first experiment focused on a comparison between the proposed haptic-feedback strategy and a typical state-of-the-art approach; it included testing with multi-viewpoint visual observation. The second experiment investigated the performance of the proposed haptic-feedback strategy when combined with three different stereoscopic-3D visualization technologies. The results from the experiments were encouraging and showed good performance with the proposed approach and an improvement over literature approaches to haptic feedback in robot tele-operation. It was also demonstrated that 3-D visualization can be beneficial for robot tele-navigation and will not conflict with haptic feedback if it is properly aligned with it. Performance may vary with different 3-D visualization technologies, which is also discussed in the presented work.
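
    The force-feedback scheme is described only at a high level. The sketch below is one plausible interpretation: every range reading closer than a threshold contributes a repulsive component whose magnitude grows as the distance shrinks. The function names, gains and thresholds are hypothetical, not the thesis's actual scheme.

```python
import math

# Sketch of a distance-based force-feedback scheme for tele-navigation.
# Each range reading closer than d_max pushes the haptic device away
# from the obstacle, harder as the obstacle gets closer. Gains,
# thresholds and names are hypothetical.

def repulsive_force(scan, d_max=1.5, k=0.8):
    """scan: list of (distance_m, bearing_rad) pairs from on-board range sensors.
    Returns a 2D force (fx, fy) to render on the haptic device."""
    fx, fy = 0.0, 0.0
    for d, bearing in scan:
        if 0.0 < d < d_max:
            magnitude = k * (1.0 / d - 1.0 / d_max)   # grows as d shrinks
            # Push away from the obstacle, i.e. opposite to its bearing.
            fx -= magnitude * math.cos(bearing)
            fy -= magnitude * math.sin(bearing)
    return fx, fy

# Example: obstacles at 0.4 m straight ahead and 1.0 m to the left.
print(repulsive_force([(0.4, 0.0), (1.0, math.pi / 2)]))
```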

    Design and implementation of a vision system for microassembly workstation

    The rapid development of micro/nano technologies and the evolution of biotechnology have led to research on assembling micro components into complex microsystems and on the manipulation of cells, genes and similar biological components. In order to develop advanced inspection/handling systems and methods for the manipulation and assembly of micro products and micro components, robust micromanipulation and microassembly strategies can be implemented on a high-speed, repeatable, reliable, reconfigurable, robust and open-architecture microassembly workstation. Due to high accuracy requirements and the specific mechanical and physical laws which govern the microscale world, micromanipulation and microassembly tasks require robust control strategies based on real-time sensory feedback. Vision, as a passive sensor, can yield high-resolution views of micro objects and micro scenes when combined with a stereoscopic optical microscope. Visual data contains useful information for micromanipulation and microassembly tasks and can be processed using various image processing and computer vision algorithms. In this thesis, initial work on the design and implementation of a vision system for a microassembly workstation is introduced. Both software and hardware issues are considered. Emphasis is put on the implementation of computer vision algorithms and vision-based control techniques, which help to build a strong basis for the vision part of the microassembly workstation. The main goal of designing such a vision system is to perform automated micromanipulation and microassembly tasks for a variety of applications. Experiments with teleoperated and semi-automated tasks, which aim to manipulate micro particles manually or automatically with a microgripper or probe as the manipulation tool, show quite promising results.
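
    The specific vision algorithms are not named in the abstract. As a rough illustration of one basic building block of vision-based feedback at this scale, the sketch below localises a dark micro object in a microscope image by Otsu thresholding and blob detection and returns the pixel error to a target image position; it is OpenCV-based and every parameter value is a hypothetical placeholder.

```python
import cv2
import numpy as np

# Sketch: locate a micro object in a microscope image and compute the pixel
# error to a desired image position, one basic building block of
# vision-based micromanipulation. Thresholds and sizes are hypothetical.

def object_centroid(gray_image, min_area=50):
    """Return the (x, y) centroid of the largest dark blob, or None."""
    _, mask = cv2.threshold(gray_image, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    contours = [c for c in contours if cv2.contourArea(c) >= min_area]
    if not contours:
        return None
    largest = max(contours, key=cv2.contourArea)
    m = cv2.moments(largest)
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])

def image_error(gray_image, target_px):
    """Pixel error between the detected object and a target image point."""
    centroid = object_centroid(gray_image)
    if centroid is None:
        return None
    return (target_px[0] - centroid[0], target_px[1] - centroid[1])

# Example with a synthetic image: a dark particle on a bright background.
img = np.full((480, 640), 220, np.uint8)
cv2.circle(img, (300, 240), 12, 0, -1)
print(image_error(img, target_px=(320, 240)))
```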

    Design and Evaluation of a Contact-Free Interface for Minimally Invasive Robotics Assisted Surgery

    Robotic-assisted minimally invasive surgery (RAMIS) is becoming increasingly common for many surgical procedures. These minimally invasive techniques offer the benefit of reduced patient recovery time, mortality and scarring compared to traditional open surgery. Teleoperated procedures have the added advantages of increased visualization and enhanced accuracy for the surgeon through tremor filtering and scaling down of hand motions. There are, however, still limitations in these techniques preventing the widespread growth of the technology. In RAMIS, the surgeon is limited in their movement by the operating console or master device, and the cost of robotic surgery is often too high to justify for many procedures. Sterility issues arise as well, as the surgeon must be in contact with the master device, preventing a smooth transition between traditional and robotic modes of surgery. This thesis outlines the design and analysis of a novel method of interaction with the da Vinci Surgical Robot. Using the da Vinci Research Kit (DVRK), an open-source research platform for the da Vinci robot, an interface was developed for controlling the robotic arms with the Leap Motion Controller. This small device uses infrared LEDs and two cameras to detect the 3D positions of the hand and fingers. The hand data is mapped to the da Vinci surgical tools in real time, providing the surgeon with an intuitive method of controlling the instruments. An analysis of the tracking workspace is provided to give a solution to occlusion issues: multiple sensors are fused in order to increase the range of trackable motion over a single sensor. Additional work involves replacing the current viewing screen with a virtual reality (VR) headset (Oculus Rift) to provide the surgeon with a stereoscopic 3D view of the surgical site without the need for a large monitor. The headset also provides the user with a more intuitive and natural method of positioning the camera during surgery, using natural motions of the head. The large master console of the da Vinci system has thus been replaced with an inexpensive vision-based tracking system and a VR headset, allowing the surgeon to operate the da Vinci Surgical Robot with more natural movements. A preliminary evaluation of the system is provided, with recommendations for future work.
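
    The hand-to-instrument mapping is only described at a high level. The sketch below shows one common way such a mapping is implemented: incremental motion with motion scaling and a first-order low-pass filter for tremor reduction. It is a generic illustration, not the DVRK or Leap Motion API, and the class name, gains and filter weight are hypothetical.

```python
import numpy as np

# Sketch: mapping tracked hand positions to surgical-tool motion with
# motion scaling and exponential smoothing for tremor reduction.
# Generic illustration; gains and names are hypothetical.

class HandToToolMapper:
    def __init__(self, scale=0.2, alpha=0.3):
        self.scale = scale          # scale hand motion down (e.g. 5:1)
        self.alpha = alpha          # low-pass filter weight (0..1)
        self._filtered = None       # filtered hand position
        self._prev = None           # previous filtered position

    def update(self, hand_pos, tool_pos):
        """hand_pos, tool_pos: 3D positions in metres; returns the new tool target."""
        hand_pos = np.asarray(hand_pos, dtype=float)
        if self._filtered is None:
            self._filtered = hand_pos.copy()
        # Exponential smoothing attenuates high-frequency tremor.
        self._filtered = self.alpha * hand_pos + (1 - self.alpha) * self._filtered
        if self._prev is None:
            self._prev = self._filtered.copy()
        delta = self._filtered - self._prev     # incremental hand motion
        self._prev = self._filtered.copy()
        return np.asarray(tool_pos, dtype=float) + self.scale * delta

mapper = HandToToolMapper()
tool = np.zeros(3)
for hand in ([0.00, 0.0, 0.0], [0.01, 0.0, 0.0], [0.02, 0.0, 0.005]):
    tool = mapper.update(hand, tool)
print(tool)
```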

    The Challenges in Modeling Human Performance in 3D Space with Fitts’ Law

    With the rapid growth in virtual reality technologies, object interaction is becoming increasingly immersive, elucidating human perception and leading to promising directions for evaluating human performance under different settings. This spike in technological growth has exponentially increased the need for a human performance metric in 3D space. Fitts' law is perhaps the most widely used human prediction model in HCI history, attempting to capture human movement in lower dimensions. Despite the collective effort towards deriving an advanced extension of a 3D human performance model based on Fitts' law, a standardized metric is still missing. Moreover, most of the extensions to date assume or limit their findings to certain settings, effectively disregarding important variables that are fundamental to 3D object interaction. In this review, we investigate and analyze the most prominent extensions of Fitts' law and compare their characteristics, pinpointing potentially important aspects for deriving a higher-dimensional performance model. Lastly, we mention the complexities and frontiers as well as potential challenges that may lie ahead. Comment: Accepted at the ACM CHI 2021 Conference on Human Factors in Computing Systems (CHI '21 Extended Abstracts).
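
    For reference, the Shannon formulation of Fitts' law commonly used in HCI is shown below, with a small helper that computes the index of difficulty and the predicted movement time. The regression coefficients a and b in the example are placeholders, not values from any of the reviewed studies.

```python
import math

# Fitts' law, Shannon formulation: MT = a + b * log2(D / W + 1),
# where D is the distance to the target, W is the target width, and
# ID = log2(D / W + 1) is the index of difficulty in bits.
# The coefficients a and b are fitted by regression for a given device
# and task; the values below are placeholders.

def index_of_difficulty(distance, width):
    return math.log2(distance / width + 1.0)

def predicted_movement_time(distance, width, a=0.1, b=0.2):
    """Predicted movement time in seconds, with placeholder a (s) and b (s/bit)."""
    return a + b * index_of_difficulty(distance, width)

# Example: a 30 cm reach to a 3 cm wide target.
print(f"ID = {index_of_difficulty(0.30, 0.03):.2f} bits, "
      f"MT = {predicted_movement_time(0.30, 0.03):.2f} s")
```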

    Multistream realtime control of a distributed telerobotic system


    Human Machine Interfaces for Teleoperators and Virtual Environments

    In March 1990, a meeting was held around the general theme of teleoperation research into virtual environment display technology. This is a collection of conference-related fragments that gives a glimpse of the potential of the following fields and how they interplay: sensorimotor performance; human-machine interfaces; teleoperation; virtual environments; performance measurement and evaluation methods; and design principles and predictive models.

    A Magnetic Laser Scanner for Endoscopic Microsurgery

    Laser scanners increase the quality of laser microsurgery, enabling fast tissue ablation with less thermal damage. Such technology is part of state-of-the-art free-beam surgical laser systems. However, laser scanning has not yet been incorporated into fiber-based lasers, a combination that has the potential to greatly improve the quality of laser microsurgery at difficult-to-reach surgical sites. Current fiber-based tissue ablations are performed in contact with the tissue, resulting in excessive thermal damage to healthy tissue in the vicinity of the ablated tissue. This is far from ideal for delicate microsurgeries, which require high-quality tissue incisions without thermal damage or char formation. Moreover, the possibility to perform scanning laser microsurgery in confined workspaces is restricted by the large size of currently available actuators, which are typically located outside the patient and require a direct line of sight to the microsurgical area. Thus, it is desirable to have the laser scanning feature in an endoscopic system to provide high incision quality at hard-to-reach surgical sites. This thesis introduces a new endoscopic laser scanner that performs 2D position control and high-speed scanning of a fiber-based laser for operation in narrow workspaces. It also presents a technology concept aimed at assisting with incision depth control during soft-tissue microsurgery. The main objective of the work is to bring the benefits of free-beam lasers to laser-based endoscopic surgery by designing an end-effector module to be placed at the distal tip of a flexible robot arm. To this end, the design and control of a magnetic laser scanner for endoscopic microsurgeries is presented. The system combines an optical fiber, electromagnetic coils, a permanent magnet and optical lenses in a compact unit for laser beam deflection. The actuation mechanism is based on the interaction between the electromagnetic field and the permanent magnet: a cantilevered optical fiber is bent by the magnetic field induced by the electromagnetic coils, which creates a magnetic torque on the permanent magnet. The magnetic laser scanner provides 2D position control and high-speed scanning of the laser beam, and includes laser focusing optics to allow non-contact incisions. A proof-of-concept device was manufactured and evaluated. It includes four electromagnetic coils and two plano-convex lenses, and has an external diameter of 13 mm. A 4 74 mm² scanning range was achieved at a 30 mm distance from the scanner tip. Computer-controlled trajectory executions demonstrated repeatable results with 75 µm precision for challenging trajectories. Frequency analysis demonstrated a stable response up to 33 Hz at the −3 dB limit. The system is able to ablate tissue substitutes with a 1940 nm wavelength surgical diode laser. A tablet-based control interface was developed for intuitive teleoperation. The performance of the proof-of-concept device is analysed through control accuracy and usability studies. Teleoperation user trials consisting of trajectory-following tasks involved 12 subjects; results demonstrated that users could achieve an accuracy of 39 µm with the magnetic laser scanner system. For minimally invasive surgeries, it is essential to perform accurate laser position control. Therefore, a model-based feed-forward position controller for the magnetic laser scanner was developed for automated trajectory execution.
First, the dynamic model of the system was identified using the electromagnet currents (input) and the laser position (output). Then, the identified model was used to perform feed-forward control. Validation experiments were performed with different trajectory types, frequencies and amplitudes. Results showed that desired trajectories can be executed in high-speed scanning mode with an accuracy better than 90 µm (1.4 mrad bending angle) for frequencies up to 15 Hz. State-of-the-art systems do not provide incision depth control, so the quality of such control relies entirely on the experience and visual perception of the surgeon. In order to provide intuitive incision depth control in endoscopic microsurgeries, a technology concept was presented for automated laser incisions of a desired depth, based on a commercial laser scanner. The technology aims at automatically controlling laser incisions based on high-level commands from the surgeon, i.e. the desired incision shape, length and depth. A feed-forward controller (i) provides commands to the robotic laser system and (ii) regulates the parameters of the laser source to achieve the desired results. The controller for the incision depth is derived from experimental data: the required energy density and the number of passes are calculated to reach the targeted depth. Experimental results demonstrate that targeted depths can be achieved with ±100 µm accuracy, which proves the feasibility of this approach. The proposed technology has the potential to facilitate the surgeon's control over laser incisions. The magnetic laser scanner enables high-speed laser positioning in narrow and difficult-to-reach workspaces, promising to bring the benefits of scanning laser microsurgery to flexible endoscopic procedures. In addition, the same technology can potentially be used for optical-fiber-based imaging, enabling, for example, the creation of a new family of scanning endoscopic OCT or hyperspectral probes.
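
    The depth controller itself is only described qualitatively. The sketch below illustrates the kind of calculation implied, choosing the number of scanning passes needed to reach a targeted incision depth from a depth-per-pass relation fitted to ablation experiments; the linear model and its coefficients are hypothetical placeholders, not values from the thesis.

```python
import math

# Sketch: choosing the number of scanning passes to reach a targeted
# incision depth, assuming a depth-per-pass model fitted from ablation
# experiments. The linear model and its coefficients are hypothetical
# placeholders, not values from the thesis.

def depth_per_pass_um(energy_density_j_cm2, c0=5.0, c1=0.8):
    """Hypothetical fitted model: ablation depth per pass (µm) vs energy density (J/cm^2)."""
    return c0 + c1 * energy_density_j_cm2

def passes_for_depth(target_depth_um, energy_density_j_cm2):
    """Number of passes needed to reach the targeted depth."""
    per_pass = depth_per_pass_um(energy_density_j_cm2)
    return math.ceil(target_depth_um / per_pass)

# Example: reach a 400 µm deep incision at 30 J/cm^2.
print(passes_for_depth(400.0, 30.0), "passes")
```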