416 research outputs found

    Autonomous Medical Needle Steering In Vivo

    The use of needles to access sites within organs is fundamental to many interventional medical procedures, both for diagnosis and treatment. Safe and accurate navigation of a needle through living tissue to an intra-tissue target is currently often challenging or infeasible due to the presence of anatomical obstacles in the tissue, high levels of uncertainty, and natural tissue motion (e.g., due to breathing). Medical robots capable of automating needle-based procedures in vivo have the potential to overcome these challenges and enable an enhanced level of patient care and safety. In this paper, we present the first medical robot that autonomously navigates a needle inside living tissue around anatomical obstacles to an intra-tissue target. Our system leverages an aiming device and a laser-patterned, highly flexible steerable needle, a type of needle capable of maneuvering along curvilinear trajectories to avoid obstacles. The autonomous robot accounts for anatomical obstacles and for uncertainty in the interaction between the needle and living tissue through replanning and control, and it accounts for respiratory motion by defining safe insertion time windows during the breathing cycle. We apply the system to lung biopsy, which is critical in the diagnosis of lung cancer, the leading cause of cancer-related death in the United States. We demonstrate successful performance of our system in multiple in vivo porcine studies, and we also demonstrate that our approach leveraging autonomous needle steering outperforms a standard manual clinical technique for lung nodule access. Comment: 22 pages, 6 figures.
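
    The respiratory-gating idea described in this abstract can be illustrated with a minimal, self-contained sketch: the needle advances only while a toy breathing signal sits in its quiet end-exhale plateau. The signal model, thresholds, and step sizes below are our own illustrative assumptions, not the paper's controller.

```python
# Minimal sketch of respiratory-gated needle insertion: advance only inside
# "safe windows" of the breathing cycle. Signal model, window thresholds, and
# step size are hypothetical illustrations, not taken from the paper.
import math

def breathing_signal(t: float, period: float = 4.0) -> float:
    """Toy chest-displacement signal, normalized to [-1, 1]."""
    return math.sin(2.0 * math.pi * t / period)

def in_safe_window(x: float, lo: float = -1.0, hi: float = -0.8) -> bool:
    """Treat the end-exhale plateau (signal near its minimum) as safe."""
    return lo <= x <= hi

def simulate(total_depth_mm: float = 30.0, step_mm: float = 0.5,
             dt: float = 0.05) -> float:
    """Advance the needle one step per tick, but only inside safe windows."""
    depth, t = 0.0, 0.0
    while depth < total_depth_mm:
        if in_safe_window(breathing_signal(t)):
            depth += step_mm   # insert during the quiet phase
        t += dt                # otherwise hold position and wait
    return t

print(f"target depth reached after {simulate():.1f} s of gated insertion")
```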

    The Universal Media Book

    We explore the integration of projected imagery with a physical book that acts as a tangible interface to multimedia data. Using a camera and projector pair, a tracking framework is presented wherein the 3D positions of planar pages are monitored as they are turned back and forth by a user, and data is correctly warped and projected onto each page at interactive rates to provide the user with an intuitive mixed-reality experience. The book pages are blank, so traditional camera-based approaches to tracking physical features on the display surface do not apply. Instead, in each frame, feature points are independently extracted from the camera and projector images and matched to recover the geometry of the pages in motion. The book can be loaded with multimedia content, including images and videos. In addition, volumetric datasets can be explored by removing a page from the book and using it as a tool to navigate through a virtual 3D volume.
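
    The per-frame match-and-warp step this abstract describes can be sketched with standard OpenCV primitives. The paper matches features between camera and projector images of the projected content itself; the function choices and simplified geometry below are our assumptions, not the authors' implementation.

```python
# Sketch of one frame of page tracking: extract features in the camera and
# projector images, match them, recover a homography, and warp the content.
# A generic OpenCV recipe, not the paper's exact pipeline.
import cv2
import numpy as np

def warp_content_to_page(cam_frame, proj_frame, content):
    g_cam = cv2.cvtColor(cam_frame, cv2.COLOR_BGR2GRAY)
    g_proj = cv2.cvtColor(proj_frame, cv2.COLOR_BGR2GRAY)
    orb = cv2.ORB_create(1000)
    k_cam, d_cam = orb.detectAndCompute(g_cam, None)
    k_proj, d_proj = orb.detectAndCompute(g_proj, None)
    if d_cam is None or d_proj is None:
        return None
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d_cam, d_proj)
    if len(matches) < 4:
        return None  # need at least 4 correspondences for a homography
    src = np.float32([k_proj[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k_cam[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    # H maps projector coordinates into the camera view of the moving page
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    h, w = cam_frame.shape[:2]
    # content warped into the camera's view of the page; a real system would
    # compose this with the page pose to drive the projector output
    return cv2.warpPerspective(content, H, (w, h))
```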

    Interactive ubiquitous displays based on steerable projection

    The ongoing miniaturization of computers and their embedding into the physical environment require new means of visual output. In the area of Ubiquitous Computing, flexible and adaptable display options are needed in order to enable the presentation of visual content in the physical environment. In this dissertation, we introduce the concepts of Display Continuum and Virtual Displays as new means of human-computer interaction. In this context, we present a realization of a Display Continuum based on steerable projection, and we describe a number of different interaction methods for manipulating this Display Continuum and the Virtual Displays placed on it.
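
    As a toy illustration of what steerable projection involves geometrically, the sketch below computes the pan/tilt angles that point a ceiling-mounted projector at a Virtual Display anchored at a 3D point. The coordinate conventions and simple gimbal model are our assumptions, not the dissertation's calibrated system.

```python
# Minimal sketch: aim a pan-tilt steerable projector at a 3D target point.
# Coordinates, mount position, and the gimbal model are illustrative only.
import math

def pan_tilt_for_target(target_xyz, projector_xyz=(0.0, 0.0, 2.5)):
    """Return (pan, tilt) in degrees pointing the optical axis at the target."""
    dx = target_xyz[0] - projector_xyz[0]
    dy = target_xyz[1] - projector_xyz[1]
    dz = target_xyz[2] - projector_xyz[2]
    pan = math.degrees(math.atan2(dy, dx))                   # rotation about z
    tilt = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # elevation angle
    return pan, tilt

print(pan_tilt_for_target((2.0, 1.0, 1.2)))  # aim at a wall-level display
```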

    Compact and kinetic projected augmented reality interface

    Thesis (S.M.) -- Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 143-150). For quite some time, researchers and designers in the field of human-computer interaction have striven to better integrate information interfaces into our physical environment. They envisioned a future where computing and interface components would be integrated into the physical environment, creating a seamless experience that uses all our senses. One possible approach to this problem employs projected augmented reality. Such systems project digital information and interfaces onto the physical world and are typically implemented using interactive projector-camera systems. This thesis work is centered on the design and implementation of a new form factor for computing, a system we call LuminAR. LuminAR is a compact and kinetic projected augmented reality interface embodied in familiar everyday objects, namely a light bulb and a task light. It allows users to dynamically augment physical surfaces and objects with superimposed digital information using gestural and multi-touch interfaces. This thesis documents LuminAR's design process, hardware and software implementation, and interaction techniques. The work is motivated through a set of applications that explore scenarios for interactive and kinetic projected augmented reality interfaces. It also opens the door for further explorations of kinetic interaction and promotes the adoption of projected augmented reality as a commonplace user interface modality. This thesis work was partially supported by a research grant from Intel Corporation. By Natan Linder. S.M.

    AwareMirror: A Personalized Display Using a Mirror

    Artefact: A UML-Based Framework for Model-Driven Development of Interactive Surface Prototypes

    MirrorForge: Rapid Prototyping of Complex Mirrors for Camera and Projector Systems

    Prevalence of haptic feedback in robot-mediated surgery : a systematic review of literature

    © 2017 Springer-Verlag. This is a post-peer-review, pre-copyedit version of an article published in Journal of Robotic Surgery. The final authenticated version is available online at: https://doi.org/10.1007/s11701-017-0763-4. With the successful uptake and inclusion of robotic systems in minimally invasive surgery, and with the increasing application of robotic surgery (RS) in numerous surgical specialities worldwide, there is now a need to develop and enhance the technology further. One such improvement is the implementation and amalgamation of haptic feedback technology into RS, which will permit the operating surgeon on the console to receive haptic information on the type of tissue being operated on. The main advantage of this is to allow the operating surgeon to feel and control the amount of force applied to different tissues during surgery, thus minimising the risk of tissue damage due to both the direct and indirect effects of excessive tissue force or tension being applied during RS. We performed a two-rater systematic review to identify the latest developments and potential avenues for improving technology in the application and implementation of haptic feedback technology for the operating surgeon on the console during RS. This review provides a summary of technological enhancements in RS, considering different stages of work, from proof of concept to cadaver tissue testing, surgery in animals, and finally real implementation in surgical practice. We identify that, at the time of this review, while there is unanimous agreement regarding the need for haptic and tactile feedback, there are no solutions or products available that address this need. There is scope and need for new developments in haptic augmentation for robot-mediated surgery with the aim of improving patient care and robotic surgical technology further. Peer reviewed.
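
    The force-control idea surveyed in this review can be made concrete with a minimal force-reflection sketch: the force sensed at the instrument tip is scaled and saturated before being rendered at the surgeon's console. The gain and limit values are illustrative assumptions, not figures from the review.

```python
# Minimal sketch of force reflection with a safety clamp. Gains and limits
# are hypothetical, chosen only to illustrate the scale-and-saturate idea.
def reflected_force(tip_force_n: float, gain: float = 0.6,
                    max_render_n: float = 4.0) -> float:
    """Scale the sensed tool-tissue force and clamp it to the device limit."""
    f = gain * tip_force_n
    return max(-max_render_n, min(max_render_n, f))

for f in (0.5, 3.0, 12.0):  # gentle touch, firm contact, excessive force
    print(f"{f:5.1f} N at tip -> {reflected_force(f):4.1f} N at console")
```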

    Experimental high-dimensional entanglement certification and quantum steering with time-energy measurements

    High-dimensional entanglement provides unique ways of transcending the limitations of current qubit-based approaches to quantum information processing and quantum communications. The generation of time-frequency qudit states offers significantly increased quantum capacities while keeping the number of photons constant, but poses significant challenges regarding the possible measurements for certification of entanglement. Here, we develop a new scheme and experimentally demonstrate the certification of 24-dimensional entanglement and 9-dimensional quantum steering. We then subject our photon pairs to dispersion conditions equivalent to transmission through 600 km of fiber and still certify 21-dimensional entanglement. Furthermore, we use a steering inequality to prove 7-dimensional entanglement in a semi-device-independent manner, showing that large chromatic dispersion is not an obstacle to distributing and certifying high-dimensional entanglement and quantum steering. Our highly scalable scheme is based on commercial telecommunication optical fiber components and recently developed low-jitter, high-efficiency single-photon detectors, thus opening new pathways towards advanced large-scale quantum information processing and high-performance, noise-tolerant quantum communications with time-energy measurements. Comment: 30 pages, 4 figures.
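
    The kind of dimensionality certification mentioned here can be made concrete with one standard fidelity-based witness from the literature (a common criterion; the paper's exact inequality may differ):

```latex
% Fidelity-based Schmidt-number witness (standard in the literature; the
% paper's exact certification procedure may differ). Target state:
% |\Phi_d\rangle = \frac{1}{\sqrt{d}} \sum_{m=0}^{d-1} |m,m\rangle.
\[
  F(\rho, \Phi_d) \;=\; \langle \Phi_d | \, \rho \, | \Phi_d \rangle
  \;>\; \frac{k}{d}
  \quad\Longrightarrow\quad \operatorname{SN}(\rho) \;\ge\; k + 1,
\]
% i.e. measuring a fidelity to the maximally entangled state above k/d
% certifies an entanglement dimensionality (Schmidt number) of at least k+1,
% since no state of Schmidt number k can exceed that fidelity bound.
```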