1,849 research outputs found

    A gaze-contingent framework for perceptually-enabled applications in healthcare

    Patient safety and quality of care remain the focus of the smart operating room of the future. Some of the most influential factors with a detrimental effect are related to suboptimal communication among the staff, poor flow of information, staff workload and fatigue, and ergonomics and sterility in the operating room. While technological developments constantly transform the operating room layout and the interaction between surgical staff and machinery, a vast array of opportunities arises for the design of systems and approaches that can enhance patient safety and improve workflow and efficiency. The aim of this research is to develop a real-time gaze-contingent framework towards a "smart" operating suite that will enhance the operator's ergonomics by allowing perceptually-enabled, touchless and natural interaction with the environment. The main feature of the proposed framework is the ability to acquire and utilise the plethora of information provided by the human visual system to allow touchless interaction with medical devices in the operating room. In this thesis, a gaze-guided robotic scrub nurse, a gaze-controlled robotised flexible endoscope and a gaze-guided assistive robotic system are proposed. Firstly, the gaze-guided robotic scrub nurse is presented; surgical teams performed a simulated surgical task with the assistance of a robot scrub nurse, which complements the human scrub nurse in the delivery of surgical instruments, following gaze selection by the surgeon. Then, the gaze-controlled robotised flexible endoscope is introduced; experienced endoscopists and novice users performed a simulated examination of the upper gastrointestinal tract using predominantly their natural gaze. Finally, a gaze-guided assistive robotic system is presented, which aims to facilitate activities of daily living.
The results of this work provide valuable insights into the feasibility of integrating the developed gaze-contingent framework into clinical practice without significant workflow disruptions.
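The "gaze selection" step above can be sketched with a simple dwell-time rule, a common mechanism for touchless interaction: an instrument is selected once the gaze rests inside its screen region for a continuous period. This is an assumed illustration of the general idea, not the thesis's actual algorithm; the region names, coordinates and timings are hypothetical.

```python
def select_by_dwell(gaze_samples, regions, dwell_time=1.0, sample_dt=0.05):
    """Return the first region fixated for >= dwell_time seconds, else None.

    gaze_samples: iterable of (x, y) screen coordinates sampled every sample_dt s.
    regions: dict name -> (xmin, ymin, xmax, ymax) bounding boxes.
    """
    needed = int(dwell_time / sample_dt)  # consecutive samples required
    current, count = None, 0
    for x, y in gaze_samples:
        hit = None
        for name, (x0, y0, x1, y1) in regions.items():
            if x0 <= x <= x1 and y0 <= y <= y1:
                hit = name
                break
        if hit is not None and hit == current:
            count += 1
            if count >= needed:
                return hit  # dwell threshold reached: trigger instrument delivery
        else:
            current, count = hit, (1 if hit is not None else 0)
    return None

# Example: 25 samples (1.25 s at 20 Hz) fixated on the "scalpel" region.
regions = {"scalpel": (0, 0, 100, 100), "forceps": (200, 0, 300, 100)}
gaze = [(50, 50)] * 25
print(select_by_dwell(gaze, regions))  # -> scalpel
```

A real system would add fixation filtering and confirmation feedback before commanding the robot, but the dwell threshold is the core safeguard against accidental selection.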

    Estimating and understanding motion: from diagnostic to robotic surgery

    Estimating and understanding motion from an image sequence is a central topic in computer vision. The high interest in this topic is because we are living in a world where many events that occur in the environment are dynamic. This makes motion estimation and understanding a natural component and a key factor in a wide spread of applications including object recognition, 3D shape reconstruction, autonomous navigation and medical diagnosis. In particular, we focus on the medical domain, in which understanding the human body for clinical purposes requires retrieving the organs' complex motion patterns, which is in general a hard problem when using only image data. In this thesis, we cope with this problem by posing the question - How to achieve a realistic motion estimation to offer a better clinical understanding? We focus this thesis on answering this question by using a variational formulation as a basis to understand one of the most complex motions in the human body, the heart motion, through three different applications: (i) cardiac motion estimation for diagnosis, (ii) force estimation and (iii) motion prediction, both for robotic surgery. Firstly, we focus on a central topic in cardiac imaging, which is the estimation of cardiac motion. The main aim is to offer objective and understandable measures to physicians to help them in the diagnosis of cardiovascular diseases. We employ ultrafast ultrasound data and tools for imaging motion drawn from diverse areas such as low-rank analysis and variational deformation to perform a realistic cardiac motion estimation. The significance is that by taking low-rank data with carefully chosen penalization, synergies in this complex variational problem can be created. We demonstrate how our proposed solution deals with complex deformations through careful numerical experiments using realistic and simulated data.
We then move from diagnostic to robotic surgeries, where surgeons perform delicate procedures remotely through robotic manipulators without directly interacting with the patients. As a result, they lack force feedback, which is an important primary sense for increasing surgeon-patient transparency and avoiding injuries and high mental workload. To solve this problem, we follow the conservation principles of continuum mechanics, in which it is clear that the change in shape of an elastic object is directly proportional to the force applied. Thus, we create a variational framework to acquire the deformation that the tissues undergo due to an applied force. Then, this information is used in a learning system to find the nonlinear relationship between the given data and the applied force. We carried out experiments with in-vivo and ex-vivo data and combined statistical, graphical and perceptual analyses to demonstrate the strength of our solution. Finally, we explore robotic cardiac surgery, which allows carrying out complex procedures including Off-Pump Coronary Artery Bypass Grafting (OPCABG). This procedure avoids the complications associated with using Cardiopulmonary Bypass (CPB), since the heart is not arrested while performing the surgery on a beating heart. Thus, surgeons have to deal with a dynamic target that compromises their dexterity and the surgery's precision. To compensate for the heart motion, we propose a solution composed of three elements: an energy function to estimate the 3D heart motion, a specular highlight detection strategy and a prediction approach for increasing the robustness of the solution. We evaluate our solution using phantom and realistic datasets.
We conclude the thesis by reporting our findings on these three applications and highlight the dependency between motion estimation and motion understanding in any dynamic event, particularly in clinical scenarios.
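The force-estimation premise above (the change in shape of an elastic object is proportional to the applied force) can be illustrated with a minimal regression sketch. This stands in for the thesis's actual learning system, which is nonlinear and far richer; the stiffness value and calibration data here are synthetic.

```python
def fit_stiffness(displacements, forces):
    """Least-squares estimate of stiffness k in the linear-elastic model
    F = k * dx (no intercept), from paired (displacement, force) samples."""
    num = sum(d * f for d, f in zip(displacements, forces))
    den = sum(d * d for d in displacements)
    return num / den

# Synthetic calibration data for a hypothetical tissue with k = 2.5 N/mm.
dx = [0.0, 1.0, 2.0, 3.0, 4.0]       # measured deformation (mm)
f  = [0.0, 2.5, 5.0, 7.5, 10.0]      # applied force (N)
k = fit_stiffness(dx, f)
print(k)        # -> 2.5
print(k * 1.6)  # predicted force for a 1.6 mm deformation -> 4.0
```

Real soft tissue is nonlinear and viscoelastic, which is precisely why the thesis replaces this single-parameter model with a learned mapping from dense deformation fields to force.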

    Combining brain-computer interfaces and assistive technologies: state-of-the-art and challenges

    In recent years, new research has brought the field of EEG-based Brain-Computer Interfacing (BCI) out of its infancy and into a phase of relative maturity through many demonstrated prototypes such as brain-controlled wheelchairs, keyboards, and computer games. With this proof-of-concept phase in the past, the time is now ripe to focus on the development of practical BCI technologies that can be brought out of the lab and into real-world applications. In particular, we focus on the prospect of improving the lives of countless disabled individuals through a combination of BCI technology with existing assistive technologies (AT). In pursuit of more practical BCIs for use outside of the lab, in this paper, we identify four application areas where disabled individuals could greatly benefit from advancements in BCI technology, namely “Communication and Control”, “Motor Substitution”, “Entertainment”, and “Motor Recovery”. We review the current state of the art and possible future developments, while discussing the main research issues in these four areas. In particular, we expect the most progress in the development of technologies such as hybrid BCI architectures, user-machine adaptation algorithms, the exploitation of users’ mental states for BCI reliability and confidence measures, the incorporation of principles in human-computer interaction (HCI) to improve BCI usability, and the development of novel BCI technology including better EEG devices.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced issues such as low accuracy in tool placement along complex trajectories, pose estimation errors, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
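The tool-to-organ collision detection mentioned above can be sketched at its simplest as a proximity check against a safety margin. This is an assumed illustration only; the surveyed systems use detailed patient-specific geometry rather than a spherical organ model, and all numbers here are hypothetical.

```python
import math

def check_clearance(tip, organ_center, organ_radius, margin):
    """Classify a tool-tip position relative to a spherical organ model:
    'collision' inside the organ, 'warning' inside the safety margin,
    'clear' otherwise. Units are arbitrary but must be consistent (e.g. mm)."""
    d = math.dist(tip, organ_center)
    if d <= organ_radius:
        return "collision"
    if d <= organ_radius + margin:
        return "warning"   # inside the safety margin: slow down, issue a haptic cue
    return "clear"

organ = (0.0, 0.0, 0.0)
print(check_clearance((0.0, 0.0, 30.0), organ, organ_radius=20.0, margin=5.0))  # -> clear
print(check_clearance((0.0, 0.0, 23.0), organ, organ_radius=20.0, margin=5.0))  # -> warning
print(check_clearance((0.0, 0.0, 15.0), organ, organ_radius=20.0, margin=5.0))  # -> collision
```

The three-state output mirrors the graded responses such systems need: a warning zone for speed limiting and feedback before a hard virtual-fixture stop at the organ surface.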

    An Outlook into the Future of Egocentric Vision

    What will the future be? We wonder! In this survey, we explore the gap between current research in egocentric vision and the ever-anticipated future, where wearable computing, with outward-facing cameras and digital overlays, is expected to be integrated in our everyday lives. To understand this gap, the article starts by envisaging the future through character-based stories, showcasing through examples the limitations of current technology. We then provide a mapping between this future and previously defined research tasks. For each task, we survey its seminal works, current state-of-the-art methodologies and available datasets, then reflect on shortcomings that limit its applicability to future research. Note that this survey focuses on software models for egocentric vision, independent of any specific hardware. The paper concludes with recommendations for areas of immediate exploration so as to unlock our path to the future always-on, personalised and life-enhancing egocentric vision.
    Comment: We invite comments, suggestions and corrections here: https://openreview.net/forum?id=V3974SUk1

    Experimental user interface design toolkit for interaction research (IDTR).

    The research reported and discussed in this thesis represents a novel approach to User Interface evaluation and optimisation through cognitive modelling. This is achieved through the development and testing of a toolkit or platform titled Toolkit for Optimisation of Interface System Evolution (TOISE). The research is conducted in two main phases. In phase 1, the Adaptive Control of Thought-Rational (ACT-R) cognitive architecture is used to design Simulated User (SU) models. This allows models of user interaction to be tested on a specific User Interface (UI). In phase 2, an evolutionary algorithm is added and used to evolve and test an optimised solution to User Interface layout based on the original interface design. The thesis presents a technical background, followed by an overview of some applications in their respective fields. The core concepts behind TOISE are introduced through a discussion of the ACT-R architecture with a focus on the ACT-R models that are used to simulate users. The notion of adding a Genetic Algorithm optimiser is introduced and discussed in terms of the feasibility of using simulated users as the basis for automated evaluation to optimise usability. The design and implementation of TOISE is presented and discussed, followed by a series of experiments that evaluate the TOISE system. While the research had to address and solve a large number of technical problems, the resulting system does demonstrate potential as a platform for automated evaluation and optimisation of user interface layouts. The limitations of the system and the approach are discussed and further work is presented. It is concluded that the research is novel and shows considerable promise in terms of feasibility and potential for optimising layout for enhanced usability.
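The evolutionary-optimisation idea behind TOISE can be illustrated with a toy genetic algorithm that reorders buttons in a row against a frequency-weighted travel cost. This is an assumed simplification standing in for the ACT-R simulated-user evaluation; the button names, frequencies and GA parameters are all hypothetical.

```python
import random

freq = {"open": 10, "save": 8, "print": 2, "help": 1}   # hypothetical use counts

def cost(layout):
    # Simulated-user surrogate: slot i is i distance units from the pointer's
    # home position at slot 0, so frequent buttons should sit near slot 0.
    return sum(freq[name] * i for i, name in enumerate(layout))

def evolve(names, generations=200, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.sample(names, len(names)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]          # elitist selection
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.sample(range(len(child)), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)

best = evolve(list(freq))
print(best)  # most frequently used button ends up first
```

TOISE's actual fitness function is a full cognitive simulation of the interaction rather than this one-line distance cost, but the evolve-evaluate-select loop has the same shape.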

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite to the registration of multi-modal patient-specific data for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
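Many of the passive stereo methods in this class rest on the standard triangulation relation Z = f * B / d for a rectified camera pair, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched surface point. A worked example, with illustrative numbers loosely in the range of a narrow-baseline stereo endoscope (not drawn from the paper):

```python
def depth_from_disparity(f_px, baseline_mm, disparity_px):
    """Depth (mm) of a surface point from its disparity in a rectified
    stereo pair: Z = f * B / d."""
    return f_px * baseline_mm / disparity_px

# f = 800 px, baseline = 4 mm, matched point with disparity d = 32 px
print(depth_from_disparity(800.0, 4.0, 32.0))   # -> 100.0 (mm)
```

The inverse relation between depth and disparity is why narrow-baseline laparoscopes lose depth resolution quickly with distance, one of the technical challenges such reviews highlight.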

    AdaptiX -- A Transitional XR Framework for Development and Evaluation of Shared Control Applications in Assistive Robotics

    With the ongoing efforts to empower people with mobility impairments and the increase in technological acceptance by the general public, assistive technologies, such as collaborative robotic arms, are gaining popularity. Yet, their widespread success is limited by usability issues, specifically the disparity between user input and software control along the autonomy continuum. To address this, shared control concepts provide opportunities to combine the targeted increase of user autonomy with a certain level of computer assistance. This paper presents the free and open-source AdaptiX XR framework for developing and evaluating shared control applications in a high-resolution simulation environment. The initial framework consists of a simulated robotic arm with an example scenario in Virtual Reality (VR), multiple standard control interfaces, and a specialized recording/replay system. AdaptiX can easily be extended for specific research needs, allowing Human-Robot Interaction (HRI) researchers to rapidly design and test novel interaction methods, intervention strategies, and multi-modal feedback techniques, without requiring an actual physical robotic arm during the early phases of ideation, prototyping, and evaluation. Also, a Robot Operating System (ROS) integration enables control of a real robotic arm in a PhysicalTwin approach without any simulation-reality gap. Here, we review the capabilities and limitations of AdaptiX in detail and present three bodies of research based on the framework. AdaptiX can be accessed at https://adaptix.robot-research.de.
    Comment: Accepted submission at The 16th ACM SIGCHI Symposium on Engineering Interactive Computing Systems (EICS'24)
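The shared-control idea along the autonomy continuum described above is often realised as a linear blend between the user's command and an autonomous controller's suggestion. The sketch below is an assumed illustration of that general form, not AdaptiX's actual API; the velocity values are hypothetical.

```python
def blend(user_cmd, auto_cmd, alpha):
    """Blend two velocity commands along the autonomy continuum.
    alpha = 0: fully manual; alpha = 1: fully autonomous."""
    return tuple(u + alpha * (a - u) for u, a in zip(user_cmd, auto_cmd))

user = (0.10, 0.00, 0.00)   # user pushes the arm along +x (m/s)
auto = (0.06, 0.04, 0.00)   # assistance steers toward an inferred grasp target
print(blend(user, auto, 0.5))   # midway along the continuum
```

In practice, alpha itself is often adapted online (e.g. raised as confidence in the inferred user goal grows), which is exactly the kind of intervention strategy such a framework lets researchers prototype without a physical arm.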