
    Augmented reality in open surgery

    Augmented reality (AR) has successfully provided surgeons with extensive visual information about surgical anatomy to assist them throughout a procedure. AR allows surgeons to view the surgical field through a superimposed 3D virtual model of anatomical details. However, open surgery presents new challenges. This study provides a comprehensive overview of the available literature on the use of AR in open surgery, in both clinical and simulated settings, with the aim of analysing current trends and solutions to help developers and end users discuss and understand the benefits and shortcomings of these systems in open surgery. We performed a PubMed search of the available literature updated to January 2018 using the terms (1) “augmented reality” AND “open surgery”, (2) “augmented reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”, (3) “mixed reality” AND “open surgery”, and (4) “mixed reality” AND “surgery” NOT “laparoscopic” NOT “laparoscope” NOT “robotic”. The aspects evaluated were: real data source, virtual data source, visualization processing modality, tracking modality, registration technique, and AR display type. The initial search yielded 502 studies. After removing duplicates and screening abstracts, a total of 13 relevant studies were retained. In 1 of the 13 studies, in vitro experiments were performed, while the remaining studies were carried out in clinical settings including pancreatic, hepatobiliary, and urogenital surgeries. AR systems in open surgery appear to be versatile and reliable tools in the operating room. However, some technological limitations need to be addressed before they can be implemented in routine practice.
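    Most of the systems this review covers share a common overlay step: once registration has produced a rigid transform, the 3D virtual model is projected into the live camera view. Below is a minimal sketch of that projection under a pinhole camera model; the intrinsics, transform, and vertices are illustrative values, not taken from any reviewed system.

```python
import numpy as np

def project_points(model_pts, R, t, K):
    """Project 3D model points (N x 3, patient/model frame) into the
    camera image using a rigid transform (R, t) and intrinsics K."""
    cam_pts = model_pts @ R.T + t          # model frame -> camera frame
    cam_pts = cam_pts[cam_pts[:, 2] > 0]   # keep points in front of the camera
    uv = cam_pts @ K.T                     # pinhole projection
    return uv[:, :2] / uv[:, 2:3]          # normalise by depth -> pixels

# Illustrative values: a 640x480 camera and an identity registration.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 0.5])   # model 0.5 m ahead of the camera
vertices = np.array([[0.0, 0.0, 0.0], [0.01, 0.0, 0.0], [0.0, 0.01, 0.0]])
print(project_points(vertices, R, t, K))      # pixel coordinates to draw over
```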

    A Joint 3D-2D based Method for Free Space Detection on Roads

    In this paper, we address the problem of road segmentation and free space detection in the context of autonomous driving. Traditional methods use either 3-dimensional (3D) cues, such as point clouds obtained from LIDAR, RADAR, or stereo cameras, or 2-dimensional (2D) cues, such as lane markings, road boundaries, and object detection. Typical 3D point clouds lack the resolution to detect fine differences in height, such as between road and pavement. Image-based 2D cues fail on uneven road textures caused by shadows, potholes, lane markings, or road restoration. We propose a novel free road space detection technique combining both 2D and 3D cues. In particular, we use CNN-based road segmentation from 2D images and plane/box fitting on sparse depth data obtained from SLAM as priors to formulate an energy minimization using a conditional random field (CRF) for road pixel classification. While the CNN learns the road texture and is unaffected by depth boundaries, the 3D information helps overcome texture-based classification failures. Finally, we use the obtained road segmentation with the 3D depth data from monocular SLAM to detect the free space for navigation purposes. Our experiments on the KITTI odometry dataset, the CamVid dataset, and videos captured by us validate the superiority of the proposed approach over the state of the art.
    Comment: Accepted for publication at IEEE WACV 201
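    The abstract does not publish the exact energy function, so the sketch below is an illustrative stand-in: a per-pixel unary term combining the CNN road probability with a penalty for labelling off-plane points as road, a Potts smoothness term over 4-neighbours, and a toy greedy ICM solver in place of a full CRF inference routine.

```python
import numpy as np

def road_energy(labels, cnn_prob, plane_dist, w_pair=0.5, w_plane=2.0):
    """CRF-style energy: unary from the CNN road probability plus a 3D
    plane-distance prior, and a 4-neighbour Potts smoothness term."""
    eps = 1e-6
    # Unary: negative log-likelihood of the chosen label under the CNN,
    # plus a penalty for calling 'road' where the reconstructed point
    # sits far off the fitted ground plane.
    p = np.where(labels == 1, cnn_prob, 1.0 - cnn_prob)
    unary = -np.log(p + eps) + w_plane * plane_dist * (labels == 1)
    # Pairwise Potts: penalise label disagreement between 4-neighbours.
    pair = (np.abs(np.diff(labels, axis=0)).sum()
            + np.abs(np.diff(labels, axis=1)).sum())
    return unary.sum() + w_pair * pair

def icm(cnn_prob, plane_dist, sweeps=5):
    """Greedy ICM: flip each pixel's label if that lowers the energy.
    Recomputes the full energy per flip, which is fine for a toy grid."""
    labels = (cnn_prob > 0.5).astype(int)
    for _ in range(sweeps):
        for idx in np.ndindex(labels.shape):
            best, best_e = labels[idx], None
            for cand in (0, 1):
                labels[idx] = cand
                e = road_energy(labels, cnn_prob, plane_dist)
                if best_e is None or e < best_e:
                    best, best_e = cand, e
            labels[idx] = best
    return labels

# Toy 4x4 example: noisy CNN output over a flat (zero-distance) plane.
rng = np.random.default_rng(0)
cnn_prob = np.clip(rng.normal(0.7, 0.2, (4, 4)), 0.0, 1.0)
plane_dist = np.zeros((4, 4))
print(icm(cnn_prob, plane_dist))
```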

    Optimization of computer-assisted intraoperative guidance for complex oncological procedures

    International Mention in the doctoral degree.
    The role of technology inside the operating room is constantly increasing, enabling surgical procedures previously considered impossible or too risky due to their complexity or limited access. These reliable tools have improved surgical efficiency and safety. Cancer treatment is one of the surgical specialties that has benefited most from these techniques, owing to its high incidence and the accuracy required to resect tumors with conservative approaches and clear margins. In many cases, however, introducing these technologies into surgical scenarios is expensive and entails complex setups that are obtrusive, invasive, and increase operative time. In this thesis, we propose convenient, accessible, reliable, and non-invasive solutions for two highly complex regions in tumor resection surgery: the pelvis and the head and neck. We explore how introducing 3D printing, surgical navigation, and augmented reality into these scenarios provides high intraoperative precision.

    First, we present a less invasive setup for osteotomy guidance in pelvic tumor resections based on small patient-specific instruments (PSIs) fabricated at low cost with a desktop 3D printer. We evaluated their accuracy in a cadaveric study following a realistic workflow and obtained results similar to previous studies with more invasive setups. We also identified the ilium as the region most prone to errors.

    We then propose surgical navigation using these small PSIs for image-to-patient registration. Artificial landmarks included in the PSIs substitute for the anatomical landmarks and bone surface commonly used in this step, which require additional bone exposure and are therefore more invasive. We also present an alternative, more convenient installation of the dynamic reference frame used to track patient movement during navigation: the frame is inserted into a socket included in the PSIs and can be attached and detached without losing precision, simplifying the installation. We validated the setup in a cadaveric study, evaluating accuracy and finding the optimal PSI configuration in the three most common scenarios for pelvic tumor resection. The results demonstrated high accuracy, with the main source of error again being incorrect placement of PSIs in regular, homogeneous regions such as the ilium.

    The main limitation of PSIs is the guidance error resulting from incorrect placement. To overcome this issue, we propose augmented reality as a tool to guide PSI installation on the patient's bone. We developed an application for smartphones and HoloLens 2 that displays the correct position intraoperatively, and we measured placement errors in a conventional phantom and in a realistic phantom that includes a silicone layer to simulate soft tissue. The results demonstrated a significant reduction of errors with augmented reality compared to freehand placement, ensuring installation of the PSI close to the target area.

    Finally, we propose three setups for surgical navigation in palate tumor resections using optical trackers and augmented reality. The tracking tools for the patient and surgical instruments were fabricated with low-cost desktop 3D printers and designed to provide less invasive setups than previous solutions. All setups presented similarly high accuracy when tested on a 3D-printed patient-specific phantom. They were then validated in a real surgical case, and one of the solutions was applied for intraoperative guidance. Postoperative results demonstrated high navigation accuracy and optimal surgical outcomes. The proposed solution enabled a conservative surgical approach with a less invasive navigation setup.

    To conclude, in this thesis we propose new setups for intraoperative navigation in two complex surgical scenarios for tumor resection. We analyze their navigation precision, defining the optimal configurations to ensure accuracy. With this, we demonstrate that computer-assisted surgery techniques can be integrated into the surgical workflow with accessible and non-invasive setups. These results are a step towards optimizing these procedures and continuing to improve surgical outcomes in complex surgical scenarios.

    Doctoral Programme in Biomedical Science and Technology, Universidad Carlos III de Madrid. Committee: President: Raúl San José Estépar; Secretary: Alba González Álvarez; Member: Simon Droui
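    The abstract does not state which algorithm the image-to-patient registration uses, so the sketch below shows the standard paired-point least-squares method (Arun et al.) that PSI-landmark registration of this kind commonly relies on; all coordinates are illustrative.

```python
import numpy as np

def paired_point_registration(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst
    (Arun/Kabsch SVD method); landmarks are given as N x 3 arrays."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t

# Illustrative landmarks (mm): PSI fiducials in image space vs. the same
# points digitised on the patient with a tracked pointer.
image_pts = np.array([[0., 0., 0.], [30., 0., 0.], [0., 40., 0.], [0., 0., 25.]])
theta = np.deg2rad(20)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
patient_pts = image_pts @ R_true.T + np.array([5., -3., 12.])
R, t = paired_point_registration(image_pts, patient_pts)
fre = np.linalg.norm(image_pts @ R.T + t - patient_pts, axis=1).mean()
print("fiducial registration error (mm):", fre)   # ~0 for noise-free data
```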

    Automated Map Reading: Image Based Localisation in 2-D Maps Using Binary Semantic Descriptors

    We describe a novel approach to image-based localisation in urban environments using semantic matching between images and a 2-D map. It contrasts with the vast majority of existing approaches, which use image-to-image database matching. We use highly compact binary descriptors to represent semantic features at locations, significantly increasing scalability compared with existing methods and offering the potential for greater invariance to variable imaging conditions. The approach is also more akin to human map reading, making it better suited to human-system interaction. The binary descriptors indicate the presence or absence of semantic features relating to buildings and road junctions in discrete viewing directions. We use CNN classifiers to detect the features in images and match descriptor estimates against a database of location-tagged descriptors derived from the 2-D map. In isolation, the descriptors are not sufficiently discriminative, but when concatenated sequentially along a route, their combination becomes highly distinctive and allows localisation even with imperfect classifiers. Performance is further improved by taking left and right turns along a route into account. Experimental results obtained using Google StreetView and OpenStreetMap data show that the approach has considerable potential, achieving localisation accuracy of around 85% over routes of approximately 200 meters.
    Comment: 8 pages, submitted to IEEE/RSJ International Conference on Intelligent Robots and Systems 201
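    The paper's exact descriptor layout is not given in the abstract, so the 8-bit encoding (4 viewing directions x {building, junction}) and the tiny route database below are assumptions. The sketch only shows the core idea: concatenating per-location binary descriptors along a route and finding the nearest map route by Hamming distance, which tolerates a few classifier errors.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two equal-length bit arrays."""
    return int(np.count_nonzero(a != b))

def localise(route_desc, map_routes):
    """Return the map route whose concatenated descriptor is nearest
    (minimum Hamming distance) to the observed route descriptor."""
    flat = np.concatenate(route_desc)
    scores = {loc: hamming(flat, np.concatenate(descs))
              for loc, descs in map_routes.items()}
    return min(scores, key=scores.get)

rng = np.random.default_rng(1)
# Map database: two candidate routes, 5 locations each, 8 bits per location.
map_routes = {
    "route_A": [rng.integers(0, 2, 8) for _ in range(5)],
    "route_B": [rng.integers(0, 2, 8) for _ in range(5)],
}
# Observation: route_A as seen by an imperfect classifier (2 bits flipped).
observed = [d.copy() for d in map_routes["route_A"]]
observed[0][0] ^= 1
observed[3][5] ^= 1
print(localise(observed, map_routes))   # -> "route_A" despite the errors
```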

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision and on improving access for minimally invasive surgery. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology for performing complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, with pose estimation, and with depth perception in two-dimensional medical imaging. A number of systems described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while proposing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
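    As a rough illustration of the tool-to-organ collision detection the review discusses, the sketch below flags tool poses that come within a safety margin of an organ surface sampled as a point cloud. The surface model, margin, and trajectory are illustrative assumptions, not any reviewed system's method.

```python
import numpy as np

def min_distance(tip, organ_pts):
    """Distance from the tool tip to the nearest organ surface point."""
    return float(np.linalg.norm(organ_pts - tip, axis=1).min())

def check_trajectory(trajectory, organ_pts, margin_mm=5.0):
    """Return indices of trajectory poses that violate the safety margin."""
    return [i for i, tip in enumerate(trajectory)
            if min_distance(tip, organ_pts) < margin_mm]

# Organ surface sampled as a point cloud on a sphere of radius 30 mm.
rng = np.random.default_rng(2)
pts = rng.normal(size=(2000, 3))
organ_pts = 30.0 * pts / np.linalg.norm(pts, axis=1, keepdims=True)

# Straight-line tool path approaching the organ along the x-axis (mm).
trajectory = np.linspace([80.0, 0.0, 0.0], [32.0, 0.0, 0.0], 25)
print("margin violations at poses:", check_trajectory(trajectory, organ_pts))
```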

    A survey of usability issues in mobile map-based systems

    Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies.
    How can geospatial information be represented in maps or other forms of communication on mobile phones so as to convey spatial knowledge to users more effectively, more efficiently, and with less ambiguity? This triggering question stems from the usability problems of mobile map-based systems, which make mobile navigation services and applications for pedestrians tedious and complicated to use, often more confusing than helpful. These problems include losing the spatial overview of the area, information overload on the small screens of mobile phones, poor visibility of off-screen entities, weaknesses in orienting users with respect to the real environment, and excessive user engagement with the interface, which causes distraction from the environment. Many solutions have been proposed to mitigate these issues, but each has its pros and cons, none is complete enough to tackle the above issues alone, and combinations of them are usually proposed.

    Using a systematic literature review (SLR), which is more reliable, replicable, and valid [1], we sought to identify the usability evaluation method most frequently applied in the available studies to detect usability issues in mobile map-based systems (MMSs); the usability issues most frequently detected in the reviewed literature and how to categorize them; the contexts in which they mostly occurred; and the solutions proposed so far to resolve them. We ran three iterations of the SLR over a total of 8,667 identified publications (from 6 relevant databases and a search engine, prioritising the 4 most prominent journals and conferences in the fields of mobile HCI and location-based services). Of these, 196 passed the first screening and were read thoroughly against predefined inclusion criteria; overall, 56 of those 196 papers qualified under our well-defined and updated inclusion criteria and were read in depth at least twice to extract the data. In the first iteration, 25 papers were reviewed and data relevant to our research questions were extracted into the first iteration table. In the second iteration, 24 papers meeting the adjusted inclusion criteria were included in the data extraction for the updated table. The last iteration, given the scarcity of publications in this area and time limitations, covered only 7 papers, whose data were extracted into the final updated table.

    The results of the SLR showed that the most frequently used usability evaluation method was the questionnaire, applied to assess the effectiveness and efficiency of a system, and that the most frequently detected usability issue in the available literature was losing the spatial overview, followed by excessive zooming and panning by users; both stem from the same problem, the small screen size of mobile devices. We categorized the issues into two main groups, technological and spatial, and focused here only on the usability issues relevant to map interfaces on mobile phones (spatial issues), not on technological problems on the server or hardware side (sensors, connectivity, battery drainage, GPS accuracy, etc.).
    We noticed that the most frequent usability issue occurred on mobile phones with an average screen size of 3.83 inches; 87% of the cases were studied in laboratory environments, with users (not experts) with an average age of 26 years, of whom 64.2% had relevant knowledge (GI knowledge). The low number of field-based studies highlights the lack of consideration of real context in the available case studies, which is highly important in the usability evaluation of location-based mobile systems. Some traditional solutions have been proposed to address the most frequent usability problem in mobile map-based systems, such as techniques for visualizing off-screen objects (Overview&Detail, Scaled Arrows, Wedge, etc.) and techniques for enhancing zoom and pan operations (vario-scale maps, semi-automatic zooming (SAZ), tilt zooming, content zooming, anchored zoom, etc.). However, none of them is suitable enough to be applied in these systems, and the most popular systems, such as Google Maps, still operate without taking advantage of such approaches, techniques, and widgets, retaining many usability issues.