
    A haptic-enabled multimodal interface for the planning of hip arthroplasty

    Multimodal environments fuse a diverse range of sensory modalities, which is particularly important when integrating the complex data involved in surgical preoperative planning. The authors present a multimodal interface for the preoperative planning of hip arthroplasty that integrates immersive stereo displays with haptic interaction. This article gives an overview of the multimodal application framework and discusses the benefits of incorporating the haptic modality in this area.

    Robotic right ventricle is a biohybrid platform that simulates right ventricular function in (patho)physiological conditions and intervention

    The increasing recognition of the right ventricle (RV) necessitates the development of RV-focused interventions, devices and testbeds. In this study, we developed a soft robotic model of the right heart that accurately mimics RV biomechanics and hemodynamics, including free wall, septal and valve motion. This model uses a biohybrid approach, combining a chemically treated endocardial scaffold with a soft robotic synthetic myocardium. When connected to a circulatory flow loop, the robotic right ventricle (RRV) replicates real-time hemodynamic changes in healthy and pathological conditions, including volume overload, RV systolic failure and pressure overload. The RRV also mimics clinical markers of RV dysfunction and is validated against an in vivo porcine model. Additionally, the RRV recreates chordae tension, simulating papillary muscle motion, and shows potential for in vitro tricuspid valve repair and replacement. This work aims to provide a platform for developing tools for research on and treatment of RV pathophysiology.

    A comprehensive survey on recent deep learning-based methods applied to surgical data

    Minimally invasive surgery is highly operator dependent, and its lengthy procedures cause surgeon fatigue and expose patients to risks such as organ injury, infection, bleeding, and complications of anesthesia. To mitigate such risks, real-time systems that provide intra-operative guidance to surgeons are needed. For example, an automated system for tool localization, tool (or tissue) tracking, and depth estimation can enable a clear understanding of surgical scenes, preventing miscalculations during surgical procedures. In this work, we present a systematic review of recent machine learning-based approaches, covering surgical tool localization, segmentation, tracking, and 3D scene perception. Furthermore, we provide a detailed overview of publicly available benchmark datasets widely used for surgical navigation tasks. While recent deep learning architectures have shown promising results, several open research problems remain, such as the lack of annotated datasets, the presence of artifacts in surgical scenes, and non-textured surfaces that hinder 3D reconstruction of anatomical structures. Based on our comprehensive review, we discuss current gaps and the steps needed to improve the adoption of this technology in surgery.
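    Surveys in this area commonly report overlap metrics when comparing tool-segmentation methods. As a minimal, self-contained illustration (not drawn from any specific surveyed method; the mask values below are invented), the Dice coefficient and IoU of two binary masks can be computed as:

    ```python
    def dice_and_iou(pred, target):
        """Dice coefficient and IoU for binary masks given as flat 0/1 sequences."""
        inter = sum(p & t for p, t in zip(pred, target))  # overlapping foreground pixels
        p_sum, t_sum = sum(pred), sum(target)
        dice = 2 * inter / (p_sum + t_sum)
        iou = inter / (p_sum + t_sum - inter)             # union = |pred| + |target| - intersection
        return dice, iou

    # Toy 4x4 tool masks, flattened row by row.
    pred   = [1, 1, 0, 0,  1, 1, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
    target = [1, 1, 0, 0,  1, 0, 0, 0,  0, 0, 0, 0,  0, 0, 0, 0]
    dice, iou = dice_and_iou(pred, target)  # dice = 6/7, iou = 0.75
    ```

    Dice weights the intersection twice, so it is more forgiving of small boundary disagreements than IoU; most surgical-segmentation benchmarks report both.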

    Maritime Computing Transportation, Environment, and Development: Trends of Data Visualization and Computational Methodologies

    This research characterizes the field of maritime computing (MC) across transportation, environment, and development. It is the first report to examine how MC domain configurations support management technologies; one aspect of this research is identifying the drivers of ocean-based businesses. Systematic search and meta-analysis are employed to classify and define the MC domain. MC developments were first identified in the 1990s, representing maritime development for designing sailboats, submarines, and ship hydrodynamics. The maritime environment is simulated to predict emission reductions, coastal waste particles, and renewable energy, and to engineer robots that observe the ocean ecosystem. Maritime transportation focuses on optimizing ship speed, maneuvering ships, and using liquefied natural gas and submarine pipelines. Data trends can be obtained with machine learning by collecting large volumes of similar computational results for implementing artificial intelligence strategies. The findings show that modeling is an essential 21st-century skill set.

    Hand posture prediction using neural networks within a biomechanical model

    This paper proposes the use of artificial neural networks (ANNs) within the framework of a biomechanical hand model for grasping. ANNs enhance the model's capabilities by substituting estimated data for the experimental inputs required by the grasping algorithm used. These inputs are the tentative grasping posture and the most open posture during grasping. As a consequence, the grasping algorithm predicts more realistic grasping postures, along with the contact information (contact points and normals) required by the dynamic biomechanical model. Several neural network architectures are tested and compared in terms of prediction errors, leading to encouraging results. The performance of the overall proposal is also demonstrated through simulation, in which a grasping experiment is replicated and compared to real grasping data collected with a data-glove device.
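    The paper's specific architectures and trained weights are not reproduced here, but the basic shape of such a regression ANN — a single hidden layer mapping a feature vector (e.g., object geometry and hand measurements) to predicted joint angles — can be sketched as follows. All feature and weight values below are invented purely for illustration; a real model would be trained on data-glove recordings:

    ```python
    import math

    def mlp_forward(x, W1, b1, W2, b2):
        """One hidden tanh layer with a linear output: a common regression ANN shape."""
        h = [math.tanh(sum(wi * xi for wi, xi in zip(row, x)) + b)
             for row, b in zip(W1, b1)]                        # hidden activations
        return [sum(wi * hi for wi, hi in zip(row, h)) + b
                for row, b in zip(W2, b2)]                     # linear outputs

    # Hypothetical weights: 3 input features -> 2 hidden units -> 2 joint angles.
    W1 = [[0.5, -0.2, 0.1], [0.3, 0.8, -0.5]]
    b1 = [0.0, 0.1]
    W2 = [[1.0, -1.0], [0.5, 0.5]]
    b2 = [0.2, 0.0]
    angles = mlp_forward([0.9, 0.4, 0.2], W1, b1, W2, b2)
    ```

    Because tanh bounds the hidden activations, the outputs stay within a range set by the output-layer weights, which suits normalized joint-angle targets.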

    Viewfinder: final activity report

    The VIEW-FINDER project (2006-2009) is an 'Advanced Robotics' project that applies a semi-autonomous robotic system to inspect ground safety in the event of a fire. Its primary aim is to gather data (visual and chemical) to assist rescue personnel; a base station combines the gathered information with information retrieved from off-site sources. The project addresses key issues related to map building and reconstruction, interfacing local command information with external sources, human-robot interfaces, and semi-autonomous robot navigation. The VIEW-FINDER system is semi-autonomous: the individual robot-sensors operate autonomously within the limits of the task assigned to them; that is, they autonomously navigate through and inspect an area. Human operators monitor their operations and send high-level task requests as well as low-level commands through the interface to any node in the system. The human interface must provide the human supervisor and human interveners with a reduced but relevant overview of the ground and of the robots and human rescue workers operating there.

    Enabling Human-Robot Collaboration via Holistic Human Perception and Partner-Aware Control

    As robotic technology advances, the barriers to the coexistence of humans and robots are slowly coming down. Application domains such as elderly care, collaborative manufacturing, and collaborative manipulation are in pressing need of such technology, and progress in robotics holds the potential to address many societal challenges. Future socio-technical systems will constitute a blended workforce with a symbiotic relationship between human and robot partners working collaboratively. This thesis addresses some of the research challenges in enabling human-robot collaboration. In particular, holistic perception of a human partner, continuously communicating the partner's intentions and needs to a robot in real time, is crucial for the successful realization of a collaborative task. Towards that end, we present a holistic human perception framework for real-time monitoring of whole-body human motion and dynamics. On the other hand, leveraging assistance from a human partner can further improve human-robot collaboration. In this direction, we methodically define what constitutes assistance from a human partner and propose partner-aware robot control strategies to endow robots with the capacity to meaningfully engage in a collaborative task.

    Image-Guided Robotic Dental Implantation With Natural-Root-Formed Implants

    Dental implantation is now recognized as the standard of care for tooth replacement. Although many studies show high short-term survival rates greater than 95%, long-term studies (> 5 years) have shown success rates as low as 41.9%. Factors affecting long-term success may include surgical factors such as limited accuracy of implant placement, lack of spacing control, and overheating during placement. In this dissertation, a comprehensive solution for improving the outcome of current dental implantation is presented, which includes computer-aided preoperative planning for better visualization of patient-specific information and automated robotic site preparation for superior placement and orientation accuracy. Surgical planning is generated using patient-specific three-dimensional (3D) models reconstructed from cone-beam CT images. An innovative image-guided robotic site-preparation system for implant insertion is designed and implemented. The preoperative plan of the implant insertion is transferred into intra-operative robot operations using a two-step registration procedure with the help of a coordinate measuring machine (CMM). The natural-root implants mimic the root structure of natural teeth and were shown by finite element method (FEM) analysis to provide better stress distribution than current cylinder-shaped implants. However, due to their complicated geometry, manual site preparation for these implants is not feasible. Our image-guided robotic implantation system makes this advanced type of implant usable. Phantom experiments with patient-specific jaw models were performed to evaluate positioning and orientation accuracy. Fiducial registration error (FRE) values below 0.20 mm and final target registration error (TRE) values after the two-step registration of 0.36±0.13 mm (N=5) were achieved. The orientation error was 1.99±1.27° (N=14).
    Robotic milling of the single- and double-root natural-root implant shapes was also tested, and the results showed that the robot can remove these complicated volumes as designed. The milling time for the single- and double-root shapes was 177 s and 1522 s, respectively.
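    The dissertation's two-step registration is performed in 3D with a CMM and is not reproduced here. As a simplified, hypothetical sketch of the underlying idea, the 2D case admits a closed-form least-squares rigid fit; FRE is then the RMS residual on the fiducials, and TRE is the same measure evaluated on held-out target points. All point coordinates below are invented for illustration:

    ```python
    import math

    def fit_rigid_2d(src, dst):
        """Closed-form least-squares 2D rigid fit (rotation angle + translation)."""
        n = len(src)
        cs = [sum(p[i] for p in src) / n for i in (0, 1)]   # source centroid
        cd = [sum(p[i] for p in dst) / n for i in (0, 1)]   # destination centroid
        dot = cross = 0.0
        for (ax, ay), (bx, by) in zip(src, dst):
            ax, ay = ax - cs[0], ay - cs[1]
            bx, by = bx - cd[0], by - cd[1]
            dot += ax * bx + ay * by
            cross += ax * by - ay * bx
        theta = math.atan2(cross, dot)                      # optimal rotation angle
        c, s = math.cos(theta), math.sin(theta)
        t = (cd[0] - (c * cs[0] - s * cs[1]),               # translation aligning centroids
             cd[1] - (s * cs[0] + c * cs[1]))
        return theta, t

    def apply_t(theta, t, p):
        c, s = math.cos(theta), math.sin(theta)
        return (c * p[0] - s * p[1] + t[0], s * p[0] + c * p[1] + t[1])

    def rms_error(theta, t, src, dst):
        """FRE (on the fiducials) or TRE (on held-out targets): RMS residual distance."""
        sq = 0.0
        for a, b in zip(src, dst):
            qx, qy = apply_t(theta, t, a)
            sq += (qx - b[0]) ** 2 + (qy - b[1]) ** 2
        return math.sqrt(sq / len(src))

    # Fiducials in plan space and their (noise-free) robot-space positions.
    src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 2.0)]
    dst = [apply_t(0.5, (3.0, -1.0), p) for p in src]
    theta, t = fit_rigid_2d(src, dst)
    fre = rms_error(theta, t, src, dst)   # ~0 for exact data
    ```

    With real measurements, noise makes FRE nonzero, and a small FRE does not by itself guarantee a small TRE at the implant site, which is why the dissertation reports both quantities separately.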