
    VR-Caps: A Virtual Environment for Capsule Endoscopy

    Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate numerous software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization, and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically realistic simulations that provide synthetic data have emerged as a solution for developing such algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet), varied organ types, capsule endoscope designs (e.g., mono, stereo, dual, and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to develop, optimize, and test medical imaging and analysis software, either independently or jointly, for current and next-generation endoscopic capsule systems. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification. (18 pages, 14 figures)
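    As a hedged illustration of the sim-to-real workflow summarized above (training on VR-Caps synthetic frames, evaluating on real data), the sketch below trains a standard image classifier on synthetic frames and then measures its accuracy on real frames. The directory names, model choice, and hyperparameters are assumptions for illustration, not details taken from the paper.

```python
# Minimal sim-to-real sketch (assumed layout: synthetic_frames/ and real_frames/
# each containing one sub-folder per disease class). Not the authors' pipeline.
import torch
from torch import nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

tfm = transforms.Compose([transforms.Resize((224, 224)), transforms.ToTensor()])
train_ds = datasets.ImageFolder("synthetic_frames", transform=tfm)  # simulated data
test_ds = datasets.ImageFolder("real_frames", transform=tfm)        # real endoscopy data
train_dl = DataLoader(train_ds, batch_size=32, shuffle=True)
test_dl = DataLoader(test_ds, batch_size=32)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, len(train_ds.classes))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):                      # train on synthetic frames only
    model.train()
    for x, y in train_dl:
        opt.zero_grad()
        loss_fn(model(x), y).backward()
        opt.step()

model.eval()                                # evaluate transfer to real frames
correct = total = 0
with torch.no_grad():
    for x, y in test_dl:
        correct += (model(x).argmax(1) == y).sum().item()
        total += y.numel()
print(f"real-data accuracy: {correct / total:.3f}")
```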

    Appearance Modelling and Reconstruction for Navigation in Minimally Invasive Surgery

    Minimally invasive surgery is playing an increasingly important role in patient care. Whilst its direct patient benefits in terms of reduced trauma, improved recovery and shortened hospitalisation have been well established, there is a sustained need for improved training in the existing procedures and for the development of new smart instruments to tackle the issues of visualisation, ergonomic control, and haptic and tactile feedback. For endoscopic intervention, the small field of view in the presence of complex anatomy can easily disorient the operator, as the tortuous access pathway is not always easy to predict and control with standard endoscopes. Effective training through simulation devices, based on either virtual-reality or mixed-reality simulators, can help to improve the spatial awareness, consistency and safety of these procedures. This thesis examines the use of endoscopic videos for both simulation and navigation purposes. More specifically, it addresses the challenging problem of how to build high-fidelity, subject-specific simulation environments for improved training and skills assessment. Issues related to mesh parameterisation and texture blending are investigated. With the maturity of computer vision in terms of both 3D shape reconstruction and localisation and mapping, vision-based techniques have enjoyed significant interest in recent years for surgical navigation. The thesis also tackles the problem of how to use vision-based techniques to provide a detailed 3D map and a dynamically expanded field of view, improving spatial awareness and avoiding operator disorientation. The key advantage of this approach is that it does not require additional hardware, and thus introduces minimal interference to the existing surgical workflow. The derived 3D map can be effectively integrated with pre-operative data, allowing both global and local 3D navigation by taking into account tissue structural and appearance changes. Both simulation and laboratory-based experiments are conducted throughout this research to assess the practical value of the proposed methods.
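    As a rough sketch of the kind of vision-based navigation described above, the example below estimates the relative camera motion between two consecutive endoscopic frames from ORB feature matches and the essential matrix, a basic building block of feature-based mapping. The frame file names and the intrinsic matrix K are placeholders, and this is not the thesis' actual pipeline.

```python
# Two-frame relative pose sketch with OpenCV (illustrative only; frame names and
# the intrinsic matrix K are placeholders, not values from the thesis).
import cv2
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed pinhole intrinsics
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])

img1 = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

# Essential matrix with RANSAC, then recover rotation R and translation direction t.
E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, prob=0.999, threshold=1.0)
_, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
print("relative rotation:\n", R, "\ntranslation direction:\n", t.ravel())
```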

    Review of photoacoustic imaging plus X

    Photoacoustic imaging (PAI) is a novel biomedical imaging modality that combines rich optical contrast with the deep penetration of ultrasound. To date, PAI technology has found applications in various biomedical fields. In this review, we present an overview of the emerging research frontiers of PAI combined with other advanced technologies, termed PAI plus X, which includes, but is not limited to, PAI plus treatment, PAI plus new circuit design, PAI plus accurate positioning systems, PAI plus fast scanning systems, PAI plus novel ultrasound sensors, PAI plus advanced laser sources, PAI plus deep learning, and PAI plus other imaging modalities. We discuss each technology's current state, technical advantages, and prospects for application, as reported mostly in the last three years. Lastly, we discuss and summarize the challenges and potential future work in the PAI plus X area.

    Design of colon phantoms and a colon phantom motorized measuring system to verify the operation of a microwave-based diagnostic device

    Bachelor's thesis in Biomedical Engineering, Faculty of Medicine and Health Sciences, Universitat de Barcelona, academic year 2022-2023. Tutors/Directors: Fernández Esparrach, Glòria; Guardiola García, Marta. Microwave imaging (MWI) is an emerging medical imaging technique with promising biomedical applications. In line with this research area, MiWEndo Solutions is a spin-off devoted to the development of an MWI device for colorectal cancer diagnosis, called MiWEndo, which is attached to a regular colonoscope. Before its commercialization, a fundamental step towards the validation of the MiWEndo prototype is the design of realistic colon tissue-mimicking phantoms. Such phantoms allow the imaging system's performance to be assessed under well-controlled and reproducible conditions. In this thesis, a new, simple, and highly reproducible phantom recipe based on polyvinylpyrrolidone (PVP) has been developed to mimic healthy colon mucosa. Moreover, a comparative study between the currently used oil-based phantom recipe and the new PVP phantom has been conducted to assess the lifespan and stability of each type of fabricated phantom. It was concluded that PVP phantoms must be refrigerated for increased stability; with this conservation protocol, the PVP recipe shows a lifespan similar to that of the oil-based one. Nevertheless, owing to the limited number of samples and the limited time available, further studies must be carried out to conclusively determine which recipe offers the greater stability. Furthermore, a colon phantom motorized measuring system has been designed to improve the repeatability and reliability of phantom measurements. The system consists of an XYZ motorized positioning system, an Arduino Nano, stepper motor drivers, and Arduino IDE firmware. Even though further improvements will be required once it is implemented, a first conception of a system that fulfils the established requirements has been obtained, based on reasonably priced and simple-to-implement components. In addition, a successful proof of concept of the measuring system has been carried out, concluding that a full implementation of the system appears viable.
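    As a hedged sketch of how a host PC might drive the motorized measuring system described above, the example below steps an XYZ stage through a grid of measurement points over the phantom via a serial link. It assumes the Arduino firmware accepts newline-terminated G-code-style move commands and replies with an acknowledgement; the port name, grid spacing, and command format are assumptions, not the thesis' firmware protocol.

```python
# Hypothetical host-side controller for the XYZ phantom scanner: visits a grid of
# points, pausing at each for a measurement. Assumes the Arduino firmware parses
# "G1 X.. Y.. Z.. F.." moves and replies after each one; port and step sizes are
# placeholders.
import time
import serial  # pyserial

PORT, BAUD = "/dev/ttyUSB0", 115200
STEP_MM, GRID = 5.0, (10, 10)            # 5 mm spacing, 10 x 10 grid of points

def send(conn, cmd):
    conn.write((cmd + "\n").encode())
    return conn.readline().decode().strip()   # wait for firmware acknowledgement

with serial.Serial(PORT, BAUD, timeout=5) as conn:
    time.sleep(2)                        # allow the Arduino to reset after connect
    send(conn, "G28")                    # home all axes (assumed supported)
    for i in range(GRID[0]):
        for j in range(GRID[1]):
            x, y = i * STEP_MM, j * STEP_MM
            send(conn, f"G1 X{x:.1f} Y{y:.1f} Z0.0 F600")
            time.sleep(0.5)              # dwell while the microwave measurement runs
```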

    Estimation of gastrointestinal polyp size in video endoscopy

    Abstract: Colorectal cancer is one of the most common public health problems worldwide, constituting the seventh leading cause of death in 2010. This aggressive cancer is first identified during a routine endoscopic examination by characterizing the polyps that appear along the digestive tract, mainly in the colon and rectum. Polyp size is one of the most important features determining surgical endoscopic management, and it can even be used to predict the level of aggressiveness. For instance, gastroenterologists only send a polyp sample for pathological examination if the polyp diameter is larger than 10 mm, a measure typically obtained by examining the lesion with a calibrated endoscopic tool. However, measuring polyp size is very challenging because it must be performed during a procedure subject to a complex mix of noise sources, such as the distorted optical characteristics of the endoscope, exacerbated physiological conditions, and abrupt motion. The main goal of this thesis was to estimate polyp size from an endoscopy video sequence using a spatio-temporal characterization. First, the method estimates the region with the most motion, within which the polyp shape is approximated by the pixels with the largest temporal variance. Building on this, an initial manual polyp delineation in the first frame captures the main features, which are followed in subsequent frames by a cross-correlation procedure. A Bayesian tracking strategy is then used to refine the polyp segmentation. Finally, a defocus-based strategy uses the sharpest frame at a known depth as a reference to determine the polyp size, yielding reliable results. In the segmentation task, the approach achieved a Dice score of 0.7 on real endoscopy video sequences when compared with an expert. In addition, polyp size estimation obtained a root mean square error (RMSE) of 0.87 mm with spheres of known size that simulated polyps, and an RMSE of 4.7 mm on real endoscopy sequences compared with measurements obtained by a group of four experts with similar experience.
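    The first two stages of the pipeline described above (temporal-variance localisation and cross-correlation tracking of a manually delineated template) can be sketched as follows. The video file name, variance threshold, and initial delineation box are placeholders, and the Bayesian refinement and defocus-based size estimation from the thesis are deliberately omitted.

```python
# Sketch of temporal-variance localisation + template tracking (illustrative only).
import cv2
import numpy as np

cap = cv2.VideoCapture("endoscopy_sequence.avi")   # placeholder video file
frames = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32))
cap.release()

stack = np.stack(frames)                       # (T, H, W)
variance = stack.var(axis=0)                   # high temporal variance ~ moving polyp region
mask = variance > np.percentile(variance, 95)  # keep the most variable 5% of pixels

# Manual delineation in the first frame (x, y, w, h are placeholder values).
x, y, w, h = 180, 120, 60, 60
template = frames[0][y:y + h, x:x + w]

track = []
for frame in frames[1:]:                       # follow the template by cross-correlation
    score = cv2.matchTemplate(frame, template, cv2.TM_CCORR_NORMED)
    _, _, _, top_left = cv2.minMaxLoc(score)   # best-match top-left corner
    track.append(top_left)
print("tracked", len(track), "frames; variance-mask pixels:", int(mask.sum()))
```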

    Surgical Subtask Automation for Intraluminal Procedures using Deep Reinforcement Learning

    Intraluminal procedures have opened up a new sub-field of minimally invasive surgery that uses flexible instruments to navigate through complex luminal structures of the body, resulting in reduced invasiveness and improved patient benefits. One of the major challenges in this field is the accurate and precise control of the instrument inside the human body. Robotics has emerged as a promising solution to this problem. However, to achieve successful robotic intraluminal interventions, the control of the instrument needs to be automated to a large extent. The thesis first examines the state of the art in intraluminal surgical robotics and identifies the key challenges in this field, which include the need for safe and effective tool manipulation and the ability to adapt to unexpected changes in the luminal environment. To address these challenges, the thesis proposes several levels of autonomy that enable the robotic system to perform individual subtasks autonomously, while still allowing the surgeon to retain overall control of the procedure. The approach facilitates the development of specialized algorithms, such as Deep Reinforcement Learning (DRL), for subtasks like navigation and tissue manipulation to produce robust surgical gestures. Additionally, the thesis proposes a safety framework that provides formal guarantees to prevent risky actions. The presented approaches are evaluated through a series of experiments using simulation and robotic platforms. The experiments demonstrate that subtask automation can improve the accuracy and efficiency of tool positioning and tissue manipulation, while also reducing the cognitive load on the surgeon. The results of this research have the potential to improve the reliability and safety of intraluminal surgical interventions, ultimately leading to better outcomes for patients and surgeons.
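    As a minimal, hypothetical sketch of how a single navigation subtask could be trained with an off-the-shelf DRL implementation, the example below runs PPO from stable-baselines3 against a Gym-style simulation environment. The environment id LumenNav-v0 is a placeholder; the thesis' own simulation platforms, reward design, and safety framework are not reproduced here.

```python
# Hypothetical DRL subtask training loop with stable-baselines3 PPO. "LumenNav-v0"
# is a placeholder environment id, assumed to be registered with gymnasium.
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("LumenNav-v0")            # assumed lumen-navigation simulation
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=200_000)     # learn a navigation policy for the subtask
model.save("lumen_nav_policy")

# Roll out the learned policy for one episode.
obs, _ = env.reset()
done = truncated = False
while not (done or truncated):
    action, _ = model.predict(obs, deterministic=True)
    obs, reward, done, truncated, _ = env.step(action)
```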

    MEMS-Based Endomicroscopes for High Resolution in vivo Imaging

    Intravital microscopy is an emerging methodology for performing real-time imaging in live animals. This technology is playing a greater role in the study of cellular and molecular biology because in vitro systems cannot adequately recapitulate the microenvironment of living tissues and systems. Conventional intravital microscopes use large, bulky objectives that require wide surgical exposure to image internal organs and result in terminal experiments. If these instruments can be reduced sufficiently in size, biological phenomena can be observed in a longitudinal fashion without animal sacrifice. The epithelium is a thin layer of tissue in hollow organs and is the origin of many types of human disease. In vivo assessment of biomarkers expressed in the epithelium of animal models can provide valuable information on disease development and drug efficacy. The overall goal of this work is to develop miniature imaging instruments capable of visualizing the epithelium in live animals with subcellular resolution. The dissertation is divided into four projects, each containing an imaging system developed for small-animal imaging. These systems are all designed using laser beam scanning technology with tiny mirrors fabricated with microelectromechanical systems (MEMS) technology. By using these miniature scanners, we are able to develop endomicroscopes small enough for hollow organs in small animals. The performance of these systems has been demonstrated by imaging either excised tissue or the colon of live mice. The final version of the instrument can collect horizontal/oblique plane images in the mouse colon in real time (>10 frames/sec) with sub-micron resolution (<1 µm), deep tissue penetration (~200 µm), and a large field of view (700 × 500 µm). A novel side-viewing architecture with distal MEMS scanning was developed to create clear and stable images in the mouse colon. With this instrument, it is convenient to pinpoint locations of interest and create a map of the colon using image mosaicking. Multispectral fluorescence images can be collected at excitation wavelengths ranging from 445 nm to 780 nm. The instruments have been used to 1) validate specific binding of a cancer-targeting agent in the mouse colon and 2) study tumor development in a mouse model with endogenous fluorescent protein expression. We use these studies to show that we have developed an enabling technology that will allow biologists to perform longitudinal imaging in animal models with subcellular resolution. PhD thesis, Biomedical Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. https://deepblue.lib.umich.edu/bitstream/2027.42/136954/2/dxy_1.pd
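    As an illustrative sketch of the image mosaicking mentioned above, the example below stitches consecutive endomicroscope frames by estimating frame-to-frame translation with phase correlation and pasting each frame onto a common canvas. The file names, canvas size, and sign convention of the estimated shift are assumptions; this is not the instrument's actual mosaicking software.

```python
# Illustrative translation-only mosaicking of consecutive frames via phase correlation.
import cv2
import numpy as np

paths = [f"frame_{i:03d}.png" for i in range(20)]          # placeholder file names
frames = [cv2.imread(p, cv2.IMREAD_GRAYSCALE).astype(np.float32) for p in paths]

h, w = frames[0].shape
canvas = np.zeros((h * 3, w * 3), dtype=np.float32)        # generous canvas for the mosaic
pos = np.array([float(w), float(h)])                       # start in the canvas centre
canvas[h:2 * h, w:2 * w] = frames[0]

for prev, cur in zip(frames, frames[1:]):
    (dx, dy), _ = cv2.phaseCorrelate(prev, cur)            # sub-pixel shift between frames
    pos += np.array([dx, dy])                              # accumulate drift (sign convention
                                                           # may need flipping per scan direction)
    x0 = int(np.clip(round(pos[0]), 0, canvas.shape[1] - w))
    y0 = int(np.clip(round(pos[1]), 0, canvas.shape[0] - h))
    canvas[y0:y0 + h, x0:x0 + w] = cur                     # paste the new frame at its offset

cv2.imwrite("colon_mosaic.png", np.clip(canvas, 0, 255).astype(np.uint8))
```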