
    Microultrasound characterisation of ex vivo porcine tissue for ultrasound capsule endoscopy

    Gastrointestinal (GI) disease development and progression is often characterised by cellular and tissue architectural changes within the mucosa and sub-mucosa layers. Current clinical capsule endoscopy and other approaches are heavily reliant on optical techniques which cannot detect disease progression below the surface layer of the tissue. To enhance the ability of clinicians to detect cellular changes earlier and more confidently, both quantitative and qualitative microultrasound (μUS) techniques are investigated in healthy ex vivo porcine GI tissue. This work is based on the use of single-element, focussed μUS transducers made with micromoulded piezocomposite operating at around 48 MHz. To explore the possibility that μUS can detect Crohn's disease and other inflammatory bowel diseases, ex vivo porcine small bowel tissue samples were cannulated and perfused with phosphate-buffered saline followed by various dilutions of polystyrene microspheres. Comparison with fluorescence imaging showed that the microspheres had infiltrated the microvasculature of the samples and that μUS was able to successfully detect this as a mimic of inflammation. Samples without microspheres were analysed using quantitative ultrasound to assess mechanical properties. Attenuation coefficients of 1.78 ± 0.66 dB/mm and 1.92 ± 0.77 dB/mm were obtained from reference samples which were surgically separated from the muscle layer. Six intact samples were segmented using a software algorithm, and the acoustic impedance, Z, for varying tissue thicknesses, and the backscattering coefficient, BSC, were calculated using the reference attenuation values and tabulated.
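    Attenuation figures such as those quoted above can be estimated, for example, with an insertion-loss (log-spectral-difference) measurement: echoes from a reflector are recorded with and without the tissue sample in the beam, and the spectral amplitude ratio over the usable bandwidth is divided by twice the sample thickness. The sketch below illustrates that generic approach only; the function name, the 30-60 MHz band and the inputs are assumptions, not the authors' processing chain.

```python
import numpy as np

def attenuation_db_per_mm(ref_echo, sample_echo, thickness_mm, fs, band=(30e6, 60e6)):
    """Insertion-loss estimate of attenuation in dB/mm (hypothetical helper).

    ref_echo     -- reflector echo recorded through the coupling fluid only
    sample_echo  -- reflector echo recorded through the tissue sample
    thickness_mm -- sample thickness along the beam (mm)
    fs           -- sampling rate of the RF data (Hz)
    """
    n = max(len(ref_echo), len(sample_echo))
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    ref_spec = np.abs(np.fft.rfft(ref_echo, n))
    smp_spec = np.abs(np.fft.rfft(sample_echo, n))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Log-spectral difference averaged over the usable bandwidth,
    # divided by the round-trip path length (2 x thickness).
    loss_db = 20.0 * np.log10(ref_spec[in_band] / smp_spec[in_band])
    return float(np.mean(loss_db) / (2.0 * thickness_mm))
```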

    Acoustic Sensing and Ultrasonic Drug Delivery in Multimodal Theranostic Capsule Endoscopy

    Video capsule endoscopy (VCE) is now a clinically accepted diagnostic modality in which miniaturized technology, an on-board power supply and wireless telemetry stand as technological foundations for other capsule endoscopy (CE) devices. However, VCE does not provide therapeutic functionality, and research towards therapeutic CE (TCE) has been limited. In this paper, a route towards viable TCE is proposed, based on multiple CE devices that include key acoustic sensing and drug delivery components. In this approach, an initial multimodal diagnostic device with high-frequency quantitative microultrasound that complements video imaging allows surface and subsurface visualization and computer-assisted diagnosis. Using focused ultrasound (US) to mark sites of pathology with exogenous fluorescent agents permits follow-up with another device to provide therapy, based on a US-mediated targeted drug delivery system with fluorescence imaging guidance. An additional device may then be utilized for treatment verification and monitoring, exploiting the minimally invasive nature of CE. While such a theranostic patient pathway for gastrointestinal treatment is presently incomplete, the previous research and ongoing work described in this paper to realize further components of the proposed pathway suggest it is feasible and provide a framework around which to structure further work.

    A wireless platform for in vivo measurement of resistance properties of the gastrointestinal tract

    Active locomotion of wireless capsule endoscopes has the potential to improve the diagnostic yield of this painless technique for the diagnosis of gastrointestinal tract disease. In order to design effective locomotion mechanisms, a quantitative measure of the propelling force required to effectively move a capsule inside the gastrointestinal tract is necessary. In this study, we introduce a novel wireless platform that is able to measure the force opposing capsule motion without perturbing the physiologic conditions with physical connections to the outside of the gastrointestinal tract. The platform takes advantage of a wireless capsule that is magnetically coupled with an external permanent magnet. A secondary contribution of this manuscript is to present a real-time method to estimate the axial magnetic force acting on a wireless capsule manipulated by an external magnetic field. In addition to the intermagnetic force, the platform provides real-time measurements of the capsule position, velocity, and acceleration. The platform was assessed with benchtop trials within a workspace that extends 15 cm from each side of the external permanent magnet, showing average errors in estimating the force and the position of less than 0.1 N and 10 mm, respectively. The platform was also able to estimate the dynamic behavior of a known resistant force with an error of 5.45%. Finally, an in vivo experiment on a porcine colon model validated the feasibility of measuring the resistant force in opposition to magnetic propulsion of a wireless capsule.
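    As a rough illustration of how an axial intermagnetic force might be computed in real time, the sketch below uses the coaxial point-dipole approximation, F = 3·μ0·m1·m2 / (2π·d⁴). This is a textbook simplification under assumed dipole moments and separation, not the estimation method used by the platform described above.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability (T*m/A)

def axial_dipole_force(m_ext, m_caps, d):
    """Axial force (N) between two coaxial magnetic dipoles.

    m_ext, m_caps -- dipole moments of the external magnet and the
                     capsule magnet (A*m^2); illustrative values only
    d             -- centre-to-centre separation (m)
    Point-dipole approximation: only reasonable when d is large
    compared with the magnet dimensions.
    """
    return 3.0 * MU0 * m_ext * m_caps / (2.0 * math.pi * d ** 4)

# Hypothetical example: 70 A*m^2 external magnet, 0.35 A*m^2 capsule
# magnet, 12 cm apart.
print(axial_dipole_force(70.0, 0.35, 0.12))  # ~0.071 N
```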

    On Simultaneous Localization and Mapping inside the Human Body (Body-SLAM)

    Wireless capsule endoscopy (WCE) offers a patient-friendly, non-invasive and painless investigation of the entire small intestine, where other conventional wired endoscopic instruments can barely reach. As a critical component of the capsule endoscopic examination, physicians need to know the precise position of the endoscopic capsule in order to identify the position of intestinal disease after it is detected by the video source. To define the position of the endoscopic capsule, we need a map of the inside of the human body. However, since the shape of the small intestine is extremely complex and the RF signal propagates differently in the non-homogeneous body tissues, accurate mapping and localization inside the small intestine is very challenging. In this dissertation, we present an in-body simultaneous localization and mapping technique (Body-SLAM) to enhance the positioning accuracy of the WCE inside the small intestine and reconstruct the trajectory the capsule has traveled. In this way, the positions of intestinal diseases can be accurately located on a map of the inside of the human body, which facilitates follow-up therapeutic operations. The proposed approach takes advantage of data fusion from two sources that come with the WCE: image sequences captured by the WCE's embedded camera and the RF signal emitted by the capsule. This approach estimates the speed and orientation of the endoscopic capsule by analyzing displacements of feature points between consecutive images. It then integrates this motion information with the RF measurements by employing a Kalman filter to smooth the localization results and generate the route that the WCE has traveled. The performance of the proposed motion tracking algorithm is validated using empirical data from patients, and this motion model is later imported into a virtual testbed to test the performance of the alternative Body-SLAM algorithms. Experimental results show that the proposed Body-SLAM technique is able to provide accurate tracking of the WCE with an average error of less than 2.3 cm.
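    The fusion step can be pictured with a toy one-dimensional Kalman filter in which the visually estimated speed drives the prediction and the RF fix provides the position measurement. The sketch below only illustrates that generic idea with made-up noise parameters; it is not the dissertation's actual filter, which tracks a full 3D trajectory.

```python
import numpy as np

def kalman_fuse(rf_positions, vo_velocities, dt=1.0, rf_var=4.0, vo_var=0.25):
    """Toy 1-D Kalman filter fusing RF position fixes with visual speed.

    rf_positions  -- noisy position fixes from the RF localiser (cm)
    vo_velocities -- capsule speed estimated from feature-point
                     displacement between consecutive frames (cm/s)
    Returns the smoothed position track.
    """
    x = np.array([rf_positions[0], vo_velocities[0]])  # state: [position, velocity]
    P = np.eye(2)
    F = np.array([[1.0, dt], [0.0, 1.0]])              # constant-velocity model
    Q = np.diag([0.01, vo_var])                        # process noise (assumed)
    H = np.array([[1.0, 0.0]])                         # RF measures position only
    R = np.array([[rf_var]])                           # RF noise (assumed)
    track = []
    for z, v in zip(rf_positions, vo_velocities):
        x = F @ x
        x[1] = v                                       # inject the visual speed estimate
        P = F @ P @ F.T + Q
        y = np.array([z]) - H @ x                      # innovation from the RF fix
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ y
        P = (np.eye(2) - K @ H) @ P
        track.append(x[0])
    return np.array(track)
```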

    VR-Caps: A Virtual Environment for Capsule Endoscopy

    Current capsule endoscopes and next-generation robotic capsules for diagnosis and treatment of gastrointestinal diseases are complex cyber-physical platforms that must orchestrate sophisticated software and hardware functions. The desired tasks for these systems include visual localization, depth estimation, 3D mapping, disease detection and segmentation, automated navigation, active control, path realization and optional therapeutic modules such as targeted drug delivery and biopsy sampling. Data-driven algorithms promise to enable many advanced functionalities for capsule endoscopes, but real-world data is challenging to obtain. Physically realistic simulations providing synthetic data have emerged as a solution for developing data-driven algorithms. In this work, we present a comprehensive simulation platform for capsule endoscopy operations and introduce VR-Caps, a virtual active capsule environment that simulates a range of normal and abnormal tissue conditions (e.g., inflated, dry, wet), varied organ types, capsule endoscope designs (e.g., mono, stereo, dual and 360° camera), and the type, number, strength, and placement of internal and external magnetic sources that enable active locomotion. VR-Caps makes it possible to develop, optimize, and test medical imaging and analysis software for current and next-generation endoscopic capsule systems, either independently or jointly. To validate this approach, we train state-of-the-art deep neural networks to accomplish various medical image analysis tasks using simulated data from VR-Caps and evaluate the performance of these models on real medical data. Results demonstrate the usefulness and effectiveness of the proposed virtual platform in developing algorithms that quantify fractional coverage, camera trajectory, 3D map reconstruction, and disease classification.
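    Of the quantities listed at the end of the abstract, fractional coverage is the simplest to compute from simulator output: if each rendered frame reports which faces of the organ surface mesh were visible, coverage is the cumulative fraction of unique faces seen. The sketch below assumes such per-frame face-ID exports; the function and its inputs are hypothetical and not part of the VR-Caps API.

```python
import numpy as np

def fractional_coverage(visible_faces_per_frame, total_faces):
    """Cumulative fraction of the organ surface seen by the capsule camera.

    visible_faces_per_frame -- iterable of arrays of mesh-face IDs visible
                               in each frame (assumed simulator export)
    total_faces             -- number of faces in the organ surface mesh
    """
    seen = set()
    coverage = []
    for faces in visible_faces_per_frame:
        seen.update(int(f) for f in np.asarray(faces).ravel())
        coverage.append(len(seen) / total_faces)
    return np.array(coverage)

# Example with three hypothetical frames over a 1000-face mesh:
print(fractional_coverage([[0, 1, 2], [2, 3], [10, 11]], 1000)[-1])  # 0.006
```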

    In-Vivo Evaluation of Microultrasound and Thermometric Capsule Endoscopes

    Clinical endoscopy and colonoscopy are commonly used to investigate and diagnose disorders in the upper gastrointestinal tract and colon, respectively. However, examination of the anatomically remote small bowel with conventional endoscopy is challenging. This challenge, together with advances in miniaturization, led to the development of video capsule endoscopy (VCE) to allow small bowel examination in a non-invasive manner. Available since 2001, current capsule endoscopes are limited to viewing the mucosal surface only due to their reliance on optical imaging. To overcome this limitation with submucosal imaging, work is under way to implement microultrasound (μUS) imaging in the same form as VCE devices. This paper describes two prototype capsules, termed Sonocap and Thermocap, which were developed respectively to assess the quality of μUS imaging and the maximum power consumption that can be tolerated for such a system. The capsules were tested in vivo in the oesophagus and small bowel of porcine models. Results are presented in the form of μUS B-scans and temperature readings that remained safe at power consumption up to 100 mW in both biological regions. These results demonstrate that acoustic coupling and μUS imaging can be achieved in vivo in the lumen of the bowel, and they indicate the maximum power consumption that is possible for miniature μUS systems.
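    For context, a μUS B-scan such as those reported here is typically formed from raw radio-frequency A-lines by envelope detection and log compression. The snippet below sketches that standard pipeline under assumed inputs; it is not the processing chain used in the Sonocap itself.

```python
import numpy as np
from scipy.signal import hilbert

def bscan_from_rf(rf_lines, dynamic_range_db=40.0):
    """Form a log-compressed B-scan image from raw μUS A-lines.

    rf_lines -- 2-D array with one RF A-line per row (assumed layout)
    Returns an image in dB, clipped to the chosen dynamic range.
    """
    envelope = np.abs(hilbert(rf_lines, axis=1))   # per-line envelope detection
    envelope /= envelope.max() + 1e-12             # normalise to the peak
    img_db = 20.0 * np.log10(envelope + 1e-12)     # log compression
    return np.clip(img_db, -dynamic_range_db, 0.0)
```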

    EndoSLAM Dataset and An Unsupervised Monocular Visual Odometry and Depth Estimation Approach for Endoscopic Videos: Endo-SfMLearner

    Deep learning techniques hold promise to develop dense topography reconstruction and pose estimation methods for endoscopic videos. However, currently available datasets do not support effective quantitative benchmarking. In this paper, we introduce a comprehensive endoscopic SLAM dataset consisting of 3D point cloud data for six porcine organs, capsule and standard endoscopy recordings, as well as synthetically generated data. A Panda robotic arm, two commercially available capsule endoscopes, two conventional endoscopes with different camera properties, and two high-precision 3D scanners were employed to collect data from 8 ex-vivo porcine gastrointestinal (GI)-tract organs. In total, 35 sub-datasets are provided with 6D pose ground truth for the ex-vivo part: 18 sub-datasets for the colon, 12 sub-datasets for the stomach and 5 sub-datasets for the small intestine, while four of these contain polyp-mimicking elevations created by an expert gastroenterologist. Synthetic capsule endoscopy frames from the GI tract with both depth and pose annotations are included to facilitate the study of simulation-to-real transfer learning algorithms. Additionally, we propose Endo-SfMLearner, an unsupervised monocular depth and pose estimation method that combines residual networks with a spatial attention module in order to direct the network to focus on distinguishable and highly textured tissue regions. The proposed approach makes use of a brightness-aware photometric loss to improve robustness under fast frame-to-frame illumination changes. To exemplify the use-case of the EndoSLAM dataset, the performance of Endo-SfMLearner is extensively compared with the state of the art. The code and the link for the dataset are publicly available at https://github.com/CapsuleEndoscope/EndoSLAM. A video demonstrating the experimental setup and procedure is accessible through https://www.youtube.com/watch?v=G_LCe0aWWdQ.
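    One simple way to make a photometric loss tolerant of frame-to-frame brightness changes is to affinely re-normalise the warped frame (gain and bias) to the target statistics before taking the residual. The sketch below shows that general idea only; it is not the exact brightness-aware formulation used in Endo-SfMLearner, and the function name and parameters are assumptions.

```python
import numpy as np

def brightness_aware_photometric_loss(target, warped, eps=1e-6):
    """L1 photometric loss after a global gain/bias match (illustrative).

    target, warped -- grayscale frames in [0, 1] with the same shape
    """
    gain = (target.std() + eps) / (warped.std() + eps)   # match contrast
    bias = target.mean() - gain * warped.mean()          # match mean brightness
    adjusted = gain * warped + bias
    return float(np.mean(np.abs(target - adjusted)))
```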