
    Development of a Three-Dimensional Image-Guided Needle Positioning System for Small Animal Interventions

    Conventional needle positioning techniques for small animal microinjections are fraught with issues of repeatability and targeting accuracy. To improve the outcomes of these interventions, a small animal needle positioning system guided by micro-computed tomography (micro-CT) imaging was developed. A phantom was developed to calibrate the geometric accuracy of micro-CT scanners to a traceable standard of measurement. Use of the phantom ensures the geometric fidelity of micro-CT images for image-guided interventions and other demanding quantitative applications. The design of a robot is described that features a remote center of motion architecture and is compact enough to operate within a micro-CT bore. Methods to calibrate the robot and register it to a micro-CT scanner are introduced. The performance of the robot is characterized, with a mean targeting accuracy estimated at 149 ± 41 µm. The robot is finally demonstrated by completing an in vivo biomedical application.
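A targeting accuracy figure like "149 ± 41 µm" is conventionally the mean and standard deviation of the Euclidean distance between each planned target and the achieved needle-tip position. A minimal sketch of that computation (the coordinates below are hypothetical, not from the thesis):

```python
import math

def targeting_errors(planned, achieved):
    """Euclidean distance between each planned target and the achieved
    needle-tip position, with the sample mean and standard deviation."""
    errors = [math.dist(p, a) for p, a in zip(planned, achieved)]
    mean = sum(errors) / len(errors)
    var = sum((e - mean) ** 2 for e in errors) / (len(errors) - 1)
    return errors, mean, math.sqrt(var)

# Hypothetical fiducial coordinates in micrometres
planned  = [(0.0, 0.0, 0.0), (100.0, 0.0, 0.0)]
achieved = [(120.0, 50.0, 0.0), (80.0, -30.0, 10.0)]
errs, mean, std = targeting_errors(planned, achieved)
```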

    Ultrasound-Guided Mechatronic System for Targeted Delivery of Cell-Based Cancer Vaccine Immunotherapy in Preclinical Models

    Injection of dendritic cell (DC) vaccines into lymph nodes (LNs) is a promising strategy for eliciting immune responses against cancer, but these injections are challenging in mouse cancer models due to the small target scale (~1 mm × 2 mm). Direct manual intranodal injection is difficult and can cause architectural damage to the LN, potentially disrupting crucial interactions between DCs and T cells. Therefore, a second-generation ultrasound-guided mechatronic device has been developed to perform this intervention. A targeting accuracy of < 500 μm will enable targeted delivery of the DCs specifically to the LN subcapsular space. The device was redesigned from its original CT-guided edition, which used a remote centre of motion architecture, to be easily integrated onto a commercially available VisualSonics imaging rail system. Subtle modifications were made to ensure a simple workflow that allows live-animal interventions to fall within the knockout periods stated in study protocols. Several calibration and registration techniques were developed to achieve an overall targeting accuracy appropriate for the intended application. A variety of methods to quantify the positioning accuracy of the device were investigated. The chosen method validated a guided injection into a tissue-mimicking phantom by using ultrasound imaging post-operatively to localize the end-point position of the needle tip in the track left behind by the needle. Ultrasound-guided injections into a tissue-mimicking phantom revealed a targeting accuracy of 285 ± 94 μm for the developed robot, compared to 508 ± 166 μm for a commercially available manually actuated injection device from VisualSonics. The utility of the robot was also demonstrated by performing in vivo injections into the lymph nodes of mice.

    Ultra-High Field Strength MR Image-Guided Robotic Needle Delivery Device for In-Bore Small Animal Interventions

    Current methods of accurate soft tissue injection in small animals are prone to many sources of error. Although efforts have been made to improve the accuracy of needle delivery, none have provided accurate soft tissue references. An MR image-guided robot was designed to function inside the bore of a 9.4 T MR scanner to accurately deliver needles to locations within the mouse brain. The robot was designed to have no noticeable negative effect on image quality and was localized in the MR images through the use of an MR-visible fiducial. The robot was mechanically calibrated and subsequently validated in an image-guided phantom experiment, where the mean needle targeting accuracy and needle trajectory accuracy were calculated to be 178 ± 54 µm and 0.27 ± 0.65°, respectively. Finally, the device successfully demonstrated an image-guided needle targeting procedure in situ.
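The trajectory accuracy quoted in degrees is conventionally the angle between the planned and achieved needle direction vectors, recoverable from their normalized dot product. A small sketch (the example vectors are hypothetical):

```python
import math

def trajectory_angle_deg(v_planned, v_actual):
    """Angle in degrees between planned and actual needle direction vectors,
    via the normalized dot product (vectors need not be unit length)."""
    dot = sum(a * b for a, b in zip(v_planned, v_actual))
    norms = math.hypot(*v_planned) * math.hypot(*v_actual)
    # Clamp against floating-point round-off just outside [-1, 1]
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norms))))

# Hypothetical trajectories tilted 0.27 degrees apart
a = math.radians(0.27)
angle = trajectory_angle_deg((0.0, 0.0, 1.0), (0.0, math.sin(a), math.cos(a)))
```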

    New Mechatronic Systems for the Diagnosis and Treatment of Cancer

    Both two-dimensional (2D) and three-dimensional (3D) imaging modalities are useful tools for viewing internal anatomy. Three-dimensional imaging techniques are required for accurate targeting of needles, and the high temporal resolution of medical images can be used to validate the locations of the needle and target in real time, improving the efficiency of and control over the intervention. Relying on imaging alone, however, means the intervention is still operator dependent because of the difficulty of controlling the location of the needle within the image. The objective of this thesis is to improve the accuracy and repeatability of needle-based interventions over conventional manual and automated techniques, in order to minimize the invasiveness of the procedure. In this thesis, I propose that by combining the remote center of motion concept with spherical linkage components in a passive or semi-automated device, the physician will have a useful tracking and guidance system at their disposal in a package that is less threatening than a robot to both the patient and physician. This design concept offers both the manipulative transparency of a freehand system and the tremor reduction through scaling currently offered in automated systems. In addressing each objective of this thesis, a number of novel mechanical designs incorporating a remote center of motion architecture with varying degrees of freedom are presented. Each of these designs can be deployed in a variety of imaging modalities and clinical applications, ranging from preclinical to human interventions, with an accuracy of control in the millimeter to sub-millimeter range.
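The defining property of a remote center of motion is that the tool pivots about a fixed point (e.g. the needle entry site) that the mechanism itself never occupies. A minimal 2-D kinematic sketch of that invariance, not taken from the thesis:

```python
import math

def rotate_about_rcm(point, rcm, theta):
    """Rotate a 2-D tool point by theta radians about a fixed remote
    center of motion (RCM); the RCM itself never moves."""
    px, py = point[0] - rcm[0], point[1] - rcm[1]
    c, s = math.cos(theta), math.sin(theta)
    return (rcm[0] + c * px - s * py, rcm[1] + s * px + c * py)

# The pivot point (e.g. the needle entry site) is invariant under any rotation
assert rotate_about_rcm((2.0, 3.0), (2.0, 3.0), 1.0) == (2.0, 3.0)
```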

    Optical coherence tomography-based consensus definition for lamellar macular hole.

    Background: A consensus on an optical coherence tomography definition of lamellar macular hole (LMH) and similar conditions is needed.
    Methods: The panel reviewed relevant peer-reviewed literature to reach an accord on the LMH definition and to differentiate LMH from other similar conditions.
    Results: The panel reached a consensus on the definition of three clinical entities: LMH, epiretinal membrane (ERM) foveoschisis and macular pseudohole (MPH). The LMH definition is based on three mandatory criteria and three optional anatomical features. The three mandatory criteria are the presence of an irregular foveal contour, the presence of a foveal cavity with undermined edges and the apparent loss of foveal tissue. Optional anatomical features include the presence of epiretinal proliferation, the presence of a central foveal bump and the disruption of the ellipsoid zone. The ERM foveoschisis definition is based on two mandatory criteria: the presence of ERM and the presence of schisis at the level of Henle's fibre layer. Three optional anatomical features can also be present: the presence of microcystoid spaces in the inner nuclear layer (INL), an increase of retinal thickness and the presence of retinal wrinkling. The MPH definition is based on three mandatory criteria and two optional anatomical features. Mandatory criteria include the presence of a foveal-sparing ERM, the presence of a steepened foveal profile and an increased central retinal thickness. Optional anatomical features are the presence of microcystoid spaces in the INL and a normal retinal thickness.
    Conclusions: The use of the proposed definitions may provide a uniform language for clinicians and future research.
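The consensus structure above (all mandatory criteria required, optional features supporting but not determining the diagnosis) can be expressed directly as a rule check. A minimal sketch for the LMH entity only; the feature names are illustrative encodings of the criteria listed above, not identifiers from the paper:

```python
def meets_lmh_definition(findings):
    """Apply the consensus LMH criteria: all three mandatory OCT features
    must be present; optional features are reported but not required."""
    mandatory = {"irregular_foveal_contour",
                 "foveal_cavity_with_undermined_edges",
                 "apparent_loss_of_foveal_tissue"}
    optional = {"epiretinal_proliferation",
                "central_foveal_bump",
                "ellipsoid_zone_disruption"}
    # Diagnosis requires every mandatory criterion; report any optional ones seen
    return mandatory <= findings, sorted(findings & optional)
```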

    FSS-1000: A 1000-Class Dataset for Few-Shot Segmentation

    Over the past few years, we have witnessed the success of deep learning in image recognition thanks to the availability of large-scale human-annotated datasets such as PASCAL VOC, ImageNet, and COCO. Although these datasets cover a wide range of object categories, there are still a significant number of objects that are not included. Can we perform the same task without a lot of human annotation? In this paper, we are interested in few-shot object segmentation, where the number of annotated training examples is limited to only 5. To evaluate and validate the performance of our approach, we have built a few-shot segmentation dataset, FSS-1000, which consists of 1000 object classes with pixelwise annotation of ground-truth segmentation. Unique to FSS-1000, our dataset contains a significant number of objects that have never been seen or annotated in previous datasets, such as tiny daily objects, merchandise, cartoon characters, logos, etc. We build our baseline model using standard backbone networks such as VGG-16, ResNet-101, and Inception. To our surprise, we found that training our model from scratch using FSS-1000 achieves comparable and even better results than training with weights pre-trained on ImageNet, which is more than 100 times larger than FSS-1000. Both our approach and dataset are simple, effective, and easily extensible to learn segmentation of new object classes given very few annotated training examples. The dataset is available at https://github.com/HKUSTCV/FSS-1000
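Few-shot segmentation training and evaluation is typically organized into episodes: a class is drawn, then a small support set (here 5 annotated examples) and a held-out query from that class. A minimal episode sampler, assuming a simple class-to-examples mapping (not the authors' actual data-loading code):

```python
import random

def sample_episode(dataset, k_shot=5, seed=None):
    """Sample one few-shot segmentation episode: pick a class, then k_shot
    annotated support pairs and a single held-out query pair from it.
    `dataset` maps class name -> list of (image, mask) pairs."""
    rng = random.Random(seed)
    cls = rng.choice(sorted(dataset))
    pool = list(dataset[cls])
    rng.shuffle(pool)
    # First k_shot shuffled pairs form the support set; the next is the query
    return cls, pool[:k_shot], pool[k_shot]
```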

    Towards Closed-loop, Robot Assisted Percutaneous Interventions under MRI Guidance

    Image-guided therapy procedures under MRI guidance have been a focus of research over the past decade. Over that period, various MRI-guided robotic devices have been developed and used clinically for percutaneous interventions such as prostate biopsy, brachytherapy, and tissue ablation. Though MRI provides better soft tissue contrast than Computed Tomography and Ultrasound, it poses various challenges such as constrained space, less ergonomic patient access, and limited material choices due to its high magnetic field. Even with advancements in MRI-compatible actuation methods and the robotic devices that use them, most MRI-guided interventions are still open-loop in nature and rely on preoperative or intraoperative images. In this thesis, an intraoperative MRI-guided robotic system for prostate biopsy is presented, comprising an MRI-compatible 4-DOF robotic manipulator, a robot controller, and a control application with a Clinical User Interface (CUI) and surgical planning applications (3DSlicer and RadVision). This system utilizes intraoperative images acquired after each full or partial needle insertion for needle tip localization. The presented system was approved by the Institutional Review Board at Brigham and Women's Hospital (BWH) and has been used in 30 patient trials. Successful translation of such a system utilizing intraoperative MR images motivated the development of a system architecture for closed-loop, real-time MRI-guided percutaneous interventions. Robot-assisted, closed-loop intervention could help in accurate positioning and localization of the therapy delivery instrument, improve physician and patient comfort, and allow real-time therapy monitoring. Utilizing real-time MR images could also allow correction of the surgical instrument trajectory and controlled therapy delivery.
    Two applications validating the presented architecture, closed-loop needle steering and MRI-guided brain tumor ablation, are demonstrated under real-time MRI guidance.
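The closed-loop idea described above reduces, per iteration, to localizing the instrument tip in the latest image, computing the residual error to the target, and issuing a corrective motion until the error is within tolerance. A minimal sketch of one such iteration (a generic proportional correction, not the system's actual controller):

```python
def closed_loop_step(target, tip_estimate, gain=1.0, tolerance=0.5):
    """One closed-loop iteration: compare the needle tip localized in the
    latest intraoperative image against the target, and return a corrective
    motion (or None once the residual error is within tolerance)."""
    error = [t - e for t, e in zip(target, tip_estimate)]
    magnitude = sum(c * c for c in error) ** 0.5
    if magnitude <= tolerance:
        return None                       # converged: stop adjusting
    return [gain * c for c in error]      # proportional correction toward target
```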