
    uPAR-targeted optical near-infrared (NIR) fluorescence imaging and PET for image-guided surgery in head and neck cancer: Proof-of-concept in an orthotopic xenograft model

    PURPOSE: Urokinase-type Plasminogen Activator Receptor (uPAR) is overexpressed in a variety of carcinoma types and therefore represents an attractive imaging target. The aim of this study was to assess the feasibility of two uPAR-targeted probes for PET and fluorescence tumor imaging in a human xenograft tongue cancer model. EXPERIMENTAL DESIGN AND RESULTS: Tumor growth of tongue cancer was monitored by bioluminescence imaging (BLI) and MRI. Either ICG-Glu-Glu-AE105 (fluorescent agent) or (64)Cu-DOTA-AE105 (PET agent) was injected systemically, and fluorescence or PET/CT imaging was performed. Tissue was collected for micro-fluorescence imaging and histology. A clear fluorescent signal was detected in the primary tumor, with a mean in vivo tumor-to-background ratio of 2.5. Real-time fluorescence-guided tumor resection was possible, and sub-millimeter tumor deposits could be localized. Histological analysis showed co-localization of the fluorescent signal, uPAR expression, and tumor deposits. In addition, the feasibility of uPAR-guided robotic cancer surgery was demonstrated. uPAR-PET imaging likewise showed a clear and localized signal in the tongue tumors. CONCLUSIONS: This study demonstrated the feasibility of combining two uPAR-targeted probes in a preclinical head and neck cancer model. The PET modality provided preoperative non-invasive tumor imaging, and the optical modality allowed for real-time fluorescence-guided tumor detection and resection. Clinical translation of this platform seems promising.
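The tumor-to-background ratio (TBR) reported above is simply the mean fluorescence intensity inside the tumor region of interest divided by the mean intensity of surrounding normal tissue. A minimal sketch of that computation, with an illustrative toy image rather than data from the study:

```python
import numpy as np

def tumor_to_background_ratio(image, tumor_mask, background_mask):
    """Mean fluorescence intensity in the tumor ROI divided by
    the mean intensity in a background (normal tissue) ROI."""
    tumor_mean = image[tumor_mask].mean()
    background_mean = image[background_mask].mean()
    return tumor_mean / background_mean

# Toy 4x4 "fluorescence image" with a bright 2x2 tumor region.
image = np.array([
    [10, 10, 10, 10],
    [10, 25, 25, 10],
    [10, 25, 25, 10],
    [10, 10, 10, 10],
], dtype=float)
tumor_mask = np.zeros_like(image, dtype=bool)
tumor_mask[1:3, 1:3] = True
background_mask = ~tumor_mask

print(tumor_to_background_ratio(image, tumor_mask, background_mask))  # 2.5
```

In practice the two masks would be drawn over co-registered in vivo images; here they are hypothetical arrays chosen so the toy ratio happens to equal the study's mean TBR of 2.5.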

    Computer- and robot-assisted Medical Intervention

    Medical robotics includes assistive devices used by the physician to make diagnostic or therapeutic practice easier and more efficient. This chapter focuses on such systems. It introduces the general field of Computer-Assisted Medical Interventions, its aims and its different components, and describes the place of robots in that context. The evolution of general design and control paradigms in the development of medical robots is presented, and issues specific to this application domain are discussed. A view of existing systems, ongoing developments, and future trends is given, and a case study is detailed. Other types of robotic help in the medical environment (such as assisting a handicapped person, rehabilitating a patient, or replacing damaged or missing limbs or organs) are outside the scope of this chapter.
    Comment: Handbook of Automation, Shimon Nof (Ed.) (2009) 000-00

    Image-Fusion for Biopsy, Intervention, and Surgical Navigation in Urology


    Image Fusion: A Review

    Image fusion is currently regarded as an important integrated information technology that plays a significant role in several domains through the production of high-quality images. Its goal is to blend information from several images while preserving all the significant visual information present in the originals. As a branch of image processing, image fusion merges information from a set of images into a single image that is more informative and better suited to human and machine perception, enhancing image quality for visual interpretation in different applications. This paper offers an outline of image fusion methods, modern trends in image fusion, and image fusion applications. Image fusion can be performed in the spatial or the frequency domain. Spatial-domain fusion is applied directly to the original images by merging the pixel values of two or more images to form the fused image, whereas in the frequency domain the original images are decomposed into multilevel coefficients and synthesized by an inverse transform to compose the fused image. The paper also presents various techniques for image fusion in the spatial and frequency domains, such as averaging, minimum/maximum, IHS, PCA, and transform-based techniques. Different quality measures are explained to enable a comparison of these methods.
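The spatial-domain rules named in this abstract (averaging, minimum/maximum, PCA) operate directly on pixel values of co-registered images. A minimal illustrative sketch of three of them, not code from the paper, using hypothetical 2x2 grayscale arrays:

```python
import numpy as np

def fuse_average(a, b):
    """Pixel-wise mean of two co-registered images."""
    return (a + b) / 2.0

def fuse_max(a, b):
    """Keep the brighter pixel from either image (maximum rule)."""
    return np.maximum(a, b)

def fuse_pca(a, b):
    """PCA-weighted fusion: weight each image by the components of the
    dominant eigenvector of the 2x2 covariance of their pixel vectors."""
    data = np.stack([a.ravel(), b.ravel()])
    cov = np.cov(data)
    eigvals, eigvecs = np.linalg.eigh(cov)
    pc = np.abs(eigvecs[:, np.argmax(eigvals)])  # dominant eigenvector
    w = pc / pc.sum()                            # normalize to weights
    return w[0] * a + w[1] * b

a = np.array([[0.0, 1.0], [0.5, 0.2]])
b = np.array([[1.0, 0.0], [0.5, 0.8]])
print(fuse_average(a, b))  # every pixel is 0.5 for this pair
print(fuse_max(a, b))      # [[1.0, 1.0], [0.5, 0.8]]
```

Frequency-domain methods differ only in where the merge happens: the same kind of rule is applied to transform coefficients (e.g. wavelet subbands) before inverting the transform.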

    Developing and testing a robotic MRI/CT fusion biopsy technique using a purpose-built interventional phantom.

    BACKGROUND: Magnetic resonance imaging (MRI) can be used to target tumour components in biopsy procedures, while the ability to precisely correlate histology and MRI signal is crucial for imaging biomarker validation. Robotic MRI/computed tomography (CT) fusion biopsy offers the potential for this without in-gantry biopsy, although it requires development. METHODS: Test-retest T1 and T2 relaxation times, attenuation (Hounsfield units, HU), and biopsy core quality were prospectively assessed (January-December 2021) in a range of gelatin, agar, and mixed gelatin/agar solutions of differing concentrations on days 1 and 8 after manufacture. Suitable materials were chosen, and four biopsy phantoms were constructed with twelve spherical 1-3-cm diameter targets visible on MRI, but not on CT. A technical pipeline was developed, and intraoperator and interoperator reliability was tested in four operators performing a total of 96 biopsies. Statistical analysis included T1, T2, and HU repeatability using Bland-Altman analysis, the Dice similarity coefficient (DSC), and intraoperator and interoperator reliability. RESULTS: T1, T2, and HU repeatability had 95% limits of agreement of 8.3%, 3.4%, and 17.9%, respectively. The phantom was highly reproducible, with a DSC of 0.93 versus 0.92 for scanning the same or two different phantoms, respectively. The hit rate was 100% (96/96 targets), and all operators performed robotic biopsies using a single volumetric acquisition. The fastest procedure time was 32 min for all 12 targets. CONCLUSIONS: A reproducible biopsy phantom was developed, validated, and used to test robotic MRI/CT fusion biopsy. The technique was highly accurate, reliable, and achievable in clinically acceptable timescales, meaning it is suitable for clinical application.
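The two reproducibility statistics used above are standard and compact: the Dice similarity coefficient compares two binary segmentations, and Bland-Altman analysis summarizes paired test-retest measurements as a bias with 95% limits of agreement. A minimal sketch with made-up numbers, not the study's data:

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

def bland_altman_limits(x, y):
    """Bland-Altman analysis of paired measurements: returns the mean
    difference (bias) and the 95% limits of agreement (bias +/- 1.96 SD)."""
    diff = np.asarray(x, float) - np.asarray(y, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

a = np.array([[1, 1, 0], [1, 0, 0]], dtype=bool)
b = np.array([[1, 1, 0], [0, 0, 0]], dtype=bool)
print(dice(a, b))  # 2*2 / (3+2) = 0.8

# Hypothetical day-1 vs day-8 T1 relaxation times (ms), not study values.
t1_day1 = [900, 1100, 1050, 980]
t1_day8 = [910, 1085, 1060, 975]
print(bland_altman_limits(t1_day1, t1_day8))
```

In the study these would be applied to repeated phantom segmentations (DSC) and to day-1 versus day-8 T1/T2/HU readings (limits of agreement, expressed as percentages of the mean).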

    06311 Abstracts Collection -- Sensor Data and Information Fusion in Computer Vision and Medicine

    From 30.07.06 to 04.08.06, the Dagstuhl Seminar 06311 "Sensor Data and Information Fusion in Computer Vision and Medicine" was held in the International Conference and Research Center (IBFI), Schloss Dagstuhl. Sensor data fusion is of increasing importance for many research fields and applications. Multi-modal imaging is routine in medicine, and in robotics it is common to use multi-sensor data fusion. During the seminar, researchers and application experts working in the field of sensor data fusion presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar, as well as abstracts of seminar results and ideas, are put together in this paper. The first section describes the seminar topics and goals in general; the second part briefly summarizes the contributions.

    Mobile intraoperative CT-assisted frameless stereotactic biopsies achieved single-millimeter trajectory accuracy for deep-seated brain lesions in a sample of 7 patients

    BACKGROUND Brain biopsies are crucial diagnostic interventions, providing valuable information for treatment and prognosis, but depend heavily on high accuracy and precision. We hypothesized that combining neuronavigation-based frameless stereotaxy and MRI-guided trajectory planning with intraoperative CT examination using a mobile unit would yield a seamlessly integrated approach with optimal target accuracy. METHODS We analyzed a total of 7 stereotactic biopsy trajectories for a variety of deep-seated locations and different patient positions. After rigid head fixation, an intraoperative pre-procedural scan was acquired with a mobile CT unit for automatic image fusion with the planning MRI images, and a peri-procedural scan with the biopsy cannula in situ verified the definite target position. We then evaluated the radial trajectory error. RESULTS Intraoperative scanning, surgery, computerized merging of MRI and CT images, and trajectory planning were feasible without difficulties and safe in all cases. We achieved a radial trajectory deviation of 0.97 ± 0.39 mm at a trajectory length of 60 ± 12.3 mm (mean ± standard deviation). Repositioning of the biopsy cannula due to inaccurate targeting was not required. CONCLUSION Intraoperative verification using a mobile CT unit, combined with frameless neuronavigation-guided stereotaxy and pre-operative MRI-based trajectory planning, was feasible, safe, and highly accurate. The setting enabled single-millimeter accuracy for deep-seated brain lesions and direct detection of intraoperative complications, did not require a dedicated operating room, and was seamlessly integrated into common stereotactic procedures.
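The radial trajectory deviation measured above is the perpendicular distance of the verified cannula position from the planned straight-line trajectory. A geometric sketch with illustrative coordinates in millimeters (the entry, target, and tip points are hypothetical, not the study's measurements):

```python
import numpy as np

def radial_deviation(entry, target, tip):
    """Perpendicular distance (mm) of the measured cannula tip
    from the planned straight-line trajectory entry -> target."""
    entry, target, tip = (np.asarray(p, float) for p in (entry, target, tip))
    direction = target - entry
    direction /= np.linalg.norm(direction)   # unit vector along trajectory
    offset = tip - entry
    # Subtract the along-trajectory component; the remainder is radial.
    radial = offset - np.dot(offset, direction) * direction
    return np.linalg.norm(radial)

# Planned 60 mm trajectory along z, measured tip 1 mm off-axis.
entry = [0.0, 0.0, 0.0]
target = [0.0, 0.0, 60.0]
tip = [1.0, 0.0, 59.0]
print(radial_deviation(entry, target, tip))  # 1.0
```

Measuring the error radially, rather than as tip-to-target distance, isolates lateral targeting accuracy from depth placement along the planned path.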

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement of image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in advanced operating rooms today. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. Satisfying all of these requisites is needed to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems over the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability, and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective usage of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are:
    - Techniques for automatic motion compensation and therapy monitoring, applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation.
    - Novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
    The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
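Requisite (ii) above, matching images from different modalities, commonly reduces to estimating a rigid transform between corresponding points (e.g. fiducials visible in both the preoperative image and the tracked intraoperative space). A minimal sketch of the standard least-squares solution (the Kabsch/Procrustes algorithm); this is a generic illustration, not code from the thesis:

```python
import numpy as np

def rigid_registration(src, dst):
    """Least-squares rigid transform (R, t) such that dst ~= R @ src + t,
    via the Kabsch algorithm (SVD of the cross-covariance matrix)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Hypothetical fiducials: preoperative coordinates vs. the same points
# after a known 30-degree rotation about z plus a translation.
src = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], float)
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0,              0,             1]])
dst = src @ R_true.T + np.array([5.0, -2.0, 1.0])

R, t = rigid_registration(src, dst)
print(np.allclose(R, R_true), np.allclose(t, [5.0, -2.0, 1.0]))  # True True
```

With noisy fiducials the same code returns the least-squares best fit, and the residual after applying (R, t) gives the familiar fiducial registration error used to assess navigation accuracy.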

    Computational approaches to Explainable Artificial Intelligence: Advances in theory, applications and trends

    Deep Learning (DL), a groundbreaking branch of Machine Learning (ML), has emerged as a driving force in both theoretical and applied Artificial Intelligence (AI). DL algorithms, rooted in complex and non-linear artificial neural systems, excel at extracting high-level features from data. DL has demonstrated human-level performance in real-world tasks, including clinical diagnostics, and has unlocked solutions to previously intractable problems in virtual agent design, robotics, genomics, neuroimaging, computer vision, and industrial automation. In this paper, the most relevant advances of the last few years in AI, together with several applications to neuroscience, neuroimaging, computer vision, and robotics, are presented, reviewed, and discussed. In this way, we summarize the state of the art in AI methods, models, and applications within a collection of works presented at the 9th International Conference on the Interplay between Natural and Artificial Computation (IWINAC). The works presented in this paper are excellent examples of new scientific discoveries made in laboratories that have successfully transitioned to real-life applications.