
    Real-time Near-infrared Virtual Intraoperative Surgical Photoacoustic Microscopy

    We developed a near-infrared (NIR) virtual intraoperative surgical photoacoustic microscopy (NIR-VISPAM) system that combines a conventional surgical microscope with an NIR photoacoustic microscopy (PAM) system. NIR-VISPAM can simultaneously visualize PA B-scan images at a maximum display rate of 45 Hz and display enlarged microscopic images on the surgeon's view plane through the ocular lenses of the surgical microscope as augmented reality. The use of invisible NIR light eliminates the disturbance to the surgeon's vision caused by the visible PAM excitation laser in a previous report. Further, the maximum permissible laser pulse energy at this wavelength is approximately five times higher than that in the visible spectral range. The use of a needle-type ultrasound transducer, without any water bath for acoustic coupling, enhances convenience in an intraoperative environment. We successfully guided needle insertion and carbon-particle injection in biological tissues ex vivo and in melanoma-bearing mice in vivo.
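
    The roughly fivefold gain in permissible pulse energy is consistent with the ANSI Z136.1 skin exposure limits, where the single-pulse MPE scales with a wavelength correction factor C_A that is 1 in the visible band and 5 between 1050 and 1400 nm. The sketch below is an illustrative calculation under that assumption; the 532 nm and 1064 nm wavelengths are example values, not parameters taken from the paper.

```python
# Illustrative only: skin MPE for a single short (1-100 ns) pulse following the
# ANSI Z136.1 convention, MPE = 20 * C_A mJ/cm^2, where the wavelength
# correction factor C_A is 1.0 in the visible band and 5.0 for 1050-1400 nm.
# The 1064 nm example wavelength is an assumption; the abstract only says "NIR".

def c_a(wavelength_nm: float) -> float:
    """Wavelength correction factor C_A (ANSI Z136.1 convention)."""
    if 400 <= wavelength_nm < 700:
        return 1.0
    if 700 <= wavelength_nm < 1050:
        return 10 ** (0.002 * (wavelength_nm - 700))
    if 1050 <= wavelength_nm <= 1400:
        return 5.0
    raise ValueError("outside the 400-1400 nm hazard region covered here")

def skin_mpe_single_pulse(wavelength_nm: float) -> float:
    """Approximate single-pulse skin MPE in mJ/cm^2 for 1-100 ns pulses."""
    return 20.0 * c_a(wavelength_nm)

visible = skin_mpe_single_pulse(532)   # e.g. a visible PAM excitation line
nir = skin_mpe_single_pulse(1064)      # assumed NIR excitation wavelength
print(f"532 nm: {visible:.0f} mJ/cm^2, 1064 nm: {nir:.0f} mJ/cm^2, "
      f"ratio ~{nir / visible:.1f}x")  # ~5x, consistent with the abstract
```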

    Image guidance in neurosurgical procedures, the "Visages" point of view.

    This paper gives an overview of the evolution of clinical neuroinformatics in the domain of neurosurgery. It shows how image-guided neurosurgery (IGNS) is evolving through the integration of new imaging modalities before, during, and after the surgical procedure, and how this acts as the premise of the operating room of the future. These issues, as addressed by the VisAGeS INRIA/INSERM U746 research team (http://www.irisa.fr/visages), are presented and discussed to demonstrate the benefits of integrated work between physicians (radiologists, neurologists, and neurosurgeons) and computer scientists toward a more effective use of images in IGNS.

    Looking for a perfect match: multimodal combinations of Raman spectroscopy for biomedical applications

    Raman spectroscopy has shown very promising results in medical diagnostics by providing label-free and highly specific molecular information on pathological tissue ex vivo and in vivo. Nevertheless, the high specificity of Raman spectroscopy comes at a price: a low acquisition rate, no direct access to depth information, and limited sampling areas. A similar balance of advantages and disadvantages applies to other highly regarded optical modalities, such as optical coherence tomography, autofluorescence imaging and fluorescence spectroscopy, fluorescence lifetime microscopy, second-harmonic generation, and others. While these modalities offer significantly higher acquisition speeds, they have no or only limited molecular specificity and are sensitive to only a small group of molecules. It can safely be stated that a single modality provides only a limited view of a specific aspect of a biological specimen and cannot capture the entire complexity of a sample. To address this, multimodal optical systems, which combine different optical modalities tailored to a particular need, are becoming increasingly common in translational research and will be indispensable diagnostic tools in clinical pathology in the near future. These systems can assess different and partially complementary aspects of a sample and provide a distinct set of independent biomarkers. Here, we give an overview of the development of multimodal systems that use Raman spectroscopy in combination with other optical modalities to improve diagnostic performance.
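
    One practical pattern behind such combinations is to let a fast, wide-field modality decide where the slow but specific Raman point measurements are worth taking. The sketch below illustrates this idea only; the autofluorescence map, threshold, and grid spacing are invented placeholders, not parameters from any system discussed in the review.

```python
import numpy as np

# Minimal sketch of one way a fast wide-field modality can compensate for the
# low acquisition rate and limited sampling area of Raman spectroscopy: a
# (hypothetical) autofluorescence intensity map flags suspicious pixels, and
# Raman point spectra are scheduled only at those locations.

rng = np.random.default_rng(0)
autofluorescence = rng.random((128, 128))   # stand-in wide-field image

SUSPICION_THRESHOLD = 0.95                  # assumed cut-off
GRID_SPACING = 8                            # limit points per unit area

suspicious = np.argwhere(autofluorescence > SUSPICION_THRESHOLD)

# Thin the candidate list so neighbouring pixels do not all get measured.
selected = []
for y, x in suspicious:
    if all(abs(y - sy) >= GRID_SPACING or abs(x - sx) >= GRID_SPACING
           for sy, sx in selected):
        selected.append((y, x))

print(f"{len(suspicious)} suspicious pixels -> {len(selected)} Raman points")
# Each selected (y, x) would then be passed to the (slow) Raman acquisition,
# instead of raster-scanning the full field of view.
```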

    Multimodal optical systems for clinical oncology

    This thesis presents three multimodal optical (light-based) systems designed to improve the capabilities of existing optical modalities for cancer diagnostics and theranostics. Optical diagnostic and therapeutic modalities have seen tremendous success in improving the detection, monitoring, and treatment of cancer. For example, optical spectroscopies can accurately distinguish between healthy and diseased tissues, fluorescence imaging can light up tumours for surgical guidance, and laser systems can treat many epithelial cancers. However, despite these advances, prognoses for many cancers remain poor, positive margin rates following resection remain high, and visual inspection and palpation remain crucial for tumour detection. The synergistic combination of multiple optical modalities, as presented here, offers a promising solution. The first multimodal optical system (Chapter 3) combines Raman spectroscopic diagnostics with photodynamic therapy using a custom-built multimodal optical probe. Crucially, this system demonstrates the feasibility of nanoparticle-free theranostics, which could simplify the clinical translation of cancer theranostic systems without sacrificing diagnostic or therapeutic benefit. The second system (Chapter 4) applies computer vision to Raman spectroscopic diagnostics to achieve spatial spectroscopic diagnostics. It provides an augmented reality display of the surgical field-of-view, overlaying spatially co-registered spectroscopic diagnoses onto imaging data. This enables the translation of Raman spectroscopy from a 1D technique to a 2D diagnostic modality and overcomes the trade-off between diagnostic accuracy and field-of-view that has limited optical systems to date. The final system (Chapter 5) integrates fluorescence imaging and Raman spectroscopy for fluorescence-guided spatial spectroscopic diagnostics. This facilitates macroscopic tumour identification to guide accurate spectroscopic margin delineation, enabling the spectroscopic examination of suspicious lesions across large tissue areas. Together, these multimodal optical systems demonstrate that the integration of multiple optical modalities has the potential to improve patient outcomes through enhanced tumour detection and precision-targeted therapies.
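
    For the spatial co-registration in Chapter 4, the underlying geometric step is projecting tracked measurement points into the camera image so that each point can be coloured by its diagnosis. The sketch below shows that step with the standard pinhole camera model via OpenCV; the intrinsics, pose, probe positions, and labels are placeholder assumptions, not values from the thesis.

```python
import numpy as np
import cv2

# Minimal sketch, not the thesis implementation: overlay tracked Raman
# measurement points onto a camera image using the standard pinhole model.
# Camera intrinsics, extrinsics, and the measurement points are placeholders.

camera_matrix = np.array([[800.0, 0.0, 320.0],
                          [0.0, 800.0, 240.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)              # assume negligible lens distortion
rvec = np.zeros(3)                     # world -> camera rotation (Rodrigues)
tvec = np.array([0.0, 0.0, 0.3])       # world -> camera translation (m)

# Tracked probe-tip positions (world frame, metres) and their diagnoses.
points_3d = np.array([[0.01, 0.00, 0.00],
                      [-0.02, 0.01, 0.00],
                      [0.00, -0.015, 0.00]])
diagnoses = ["tumour", "healthy", "tumour"]
colours = {"tumour": (0, 0, 255), "healthy": (0, 255, 0)}  # BGR

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in surgical view

pixels, _ = cv2.projectPoints(points_3d, rvec, tvec, camera_matrix, dist_coeffs)
for (u, v), label in zip(pixels.reshape(-1, 2), diagnoses):
    cv2.circle(image, (int(round(u)), int(round(v))), 8, colours[label], -1)
    cv2.putText(image, label, (int(u) + 12, int(v)),
                cv2.FONT_HERSHEY_SIMPLEX, 0.5, colours[label], 1)
```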

    Current and Future Advances in Surgical Therapy for Pituitary Adenoma

    The vital physiological role of the pituitary gland, together with its proximity to critical neurovascular structures, means that pituitary adenomas can cause significant morbidity and mortality. Whilst enormous advancements have been made in the surgical care of pituitary adenomas, treatment failure and recurrence remain challenges. To meet these clinical challenges, there has been an enormous expansion of novel medical technologies (e.g. endoscopy, advanced imaging, artificial intelligence). These innovations have the potential to benefit each step of the patient journey and, ultimately, drive improved outcomes. Earlier and more accurate diagnosis addresses this in part: analysis of novel patient datasets, such as automated facial analysis or natural language processing of medical records, holds potential for achieving an earlier diagnosis. After diagnosis, treatment decision-making and planning will benefit from radiomics and multimodal machine learning models. Surgical safety and effectiveness will be transformed by smart simulation methods for trainees. Next-generation imaging techniques and augmented reality will enhance surgical planning and intraoperative navigation. Similarly, the future armamentarium of pituitary surgeons, including advanced optical devices, smart instruments and surgical robotics, will augment the surgeon's abilities. Intraoperative support to team members will benefit from a surgical data science approach, utilising machine learning analysis of operative videos to improve patient safety and orientate team members to a common workflow. Postoperatively, early detection of individuals at risk of complications and prediction of treatment failure through neural networks applied to multimodal datasets will support earlier intervention and safer hospital discharge, and will guide follow-up and adjuvant treatment decisions. Whilst advancements in pituitary surgery hold promise to enhance the quality of care, clinicians must be the gatekeepers of technological translation, ensuring systematic assessment of risk and benefit. In doing so, the synergy between these innovations can be leveraged to drive improved outcomes for the patients of the future.
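
    As a rough illustration of what such multimodal prediction involves, the sketch below concatenates per-patient features from several modalities and fits a small neural network classifier with scikit-learn. All data are randomly generated placeholders and the feature counts are assumptions; this is a generic pattern, not a model described in the review.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Illustrative sketch only: features from several modalities are concatenated
# into one vector per patient and fed to a classifier predicting an outcome
# such as treatment failure.  Everything here is random placeholder data.

rng = np.random.default_rng(42)
n_patients = 200
imaging_features = rng.normal(size=(n_patients, 20))   # e.g. radiomics
clinical_features = rng.normal(size=(n_patients, 8))    # e.g. labs, demographics
operative_features = rng.normal(size=(n_patients, 5))   # e.g. video-derived metrics
outcome = rng.integers(0, 2, size=n_patients)           # treatment failure yes/no

X = np.hstack([imaging_features, clinical_features, operative_features])
X_train, X_test, y_train, y_test = train_test_split(
    X, outcome, test_size=0.25, random_state=0)

model = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")  # ~chance on random data
```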

    Real-time, label-free, intraoperative visualization of peripheral nerves and microvasculatures using multimodal optical imaging techniques

    Accurate, real-time identification and display of critical anatomic structures, such as nerves and vasculature, are essential for reducing complications and improving surgical outcomes. Human vision is often limited in clearly distinguishing and contrasting these structures. We present a novel imaging system that enables noninvasive visualization of critical anatomic structures during surgical dissection. Peripheral nerves are visualized by snapshot polarimetry, which calculates anisotropic optical properties. Vascular structures, both venous and arterial, are identified and monitored in real time using near-infrared laser speckle contrast imaging. We evaluate the system in in vivo animal studies, with qualitative comparison against contrast-agent-aided fluorescence imaging.
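
    The vascular channel relies on laser speckle contrast imaging, where the local speckle contrast K = sigma / mean is computed over a small sliding window and drops where moving blood blurs the speckle during the exposure. A minimal sketch of that computation follows; the window size and synthetic frame are placeholder assumptions, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Minimal sketch of the standard laser speckle contrast computation:
# K = sigma / mean over a small sliding window.  Flowing blood blurs the
# speckle, lowering K, so vessels appear as low-contrast structures in the map.

def speckle_contrast(raw_frame: np.ndarray, window: int = 7) -> np.ndarray:
    frame = raw_frame.astype(np.float64)
    mean = uniform_filter(frame, size=window)
    mean_sq = uniform_filter(frame ** 2, size=window)
    variance = np.clip(mean_sq - mean ** 2, 0.0, None)  # guard against round-off
    return np.sqrt(variance) / np.maximum(mean, 1e-12)

# Example with a synthetic speckle-like frame.
rng = np.random.default_rng(0)
raw = rng.exponential(scale=100.0, size=(256, 256))  # fully developed speckle
K = speckle_contrast(raw, window=7)
print(f"mean contrast: {K.mean():.2f}")  # close to 1 for static developed speckle
```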

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have become some of the most significant challenges facing humanity. As an interdisciplinary frontier spanning information science, computer technology, electronic engineering, and biomedical engineering, this work applies modern engineering methods to explore the early diagnosis, treatment, and rehabilitation of diseases such as cancer. This paper reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications: starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, three-dimensional visualization methods, and interactive software techniques; and summarizes the clinical applications of various minimally invasive surgical procedures. It also discusses the strengths and weaknesses of surgical navigation technologies worldwide in clinical use and analyzes the latest technical methods in the field. On this basis, it identifies a trend in minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and treatment, robotization, and a high degree of intelligence. [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canadian Foundation for Innovation, the Canadian Institutes for Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.
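
    A recurring building block in the navigation pipelines surveyed here is registering preoperative images to the patient or to an external tracker. The sketch below shows the classic SVD-based least-squares rigid alignment of paired fiducials (in the style of Arun et al., 1987) with placeholder coordinates; it is illustrative and not a method attributed to this review.

```python
import numpy as np

# Minimal sketch of SVD-based rigid point registration, a common building
# block for image-to-patient registration in navigated endoscopy.
# The fiducial coordinates below are placeholders.

def rigid_register(source: np.ndarray, target: np.ndarray):
    """Return R, t minimising sum ||R @ source_i + t - target_i||^2."""
    src_centroid = source.mean(axis=0)
    tgt_centroid = target.mean(axis=0)
    H = (source - src_centroid).T @ (target - tgt_centroid)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = tgt_centroid - R @ src_centroid
    return R, t

# Paired fiducials: CT coordinates (mm) and the same points in tracker space.
ct_points = np.array([[0.0, 0.0, 0.0], [50.0, 0.0, 0.0],
                      [0.0, 40.0, 0.0], [0.0, 0.0, 30.0]])
true_R, _ = np.linalg.qr(np.random.default_rng(1).normal(size=(3, 3)))
if np.linalg.det(true_R) < 0:
    true_R[:, 0] *= -1                # ensure a proper rotation
tracker_points = ct_points @ true_R.T + np.array([10.0, -5.0, 2.0])

R, t = rigid_register(ct_points, tracker_points)
fre = np.linalg.norm(ct_points @ R.T + t - tracker_points, axis=1).mean()
print(f"fiducial registration error: {fre:.2e} mm")  # ~0 for noise-free points
```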