374 research outputs found

    Dental cone beam CT: An updated review

    Cone beam computed tomography (CBCT) is a versatile 3D x-ray imaging technique that has gained significant popularity in dental radiology over the last two decades. CBCT overcomes the limitations of traditional two-dimensional dental imaging and enables accurate depiction of multiplanar detail in maxillofacial bony structures and surrounding soft tissues. In this review article, we provide an updated status report on dental CBCT imaging and summarise the technical features of currently used CBCT scanner models, extending to recent developments in scanner technology, clinical aspects, and regulatory perspectives on dose optimisation, dosimetry, and diagnostic reference levels. We also consider the outlook for potential techniques, along with issues that should be resolved to provide clinically more effective CBCT examinations that are optimised for the benefit of the patient. Peer reviewed

    IMPROVED IMAGE QUALITY IN CONE-BEAM COMPUTED TOMOGRAPHY FOR IMAGE-GUIDED INTERVENTIONS

    In the past few decades, cone-beam computed tomography (CBCT) has emerged as a rapidly developing imaging modality that provides single-rotation 3D volumetric reconstruction with sub-millimeter spatial resolution. Compared to conventional multi-detector CT (MDCT), CBCT exhibits a number of characteristics well suited to applications in image-guided interventions, including greater mechanical simplicity, higher portability, and lower cost. Although the current generation of CBCT has shown strong promise for high-resolution and high-contrast imaging (e.g., visualization of bone structures and surgical instrumentation), it is often believed that CBCT yields inferior contrast resolution compared to MDCT and is unsuitable for soft-tissue imaging. Aiming to expand the utility of CBCT in image-guided interventions, this dissertation concerns the development of advanced imaging systems and algorithms to tackle the challenges of soft-tissue contrast resolution. The presented material includes work encompassing: (i) a comprehensive simulation platform to generate realistic CBCT projections (e.g., as training data for deep learning approaches); (ii) a new projection-domain statistical noise model to improve the noise-resolution tradeoff in model-based iterative reconstruction (MBIR); (iii) a novel method to avoid CBCT metal artifacts by optimization of the source-detector orbit; (iv) an integrated software pipeline to correct various forms of CBCT artifacts (i.e., lag, glare, scatter, beam hardening, patient motion, and truncation); (v) a new 3D reconstruction method that reconstructs only the difference image from the image prior for use in CBCT neuro-angiography; and (vi) a novel 3D image reconstruction method (DL-Recon) that combines a deep learning (DL)-based image synthesis network with physics-based models via Bayesian estimation of the statistical uncertainty of the neural network.
    Specific clinical challenges were investigated in monitoring patients in the neurological critical care unit (NCCU) and in advancing intraoperative soft-tissue imaging capability in image-guided spinal and intracranial neurosurgery. The results show that the methods proposed in this work substantially improved soft-tissue contrast in CBCT. The thesis demonstrates that advanced imaging approaches based on accurate system models, novel artifact reduction methods, and emerging 3D image reconstruction algorithms can effectively tackle current challenges in soft-tissue contrast resolution and expand the application of CBCT in image-guided interventions.
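    The projection-domain noise modeling mentioned above rests on a standard physical picture: the Beer-Lambert law gives the mean detector count for a given line integral of attenuation, and Poisson statistics model quantum noise. The following is a minimal sketch of that picture only, not the dissertation's actual noise model; the photon fluence `i0` and the flat phantom are illustrative assumptions:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_projection(line_integrals, i0=1e5):
        """Simulate noisy detector line integrals for a CBCT projection.

        Beer-Lambert gives the mean photon count per detector pixel;
        Poisson sampling adds quantum noise; the log transform maps the
        counts back to the line-integral domain used by reconstruction.
        """
        mean_counts = i0 * np.exp(-line_integrals)
        counts = rng.poisson(mean_counts)
        # Clip to one count to avoid log(0) in fully attenuated pixels.
        return -np.log(np.maximum(counts, 1) / i0)

    # A flat phantom: uniform attenuation path length of 2.0 across a
    # 512-pixel detector row.
    proj = noisy_projection(np.full(512, 2.0))
    ```

    The noise level in `proj` grows as the counts fall (i.e., with thicker objects), which is exactly the behavior a projection-domain statistical weight in MBIR is designed to account for.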

    Computed Tomography of Chemiluminescence: A 3D Time Resolved Sensor for Turbulent Combustion

    Time-resolved 3D measurements of turbulent flames are required to further our understanding of combustion and to support advanced simulation techniques such as large eddy simulation (LES). Computed Tomography of Chemiluminescence (CTC) allows a flame's 3D chemiluminescence profile to be obtained by inverting a series of integral measurements. CTC provides the instantaneous 3D flame structure and can also measure excited-species concentrations, equivalence ratio, heat release rate, and possibly strain rate. High resolutions require simultaneous measurements from many viewpoints, and the cost of multiple sensors has traditionally limited spatial resolution. However, recent improvements in commodity cameras make a high-resolution CTC sensor possible, and this is investigated in this work. Using realistic LES phantoms (known fields), the CT algorithm, the algebraic reconstruction technique (ART), is shown to produce low-error reconstructions even from limited, noisy datasets. Error from self-absorption is also tested using LES phantoms, and a modification to ART that successfully corrects this error is presented. A proof-of-concept experiment using 48 non-simultaneous views is performed and successfully resolves a Matrix Burner flame to 0.01% of the domain width (D). ART is also extended to 3D (without stacking) to allow 3D camera locations and optical effects to be considered. An optical integral geometry (weighted double-cone) is presented that corrects for limited depth of field and, even with poorly estimated camera parameters, reconstructs the Matrix Burner as well as the standard geometry does. CTC is implemented using five PicSight P32M cameras and mirrors to provide 10 simultaneous views. Measurements of the Matrix Burner and a turbulent opposed jet achieve exposure times as low as 62 μs, with even shorter exposures possible. With only 10 views, the spatial resolution of the reconstructions is low. However, a cosine phantom study shows that 20–40 viewing angles are necessary to achieve high resolutions (0.01–0.04D).
    With 40 P32M cameras costing £40,000, future CTC implementations can achieve high spatial and temporal resolutions.
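    The ART algorithm named above is, at its core, the Kaczmarz method for the linear system relating the emission field to the integral (ray-sum) measurements: each iteration projects the current estimate onto the hyperplane of one measurement row. The sketch below is a generic toy illustration on a 2×2 field with hypothetical row/column ray sums, not the thesis's 3D implementation:

    ```python
    import numpy as np

    def art(A, b, iters=200, relax=0.5):
        """Algebraic Reconstruction Technique (Kaczmarz) for A x = b.

        Cycles through the measurement rows, projecting the current
        estimate onto each row's hyperplane. The relaxation factor damps
        each update, which helps when the measurements are noisy.
        """
        x = np.zeros(A.shape[1])
        row_norms = (A * A).sum(axis=1)
        for _ in range(iters):
            for i in range(A.shape[0]):
                if row_norms[i] == 0:
                    continue
                residual = b[i] - A[i] @ x
                x += relax * residual / row_norms[i] * A[i]
        return x

    # Toy 2x2 emission field, measured by 4 rays (row sums and column sums).
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
    x_true = np.array([1., 2., 3., 4.])
    x_rec = art(A, A @ x_true)
    ```

    Starting from zero, Kaczmarz converges to the minimum-norm solution consistent with the data, which is why limited-view tomography (10 views versus 20–40) trades directly against reconstruction fidelity.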

    Advanced Endoscopic Navigation: Surgical Big Data, Methodology, and Applications

    With the rapid development of science and technology, health and environmental problems have increasingly become among the most significant challenges facing humanity. Drawing on frontier topics at the intersection of information science, computer technology, electronic engineering, and biomedical engineering, modern engineering methods are studied to explore means for the early diagnosis, treatment, and rehabilitation of diseases such as cancer. This thesis reviews computer-assisted minimally invasive surgical navigation, multimodal medical big data, methodology, and clinical applications. Starting from the concept of minimally invasive surgical navigation, it introduces preoperative and intraoperative multimodal medical imaging methods for medical big data; describes the core workflow of advanced minimally invasive surgical navigation, including computational anatomical models, intraoperative real-time navigation schemes, 3D visualization methods, and interactive software techniques; and summarizes the clinical applications of various minimally invasive surgical methods. It also discusses the advantages and disadvantages of surgical navigation techniques worldwide in clinical use and analyzes the latest technical methods in the field. On this basis, it identifies the trend of minimally invasive surgery toward digitalization, personalization, precision, integrated diagnosis and therapy, robotization, and high levels of intelligence. [Abstract] Interventional endoscopy (e.g., bronchoscopy, colonoscopy, laparoscopy, cystoscopy) is a widely performed procedure that involves either diagnosis of suspicious lesions or guidance for minimally invasive surgery in a variety of organs within the body cavity. Endoscopy may also be used to guide the introduction of certain items (e.g., stents) into the body. Endoscopic navigation systems seek to integrate big data with multimodal information (e.g., computed tomography, magnetic resonance images, endoscopic video sequences, ultrasound images, external trackers) relative to the patient's anatomy, control the movement of medical endoscopes and surgical tools, and guide the surgeon's actions during endoscopic interventions. Nevertheless, it remains challenging to realize the next generation of context-aware navigated endoscopy. This review presents a broad survey of various aspects of endoscopic navigation, particularly with respect to the development of endoscopic navigation techniques. First, we investigate big data with multimodal information involved in endoscopic navigation. Next, we focus on numerous methodologies used for endoscopic navigation. We then review different endoscopic procedures in clinical applications. Finally, we discuss novel techniques and promising directions for the development of endoscopic navigation. X.L. acknowledges funding from the Fundamental Research Funds for the Central Universities. T.M.P. acknowledges funding from the Canada Foundation for Innovation, the Canadian Institutes of Health Research, the Natural Sciences and Engineering Research Council of Canada, and a grant from Intuitive Surgical Inc.

    Recent advances in optical tomography in low scattering media

    Low-scattering media offer the best scenario for optical imaging in thick samples and deep tissue, as they allow high-resolution images to be obtained without suffering the limitations that diffusion imposes. The high contribution of ballistic light in this regime enabled the development of light sheet microscopy and optical projection tomography, two of the most common techniques in research laboratories today. Their revolutionary approach and wide spectrum of applications and possibilities have led to a frenetic rhythm of new works and techniques arising every year. The large amount of information available often overwhelms scientists and researchers trying to keep up to date with the latest cutting-edge advances in the field. This paper gives a brief review of the origins and fundamental aspects of these two techniques and then focuses on the most recent, as-yet-unreviewed works. Apart from novel methods, this document also covers combined multimodal approaches and systems. To conclude, we put a spotlight on the important role that open-source microscopy systems play in the field, as they improve the accessibility of these techniques and promote collaborative networks across the optical imaging community.

    Augmented Reality and Artificial Intelligence in Image-Guided and Robot-Assisted Interventions

    In minimally invasive orthopedic procedures, the surgeon places wires, screws, and surgical implants through the muscles and bony structures under image guidance. These interventions require alignment of the pre- and intra-operative patient data, the intra-operative scanner, surgical instruments, and the patient. Suboptimal interaction with patient data and the challenge of mastering 3D anatomy from ill-posed 2D interventional images are essential concerns in image-guided therapies. State-of-the-art approaches often support the surgeon with external navigation systems or ill-conditioned image-based registration methods, both of which have certain drawbacks. Augmented reality (AR) has been introduced into operating rooms in the last decade; however, in image-guided interventions it has often been considered only a visualization device that improves traditional workflows. Consequently, the technology has yet to gain the maturity it requires to redefine new procedures, user interfaces, and interactions. This dissertation investigates the applications of AR, artificial intelligence, and robotics in interventional medicine. Our solutions were applied to a broad spectrum of problems and tasks, namely improving imaging and acquisition, image computing and analytics for registration and image understanding, and enhancing interventional visualization. The benefits of these approaches were also demonstrated in robot-assisted interventions. We revealed how exemplary workflows can be redefined via AR by taking full advantage of head-mounted displays that are entirely co-registered with the imaging systems and the environment at all times. The proposed AR landscape is enabled by co-localizing the users and the imaging devices via the operating room environment and exploiting all involved frustums to move spatial information between different bodies.
    The system's awareness of the geometric and physical characteristics of X-ray imaging allows the exploration of different human-machine interfaces. We also leveraged the principles governing image formation and combined them with deep learning and RGBD sensing to fuse images and reconstruct interventional data. We hope that our holistic approaches to improving the interface of surgery and enhancing the usability of interventional imaging not only augment the surgeon's capabilities but also improve the surgical team's experience in carrying out an effective intervention with reduced complications.
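    Point-based rigid registration of corresponding fiducials is one standard building block for the pre-/intra-operative alignment described above. The following Kabsch/Procrustes sketch is a generic illustration under that assumption (the dissertation's own registration pipeline is not specified here), and the synthetic pose is purely for demonstration:

    ```python
    import numpy as np

    def register_rigid(src, dst):
        """Least-squares rigid alignment of paired 3D fiducial points.

        Returns rotation R and translation t such that dst ~= src @ R.T + t,
        using the SVD-based Kabsch solution with a reflection guard.
        """
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        d = np.sign(np.linalg.det(Vt.T @ U.T))   # avoid improper rotations
        R = Vt.T @ np.diag([1., 1., d]) @ U.T
        t = dst.mean(0) - src.mean(0) @ R.T
        return R, t

    # Verify on a synthetic pose: rotate and translate 4 fiducials, then
    # recover the transform from the point pairs alone.
    rng = np.random.default_rng(1)
    pts = rng.normal(size=(4, 3))
    theta = 0.3
    R_true = np.array([[np.cos(theta), -np.sin(theta), 0.],
                       [np.sin(theta),  np.cos(theta), 0.],
                       [0., 0., 1.]])
    moved = pts @ R_true.T + np.array([1., 2., 3.])
    R, t = register_rigid(pts, moved)
    ```

    In practice such a closed-form fit typically initializes the chain of transforms between tracker, imaging device, and head-mounted display, after which image-based or model-based refinement takes over.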

    Adaptive cone-beam scan trajectories for interventional applications

    Interventional X-ray imaging provides physicians with information about patient morphology during minimally invasive procedures. Because of its tissue-damaging effects, however, it must be used judiciously. At present, physicians can choose only between low-dose X-ray projections without depth information and radiation-intensive cone-beam computed tomography with depth information. Many medical applications, such as position checks, require depth information but not a complete 3D dataset. Adaptive scan trajectories can close this gap by deliberately undersampling the object and thus acquiring the relevant information in a dose-efficient manner. This work presents a method that allows new adaptive scan trajectories to be implemented on a C-arm system. The feasibility of the method was demonstrated using one class of scan trajectories, circular tomosynthesis (ZT). Scatter measurements showed that ZT exhibits a more favorable scatter distribution than classical 3D trajectories. In critical body regions such as the upper torso and face, a lower relative dose of 75% and 46% (ZT) was measured compared with classical trajectories (100% and 63%). The scan trajectories were combined with a calibration procedure that also permits retrospective calibration at arbitrary positions in the intervention room. In stress tests, the positions of metal spheres in an evaluation phantom were determined with a mean accuracy of (0.01 ± 0.08) mm and a mean radius deviation of (0.13 ± 0.07) mm. At a voxel size of 0.48 mm, these deviations are smaller than the measurement accuracy of the imaging system. The investigated trajectories use only a quarter to a fifth of the projections of conventional 3D trajectories.
    The undersampling of the object and the dose saving cause artifacts in the image data. Using a prior-knowledge-based approach, these artifacts were minimized and the image quality was improved to that of a conventional 3D dataset. The results of this work show that adaptive scan trajectories can extend interventional X-ray imaging with a new imaging mode that acquires the relevant image information at reduced dose compared with current imaging modes.

    Technical advances in image-guided radiation therapy systems
