28 research outputs found

    Deep learning-based synthetic-CT generation in radiotherapy and PET: A review

    Recently, deep learning (DL)-based methods for the generation of synthetic computed tomography (sCT) have received significant research attention as an alternative to classical ones. We present here a systematic review of these methods by grouping them into three categories, according to their clinical applications: (i) to replace computed tomography in magnetic resonance (MR)-based treatment planning, (ii) to facilitate cone-beam computed tomography-based image-guided adaptive radiotherapy, and (iii) to derive attenuation maps for the correction of positron emission tomography. Appropriate database searching was performed on journal articles published between January 2014 and December 2020. The DL methods' key characteristics were extracted from each eligible study, and a comprehensive comparison among network architectures and metrics was reported. A detailed review of each category was given, highlighting essential contributions, identifying specific challenges, and summarizing the achievements. Lastly, the statistics of all the cited works were analyzed from various aspects, revealing the popularity, future trends, and potential of DL-based sCT generation. The current status of DL-based sCT generation was evaluated, assessing the clinical readiness of the presented methods.

    Comparison of CBCT based synthetic CT methods suitable for proton dose calculations in adaptive proton therapy

    In-room imaging is a prerequisite for adaptive proton therapy. The use of onboard cone-beam computed tomography (CBCT) imaging, which is routinely acquired for patient position verification, can enable daily dose reconstructions and plan adaptation decisions. Image quality deficiencies, though, hamper dose calculation accuracy and make correction of CBCTs a necessity. This study compared three methods to correct CBCTs and create synthetic CTs that are suitable for proton dose calculations. CBCTs, planning CTs and repeated CTs (rCT) from 33 H&N cancer patients were used to compare a deep convolutional neural network (DCNN), deformable image registration (DIR) and an analytical image-based correction method (AIC) for synthetic CT (sCT) generation. Image quality of the sCTs was evaluated by comparison with a same-day rCT, using mean absolute error (MAE), mean error (ME), Dice similarity coefficient (DSC), structural non-uniformity (SNU) and signal/contrast-to-noise ratios (SNR/CNR) as metrics. Dosimetric accuracy was investigated in an intracranial setting by performing gamma analysis and calculating range shifts. Neural network-based sCTs resulted in the lowest MAE and ME (37/2 HU) and the highest DSC (0.96), while DIR and AIC generated images with MAEs of 44/77 HU, MEs of -8/1 HU and DSCs of 0.94/0.90, respectively. Gamma and range shift analysis showed almost no dosimetric difference between DCNN- and DIR-based sCTs. The lower image quality of AIC-based sCTs affected dosimetric accuracy and resulted in lower pass ratios and higher range shifts. Patient-specific differences highlighted the advantages and disadvantages of each method. For this set of patients, the DCNN created synthetic CTs with the highest image quality. Accurate proton dose calculations were achieved by both DCNN- and DIR-based sCTs. The AIC method resulted in lower image quality, and dose calculation accuracy was reduced compared to the other methods.
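
    The comparison above rests on straightforward voxel-wise metrics. As a rough illustration, the sketch below computes MAE, ME, and DSC between a synthetic CT and a reference CT, assuming the two volumes are already aligned and resampled onto the same grid; the placeholder arrays and the -400 HU body threshold are illustrative assumptions, not values from the study.

```python
# Minimal sketch of the image-quality metrics reported above (MAE, ME, DSC).
import numpy as np

def mae_me(sct: np.ndarray, rct: np.ndarray, mask: np.ndarray) -> tuple[float, float]:
    """Mean absolute error and mean error (in HU) inside an evaluation mask."""
    diff = sct[mask] - rct[mask]
    return float(np.mean(np.abs(diff))), float(np.mean(diff))

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

# Placeholder volumes standing in for an sCT and a same-day rCT.
sct = np.random.normal(0, 100, (64, 64, 64))
rct = np.random.normal(0, 100, (64, 64, 64))

# Body mask from a simple HU threshold (-400 HU is an assumed value).
body = rct > -400
mae, me = mae_me(sct, rct, body)
dsc = dice(sct > -400, body)
print(f"MAE={mae:.1f} HU  ME={me:.1f} HU  DSC={dsc:.2f}")
```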

    An Open-Source COVID-19 CT Dataset with Automatic Lung Tissue Classification for Radiomics

    The coronavirus disease 2019 (COVID-19) pandemic is having a dramatic impact on society and healthcare systems. In this complex scenario, lung computed tomography (CT) may play an important prognostic role. However, datasets released so far present limitations that hamper the development of tools for quantitative analysis. In this paper, we present an open-source lung CT dataset comprising information on 50 COVID-19-positive patients. The CT volumes are provided along with (i) an automatic threshold-based annotation obtained with a Gaussian mixture model (GMM) and (ii) a scoring provided by an expert radiologist. This score was found to correlate significantly with the presence of ground glass opacities and the consolidation found with the GMM. The dataset is freely available in an ITK-based file format under the CC BY-NC 4.0 license. The code for GMM fitting is publicly available as well. We believe that our dataset will provide a unique opportunity for researchers working in the field of medical image analysis, and we hope that its release will lay the foundations for the successful implementation of algorithms to support clinicians in facing the COVID-19 pandemic.
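
    The annotation pipeline described above fits a Gaussian mixture model to lung HU values. The sketch below shows, under stated assumptions, how such a per-voxel labelling could be obtained with scikit-learn; the three-component choice and the synthetic HU distributions are illustrative and not taken from the dataset's actual code.

```python
# Minimal sketch of GMM-based labelling of lung CT voxels.
import numpy as np
from sklearn.mixture import GaussianMixture

# Placeholder HU values for voxels inside a precomputed lung mask
# (means and spreads are assumed, for illustration only).
rng = np.random.default_rng(0)
hu = np.concatenate([
    rng.normal(-850, 60, 5000),   # aerated lung
    rng.normal(-550, 80, 2000),   # ground glass opacity
    rng.normal(-100, 70, 1000),   # consolidation
]).reshape(-1, 1)

gmm = GaussianMixture(n_components=3, random_state=0).fit(hu)
labels = gmm.predict(hu)

# Reorder components by mean HU so label 0 = most aerated tissue.
order = np.argsort(gmm.means_.ravel())
remap = np.zeros_like(order)
remap[order] = np.arange(len(order))
labels = remap[labels]

for k, frac in zip(range(3), np.bincount(labels) / labels.size):
    print(f"class {k}: mean={gmm.means_.ravel()[order][k]:.0f} HU, fraction={frac:.2f}")
```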

    Ageing of flax textiles: fingerprints in micro-Raman spectra of single fibres

    Flax fibre (Linum usitatissimum) is probably the earliest textile material and holds great archaeological interest. The possibility of defining a connection between ageing and molecular characteristics is thus a concrete goal aimed at helping indirect dating. In the present work, this possibility was investigated with spectroscopic techniques that allow the examination of micro-sized samples and are, moreover, non-destructive towards the sample itself, an important requirement when precious and ancient artefacts are analysed. Confocal micro-Raman spectroscopy and laser-excited micro-fluorescence spectroscopy were applied to 23 micrometric fibres from historical linens (dating from about 3000 BC to the 17th century) and 11 crude or treated modern fibres. The intensity ratio between the Raman bands at 1121 and 1096 cm⁻¹, already suggested in the literature as a possible signature of ageing, was systematically evaluated after baseline correction, showing that modern samples exhibit a fairly constant ratio of 0.85 ± 0.05, which decreases to about 0.7 if the linen fibre is heated or bleached. Fibres from archaeological linens instead show a lower value for this ratio, decreasing to about 0.5 depending on both age and conservation conditions. Laser-excited fluorescence spectra were also collected from the fibres, yielding a Pearson correlation of about 0.7 between the intensity of the fluorescence emission and the age of the flax samples. Irregularities in the trend are mainly due to the possible influence of extraneous features such as contamination by organic substances.
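
    Since the analysis hinges on a baseline-corrected band ratio and a Pearson correlation, the sketch below illustrates both steps on synthetic data; the linear-baseline choice, the spectral window, and the example age/fluorescence values are assumptions for illustration only.

```python
# Minimal sketch of the I(1121)/I(1096) ratio and the age-fluorescence correlation.
import numpy as np
from scipy.stats import pearsonr

def band_ratio(wavenumber: np.ndarray, intensity: np.ndarray) -> float:
    """I(1121)/I(1096) ratio after a simple linear baseline correction."""
    # Restrict to a window around the two bands; the window limits and the
    # linear baseline anchored at its edges are assumed choices.
    sel = (wavenumber > 1050) & (wavenumber < 1180)
    wn, inten = wavenumber[sel], intensity[sel]
    baseline = np.interp(wn, [wn[0], wn[-1]], [inten[0], inten[-1]])
    corrected = inten - baseline
    i1121 = corrected[np.argmin(np.abs(wn - 1121))]
    i1096 = corrected[np.argmin(np.abs(wn - 1096))]
    return float(i1121 / i1096)

# Synthetic spectrum with two Gaussian bands on a sloping background.
wn = np.linspace(1000, 1200, 400)
spec = (0.85 * np.exp(-((wn - 1121) / 6) ** 2)
        + 1.00 * np.exp(-((wn - 1096) / 6) ** 2)
        + 0.001 * wn)
print(f"band ratio = {band_ratio(wn, spec):.2f}")   # close to the 'modern' value of 0.85

# Illustrative Pearson correlation between fluorescence intensity and age.
ages = np.array([0, 100, 500, 1500, 3000, 5000])   # years before present
fluo = np.array([1.0, 1.1, 1.6, 2.2, 2.9, 3.4])    # arbitrary units
r, p = pearsonr(fluo, ages)
print(f"Pearson r = {r:.2f} (p = {p:.3f})")
```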

    CoroFinder: A New Tool for Real Time Detection and Tracking of Coronary Arteries in Contrast-Free Cine-Angiography

    Coronary Angiography (CA) is the standard of reference to diagnose coronary artery disease. Yet, only a portion of the information it conveys is usually used. Quantitative Coronary Angiography (QCA) reliably contributes to improving the measurable assessment of CA. In this work, we developed a new software tool, CoroFinder, able to automatically identify epicardial coronary arteries and to dynamically track the vessel profile in dye-free frames. The coronary tree is automatically segmented by Frangi’s filter in the angiogram frames where vessels are contrasted (“template frames”). Afterward, the image similarity between each template frame and the dye-free images is scored by cross-correlation. Finally, each dye-free image is associated with the most similar template frame, resulting in an estimation of the vessel contour. CoroFinder makes it possible to locate coronary arteries in the absence of contrast dye. The developed algorithm is robust to diverse vessel curvatures, variations in vessel width, and the presence of stenoses. This article describes the newly developed CoroFinder algorithm and the associated software, and provides an overview of its potential application in research and for translation to the clinic.
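
    The two main ingredients of the pipeline, Frangi vesselness on contrasted template frames and cross-correlation matching of dye-free frames, can be sketched as follows; the toy frames, the vesselness threshold, and the similarity measure details are assumptions and not the CoroFinder implementation.

```python
# Minimal sketch: vessel enhancement of template frames and template matching.
import numpy as np
from skimage.filters import frangi

def vessel_mask(frame: np.ndarray, threshold: float = 0.05) -> np.ndarray:
    """Binary coronary-tree mask from Frangi vesselness (threshold is assumed)."""
    vesselness = frangi(frame.astype(float))
    return vesselness > threshold

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Global normalised cross-correlation between two frames."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float(np.mean(a * b))

def best_template(dye_free: np.ndarray, templates: list[np.ndarray]) -> int:
    """Index of the template frame most similar to a dye-free frame."""
    scores = [normalized_cross_correlation(dye_free, t) for t in templates]
    return int(np.argmax(scores))

# Toy usage with random frames standing in for an angiographic sequence.
rng = np.random.default_rng(1)
templates = [rng.random((128, 128)) for _ in range(5)]
dye_free_frame = templates[2] + 0.1 * rng.random((128, 128))
idx = best_template(dye_free_frame, templates)
print("matched template:", idx, "vessel pixels:", vessel_mask(templates[idx]).sum())
```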

    SlicerArduino: A Bridge between Medical Imaging Platform and Microcontroller

    Interaction between a medical imaging platform and the external environment is a desirable feature in several clinical, research, and educational scenarios. In this work, the integration between the 3D Slicer package and the Arduino board is introduced, enabling simple and useful communication between the two software/hardware platforms. The open-source extension, programmed in Python, manages the connection process and offers a communication layer accessible from any point of the medical imaging suite infrastructure. Deep integration with the 3D Slicer code environment is provided, and a basic input–output mechanism accessible via the GUI is also made available. To test the proposed extension, two exemplary use cases were implemented: (1) INPUT data to 3D Slicer, to navigate on the basis of data detected by a distance sensor connected to the board, and (2) OUTPUT data from 3D Slicer, to control a servomotor on the basis of data computed through image processing procedures. Both goals were achieved, and quasi-real-time control was obtained without any lag or freeze, demonstrating the effectiveness of the integration between 3D Slicer and Arduino. This integration can be easily obtained through the execution of a few lines of Python code. In conclusion, SlicerArduino proved to be suitable for fast prototyping, basic input–output interaction, and educational purposes. The extension is not intended for mission-critical clinical tasks.
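
    The extension builds on a plain serial link between Python and the board. The sketch below shows a minimal round trip of that kind with pyserial, outside 3D Slicer; the port name, baud rate, and line-based message format are assumptions, not the protocol used by SlicerArduino.

```python
# Minimal sketch of a serial round trip with an Arduino board (assumed protocol).
import serial  # pyserial

PORT, BAUD = "/dev/ttyACM0", 9600   # assumed port and baud rate

with serial.Serial(PORT, BAUD, timeout=1) as board:
    # INPUT: one line per reading, e.g. "123.4" centimetres from the distance sensor.
    line = board.readline().decode("ascii", errors="ignore").strip()
    distance_cm = float(line) if line else None
    print("distance from sensor:", distance_cm)

    # OUTPUT: send a servo angle (0-180) computed elsewhere, newline-terminated.
    angle = 90
    board.write(f"{angle}\n".encode("ascii"))
```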