DeepEOR: automated perioperative volumetric assessment of variable grade gliomas using deep learning
PURPOSE
Volumetric assessments, such as extent of resection (EOR) or residual tumor volume, are essential criteria in glioma resection surgery. Our goal was to develop and validate machine learning segmentation models for pre- and postoperative magnetic resonance imaging (MRI) scans, allowing us to assess the percentage of tumor reduction after intracranial surgery for gliomas.
METHODS
For the development of the preoperative segmentation model (U-Net), MRI scans of 1053 patients from the Multimodal Brain Tumor Segmentation Challenge (BraTS) 2021, as well as from patients who underwent surgery at the University Hospital in Zurich, were used. The model was subsequently evaluated on a holdout set containing 285 images from the same sources. The postoperative model was developed using 72 scans and validated on 45 scans obtained from the BraTS 2015 and Zurich datasets. Performance was evaluated using the Dice similarity score, Jaccard coefficient, and 95th percentile Hausdorff distance.
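For context, these three metrics can be computed for a pair of binary masks as in the following minimal sketch (a simplified, voxel-based variant using SciPy distance transforms; the function names are ours, not the authors' evaluation code):

```python
# Overlap metrics for two boolean segmentation masks a, b (same shape).
import numpy as np
from scipy import ndimage

def dice(a, b):
    inter = np.logical_and(a, b).sum()
    return 2 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

def hausdorff95(a, b):
    # Distance from every foreground voxel of one mask to the nearest
    # foreground voxel of the other, symmetrized; the 95th percentile
    # replaces the maximum to suppress outliers.
    da = ndimage.distance_transform_edt(~b)[a]  # a-voxels -> nearest b
    db = ndimage.distance_transform_edt(~a)[b]  # b-voxels -> nearest a
    return np.percentile(np.concatenate([da, db]), 95)
```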
RESULTS
We achieved an overall mean Dice similarity score of 0.59 and 0.29 on the pre- and postoperative holdout sets, respectively. Our algorithm determined the correct EOR in 44.1% of cases.
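Given the two segmented volumes, the EOR itself is a simple ratio; a one-line sketch (the formula is the standard definition, the function name is ours):

```python
# Extent of resection as the percentage reduction of tumor volume
# (illustrative helper; volumes e.g. in milliliters).
def extent_of_resection(pre_volume, post_volume):
    return 100.0 * (pre_volume - post_volume) / pre_volume

print(extent_of_resection(30.0, 3.0))  # -> 90.0, i.e., 90% of the tumor removed
```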
CONCLUSION
Although our models are not suitable for clinical use at this point, the possible applications are vast, ranging from automated lesion detection to disease progression evaluation. Precise determination of EOR is a challenging task, but we were able to show that deep learning can provide fast and objective estimates.
Whole Spine Segmentation Using Object Detection and Semantic Segmentation
OBJECTIVE
Virtual and augmented reality have enjoyed increased attention in spine surgery. Preoperative planning, pedicle screw placement, and surgical training are among the most studied use cases. Identifying osseous structures is a key aspect of navigating a 3-dimensional virtual reconstruction. To replace the otherwise time-consuming process of labeling vertebrae on each slice individually, we propose a fully automated segmentation pipeline for computed tomography (CT), which can form the basis for further virtual or augmented reality applications and radiomic analysis.
METHODS
Based on a large public dataset of annotated vertebral CT scans, we first trained a YOLOv8m (You-Only-Look-Once algorithm, version 8, medium size) to detect each vertebra individually. On the resulting cropped images, a 2D U-Net was then developed and externally validated on 2 different public datasets.
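Conceptually, the 2-stage pipeline can be sketched as follows (a hedged illustration: the weight file, crop size, and U-Net backbone are placeholders, not the authors' released models):

```python
# Two-stage idea: detect each vertebra with YOLOv8, crop its box, then
# segment the crop with a 2D U-Net. Illustrative only.
import numpy as np
import torch
import torch.nn.functional as F
from ultralytics import YOLO
import segmentation_models_pytorch as smp

detector = YOLO("yolov8m_vertebrae.pt")  # hypothetical fine-tuned weights
unet = smp.Unet(encoder_name="resnet34", encoder_weights=None,
                in_channels=1, classes=1).eval()

def segment_slice(ct_slice: np.ndarray):
    """ct_slice: HxW uint8 slice (already intensity-windowed)."""
    rgb = np.dstack([ct_slice] * 3)              # YOLO expects 3 channels
    results = []
    for box in detector(rgb)[0].boxes.xyxy:      # one box per detected vertebra
        x1, y1, x2, y2 = map(int, box.tolist())
        crop = torch.as_tensor(ct_slice[y1:y2, x1:x2], dtype=torch.float32)
        crop = F.interpolate(crop[None, None], size=(224, 224))  # U-Net-friendly size
        with torch.no_grad():
            mask = unet(crop).sigmoid() > 0.5    # 1x1x224x224 binary mask
        results.append(((x1, y1, x2, y2), mask))
    return results
```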
RESULTS
Two hundred fourteen CT scans (cervical, thoracic, or lumbar spine) were used for model training, and 40 scans were used for external validation. Vertebra recognition achieved a mAP50 (mean average precision at a Jaccard threshold of 0.5) of over 0.84, and the segmentation algorithm attained a mean Dice score of 0.75 ± 0.14 at internal validation, and 0.77 ± 0.12 and 0.82 ± 0.14 on the two external validation datasets, respectively.
CONCLUSION
We propose a 2-stage approach consisting of single-vertebra labeling by an object detection algorithm followed by semantic segmentation. In our externally validated pilot study, we demonstrate robust performance of our object detection network in identifying individual vertebrae, as well as of our segmentation model in precisely delineating the bony structures.
TomoRay: Generating Synthetic Computed Tomography of the Spine From Biplanar Radiographs
OBJECTIVE
Computed tomography (CT) imaging is a cornerstone in the assessment of patients with spinal trauma and in the planning of spinal interventions. However, CT studies are associated with logistical problems, acquisition costs, and radiation exposure. In this proof-of-concept study, the feasibility of generating synthetic spinal CT images using biplanar radiographs was explored. This could expand the potential applications of x-ray machines pre-, post-, and even intraoperatively.
METHODS
A cohort of 209 patients who underwent spinal CT imaging from the VerSe2020 dataset was used to train the algorithm. The model was subsequently evaluated on an internal and an external validation set containing 55 images from the VerSe2020 dataset and a subset of 56 images from the CTSpine1K dataset, respectively. Digitally reconstructed radiographs served as input for training and evaluation of the 2-dimensional (2D)-to-3-dimensional (3D) generative adversarial model. Model performance was assessed using peak signal-to-noise ratio (PSNR), structural similarity index (SSIM), and cosine similarity (CS).
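These three image-similarity metrics can be sketched as follows (using the scikit-image implementations with default settings; the authors' exact configuration is not stated in the abstract):

```python
# PSNR, SSIM, and cosine similarity between a synthetic and a
# ground-truth CT volume (illustrative; assumes identical shapes).
import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(synthetic_ct: np.ndarray, true_ct: np.ndarray):
    rng = float(true_ct.max() - true_ct.min())
    psnr = peak_signal_noise_ratio(true_ct, synthetic_ct, data_range=rng)
    ssim = structural_similarity(true_ct, synthetic_ct, data_range=rng)
    a, b = synthetic_ct.ravel(), true_ct.ravel()
    cs = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))
    return psnr, ssim, cs
```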
RESULTS
At external validation, the developed model achieved a PSNR of 21.139 ± 1.018 dB (mean ± standard deviation). The SSIM and CS amounted to 0.947 ± 0.010 and 0.671 ± 0.691, respectively.
CONCLUSION
Generating an artificial 3D output from 2D imaging is challenging, especially for spinal imaging, where x-rays frequently deliver insufficient information. Although the synthetic CT scans derived from our model do not perfectly match their ground-truth CT, our proof-of-concept study warrants further exploration of the potential of this technology.
Automated volumetric assessment of pituitary adenoma
PURPOSE
Assessment of pituitary adenoma (PA) volume and extent of resection (EOR) through manual segmentation is time-consuming and likely suffers from poor interrater agreement, especially postoperatively. Automated tumor segmentation and volumetry using deep learning techniques may provide faster and more objective volumetry.
METHODS
We developed an automated volumetry pipeline for pituitary adenoma. Preoperative and three-month postoperative T1-weighted, contrast-enhanced magnetic resonance imaging (MRI) with manual segmentations were used for model training. After adequate preprocessing, an ensemble of convolutional neural networks (CNNs) was trained and validated for preoperative and postoperative automated segmentation of tumor tissue. Generalization was evaluated on a separate holdout set.
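As a rough illustration of the ensembling step (the abstract does not specify the architectures or fusion rule; averaging per-voxel probabilities is one common choice):

```python
# Combine an ensemble of segmentation CNNs by averaging their per-voxel
# foreground probabilities, then thresholding. Illustrative only.
import torch

def ensemble_predict(models, mri: torch.Tensor, threshold: float = 0.5):
    """mri: Bx1xDxHxW tensor; returns a binary mask of the same spatial size."""
    with torch.no_grad():
        probs = torch.stack([m(mri).sigmoid() for m in models])  # NxBx1xDxHxW
    return probs.mean(dim=0) > threshold
```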
RESULTS
In total, 193 image sets were used for training and 20 were held out for validation. On the holdout set, our models (preoperative / postoperative) demonstrated a median Dice score of 0.71 (0.27) / 0 (0), a mean Jaccard score of 0.53 ± 0.21 / 0.030 ± 0.085, and a mean 95th percentile Hausdorff distance of 3.89 ± 1.96 / 12.199 ± 6.684. Pearson's correlation coefficient for volume was 0.85 / 0.22, and -0.14 for extent of resection. Gross total resection was detected with a sensitivity of 66.67% and a specificity of 36.36%.
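For reference, the sensitivity and specificity of gross total resection detection follow from a simple confusion matrix (toy labels below; variable names are ours):

```python
# Sensitivity/specificity of GTR detection from binary labels.
from sklearn.metrics import confusion_matrix

y_true = [1, 1, 1, 0, 0, 1]  # 1 = GTR according to ground-truth segmentation
y_pred = [1, 0, 1, 0, 1, 1]  # 1 = model predicts no residual tumor
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))  # true GTR cases correctly detected
print("specificity:", tn / (tn + fp))  # residual-tumor cases correctly flagged
```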
CONCLUSIONS
Our volumetry pipeline demonstrated its ability to accurately segment pituitary adenomas preoperatively. This is highly valuable for lesion detection and for evaluating the progression of pituitary incidentalomas. Postoperatively, however, objective and precise detection of residual tumor remains less successful. Larger datasets, more diverse data, and more elaborate modeling could potentially improve performance.
Machine learning-based clinical outcome prediction in surgery for acromegaly
Purpose
Biochemical remission (BR), gross total resection (GTR), and intraoperative cerebrospinal fluid (CSF) leaks are important metrics in transsphenoidal surgery for acromegaly, and prediction of their likelihood using machine learning would be clinically advantageous. We aim to develop and externally validate clinical prediction models for outcomes after transsphenoidal surgery for acromegaly.
Methods
Using data from two registries, we develop and externally validate machine learning models for GTR, BR, and CSF leaks after endoscopic transsphenoidal surgery in acromegalic patients. For model development, a registry from Bologna, Italy, was used; external validation was then performed using data from Zurich, Switzerland. Gender, age, prior surgery, and Hardy and Knosp classifications were used as input features. Discrimination and calibration metrics were assessed.
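A minimal sketch of this kind of tabular prediction model, with discrimination and calibration assessed on held-out data (random stand-ins for the five input features; the registries themselves are not public and the authors' model class is not stated in the abstract):

```python
# Tabular outcome prediction: features -> probability of GTR, evaluated
# with AUC (discrimination) and a calibration curve. Toy data only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.calibration import calibration_curve

rng = np.random.default_rng(0)
# stand-ins for gender, age, prior surgery, Hardy grade, Knosp grade
X_train, X_test = rng.normal(size=(307, 5)), rng.normal(size=(46, 5))
y_train, y_test = rng.integers(0, 2, 307), rng.integers(0, 2, 46)

model = LogisticRegression().fit(X_train, y_train)
p = model.predict_proba(X_test)[:, 1]
print("external AUC:", roc_auc_score(y_test, p))
frac_pos, mean_pred = calibration_curve(y_test, p, n_bins=5)  # calibration
```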
Results
The derivation cohort consisted of 307 patients (43.3% male; mean [SD] age, 47.2 [12.7] years). GTR was achieved in 226 (73.6%) and BR in 245 (79.8%) patients. In the external validation cohort of 46 patients, 31 (75.6%) achieved GTR and 31 (77.5%) achieved BR. The area under the curve (AUC) at external validation was 0.75 (95% confidence interval: 0.59–0.88) for GTR, 0.63 (0.40–0.82) for BR, and 0.77 (0.62–0.91) for intraoperative CSF leaks. While prior surgery was the most important variable for the prediction of GTR, age and Hardy grading contributed most to the predictions of BR and CSF leaks, respectively.
Conclusions
Gross total resection, biochemical remission, and CSF leaks remain hard to predict, but machine learning offers potential in helping to tailor surgical therapy. We demonstrate the feasibility of developing and externally validating clinical prediction models for these outcomes after surgery for acromegaly and lay the groundwork for the development of a multicenter model with more robust generalization.
Machine learning-based clinical outcome prediction in surgery for acromegaly
In therapy for pituitary adenomas, the main cause of acromegaly, physicians are frequently confronted with the question of whether a surgical approach is appropriate or whether conservative medical therapy might be more suitable. Gross total resection (GTR), biochemical remission (BR), and intraoperative cerebrospinal fluid (CSF) leaks are key parameters that quantify the success of surgery. Thus, predicting the probability of achieving these important targets with machine learning before surgery would be clinically beneficial.
Receiver architecture for multicarrier-based GNSS signals
This paper presents a receiver architecture analysis of filtered multitone (FMT) signals for navigation purposes; FMT is a multicarrier signaling technique based on OFDM for CDMA signals, using square-root raised-cosine (SRRC) pulses for spectral efficiency. A new FMT signal processing scheme is proposed that makes use of a complex FMT replica to obtain a narrow correlation peak and thus achieve precise ranging without performing the IFFT operation. The distinctive features of the proposed receiver architecture are discussed, and preliminary considerations on the signal processing schemes, e.g., acquisition and tracking, for FMT signals are described.
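To make the ranging idea concrete, here is a toy correlation example with an SRRC pulse (a generic matched-filter sketch, not the paper's FMT receiver; parameter values are arbitrary):

```python
# Delay estimation by correlating a noisy received signal against a
# square-root raised-cosine (SRRC) replica. Illustrative only.
import numpy as np

def srrc_pulse(beta: float, sps: int, span: int) -> np.ndarray:
    """Unit-energy SRRC pulse: roll-off beta, sps samples/symbol, span symbols."""
    t = np.arange(-span * sps / 2, span * sps / 2 + 1) / sps  # in symbol periods
    h = np.empty_like(t)
    for i, ti in enumerate(t):
        if np.isclose(ti, 0.0):
            h[i] = 1 - beta + 4 * beta / np.pi
        elif np.isclose(abs(ti), 1 / (4 * beta)):
            h[i] = (beta / np.sqrt(2)) * ((1 + 2 / np.pi) * np.sin(np.pi / (4 * beta))
                                          + (1 - 2 / np.pi) * np.cos(np.pi / (4 * beta)))
        else:
            num = np.sin(np.pi * ti * (1 - beta)) + 4 * beta * ti * np.cos(np.pi * ti * (1 + beta))
            h[i] = num / (np.pi * ti * (1 - (4 * beta * ti) ** 2))
    return h / np.linalg.norm(h)

rng = np.random.default_rng(0)
replica = srrc_pulse(beta=0.22, sps=8, span=10)
rx = np.zeros(1024, dtype=complex)
rx[300:300 + len(replica)] += replica                    # pulse at delay 300
rx += 0.05 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
corr = np.abs(np.correlate(rx, replica, mode="valid"))   # matched filter
print("estimated delay:", int(np.argmax(corr)))          # -> 300
```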