
    Developing the surgeon-machine interface: Using a novel instance-segmentation framework for intraoperative landmark labelling

    Introduction: The utilisation of artificial intelligence (AI) augments intraoperative safety, surgical training, and patient outcomes. We introduce the term Surgeon-Machine Interface (SMI) to describe this innovative intersection between surgeons and machine inference. A custom deep computer vision (CV) architecture within a sparse labelling paradigm was developed, specifically tailored to conceptualise the SMI. This platform demonstrates the ability to perform instance segmentation of anatomical landmarks and tools from a single open spinal dural arteriovenous fistula (dAVF) surgery video dataset. Methods: Our custom deep convolutional neural network was based on the SOLOv2 architecture for precise, instance-level segmentation of surgical video data. The test video consisted of 8,520 frames, with sparse labelling: only 133 frames were annotated for training. Accuracy, assessed using F1-score and mean Average Precision (mAP), and inference time were compared against current state-of-the-art architectures on a separate test set of 85 additionally annotated frames. Results: Our SMI demonstrated superior accuracy and computing speed compared to these frameworks. The F1-score and mAP achieved by our platform were 17% and 15.2% respectively, surpassing MaskRCNN (15.2%, 13.9%), YOLOv3 (5.4%, 11.9%), and SOLOv2 (3.1%, 10.4%). Considering detections that exceeded the Intersection over Union threshold of 50%, our platform achieved an F1-score of 44.2% and mAP of 46.3%, outperforming MaskRCNN (41.3%, 43.5%), YOLOv3 (15%, 34.1%), and SOLOv2 (9%, 32.3%). Our platform also demonstrated the fastest inference time (88 ms), compared to MaskRCNN (90 ms), SOLOv2 (100 ms), and YOLOv3 (106 ms). Finally, despite the minimal training set, the model generalised well: our architecture successfully identified objects in frames that were not included in the training or validation sets, indicating its ability to handle out-of-domain scenarios.
Discussion: We present our development of an innovative intraoperative SMI to demonstrate the future promise of advanced CV in the surgical domain. Through successful implementation in a microscopic dAVF surgery, our framework demonstrates superior performance over current state-of-the-art segmentation architectures in intraoperative landmark guidance, with high sample efficiency, representing the most advanced AI-enabled surgical inference platform to date. Our future goals include transfer learning paradigms for scaling to additional surgery types, addressing clinical and technical limitations of real-time decoding, and ultimately enabling a real-time neurosurgical guidance platform.
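The evaluation above counts a predicted instance mask as correct only when its Intersection over Union (IoU) with a ground-truth mask clears a threshold (50% in the second set of figures), and then summarises matches with an F1-score. A minimal sketch of that matching logic follows; this is not the authors' pipeline, and the greedy one-to-one matching, `mask_iou`, and `f1_at_iou` names are illustrative assumptions.

```python
import numpy as np

def mask_iou(a: np.ndarray, b: np.ndarray) -> float:
    """Intersection over Union between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return float(inter) / float(union) if union else 0.0

def f1_at_iou(preds, gts, thr=0.5):
    """F1-score over instance masks: each prediction is greedily matched
    to its best-overlapping, not-yet-matched ground-truth mask and counts
    as a true positive only if that IoU meets the threshold."""
    matched = set()
    tp = 0
    for p in preds:
        best_iou, best_j = 0.0, None
        for j, g in enumerate(gts):
            if j in matched:
                continue
            iou = mask_iou(p, g)
            if iou > best_iou:
                best_iou, best_j = iou, j
        if best_j is not None and best_iou >= thr:
            matched.add(best_j)
            tp += 1
    fp = len(preds) - tp  # unmatched predictions
    fn = len(gts) - tp    # missed ground-truth instances
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if gts else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0
```

Benchmark suites such as the COCO evaluator average precision over many IoU thresholds to produce mAP; the sketch above shows only the single-threshold case the abstract reports.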

    Volumetric breast density estimation from three-dimensional reconstructed digital breast tomosynthesis images using deep learning

    PURPOSE: Breast density is a widely established independent breast cancer risk factor. With the increasing utilization of digital breast tomosynthesis (DBT) in breast cancer screening, there is an opportunity to estimate volumetric breast density (VBD) routinely. However, currently available methods extrapolate VBD from two-dimensional (2D) images acquired using DBT and/or depend on the existence of raw DBT data, which is rarely archived by clinical centers because of storage constraints. METHODS: We retrospectively analyzed 1,080 nonactionable three-dimensional (3D) reconstructed DBT screening examinations acquired between 2011 and 2016. Reference tissue segmentations were generated using previously validated software that uses 3D reconstructed slices and raw 2D DBT data. We developed a deep learning (DL) model that segments dense and fatty breast tissue from background. We then applied this model to estimate %VBD and absolute dense volume (ADV) in cm3 in a separate case-control sample (180 cases and 654 controls). We created two conditional logistic regression models, relating each model-derived density measurement to likelihood of contralateral breast cancer diagnosis, adjusted for age, BMI, family history, and menopausal status. RESULTS: The DL model achieved unweighted and weighted Dice scores of 0.88 (standard deviation [SD] = 0.08) and 0.76 (SD = 0.15), respectively, on the held-out test set, demonstrating good agreement between the model and 3D reference segmentations. There was a significant association between the odds of breast cancer diagnosis and model-derived VBD (odds ratio [OR], 1.41 [95% CI, 1.13 to 1.77]; P = .002), with an AUC of 0.65 (95% CI, 0.60 to 0.69). ADV was also significantly associated with breast cancer diagnosis (OR, 1.45 [95% CI, 1.22 to 1.73]; P < .001), with an AUC of 0.67 (95% CI, 0.62 to 0.71). CONCLUSION: DL-derived density measures from 3D reconstructed DBT images are associated with breast cancer diagnosis.
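The two quantities at the core of this study, the Dice score used to validate the segmentation and the %VBD derived from it, are both simple voxel counts over the predicted label volume. A minimal sketch follows; it is not the authors' implementation, and the three-class label convention (0 background, 1 fatty, 2 dense) in `percent_vbd` is a hypothetical choice for illustration.

```python
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice similarity coefficient between two boolean masks:
    2*|A ∩ B| / (|A| + |B|)."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def percent_vbd(labels: np.ndarray, dense: int = 2, fatty: int = 1) -> float:
    """%VBD as the dense-tissue fraction of total breast tissue,
    counted over a labelled 3D volume (background voxels excluded)."""
    dense_vox = int((labels == dense).sum())
    fatty_vox = int((labels == fatty).sum())
    total = dense_vox + fatty_vox
    return 100.0 * dense_vox / total if total else 0.0
```

Absolute dense volume (ADV) would then just be the dense voxel count multiplied by the per-voxel volume of the reconstructed DBT geometry.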
