    A Clinical Guideline Driven Automated Linear Feature Extraction for Vestibular Schwannoma

    Vestibular schwannoma is a benign brain tumour that grows from one of the balance nerves. Patients may be treated by surgery, radiosurgery, or with a conservative "wait-and-scan" strategy. Clinicians typically use manually extracted linear measurements to aid clinical decision-making. This work aims to automate and improve this process by using deep-learning-based segmentation to extract relevant clinical features through computational algorithms. To the best of our knowledge, our study is the first to propose an automated approach that replicates local clinical guidelines. Our deep-learning-based segmentation achieved Dice scores of 0.8124 ± 0.2343 and 0.8969 ± 0.0521 for the extrameatal and whole-tumour regions on T2-weighted MRI, and 0.8222 ± 0.2108 and 0.9049 ± 0.0646 on T1-weighted MRI. We propose a novel algorithm to choose and extract the most appropriate maximum linear measurement from the segmented regions based on the size of the extrameatal portion of the tumour. Using this tool, clinicians will be provided with a visual guide and metrics relating to tumour progression that will function as a clinical decision aid. In this study, we use 187 scans from 50 patients referred to a tertiary specialist neurosurgical service in the United Kingdom. The measurements extracted manually by an expert neuroradiologist showed a significant correlation with the automated measurements (p < 0.0001). Comment: SPIE Medical Imaging
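    The two quantities central to this abstract's evaluation are the Dice similarity coefficient (segmentation overlap) and the maximum linear measurement taken from a segmented region. A minimal sketch of both is below, assuming 2D binary NumPy masks with known pixel spacing; the function names `dice_score` and `max_linear_measurement` are illustrative and are not taken from the paper's code.

    ```python
    import numpy as np
    from itertools import combinations

    def dice_score(pred, truth):
        """Dice similarity coefficient between two binary masks:
        2|A ∩ B| / (|A| + |B|)."""
        pred = np.asarray(pred, dtype=bool)
        truth = np.asarray(truth, dtype=bool)
        total = pred.sum() + truth.sum()
        if total == 0:
            return 1.0  # both masks empty: treat as perfect agreement
        return 2.0 * np.logical_and(pred, truth).sum() / total

    def max_linear_measurement(mask, spacing=(1.0, 1.0)):
        """Longest straight-line distance between any two foreground
        pixels of a 2D binary mask, in physical units (e.g. mm).
        Brute-force O(n^2) over foreground pixels; fine for a sketch,
        but a convex-hull pass would be used in practice."""
        ys, xs = np.nonzero(mask)
        pts = np.column_stack([ys * spacing[0], xs * spacing[1]])
        return max(np.linalg.norm(a - b) for a, b in combinations(pts, 2))
    ```

    The paper's algorithm additionally selects *which* region (extrameatal vs. whole tumour) the measurement is taken from, based on the extrameatal portion's size; that selection logic is not reproduced here.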

    Imaging biomarkers associated with extra-axial intracranial tumors: a systematic review

    Extra-axial brain tumors are extra-cerebral tumors and are usually benign. The choice of treatment for extra-axial tumors often depends on tumor growth, and imaging plays a significant role in monitoring growth and in clinical decision-making. This motivates the investigation of imaging biomarkers for these tumors that may be incorporated into clinical workflows to inform treatment decisions. The Pubmed, Web of Science, Embase, and Medline databases were searched from 1 January 2000 to 7 March 2022 to systematically identify relevant publications in this area. All studies that used an imaging tool and found an association with a growth-related factor, including molecular markers, grade, survival, growth/progression, recurrence, and treatment outcomes, were included in this review. We included 44 studies: 22 studies (50%) of patients with meningioma; 17 studies (38.6%) of patients with pituitary tumors; three studies (6.8%) of patients with vestibular schwannomas; and two studies (4.5%) of patients with solitary fibrous tumors. The included studies were explicitly and narratively analyzed according to tumor type and imaging tool. The risk of bias and concerns regarding applicability were assessed using QUADAS-2. Most studies (41/44) used statistics-based analysis methods, and a small number (3/44) used machine learning. Our review highlights an opportunity for future work to focus on machine-learning-based deep feature identification as biomarkers, combining feature classes such as size, shape, and intensity. Systematic Review Registration: PROSPERO, CRD4202230692

    DEEP-squared: deep learning powered De-scattering with Excitation Patterning

    Abstract: Limited throughput is a key challenge in in vivo deep-tissue imaging using nonlinear optical microscopy. Point-scanning multiphoton microscopy, the current gold standard, is slow, especially compared to the widefield imaging modalities used for optically cleared or thin specimens. We recently introduced "De-scattering with Excitation Patterning", or "DEEP", as a widefield alternative to point-scanning geometries. Using patterned multiphoton excitation, DEEP encodes spatial information inside tissue before scattering. However, de-scattering at typical depths required hundreds of such patterned excitations. In this work, we present DEEP2, a deep-learning-based model that can de-scatter images from just tens of patterned excitations instead of hundreds, improving DEEP's throughput by almost an order of magnitude. We demonstrate our method in multiple numerical and experimental imaging studies, including in vivo cortical vasculature imaging up to four scattering lengths deep in live mice.
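    The measurement scheme described here — patterned excitation followed by scattering before widefield detection — can be illustrated with a toy forward model. The sketch below is an assumption-laden simplification (binary patterns, scattering modeled as a fixed blur, 2D object); it is not the authors' optical model, only an illustration of why a network must invert a stack of patterned, blurred measurements.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def measure(obj, pattern, blur_kernel):
        """Toy DEEP-style measurement: the object is modulated by an
        excitation pattern, then blurred to stand in for tissue
        scattering before widefield detection (illustrative only)."""
        excited = obj * pattern                       # patterned multiphoton excitation
        H = np.fft.fft2(blur_kernel, s=obj.shape)     # FFT-based convolution
        return np.real(np.fft.ifft2(np.fft.fft2(excited) * H))

    obj = rng.random((32, 32))                        # unknown fluorophore distribution
    patterns = rng.integers(0, 2, size=(16, 32, 32))  # "tens" of binary patterns
    blur = np.ones((5, 5)) / 25.0                     # crude scattering surrogate
    stack = np.stack([measure(obj, p, blur) for p in patterns])
    # A DEEP2-style network would map `stack` (16 scattered, patterned
    # measurements) back to an estimate of `obj`.
    ```

    The point of the abstract's contribution is the size of that stack: classic DEEP needed hundreds of patterns for the inversion, while DEEP2 learns to invert from only tens.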