Advancing combined radiological and optical scanning for breast-conserving surgery margin guidance
Breast cancer is one of the most common types of cancer worldwide, and the standard of care for early-stage disease typically involves a lumpectomy, or breast-conserving surgery (BCS). BCS involves the local resection of cancerous tissue while sparing as much healthy tissue as possible. State-of-the-art methods for intraoperatively evaluating BCS margins are limited. Approximately 20% of BCS cases result in a tissue resection with cancer at or near the resection surface (i.e., a positive margin). A two-fold increase in ipsilateral breast cancer recurrence is associated with the presence of one or more positive margins. Consequently, positive margins often necessitate costly re-excision procedures to achieve a curative outcome. X-ray micro-computed tomography (micro-CT) is emerging as a powerful ex vivo specimen imaging technology, as it rapidly provides robust three-dimensional sensing of tumor morphology. However, X-ray attenuation lacks contrast between soft tissues that are important for surgical decision making during BCS. Optical structured light imaging, including spatial frequency domain imaging and active line scan imaging, can act as an adjuvant tool to complement micro-CT, providing wide field-of-view, non-contact sensing of relevant breast tissue subtypes on resection margins that cannot be differentiated by micro-CT alone. This thesis is dedicated to multimodal imaging of BCS tissues to ultimately improve intraoperative BCS margin assessment, reducing the number of positive margins after initial surgeries and thereby reducing the need for costly follow-up procedures. The volumetric sensing of micro-CT is combined with surface-weighted, sub-diffuse optical reflectance derived from high spatial frequency structured light imaging. Sub-diffuse reflectance plays the key role of providing enhanced contrast to a suite of normal, abnormal benign, and malignant breast tissue subtypes.
This finding is corroborated through clinical studies imaging BCS specimen slices post-operatively and is further investigated through an observational clinical trial focused on combined, intraoperative micro-CT and optical imaging of whole, freshly resected BCS tumors. The central thesis of this work is that combining volumetric X-ray imaging and sub-diffuse optical scanning provides a synergistic multimodal imaging solution to margin assessment, one that can be readily implemented in, or retrofitted onto, X-ray specimen imaging systems and that could meaningfully improve surgical guidance during initial BCS procedures.
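The structured light approach mentioned above (spatial frequency domain imaging) conventionally recovers modulation amplitude by projecting a sinusoidal pattern at three phase offsets and demodulating the captured frames. As a hedged illustration of that standard three-phase demodulation step, not of this thesis's specific pipeline, the sketch below computes the AC (spatially modulated) and DC (planar) reflectance maps; the function name `demodulate_sfdi` is hypothetical.

```python
import numpy as np

def demodulate_sfdi(i1, i2, i3):
    """Demodulate three SFDI frames captured under sinusoidal illumination
    at phase offsets of 0, 120, and 240 degrees.

    Returns:
        ac: modulation-amplitude map (sensitive to the projected spatial
            frequency; at high frequencies this weights sub-diffuse,
            surface-localized reflectance).
        dc: planar (unmodulated) reflectance map.
    """
    # Standard three-phase demodulation identity: the pairwise differences
    # cancel the DC term and recover the envelope of the sinusoid.
    ac = (np.sqrt(2.0) / 3.0) * np.sqrt(
        (i1 - i2) ** 2 + (i2 - i3) ** 2 + (i3 - i1) ** 2
    )
    # DC is simply the mean of the three phase-shifted frames.
    dc = (i1 + i2 + i3) / 3.0
    return ac, dc
```

For a pixel whose intensity follows I_k = DC + AC·cos(φ + 2πk/3), this recovers AC and DC exactly, independent of the unknown phase φ.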
A Hybrid Multibiometric System for Personal Identification Based on Face and Iris Traits. The Development of an Automated Computer System for the Identification of Humans by Integrating Facial and Iris Features Using Localization, Feature Extraction, Handcrafted and Deep Learning Techniques.
Multimodal biometric systems have been widely applied in many real-world applications because of their ability to address a number of significant limitations of unimodal biometric systems, including sensitivity to noise, population coverage, intra-class variability, non-universality, and vulnerability to spoofing. This PhD thesis focuses on combining the face with the left and right irises in a unified hybrid multimodal biometric identification system, using different fusion approaches at the score and rank levels.
Firstly, the facial features are extracted using a novel multimodal local feature extraction approach, termed the Curvelet-Fractal approach, which merges the advantages of the Curvelet transform with the fractal dimension. Secondly, a novel framework, the Multimodal Deep Face Recognition (MDFR) framework, is proposed; it merges the advantages of local handcrafted feature descriptors with deep learning approaches to address face recognition in unconstrained conditions. Thirdly, an efficient deep learning system, termed IrisConvNet, is employed, whose architecture combines a Convolutional Neural Network (CNN) with a Softmax classifier to extract discriminative features from an iris image.
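The score-level fusion mentioned above generally requires normalizing each matcher's raw scores to a common range before combining them, since face and iris matchers produce scores on different scales. The sketch below shows one common recipe, min-max normalization followed by a weighted sum; the function names and the equal default weighting are illustrative assumptions, not the thesis's specific fusion rule.

```python
import numpy as np

def min_max_normalize(scores):
    """Map raw matcher scores onto [0, 1] so scores from different
    modalities (face vs. iris) become comparable before fusion."""
    s = np.asarray(scores, dtype=float)
    rng = s.max() - s.min()
    # Degenerate case: all scores identical -> return zeros.
    return np.zeros_like(s) if rng == 0 else (s - s.min()) / rng

def weighted_sum_fusion(face_scores, iris_scores, w_face=0.5):
    """Score-level fusion: weighted sum of normalized per-modality scores.

    face_scores, iris_scores: one match score per enrolled identity.
    w_face: weight given to the face matcher (iris gets 1 - w_face);
            0.5 is an assumed default, typically tuned on validation data.
    """
    f = min_max_normalize(face_scores)
    i = min_max_normalize(iris_scores)
    return w_face * f + (1.0 - w_face) * i
```

The identity reported is then the argmax of the fused score vector; rank-level fusion instead combines each matcher's ranked candidate lists (e.g., by summing ranks) rather than the scores themselves.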
Finally, the performance of the unimodal and multimodal systems has been evaluated through a number of extensive experiments on large-scale unimodal databases (FERET, CAS-PEAL-R1, LFW, CASIA-Iris-V1, CASIA-Iris-V3 Interval, MMU1, and IITD) and on the SDUMLA-HMT multimodal dataset. The results demonstrate the superiority of the proposed systems over previous work, achieving new state-of-the-art recognition rates on all the employed datasets with less time required to recognize a person's identity.
Higher Committee for Education Development in Ira