53 research outputs found

    A Deep Learning Study on Osteosarcoma Detection from Histological Images

    Full text link
    In the U.S., 5-10% of new pediatric cancer cases are primary bone tumors, and the most common primary malignant bone tumor is osteosarcoma. The present work aims to improve the detection and diagnosis of osteosarcoma using computer-aided detection (CAD) and diagnosis (CADx). Tools such as convolutional neural networks (CNNs) can significantly reduce the surgeon's workload and support better prognosis of patient condition. CNNs must be trained on large amounts of data to achieve trustworthy performance. In this study, transfer learning with pre-trained CNNs is applied to a public dataset of osteosarcoma histological images to distinguish necrotic images from non-necrotic and healthy tissue. First, the dataset was preprocessed and different classification tasks were defined. Then, transfer learning models, including VGG19 and Inception V3, were trained on Whole Slide Images (WSI) without patching to improve the accuracy of the outputs. Finally, the models were applied to different classification problems, including binary and multi-class classifiers. Experimental results show that VGG19 achieves the highest accuracy, 96%, across both binary and multi-class classification. Our fine-tuned model demonstrates state-of-the-art performance in detecting osteosarcoma malignancy from histological images.
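
    As an illustration of the transfer-learning setup the abstract describes, the following is a minimal Keras sketch: a pre-trained VGG19 backbone with a new classification head for histology images. The data directory, image size, class count, and training settings are placeholder assumptions, not the study's actual configuration.

```python
# Minimal transfer-learning sketch: frozen VGG19 backbone + new classifier head.
# Paths, class count, and hyperparameters are illustrative placeholders.
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import VGG19

IMG_SIZE = (224, 224)
NUM_CLASSES = 3  # e.g. necrotic, non-necrotic tumour, healthy tissue (assumed)

# Histology images organised into one subfolder per class (hypothetical layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/train", image_size=IMG_SIZE, batch_size=16)

# ImageNet-pretrained convolutional base, frozen so only the new head trains;
# fine-tuning could later unfreeze the top blocks.
base = VGG19(weights="imagenet", include_top=False, input_shape=IMG_SIZE + (3,))
base.trainable = False

model = models.Sequential([
    layers.Rescaling(1.0 / 255),
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.5),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=10)
```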

    Optimized Swarm Enabled Deep Learning Technique for Bone Tumor Detection using Histopathological Image

    Get PDF
    Cancer burdens communities that lack proper care. Research studies continue to raise the benchmarks for computer-assisted prognostic tools in radiology, yet detection of disease must ultimately be confirmed by a pathologist. In bone cancer (BC), identifying malignancy from histopathological images (HI) is difficult because of the intricate structure of bone tissue (BTe) specimens. This study proposes a new approach to diagnosing BC based on feature extraction and classification with deep learning frameworks. The input is processed and segmented using Tsallis entropy, with noise elimination, image rescaling, and smoothing. Features are extracted with an EfficientNet-based Convolutional Neural Network (CNN), and ROI extraction is used to improve precise detection of atypical regions surrounding the affected area. The BTe is then classified and graded as typical or atypical using an augmented XGBoost classifier combined with Whale Optimization (WOA). Histopathological images gathered from patients at the prevailing scales are acquired, and the texture characteristics of these images are used for training and testing the neural network (NN). The classification results show that the NN achieves a hit ratio of 99.48 percent in bone tumor classification.
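
    The feature-extraction-plus-classifier pipeline outlined above can be approximated as follows: a pre-trained EfficientNet used as a fixed feature extractor feeding an XGBoost classifier. This is an illustrative sketch only; the Tsallis-entropy segmentation, ROI extraction, and whale-optimisation tuning steps are omitted, and the data and hyperparameters are hypothetical.

```python
# Sketch: EfficientNet features -> XGBoost classifier (typical vs atypical).
# The optimisation stage and real histopathology data are not included.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import EfficientNetB0
from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split

# Pre-trained EfficientNet as a fixed feature extractor (global average pooling).
extractor = EfficientNetB0(weights="imagenet", include_top=False, pooling="avg")

def extract_features(images):
    """images: float array of shape (n, 224, 224, 3) with values in [0, 255]."""
    x = tf.keras.applications.efficientnet.preprocess_input(images)
    return extractor.predict(x, verbose=0)

# Hypothetical stand-in data: histopathology patches with binary labels.
images = (np.random.rand(64, 224, 224, 3) * 255.0).astype("float32")
labels = np.random.randint(0, 2, size=64)

features = extract_features(images)
X_train, X_test, y_train, y_test = train_test_split(features, labels, test_size=0.25)

clf = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```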

    Emerging Applications of Deep Learning in Bone Tumors: Current Advances and Challenges

    Get PDF
    Deep learning is a subfield of state-of-the-art artificial intelligence (AI) technology, and multiple deep learning-based AI models have been applied to musculoskeletal diseases. Deep learning has shown the capability to assist clinical diagnosis and prognosis prediction in a spectrum of musculoskeletal disorders, including fracture detection, cartilage and spinal lesion identification, and osteoarthritis severity assessment. Meanwhile, deep learning has also been extensively explored in diverse tumors such as prostate, breast, and lung cancers. Recently, applications of deep learning have emerged in bone tumors. A growing number of deep learning models have demonstrated good performance in detection, segmentation, classification, volume calculation, grading, and assessment of tumor necrosis rate in primary and metastatic bone tumors based on both radiological (such as X-ray, CT, MRI, SPECT) and pathological images, indicating the potential of deep learning for diagnostic assistance and prognosis prediction in bone tumors. In this review, we first summarize the workflows of deep learning methods for medical images and the current applications of deep learning-based AI for diagnosis and prognosis prediction in bone tumors. We then discuss in depth the current challenges in implementing deep learning methods and future perspectives in this field.

    Deep learning for necrosis detection using canine perivascular wall tumour whole slide images

    Get PDF
    Necrosis seen in histopathology Whole Slide Images is a major criterion contributing to the tumour grade score, which in turn determines treatment options. However, conventional manual assessment suffers from poor inter-operator reproducibility, impacting grading precision. To address this, automatic necrosis detection using AI may be used to assess necrosis for the final scoring that contributes to the clinical grade. Using deep learning, we describe a novel approach for automating necrosis detection in Whole Slide Images, tested on a canine Soft Tissue Sarcoma (cSTS) dataset consisting of canine Perivascular Wall Tumours (cPWTs). A patch-based deep learning approach was developed in which different variations of training a DenseNet-161 Convolutional Neural Network architecture were investigated, as well as a stacking ensemble. An optimised DenseNet-161 with post-processing produced a hold-out test F1-score of 0.708, demonstrating state-of-the-art performance. This represents the first automated necrosis detection method in the cSTS domain, and specifically for detecting necrosis in cPWTs, demonstrating a significant step forward in reproducible and reliable necrosis assessment for improving the precision of tumour grading.
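
    A minimal sketch of a patch-based DenseNet-161 necrosis classifier in the spirit of the approach above, using PyTorch/torchvision. Patch extraction from whole slide images, the stacking ensemble, and post-processing are not shown; the patch folder layout and training settings are assumptions for illustration.

```python
# Patch-level binary classifier (necrosis vs non-necrosis) on pre-extracted tiles.
# Folder layout "patches/<class_name>/*.png" is a hypothetical example.
import torch
import torch.nn as nn
from torchvision import models, transforms, datasets
from torch.utils.data import DataLoader

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])
patches = datasets.ImageFolder("patches", transform=transform)
loader = DataLoader(patches, batch_size=32, shuffle=True)

# ImageNet-pretrained DenseNet-161 with its classifier replaced for 2 classes.
model = models.densenet161(weights=models.DenseNet161_Weights.IMAGENET1K_V1)
model.classifier = nn.Linear(model.classifier.in_features, 2)
model = model.to(device)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, targets in loader:  # one epoch shown for brevity
    images, targets = images.to(device), targets.to(device)
    optimizer.zero_grad()
    loss = criterion(model(images), targets)
    loss.backward()
    optimizer.step()
```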

    Deep learning in medical imaging and radiation therapy

    Full text link
    Peer Reviewed
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/1/mp13264_am.pdf
    https://deepblue.lib.umich.edu/bitstream/2027.42/146980/2/mp13264.pd

    Deep learning for biomarker and outcome prediction in cancer

    Get PDF
    Machine learning in the form of deep learning (DL) has recently transformed how computer vision tasks are solved in numerous domains, including image-based medical diagnostics. DL-based methods have the potential to enable more precise quantitative characterisation of cancer tissue specimens routinely analysed in clinical pathology laboratories for diagnostic purposes. Computer-assisted tissue analysis within pathology is not restricted to the quantification and classification of specific tissue entities. DL makes it possible to directly address clinically relevant questions related to the prediction of cancer outcome and efficacy of cancer treatment. This thesis focused on the following crucial research question: is it possible to predict cancer outcome, biomarker status, and treatment efficacy directly from the tissue morphology using DL without any special stains or molecular methods? To address this question, we utilised digitised hematoxylin-eosin-stained (H&E) tissue specimens from two common types of solid tumours – breast and colorectal cancer. Tissue specimens and corresponding clinical data were retrieved from retrospective patient series collected in Finland. First, a DL-based algorithm was developed to extract prognostic information for patients diagnosed with colorectal cancer, using digitised H&E images only. Computational analysis of tumour tissue samples with DL demonstrated superhuman performance and surpassed a consensus of three expert pathologists in predicting five-year colorectal cancer-specific outcomes. Then, outcome prediction was studied in two independent breast cancer patient series. In particular, generalisation of the trained algorithms to previously unseen patients from an independent series was examined on large whole-slide tumour specimens. In breast cancer outcome prediction, we investigated a multitask learning approach by combining outcome- and biomarker-supervised learning. Our experiments in breast and colorectal cancer show that tissue morphological features learned by the DL models supervised by patient outcome provided prognostic information independent of established prognostic factors such as histological grade, tumour size and lymph node status. Additionally, the accuracy of DL-based predictors was compared to other prognostic characteristics evaluated by pathologists in breast cancer, including mitotic count, nuclear pleomorphism, tubule formation, tumour necrosis and tumour-infiltrating lymphocytes. We further assessed whether molecular biomarkers such as hormone receptor status and ERBB2 gene amplification can be predicted from H&E-stained tissue samples obtained at the time of diagnosis from patients with breast cancer, and showed that molecular alterations are reflected in the basic tissue morphology and can be captured with DL. Finally, we studied how morphological features of breast cancer can be linked to molecularly targeted treatment response. The results showed that ERBB2-associated morphology extracted with DL correlated with the efficacy of adjuvant anti-ERBB2 treatment and can contribute to treatment-predictive information in breast cancer. Taken together, this thesis shows the potential utility of DL in tissue-based characterisation of cancer for prediction of cancer outcome, tumour molecular status and efficacy of molecularly targeted treatments.
DL-based analysis of the basic tissue morphology can provide significant predictive information and be combined with clinicopathological and molecular data to improve the accuracy of cancer diagnostics.
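
    A minimal sketch of the multitask idea described in the abstract: a shared tissue-image encoder with one head supervised by patient outcome and another by biomarker status. The ResNet50 backbone, head design, and loss weights are illustrative assumptions rather than the thesis implementation.

```python
# Multitask model: shared image encoder with outcome and biomarker heads.
# Backbone choice, label encodings, and loss weights are hypothetical.
import tensorflow as tf
from tensorflow.keras import layers, Model

inputs = tf.keras.Input(shape=(224, 224, 3))
# Pre-trained encoder with global average pooling (input preprocessing omitted).
backbone = tf.keras.applications.ResNet50(include_top=False,
                                           weights="imagenet", pooling="avg")
features = backbone(inputs)

# Head 1: patient outcome (e.g. 5-year cancer-specific survival, binary).
outcome = layers.Dense(1, activation="sigmoid", name="outcome")(features)
# Head 2: biomarker status (e.g. hormone receptor / ERBB2 status, binary).
biomarker = layers.Dense(1, activation="sigmoid", name="biomarker")(features)

model = Model(inputs, [outcome, biomarker])
model.compile(
    optimizer="adam",
    loss={"outcome": "binary_crossentropy", "biomarker": "binary_crossentropy"},
    loss_weights={"outcome": 1.0, "biomarker": 0.5},
    metrics={"outcome": [tf.keras.metrics.AUC()],
             "biomarker": [tf.keras.metrics.AUC()]},
)
model.summary()
```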

    Advance Nanomaterials for Biosensors

    Get PDF
    The book provides a comprehensive overview of nanostructures and methods used to design biosensors, as well as applications for these biosensor nanotechnologies in the biological, chemical, and environmental monitoring fields. Biological sensing has proven to be an essential tool for understanding living systems, but it also has practical applications in medicine, drug discovery, food safety, environmental monitoring, defense, personal security, etc. In healthcare, advancements in telecommunications, expert systems, and distributed diagnostics are challenging current delivery models, while robust industrial sensors enable new approaches to research and development. Experts from around the world have written five articles on topics including: Diagnosing and treating intraocular cancers such as retinoblastoma; Nanomedicine in cancer management; Engineered nanomaterials in osteosarcoma diagnosis and treatment; Practical design of nanoscale devices; Quantitative detection of alkaline phosphatase in clinical diagnosis; Progress in non-enzymatic sensing of dual/multi biomolecules; Developments in non-enzymatic glucose and H2O2 (NEGH) sensing; Multi-functionalized nanocarrier therapies for targeting retinoblastoma; Galactose-functionalized nanocarriers; Sensing performance, electro-catalytic mechanism, and morphology and design of electrode materials; Biosensors, their applications, and the benefits of machine learning; Innovative approaches to improve NEGH sensitivity, selectivity, and stability in real-time applications; Challenges and solutions in the field of biosensors.

    Deep Learning in Medical Image Analysis

    Get PDF
    The accelerating power of deep learning in diagnosing diseases will empower physicians and speed up decision making in clinical environments. Applications of modern medical instruments and digitalization of medical care have generated enormous amounts of medical images in recent years. In this big data arena, new deep learning methods and computational models for efficient data processing, analysis, and modeling of the generated data are crucially important for clinical applications and for understanding the underlying biological processes. This book presents and highlights novel algorithms, architectures, techniques, and applications of deep learning for medical image analysis.

    Development of deep learning methods for head and neck cancer detection in hyperspectral imaging and digital pathology for surgical guidance

    Get PDF
    Surgeons performing routine cancer resections utilize palpation and visual inspection, along with time-consuming microscopic tissue analysis, to ensure removal of cancer. Despite this, inadequate surgical cancer margins are reported for up to 10-20% of head and neck squamous cell carcinoma (SCC) operations. There exists a need for surgical guidance with optical imaging to ensure complete cancer resection in the operating room. The objective of this dissertation is to evaluate hyperspectral imaging (HSI) as a non-contact, label-free optical imaging modality to provide intraoperative diagnostic information. For comparison of different optical methods, autofluorescence, RGB composite images synthesized from HSI, and two fluorescent dyes are also acquired and investigated for head and neck cancer detection. A novel and comprehensive dataset of 585 excised tissue specimens was obtained from 204 patients undergoing routine head and neck cancer surgeries. The first aim was to use SCC tissue specimens to determine the potential of HSI for surgical guidance in the challenging task of head and neck SCC detection. It is hypothesized that HSI could reduce time and provide quantitative cancer predictions. State-of-the-art deep learning algorithms were developed for SCC detection in 102 patients and compared to other optical methods. HSI detected SCC with a median AUC score of 85%, and several anatomical locations demonstrated good SCC detection, such as the larynx, oropharynx, hypopharynx, and nasal cavity. To understand the ability of HSI for SCC detection, the most important spectral features were calculated and correlated with known cancer physiology signals, notably oxygenated and deoxygenated hemoglobin. The second aim was to evaluate HSI for tumor detection in thyroid and salivary glands, and RGB images were synthesized using the spectral response curves of the human eye for comparison. Using deep learning, HSI detected thyroid tumors with an 86% average AUC score, which outperformed fluorescent dyes and autofluorescence, but HSI-synthesized RGB imagery performed with a 90% AUC score. The last aim was to develop deep learning algorithms for head and neck cancer detection in hundreds of digitized histology slides. Slides containing SCC or thyroid carcinoma can be distinguished from normal slides with 94% and 99% AUC scores, respectively, and SCC and thyroid carcinoma can be localized within whole-slide images with 92% and 95% AUC scores, respectively. In conclusion, the outcomes of this thesis work demonstrate that HSI and deep learning methods could aid surgeons and pathologists in detecting head and neck cancers.
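
    A small sketch of a patch-based classifier for hyperspectral imaging (HSI) cancer detection, evaluated with AUC as in the abstract. The number of spectral bands, patch size, network architecture, and synthetic data are placeholders and do not reproduce the dissertation's models.

```python
# Toy HSI patch classifier: many-band input patches -> binary tumour prediction.
# Band count, patch size, and data are hypothetical stand-ins.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models
from sklearn.metrics import roc_auc_score

BANDS = 91   # assumed number of spectral bands per pixel
PATCH = 25   # assumed spatial patch size in pixels

model = models.Sequential([
    layers.Conv2D(32, 3, activation="relu",
                  input_shape=(PATCH, PATCH, BANDS)),
    layers.Conv2D(64, 3, activation="relu"),
    layers.GlobalAveragePooling2D(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # tumour vs normal
])
model.compile(optimizer="adam", loss="binary_crossentropy")

# Synthetic stand-in data: hyperspectral patches with binary labels.
X = np.random.rand(128, PATCH, PATCH, BANDS).astype("float32")
y = np.random.randint(0, 2, size=128)
model.fit(X, y, epochs=2, batch_size=16, verbose=0)

# AUC is the evaluation metric reported in the abstract.
print("AUC:", roc_auc_score(y, model.predict(X, verbose=0).ravel()))
```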

    Multimodal Data Fusion and Quantitative Analysis for Medical Applications

    Get PDF
    Medical big data is not only enormous in size, but also heterogeneous and complex in structure, which makes it difficult for conventional systems or algorithms to process. These heterogeneous medical data include imaging data (e.g., Positron Emission Tomography (PET), Computerized Tomography (CT), Magnetic Resonance Imaging (MRI)) and non-imaging data (e.g., laboratory biomarkers, electronic medical records, and hand-written doctor notes). Multimodal data fusion is an emerging and vital field that addresses this urgent challenge, aiming to process and analyze complex, diverse and heterogeneous multimodal data. Fusion algorithms bring great potential to medical data analysis by 1) taking advantage of complementary information from different sources (such as the functional-structural complementarity of PET/CT images) and 2) exploiting consensus information that reflects the intrinsic essence (such as the genetic essence underlying medical imaging and clinical symptoms). Thus, multimodal data fusion benefits a wide range of quantitative medical applications, including personalized patient care, more optimal medical operation planning, and preventive public health. Although there has been extensive research on computational approaches for multimodal fusion, three major challenges remain for multimodal data fusion in quantitative medical applications, summarized as feature-level fusion, information-level fusion and knowledge-level fusion:
    • Feature-level fusion. The first challenge is to mine multimodal biomarkers from high-dimensional small-sample multimodal medical datasets, which hinders the effective discovery of informative multimodal biomarkers. Specifically, efficient dimension reduction algorithms are required to alleviate the "curse of dimensionality" and to satisfy the criteria for discovering interpretable, relevant, non-redundant and generalizable multimodal biomarkers.
    • Information-level fusion. The second challenge is to exploit and interpret inter-modal and intra-modal information for precise clinical decisions. Although radiomics and multi-branch deep learning have been used for implicit information fusion guided by label supervision, there is a lack of methods that explicitly explore inter-modal relationships in medical applications. Unsupervised multimodal learning can mine inter-modal relationships, reduce the reliance on labor-intensive labelled data, and explore potentially undiscovered biomarkers; however, mining discriminative information without label supervision remains an open challenge. Furthermore, the interpretation of complex non-linear cross-modal associations, especially in deep multimodal learning, is another critical challenge in information-level fusion, which hinders the exploration of multimodal interactions in disease mechanisms.
    • Knowledge-level fusion. The third challenge is quantitative knowledge distillation from multi-focus regions in medical imaging. Although characterizing imaging features from single lesions using either feature engineering or deep learning has been investigated in recent years, both approaches neglect the importance of inter-region spatial relationships. A topological profiling tool for multi-focus regions is therefore in high demand, yet missing from current feature engineering and deep learning methods. Furthermore, incorporating domain knowledge with the knowledge distilled from multi-focus regions is another challenge in knowledge-level fusion.
    To address these three challenges in multimodal data fusion, this thesis provides a multi-level fusion framework for multimodal biomarker mining, multimodal deep learning, and knowledge distillation from multi-focus regions. Specifically, our major contributions include:
    • To address the challenges in feature-level fusion, we propose an Integrative Multimodal Biomarker Mining framework to select interpretable, relevant, non-redundant and generalizable multimodal biomarkers from high-dimensional small-sample imaging and non-imaging data for diagnostic and prognostic applications. The feature selection criteria, including representativeness, robustness, discriminability, and non-redundancy, are addressed by consensus clustering, a Wilcoxon filter, sequential forward selection, and correlation analysis, respectively. The SHapley Additive exPlanations (SHAP) method and a nomogram are employed to further enhance feature interpretability in machine learning models.
    • To address the challenges in information-level fusion, we propose an Interpretable Deep Correlational Fusion framework, based on canonical correlation analysis (CCA), for 1) cohesive multimodal fusion of medical imaging and non-imaging data, and 2) interpretation of complex non-linear cross-modal associations. Specifically, two novel loss functions are proposed to optimize the discovery of informative multimodal representations in both supervised and unsupervised deep learning, by jointly learning inter-modal consensus and intra-modal discriminative information. An interpretation module is proposed to decipher the complex non-linear cross-modal associations by leveraging interpretation methods from both deep learning and multimodal consensus learning.
    • To address the challenges in knowledge-level fusion, we propose a Dynamic Topological Analysis (DTA) framework, based on persistent homology, for knowledge distillation from inter-connected multi-focus regions in medical imaging and for incorporation of domain knowledge. Unlike conventional feature engineering and deep learning, the DTA framework explicitly quantifies inter-region topological relationships, including global-level geometric structure and community-level clusters. A K-simplex Community Graph is proposed to construct the dynamic community graph representing community-level multi-scale graph structure. The constructed dynamic graph is subsequently tracked with a novel Decomposed Persistence algorithm. Domain knowledge is incorporated into the Adaptive Community Profile, which summarizes the tracked multi-scale community topology together with additional customizable, clinically important factors.
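
    A compact sketch of the information-level fusion idea above: canonical correlation analysis (CCA) to learn correlated representations of imaging and non-imaging features before fusing them for classification. scikit-learn's linear CCA stands in for the proposed deep correlational model; the data, dimensions, and downstream classifier are illustrative assumptions.

```python
# Linear CCA fusion of imaging and non-imaging features, then classification.
# All data and dimensions below are synthetic placeholders.
import numpy as np
from sklearn.cross_decomposition import CCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_patients = 200
imaging = rng.normal(size=(n_patients, 50))    # e.g. radiomic / CNN features
clinical = rng.normal(size=(n_patients, 12))   # e.g. lab biomarkers, records
labels = rng.integers(0, 2, size=n_patients)   # hypothetical diagnosis labels

# Project both modalities into a shared 5-dimensional correlated space.
cca = CCA(n_components=5)
img_c, clin_c = cca.fit_transform(imaging, clinical)

# Fuse by concatenating the correlated components and classify.
fused = np.hstack([img_c, clin_c])
clf = LogisticRegression(max_iter=1000).fit(fused, labels)
print("training accuracy:", clf.score(fused, labels))
```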