
    Application of a novel automatic method for determining the bilateral symmetry midline of the facial skeleton based on invariant moments

    The assumption of a symmetric pattern plays a fundamental role in the diagnosis and surgical treatment of facial asymmetry. In reconstructive craniofacial surgery, knowing the precise location of the facial midline is important, since for most reconstructive procedures the intact side of the face serves as a template for the malformed side. Despite its importance, however, locating the midline is still a subjective procedure. This study aimed to automatically locate the bilateral symmetry midline of the facial skeleton using an invariant-moment technique based on pseudo-Zernike moments. A total of 367 skull images were evaluated with the proposed technique, which was found to be reliable and provided good accuracy in the symmetry planes. This new technique will be used in subsequent studies to evaluate diverse craniofacial reconstruction techniques.
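
    As an illustration of the moment-based symmetry idea described above, the sketch below scans candidate vertical midlines in a 2D frontal skull image and scores how similar the left half and the mirrored right half are. It is a minimal sketch, not the authors' implementation: ordinary Hu moment invariants from OpenCV stand in for the pseudo-Zernike moments used in the paper, and the file name is hypothetical.

```python
# Minimal sketch of a moment-based midline search (not the authors' implementation).
# Hu moment invariants stand in for the pseudo-Zernike moments, purely for illustration.
import cv2
import numpy as np

def midline_score(image, col):
    """Compare moment invariants of the left half and the mirrored right half
    about a candidate vertical midline at column `col` (smaller = more symmetric)."""
    width = min(col, image.shape[1] - col)
    if width < 10:                       # ignore midlines too close to the border
        return np.inf
    left = image[:, col - width:col]
    right = np.fliplr(image[:, col:col + width])
    hu_left = cv2.HuMoments(cv2.moments(left)).ravel()
    hu_right = cv2.HuMoments(cv2.moments(right)).ravel()
    return float(np.linalg.norm(hu_left - hu_right))

def find_midline(image):
    """Return the column whose mirrored halves have the most similar invariants."""
    cols = range(image.shape[1] // 4, 3 * image.shape[1] // 4)
    return min(cols, key=lambda c: midline_score(image, c))

# Usage (hypothetical file name):
# skull = cv2.imread("skull_ap_view.png", cv2.IMREAD_GRAYSCALE).astype(np.float32)
# print("estimated midline column:", find_midline(skull))
```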

    The Role of Transient Vibration of the Skull on Concussion

    Concussion is a traumatic brain injury usually caused by a direct or indirect blow to the head that affects brain function. The maximum mechanical impedance of the brain tissue occurs at 450±50 Hz and may be affected by the skull resonant frequencies. After an impact to the head, vibration resonance of the skull damages the underlying cortex. The skull deforms and vibrates, like a bell, for 3 to 5 milliseconds, bruising the cortex. Furthermore, the deceleration forces the frontal and temporal cortex against the skull, eliminating a layer of cerebrospinal fluid. When the skull vibrates, the force spreads directly to the cortex, with no layer of cerebrospinal fluid to reflect the wave or cushion its force. To date, there has been little research investigating the effect of transient vibration of the skull. Therefore, the overall goal of the proposed research is to gain a better understanding of the role of transient vibration of the skull on concussion. This goal will be achieved by addressing three research objectives. First, an automatic MRI skull and brain segmentation technique is developed. Because bone produces a weak magnetic resonance signal, MRI scans struggle to differentiate bone tissue from other structures. One of the most important components for a successful segmentation is high-quality ground truth labels. Therefore, we introduce a deep learning framework for skull segmentation in which the ground truth labels are created from CT imaging using the standard tessellation language (STL). Furthermore, since the brain region will be important for future work, we explore a new initialization concept for the convolutional neural network (CNN) based on orthogonal moments to improve brain segmentation in MRI. Second, a novel 2D and 3D automatic method to align the facial skeleton is introduced. An important aspect of further impact analysis is the ability to precisely simulate the same point of impact on multiple bone models. To perform this task, the skull must be precisely aligned in all anatomical planes. Therefore, we introduce a 2D/3D technique to align the facial skeleton that was initially developed for automatically calculating the craniofacial symmetry midline. In the 2D version, the concept of using cephalometric landmarks and manual image-grid alignment to construct the training dataset was introduced. This concept was then extended to a 3D version in which the coronal and transverse planes are aligned using a CNN approach. As the alignment in the sagittal plane is still undefined, a new alignment based on these techniques will be created to align the sagittal plane using the Frankfort plane as a framework. Third, the resonant frequencies of multiple skulls are assessed to determine how skull resonant-frequency vibrations propagate into the brain tissue. After applying material properties and a mesh to the skull, modal analysis is performed to assess the skull's natural frequencies. Finally, theories will be proposed regarding the relation between skull geometry, such as shape and thickness, and vibration with brain tissue injury, which may result in concussive injury.
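
    A worked sketch of the modal-analysis step mentioned in the last objective may help: once a finite-element package has assembled mass (M) and stiffness (K) matrices for the skull mesh, the natural frequencies follow from the generalized eigenproblem K·φ = ω²·M·φ. The toy 3-degree-of-freedom matrices below are purely illustrative assumptions, not skull properties.

```python
# Minimal sketch of how natural frequencies emerge from a modal analysis,
# assuming mass (M) and stiffness (K) matrices have already been assembled;
# the toy 3-DOF matrices below are illustrative only.
import numpy as np
from scipy.linalg import eigh

M = np.diag([0.5, 0.5, 0.5])                  # lumped masses (kg), illustrative
K = np.array([[ 2e6, -1e6,  0.0],
              [-1e6,  2e6, -1e6],
              [ 0.0, -1e6,  1e6]])            # stiffnesses (N/m), illustrative

# Generalized symmetric eigenproblem  K @ phi = w^2 * M @ phi
eigvals, modes = eigh(K, M)
frequencies_hz = np.sqrt(np.abs(eigvals)) / (2.0 * np.pi)
print("natural frequencies (Hz):", np.round(frequencies_hz, 1))
```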

    AI-enhanced diagnosis of challenging lesions in breast MRI: a methodology and application primer

    Computer-aided diagnosis (CAD) systems have become an important tool in the assessment of breast tumors with magnetic resonance imaging (MRI). CAD systems can be used for the detection and diagnosis of breast tumors as a “second opinion” review complementing the radiologist’s review. CAD systems have many common parts, such as image pre-processing, tumor feature extraction and data classification, that are mostly based on machine learning (ML) techniques. In this review paper, we describe the application of ML-based CAD systems in MRI of the breast, covering the detection of diagnostically challenging lesions such as non-mass enhancing (NME) lesions, as well as multiparametric MRI, neo-adjuvant chemotherapy (NAC) and radiomics, all applied to NME. Since ML has been widely used in the medical imaging community, we provide an overview of the state-of-the-art and novel techniques applied as classifiers in CAD systems. The differences between the CAD systems in MRI of the breast for several standard and novel applications for NME are explained in detail to provide important examples illustrating: (i) CAD for detection and diagnosis, (ii) CAD in multi-parametric imaging, (iii) CAD in NAC, and (iv) breast cancer radiomics. We aim to provide a comparison between these CAD applications and to illustrate a global view of intelligent CAD systems based on artificial neural networks (ANN) in MRI of the breast.
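
    To make the shared CAD building blocks concrete, the sketch below strings together the usual pre-processing, feature and classification stages with scikit-learn. It is a generic illustration on synthetic feature vectors, not any of the reviewed systems; the feature dimensions and labels are invented stand-ins.

```python
# Minimal sketch of the generic ML stages a CAD system shares (pre-processing,
# feature vectors, classification), using scikit-learn on synthetic data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 32))          # stand-in per-lesion feature vectors
y = rng.integers(0, 2, size=200)        # stand-in labels: benign vs malignant

clf = make_pipeline(StandardScaler(),
                    MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                                  random_state=0))
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```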

    A hybrid deep learning approach for texture analysis

    Texture classification is a problem with various applications, such as remote sensing and forest species recognition. Solutions tend to be custom fit to the dataset used but fail to generalize. The combination of a Convolutional Neural Network (CNN) and a Support Vector Machine (SVM) forms a robust pairing of a powerful invariant feature extractor and an accurate classifier. The fusion of the classifiers shows stable classification across different datasets and a slight improvement compared to state-of-the-art methods. The classifiers are fused using a confusion matrix after each is trained independently on the same training set and then put to the test. Statistical information about each classifier is fed into a confusion matrix that generates two confidence measures used to build two binary classifiers. A binary classifier is allowed to activate or deactivate a classifier at testing time based on the confidence measure obtained from the confusion matrix. The method obtained results approaching the state of the art, with a difference of less than 1% in classification success rates. Moreover, the method was able to maintain this success rate across different datasets, while other methods failed to achieve similar stability. Two datasets were used in this research, Brodatz and Kylberg, on which the results were 98.17% and 99.70%, compared with 98.9% and 99.64%, respectively, for conventional methods in the literature.
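
    The fusion rule can be made concrete with a small sketch: two classifiers (stand-ins for the CNN and the SVM) are trained independently, a confusion matrix on a held-out validation set yields a per-class confidence for each, and at test time the prediction of the more confident classifier is kept. This is a minimal sketch under those assumptions, not the paper's exact procedure.

```python
# Minimal sketch of confusion-matrix-based fusion of two independently trained
# classifiers, using per-class precision on a validation set as the confidence.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_tmp, y_tr, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_te, y_val, y_te = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

clf_a = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0).fit(X_tr, y_tr)
clf_b = SVC().fit(X_tr, y_tr)

def per_class_precision(clf):
    cm = confusion_matrix(y_val, clf.predict(X_val), labels=np.arange(10))
    return np.diag(cm) / np.maximum(cm.sum(axis=0), 1)   # precision per predicted class

conf_a, conf_b = per_class_precision(clf_a), per_class_precision(clf_b)
pred_a, pred_b = clf_a.predict(X_te), clf_b.predict(X_te)

# Keep each sample's prediction from whichever classifier is more confident in that class.
fused = np.where(conf_a[pred_a] >= conf_b[pred_b], pred_a, pred_b)
print("fused accuracy:", (fused == y_te).mean())
```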

    Enhanced Alzheimer’s disease classification scheme using 3D features

    Alzheimer’s disease (AD) is a neurodegenerative brain illness that leads to death due to complications. Many studies on AD classification with Magnetic Resonance Imaging (MRI) images have been conducted to serve as computer-aided diagnosis tools. Feature extraction and feature selection were performed to reduce the number of features and extract significant features concurrently. However, the classification of stable mild cognitive impairment (SMCI) versus progressive mild cognitive impairment (PMCI) is far from satisfactory due to the high similarity between the groups. Therefore, this research aimed to enhance the AD classification scheme to solve this problem. The proposed method includes shape enhancement before feature extraction to maximize the difference between healthy patients (normal control (NC)+SMCI) and sick patients (PMCI+AD). Sick patients have a thinner brain boundary than healthy patients. Therefore, a 3D opening morphological operation was proposed to eliminate the thinner boundary and restore the thicker boundary. After that, a proposed 3-level 3D Discrete Wavelet Transform (DWT) and Principal Component Analysis (PCA) were combined for feature extraction. Using the Haar filter, the 3-level 3D-DWT extracted significant 3D features to improve the classification result. PCA further reduced the number of features by projecting the training set and test set into a lower-dimensional space. The number of features was greatly reduced from 2,122,945 to 159. Feature selection was removed from the proposed scheme after realizing that the process would eliminate features important for segregating the classification groups. A linear Support Vector Machine (SVM) was employed to perform binary classification. The proposed scheme achieved higher mean accuracy than the previous method, improving from 79% to 80%, from 81% to 84%, and from 80% to 84% on the datasets collected 24 months before, 18 months before, and at the stable diagnosis time point, respectively.
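
    The feature path described above (3D morphological opening, 3-level 3D Haar DWT, PCA, linear SVM) can be sketched with PyWavelets and scikit-learn. The sketch below runs on random stand-in volumes; the structuring element, threshold and data are illustrative assumptions, not the study's settings.

```python
# Minimal sketch of the 3D-opening + 3-level 3D Haar DWT + PCA + linear SVM path
# on random stand-in volumes; not the study's implementation or data.
import numpy as np
import pywt
from scipy.ndimage import binary_opening
from sklearn.decomposition import PCA
from sklearn.svm import LinearSVC

def extract_features(volume):
    """3D opening (illustrative structuring element), then 3-level 3D Haar DWT;
    the approximation (low-frequency) sub-band is flattened into a feature vector."""
    opened = binary_opening(volume > volume.mean(), structure=np.ones((3, 3, 3)))
    coeffs = pywt.wavedecn(opened.astype(float), wavelet='haar', level=3)
    return coeffs[0].ravel()

rng = np.random.default_rng(0)
volumes = rng.normal(size=(40, 32, 32, 32))     # stand-in brain volumes
labels = rng.integers(0, 2, size=40)            # stand-in healthy/sick labels

features = np.stack([extract_features(v) for v in volumes])
reduced = PCA(n_components=10).fit_transform(features)
clf = LinearSVC(max_iter=10000).fit(reduced, labels)
print("training accuracy on the toy data:", clf.score(reduced, labels))
```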

    On Improving Generalization of CNN-Based Image Classification with Delineation Maps Using the CORF Push-Pull Inhibition Operator

    Deployed image classification pipelines typically depend on images captured in real-world environments. This means that images might be affected by different sources of perturbation (e.g. sensor noise in low-light environments). The main challenge arises from the fact that image quality directly impacts the reliability and consistency of classification tasks. This challenge has therefore attracted wide interest within the computer vision community. We propose a transformation step that attempts to enhance the generalization ability of CNN models in the presence of unseen noise in the test set. Concretely, the delineation maps of given images are determined using the CORF push-pull inhibition operator. Such an operation transforms an input image into a space that is more robust to noise before it is processed by a CNN. We evaluated our approach on the Fashion MNIST data set with an AlexNet model. The proposed CORF-augmented pipeline achieved results on noise-free images comparable to those of a conventional AlexNet classification model without CORF delineation maps, but it consistently achieved significantly superior performance on test images perturbed with different levels of Gaussian and uniform noise.
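
    A minimal sketch of the pre-processing idea follows: each image is replaced by a contour (delineation) map before being passed to the CNN. The real CORF push-pull inhibition operator is not reproduced here; a simple Sobel gradient-magnitude map stands in for it, purely to show where the transformation sits in the pipeline.

```python
# Minimal sketch of the pre-processing idea: classify delineation maps instead of
# raw images. A Sobel gradient-magnitude map stands in for the CORF operator.
import numpy as np
from scipy.ndimage import sobel

def delineation_map(image):
    """Stand-in for the CORF operator: normalized gradient-magnitude contours."""
    gx, gy = sobel(image, axis=0), sobel(image, axis=1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

# The CNN (e.g. AlexNet) would then be trained on delineation_map(x) instead of x,
# the intent being that contour maps are less sensitive to unseen additive noise.
example = np.random.default_rng(0).random((28, 28))   # stand-in Fashion-MNIST image
print(delineation_map(example).shape)
```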

    A Review on Skin Disease Classification and Detection Using Deep Learning Techniques

    Skin cancer ranks among the most dangerous cancers, and it is commonly referred to as melanoma. Melanoma is brought on by genetic faults or mutations in the skin, which are caused by unrepaired deoxyribonucleic acid (DNA) damage in skin cells. It is essential to detect skin cancer in its infancy, since it is more curable in its initial phases. Skin cancer typically progresses to other regions of the body. Owing to the disease's increased frequency, high mortality rate, and the prohibitively high cost of medical treatment, early diagnosis of skin cancer signs is crucial. Because of how hazardous these disorders are, scholars have developed a number of early-detection techniques for melanoma. Lesion characteristics such as symmetry, colour, size, shape, and others are often utilised to detect skin cancer and to distinguish benign skin cancer from melanoma. An in-depth investigation of deep learning techniques for the early detection of melanoma is provided in this study. The study also discusses traditional feature-extraction-based machine learning approaches for the segmentation and classification of skin lesions. Comparison-oriented research has been conducted to demonstrate the significance of various deep learning-based segmentation and classification approaches.

    Deep Learning for Multiclass Classification, Predictive Modeling and Segmentation of Disease Prone Regions in Alzheimer’s Disease

    One of the challenges facing accurate diagnosis and prognosis of Alzheimer’s Disease (AD) is identifying the subtle changes that define the early onset of the disease. This dissertation investigates three of the main challenges confronted when such subtle changes are to be identified in the most meaningful way. These are (1) the missing data challenge, (2) longitudinal modeling of disease progression, and (3) the segmentation and volumetric calculation of disease-prone brain areas in medical images. The scarcity of sufficient data, compounded by the missing data challenge in many longitudinal samples, exacerbates the problem as we seek statistical meaningfulness in multiclass classification and regression analysis. Although there are many participants in the AD Neuroimaging Initiative (ADNI) study, many of the observations have numerous missing features, which often leads to the exclusion of potentially valuable data points that could add significant meaning to many ongoing experiments. Motivated by the necessity of examining all participants, even those with missing tests or imaging modalities, multiple techniques for handling missing data in this domain have been explored. Specific attention was drawn to the Gradient Boosting (GB) algorithm, which has an inherent capability of addressing missing values. Prior to applying state-of-the-art classifiers such as Support Vector Machine (SVM) and Random Forest (RF), the impact of imputing data in common datasets with numerical techniques was also investigated and compared with the GB algorithm. Furthermore, to discriminate AD subjects from healthy control individuals and those with Mild Cognitive Impairment (MCI), longitudinal multimodal heterogeneous data was modeled using recurrent neural networks (RNNs). In the segmentation and volumetric calculation challenge, this dissertation places its focus on one of the most relevant disease-prone areas in many neurological and neurodegenerative diseases, the hippocampus region. Changes in hippocampus shape and volume are considered significant biomarkers for AD diagnosis and prognosis. Thus, a two-stage model based on integrating a Vision Transformer and a Convolutional Neural Network (CNN) is developed to automatically locate, segment, and estimate the hippocampus volume from 3D brain MRI. The proposed architecture was trained and tested on a dataset containing 195 brain MRIs from the 2019 Medical Segmentation Decathlon Challenge against the manually segmented regions provided therein, and was deployed on 326 MRIs from our own data collected through Mount Sinai Medical Center as part of the 1Florida Alzheimer Disease Research Center (ADRC).
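
    The missing-data strategy can be illustrated with a short sketch: scikit-learn's HistGradientBoostingClassifier accepts NaN entries natively, while an impute-then-classify baseline (here a Random Forest after median imputation) needs the gaps filled first. The data below are synthetic stand-ins, not ADNI features.

```python
# Minimal sketch contrasting gradient boosting with native missing-value handling
# against an impute-then-classify baseline, on synthetic data with ~25% gaps.
import numpy as np
from sklearn.ensemble import HistGradientBoostingClassifier, RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 12))                       # stand-in multimodal features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)        # synthetic diagnosis label
X[rng.random(X.shape) < 0.25] = np.nan               # inject ~25% missing entries

gb = HistGradientBoostingClassifier(random_state=0)  # handles NaN values natively
rf = make_pipeline(SimpleImputer(strategy="median"),
                   RandomForestClassifier(random_state=0))

print("GB (native missing-value handling):", cross_val_score(gb, X, y, cv=5).mean())
print("RF (median imputation first)      :", cross_val_score(rf, X, y, cv=5).mean())
```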

    3D Architectural Analysis of Neurons, Astrocytes, Vasculature & Nuclei in the Motor and Somatosensory Murine Cortical Columns

    Characterization of the complex cortical structure of the brain at a cellular level is a fundamental goal of neuroscience that can provide a better understanding of both normal function and disease-state progression. Many challenges exist, however, when carrying out this form of analysis. Immunofluorescent staining is a key technique for revealing 3-dimensional structure, but subsequent fluorescence microscopy is limited by the number of targets that can be labeled simultaneously and by intrinsic lateral and isotropic axial point-spread function (PSF) blurring, which occurs during the imaging process in a spectral- and depth-dependent manner. Even after successful staining, imaging and optical deconvolution, the sheer density of filamentous processes in the neuropil significantly complicates analysis because of the difficulty of separating individual cells in a highly interconnected network of tightly woven cellular arbors. To solve these problems, a variety of methodologies were developed and validated for improved analysis of cortical anatomy. An enhanced immunofluorescent staining and imaging protocol was utilized to precisely locate specific functional regions within brain slices at high magnification and to collect four-channel, complete cortical columns. A powerful deconvolution routine was established that used depth-variant PSFs, collected with an optical phantom, for image restoration. Fractional volume analysis (FVA) was used to provide preliminary data on the proportions of each stained component in order to statistically characterize the variability within and between the functional regions in a depth-dependent and depth-independent manner. Finally, using machine learning techniques, a supervised learning model was developed that could automatically classify neuronal and astrocytic nuclei within the large cortical column datasets based on perinuclear fluorescence. These annotated nuclei were then used as seed points within their corresponding fluorescent channel for cell individualization in the highly interconnected network. For astrocytes, this technique provides the first method for characterizing complex morphology in an automated fashion over large areas without laborious dye filling or manual tracing.
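
    A minimal sketch of the fractional volume analysis (FVA) step may clarify it: for each fluorescent channel, the fraction of voxels above an intensity threshold is computed within depth bins along the cortical column. The channel names, threshold and volumes below are illustrative stand-ins, not the study's data.

```python
# Minimal sketch of depth-dependent fractional volume analysis (FVA) on
# synthetic stand-in volumes; one occupied-voxel fraction per depth bin.
import numpy as np

def fractional_volume_by_depth(volume, threshold, n_bins=10):
    """volume: (z, y, x) array for one fluorescent channel; returns the fraction
    of supra-threshold voxels in each depth bin along the cortical (z) axis."""
    mask = volume > threshold
    bins = np.array_split(mask, n_bins, axis=0)      # split along depth
    return np.array([b.mean() for b in bins])        # occupied fraction per bin

rng = np.random.default_rng(0)
channels = {"NeuN": rng.random((100, 64, 64)),       # stand-in neuronal channel
            "GFAP": rng.random((100, 64, 64))}       # stand-in astrocytic channel
for name, vol in channels.items():
    print(name, np.round(fractional_volume_by_depth(vol, threshold=0.8), 3))
```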