41 research outputs found

    Local and deep texture features for classification of natural and biomedical images

    Developing efficient feature descriptors is very important in many computer vision applications, including biomedical image analysis. In the two decades before the rise of deep learning approaches to image classification, texture features proved very effective at capturing gradient variation in an image. Following the success of the Local Binary Pattern (LBP) descriptor, many variants were introduced to further improve classification performance. However, image classification becomes more difficult as the numbers of images and classes grow, and more robust approaches are required. In this thesis, we address the problem of analyzing biomedical images using a combination of local and deep features. First, we propose a novel descriptor based on the motif Peano scan concept, called Joint Motif Labels (JML). We then combine the features extracted by the JML descriptor with two other descriptors: Rotation Invariant Co-occurrence among Local Binary Patterns (RIC-LBP) and Joint Adaptive Median Binary Patterns (JAMBP). In addition, we construct another descriptor, Motif Patterns encoded by RIC-LBP, and use it in our classification framework. We further enrich the framework by combining these local descriptors with features extracted from a pre-trained deep network, VGG-19: the 4096 features of the fully connected 'fc7' layer are extracted and concatenated with the proposed local descriptors. Finally, we show that a Random Forests (RF) classifier can be used to obtain superior performance in biomedical image analysis. Testing was performed on two standard biomedical datasets and three standard texture datasets. Results show that our framework surpasses state-of-the-art accuracy on the biomedical image analysis task, and that the combination of local features produces promising results on the standard texture datasets. Includes bibliographical references.
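    The LBP family of descriptors discussed in this abstract encodes each pixel by thresholding its neighbours against the centre value and histogramming the resulting codes. Below is a minimal NumPy sketch of the basic 8-neighbour LBP histogram; it is a simplified illustration only, not the JML, RIC-LBP, or JAMBP variants proposed in the thesis:

```python
import numpy as np

def lbp_histogram(img):
    """Basic 8-neighbour Local Binary Pattern histogram (3x3 window, no
    rotation invariance) -- a simplified sketch of the LBP idea."""
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]  # centre pixels (borders skipped)
    # 8 neighbours in clockwise order, each contributing one bit of the code
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy : img.shape[0] - 1 + dy, 1 + dx : img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    # 256-bin normalised histogram serves as the texture feature vector
    hist = np.bincount(codes.ravel(), minlength=256).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(0)
feat = lbp_histogram(rng.integers(0, 256, size=(64, 64)))
```

    In a pipeline like the one described, such a local histogram would be concatenated with the 4096-dimensional VGG-19 'fc7' features before training a Random Forests classifier.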

    Deep Active Learning for Automatic Mitotic Cell Detection on HEp-2 Specimen Medical Images

    Identifying Human Epithelial Type 2 (HEp-2) mitotic cells is a crucial step in anti-nuclear antibody (ANA) testing, the standard protocol for detecting connective tissue diseases (CTD). Because the manual ANA screening test has low throughput and is labor-intensive and subjective, there is a need for a reliable HEp-2 computer-aided diagnosis (CAD) system. Automatic detection of mitotic cells in microscopic HEp-2 specimen images is an essential step in supporting the diagnosis process and increasing the throughput of this test. This work proposes a deep active learning (DAL) approach to overcome the cell-labeling challenge. Moreover, deep learning detectors are tailored to identify the mitotic cells directly in entire microscopic HEp-2 specimen images, avoiding the segmentation step. The proposed framework is validated on the I3A Task-2 dataset over 5-fold cross-validation trials. Using the YOLO predictor, promising mitotic cell prediction results are achieved, with averages of 90.011% recall, 88.307% precision, and 81.531% mAP; with the Faster R-CNN predictor, the averages are 86.986% recall, 85.282% precision, and 78.506% mAP. Employing the DAL method over four labeling rounds effectively enhances the accuracy of the data annotation and hence improves prediction performance. The proposed framework could be applied in practice to support medical personnel in making rapid and accurate decisions about the presence of mitotic cells.
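    The DAL rounds described above can be thought of as uncertainty sampling: each round sends the pool images the detector is least confident about to an expert for labeling. A schematic sketch follows; the scores and budget are hypothetical stand-ins, not outputs of the paper's YOLO or Faster R-CNN predictors:

```python
import numpy as np

def select_for_labeling(confidences, budget):
    """Uncertainty sampling: return indices of the `budget` pool images on
    which the detector is least confident -- an illustrative stand-in for
    one labeling round of a deep active learning loop."""
    confidences = np.asarray(confidences, dtype=float)
    return np.argsort(confidences)[:budget]

# Hypothetical per-image confidence scores for an unlabeled pool of 8 specimens
pool_scores = [0.92, 0.35, 0.78, 0.51, 0.99, 0.12, 0.64, 0.47]
to_label = select_for_labeling(pool_scores, budget=3)  # least-confident first
```

    After the expert annotates the selected images, they move to the training set and the detector is retrained; the abstract reports four such rounds.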

    Deep Learning based HEp-2 Image Classification: A Comprehensive Review

    Classification of HEp-2 cell patterns plays a significant role in the indirect immunofluorescence test for identifying autoimmune diseases in the human body. Many automatic HEp-2 cell classification methods have been proposed in recent years, amongst which deep learning based methods have shown impressive performance. This paper provides a comprehensive review of the existing deep learning based HEp-2 cell image classification methods. These methods perform HEp-2 image classification at two levels, namely cell level and specimen level, and both are covered in this review. At each level, the methods are organized with a taxonomy based on deep network usage. The core idea, notable achievements, and key strengths and weaknesses of each method are critically analyzed. Furthermore, a concise review of the existing HEp-2 datasets commonly used in the literature is given. The paper ends with a discussion of novel opportunities and future research directions in this field. It is hoped that this paper will provide readers with a thorough reference for this novel, challenging, and thriving field. Comment: Published in Medical Image Analysis.

    Evaluation of process-structure-property relationships of carbon nanotube forests using simulation and deep learning

    This work aims to explore process-structure-property relationships of carbon nanotube (CNT) forests. CNTs have superior mechanical, electrical, and thermal properties that make them suitable for many applications. Yet, due to a lack of manufacturing control, there is a huge gap between the promising properties of individual CNTs and those of CNT forests, which hinders their adoption in industrial applications. In this research, computational modelling, in-situ electron microscopy of CNT synthesis, and data-driven, high-throughput deep convolutional neural networks are employed not only to accelerate the use of CNTs in various applications but also to establish a framework for building validated predictive models that can be extended to achieve application-tailored synthesis of any material. A time-resolved, physics-based finite-element simulation tool is implemented in MATLAB to investigate the synthesis of CNT forests, especially CNT-CNT interactions, the mechanical forces they generate, and their role in ensemble structure and properties. A companion numerical model of similar construction is then employed to examine forest mechanical properties in compression. In addition, in-situ experiments are carried out inside an Environmental Scanning Electron Microscope (ESEM) to nucleate and synthesize CNTs. The findings may primarily be used to expand knowledge of forest growth and self-assembly and to validate the assumptions of the simulation package. The SEM images can also be used as a training database for a deep learning model to grow CNTs by design. The chemical vapor deposition parameter space of CNT synthesis is so vast that it is not feasible, in time or cost, to investigate all conceivable combinations. Hence, simulated CNT forest morphology images are used to train machine learning algorithms that predict CNT synthesis conditions from desired properties. Exceptionally high prediction accuracies of R2 > 0.94 are achieved for buckling load and stiffness, along with accuracies of > 0.91 for the classification task. This high accuracy helps uncover CNT forest synthesis-structure relationships so that their promising performance can be exploited in real-world applications. We foresee this work as a meaningful step towards an unsupervised, machine-learning-driven simulation that can seek out the CNT forest synthesis parameters needed to achieve desired property sets for diverse applications. Includes bibliographical references.
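    The inverse-design step above (predicting synthesis conditions from desired structure) can be caricatured as a lookup over simulated morphology features. The feature vectors and parameter labels below are invented for illustration, and the thesis uses deep convolutional networks rather than this toy nearest-neighbour rule:

```python
import numpy as np

def predict_synthesis(query_features, sim_features, sim_params):
    """Return the synthesis parameters of the simulated forest whose
    morphology features lie closest to the query -- a toy stand-in for the
    CNN-based predictor described in the abstract."""
    d = np.linalg.norm(np.asarray(sim_features, dtype=float)
                       - np.asarray(query_features, dtype=float), axis=1)
    return sim_params[int(np.argmin(d))]

# Hypothetical (morphology features -> growth parameters) pairs from simulation
sim_features = [[0.1, 0.9], [0.5, 0.5], [0.9, 0.2]]
sim_params = ["low density / 700 C", "medium density / 750 C", "high density / 800 C"]
pred = predict_synthesis([0.48, 0.52], sim_features, sim_params)
```

    A learned model generalizes where this lookup cannot, but the input/output contract — desired morphology in, synthesis recipe out — is the same.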

    Deep learning architectures for 2D and 3D scene perception

    Scene understanding is a fundamental problem in computer vision that has been explored intensively in recent years with the development of deep learning. In this dissertation, we propose deep learning architectures to address challenges in 2D and 3D scene perception. We developed several novel architectures for city-scale 3D point cloud understanding that capture both long-range and short-range information, handling the challenging problem of large variations in object size in city-scale point cloud segmentation. GLSNet++ is a two-branch network for multiscale point cloud segmentation that models this complex problem with both global and local processing streams, capturing different levels of contextual and structural 3D point cloud information. We developed PointGrad, a new graph convolution gradient operator for capturing structural relationships, which encodes point-based directional gradients into a high-dimensional multiscale tensor space. Using the PointGrad operator with graph convolution on scattered, irregular point sets captures the salient structural information in the point cloud across spatial and feature scale space, enabling efficient learning. We integrated PointGrad with several deep network architectures for large-scale 3D point cloud semantic segmentation, including indoor scene and object part segmentation. In many real application areas, including remote sensing and aerial imaging, class imbalance is common, and sufficient data for rare classes is hard to acquire or carries the high cost of expert labeling. We developed MDXNet for few-shot and zero-shot learning, which emulates the human visual system by leveraging multi-domain knowledge from general visual primitives with transfer learning for more specialized learning tasks in various application domains. We extended deep learning methods to other domains, including the material domain, for predicting carbon nanotube forest attributes and mechanical properties, and the biomedical domain, for cell segmentation. Includes bibliographical references.
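    The directional-gradient idea behind PointGrad can be illustrated with a plain NumPy computation of per-point offset vectors to the k nearest neighbours. This is a geometric sketch only; the actual operator encodes such gradients into a high-dimensional multiscale tensor space inside a graph convolution:

```python
import numpy as np

def knn_gradients(points, k=3):
    """For each 3D point, return the offset (gradient-like) vectors to its k
    nearest neighbours -- a sketch of the directional information a
    PointGrad-style operator consumes."""
    pts = np.asarray(points, dtype=float)                 # (N, 3)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)  # (N, N)
    np.fill_diagonal(d, np.inf)                           # exclude the point itself
    idx = np.argsort(d, axis=1)[:, :k]                    # k nearest-neighbour indices
    return pts[idx] - pts[:, None, :]                     # (N, k, 3) offset vectors

rng = np.random.default_rng(1)
grads = knn_gradients(rng.random((10, 3)), k=3)
```

    The brute-force pairwise distance matrix is O(N^2) and is only viable for small clouds; large-scale pipelines would use a spatial index instead.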

    Fully Unsupervised Image Denoising, Diversity Denoising and Image Segmentation with Limited Annotations

    Understanding the processes of cellular development and the interplay of cell shape changes, division, and migration requires investigating developmental processes at the spatial resolution of a single cell. Biomedical imaging experiments enable the study of dynamic processes as they occur in living organisms. While biomedical imaging is essential, a key component of exposing unknown biological phenomena is quantitative image analysis. Biomedical images, especially microscopy images, are usually noisy owing to practical limitations such as the available photon budget, sample sensitivity, etc. Additionally, microscopy images often contain artefacts due to optical aberrations in microscopes or imperfections in the camera sensor and internal electronics. The noisy nature of the images, as well as the artefacts, prohibits accurate downstream analysis such as cell segmentation. Although countless approaches have been proposed for image denoising, artefact removal, and segmentation, supervised Deep Learning (DL) based content-aware algorithms currently perform best for all these tasks. However, supervised DL based methods suffer from many practical limitations. Supervised denoising and artefact removal algorithms require paired corrupted and high-quality images for training; obtaining such image pairs can be very hard, and is virtually impossible in most biomedical imaging applications, owing to photosensitivity and the dynamic nature of the samples being imaged. Similarly, supervised DL based segmentation methods need copious amounts of annotated data for training, which is often very expensive to obtain. Owing to these restrictions, it is imperative to look beyond supervised methods. The objective of this thesis is to develop novel unsupervised alternatives for image denoising and artefact removal, as well as semi-supervised approaches for image segmentation. The first part of this thesis deals with unsupervised image denoising and artefact removal. For the unsupervised image denoising task, this thesis first introduces a probabilistic approach for training DL based methods using parametric models of imaging noise. Next, a novel unsupervised diversity denoising framework is presented, which addresses the fundamentally non-unique inverse nature of image denoising by generating multiple plausible denoised solutions for any given noisy image. Finally, interesting properties of the diversity denoising methods are presented which make them suitable for unsupervised spatial artefact removal in microscopy and medical imaging applications. The second part of this thesis addresses the problem of cell/nucleus segmentation, focusing on practical scenarios where ground truth annotations for training DL based segmentation methods are scarce. Unsupervised denoising is used as an aid to improve segmentation performance in the presence of limited annotations: several training strategies are presented that leverage the representations learned by unsupervised denoising networks to enable better cell/nucleus segmentation in microscopy data. Apart from DL based segmentation methods, a proof of concept is introduced which views cell/nucleus segmentation as a label fusion problem. This method, through limited human interaction, learns to choose the best possible segmentation for each cell/nucleus from a pool of diverse (and possibly faulty) segmentation hypotheses. In summary, this thesis introduces unsupervised denoising and artefact removal methods, as well as semi-supervised segmentation methods, that can be easily deployed to directly and immediately benefit biomedical practitioners in their research.
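    A common ingredient of unsupervised denoisers of this kind is blind-spot training: random pixels are masked and the network must predict them from surrounding context alone, so it cannot learn the identity mapping. Below is a sketch of the masking step only; the thesis's probabilistic noise models and diversity framework are not reproduced here:

```python
import numpy as np

def blindspot_mask(img, frac=0.02, rng=None):
    """Replace a random fraction of pixels with a randomly chosen nearby
    neighbour, returning (masked image, boolean mask). Training a network to
    predict the ORIGINAL values at masked positions from the masked image is
    the core of blind-spot denoising. Note: production implementations
    exclude the (0, 0) offset so a pixel never copies itself; this sketch
    keeps the sampling simple."""
    rng = np.random.default_rng(rng)
    img = np.asarray(img, dtype=float)
    out = img.copy()
    mask = rng.random(img.shape) < frac          # which pixels to blind out
    ys, xs = np.nonzero(mask)
    dy = rng.integers(-1, 2, size=ys.size)       # neighbour offsets in {-1,0,1}
    dx = rng.integers(-1, 2, size=xs.size)
    ny = np.clip(ys + dy, 0, img.shape[0] - 1)
    nx = np.clip(xs + dx, 0, img.shape[1] - 1)
    out[ys, xs] = img[ny, nx]                    # overwrite masked pixels
    return out, mask

noisy = np.random.default_rng(2).random((32, 32))
masked, mask = blindspot_mask(noisy, frac=0.05, rng=0)
```

    The training loss is then computed only at the masked positions, comparing the network's output on `masked` against the values of `noisy` there.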

    Computational Methods for the Analysis of Genomic Data and Biological Processes

    In recent decades, new technologies have made remarkable progress in helping to understand biological systems. Rapid advances in genomic profiling techniques such as microarrays and high-throughput sequencing have brought new opportunities and challenges to the fields of computational biology and bioinformatics. Such genetic sequencing techniques produce large amounts of data whose analysis and cross-integration could provide a complete view of organisms. As a result, it is necessary to develop new techniques and algorithms that analyze these data reliably and efficiently. This Special Issue collected the latest advances in the field of computational methods for the analysis of gene expression data and, in particular, the modeling of biological processes. Here we present eleven works selected for publication in this Special Issue for their interest, quality, and originality.

    Immunohistochemical and electrophysiological investigation of E/I balance alterations in animal models of frontotemporal dementia

    Behavioural variant frontotemporal dementia (bvFTD) is a neurodegenerative disease characterised by changes in behaviour. Apathy, behavioural disinhibition, and stereotyped behaviours are the first symptoms to appear, and all have a basis in reward and pleasure deficits. The ventral striatum and the ventral regions of the globus pallidus are involved in reward and pleasure; it is therefore reasonable to suggest that alterations in these regions may underpin bvFTD. One postulated contributory factor is alteration of the excitation/inhibition (E/I) balance in striatal regions. GABAergic interneurons play a role in E/I balance, acting as local inhibitory brakes; they are therefore a rational target for research into early biological predictors of bvFTD. To investigate this, we will carry out immunohistochemical staining for GABAergic interneurons (parvalbumin and neuronal nitric oxide synthase) in striatal regions of brains taken from CHMP2B mice, a validated animal model of bvFTD. We hypothesise that there will be fewer GABAergic interneurons in the striatum, which may lead to ‘reward-seeking’ behaviour in bvFTD. This will also enable us to investigate any preclinical alterations in interneuron expression within this region. Results will be analysed using a mixed ANOVA and, if significant, post hoc t-tests. The second part of our study will involve extracellular recordings from CHMP2B mouse brains using a multi-electrode array (MEA). This will enable us to determine whether there are alterations in local field potentials (LFPs) in preclinical and symptomatic animals. We will also be able to see whether neuromodulators such as serotonin and dopamine affect LFPs after bath application. We will develop slice preparations that preserve the pathways between the ventral tegmental area and the ventral pallidum (VP), an output structure of the striatum, and between the dorsal raphe nucleus and the VP. Using the MEA, we will stimulate endogenous release of dopamine and serotonin in the slice preparations described above. This will enable us to see whether there are any changes in LFPs after endogenous release of neuromodulators. We hypothesise that there will be an increase in LFPs due to the loss of GABAergic interneurons.

    2023- The Twenty-seventh Annual Symposium of Student Scholars

    The full program book from the Twenty-seventh Annual Symposium of Student Scholars, held on April 18-21, 2023. Includes abstracts from the presentations and posters.