
    Cell Pattern Classification of Indirect Immunofluorescence Images

    Doctor of Philosophy (Ph.D.)

    Learning Multimodal Structures in Computer Vision

    A phenomenon or event can be observed by various kinds of detectors or under different conditions; each such acquisition framework is a modality of the phenomenon. Because the modalities of a multimodal phenomenon are related, a single modality cannot fully describe the event of interest, and the fact that several modalities report on the same event introduces new challenges compared to exploiting each modality separately. We are interested in designing new algorithmic tools for sensor fusion within the signal representation framework of sparse coding, a widely used methodology in signal processing, machine learning, and statistics for representing data. This coding scheme is learned from data and has been shown to represent many modalities, such as natural images, effectively. We consider situations where we are interested not only in the support of the model being sparse, but also in reflecting a priori knowledge about the application at hand. Our goal is to extract a discriminative representation of the multimodal data that makes its essential characteristics easy to recover in a subsequent analysis step, e.g., regression or classification. More precisely, sparse coding represents signals as linear combinations of a small number of bases from a dictionary. The idea is to learn a dictionary that encodes the intrinsic properties of the multimodal data in a decomposition coefficient vector with maximal discriminatory power. We design a multimodal representation framework that learns discriminative feature representations by fully exploiting both the modality-shared information (the information shared by the various modalities) and the modality-specific information (the information content of each modality individually). In addition, it automatically learns the weights of the various feature components in a data-driven scheme. In other words, the physical interpretation of our learning framework is to fully exploit the correlated characteristics of the available modalities while leveraging the modality-specific character of each modality, adapting their corresponding weights for different parts of the feature in recognition tasks.
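    The abstract's core device, sparse coding, represents a signal as a linear combination of a few atoms from a dictionary. As an illustration only (the thesis's multimodal, discriminative formulation is not reproduced here), a minimal single-modality sketch of the underlying optimisation, solved with ISTA, could look as follows; the dictionary D, the penalty weight lam, and the toy signal are assumptions of the example.

```python
import numpy as np

def sparse_code_ista(x, D, lam=0.1, n_iter=200):
    """Minimal sketch of sparse coding for one signal x with a fixed
    dictionary D: minimise 0.5*||x - D a||^2 + lam*||a||_1 via ISTA.
    Illustrative only; the thesis's multimodal, discriminative
    dictionary-learning objective is more elaborate."""
    a = np.zeros(D.shape[1])
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the data-fit gradient
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)               # gradient of the quadratic term
        z = a - grad / L                       # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return a

# Toy usage with a random unit-norm dictionary (assumption for the example).
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128))
D /= np.linalg.norm(D, axis=0)
x = D[:, [3, 17, 42]] @ np.array([1.0, -0.5, 2.0])  # signal built from 3 atoms
a = sparse_code_ista(x, D)
print("non-zeros in code:", np.sum(np.abs(a) > 1e-3))
```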

    A Survey on Deep Learning in Medical Image Analysis

    Deep learning algorithms, in particular convolutional networks, have rapidly become a methodology of choice for analyzing medical images. This paper reviews the major deep learning concepts pertinent to medical image analysis and summarizes over 300 contributions to the field, most of which appeared in the last year. We survey the use of deep learning for image classification, object detection, segmentation, registration, and other tasks and provide concise overviews of studies per application area. Open challenges and directions for future research are discussed.

    Temporal-spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution modelling. Such applications involve overlapping data points, which appear in multi-label datasets. Hence, there is a need for a recognition algorithm that can separate the overlapping data points in order to recognize the correct pattern. Existing recognition methods are sensitive to noise and overlapping points, as they cannot recognize a pattern when there is a shift in the position of the data points. Furthermore, these methods do not incorporate temporal information in the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to handle overlapping data points in multi-label datasets. The imHTM (Improved HTM) method improves two of its components: feature extraction and data clustering. The first improvement is a TS-Layer Neocognitron algorithm, which solves the shift-in-position problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow overlapped data points occurring in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template, and structural pattern recognition methods. The results showed a recognition accuracy of 99% compared with template matching methods (feature-based and area-based approaches), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines, and Neural Networks), and a structural method (the original HTM). The findings indicate that the improved HTM can deliver optimum pattern recognition accuracy, especially on multi-label datasets.
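    The abstract names TFCM and cFCM (fuzzy clustering with a limit-Chebyshev distance) but does not spell out the algorithms. As a hedged stand-in, the sketch below shows standard fuzzy c-means with a Chebyshev (L-infinity) distance, purely to illustrate where the distance metric enters the membership and centre updates; the temporal weighting of TFCM is not reproduced, and all parameter values are assumptions.

```python
import numpy as np

def fuzzy_c_means_chebyshev(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Minimal sketch of fuzzy c-means using the Chebyshev (L-infinity)
    distance, as a stand-in for the cFCM step described in the thesis.
    The temporal component of TFCM is not reproduced here."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix (rows: points, columns: clusters).
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        # Cluster centres as membership-weighted means.
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        # Chebyshev distance of each point to each centre.
        d = np.abs(X[:, None, :] - centres[None, :, :]).max(axis=2)
        d = np.maximum(d, 1e-12)                 # avoid division by zero
        # Standard FCM membership update.
        ratio = d[:, :, None] / d[:, None, :]    # shape (n, c, c)
        U = 1.0 / (ratio ** (2.0 / (m - 1.0))).sum(axis=2)
    return centres, U

# Toy usage on two partially overlapping blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
centres, U = fuzzy_c_means_chebyshev(X, n_clusters=2)
print(centres)
```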

    Scaling Multidimensional Inference for Big Structured Data

    In information technology, big data is a collection of data sets so large and complex that it becomes difficult to process them using traditional data processing applications [151]. In a world of increasing sensor modalities, cheaper storage, and more data-oriented questions, we are quickly passing the limits of tractable computation with traditional statistical analysis methods. Methods that often show great results on simple data have difficulty processing complicated multidimensional data. Accuracy alone can no longer justify unwarranted memory use and computational complexity; improving the scaling properties of these methods for multidimensional data is the only way to keep them relevant. In this work we explore methods for improving the scaling properties of parametric and nonparametric models. Namely, we focus on the structure of the data to lower the complexity of a specific family of problems. The two types of structure considered in this work are distributed optimization with separable constraints (Chapters 2-3) and scaling Gaussian processes for multidimensional lattice input (Chapters 4-5). By improving the scaling of these methods, we can expand their use to a wide range of applications that were previously intractable and open the door to new research questions.
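    The abstract does not state which lattice structure is exploited in Chapters 4-5; one standard device for Gaussian processes on a full multidimensional grid is a Kronecker-factored covariance, used below purely as an illustrative assumption. With a product kernel on a grid, the covariance is K1 ⊗ K2, and per-dimension eigendecompositions let one apply (K1 ⊗ K2 + σ²I)⁻¹ without ever forming the full matrix.

```python
import numpy as np

def rbf(x, lengthscale=1.0):
    """Squared-exponential kernel matrix on a 1-D set of inputs."""
    d = x[:, None] - x[None, :]
    return np.exp(-0.5 * (d / lengthscale) ** 2)

def kron_gp_solve(K1, K2, y, noise=0.1):
    """Solve (K1 kron K2 + noise*I) alpha = y using only the
    eigendecompositions of the per-dimension kernels. Illustrative
    sketch of the Kronecker trick for GPs on a grid; the thesis's
    actual algorithms may differ."""
    w1, Q1 = np.linalg.eigh(K1)
    w2, Q2 = np.linalg.eigh(K2)
    n1, n2 = K1.shape[0], K2.shape[0]
    Y = y.reshape(n1, n2)
    Yt = Q1.T @ Y @ Q2                        # rotate into the joint eigenbasis
    Yt /= (w1[:, None] * w2[None, :] + noise) # divide by Kronecker eigenvalues + noise
    return (Q1 @ Yt @ Q2.T).ravel()           # rotate back

# Toy usage on a 30 x 40 grid (1200 points) without forming the 1200^2 matrix.
x1, x2 = np.linspace(0, 1, 30), np.linspace(0, 1, 40)
y = np.random.default_rng(2).standard_normal(30 * 40)
alpha = kron_gp_solve(rbf(x1), rbf(x2), y)
print(alpha.shape)
```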

    Meson spectroscopy at non-zero temperature using lattice QCD

    This thesis explores two main topics: the effects of the temperature on several Quantum Chromodynamics mesonic observables, with a concrete focus on the temperature dependence of the mesonic mass spectrum, and numerical spectral reconstruction of lattice correlation functions employing deep neural networks. In the first two chapters, a brief introduction to standard lattice Quantum Chromodynamics and non-zero temperature field theory is provided. Using the tools presented in the introductory chapters, a complete spectroscopy analysis of the temperature dependence of several mesonic ground state masses is developed. From this study, novel results in the restoration of chiral symmetry as a function of the temperature are obtained by studying the degree of degeneracy between the ρ(770) and a1(1260) states. Additionally, a complete study of the thermal effects affecting the mesonic D(s)-sector below the pseudocritical temperature of the system is provided. A self-contained chapter discussing the pion velocity in the medium is also included in the document. The pion velocity is estimated as a function of the temperature using non-zero temperature lattice Quantum Chromodynamics. In addition, after providing a detailed introduction to the field of neural networks, their application to numerical spectral reconstruction is studied. A simple implementation in which deep neural networks are applied to numerical spectral reconstruction is tested in order to explore its limits and applicability
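    The abstract does not detail how the ground-state masses are extracted; a standard first step in lattice spectroscopy, shown here only as an illustration, is the effective mass of a Euclidean correlator, m_eff(t) = ln(C(t)/C(t+1)), which plateaus at the ground-state mass once excited states have decayed.

```python
import numpy as np

def effective_mass(corr):
    """Effective mass m_eff(t) = log(C(t) / C(t+1)) for a Euclidean
    correlator C(t) ~ A * exp(-m t) at large t. A plateau in m_eff
    signals ground-state dominance. Illustrative sketch only; finite-
    temperature analyses typically use a cosh form on a periodic lattice."""
    corr = np.asarray(corr, dtype=float)
    return np.log(corr[:-1] / corr[1:])

# Toy correlator: ground state m = 0.25 plus a small excited-state term.
t = np.arange(32)
corr = 1.0 * np.exp(-0.25 * t) + 0.2 * np.exp(-0.60 * t)
m_eff = effective_mass(corr)
print(m_eff[-5:])  # approaches 0.25 once the excited state has decayed
```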

    Quantitative Techniques for PET/CT: A Clinical Assessment of the Impact of PSF and TOF

    Tomographic reconstruction has been a challenge for many imaging applications, and it is particularly problematic for count-limited modalities such as Positron Emission Tomography (PET). Recent advances in PET, including the incorporation of time-of-flight (TOF) information and modeling the variation of the point response across the imaging field (PSF), have resulted in significant improvements in image quality. While the effects of these techniques have been characterized with simulations and mathematical modeling, there has been relatively little work investigating the potential impact of such methods in the clinical setting. The objective of this work is to quantify these techniques in the context of realistic lesion detection and localization tasks for a medical environment. Mathematical observers are used first to identify optimal reconstruction parameters and then to evaluate the performance of the reconstructions. The effect of these techniques on the reconstruction algorithms is then evaluated for various patient sizes and imaging conditions. The findings for the mathematical observers are compared to, and validated by, the performance of three experienced nuclear medicine physicians completing the same task.
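    The abstract says mathematical observers were used but not which ones; a common choice for lesion-detection studies of this kind is the channelized Hotelling observer (CHO), sketched below under that assumption with synthetic channel data.

```python
import numpy as np

def cho_snr(v_signal, v_background):
    """Minimal sketch of a channelized Hotelling observer (CHO).
    v_signal, v_background: arrays of shape (n_images, n_channels)
    holding channelised image data for signal-present and signal-absent
    cases. This is an assumed, generic observer, not necessarily the
    one used in the thesis."""
    mu_s, mu_b = v_signal.mean(axis=0), v_background.mean(axis=0)
    # Average intraclass covariance of the channel outputs.
    S = 0.5 * (np.cov(v_signal, rowvar=False) + np.cov(v_background, rowvar=False))
    w = np.linalg.solve(S, mu_s - mu_b)          # Hotelling template
    t_s, t_b = v_signal @ w, v_background @ w    # observer test statistics
    return (t_s.mean() - t_b.mean()) / np.sqrt(0.5 * (t_s.var(ddof=1) + t_b.var(ddof=1)))

# Toy usage with synthetic 10-channel data; a small mean shift plays the lesion.
rng = np.random.default_rng(3)
v_b = rng.normal(0.0, 1.0, (200, 10))
v_s = rng.normal(0.3, 1.0, (200, 10))
print("observer SNR:", cho_snr(v_s, v_b))
```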