
    Temporal-spatial recognizer for multi-label data

    Pattern recognition is an important artificial intelligence task with practical applications in many fields, such as medicine and species distribution modelling. These applications involve overlapping data points, as found in multi-label datasets, so a recognition algorithm is needed that can separate the overlapping points in order to recognize the correct pattern. Existing recognition methods are sensitive to noise and to overlapping points: they fail to recognize a pattern when the positions of the data points shift. Furthermore, they do not incorporate temporal information into the recognition process, which leads to low-quality data clustering. In this study, an improved pattern recognition method based on Hierarchical Temporal Memory (HTM) is proposed to resolve the overlap among data points in multi-label datasets. The imHTM (Improved HTM) method improves two of its components: feature extraction and data clustering. The first improvement is realized as the TS-Layer Neocognitron algorithm, which solves the position-shift problem in the feature extraction phase. The data clustering step has two improvements, TFCM and cFCM (TFCM with a limit-Chebyshev distance metric), which allow overlapping data points occurring in patterns to be separated correctly into the relevant clusters by temporal clustering. Experiments on five datasets were conducted to compare the proposed method (imHTM) against statistical, template and structural pattern recognition methods. The results showed a recognition accuracy of 99%, compared with template matching methods (feature-based and area-based approaches), statistical methods (Principal Component Analysis, Linear Discriminant Analysis, Support Vector Machines and neural networks) and a structural method (the original HTM). The findings indicate that the improved HTM can give optimal pattern recognition accuracy, especially on multi-label datasets.
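    The clustering improvement rests on fuzzy c-means driven by a Chebyshev (L-infinity) distance. The sketch below is a generic fuzzy c-means routine using that distance, intended only to illustrate the idea; it is not the authors' TFCM/cFCM implementation, and the function name, parameters and convergence test are assumptions.

```python
import numpy as np

def fuzzy_c_means_chebyshev(X, n_clusters=3, m=2.0, n_iter=100, eps=1e-6, seed=0):
    """Minimal fuzzy c-means with the Chebyshev (L-infinity) distance.

    X: (n_samples, n_features) array. Returns (centers, membership matrix).
    """
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # Random initial membership matrix, each row summing to 1.
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        Um = U ** m
        # Membership-weighted cluster centers.
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # Chebyshev distance from every point to every center.
        d = np.max(np.abs(X[:, None, :] - centers[None, :, :]), axis=2)
        d = np.fmax(d, 1e-12)  # avoid division by zero
        # Standard FCM membership update.
        inv = d ** (-2.0 / (m - 1.0))
        U_new = inv / inv.sum(axis=1, keepdims=True)
        if np.linalg.norm(U_new - U) < eps:
            U = U_new
            break
        U = U_new
    return centers, U
```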

    Illumination Invariant Deep Learning for Hyperspectral Data

    Motivated by the variability in hyperspectral images due to illumination and the difficulty of acquiring labelled data, this thesis proposes different approaches for learning illumination-invariant feature representations and classification models for hyperspectral data captured outdoors, under natural sunlight. The approaches integrate domain knowledge into learning algorithms and hence do not rely on a priori knowledge of atmospheric parameters, additional sensors, or large amounts of labelled training data. Hyperspectral sensors record rich semantic information from a scene, making them useful for robotics or remote sensing applications where perception systems are used to gain an understanding of the scene. Images recorded by hyperspectral sensors can, however, be affected to varying degrees both by intrinsic factors relating to the sensor itself (keystone, smile, and noise, particularly at the limits of the sensed spectral range) and by extrinsic factors such as the way the scene is illuminated. The appearance of the scene in the image is tied to the incident illumination, which depends on variables such as the position of the sun, the geometry of the surface, and the prevailing atmospheric conditions. Effects like shadows can cause the appearance and spectral characteristics of identical materials to differ significantly. This degrades the performance of high-level algorithms that use hyperspectral data, such as those that perform classification and clustering. If sufficient training data is available, learning algorithms such as neural networks can capture variability in the scene appearance and be trained to compensate for it. Learning algorithms are advantageous for this task because they do not require a priori knowledge of the prevailing atmospheric conditions or data from additional sensors. Labelling hyperspectral data is, however, difficult and time-consuming, so acquiring enough labelled samples for the learning algorithm to adequately capture the scene appearance is challenging. Hence, there is a need for techniques that are invariant to the effects of illumination and do not require large amounts of labelled data. In this thesis, an approach to learning a representation of hyperspectral data that is invariant to the effects of illumination is proposed. This approach combines a physics-based model of the illumination process with an unsupervised deep learning algorithm, and thus requires no labelled data. Datasets that vary both temporally and spatially are used to compare the proposed approach to other similar state-of-the-art techniques. The results show that the learnt representation is more invariant to shadows in the image and to variations in brightness due to changes in the scene topography or the position of the sun in the sky. The results also show that a supervised classifier can predict class labels more accurately and more consistently across time when images are represented using the proposed method. Additionally, this thesis proposes methods to train supervised classification models to be more robust to variations in illumination when only limited amounts of labelled data are available. The transfer of knowledge from well-labelled datasets to poorly labelled datasets for classification is investigated. A method is also proposed for enabling a small number of labelled samples to capture the variability in spectra across the scene. These samples are then used to train a classifier to be robust to the variability in the data caused by variations in illumination. The results show that these approaches make convolutional neural network classifiers more robust and achieve better performance when there is limited labelled training data. A case study is presented in which a pipeline incorporating the methods proposed in this thesis for learning robust feature representations and classification models is applied: a scene is clustered using no labelled data, and the results show that the pipeline groups the data into clusters that are consistent with the spatial distribution of the classes in the scene as determined from ground truth.
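    The abstract does not spell out the physics-based illumination model. Purely as a hedged illustration, the sketch below applies one common outdoor-illumination assumption (observed radiance is roughly reflectance times a mix of direct sunlight and diffuse skylight) to recover an approximate, illumination-normalized reflectance; the function name, the inputs and the shadow_fraction parameter are hypothetical and are not the thesis pipeline.

```python
import numpy as np

def approx_reflectance(radiance, sun_spectrum, sky_spectrum, shadow_fraction=1.0):
    """Rough reflectance recovery under a simple outdoor illumination model.

    Assumes observed radiance ~= reflectance * (f * E_sun + E_sky), where f is
    the fraction of direct sunlight reaching the surface (f -> 0 in shadow).
    All inputs are per-band arrays of the same length.
    """
    illumination = shadow_fraction * sun_spectrum + sky_spectrum
    return radiance / np.clip(illumination, 1e-8, None)

# Usage idea: pixels in full sun use shadow_fraction near 1, shadowed pixels
# near 0, so identical materials map to similar reflectance spectra.
```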

    Rough Sets and Near Sets in Medical Imaging: A Review


    Hematological image analysis for acute lymphoblastic leukemia detection and classification

    Microscopic analysis of peripheral blood smears is a critical step in the detection of leukemia. However, this type of light microscopic assessment is time consuming, inherently subjective, and governed by the hematopathologist's clinical acumen and experience. To circumvent such problems, an efficient computer-aided methodology for quantitative analysis of peripheral blood samples needs to be developed. In this thesis, efforts are therefore made to devise methodologies for automated detection and subclassification of Acute Lymphoblastic Leukemia (ALL) using image processing and machine learning methods. The choice of an appropriate segmentation scheme plays a vital role in the automated disease recognition process. Accordingly, novel schemes have been proposed to segment normal mature lymphocyte and malignant lymphoblast images into their constituent morphological regions. To make the proposed schemes viable from a practical and real-time standpoint, the segmentation problem is addressed in both supervised and unsupervised frameworks. The proposed methods are based on neural networks, feature-space clustering, and Markov random field modeling, where the segmentation problem is formulated as pixel classification, pixel clustering, and pixel labeling respectively. A comprehensive validation analysis is presented to evaluate the performance of the four proposed lymphocyte image segmentation schemes against manual segmentation results provided by a panel of hematopathologists. It is observed that the morphological components of normal and malignant lymphocytes differ significantly. An efficient methodology is proposed to automatically recognize lymphoblasts and detect ALL in peripheral blood samples. Morphological, textural and color features are extracted from the segmented nucleus and cytoplasm regions of the lymphocyte images. An ensemble of classifiers, denoted EOC3 and comprising three classifiers, shows the highest classification accuracy of 94.73% in comparison to its individual members. Subclassification of ALL based on the French–American–British (FAB) and World Health Organization (WHO) criteria is essential for prognosis and treatment planning. Accordingly, two independent methodologies are proposed for automated classification of malignant lymphocyte (lymphoblast) images based on morphology and phenotype. These methods include lymphoblast image segmentation, nucleus and cytoplasm feature extraction, and efficient classification.
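    The reported EOC3 result comes from combining three classifiers. The snippet below shows one conventional way to build such an ensemble with scikit-learn's VotingClassifier; the particular base classifiers, their settings, and the feature matrix X / label vector y are illustrative assumptions, not the configuration used in the thesis.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

# Three base classifiers combined by (soft) majority vote.
ensemble = VotingClassifier(
    estimators=[
        ("svm", SVC(probability=True)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
        ("rf", RandomForestClassifier(n_estimators=200)),
    ],
    voting="soft",
)

# X: per-cell feature vectors (morphological, textural, colour);
# y: lymphocyte vs. lymphoblast labels.
# scores = cross_val_score(ensemble, X, y, cv=5)
```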

    Modeling, Estimation, and Pattern Analysis of Random Texture on 3-D Surfaces

    To recover 3-D structure from a shaded and textured surface image, neither Shape-from-shading nor Shape-from-texture analysis alone is sufficient, because both radiance and texture information coexist within the scene surface. A new 3-D texture model is developed by considering the scene image as the superposition of a smooth shaded image and a random texture image. To describe the random part, orthographic projection is adapted to account for the non-isotropic intensity distribution caused by the slant and tilt of a 3-D textured surface, and the Fractional Differencing Periodic (FDP) model is chosen to describe the random texture, because this model can simultaneously represent the coarseness and the pattern of the 3-D textured surface and is flexible enough to synthesize both long-term and short-term correlation structures of random texture. Since the object is described by a model involving several free parameters whose values are determined directly from its projected image, it is possible to extract 3-D information and the texture pattern directly from the image without any preprocessing, so the cumulative error introduced by each preprocessing step can be minimized. For estimating the parameters, a hybrid method that uses both least squares and maximum likelihood estimates is applied, and both parameter estimation and synthesis are carried out in the frequency domain. Among the texture-pattern features that can be obtained from a single surface image, the fractal scaling parameter plays a major role in classifying and/or segmenting texture patterns tilted and slanted by 3-D rotation, because of its rotational and scaling invariance. Moreover, since the fractal scaling factor represents the coarseness of the surface, each texture pattern has its own fractal scale value; in particular, at the boundary between different textures it takes a relatively higher value than within a single texture. Based on these facts, a new classification method and a segmentation scheme for 3-D rotated texture patterns are developed.
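    The fractal scaling parameter is estimated in the frequency domain. As a rough illustration only, the sketch below estimates a power-law scaling exponent from the slope of the log-log radially averaged power spectrum of an image patch; it is a generic estimator under that power-law assumption, not the FDP-model estimation procedure of the paper.

```python
import numpy as np

def spectral_slope(patch):
    """Estimate a fractal-style scaling exponent from the power-spectrum slope.

    A power-law spectrum P(f) ~ f**(-beta) is a straight line in log-log space;
    beta is related to surface coarseness (larger beta -> smoother texture).
    `patch` is a 2-D grayscale array.
    """
    F = np.fft.fftshift(np.fft.fft2(patch))
    power = np.abs(F) ** 2
    h, w = patch.shape
    y, x = np.indices((h, w))
    r = np.hypot(x - w // 2, y - h // 2).astype(int)
    # Radially average the power spectrum.
    counts = np.bincount(r.ravel())
    sums = np.bincount(r.ravel(), weights=power.ravel())
    radial = sums / np.maximum(counts, 1)
    freqs = np.arange(1, min(h, w) // 2)  # skip the DC component
    slope, _ = np.polyfit(np.log(freqs), np.log(radial[freqs]), 1)
    return -slope
```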

    Advances on Time Series Analysis using Elastic Measures of Similarity

    A sequence is a collection of data instances arranged in a structured manner. When this arrangement is held in the time domain, sequences are instead referred to as time series. As such, each observation in a time series represents an observation drawn from an underlying process, produced at a specific time instant. However, other types of data indexing structures, such as space- or threshold-based arrangements, are possible. Data points that compose a time series are often correlated with each other. To account for this correlation in data mining tasks, time series are usually studied as a whole data object rather than as a collection of independent observations. In this context, techniques for time series analysis aim at analyzing this type of data structure by applying specific approaches developed to leverage intrinsic properties of the time series for a wide range of problems, such as classification, clustering and other tasks alike. The development of monitoring and storage devices has made time series analysis proliferate in numerous application fields, including medicine, economics, manufacturing and telecommunications, among others. Over the years, the community has gathered efforts towards the development of new data-based techniques for time series analysis suited to address the problems and needs of such application fields. In the related literature, such techniques can be divided into three main groups: feature-, model- and distance-based methods. The first group (feature-based) transforms time series into a collection of features, which are then used by conventional learning algorithms to provide solutions to the task under consideration. In contrast, methods belonging to the second group (model-based) assume that each time series is drawn from a generative model, which is then harnessed to elicit knowledge from data. Finally, distance-based techniques operate directly on raw time series. To this end, these methods resort to specially defined measures of distance or similarity for comparing time series, without requiring any further processing. Among them, elastic similarity measures (e.g., dynamic time warping and edit distance) compute the closeness between two sequences by finding the best alignment between them, disregarding differences in time and thus focusing exclusively on shape differences. This Thesis presents several contributions to the field of distance-based techniques for time series analysis, namely: i) a novel multi-dimensional elastic similarity learning method for time series classification; ii) an adaptation of elastic measures to streaming time series scenarios; and iii) the use of distance-based time series analysis to make machine learning methods for image classification robust against adversarial attacks. Throughout the Thesis, each contribution is framed within its related state of the art, explained in detail and empirically evaluated. The obtained results lead to new insights on the application of distance-based time series methods for the considered scenarios, and motivate research directions that highlight the vibrant momentum of this research area.
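    Elastic similarity measures such as dynamic time warping (DTW) score two series by the cost of their best alignment rather than by a point-wise comparison, so shifts and local stretches in time do not inflate the distance. A minimal DTW sketch (plain dynamic programming with an absolute-difference local cost; illustrative, not the thesis implementation) looks like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic time warping distance between two 1-D sequences.

    Finds the minimum-cost alignment, allowing local stretching and
    compression in time, so only shape differences contribute to the distance.
    """
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j],      # insertion
                                 D[i, j - 1],      # deletion
                                 D[i - 1, j - 1])  # match
    return D[n, m]

# Two series with the same shape but shifted in time align almost perfectly:
# dtw_distance([0, 0, 1, 2, 1, 0], [0, 1, 2, 1, 0, 0]) -> small value
```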
