
    A perceptual learning model to discover the hierarchical latent structure of image collections

    Biology has been an unparalleled source of inspiration for the work of researchers in several scientific and engineering fields, including computer vision. The starting point of this thesis is the neurophysiological properties of the human early visual system, in particular the cortical mechanism that mediates learning by exploiting information about stimulus repetition. Repetition has long been considered a fundamental correlate of skill acquisition and memory formation in biological as well as computational learning models. However, recent studies have shown that biological neural networks have different ways of exploiting repetition in forming memory maps. The thesis focuses on a perceptual learning mechanism called repetition suppression, which exploits the temporal distribution of neural activations to drive an efficient neural allocation for a set of stimuli. It explores the neurophysiological hypothesis that repetition suppression serves as an unsupervised perceptual learning mechanism that can drive efficient memory formation by reducing the overall size of the stimulus representation while strengthening the responses of the most selective neurons. This interpretation of repetition differs from its traditional role in computational learning models, where it is mainly used to induce convergence and reach training stability, without exploiting this information to focus the neural representations of the data.
The first part of the thesis introduces a novel computational model with repetition suppression, which forms an unsupervised competitive system termed CoRe, for Competitive Repetition-suppression learning. The model is applied to general problems in the fields of computational intelligence and machine learning. Particular emphasis is placed on validating the model as an effective tool for the unsupervised exploration of biomedical data. In particular, it is shown that the repetition suppression mechanism efficiently addresses the issues of automatically estimating the number of clusters within the data, as well as filtering noise and irrelevant input components in high-dimensional data, e.g. gene expression levels from DNA microarrays. The CoRe model produces a relevance estimate for each covariate, which is useful, for instance, to discover the best discriminating biomarkers. The description of the model includes a theoretical analysis based on Huber's robust statistics, showing that the model is robust to outliers and noise in the data. The convergence properties of the model are also studied. It is shown that, besides its biological underpinning, the CoRe model has useful properties in terms of asymptotic behavior. By exploiting a kernel-based formulation of the CoRe learning error, a theoretically sound motivation is provided for the model's ability to avoid local minima of its loss function. To this end, a necessary and sufficient condition for global error minimization in vector quantization is generalized by extending it to distance metrics in generic Hilbert spaces. This leads to the derivation of a family of kernel-based algorithms that address the local minima issue of unsupervised vector quantization in a principled way. The experimental results show that the algorithm achieves a consistent performance gain compared with state-of-the-art learning vector quantizers, while retaining a lower computational complexity (linear with respect to the dataset size).
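To make the repetition-driven allocation idea concrete, the following is a minimal, purely illustrative winner-take-all quantizer in which each unit's learning rate decays with its own win count, a crude stand-in for repetition suppression. This is an assumption-laden sketch, not the CoRe algorithm described in the thesis.

```python
import numpy as np

def repetition_weighted_vq(X, n_units=5, epochs=20, lr0=0.5, seed=0):
    """Toy winner-take-all vector quantizer.

    Each unit's learning rate decays with its own win count, a crude
    stand-in for repetition-driven neural allocation; NOT the CoRe
    algorithm from the thesis, only an illustrative sketch.
    """
    rng = np.random.default_rng(seed)
    prototypes = X[rng.choice(len(X), n_units, replace=False)].copy()
    wins = np.ones(n_units)  # per-unit repetition counters

    for _ in range(epochs):
        for x in rng.permutation(X):
            d = np.linalg.norm(prototypes - x, axis=1)
            w = int(np.argmin(d))            # winning unit
            lr = lr0 / wins[w]               # repeated winners update less
            prototypes[w] += lr * (x - prototypes[w])
            wins[w] += 1
    return prototypes, wins

# Example: two well-separated Gaussian blobs
X = np.vstack([np.random.randn(100, 2) + 3, np.random.randn(100, 2) - 3])
protos, wins = repetition_weighted_vq(X)
print(protos, wins)
```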
Bridging the gap between the low-level representation of the visual content and the underlying high-level semantics is a major research issue of current interest. The second part of the thesis focuses on this problem by introducing a hierarchical and multi-resolution approach to visual content understanding. On the spatial level, CoRe learning is used to pool local visual patches together by organizing them into perceptually meaningful intermediate structures. On the semantic level, it provides an extension of the probabilistic Latent Semantic Analysis (pLSA) model that allows discovery and organization of the visual topics into a hierarchy of aspects. The proposed hierarchical pLSA model is shown to effectively address the unsupervised discovery of relevant visual classes from pictorial collections, while at the same time learning to segment the image regions containing the discovered classes. Furthermore, by drawing on a recent pLSA-based image annotation system, the hierarchical pLSA model is extended to process and represent multi-modal collections comprising textual and visual data. The results of the experimental evaluation show that the proposed model learns to attach textual labels (available only at the level of the whole image) to the discovered image regions, while increasing the precision/recall performance with respect to the flat pLSA annotation model.
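For context, the flat pLSA model that the thesis extends hierarchically is commonly fitted with EM over a document-word count matrix. The sketch below shows those standard flat-pLSA updates only; the hierarchical and multi-modal variants contributed by the thesis are not reproduced here.

```python
import numpy as np

def plsa(counts, n_topics=4, n_iter=100, seed=0):
    """EM for flat pLSA on a document-word count matrix (docs x words)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics));  p_z_d /= p_z_d.sum(1, keepdims=True)

    for _ in range(n_iter):
        # E-step: responsibilities P(z | d, w), shape (docs, topics, words)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]
        resp = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: re-estimate P(w | z) and P(z | d) from expected counts
        weighted = counts[:, None, :] * resp
        p_w_z = weighted.sum(0); p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(2); p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_w_z, p_z_d
```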

    Machine learning-based automated segmentation with a feedback loop for 3D synchrotron micro-CT

    The development of third-generation synchrotron light sources laid the foundation for studying the 3D structure of opaque samples at micrometre resolution and beyond. This led to the development of X-ray synchrotron micro-computed tomography, which fostered the creation of imaging facilities for studying samples of the most varied kinds, e.g. model organisms, in order to better understand the physiology of complex living systems. The development of modern control systems and robotics made it possible to fully automate X-ray imaging experiments and to calibrate the parameters of the experimental setup during operation. Advances in digital detector systems brought improvements in resolution, dynamic range, sensitivity, and other essential properties. These improvements considerably increased the throughput of the imaging process, but on the other hand the experiments began to generate substantially larger amounts of data, up to tens of terabytes, which subsequently had to be processed manually. These technical advances thus paved the way for more efficient high-throughput experiments studying large numbers of samples and producing datasets of better quality. There is therefore a strong demand in the scientific community for an efficient, automated workflow for X-ray data analysis that can cope with such a data load and deliver valuable insights to domain experts. Existing solutions for such a workflow are not directly applicable to high-throughput experiments, since they were developed for ad-hoc scenarios in the field of medical imaging; they are therefore neither optimized for high-throughput data streams nor able to exploit the hierarchical nature of the samples. The main contribution of the present work is a new automated analysis workflow suited to the efficient processing of heterogeneous X-ray datasets of a hierarchical nature. The developed workflow is based on improved methods for data pre-processing, registration, localization, and segmentation. Every stage of the workflow that involves a training step can be fine-tuned automatically to find the best hyperparameters for the specific dataset. For the analysis of fibre structures in samples, a new, highly parallelizable 3D orientation analysis method was developed, based on a novel concept of emitting rays, which enables a more precise morphological analysis. All developed methods were thoroughly validated on synthetic datasets in order to quantitatively assess their applicability under different imaging conditions. The workflow was shown to be capable of processing a series of datasets of a similar kind. Furthermore, efficient CPU/GPU implementations of the developed workflow and methods are presented and made available to the community as modules for the Python language. The developed automated analysis workflow was successfully applied to micro-CT datasets acquired in high-throughput X-ray experiments in the fields of developmental biology and materials science.
In particular, this workflow was applied to the analysis of the medaka fish datasets, enabling automated segmentation and subsequent morphological analysis of the brain, liver, head nephrons, and heart. In addition, the developed 3D orientation analysis method was used in the morphological analysis of polymer scaffold datasets in order to steer a fabrication process towards desirable properties.
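As a rough, hypothetical illustration of the kind of feedback-driven hyperparameter tuning the abstract describes, one could score candidate smoothing scales of a simple threshold segmentation against a small annotated reference volume. The actual workflow uses learned segmentation models and is distributed as Python modules; everything named below (the Dice feedback score, the mean-based threshold, the sigma grid) is an assumption for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def dice(a, b):
    """Dice overlap between two boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-12)

def tune_smoothing(volume, reference_mask, sigmas=(0.5, 1.0, 2.0, 4.0)):
    """Pick the smoothing scale whose threshold segmentation best matches
    a small annotated reference volume (the 'feedback' signal).

    Hypothetical sketch; not the thesis workflow.
    """
    best = (None, -1.0)
    for sigma in sigmas:
        smoothed = gaussian_filter(volume.astype(float), sigma)
        mask = smoothed > smoothed.mean()   # simple global threshold stand-in
        score = dice(mask, reference_mask)
        if score > best[1]:
            best = (sigma, score)
    return best  # (best sigma, its Dice score)
```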

    Internet Search and Neural Networks: State of the Art (Internetsuche und Neuronale Netze: Stand der Technik)

    Publication of the Wilhelm-Schickard-Institut für Informatik, Universität Tübingen.

    Variational learning of a Dirichlet process of generalized Dirichlet distributions for simultaneous clustering and feature selection

    This paper introduces a novel enhancement of unsupervised feature selection based on generalized Dirichlet (GD) mixture models. Our proposal extends the finite mixture model previously developed in [1] to the infinite case via Dirichlet process mixtures, which can actually be viewed as a purely nonparametric model since the number of mixture components can increase as data are introduced. The infinite assumption is used to avoid problems related to model selection (i.e. determination of the number of clusters) and allows simultaneous separation of the data into similar clusters and selection of relevant features. The resulting model is learned within a principled variational Bayesian framework that we have developed. The experimental results reported for both synthetic data and challenging real-world applications involving image categorization, automatic semantic annotation, and retrieval show the ability of our approach to provide accurate models by distinguishing between relevant and irrelevant features without over- or under-fitting the data.
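To illustrate why a Dirichlet process removes the need to fix the number of clusters in advance, the truncated stick-breaking construction below generates mixture weights whose leftover mass remains available for new components as data arrive. It is a generic, well-known construction, not the paper's variational algorithm over generalized Dirichlet components.

```python
import numpy as np

def stick_breaking_weights(alpha, n_max=50, rng=None):
    """Truncated stick-breaking construction of Dirichlet-process weights:
    pi_k = beta_k * prod_{j<k} (1 - beta_j), with beta_k ~ Beta(1, alpha)."""
    rng = rng or np.random.default_rng(0)
    betas = rng.beta(1.0, alpha, size=n_max)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - betas)[:-1]))
    return betas * remaining

w = stick_breaking_weights(alpha=2.0)
print(w[:10], w.sum())  # weights decay; the mass not yet assigned (1 - sum) supports new clusters
```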

    New Fundamental Technologies in Data Mining

    The progress of data mining technology and its large public popularity establish a need for a comprehensive text on the subject. The book series entitled "Data Mining" addresses this need by presenting in-depth descriptions of novel mining algorithms and many useful applications. In addition to helping readers understand each section deeply, the two books present useful hints and strategies for solving the problems discussed in the following chapters. The contributing authors have highlighted many future research directions that will foster multi-disciplinary collaborations and hence lead to significant development in the field of data mining.

    AIRSPACE PLANNING FOR OPTIMAL CAPACITY, EFFICIENCY, AND SAFETY USING ANALYTICS

    Air Navigation Service Providers (ANSPs) worldwide have been making a considerable effort to develop better methods for planning optimal airspace capacity, efficiency, and safety. These goals require separation and sequencing of aircraft before they depart. Prior approaches have tactically achieved these goals to some extent. However, dealing with increasingly congested airspace and new environmental factors with high levels of uncertainty remains a challenge when a deterministic approach is used. Hence, given the nature of these uncertainties, we take a stochastic approach and propose a suite of analytics models for (1) Flight Time Prediction, (2) Aircraft Trajectory Clustering, (3) Aircraft Trajectory Prediction, and (4) Aircraft Conflict Detection and Resolution, long before aircraft depart. The suite of data-driven models runs on a scalable data management system that continuously processes streaming massive flight data to achieve strategic airspace planning for optimal capacity, efficiency, and safety.

(1) Flight Time Prediction. Unlike other systems that collect and use features only for the arrival airport to build a data-driven model for predicting flight times, we use a richer set of features along the potential route, such as weather parameters and air traffic data, in addition to those particular to the arrival airport. Our feature engineering process generates an extensive set of multidimensional time series data, which goes through time series clustering with Dynamic Time Warping (DTW) to generate a single set of representative features at each time instance. The features are fed into various regression and deep learning models, and the best performing models with the most accurate ETA predictions are selected. Evaluations on an extensive set of real trajectory, weather, and airport data in Europe verify that our prediction system generates more accurate ETAs with far less variance than those of the European ANSP, EUROCONTROL. This translates into more accurately predicted flight arrival times, enabling airlines to make more cost-effective ground resource allocations and ANSPs to make more efficient flight schedules.

(2) Aircraft Trajectory Clustering. The novel divide-cluster-merge (DICLERGE) system clusters aircraft trajectories by dividing them into the three standard major flight phases: climb, en-route, and descent. Trajectory segments in each phase are clustered in isolation and then merged together. Our unique approach also discovers a representative trajectory, the model for the entire trajectory set.

(3) Aircraft Trajectory Prediction. Our approach considers airspace as a 3D grid network, where each grid point is the location of a weather observation. We hypothetically build cubes around these grid points, so the entire airspace can be considered a set of cubes. Each cube is defined by its centroid, the original grid point, and associated weather parameters that remain homogeneous within the cube during a period of time. Then, we align raw trajectories to a set of cube centroids, which are essentially fixed 3D positions independent of the trajectory data. This creates a new form of trajectories made of 4D joint cubes, where each cube is a segment associated not only with spatio-temporal attributes but also with weather parameters. Next, we exploit machine learning techniques to train inference models from historical data and apply a stochastic model, a Hidden Markov Model (HMM), to predict trajectories while taking environmental uncertainties into account.
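Both the flight time prediction step and the observation-generation step described in this abstract rely on DTW-based time series clustering. As a hedged illustration of the core alignment measure only (not the authors' pipeline), a textbook dynamic-programming DTW distance in Python could look like this:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D series."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

print(dtw_distance([0, 1, 2, 3], [0, 0, 1, 2, 2, 3]))  # 0.0: same shape, different pacing
```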
During the process, we apply time series clustering to generate input observations from an excessive set of weather parameters to feed into the Viterbi algorithm. The experiments use a real trajectory dataset with the pertaining weather observations and demonstrate the effectiveness of our approach to the trajectory prediction process for Air Traffic Management.

(4) Aircraft Conflict Detection. We propose a novel data-driven system to address the long-range aircraft conflict detection and resolution (CDR) problem. Given a set of predicted trajectories, the system declares a conflict when the protected zone of an aircraft on its trajectory is infringed upon by another aircraft. The system resolves the conflict by prescribing an alternative solution that is optimized by perturbing at least one of the trajectories involved in the conflict. To achieve this, the system learns from descriptive patterns of historical trajectories and pertinent weather observations and builds a Hidden Markov Model (HMM). Using a variant of the Viterbi algorithm, the system avoids the airspace volume in which the conflict is detected and generates a new optimal trajectory that is conflict-free. The key concept upon which the system is built is the assumption that the airspace is nothing more than a horizontally and vertically concatenated set of spatio-temporal data cubes, where each cube is considered an atomic unit. We evaluate the system using real trajectory datasets with pertinent weather observations from two continents and demonstrate its effectiveness for strategic CDR.

Overall, in this thesis, we develop a suite of analytics models and algorithms to accurately identify current patterns in the massive flight data and use these patterns to predict future behaviors in the airspace. Upon prediction of a non-ideal outcome, we prescribe a solution to plan airspace for optimal capacity, efficiency, and safety.
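Both the trajectory prediction and CDR components decode the most likely hidden-state sequence of an HMM. The sketch below is a textbook Viterbi decoder for a discrete HMM, shown only as a generic reference; the thesis builds its HMM over weather-aligned airspace cubes, which is not reproduced here, and the toy probabilities at the end are invented for the example.

```python
import numpy as np

def viterbi(obs, log_pi, log_A, log_B):
    """Textbook Viterbi decoding for a discrete HMM (all inputs in log space).

    obs    : sequence of observation indices
    log_pi : (S,)   log initial state probabilities
    log_A  : (S, S) log transition probabilities
    log_B  : (S, O) log emission probabilities
    """
    S, T = len(log_pi), len(obs)
    delta = np.empty((T, S))             # best log score ending in each state
    back = np.zeros((T, S), dtype=int)   # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + log_A   # scores[from, to]
        back[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + log_B[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]

# Toy example: 2 hidden states, 2 observation symbols
log_pi = np.log([0.6, 0.4])
log_A = np.log([[0.7, 0.3], [0.4, 0.6]])
log_B = np.log([[0.9, 0.1], [0.2, 0.8]])
print(viterbi([0, 0, 1, 1], log_pi, log_A, log_B))  # prints [0, 0, 1, 1]
```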