1,007 research outputs found

    Robust Deep Learning in the Open World with Lifelong Learning and Representation Learning

    Full text link
    Deep neural networks have shown superior performance in many learning problems by learning hierarchical latent representations from large amounts of labeled data. However, the success of deep learning methods rests on the closed-world assumption: no instances of new classes appear at test time. In contrast, our world is open and dynamic, so the closed-world assumption may not hold in many real applications. In other words, deep learning-based agents are not guaranteed to work in the open world, where instances of unknown and unseen classes are pervasive. In this dissertation, we explore lifelong learning and representation learning to generalize deep learning methods to the open world. Lifelong learning involves identifying novel classes and incrementally learning them without training from scratch, and representation learning involves being robust to data distribution shifts. Specifically, we propose 1) hierarchical novelty detection for detecting and identifying novel classes, 2) continual learning with unlabeled data to overcome catastrophic forgetting when learning novel classes, 3) network randomization for learning robust representations across visual domain shifts, and 4) domain-agnostic contrastive representation learning, which is robust to data distribution shifts. The first part of this dissertation studies a cycle of lifelong learning, divided into two steps: first, we propose a new novelty detection and classification framework, termed hierarchical novelty detection, for detecting and identifying novel classes. Then, we show that unlabeled data, easily obtainable in the open world, are useful for avoiding forgetting of previously learned classes when learning novel ones, and we propose a new knowledge distillation method and a confidence-based sampling method to leverage the unlabeled data effectively. The second part studies robust representation learning: first, we present a network randomization method to learn a representation invariant across visual changes, which is particularly effective in deep reinforcement learning. Then, we propose a domain-agnostic robust representation learning method that introduces vicinal risk minimization into contrastive representation learning, consistently improving representation quality and transferability across data distribution shifts.
    Ph.D., Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/162981/1/kibok_1.pd
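    The continual-learning step above hinges on distilling knowledge from a frozen copy of the previous model into the current one over unlabeled data. The following is a minimal sketch of that general idea in PyTorch, under assumptions: the function name, the temperature value, and the KL-based formulation are illustrative choices, not the dissertation's actual method.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between softened teacher and student predictions.

    Penalizing drift from the previous (teacher) model's soft outputs on
    unlabeled data is one standard way to mitigate catastrophic forgetting.
    """
    log_p_student = F.log_softmax(student_logits / temperature, dim=1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * temperature**2
```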

    Principal component-based image segmentation: a new approach to outline in vitro cell colonies

    Get PDF
    The in vitro clonogenic assay is a technique to study the ability of a cell to form a colony in a culture dish. By optical imaging, dishes with stained colonies can be scanned and assessed digitally. Identification, segmentation, and counting of stained colonies play a vital part in high-throughput screening and quantitative assessment of biological assays. Image processing of such scanned assays can be affected by acquisition artifacts such as background noise and spatially varying illumination, as well as by contaminants in the suspension medium. Although existing approaches tackle these issues, segmentation quality requires further improvement, particularly on noisy and low-contrast images. In this work, we present an objective and versatile machine learning procedure to address these issues by characterizing, extracting, and segmenting the colonies of interest using principal component analysis, k-means clustering, and a modified watershed segmentation algorithm. The intention is to automatically identify visible colonies through spatial texture assessment and accordingly discriminate them from background in preparation for subsequent segmentation. The proposed segmentation algorithm yielded quality similar to manual counting by human observers. High F1 scores (>0.9) and low root-mean-square errors (around 14%) underlined good agreement with ground truth data. Moreover, it outperformed a recent state-of-the-art method. The methodology will be an important tool in future cancer research applications.
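    A rough sketch of the pipeline the abstract names, assuming scikit-learn and scikit-image: local texture patches are compressed with PCA, clustered with k-means into colony versus background, and touching colonies are split with a distance-transform watershed. The patch size, component count, and thresholds are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np
from scipy import ndimage as ndi
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from skimage.feature import peak_local_max
from skimage.segmentation import watershed
from skimage.util import view_as_windows

def segment_colonies(gray, patch=7, n_clusters=2):
    """gray: 2-D float image. Returns a labeled image, one label per colony."""
    # Describe each pixel's local texture with an overlapping patch, then
    # compress the patch features with PCA.
    windows = view_as_windows(gray, (patch, patch))
    h, w = windows.shape[:2]
    feats = PCA(n_components=4).fit_transform(windows.reshape(-1, patch * patch))
    # Cluster pixels into colony vs. background by texture.
    mask = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(feats).reshape(h, w)
    # Assume the darker (stained) cluster is the foreground.
    crop = gray[:h, :w]
    fg = mask == np.argmin([crop[mask == k].mean() for k in range(n_clusters)])
    # Watershed on the distance transform separates touching colonies.
    dist = ndi.distance_transform_edt(fg)
    peaks = peak_local_max(dist, min_distance=5, labels=fg.astype(int))
    markers = np.zeros(dist.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-dist, markers, mask=fg)
```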

    Automatic taxonomy evaluation

    Full text link
    This thesis would not have been possible without the generous support of IATA. Taxonomies are an essential knowledge representation and play a central role in classification and numerous knowledge-rich applications, yet quantitative taxonomy evaluation remains overlooked and leaves much to be desired. While studies focus on automatic taxonomy construction (ATC) for extracting meaningful structures and semantics from large corpora, their evaluation is usually manual, subject to bias, and of low reproducibility. Companies wishing to improve their domain-focused taxonomies also lack ground truths: a well-optimized reference taxonomy to compare against. As a result, manual taxonomy evaluation requires substantial labour and expert knowledge. We argue in this thesis that automatic taxonomy evaluation (ATE), performed reproducibly, is just as important as automatic taxonomy construction. We propose two novel evaluation methods that produce less biased scores: a supervised classification model for taxonomies extracted from labelled corpora, and an unsupervised language model that serves as a knowledge source for evaluating hypernymy relations. We show that our evaluation proxies correlate well with human judgments and that language models can imitate human experts on knowledge-rich tasks.
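    One way to realize the language-model-as-knowledge-source idea, sketched here under assumptions (the Hearst-style template, the choice of GPT-2, and the mean-over-edges aggregation are illustrative, not the thesis's actual setup), is to verbalize each taxonomy edge as a sentence and use the model's negative log-likelihood as an inverse plausibility score:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def edge_score(hyponym, hypernym):
    """Score a hypernymy edge by the LM's likelihood of its verbalization."""
    text = f"A {hyponym} is a kind of {hypernym}."
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss  # mean token negative log-likelihood
    return -loss.item()  # higher = more plausible edge

# A whole taxonomy could then be scored as, e.g., the mean over its edges:
# score = sum(edge_score(h, p) for h, p in edges) / len(edges)
```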

    Probabilistic procrustean models for shape recognition with an application to robotic grasping

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2008. Includes bibliographical references (p. 92-98). Robot manipulators largely rely on complete knowledge of object geometry in order to plan their motion and compute successful grasps. If an object is fully in view, the object geometry can be inferred from sensor data and a grasp computed directly. If the object is occluded by other entities in the scene, manipulations based on the visible part of the object may fail; to compensate, object recognition is often used to identify the location of the object and compute the grasp from a prior model. However, new instances of a known class of objects may vary from the prior model, and known objects may appear in novel configurations if they are not perfectly rigid. As a result, manipulation can pose a substantial modeling challenge when objects are not fully in view. In this thesis, we will attempt to model the shapes of objects in a way that is robust to both deformations and occlusions. In addition, we will develop a model that allows us to recover the hidden parts of occluded objects (shape completion), and which maintains information about the object boundary for use in robotic grasp planning. Our approach will be data-driven and generative, and we will base our probabilistic models on Kendall's Procrustean theory of shape.
    by Jared Marshall Glover. S.M.
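    The deterministic core of Kendall's Procrustean shape theory is alignment up to translation, scale, and rotation; the thesis builds probabilistic models on top of it. A minimal sketch of that core, assuming corresponding 2-D landmarks (the function name and the SVD-based solution are standard, but the thesis's actual model is richer):

```python
import numpy as np

def procrustes_distance(X, Y):
    """X, Y: (n_points, 2) arrays of corresponding landmarks."""
    # Remove translation and scale (pre-shape normalization).
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    X = X / np.linalg.norm(X)
    Y = Y / np.linalg.norm(Y)
    # Optimal rotation from the SVD of the cross-covariance (orthogonal
    # Procrustes). Note this allows reflections; a full treatment would
    # constrain det(R) = +1.
    U, _, Vt = np.linalg.svd(X.T @ Y)
    R = U @ Vt
    # Residual misfit after alignment is the shape distance.
    return np.linalg.norm(X @ R - Y)
```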

    Non-parametric modeling in non-intrusive load monitoring

    Get PDF
    Non-intrusive Load Monitoring (NILM) is an approach to the increasingly important task of residential energy analytics. Transparency of energy resources and consumption habits presents opportunities and benefits at all ends of the energy supply chain, including the end user. At present, there is no feasible infrastructure available to monitor individual appliances at a large scale. The goal of NILM is to provide appliance monitoring using only the available aggregate data, side-stepping the need for expensive and intrusive monitoring equipment. The present work showcases two self-contained, fully unsupervised NILM solutions: the first featuring non-parametric mixture models, and the second featuring non-parametric factorial Hidden Markov Models with explicit duration distributions. The present implementation makes use of traditional and novel constraints during inference, showing marked improvement in disaggregation accuracy with very little effect on computational cost, relative to the motivating work. To constitute a complete unsupervised solution, labels are applied to the inferred components using a ResNet-based deep learning architecture. Although this preliminary approach to labelling proves less than satisfactory, it is well-founded and several opportunities for improvement are discussed. Both methods, along with the labelling network, make use of block-filtered data: a steady-state representation that removes transient behaviour and signal noise. A novel filter to achieve this steady-state representation that is both fast and reliable is developed and discussed at length. Finally, an approach to monitor the aggregate for novel events during deployment is developed under the framework of Bayesian surprise. The same non-parametric modelling can be leveraged to examine how the predictive and transitional distributions change given new windows of observations. This framework is also shown to have potential elsewhere, such as in regularizing models against over-fitting, which is an important problem in existing supervised NILM.
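    To make the "block-filtered, steady-state" idea concrete, here is a hypothetical sketch of one simple way to reduce an aggregate power signal to piecewise-constant blocks; the thesis develops its own novel filter, so the merging rule, tolerance, and minimum length below are purely illustrative assumptions.

```python
import numpy as np

def block_filter(power, tol=15.0, min_len=3):
    """Return (start_index, mean_power) for each detected steady block.

    Consecutive samples are merged into a block while each new sample stays
    within `tol` watts of the running block mean; short blocks (transients)
    are discarded.
    """
    power = np.asarray(power, dtype=float)
    blocks, start = [], 0
    for i in range(1, len(power) + 1):
        seg = power[start:i]
        # Close the block at the end of the signal or when the next sample
        # deviates too far from the current block's mean.
        if i == len(power) or abs(power[i] - seg.mean()) > tol:
            if len(seg) >= min_len:
                blocks.append((start, float(seg.mean())))
            start = i
    return blocks
```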

    Robust computational intelligence techniques for visual information processing

    Get PDF
    This Ph.D. thesis is about image processing by computational intelligence techniques. First, a general overview is given, describing the motivation, the hypothesis, the objectives, and the methodology employed; the use and analysis of different mathematical norms is our goal. After that, the state of the art on the applications of the image processing proposals is presented. In addition, the fundamentals of the image modalities, with particular attention to magnetic resonance, and the learning techniques used in this research, mainly based on neural networks, are summarized. Finally, the mathematical framework on which this work is based, p-norms, is defined. Three parts on image processing techniques follow. The first part collects the developments on image segmentation. Two of them are applications for video surveillance tasks and model the background of a scenario using a specific camera. The other work is centred on the medical field, addressing the goal of segmenting diabetic wounds in a very heterogeneous dataset. The second part is focused on the optimization and implementation of new models for curve and surface fitting in two and three dimensions, respectively. The first work presents a parabola fitting algorithm based on measuring the distances of the interior and exterior points to the focus and the directrix. The second work moves to the ellipse and ensembles the information of multiple fitting methods. Last, the ellipsoid problem is addressed in a way similar to the parabola. The third part is exclusively dedicated to the super-resolution of magnetic resonance images. In one of these works, an algorithm based on the random shifting technique is developed; in addition, noise removal and resolution enhancement are studied simultaneously. To end, the cost function of deep networks is modified with different combinations of norms in order to improve their training. Finally, the general conclusions of the research are presented and discussed, as well as possible future research lines that can build on the results obtained in this Ph.D. thesis.
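    The focus-directrix fitting idea mentioned above rests on the defining property of a parabola: every point on it is equidistant from the focus and the directrix, so a fit can minimize the difference of those two distances over the data. A hedged sketch of that formulation, assuming SciPy's least-squares solver; the parameterization, initial guess, and function names are illustrative choices, not the thesis's algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

def fit_parabola(points):
    """points: (n, 2) array of (x, y) samples near a parabola.

    Parameters: focus (fx, fy) and directrix x*cos(t) + y*sin(t) = d.
    A point lies on the parabola exactly when its distance to the focus
    equals its distance to the directrix, so residuals are that difference.
    """
    def residuals(params):
        fx, fy, t, d = params
        to_focus = np.hypot(points[:, 0] - fx, points[:, 1] - fy)
        to_directrix = np.abs(points[:, 0] * np.cos(t)
                              + points[:, 1] * np.sin(t) - d)
        return to_focus - to_directrix

    # Crude initial guess: focus at the centroid, horizontal directrix below.
    x0 = [points[:, 0].mean(), points[:, 1].mean(),
          np.pi / 2, points[:, 1].min() - 1.0]
    return least_squares(residuals, x0).x
```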