
    Automating Active Learning for Gaussian Processes

    In many problems in science, technology, and engineering, unlabeled data is abundant, but acquiring labeled observations is expensive: it requires a human annotator, a costly laboratory experiment, or a time-consuming computer simulation. Active learning is a machine learning paradigm designed to minimize the cost of obtaining labeled data by carefully selecting which new data should be gathered next. However, substantial machine learning expertise is often required to apply these techniques effectively in their current form. In this dissertation, we propose solutions that further automate active learning. Our core contributions are active learning algorithms that are easy for non-experts to use but deliver results competitive with or better than human-expert solutions. We begin by introducing a novel active search algorithm that automatically and dynamically balances exploration against exploitation, without relying on a parameter to control this tradeoff. We also provide a theoretical investigation of the hardness of this problem, proving that no polynomial-time policy can achieve a constant-factor approximation ratio with respect to the expected utility of the optimal policy. Next, we introduce a novel information-theoretic approach to active model selection, based on maximizing the mutual information between the output variable and the model class. This is the first active model selection approach that does not require updating each model for every candidate point. Using this method, we developed an automated audiometry test for rapid screening of noise-induced hearing loss, a widespread disability that is preventable if diagnosed early. We then introduce a model selection algorithm for fixed-size datasets, called Bayesian optimization for model selection (BOMS). BOMS performs Bayesian optimization in model space, treating the model evidence as a function to be maximized, and it can find a model that explains the dataset well without any human assistance. Finally, we extend BOMS to active learning, creating a fully automatic active learning framework, and we apply this framework to Bayesian optimization to obtain a sample-efficient automated system for black-box optimization. Crucially, we account for uncertainty in the choice of model: our method maintains multiple, carefully selected models to represent its current belief about the latent objective function. Our algorithms are completely general and can be extended to any class of probabilistic models; in this dissertation, however, we mainly use the powerful class of Gaussian process models to perform inference. Extensive experimental evidence demonstrates that all proposed algorithms outperform previously developed solutions to these problems.
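    The dissertation's own algorithms (parameter-free active search, information-theoretic active model selection, and BOMS) are not reproduced here, but the sketch below illustrates the basic loop they automate: a Gaussian process is refit as labels arrive, and the next query is chosen by a simple uncertainty-sampling rule. It is a minimal illustration under assumed settings (the latent function f, the pool X_pool, the scikit-learn model, and the labeling budget are all hypothetical), not the proposed methods.

    # Minimal sketch, not the dissertation's algorithms: uncertainty-sampling active
    # learning with a Gaussian process, refitting after each label and querying the
    # unlabeled point with the largest predictive standard deviation.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)

    def f(x):                                   # hypothetical stand-in for a costly labeling process
        return np.sin(3 * x) + 0.1 * rng.standard_normal(x.shape)

    X_pool = np.linspace(0.0, 2.0 * np.pi, 200).reshape(-1, 1)      # abundant unlabeled pool
    y_pool = f(X_pool).ravel()                                      # labels, revealed only when queried
    labeled = list(rng.choice(len(X_pool), size=3, replace=False))  # small initial labeled set

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)

    for _ in range(10):                         # labeling budget of 10 queries
        gp.fit(X_pool[labeled], y_pool[labeled])
        _, std = gp.predict(X_pool, return_std=True)
        std[labeled] = -np.inf                  # never re-query an already-labeled point
        labeled.append(int(np.argmax(std)))     # query the most uncertain point next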

    High Dimensional Separable Representations for Statistical Estimation and Controlled Sensing.

    This thesis makes contributions to a fundamental set of high dimensional problems in the following areas: (1) performance bounds for high dimensional estimation of structured Kronecker product covariance matrices, (2) optimal query design for a centralized collaborative controlled sensing system used for target localization, and (3) global convergence theory for decentralized controlled sensing systems. Separable approximations are effective dimensionality reduction techniques for high dimensional problems. In multiple-modality and spatio-temporal signal processing, separable models for the underlying covariance are exploited for improved estimation accuracy and reduced computational complexity. In query-based controlled sensing, estimation performance is greatly improved at the expense of careful query design. Multi-agent controlled sensing systems for target localization consist of a set of agents that collaborate to estimate the location of an unknown target. In the centralized setting, with a large number of agents and/or high-dimensional targets, separable representations of the fusion center's query policies are exploited to maintain tractability. For large-scale sensor networks, decentralized estimation methods are of primary interest, under which agents obtain new noisy information as a function of their current belief and exchange local beliefs with their neighbors. Here, separable representations of the temporally evolving information state are exploited to improve robustness and scalability. The results improve upon the current state of the art.

    PhD, Electrical Engineering: Systems
    University of Michigan, Horace H. Rackham School of Graduate Studies
    http://deepblue.lib.umich.edu/bitstream/2027.42/107110/1/ttsili_1.pd
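    As a concrete illustration of the separable (Kronecker product) covariance idea discussed above, the sketch below implements a standard flip-flop estimator that alternates maximum-likelihood updates of the row and column covariance factors of zero-mean matrix-variate data. It is a generic textbook-style routine under assumed settings (the dimensions, sample size, and simulated data are hypothetical), not the thesis's estimators, performance bounds, or controlled-sensing policies.

    import numpy as np

    def flip_flop(X, n_iter=25):
        """Alternating updates for a separable covariance kron(A, B): X has shape
        (n, p, q), each slice a p-by-q matrix with row covariance A, column covariance B."""
        n, p, q = X.shape
        A, B = np.eye(p), np.eye(q)
        for _ in range(n_iter):
            B_inv = np.linalg.inv(B)
            A = sum(Xi @ B_inv @ Xi.T for Xi in X) / (n * q)   # row-covariance update
            A_inv = np.linalg.inv(A)
            B = sum(Xi.T @ A_inv @ Xi for Xi in X) / (n * p)   # column-covariance update
        scale = np.trace(A) / p          # (A, B) are identified only up to scale;
        return A / scale, B * scale      # rescaling leaves kron(A, B) unchanged

    # Usage: simulate matrix-variate data with a known separable covariance and recover it.
    rng = np.random.default_rng(1)
    p, q, n = 4, 3, 500
    L_A = rng.standard_normal((p, p)); A_true = L_A @ L_A.T + p * np.eye(p)
    L_B = rng.standard_normal((q, q)); B_true = L_B @ L_B.T + q * np.eye(q)
    X = np.einsum('ij,njk,lk->nil', np.linalg.cholesky(A_true),
                  rng.standard_normal((n, p, q)), np.linalg.cholesky(B_true))
    A_hat, B_hat = flip_flop(X)          # approximately (c * A_true, B_true / c) for some c > 0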