19,603 research outputs found

    Compound sequential change-point detection in parallel data streams

    Get PDF
    We consider sequential change-point detection in parallel data streams, where each stream has its own change point. Once a change is detected in a data stream, this stream is deactivated permanently. The goal is to maximize the normal operation of the pre-change streams, while controlling the proportion of post-change streams among the active streams at all time points. Taking a Bayesian formulation, we develop a compound decision framework for this problem. A procedure is proposed that is uniformly optimal among all sequential procedures which control the expected proportion of post-change streams at all time points. We also investigate the asymptotic behavior of the proposed method when the number of data streams grows large. Numerical examples are provided to illustrate the use and performance of the proposed method.
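    The abstract does not spell out the procedure, so the following is only a minimal sketch of the underlying control idea under assumed model choices (a geometric change-time prior and known Gaussian pre-/post-change densities): update each active stream's posterior probability of having changed, and permanently deactivate the most suspicious streams whenever the estimated expected proportion of post-change streams among the active ones would exceed a target level alpha. All function names, parameters, and distributional choices are illustrative; this is not the paper's uniformly optimal procedure.

```python
import numpy as np
from scipy.stats import norm

def update_posterior(p, x, rho=0.01, mu0=0.0, mu1=1.0, sigma=1.0):
    """One-step Bayesian update of P(change has occurred) for a single stream.

    Assumes a geometric(rho) prior on the change time and known Gaussian
    pre-/post-change densities (illustrative choices, not from the paper).
    """
    prior = p + (1.0 - p) * rho          # prob. of being post-change before seeing x
    f0 = norm.pdf(x, mu0, sigma)         # pre-change likelihood
    f1 = norm.pdf(x, mu1, sigma)         # post-change likelihood
    return prior * f1 / (prior * f1 + (1.0 - prior) * f0)

def monitor(streams, alpha=0.05, **model):
    """Deactivate streams so the estimated expected proportion of post-change
    streams among the active ones stays below alpha at every time point."""
    K, T = streams.shape
    post = np.zeros(K)                   # posterior prob. of change per stream
    active = np.ones(K, dtype=bool)      # deactivation is permanent
    for t in range(T):
        idx = np.where(active)[0]
        for k in idx:
            post[k] = update_posterior(post[k], streams[k, t], **model)
        # Deactivate the most suspicious streams until the estimated expected
        # proportion of post-change streams among the active ones is <= alpha.
        order = idx[np.argsort(-post[idx])]
        for k in order:
            act = np.where(active)[0]
            if post[act].mean() <= alpha or len(act) == 1:
                break
            active[k] = False
    return active
```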

    Fast Charging of Lithium-Ion Batteries Using Deep Bayesian Optimization with Recurrent Neural Network

    Full text link
    Fast charging has attracted increasing attention from the battery community for electric vehicles (EVs), as it can alleviate range anxiety and reduce charging time. However, inappropriate charging strategies can cause severe degradation of batteries or even hazardous accidents. To optimize fast-charging strategies under various constraints, particularly safety limits, we propose a novel deep Bayesian optimization (BO) approach that uses a Bayesian recurrent neural network (BRNN) as the surrogate model, given its capability in handling sequential data. In addition, a combined acquisition function of expected improvement (EI) and upper confidence bound (UCB) is developed to better balance exploitation and exploration. The effectiveness of the proposed approach is demonstrated on PETLION, a porous electrode theory-based battery simulator. Our method is also compared with state-of-the-art BO methods that use a Gaussian process (GP) or a non-recurrent network as the surrogate model. The results verify the superior performance of the proposed fast-charging approach, which mainly results from two factors: (i) the BRNN-based surrogate model provides a more precise prediction of battery lifetime than those based on a GP or a non-recurrent network; and (ii) the combined acquisition function outperforms the traditional EI or UCB criteria in exploring the optimal charging protocol that maintains the longest battery lifetime.
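    The abstract does not give the exact form of the combined EI/UCB acquisition, so the sketch below assumes a simple convex combination of the two criteria, computed from the surrogate's predictive mean and standard deviation (which could come from a Bayesian recurrent network, a GP, or any other probabilistic surrogate). The weight w, the rescaling, and all names are illustrative assumptions.

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, best_f, xi=0.01):
    """EI for maximization, from the surrogate's predictive mean/std."""
    sigma = np.maximum(sigma, 1e-9)
    z = (mu - best_f - xi) / sigma
    return (mu - best_f - xi) * norm.cdf(z) + sigma * norm.pdf(z)

def upper_confidence_bound(mu, sigma, kappa=2.0):
    """UCB for maximization."""
    return mu + kappa * sigma

def combined_acquisition(mu, sigma, best_f, w=0.5):
    """Convex combination of (rescaled) EI and UCB -- an assumed form,
    not necessarily the weighting used in the paper."""
    ei = expected_improvement(mu, sigma, best_f)
    ucb = upper_confidence_bound(mu, sigma)
    # Rescale each criterion to [0, 1] so the two terms are comparable.
    ei = (ei - ei.min()) / (np.ptp(ei) + 1e-12)
    ucb = (ucb - ucb.min()) / (np.ptp(ucb) + 1e-12)
    return w * ei + (1.0 - w) * ucb

# Usage sketch: given candidate charging protocols X and surrogate predictions
# (mu, sigma) of battery lifetime, pick the next protocol to simulate, e.g.
#   next_x = X[np.argmax(combined_acquisition(mu, sigma, best_f=observed_lifetimes.max()))]
```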

    Offline and Online Models for Learning Pairwise Relations in Data

    Get PDF
    Pairwise relations between data points are essential for numerous machine learning algorithms. Many representation learning methods consider pairwise relations to identify the latent features and patterns in the data. This thesis investigates learning of pairwise relations from two different perspectives: offline learning and online learning. The first part of the thesis focuses on offline learning, starting with an investigation of the performance modeling of a synchronization method in concurrent programming using a Markov chain whose state transition matrix models pairwise relations between the cores involved in a computer process. The thesis then focuses on a particular pairwise distance measure, the minimax distance, and explores memory-efficient approaches to computing it, proposing a hierarchical representation of the data with a linear memory requirement with respect to the number of data points, from which the exact pairwise minimax distances can be derived in a memory-efficient manner. Next, a memory-efficient sampling method is proposed that follows the aforementioned hierarchical representation of the data and samples the data points in such a way that the minimax distances between all data points are maximally preserved. Finally, the first part proposes a practical non-parametric clustering of vehicle motion trajectories to annotate traffic scenarios based on transitive relations between trajectories in an embedded space. The second part of the thesis takes an online learning perspective and starts by presenting an online learning method for identifying bottlenecks in a road network by extracting the minimax path, where bottlenecks are considered as road segments with the highest cost, e.g., in the sense of travel time. Inspired by real-world road networks, the thesis assumes a stochastic traffic environment in which the road-specific probability distribution of travel time is unknown, so the parameters of that distribution must be learned from observations; the bottleneck identification task is modeled as a combinatorial semi-bandit problem. The proposed approach takes prior knowledge into account and follows a Bayesian approach to update the parameters. Moreover, it develops a combinatorial variant of Thompson Sampling and derives an upper bound for the corresponding Bayesian regret, and it proposes an approximate algorithm to address the associated computational intractability. Finally, the thesis incorporates contextual information about road network segments by extending the proposed model to a contextual combinatorial semi-bandit framework and investigates and develops various algorithms for this contextual combinatorial setting.
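    As background for the minimax-distance part of the thesis: the minimax distance between two points, i.e. the minimum over connecting paths of the largest edge weight along the path, equals the largest edge on the path joining them in a minimum spanning tree of the complete weighted graph. The sketch below uses that standard fact with O(n^2) memory; it is a baseline computation, not the thesis's linear-memory hierarchical representation.

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import pdist, squareform

def minimax_distances(X):
    """Pairwise minimax (path-based) distances for points X of shape (n, d).

    Uses the classical fact that the minimax distance between two points
    equals the largest edge on the path joining them in a minimum spanning
    tree. Requires the dense distance matrix, i.e. O(n^2) memory -- exactly
    the cost the thesis's hierarchical representation avoids.
    """
    D = squareform(pdist(X))                      # dense Euclidean distances
    mst = minimum_spanning_tree(D).toarray()
    mst = np.maximum(mst, mst.T)                  # symmetrize tree edges
    n = len(X)
    mm = np.zeros((n, n))
    adj = [np.nonzero(mst[i])[0] for i in range(n)]
    # Propagate the max-edge-on-path value over the tree from every source.
    for s in range(n):
        visited = np.zeros(n, dtype=bool)
        visited[s] = True
        stack = [s]
        while stack:
            u = stack.pop()
            for v in adj[u]:
                if not visited[v]:
                    visited[v] = True
                    mm[s, v] = max(mm[s, u], mst[u, v])
                    stack.append(v)
    return mm
```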

    Neural Architecture Search: Insights from 1000 Papers

    Full text link
    In the past decade, advances in deep learning have resulted in breakthroughs in a variety of areas, including computer vision, natural language understanding, speech recognition, and reinforcement learning. Specialized, high-performing neural architectures are crucial to the success of deep learning in these areas. Neural architecture search (NAS), the process of automating the design of neural architectures for a given task, is an inevitable next step in automating machine learning and has already outpaced the best human-designed architectures on many tasks. In the past few years, research in NAS has been progressing rapidly, with over 1000 papers released since 2020 (Deng and Lindauer, 2021). In this survey, we provide an organized and comprehensive guide to neural architecture search. We give a taxonomy of search spaces, algorithms, and speedup techniques, and we discuss resources such as benchmarks, best practices, other surveys, and open-source libraries

    A Novel Point-based Algorithm for Multi-agent Control Using the Common Information Approach

    Full text link
    The Common Information (CI) approach provides a systematic way to transform a multi-agent stochastic control problem into a single-agent partially observed Markov decision problem (POMDP) called the coordinator's POMDP. However, such a POMDP can be hard to solve due to its extraordinarily large action space. We propose a new algorithm for multi-agent stochastic control problems, called coordinator's heuristic search value iteration (CHSVI), that combines the CI approach and point-based POMDP algorithms for large action spaces. We demonstrate the algorithm by optimally solving several benchmark problems. Comment: 11 pages, 4 figures.
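    CHSVI itself is not described in the abstract; as background, the sketch below shows the generic point-based (alpha-vector) backup that point-based POMDP solvers such as PBVI and HSVI build on, for a small POMDP given as dense numpy arrays. It is illustrative only and is not the coordinator-specific algorithm proposed in the paper.

```python
import numpy as np

def point_based_backup(b, Gamma, T, O, R, gamma=0.95):
    """One point-based Bellman backup at belief b.

    b:     belief over states, shape (S,)
    Gamma: current alpha-vectors, shape (K, S)
    T:     transition probs,  T[a, s, s'] = P(s' | s, a)
    O:     observation probs, O[a, s', o] = P(o | a, s')
    R:     rewards, R[s, a]
    Returns the new alpha-vector (shape (S,)) that is optimal at b.
    """
    A, S, _ = T.shape
    nobs = O.shape[2]
    best_val, best_alpha = -np.inf, None
    for a in range(A):
        alpha_a = R[:, a].astype(float).copy()
        for o in range(nobs):
            # g[k, s] = sum_{s'} T[a, s, s'] * O[a, s', o] * Gamma[k, s']
            g = Gamma @ (T[a] * O[a, :, o][None, :]).T
            alpha_a += gamma * g[np.argmax(g @ b)]   # best alpha-vector for (a, o) at b
        val = alpha_a @ b
        if val > best_val:
            best_val, best_alpha = val, alpha_a
    return best_alpha
```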

    Thermodynamic Assessment and Optimisation of Supercritical and Transcritical Power Cycles Operating on CO2 Mixtures by Means of Artificial Neural Networks

    Get PDF
    Feb 21, 2022 to Feb 24, 2022, San Antonio, TX, United States. Closed supercritical and transcritical power cycles operating on carbon dioxide have proven to be a promising technology for power generation and, as such, they are being researched by numerous international projects today. Despite the advantageous features of these cycles, which enable very high efficiencies in intermediate-temperature applications, the major shortcoming of the technology is a strong dependence on ambient temperature: in order to perform compression near the CO2 critical point (31 °C), low ambient temperatures are needed. This is particularly challenging in Concentrated Solar Power applications, typically found in hot, semi-arid locations. To overcome this limitation, the SCARABEUS project explores the idea of blending raw carbon dioxide with small amounts of certain dopants in order to shift the critical temperature of the resulting working fluid to higher values, hence enabling gaseous compression near the critical point, or even liquid compression, regardless of a high ambient temperature. Different dopants have been studied within the project so far (i.e. C6F6, TiCl4 and SO2), but the final selection will have to account for trade-offs between thermodynamic performance, economic metrics and system reliability. Bearing all this in mind, the present paper deals with the development of a non-physics-based model using Artificial Neural Networks (ANN), developed using Matlab's Deep Learning Toolbox, to enable SCARABEUS system optimisation without running the detailed, and extremely time-consuming, thermal models developed with Thermoflex and Matlab software. In the first part of the paper, the candidate dopants and cycle layouts are presented and discussed, and a thorough description of the ANN training methodology is provided, along with all the main assumptions and hypotheses made. In the second part of the manuscript, results confirm that the ANN is a reliable tool capable of successfully reproducing the detailed Thermoflex model, estimating the cycle thermal efficiency with a Root Mean Square Error lower than 0.2 percentage points. Furthermore, the great advantage of using the proposed Artificial Neural Network is demonstrated by the huge reduction in the computational time needed, up to 99% lower than that consumed by the detailed model. Finally, the high flexibility and versatility of the ANN is shown by applying this tool in different scenarios and estimating the cycle thermal efficiency for a great variety of boundary conditions. Funding: Unión Europea H2020-81498.
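    The ANN surrogate in the paper was built with Matlab's Deep Learning Toolbox; the snippet below is only a rough Python analogue showing the kind of regression surrogate involved, with an illustrative feature set (dopant fraction, ambient temperature, turbine inlet temperature, pressure ratio), a toy synthetic dataset standing in for Thermoflex results, and an RMSE check in percentage points. None of the numbers, features, or network sizes are taken from the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.metrics import mean_squared_error

# X: boundary conditions (here randomly generated as a stand-in for samples
# from the detailed Thermoflex model): dopant molar fraction, ambient
# temperature [°C], turbine inlet temperature [°C], pressure ratio.
# y: cycle thermal efficiency in percentage points (toy ground truth).
rng = np.random.default_rng(0)
X = rng.uniform([0.05, 20.0, 550.0, 2.5], [0.25, 50.0, 700.0, 4.0], size=(2000, 4))
y = 40 + 8 * X[:, 0] - 0.15 * (X[:, 1] - 20) + 0.02 * (X[:, 2] - 550) + 1.2 * (X[:, 3] - 2.5)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
surrogate = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0),
)
surrogate.fit(X_tr, y_tr)
rmse = mean_squared_error(y_te, surrogate.predict(X_te)) ** 0.5
print(f"RMSE on held-out efficiencies: {rmse:.3f} percentage points")
```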

    Bayesian Reconstruction of Magnetic Resonance Images using Gaussian Processes

    Full text link
    A central goal of modern magnetic resonance imaging (MRI) is to reduce the time required to produce high-quality images. Efforts have included hardware and software innovations such as parallel imaging, compressed sensing, and deep learning-based reconstruction. Here, we propose and demonstrate a Bayesian method to build statistical libraries of magnetic resonance (MR) images in k-space and use these libraries to identify optimal subsampling paths and reconstruction processes. Specifically, we compute a multivariate normal distribution based upon Gaussian processes using a publicly available library of T1-weighted images of healthy brains. We combine this library with physics-informed envelope functions to retain only meaningful correlations in k-space. This covariance function is then used to select a series of ring-shaped subsampling paths using Bayesian optimization, such that they optimally explore k-space while remaining practically realizable in commercial MRI systems. Combining the optimized subsampling paths found for a range of images, we compute a generalized sampling path that, when used for novel images, produces structural similarity and error superior to previously reported reconstruction processes (i.e. 96.3% structural similarity and <0.003 normalized mean squared error from sampling only 12.5% of the k-space data). Finally, we use this reconstruction process on pathological data without retraining to show that the reconstructed images are clinically useful for stroke identification.
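    The reconstruction step described here amounts to conditioning a multivariate normal prior over k-space coefficients on the sampled locations. The sketch below shows that standard Gaussian conditioning; the prior mean and covariance (built in the paper from the T1-weighted image library and the physics-informed envelope functions) are left as inputs, and the noise level is an assumed regularizer.

```python
import numpy as np

def conditional_gaussian(mu, Sigma, obs_idx, y_obs, noise_var=1e-6):
    """Posterior over unobserved k-space coefficients given sampled ones,
    under a multivariate normal prior N(mu, Sigma) built from an image
    library (the library/envelope construction itself is not shown here).

    Returns the unobserved indices with their posterior mean and covariance.
    """
    n = len(mu)
    obs = np.asarray(obs_idx)
    un = np.setdiff1d(np.arange(n), obs)
    S_oo = Sigma[np.ix_(obs, obs)] + noise_var * np.eye(len(obs))
    S_uo = Sigma[np.ix_(un, obs)]
    S_uu = Sigma[np.ix_(un, un)]
    K = S_uo @ np.linalg.solve(S_oo, np.eye(len(obs)))   # gain matrix S_uo S_oo^{-1}
    post_mean = mu[un] + K @ (y_obs - mu[obs])
    post_cov = S_uu - K @ S_uo.T
    return un, post_mean, post_cov
```

    A reconstructed image could then be obtained by, e.g., filling the unobserved k-space entries with the posterior mean (per real/imaginary channel) and applying an inverse FFT; the Bayesian-optimization step that selects the ring-shaped subsampling paths is separate and not sketched here.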

    Anuário científico da Escola Superior de Tecnologia da Saúde de Lisboa - 2021

    Get PDF
    It is with great pleasure that we present the most recent edition (the 11th) of the Scientific Yearbook of the Escola Superior de Tecnologia da Saúde de Lisboa. As a higher education institution, we are committed to promoting and encouraging scientific research in all areas of knowledge encompassed by our mission. This publication aims to disseminate all the scientific output produced by the professors, researchers, students, and non-teaching staff of ESTeSL during 2021. This yearbook is thus a reflection of the hard and dedicated work of our community, which has committed itself to producing high-quality scientific content shared with society in the form of books, book chapters, articles published in national and international journals, abstracts of oral communications and posters, as well as the results of first- and second-cycle (bachelor's and master's) work. Accordingly, the content of this publication covers a wide variety of topics, from more fundamental themes to studies of practical application in specific health contexts, reflecting the plurality and diversity of areas that define ESTeSL and make it unique. We believe that scientific research is a fundamental pillar for the development of society, which is why we encourage our students to become involved in research activities and evidence-based practice from the beginning of their studies at ESTeSL. This publication is an example of the success of those efforts, and it is the largest ever, which makes us very proud to share the results and findings of our researchers with the scientific community and the general public. We hope this yearbook inspires and motivates other students, health professionals, teachers, and other collaborators to continue exploring new ideas and contributing to the advancement of science and technology within the body of knowledge of the areas that make up ESTeSL. We thank everyone involved in the production of this yearbook and wish you an inspiring and enjoyable read.

    Single Image Depth Prediction Made Better: A Multivariate Gaussian Take

    Full text link
    Neural-network-based single image depth prediction (SIDP) is a challenging task where the goal is to predict the scene's per-pixel depth at test time. Since the problem, by definition, is ill-posed, the fundamental goal is to come up with an approach that can reliably model the scene depth from a set of training examples. In the pursuit of perfect depth estimation, most existing state-of-the-art learning techniques predict a single scalar depth value per pixel. Yet, it is well known that the trained model has accuracy limits and can predict imprecise depth. Therefore, an SIDP approach must be mindful of the expected depth variations in the model's prediction at test time. Accordingly, we introduce an approach that performs continuous modeling of per-pixel depth, where we can predict and reason about the per-pixel depth and its distribution. To this end, we model per-pixel scene depth using a multivariate Gaussian distribution. Moreover, contrary to existing uncertainty modeling methods in the same spirit, where per-pixel depth is assumed to be independent, we introduce per-pixel covariance modeling that encodes the depth dependency of each pixel w.r.t. all the scene points. Unfortunately, per-pixel depth covariance modeling leads to a computationally expensive continuous loss function, which we solve efficiently using a learned low-rank approximation of the overall covariance matrix. Notably, when tested on benchmark datasets such as KITTI, NYU, and SUN-RGB-D, the SIDP model obtained by optimizing our loss function shows state-of-the-art results. Our method (named MG) ranks among the top entries on the KITTI depth-prediction benchmark leaderboard. Comment: Accepted to IEEE/CVF CVPR 2023. Draft info: 17 pages, 13 figures, 9 tables.
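    The loss described here is a multivariate Gaussian negative log-likelihood made tractable through a learned low-rank approximation of the covariance. The sketch below shows the standard low-rank-plus-diagonal trick (Woodbury identity and matrix determinant lemma) for such a likelihood; the parameterization Sigma = diag(d) + U U^T is an assumption for illustration, not necessarily the paper's exact form.

```python
import numpy as np

def lowrank_gaussian_nll(depth_err, d, U):
    """Negative log-likelihood of residuals under N(0, Sigma) with
    Sigma = diag(d) + U @ U.T  (n pixels, rank r, r << n).

    depth_err: (n,) predicted-minus-true per-pixel depth
    d:         (n,) positive per-pixel variances
    U:         (n, r) low-rank covariance factor
    Costs O(n r^2) instead of O(n^3), via the Woodbury identity and the
    matrix determinant lemma.
    """
    n, r = U.shape
    Dinv_e = depth_err / d                       # D^{-1} e
    Dinv_U = U / d[:, None]                      # D^{-1} U
    cap = np.eye(r) + U.T @ Dinv_U               # capacitance matrix I + U^T D^{-1} U
    # Quadratic form e^T Sigma^{-1} e via Woodbury:
    # Sigma^{-1} = D^{-1} - D^{-1} U cap^{-1} U^T D^{-1}
    tmp = np.linalg.solve(cap, U.T @ Dinv_e)
    quad = depth_err @ Dinv_e - (U.T @ Dinv_e) @ tmp
    # log|Sigma| = log|D| + log|cap|  (matrix determinant lemma)
    logdet = np.sum(np.log(d)) + np.linalg.slogdet(cap)[1]
    return 0.5 * (quad + logdet + n * np.log(2 * np.pi))
```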

    PreFair: Privately Generating Justifiably Fair Synthetic Data

    Full text link
    When a database is protected by Differential Privacy (DP), its usability is limited in scope. In this scenario, generating a synthetic version of the data that mimics the properties of the private data allows users to perform any operation on the synthetic data while maintaining the privacy of the original data. Therefore, multiple works have been devoted to devising systems for DP synthetic data generation. However, such systems may preserve or even magnify properties of the data that make it unfair, rendering the synthetic data unfit for use. In this work, we present PreFair, a system that allows for DP fair synthetic data generation. PreFair extends the state-of-the-art DP data generation mechanisms by incorporating a causal fairness criterion that ensures fair synthetic data. We adapt the notion of justifiable fairness to fit the synthetic data generation scenario. We further study the problem of generating DP fair synthetic data, showing its intractability and designing algorithms that are optimal under certain assumptions. We also provide an extensive experimental evaluation, showing that PreFair generates synthetic data that is significantly fairer than the data generated by leading DP data generation mechanisms, while remaining faithful to the private data. Comment: 15 pages, 11 figures.