17,053 research outputs found

    Fast Charging of Lithium-Ion Batteries Using Deep Bayesian Optimization with Recurrent Neural Network

    Full text link
    Fast charging has attracted increasing attention from the battery community as a way to alleviate range anxiety and reduce charging time for electric vehicles (EVs). However, inappropriate charging strategies can cause severe battery degradation or even hazardous accidents. To optimize fast-charging strategies under various constraints, particularly safety limits, we propose a novel deep Bayesian optimization (BO) approach that uses a Bayesian recurrent neural network (BRNN) as the surrogate model, given its capability for handling sequential data. In addition, a combined acquisition function of expected improvement (EI) and upper confidence bound (UCB) is developed to better balance exploitation and exploration. The effectiveness of the proposed approach is demonstrated on PETLION, a porous-electrode-theory-based battery simulator. Our method is also compared with state-of-the-art BO methods that use a Gaussian process (GP) or a non-recurrent network as the surrogate model. The results verify the superior performance of the proposed fast-charging approach, which stems mainly from two factors: (i) the BRNN-based surrogate model predicts battery lifetime more precisely than one based on a GP or a non-recurrent network; and (ii) the combined acquisition function outperforms the traditional EI or UCB criteria in finding the optimal charging protocol that maximizes battery lifetime.
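    A combined EI/UCB acquisition can be sketched as a weighted blend of the two standard criteria under a Gaussian posterior. The convex-combination rule, the 50/50 weight, and the `kappa`/`xi` values below are illustrative assumptions; the abstract does not state the paper's exact combination scheme.

```python
import math

def expected_improvement(mu, sigma, best, xi=0.01):
    # EI of a candidate protocol with Gaussian posterior N(mu, sigma^2),
    # maximizing predicted battery lifetime; `best` is the best lifetime so far.
    if sigma == 0.0:
        return 0.0
    z = (mu - best - xi) / sigma
    pdf = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return (mu - best - xi) * cdf + sigma * pdf

def upper_confidence_bound(mu, sigma, kappa=2.0):
    # UCB: optimistic estimate that rewards exploring uncertain protocols.
    return mu + kappa * sigma

def combined_acquisition(mu, sigma, best, weight=0.5, kappa=2.0):
    # Convex combination of EI and UCB; `weight` trades off the two criteria.
    return (weight * expected_improvement(mu, sigma, best)
            + (1.0 - weight) * upper_confidence_bound(mu, sigma, kappa))
```

    The next charging protocol to simulate would be the candidate maximizing this score over the surrogate's posterior predictions.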

    Exploiting Symmetry and Heuristic Demonstrations in Off-policy Reinforcement Learning for Robotic Manipulation

    Full text link
    Reinforcement learning shows significant potential for automatically building control policies in numerous domains, but it is inefficient when applied to robot manipulation tasks because of the curse of dimensionality. To facilitate the learning of such tasks, prior knowledge or heuristics that incorporate inherent simplifications can effectively improve learning performance. This paper defines and incorporates the natural symmetry present in physical robotic environments. Sample-efficient policies are then trained by exploiting expert demonstrations in symmetrical environments through a combination of reinforcement learning and behavior cloning, which gives the off-policy learning process a diverse yet compact initialization. Furthermore, the paper presents a rigorous framework for this recent concept and explores its scope in robot manipulation tasks. The proposed method is validated on two point-to-point reaching tasks with an industrial arm, with and without an obstacle, in a simulation study. Demonstrations are generated by a PID controller that tracks linear joint-space trajectories, with hard-coded temporal logic producing interim midpoints. The results show the effect of the number of demonstrations and quantify the contribution of behavior cloning, exemplifying the possible improvement over model-free reinforcement learning in common manipulation tasks. A comparison with a traditional off-policy reinforcement learning algorithm indicates the method's advantage in learning performance and its potential value for applications.
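    The two ingredients can be sketched at a schematic level: mirroring demonstrations across a symmetry plane to enlarge the demonstration set, and blending a behavior-cloning term into the off-policy loss. The coordinate-axis symmetry and the linear annealing schedule are illustrative assumptions, not the paper's exact scheme.

```python
def mirror_demo(states, axis=0):
    # Exploit reflective symmetry of the workspace: negating one coordinate of
    # every state yields a second, equally valid demonstration for free.
    # The choice of symmetry plane (here, a coordinate axis) is task-dependent.
    return [tuple(-s[i] if i == axis else s[i] for i in range(len(s)))
            for s in states]

def bc_weight(step, total_steps, lam0=1.0):
    # Linearly anneal the behavior-cloning weight so demonstrations guide
    # early learning and reinforcement dominates later (assumed schedule).
    return lam0 * max(0.0, 1.0 - step / total_steps)

def combined_loss(rl_loss, bc_loss, step, total_steps):
    # Off-policy RL loss plus the weighted imitation (cloning) loss.
    return rl_loss + bc_weight(step, total_steps) * bc_loss
```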

    Operational meanings of a generalized conditional expectation in quantum metrology

    Full text link
    A unifying formalism of generalized conditional expectations (GCEs) for quantum mechanics has recently emerged, but its physical implications regarding the retrodiction of a quantum observable remain controversial. To address the controversy, here I offer operational meanings for a version of the GCEs in the context of quantum parameter estimation. When a quantum sensor is corrupted by decoherence, the GCE is found to relate the operator-valued optimal estimators before and after the decoherence. Furthermore, the error increase, or regret, caused by the decoherence is shown to be equal to a divergence between the two estimators. The real weak value, as a special case of the GCE, plays the same role in suboptimal estimation: its divergence from the optimal estimator is precisely the regret for not using the optimal measurement. As an application of the GCE, I show that it enables the use of dynamic programming for designing a controller that minimizes the estimation error. For the frequentist setting, I show that the GCE leads to a quantum Rao-Blackwell theorem, which has significant implications for quantum metrology and thermal-light sensing in particular. These results give the GCE and the associated divergence a natural, useful, and incontrovertible role in quantum decision and control theory. (Comment: 17 pages, 3 figures. v4: polished everything and added more references.)

    Thermodynamic Assessment and Optimisation of Supercritical and Transcritical Power Cycles Operating on CO2 Mixtures by Means of Artificial Neural Networks

    Get PDF
    Feb 21–24, 2022, San Antonio, TX, United States.
    Closed supercritical and transcritical power cycles operating on carbon dioxide have proven to be a promising technology for power generation and, as such, are being researched by numerous international projects today. Despite the advantageous features of these cycles, which enable very high efficiencies in intermediate-temperature applications, the major shortcoming of the technology is a strong dependence on ambient temperature: to perform compression near the CO2 critical point (31 °C), low ambient temperatures are needed. This is particularly challenging in Concentrated Solar Power applications, typically found in hot, semi-arid locations. To overcome this limitation, the SCARABEUS project explores the idea of blending raw carbon dioxide with small amounts of certain dopants in order to shift the critical temperature of the resulting working fluid to higher values, hence enabling gaseous compression near the critical point, or even liquid compression, regardless of a high ambient temperature. Different dopants have been studied within the project so far (i.e. C6F6, TiCl4 and SO2), but the final selection will have to account for trade-offs between thermodynamic performance, economic metrics and system reliability. Bearing all this in mind, the present paper deals with the development of a non-physics-based model using Artificial Neural Networks (ANNs), developed with Matlab's Deep Learning Toolbox, to enable SCARABEUS system optimisation without running the detailed, and extremely time-consuming, thermal models developed with Thermoflex and Matlab software. In the first part of the paper, the candidate dopants and cycle layouts are presented and discussed, and a thorough description of the ANN training methodology is provided, along with the main assumptions and hypotheses made.
    In the second part of the manuscript, the results confirm that the ANN is a reliable tool capable of successfully reproducing the detailed Thermoflex model, estimating the cycle thermal efficiency with a Root Mean Square Error lower than 0.2 percentage points. Furthermore, the great advantage of the proposed Artificial Neural Network is demonstrated by the huge reduction in computational time, up to 99% lower than that consumed by the detailed model. Finally, the high flexibility and versatility of the ANN are shown by applying the tool in different scenarios and estimating the cycle thermal efficiency for a wide variety of boundary conditions. (Funding: Unión Europea H2020-81498)
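    The surrogate idea reduces to a small regression network plus the RMSE validation metric the paper reports. The sketch below assumes a one-hidden-layer tanh network with hand-supplied weights; in the actual work the network is trained in Matlab's Deep Learning Toolbox on samples from the Thermoflex model.

```python
import math

def mlp_forward(x, w1, b1, w2, b2):
    # One-hidden-layer surrogate (tanh hidden units, linear output) mapping
    # cycle boundary conditions to thermal efficiency. Weights are placeholders
    # standing in for a trained network.
    hidden = [math.tanh(sum(wij * xj for wij, xj in zip(row, x)) + bi)
              for row, bi in zip(w1, b1)]
    return sum(wi * hi for wi, hi in zip(w2, hidden)) + b2

def rmse(predictions, targets):
    # Root-mean-square error between surrogate and detailed-model efficiencies,
    # the validation metric reported in the paper (< 0.2 percentage points).
    n = len(predictions)
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(predictions, targets)) / n)
```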

    Projected Multi-Agent Consensus Equilibrium (PMACE) for Distributed Reconstruction with Application to Ptychography

    Full text link
    Multi-Agent Consensus Equilibrium (MACE) formulates an inverse imaging problem as a balance among multiple update agents, such as data-fitting terms and denoisers. However, each such agent operates on a separate copy of the full image, leading to redundant memory use and slow convergence when each agent affects only a small subset of the full image. In this paper, we extend MACE to Projected Multi-Agent Consensus Equilibrium (PMACE), in which each agent updates only a projected component of the full image, greatly reducing memory use for some applications. We describe PMACE in terms of an equilibrium problem and an equivalent fixed-point problem, and show that in most cases the PMACE equilibrium is not the solution of an optimization problem. To demonstrate the value of PMACE, we apply it to ptychography, in which a sample is reconstructed from the diffraction patterns resulting from coherent X-ray illumination at multiple overlapping spots. In our PMACE formulation, each spot corresponds to a separate data-fitting agent, with the final solution found as an equilibrium among all the agents. Our results demonstrate that the PMACE reconstruction algorithm generates more accurate reconstructions at a lower computational cost than existing ptychography algorithms when the spots are sparsely sampled.
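    The memory saving comes from agents holding only their own projected component, with a consensus step merging the components back into a full image. The 1-D, uniformly-weighted merge below is a deliberately simplified sketch; the actual PMACE operators involve probe-dependent weighting not described in the abstract.

```python
def pmace_consensus(patches, windows, n):
    # Consensus step sketch: merge the agents' patch estimates into the full
    # image, averaging pixels covered by more than one agent. In the
    # ptychography setting, each window is one illumination spot's support.
    image = [0.0] * n
    counts = [0] * n
    for patch, (start, stop) in zip(patches, windows):
        for i, value in zip(range(start, stop), patch):
            image[i] += value
            counts[i] += 1
    return [v / c if c else 0.0 for v, c in zip(image, counts)]
```

    Each agent then re-extracts its window from the merged image and applies its own data-fitting update, iterating to a fixed point.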

    Structured Dynamic Pricing: Optimal Regret in a Global Shrinkage Model

    Full text link
    We consider dynamic pricing strategies in a streamed longitudinal data set-up where the objective is to maximize, over time, the cumulative profit across a large number of customer segments. We consider a dynamic probit model in which the consumers' preferences, as well as their price sensitivity, vary over time. Building on the well-known finding that consumers sharing similar characteristics act in similar ways, we consider a global shrinkage structure, which assumes that the consumers' preferences across the different segments can be well approximated by a spatial autoregressive (SAR) model. In such a streamed longitudinal set-up, we measure the performance of a dynamic pricing policy via regret, the expected revenue loss compared to a clairvoyant that knows the sequence of model parameters in advance. We propose a pricing policy based on penalized stochastic gradient descent (PSGD) and explicitly characterize its regret as a function of time, the temporal variability in the model parameters, and the strength of the auto-correlation network structure spanning the customer segments. Our regret analysis not only demonstrates the asymptotic optimality of the proposed policy but also shows that, for policy planning, it is essential to incorporate the available structural information, as policies based on unshrunken models are highly sub-optimal in this set-up. (Comment: 34 pages, 5 figures.)
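    The core update can be sketched as a stochastic gradient step on profit plus a shrinkage term pulling each segment's parameters toward the SAR structure. The quadratic-penalty form, the known `rho`, and the fixed step size below are illustrative assumptions; the paper's exact penalty and schedule may differ.

```python
def sar_shrinkage_residual(theta, W, rho):
    # Residual (I - rho*W) @ theta of the SAR shrinkage structure: each
    # segment's parameter is shrunk toward a weighted average of its network
    # neighbours'. rho is the auto-correlation strength (assumed known here).
    Wtheta = [sum(wij * tj for wij, tj in zip(row, theta)) for row in W]
    return [t - rho * wt for t, wt in zip(theta, Wtheta)]

def psgd_step(theta, profit_grad, penalty_grad, lr, lam):
    # Penalized stochastic gradient ascent on profit: follow the noisy profit
    # gradient while shrinking toward the SAR structure via the penalty term.
    return [t + lr * (g - lam * pg)
            for t, g, pg in zip(theta, profit_grad, penalty_grad)]
```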

    Anuário científico da Escola Superior de Tecnologia da Saúde de Lisboa - 2021

    Get PDF
    It is with great pleasure that we present the latest (11th) edition of the Anuário Científico da Escola Superior de Tecnologia da Saúde de Lisboa. As a higher-education institution, we are committed to promoting and encouraging scientific research in all the areas of knowledge encompassed by our mission. This publication aims to disseminate all the scientific output produced by the professors, researchers, students, and non-teaching staff of ESTeSL during 2021. The yearbook thus reflects the hard, dedicated work of our community, which strove to produce high-quality scientific content and share it with society in the form of books, book chapters, articles published in national and international journals, abstracts of oral communications and posters, as well as the results of first- and second-cycle degree work. Its content accordingly spans a wide variety of topics, from fundamental themes to studies of practical application in specific health contexts, reflecting the plurality and diversity of areas that define ESTeSL and make it unique. We believe that scientific research is a fundamental axis of society's development, which is why we encourage our students to become involved in research activities and evidence-based practice from the beginning of their studies at ESTeSL. This publication, the largest edition ever, is an example of the success of those efforts, and we are very proud to share the results and discoveries of our researchers with the scientific community and the general public. We hope this yearbook inspires and motivates other students, health professionals, teachers, and other collaborators to keep exploring new ideas and to contribute to the advancement of science and technology within the body of knowledge of the areas that make up ESTeSL.
    We thank everyone involved in producing this yearbook and wish you an inspiring and enjoyable read.

    Deep Transfer Learning Applications in Intrusion Detection Systems: A Comprehensive Review

    Full text link
    Globally, the external Internet is increasingly being connected to contemporary industrial control systems. As a result, there is an immediate need to protect these networks from numerous threats. The key infrastructure of industrial activity can be protected from harm by an intrusion detection system (IDS), a preventive mechanism that recognizes new kinds of dangerous threats and hostile activities. This study examines the most recent artificial intelligence (AI) techniques used to build IDSs for many kinds of industrial control networks, with a particular emphasis on IDSs based on deep transfer learning (DTL). The latter can be seen as a type of information fusion that merges and/or adapts knowledge from multiple domains to enhance performance on the target task, particularly when labeled data in the target domain is scarce. Publications issued after 2015 were taken into account and divided into three categories: DTL-only and IDS-only papers, covered in the introduction and background, and DTL-based IDS papers, which form the core of this review. By reading this review, researchers will gain a better grasp of the current state of DTL approaches used in IDSs across many different types of networks. Other useful information is also covered, such as the datasets used, the type of DTL employed, the pre-trained network, the IDS techniques, the evaluation metrics, including accuracy, F-score and false alarm rate (FAR), and the improvement gained. The algorithms and methods used in several studies, which clearly illustrate the principles of each DTL-based IDS subcategory, are presented to the reader.
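    The common DTL recipe surveyed here, fine-tuning a pre-trained network on scarce target-domain labels, can be sketched at the level of layer lists, together with two of the evaluation metrics the review covers. The layer names and freezing rule are illustrative, not tied to any specific paper in the survey.

```python
def transfer_layers(pretrained_layers, n_frozen):
    # DTL fine-tuning sketch: reuse a network pre-trained on a source domain,
    # freeze the bottom feature-extraction layers, and leave only the top
    # layers trainable for re-fitting on scarce target-domain IDS labels.
    return [{"name": name, "trainable": i >= n_frozen}
            for i, name in enumerate(pretrained_layers)]

def f_score(tp, fp, fn):
    # F-score: harmonic mean of precision and recall over detected intrusions.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2.0 * precision * recall / (precision + recall)

def false_alarm_rate(fp, tn):
    # FAR: fraction of benign traffic incorrectly flagged as an intrusion.
    return fp / (fp + tn)
```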

    A Decision Support System for Economic Viability and Environmental Impact Assessment of Vertical Farms

    Get PDF
    Vertical farming (VF) is the practice of growing crops or animals along the vertical dimension via multi-tier racks or vertically inclined surfaces. In this thesis, I focus on the emerging industry of plant-specific VF. Vertical plant farming (VPF) is a promising and relatively novel practice that can be conducted in buildings with environmental control and artificial lighting. However, the nascent sector has faced challenges in economic viability, standardisation, and environmental sustainability. Practitioners and academics call for a comprehensive financial analysis of VPF, but efforts are stifled by a lack of valid and available data. A review of economic-estimation and horticultural software identifies a need for a decision support system (DSS) that facilitates risk-empowered business planning for vertical farmers. This thesis proposes an open-source DSS framework to evaluate business sustainability through financial risk and environmental impact assessments. Data from the literature, alongside lessons learned from industry practitioners, would be centralised in the proposed DSS using imprecise-data techniques. These techniques have been applied in engineering but are seldom used in financial forecasting, and could benefit complex sectors that have only scarce data with which to predict business viability. To begin executing the DSS framework, VPF practitioners were interviewed using a mixed-methods approach. Learnings from over 19 shuttered and operational VPF projects provide insights into the barriers inhibiting scalability and identify risks, which are organised into a risk taxonomy. Labour was the most commonly reported top challenge, so research was conducted into lean principles for improving productivity. A probabilistic model representing a spectrum of variables and their associated uncertainty was built according to the DSS framework to evaluate the financial risk of VF projects.
    This enabled flexible computation without precise production or financial data, improving the accuracy of economic estimation. The model assessed two VPF cases (one in the UK and one in Japan), providing the first risk and uncertainty quantification of VPF business models in the literature. The results highlighted measures to improve economic viability and assessed the viability of the UK and Japan cases. An environmental impact assessment model was then developed, allowing VPF operators to evaluate their carbon footprint against traditional agriculture using life-cycle assessment, and strategies for net-zero-carbon production are explored through sensitivity analysis. Renewable energies, especially solar, geothermal, and tidal power, show promise for reducing the carbon emissions of indoor VPF; results show that renewably powered VPF can reduce carbon emissions compared with field-based agriculture when land-use change is considered. The drivers for DSS adoption have also been researched, showing a pathway of compliance and design thinking to overcome the 'problem of implementation' and enable commercialisation. Further work is suggested to standardise VF equipment, collect benchmarking data, and characterise risks. This work will reduce risk and uncertainty and accelerate the sector's emergence.
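    The probabilistic-risk idea can be sketched as a small Monte Carlo model: draw uncertain revenues and costs from interval bounds and read risk off the resulting net-present-value distribution. The uniform distributions, discounting scheme, and parameter names below are placeholder assumptions standing in for the thesis's imprecise-data model.

```python
import random

def simulate_npv(n_sims, revenue_range, cost_range, discount_rate, years, seed=0):
    # Draw annual revenue and operating cost from interval bounds (uniform
    # placeholders) and discount the cash flows to an NPV sample per run.
    rng = random.Random(seed)
    samples = []
    for _ in range(n_sims):
        revenue = rng.uniform(*revenue_range)
        cost = rng.uniform(*cost_range)
        npv = sum((revenue - cost) / (1.0 + discount_rate) ** t
                  for t in range(1, years + 1))
        samples.append(npv)
    return samples

def downside_risk(npv_samples):
    # Financial-risk read-out: probability the simulated farm loses money.
    return sum(1 for v in npv_samples if v < 0.0) / len(npv_samples)
```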