726 research outputs found

    AdaGraph: Unifying Predictive and Continuous Domain Adaptation through Graphs

    Full text link
    The ability to categorize is a cornerstone of visual intelligence, and a key functionality for artificial, autonomous visual machines. This problem will never be solved without algorithms able to adapt and generalize across visual domains. Within the context of domain adaptation and generalization, this paper focuses on the predictive domain adaptation scenario, namely the case where no target data are available and the system has to learn to generalize from annotated source images plus unlabeled samples with associated metadata from auxiliary domains. Our contribution is the first deep architecture that tackles predictive domain adaptation, able to leverage the information brought by the auxiliary domains through a graph. Moreover, we present a simple yet effective strategy that allows us to take advantage of the incoming target data at test time, in a continuous domain adaptation scenario. Experiments on three benchmark databases support the value of our approach. Comment: CVPR 2019 (oral)
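    As a rough illustration of the graph idea (a toy sketch, not the paper's actual architecture), each auxiliary domain can be pictured as a graph node holding domain-specific parameters, with edge weights derived from metadata similarity; parameters for an unseen target domain are then estimated as a weighted combination over its neighbours. The names below (domain_metadata, domain_params) and the Gaussian kernel are assumptions made for the example.

        import numpy as np

        def kernel(m1, m2, bandwidth=1.0):
            # Similarity between two metadata vectors (e.g. viewpoint, year, weather code).
            return np.exp(-np.sum((m1 - m2) ** 2) / (2.0 * bandwidth ** 2))

        def predict_target_params(domain_metadata, domain_params, target_metadata):
            # domain_metadata: list of metadata vectors, one per auxiliary domain (graph node).
            # domain_params:   list of parameter vectors attached to each node.
            # Returns a metadata-weighted average as the guess for the unseen target domain.
            weights = np.array([kernel(m, target_metadata) for m in domain_metadata])
            weights /= weights.sum()
            return np.average(np.stack(domain_params), axis=0, weights=weights)

        # Toy usage: three auxiliary domains described by 2-d metadata.
        meta = [np.array([0.0, 1.0]), np.array([1.0, 0.0]), np.array([1.0, 1.0])]
        params = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.4, 0.6])]
        print(predict_target_params(meta, params, np.array([0.9, 0.9])))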

    Técnicas de Inteligência Artificial Aplicadas ao Controlo Preditivo de Baterias Estacionárias (Artificial Intelligence Techniques Applied to the Predictive Control of Stationary Batteries)

    Get PDF
    The expected decrease of feed-in tariffs in the upcoming years meets the current need for a more sustainable and autonomous electric power grid, capable of integrating more energy from renewable sources. This trend makes self-consumption particularly relevant for prosumers (consumers that own small distributed generation units), namely at the household and small-industry levels. With the growing advances in stationary storage technologies, reflected above all in their economic viability, stationary batteries, along with electric vehicles, are viewed as one of the best solutions to maximize the self-consumption levels of prosumers. Today's storage controllers, used to manage the charging and discharging of these batteries, act in a reactive and immediate way. It would be more interesting for a prosumer if such controllers acted predictively, i.e., were capable of anticipating how consumption and production profiles will evolve, in order to maximize self-consumption. If we also consider the prosumer to be involved in a dynamic market pricing scheme, the controller should also behave opportunistically, taking market prices into account so that energy requests to the grid are shifted, whenever possible, to time windows where prices are cheaper. Mathematically, this problem is multi-objective and multi-temporal. The large number of state variables, which, being forecasts, carry errors, makes the problem so complex that it can only be addressed with artificial intelligence techniques. The present work evaluates the capability of artificial intelligence techniques to predictively control stationary batteries coupled with photovoltaic generation units. In particular, the Proximal Policy Gradient method made available by OpenAI is evaluated; it belongs to the category of Deep Reinforcement Learning, which combines neural networks with the training of artificial agents through Reinforcement Learning. A comparison with genetic algorithms is made in order to assess the viability of this methodology for the problem at hand.
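    As a minimal sketch of how such a dispatch problem can be framed for a reinforcement-learning agent (an illustrative toy under assumed names and constants, not the environment used in the thesis), the state can hold the battery state of charge plus forecast PV production, load and market price; the action a normalized charge/discharge power; and the reward the negative cost of the resulting grid imports, so that cheaper hours are preferred.

        import numpy as np

        class BatteryEnv:
            """Toy battery-dispatch environment: maximize self-consumption, minimize grid cost."""

            def __init__(self, pv, load, price, capacity_kwh=10.0, max_power_kw=3.0):
                self.pv, self.load, self.price = pv, load, price        # hourly forecasts
                self.capacity, self.max_power = capacity_kwh, max_power_kw
                self.reset()

            def reset(self):
                self.t, self.soc = 0, 0.5 * self.capacity               # start half full
                return self._state()

            def _state(self):
                return np.array([self.soc, self.pv[self.t], self.load[self.t], self.price[self.t]])

            def step(self, action):
                # action in [-1, 1]: negative = discharge, positive = charge (1-hour steps).
                power = float(np.clip(action, -1.0, 1.0)) * self.max_power
                power = float(np.clip(power, -self.soc, self.capacity - self.soc))  # respect SoC limits
                self.soc += power
                net = self.load[self.t] - self.pv[self.t] + power        # energy still needed from the grid
                grid_import = max(net, 0.0)
                reward = -grid_import * self.price[self.t]               # imports during cheap hours hurt less
                self.t += 1
                done = self.t >= len(self.load)
                return (self._state() if not done else None), reward, done

        # Toy usage with a random one-day profile and a random policy.
        rng = np.random.default_rng(0)
        env = BatteryEnv(pv=rng.uniform(0, 2, 24), load=rng.uniform(0.5, 2.5, 24), price=rng.uniform(0.1, 0.3, 24))
        state, total, done = env.reset(), 0.0, False
        while not done:
            state, reward, done = env.step(rng.uniform(-1, 1))
            total += reward
        print(total)

    A learned policy (for example one trained with PPO) would replace the random action above; the point of the sketch is only how self-consumption and market price enter the reward.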

    Bio-inspired optimization algorithms for multi-objective problems

    Get PDF
    Advisor: Aurora Trinidad Ramirez Pozo. Co-advisor: Roberto Santana Hermida. Doctoral thesis - Universidade Federal do Paraná, Setor de Ciências Exatas, Graduate Program in Informatics. Defense: Curitiba, 06/03/2017. Includes references: f. 161-72. Area of concentration: Computer Science.
    Abstract: Multi-Objective Problems (MOPs) are characterized by having two or more objective functions to be optimized simultaneously. In these problems, the goal is to find a set of non-dominated solutions, usually called the Pareto optimal set, whose image in the objective space is called the Pareto front. MOPs presenting more than three objective functions are known as Many-Objective Problems (MaOPs), and several studies indicate that the search ability of Pareto-based algorithms deteriorates severely in such problems. The development of bio-inspired optimizers to tackle MOPs and MaOPs is a field that has been gaining attention in the community; however, there are many opportunities to innovate. Multi-Objective Particle Swarm Optimization (MOPSO) is one of the bio-inspired algorithms well suited to be modified and improved, mostly due to its simplicity, flexibility and good results. To enhance the search ability of MOPSOs, we followed two research lines. The first focuses on leader and archiving methods. Previous works have pointed out that these components can influence the algorithm's performance, yet their selection can be problem-dependent. An alternative is to select them dynamically by employing hyper-heuristics. By combining hyper-heuristics and MOPSO, we developed a new framework called H-MOPSO. The second research line is also based on previous works of the group that focus on multi-swarm search. It takes as base framework the iterated multi-swarm (I-Multi) algorithm, whose search procedure can be divided into a diversity search and a multi-swarm search, the latter employing clustering to split a swarm into several sub-swarms. In order to improve the performance of I-Multi, we explored two possibilities: the first was to further investigate the effect of different characteristics of the clustering mechanism of I-Multi; the second was to investigate alternatives to improve the convergence of each sub-swarm, such as hybridizing it with an Estimation of Distribution Algorithm (EDA). This work on EDAs increased our interest in the approach, hence we followed another research line, investigating alternatives to create multi-objective versions of one of the most powerful EDAs in the literature, the Covariance Matrix Adaptation Evolution Strategy (CMA-ES). To validate our work, several empirical studies were conducted to investigate the search ability of the proposed approaches. In all studies, the investigated algorithms reached results competitive with or better than well-established algorithms from the literature. Keywords: multi-objective, estimation of distribution algorithms, particle swarm optimization, multi-swarm, hyper-heuristics
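    For readers unfamiliar with the Pareto machinery underlying MOPSO leader and archiving methods, the short sketch below (illustrative only, not the thesis's H-MOPSO or I-Multi code) shows a dominance test and a simple unbounded archive update for minimization problems.

        def dominates(a, b):
            # a dominates b (minimization): no worse in every objective, strictly better in at least one.
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def update_archive(archive, candidate):
            # Keep only mutually non-dominated objective vectors.
            if any(dominates(kept, candidate) for kept in archive):
                return archive                                   # candidate is dominated, discard it
            return [kept for kept in archive if not dominates(candidate, kept)] + [candidate]

        # Toy usage on 2-objective points.
        archive = []
        for point in [(1.0, 5.0), (2.0, 4.0), (0.5, 6.0), (1.5, 4.5), (1.0, 4.0)]:
            archive = update_archive(archive, point)
        print(archive)   # only the non-dominated points remain

    Leaders for the swarm are then typically drawn from such an archive; bounded archives additionally need a truncation rule, which is where the archiving methods studied in the thesis differ.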

    Learning to represent surroundings, anticipate motion and take informed actions in unstructured environments

    Get PDF
    Contemporary robots have become exceptionally skilled at achieving specific tasks in structured environments. However, they often fail when faced with the limitless permutations of real-world unstructured environments. This motivates robotics methods which learn from experience, rather than follow a pre-defined set of rules. In this thesis, we present a range of learning-based methods aimed at enabling robots, operating in dynamic and unstructured environments, to better understand their surroundings, anticipate the actions of others, and take informed actions accordingly

    Scaling up integrated photonic reservoirs towards low-power high-bandwidth computing

    No full text

    Scalable Transfer Evolutionary Optimization: Coping with Big Task Instances

    Full text link
    In today's digital world, we are confronted with an explosion of data and models produced and manipulated by numerous large-scale IoT/cloud-based applications. Under such settings, existing transfer evolutionary optimization frameworks grapple with satisfying two important quality attributes, namely scalability against a growing number of source tasks and online learning agility in the face of sparsity of sources relevant to the target task of interest. Satisfying these attributes facilitates practical deployment of transfer optimization to big source instances while simultaneously curbing the threat of negative transfer. While applications of existing algorithms are limited to tens of source tasks, in this paper we take a quantum leap forward in enabling a two orders of magnitude scale-up in the number of tasks; i.e., we efficiently handle scenarios with up to thousands of source problem instances. We devise a novel transfer evolutionary optimization framework comprising two co-evolving species for joint evolution in the space of source knowledge and in the search space of solutions to the target problem. In particular, co-evolution enables the learned knowledge to be orchestrated on the fly, expediting convergence in the target optimization task. We have conducted an extensive series of experiments across a set of practically motivated discrete and continuous optimization examples comprising a large number of source problem instances, of which only a small fraction show source-target relatedness. The experimental results strongly validate the efficacy of the proposed framework, with its two salient features of scalability and online learning agility. Comment: 12 pages, 5 figures, 2 tables, 2 algorithm pseudocodes
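    To make the co-evolution idea concrete (an illustrative toy under assumed names, not the authors' framework), one population can evolve candidate solutions to the target task while a second "species" evolves a probability vector over the source tasks; sources whose transferred solutions improve the target population are reinforced, so unrelated sources fade even when thousands are present. A minimal multiplicative-weights style update of that source vector might look as follows.

        import numpy as np

        def update_source_weights(weights, improvements, lr=0.2):
            # weights:      probability vector over source tasks (the "knowledge species").
            # improvements: average fitness gain obtained from solutions seeded by each source.
            # Sources that help the target search are reinforced multiplicatively; the rest decay.
            scores = weights * np.exp(lr * improvements)
            return scores / scores.sum()

        # Toy usage: 1,000 sources, only the first 10 actually related to the target.
        rng = np.random.default_rng(1)
        w = np.full(1000, 1.0 / 1000)
        for _ in range(50):
            gains = rng.normal(0.0, 0.1, 1000)
            gains[:10] += 1.0                      # related sources consistently help
            w = update_source_weights(w, gains)
        print(w[:10].sum())                        # most probability mass ends up on the related sources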

    Maximum Entropy RL (Provably) Solves Some Robust RL Problems

    Full text link
    Many potential applications of reinforcement learning (RL) require guarantees that the agent will perform well in the face of disturbances to the dynamics or reward function. In this paper, we prove theoretically that standard maximum entropy RL is robust to some disturbances in the dynamics and the reward function. While this capability of MaxEnt RL has been observed empirically in prior work, to the best of our knowledge our work provides the first rigorous proof and theoretical characterization of the MaxEnt RL robust set. While a number of prior robust RL algorithms have been designed to handle similar disturbances to the reward function or dynamics, these methods typically require adding additional moving parts and hyperparameters on top of a base RL algorithm. In contrast, our theoretical results suggest that MaxEnt RL by itself is robust to certain disturbances, without requiring any additional modifications. While this does not imply that MaxEnt RL is the best available robust RL method, MaxEnt RL does possess a striking simplicity and appealing formal guarantees. Comment: Blog post and videos: https://bair.berkeley.edu/blog/2021/03/10/maxent-robust-rl/. arXiv admin note: text overlap with arXiv:1910.0191
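    For reference, the standard maximum-entropy RL objective discussed here augments the expected return with a policy-entropy bonus (alpha is the usual temperature coefficient; the paper's precise characterization of the robust set should be read from the paper itself):

        J_{\mathrm{MaxEnt}}(\pi) \;=\; \mathbb{E}_{\tau \sim \pi}\!\left[ \sum_{t=0}^{T} r(s_t, a_t) + \alpha\, \mathcal{H}\big(\pi(\cdot \mid s_t)\big) \right]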

    Search-Based Software Maintenance and Testing

    Get PDF
    2012 - 2013. In software engineering there are many expensive tasks that are performed during development and maintenance activities. Consequently, there has been a lot of effort to automate these tasks in order to significantly reduce the development and maintenance cost of software, since automation requires fewer human resources. One of the most widely used ways to achieve such automation is Search-Based Software Engineering (SBSE), which reformulates traditional software engineering tasks as search problems. In SBSE the set of all candidate solutions to the problem defines the search space, while a fitness function differentiates between candidate solutions, providing guidance to the optimization process. After the reformulation of software engineering tasks as optimization problems, search algorithms are used to solve them. Several search algorithms have been used in the literature, such as genetic algorithms, genetic programming, simulated annealing, hill climbing (gradient descent), greedy algorithms, particle swarm and ant colony optimization. This thesis investigates and proposes the use of search-based approaches to reduce the effort of software maintenance and software testing, with particular attention to four main activities: (i) program comprehension; (ii) defect prediction; (iii) test data generation; and (iv) test suite optimization for regression testing. For program comprehension and defect prediction, this thesis provides their first formulations as optimization problems and then proposes the use of genetic algorithms to solve them. More precisely, it investigates the peculiarities of source code compared with textual documents written in natural language and proposes the use of Genetic Algorithms (GAs) to calibrate and assemble IR techniques for different software engineering tasks. The thesis also investigates and proposes the use of Multi-Objective Genetic Algorithms (MOGAs) to build multi-objective defect prediction models that identify defect-prone software components by taking into account multiple, practical software engineering criteria. Test data generation and test suite optimization have been extensively investigated as search-based problems in the literature. However, despite the huge body of work on search algorithms applied to software testing, both (i) automatic test data generation and (ii) test suite optimization present several limitations and do not always produce satisfying results. The success of evolutionary software testing techniques in general, and GAs in particular, depends on several factors. One of these factors is the level of diversity among the individuals in the population, which directly affects the exploration ability of the search. For example, evolutionary test case generation techniques that employ GAs can be severely affected by genetic drift, i.e., a loss of diversity between solutions, which leads to premature convergence of GAs towards local optima. For these reasons, this thesis investigates the role played by diversity-preserving mechanisms in the performance of GAs and proposes a novel diversity mechanism based on Singular Value Decomposition and linear algebra. This mechanism has been integrated within standard GAs and evaluated for evolutionary test data generation; it has also been integrated within MOGAs and empirically evaluated for regression testing. [edited by author]
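    As a rough illustration of measuring population diversity with linear algebra (an assumption-laden sketch, not the thesis's actual SVD-based mechanism), one can stack the numerically encoded individuals into a matrix and inspect its singular-value spectrum: when genetic drift collapses the population onto a few directions, the spectrum concentrates on a handful of singular values.

        import numpy as np

        def svd_diversity(population):
            # population: 2-d array, one row per individual (numeric encoding of a test case).
            centered = population - population.mean(axis=0)
            s = np.linalg.svd(centered, compute_uv=False)
            s = s / (s.sum() + 1e-12)
            # Entropy of the normalized spectrum: high when variance is spread over many directions.
            return float(-np.sum(s * np.log(s + 1e-12)))

        # Toy usage: a spread-out population versus one collapsed onto a single direction.
        rng = np.random.default_rng(42)
        diverse   = rng.normal(size=(50, 8))
        collapsed = np.outer(rng.normal(size=50), rng.normal(size=8)) + 1e-3 * rng.normal(size=(50, 8))
        print(svd_diversity(diverse), svd_diversity(collapsed))    # the first value is larger

    A GA could use such a score to trigger diversity-preserving actions (e.g., extra mutation or restarts) when the value drops below a threshold.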