439 research outputs found

    Automated stock trading : a multi-agent, evolutionary approach

    Get PDF
    Includes bibliographical references (leaves 125-130). Stock market trading has garnered much interest over the past few decades as it has become easier for the general public to trade. It is certainly an avenue for wealth growth, but like all risky undertakings, it must be understood if one is to be consistently successful. There are, however, too many factors influencing it for anyone to make completely confident predictions. Automated computer trading has therefore been championed as a potential solution to this problem and is used in major brokerage houses worldwide; in fact, a third of all EU and US stock trades in 2006 were driven by computer algorithms. In this thesis we look at the challenges posed by the automatic generation of stock trading rules and by portfolio management. We explore the viability of evolutionary algorithms, including genetic algorithms and genetic programming, for this problem and introduce an agent-based learning framework for individual and social intelligence that is applicable to general stock markets. Statistical tests were applied to determine whether there was a significant difference between the evolutionary trading approach and an accepted benchmark. It was found that while the evolutionary trading agents comfortably realised higher portfolio values than the ALSI, there was insufficient evidence to suggest that the agents outperformed the ALSI in terms of portfolio performance. Additionally, it was observed that while the traders combined knowledge from the expert traders to form complex trading models, these models did not result in any statistically significant positive returns. There was, however, overwhelming evidence to suggest that the traders learned rules that were highly successful in predicting stock movement.
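
    The abstract does not specify the rule encoding or fitness function used; as a minimal hedged sketch of the evolutionary approach it describes, the following evolves moving-average crossover trading rules with a generational genetic algorithm, using final portfolio value on a synthetic price series as fitness. The encoding, operators, and parameters are illustrative assumptions, not the thesis's actual design.

```python
import random

def backtest(rule, prices, cash=1000.0):
    """Fitness: final portfolio value after trading `prices` with `rule`."""
    short_w, long_w, threshold = rule
    shares = 0.0
    for t in range(long_w, len(prices)):
        short_ma = sum(prices[t - short_w:t]) / short_w
        long_ma = sum(prices[t - long_w:t]) / long_w
        if short_ma > long_ma * (1 + threshold) and cash > 0:      # buy signal
            shares, cash = cash / prices[t], 0.0
        elif short_ma < long_ma * (1 - threshold) and shares > 0:  # sell signal
            cash, shares = shares * prices[t], 0.0
    return cash + shares * prices[-1]

def evolve(prices, pop_size=30, generations=20):
    rand_rule = lambda: (random.randint(2, 10),      # short MA window
                         random.randint(11, 50),     # long MA window
                         random.uniform(0.0, 0.05))  # crossover threshold
    pop = [rand_rule() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda r: backtest(r, prices), reverse=True)
        elite = pop[: pop_size // 2]                 # keep the fitter half
        # children recombine genes from elite parents; the threshold is mutated
        pop = elite + [(random.choice(elite)[0],
                        random.choice(elite)[1],
                        max(0.0, random.choice(elite)[2] + random.gauss(0, 0.01)))
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=lambda r: backtest(r, prices))

prices = [100 + 0.1 * i + random.gauss(0, 1) for i in range(500)]  # synthetic series
print("best rule:", evolve(prices))
```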

    Driving Sustainability through Engineering Management and Systems Engineering

    Get PDF
    Despite the ongoing impact of the COVID-19 pandemic, the challenge of realising sustainability across the triple bottom line of social, environmental, and economic development remains an urgent priority. If anything, it is now imperative that we work towards achieving the United Nations Sustainable Development Goals (SDGs). However, the global challenges are significant. Many of the societal challenges represent complex problems that require multifaceted solutions drawing on multidisciplinary approaches. Engineering management involves the management of people and projects related to technological or engineering systems—this includes project management, engineering economy, and technology management, as well as the management and leadership of teams. Systems engineering involves the design, integration, and management of complex systems over the full life cycle—this includes requirements capture, integrated system design, as well as modelling and simulation. In addition to the theoretical underpinnings of both disciplines, they also provide a range of tools and techniques that can be used to address technological and organisational complexity. The disciplines of engineering management and systems engineering are therefore ideally suited to help tackle both the challenges and the opportunities associated with realising a sustainable future for all. This book provides new insights into how engineering management and systems engineering can be utilised as part of the journey towards sustainability. The book includes discussion of a broad range of approaches to investigating sustainability, utilising quantitative, qualitative, and conceptual methodologies. The book will be of interest to researchers and students focused on the field of sustainability, as well as practitioners concerned with devising strategies for sustainable development.

    Multi-objective optimization in the life sciences

    Get PDF
    To achieve this objective, instead of trying to incorporate new algorithms directly into the AutoDock source code, a framework oriented towards solving optimization problems with metaheuristics was used: specifically jMetal, an open-source, Java-based library. Since AutoDock is implemented in C++, a C++ version of jMetal (later publicly released) was developed. In this way, both tools (AutoDock 4.2 and jMetal) were integrated to optimize the free energy of binding between chemical compound and receptor. With a broad collection of metaheuristics implemented in jMetalCpp available, a detailed study was carried out in which a set of metaheuristics was applied to optimize a single objective: minimizing the free binding energy, which is the sum of all the energy terms of the AutoDock 4.2 energy objective function. Four metaheuristics were thus applied to solve the molecular docking problem: two genetic algorithm variants, gGA (generational Genetic Algorithm) and ssGA (steady-state Genetic Algorithm), plus DE (Differential Evolution) and PSO (Particle Swarm Optimization). This phase was divided into two subphases using two different sets of instances, with HIV proteases with flexible amino-acid side chains as receptors and flexible HIV-protease inhibitors as ligands. The first set of instances was used for a parameter-tuning study of the algorithms, and the second to compare the accuracy of the ligand-receptor conformations obtained by AutoDock and by AutoDock+jMetalCpp. The next phase applied a multi-objective formulation to molecular docking problems, motivated by the interesting results of existing studies in which two objectives, the intermolecular energy and the intramolecular energy, were minimized. The performance of a set of multi-objective metaheuristics was therefore compared and analyzed by solving flexible molecular docking complexes while minimizing the inter- and intra-molecular energies. These algorithms were: NSGA-II (Non-dominated Sorting Genetic Algorithm II) and its steady-state version (ssNSGA-II), SMPSO (Speed-constrained Multi-objective Particle Swarm Optimization), GDE3 (third version of Generalized Differential Evolution), MOEA/D (Multi-Objective Evolutionary Algorithm based on Decomposition), and SMS-EMOA (S-Metric Selection Evolutionary Multi-objective Optimization Algorithm). After testing these existing multi-objective approaches, a new one was tried: using the RMSD as an objective in order to find solutions similar to a reference solution. The previous study was replicated with this different set of objectives. Finally, the algorithm that obtained the best results in the previous studies was analyzed in detail, through a study of SMPSO variants minimizing the intermolecular energy and the RMSD. This study provided some clues as to how new SMPSO-based algorithms can be adapted to improve docking results for simulations involving flexible ligands and receptors.
This thesis shows that including jMetalCpp's metaheuristic techniques in the AutoDock molecular docking tool broadens the options available to users from the biological domain when solving the molecular docking problem. Using single-objective optimization techniques other than those widely adopted in the molecular docking community can lead to higher-quality solutions; in our single-objective case study, the differential evolution algorithm obtained better results than those of AutoDock. Different multi-objective approaches to the molecular docking problem are also proposed, such as decomposing the binding energy terms or using the RMSD as an objective. Finally, it is shown that SMPSO, a multi-objective particle swarm optimization metaheuristic, is a remarkable technique for solving molecular docking problems under a multi-objective approach, obtaining even better solutions than the single-objective techniques.

Molecular docking tools have become quite efficient in drug discovery and in pharmaceutical industry research. They are used to elucidate the interaction of a small molecule (ligand) and a macromolecule (target) at the atomic level, determining how the ligand interacts with the binding site of the target protein and the implications these interactions have for a given biochemical process. In the computational development of docking tools, researchers have focused on improving the two components that determine the quality of docking software: 1) the energy objective function and 2) the optimization algorithms. The energy objective function evaluates ligand-protein conformations by calculating the binding energy, measured in kcal/mol. In this thesis AutoDock was used, as it is one of the most cited and widely used docking tools and its results are very accurate in terms of energy and RMSD (root mean square deviation). The energy function of AutoDock version 4.2 was selected because it allows more realistic simulations, including flexibility in the ligand and in the side chains of the receptor's amino acids at the binding site. Optimization algorithms were used to improve the docking results of AutoDock 4.2, which minimizes the final free binding energy, i.e. the sum of all the energy terms of the energy objective function. Since finding the optimal solution in molecular docking is a problem of great complexity, and most of the time intractable, non-exact algorithms such as metaheuristics are typically used to obtain sufficiently good solutions in a reasonable time.
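
    As a hedged illustration of the two-objective formulation described above (minimizing intermolecular and intramolecular energy as separate objectives), the sketch below keeps only the Pareto non-dominated conformations of a random population. The energy terms are stand-ins; in the thesis, AutoDock 4.2's scoring function computes the real values, and algorithms such as SMPSO or NSGA-II drive the search.

```python
import random

def dominates(a, b):
    """a dominates b if it is no worse on both objectives and better on one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(scored):
    """Keep (conformation, objectives) pairs not dominated by any other pair."""
    return [(conf, f) for conf, f in scored
            if not any(dominates(g, f) for _, g in scored if g != f)]

def mock_energies(torsions):
    """Stand-in for AutoDock's energy terms over a torsion-angle vector."""
    inter = sum((t - 0.3) ** 2 for t in torsions)   # mock intermolecular energy
    intra = sum((t + 0.2) ** 2 for t in torsions)   # mock intramolecular energy
    return (inter, intra)

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(100)]
front = pareto_front([(c, mock_energies(c)) for c in population])
print(f"{len(front)} non-dominated conformations out of {len(population)}")
```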

    Data Science: Measuring Uncertainties

    Get PDF
    With the increase in data processing and storage capacity, a large amount of data is available, but data without analysis has little value. The demand for data analysis is therefore increasing daily, reflected in a growing number of jobs and published articles. Data science has emerged as a multidisciplinary field to support data-driven activities, integrating and developing ideas, methods, and processes to extract information from data. It draws on methods built from different knowledge areas: Statistics, Computer Science, Mathematics, Physics, Information Science, and Engineering. This mixture of areas has given rise to what we call Data Science. New problems, and new solutions to them, are multiplying rapidly as large volumes of data are generated. Current and future challenges require greater care in creating solutions suited to the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication in their foundations and in how they are applied, which highlights the importance of building the foundations of Data Science. This book is dedicated to solutions for, and discussions of, measuring uncertainties in data analysis problems.
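
    As a small, self-contained example in the spirit of the book's theme of measuring uncertainties, the sketch below computes a bootstrap confidence interval for a sample mean; the data and parameters are illustrative and not taken from the book.

```python
import random

def bootstrap_ci(data, stat=lambda xs: sum(xs) / len(xs),
                 n_resamples=2000, alpha=0.05):
    """Percentile bootstrap CI: resample with replacement, take the
    alpha/2 and 1-alpha/2 quantiles of the resampled statistic."""
    estimates = sorted(
        stat([random.choice(data) for _ in data]) for _ in range(n_resamples)
    )
    lo = estimates[int(n_resamples * alpha / 2)]
    hi = estimates[int(n_resamples * (1 - alpha / 2)) - 1]
    return lo, hi

sample = [random.gauss(10, 2) for _ in range(50)]  # illustrative data
print("95% CI for the mean:", bootstrap_ci(sample))
```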

    How messenger characteristics influence expertise learning and information-seeking choices

    Get PDF
    When trying to form accurate beliefs and make good choices, people often turn to one another for information and advice. But deciding whom to listen to can be a challenging task. While people may be motivated to receive information from accurate sources, in many circumstances it can be difficult to estimate others' task-relevant expertise. Moreover, evidence suggests that perceptions of others' attributes are influenced by irrelevant factors, such as facial appearance and one's own beliefs about the world. In this thesis, I present six studies that investigate whether messenger characteristics that are unrelated to the domain in question interfere with the ability to learn about others' expertise and, consequently, lead people to make suboptimal social learning decisions. Studies one and two explored whether (dis)similarity in political views affects perceptions of others' expertise in a non-political shape categorisation task. The findings suggest that people are biased to believe that messengers who share their political opinions are better at tasks that have nothing to do with politics than those who do not, even when they have all the information needed to accurately assess expertise. Consequently, they are more likely to seek information from, and are more influenced by, politically similar than dissimilar sources. Studies three and four aimed to formalise this learning bias using computational models and to explore whether it generalises to a messenger characteristic other than political similarity. Surprisingly, in contrast to the results of studies one and two, in these studies there was no effect of observed generosity or political similarity on expertise learning, information-seeking choices, or belief updating. Studies five and six were then conducted to reconcile these conflicting results and investigate the boundary conditions of the learning bias observed in studies one and two. Here, we found that, under the right conditions, non-politics-based similarities can influence expertise learning and whom people choose to hear from; that asking people to predict how others will answer questions enhances learning from observed outcomes; and that it is unlikely that inattentiveness explains why we observed null effects in studies three and four.
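
    The abstract does not state which computational models were fitted in studies three and four; as a hedged sketch of the kind of expertise-learning model at issue, the following uses a delta-rule update of a messenger's estimated accuracy, with a similarity-inflated prior standing in for the political-similarity bias. The update rule, learning rate, and bias term are illustrative assumptions.

```python
import random

def update_expertise(estimate, correct, learning_rate=0.1):
    """Delta-rule update of estimated accuracy toward the observed outcome."""
    return estimate + learning_rate * ((1.0 if correct else 0.0) - estimate)

def initial_estimate(similar, base=0.5, similarity_bias=0.15):
    """Prior belief about a messenger's accuracy, inflated for similar others."""
    return base + (similarity_bias if similar else 0.0)

random.seed(1)
for similar in (True, False):
    est = initial_estimate(similar)
    for _ in range(20):  # both messengers are, in truth, 70% accurate
        est = update_expertise(est, correct=random.random() < 0.7)
    print(f"similar={similar}: learned accuracy estimate = {est:.2f}")
```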

    Model-based hyperparameter optimization

    Full text link
    The primary goal of this work is to propose a methodology for discovering hyperparameters. Well-tuned, handcrafted hyperparameters help systems converge; poorly chosen ones leave practitioners in limbo, unsure whether problems stem from the implementation or from an improper choice of hyperparameters and system configuration. We specifically analyze the choice of learning rate in stochastic gradient descent (SGD), a popular algorithm. As a secondary goal, we attempt the discovery of fixed points by smoothing the loss landscape, exploiting assumptions about its distribution to improve the SGD update rule. Smoothing the loss landscape has been shown to make convergence possible in large-scale systems and difficult black-box optimization problems. Here, we use stochastic value gradients (SVG) to smooth the loss landscape by learning a surrogate model, and then backpropagate through this model to discover fixed points of the real task SGD is trying to solve. Additionally, we construct a gym environment for testing model-free algorithms, such as Proximal Policy Optimization (PPO), as hyperparameter optimizers for SGD. For tasks, we focus on a toy problem and analyze the convergence of SGD on MNIST using model-free and model-based reinforcement learning methods for control. The model is learned from the parameters of the true optimizer and is used specifically for learning rates rather than for prediction. We run experiments in both an online and an offline setting. In the online setting, we learn a surrogate model alongside the true optimizer, with hyperparameters tuned in real time for the true optimizer. In the offline setting, we show that the model-based methodology has more potential than the model-free configuration, owing to the surrogate model that smooths out the loss landscape and yields more helpful gradients during backpropagation.
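
    As a hedged sketch of the setup described above, the following defines a gym-style environment whose action is the SGD learning rate on a toy quadratic problem and whose reward is the decrease in loss; a policy such as PPO would choose the action at each step. The interface mirrors the common reset/step pattern, and all specifics are illustrative assumptions rather than the thesis's actual environment.

```python
import random

class LRTuningEnv:
    """Toy environment: the agent's action is the learning rate for one SGD step."""

    def __init__(self, dim=10, horizon=50):
        self.dim, self.horizon = dim, horizon

    def reset(self):
        self.w = [random.gauss(0, 1) for _ in range(self.dim)]
        self.t = 0
        return self._loss()

    def _loss(self):
        return sum(wi * wi for wi in self.w)  # toy loss: ||w||^2

    def step(self, lr):
        before = self._loss()
        grad = [2 * wi + random.gauss(0, 0.1) for wi in self.w]  # noisy gradient
        self.w = [wi - lr * gi for wi, gi in zip(self.w, grad)]  # SGD update
        self.t += 1
        reward = before - self._loss()        # reward = decrease in loss
        return self._loss(), reward, self.t >= self.horizon

env = LRTuningEnv()
obs, done, total = env.reset(), False, 0.0
while not done:
    obs, reward, done = env.step(lr=0.05)     # a PPO policy would choose lr here
    total += reward
print(f"final loss {obs:.4f}, cumulative reward {total:.4f}")
```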

    Towards Player-Driven Procedural Content Generation

    Get PDF

    Datacenter management for on-site intermittent and uncertain renewable energy sources

    Get PDF
    In recent years, information and communication technologies (ICT) have become a major energy consumer, with the associated harmful ecological consequences. Indeed, the emergence of Cloud computing and massive Internet companies has increased the importance and number of datacenters around the world. In order to mitigate economic and ecological costs, powering datacenters with renewable energy sources (RES) has begun to appear as a sustainable solution. Some of the commonly used RES, such as solar and wind energy, depend directly on weather conditions; hence they are both intermittent and partly uncertain. Batteries or other energy storage devices (ESD) are often considered to relieve these issues, but they entail additional energy losses and are too costly to be used alone without further integration. The power consumption of a datacenter is closely tied to its computing resource usage, which in turn depends on its workload and on the algorithms that schedule it. To use RES as efficiently as possible while preserving the quality of service of a datacenter, coordinated management of computing resources, electrical sources, and storage is required. A wide variety of datacenters exists, each with different hardware, workload, and purpose. Similarly, each electrical infrastructure is modeled and managed uniquely, depending on the kind of RES used, the ESD technologies, and the operating objectives (cost or environmental impact). Some existing works successfully address this problem by considering a specific pair of electrical and computing models; because of this combined diversity, however, the existing approaches cannot be extrapolated to other infrastructures. This thesis explores novel ways to deal with this coordination problem. A first contribution revisits the batch task scheduling problem by introducing an abstraction of the power sources. A scheduling algorithm is proposed that takes the preferences of the electrical sources into account while remaining independent of the type of sources and of the goal of the electrical infrastructure (cost, environmental impact, or a mix of both). A second contribution addresses the joint power planning coordination problem in a totally infrastructure-agnostic way. The datacenter's computing resources and workload management are treated as a black box implementing a scheduling-under-variable-power-constraint algorithm. The same goes for the electrical sources and storage management system, which acts as a source commitment optimization algorithm. A cooperative multi-objective power planning optimization, based on a multi-objective evolutionary algorithm (MOEA), interacts with the two black boxes to find the best trade-offs between electrical and computing internal objectives. Finally, a third contribution focuses on RES production uncertainties in a more specific infrastructure. Based on a Markov Decision Process (MDP) formulation, the structure of the underlying decision problem is studied. For several variants of the problem, tractable methods are proposed to find optimal policies or bounded approximations of them.
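
    As a hedged sketch of the "scheduling under variable power constraint" black box described in the second contribution, the following greedily places batch tasks into the earliest time slots whose forecast renewable power envelope can host them; the envelope, task model, and greedy policy are illustrative assumptions, not the thesis's algorithm.

```python
def schedule(tasks, power_envelope):
    """tasks: list of (power, duration); power_envelope: available watts per slot."""
    free = list(power_envelope)           # remaining power in each time slot
    placement = {}                        # task index -> start slot
    order = sorted(range(len(tasks)),     # place the most energy-hungry tasks first
                   key=lambda i: tasks[i][0] * tasks[i][1], reverse=True)
    for i in order:
        power, duration = tasks[i]
        for start in range(len(free) - duration + 1):
            if all(free[start + k] >= power for k in range(duration)):
                for k in range(duration):
                    free[start + k] -= power
                placement[i] = start
                break
    return placement                      # tasks absent from the dict are deferred

envelope = [120, 200, 350, 400, 380, 220, 100]   # e.g. a forecast solar curve (W)
tasks = [(150, 2), (100, 3), (80, 1), (200, 2)]  # (watts, slots)
print(schedule(tasks, envelope))                 # {3: 2, 0: 3, 1: 0, 2: 1}
```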