8 research outputs found

    Wizualizacja zjawisk topnienia i sublimacji [Visualization of melting and sublimation phenomena]

    Get PDF
    This monograph concerns the visualization of melting and sublimation, the phase transitions from a solid to a liquid and to a gas, respectively. The boundary between the two phases is modeled as an interfacial surface, so melting and sublimation can be treated as the movement of that interfacial surface accompanied by heat exchange. Visualizing these phenomena requires covering several aspects of the task: the representation of the graphical data, the algorithms that process those data and their optimizations, the problems of real-time rendering, and methods for verifying the results. These issues are brought together in this book.
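    The view of melting as a moving interfacial surface coupled to heat exchange is the classical Stefan problem. Purely as an illustration (this sketch is not taken from the monograph; the dimensionless scaling, Stefan number, and grid are assumptions), a one-dimensional melting front can be advanced with an explicit finite-difference scheme:

```python
# Illustrative 1D one-phase Stefan (melting) problem, dimensionless form:
#   theta_t = theta_xx          in the liquid region 0 < x < s(t)
#   theta(0, t) = 1             (hot wall), theta(s(t), t) = 0 (melting front)
#   ds/dt = -St * theta_x(s)    (Stefan condition: front speed ~ heat flux)
# Explicit finite differences on a fixed grid; nodes ahead of the front are solid.

import numpy as np

St = 1.0                 # Stefan number (assumed)
nx, L = 100, 1.0         # grid resolution and domain length (assumed)
dx = L / nx
dt = 0.4 * dx**2         # respects the explicit diffusion stability limit

theta = np.zeros(nx + 1)
theta[0] = 1.0           # hot wall drives the melting
s, t = 2 * dx, 0.0       # initial front position and time

while s < 0.9 * L:
    i_s = int(s / dx)                                  # last liquid node
    lap = (theta[2:i_s + 1] - 2 * theta[1:i_s] + theta[:i_s - 1]) / dx**2
    theta[1:i_s] += dt * lap                           # diffuse heat in liquid
    grad = (0.0 - theta[i_s - 1]) / dx                 # one-sided flux at front
    s += dt * (-St * grad)                             # Stefan condition
    theta[int(s / dx) + 1:] = 0.0                      # solid stays at T_melt
    t += dt

print(f"melting front reached x = {s:.3f} at t = {t:.3f}")
```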

    Trading the stock market : hybrid financial analyses and evolutionary computation

    Get PDF
    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Arquitectura de Computadores y Automática, defended on 02-07-2014. This thesis concerns the implementation of a complex and pioneering automated trading system that uses three critical analyses to determine time decisions and portfolios for investments. To this end, the work delves into automated trading systems and studies time series of historical prices of companies listed on stock markets. The time series are studied with a novel methodology based on clustering by software compressors. This approach allows a theoretical study of price formation that shows divergence between market prices and prices modelled by random walks, thus supporting the implementation of predictive models based on the analysis of historical patterns. Furthermore, the methodology provides a tool to study the behaviour of time series of historical prices from different industrial sectors by seeking patterns among companies in the same industry. The results show clusters of companies that point to shared market trends among companies with similar activities, suggesting that including a macroeconomic, industry-level analysis can benefit investment decisions. Having tested the feasibility of prediction systems based on time series of historical prices and confirmed the existence of macroeconomic trends in the industries, we propose the implementation of a hybrid automated trading system through several stages, which iteratively describe and test the components of the final system. In the early stages, we implement an automated trading system based on technical and fundamental analysis of companies; it yields high returns and reduces losses. The implementation is guided by a modified version of a genetic algorithm with novel genetic operators that avoid premature convergence and improve the final results. Using the same automated trading system, we propose novel optimization techniques for one of the characteristic problems of these systems: execution time. We parallelise the system with two parallel computing techniques, first using distributed computation and second implementing a version for graphics processors. Both architectures achieve the high speed-ups required by systems analysing huge amounts of financial data, reaching 50x and 256x respectively. Subsequent stages replace the optimization methodology, moving from genetic algorithms to grammatical evolution, which allows us to compare the two evolutionary strategies and to implement more advanced features such as more complex rules or the self-generation of new technical indicators. In this context, we describe several versions of the automated trading system guided by different fitness functions, including an innovative multi-objective version, and test them with recent financial data to analyse the advantages of each fitness function. Finally, we describe and test the methodology of an automated trading system based on a double layer of grammatical evolution that combines technical, fundamental, and macroeconomic analysis in a hybrid top-down analysis. The results show average returns of 30% with a low number of losing trades.
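    The "clustering by software compressors" mentioned above is typically built on the normalized compression distance, NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)), where C is the compressed length. As a hedged sketch (not the thesis code; zlib, the up/down/flat encoding, and the toy series are assumptions for illustration), pairwise NCDs between discretized price series can be computed like this:

```python
# Hedged sketch: compare discretized price series with the normalized
# compression distance NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
# Not the thesis implementation; zlib and the toy data are assumptions.

import zlib
from itertools import combinations

def clen(data: bytes) -> int:
    return len(zlib.compress(data, 9))      # C(.): compressed length

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = clen(x), clen(y)
    return (clen(x + y) - min(cx, cy)) / max(cx, cy)

def discretize(prices):
    # Encode daily moves as up/down/flat symbols (a common simplification).
    return b"".join(b"u" if b > a else b"d" if b < a else b"f"
                    for a, b in zip(prices, prices[1:]))

series = {                                  # toy stand-ins for historical prices
    "bank_a": [10, 11, 12, 11, 12, 13, 14, 13, 14, 15],
    "bank_b": [20, 21, 22, 21, 22, 23, 24, 23, 24, 25],
    "mine_a": [50, 48, 47, 48, 46, 45, 46, 44, 43, 44],
}
enc = {name: discretize(p) for name, p in series.items()}
for a, b in combinations(enc, 2):           # real use needs far longer series
    print(f"NCD({a}, {b}) = {ncd(enc[a], enc[b]):.3f}")
```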

    An efficient automated parameter tuning framework for spiking neural networks

    Get PDF
    As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EAs) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation showed a 65× speedup of the GPU implementation over the CPU implementation, or 0.35 h per generation for the GPU vs. 23.5 h per generation for the CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
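    At its core, such a tuning framework is an evolutionary loop over a parameter vector whose fitness comes from running the network and scoring its responses. The sketch below is a generic, hedged rendition of that loop, not the authors' framework: a cheap surrogate error function stands in for the expensive GPU-accelerated SNN simulation, and all population settings are assumptions.

```python
# Generic evolutionary parameter-tuning loop (hedged illustration, not the
# authors' framework). In the paper, fitness comes from running a
# GPU-accelerated SNN and comparing its responses to V1-like tuning curves;
# here a cheap surrogate error function stands in for that simulation.

import numpy as np

rng = np.random.default_rng(0)
n_params, pop_size, n_gen = 8, 32, 100       # assumed settings
lo, hi = 0.0, 1.0                            # assumed parameter bounds

target = rng.uniform(lo, hi, n_params)       # stands in for desired behaviour

def fitness(params):
    # Surrogate: negative squared error. The real version runs the SNN.
    return -np.sum((params - target) ** 2)

pop = rng.uniform(lo, hi, (pop_size, n_params))
for gen in range(n_gen):
    scores = np.array([fitness(p) for p in pop])
    elite = pop[np.argsort(scores)[-pop_size // 4:]]   # keep the best quarter
    parents = elite[rng.integers(0, len(elite), pop_size)]
    pop = np.clip(parents + rng.normal(0.0, 0.05, parents.shape), lo, hi)

best = max(pop, key=fitness)
print("remaining parameter error:", -fitness(best))
```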

    Preventing premature convergence and proving the optimality in evolutionary algorithms

    Get PDF
    http://ea2013.inria.fr//proceedings.pdf
    Evolutionary Algorithms (EA) usually carry out an efficient exploration of the search space, but often get trapped in local minima and do not prove the optimality of the solution. Interval-based techniques, on the other hand, yield a numerical proof of optimality of the solution. However, they may fail to converge within a reasonable time due to their inability to quickly compute a good approximation of the global minimum and their exponential complexity. The contribution of this paper is a hybrid algorithm called Charibde in which a particular EA, Differential Evolution, cooperates with a Branch and Bound algorithm endowed with interval propagation techniques. It prevents premature convergence toward local optima and outperforms both deterministic and stochastic existing approaches. We demonstrate its efficiency on a benchmark of highly multimodal problems, for which we provide previously unknown global minima and certification of optimality.
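    The cooperation described above is two-way: Differential Evolution keeps improving an upper bound on the global minimum, while the Branch and Bound discards any sub-box whose certified lower bound exceeds that upper bound. The toy sketch below reproduces this loop in one dimension, substituting a Lipschitz-constant lower bound for the interval propagation Charibde actually uses; the objective, the constant K, and the tolerance are assumptions for illustration.

```python
# Toy rendition of the Charibde cooperation loop (illustration only):
# Differential Evolution keeps tightening an upper bound on the minimum,
# and a best-first Branch and Bound prunes boxes against it. A Lipschitz
# lower bound stands in for the interval propagation used in the paper.

import heapq, math, random

def f(x):
    return x * x + 10 * math.sin(5 * x)       # multimodal objective (assumed)

K, TOL = 60.0, 1e-3                           # |f'| <= K on [-3, 3]; gap tol

def lower_bound(a, b):                        # valid since f is K-Lipschitz
    return f((a + b) / 2) - K * (b - a) / 2

random.seed(1)
pop = [random.uniform(-3, 3) for _ in range(20)]
upper = min(f(x) for x in pop)                # best-known value so far

heap = [(lower_bound(-3.0, 3.0), -3.0, 3.0)]
while heap:
    # One simplified DE generation improves the upper bound.
    for i, x in enumerate(pop):
        r1, r2, r3 = random.sample(pop, 3)
        trial = min(3.0, max(-3.0, r1 + 0.7 * (r2 - r3)))
        if f(trial) < f(x):
            pop[i] = trial
    upper = min(upper, min(f(x) for x in pop))

    # One B&B step: split the box with the smallest lower bound.
    lb, a, b = heapq.heappop(heap)
    if lb > upper - TOL:                      # no box can beat `upper`: proof
        print(f"minimum {upper:.4f} certified within {TOL}")
        break
    m = (a + b) / 2
    upper = min(upper, f(m))                  # midpoints also tighten the bound
    for lo, hi in ((a, m), (m, b)):
        if lower_bound(lo, hi) <= upper - TOL:
            heapq.heappush(heap, (lower_bound(lo, hi), lo, hi))
else:
    print(f"minimum {upper:.4f} certified (all boxes pruned)")
```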

    Evolutionary algorithm-based analysis of gravitational microlensing lightcurves

    Full text link
    A new algorithm developed to perform autonomous fitting of gravitational microlensing lightcurves is presented. The new algorithm is conceptually simple, versatile and robust, and parallelises trivially; it combines features of extant evolutionary algorithms with some novel ones, and fares well on the problem of fitting binary-lens microlensing lightcurves, as well as on a number of other difficult optimisation problems. Success rates in excess of 90% are achieved when fitting synthetic though noisy binary-lens lightcurves, allowing no more than 20 minutes per fit on a desktop computer; this success rate is shown to compare very favourably with that of both a conventional (iterated simplex) algorithm and a more state-of-the-art, artificial neural network-based approach. As such, this work provides proof of concept for the use of an evolutionary algorithm as the basis for real-time, autonomous modelling of microlensing events. Further work is required to investigate how the algorithm will fare when faced with more complex and realistic microlensing modelling problems; it is, however, argued here that the use of parallel computing platforms, such as inexpensive graphics processing units, should allow fitting times to be constrained to under an hour, even when dealing with complicated microlensing models. In any event, it is hoped that this work might stimulate some interest in evolutionary algorithms, and that the algorithm described here might prove useful for solving microlensing and/or more general model-fitting problems.
    Comment: 14 pages, 3 figures; accepted for publication in MNRAS
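    For the single-lens (Paczynski) case, the magnification is A(u) = (u^2 + 2) / (u * sqrt(u^2 + 4)) with u(t) = sqrt(u0^2 + ((t - t0) / tE)^2), so fitting reduces to searching the (t0, u0, tE) space against a noisy lightcurve. The paper's algorithm is bespoke and targets the much harder binary-lens case; the hedged sketch below only illustrates the general idea on the single-lens model, using SciPy's stock differential evolution rather than the authors' method.

```python
# Fit a single-lens (Paczynski) microlensing lightcurve with an off-the-shelf
# evolutionary optimizer. Illustration only: the paper presents its own
# algorithm and tackles binary lenses; model and data here are synthetic.

import numpy as np
from scipy.optimize import differential_evolution

def magnification(t, t0, u0, tE):
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)    # lens-source separation
    return (u**2 + 2) / (u * np.sqrt(u**2 + 4))  # Paczynski magnification

rng = np.random.default_rng(42)
t = np.linspace(0.0, 100.0, 400)                 # observation epochs (days)
true_params = (55.0, 0.2, 12.0)                  # t0, u0, tE (assumed)
obs = magnification(t, *true_params) + rng.normal(0.0, 0.05, t.size)

def sse(params):                                 # sum of squared residuals
    return np.sum((magnification(t, *params) - obs) ** 2)

result = differential_evolution(sse, bounds=[(0, 100), (0.01, 1.0), (1, 50)],
                                seed=0, tol=1e-8)
print("fitted (t0, u0, tE):", np.round(result.x, 3))
```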

    Energy efficient heterogeneous virtualized data centers

    Get PDF
    This thesis is about increasing the energy efficiency of data centers through management software. It has been estimated that data centers already consume 1-2% of the globally provided electrical energy, with a strongly rising trend. Furthermore, a typical server incurs higher electricity costs over a 3-year lifespan than its purchase cost. Hence, increasing the energy efficiency of all components found in a data center is of high ecological as well as economic importance. The focus of this thesis is increasing the efficiency of the servers themselves. The vast majority of servers in data centers are underutilized for a significant amount of time; operating regions of 10-20% utilization are common, yet these servers still consume large amounts of energy. Considerable research has been done in the area of Green Data Centers during the last years, e.g., regarding cooling efficiency. Nevertheless, many issues remain open, such as operating a virtualized, heterogeneous business infrastructure with the minimum possible power consumption, under the constraint that Quality of Service, and in consequence revenue, are not severely decreased. The majority of existing work deals with homogeneous cluster infrastructures, whose conditions are hardly comparable to business infrastructures; there, reduced electricity costs must in general not be cancelled out by lost revenue. In particular, an automatic trade-off between competing cost categories, energy cost being just one of them, is insufficiently studied. This thesis investigates and evaluates mathematical models and algorithms for increasing the energy efficiency of servers in a data center. The amount of online, power-consuming hardware should at all times be close to the amount of resources actually required by the current workload. If the workload intensity decreases, the infrastructure is consolidated and unneeded servers are shut down. If the intensity rises, additional servers are woken up and the infrastructure is scaled. Ideally, this happens proactively, based on forecasts of the workload development. In both cases the workload, encapsulated in VMs, is moved to other servers via live migration. The question of which VM should run on which server, so that total power consumption is minimized while certain side constraints (such as SLAs) are not violated, is a combinatorial optimization problem in several variables. It has to be solved frequently, as the VMs' resource demands are usually dynamic. Further, servers are not homogeneous regarding their performance and power consumption. Due to the computational complexity, exact solutions are practically intractable. A greedy heuristic stemming from the related problem class of vector packing is adapted, and a meta-heuristic genetic algorithm is reformulated for the problem. A configurable cost model is formulated in order to trade off energy cost savings against QoS violations, and load balancing serves as the baseline for comparison. Additionally, the forecasting methods SARIMA and Holt-Winters are evaluated. Further, models able to predict the negative impact of live migration on QoS are developed, and approaches to decrease this impact are investigated. Finally, an examination is carried out regarding the possible consequences of collecting and storing the energy consumption data of servers for security and privacy.
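    The VM-to-server mapping described above belongs to the vector packing family: each VM demands a vector of resources (CPU, memory), each server offers a capacity vector, and the goal is to keep as few servers powered on as possible. As a hedged sketch of a heuristic from that problem class (the textbook first-fit-decreasing flavour, not necessarily the thesis variant; capacities and demands are invented):

```python
# First-fit-decreasing vector packing for VM consolidation (hedged sketch:
# the textbook heuristic, not necessarily the thesis variant). Each VM and
# server carries a (cpu, mem) vector; packing VMs onto few servers lets the
# remaining servers be powered down.

CAP = (16.0, 64.0)                    # per-server (cpu cores, mem GB), assumed

vms = [                               # (name, cpu, mem) demands, assumed
    ("web1", 4, 8), ("web2", 4, 8), ("db1", 8, 32),
    ("batch", 6, 16), ("cache", 2, 24), ("tiny", 1, 2),
]

# Sort by the dominant normalized dimension, largest first ("decreasing").
vms.sort(key=lambda v: max(v[1] / CAP[0], v[2] / CAP[1]), reverse=True)

servers = []                          # each: {"used": [cpu, mem], "vms": [...]}
for name, cpu, mem in vms:
    for srv in servers:               # "first fit": first server with room
        if srv["used"][0] + cpu <= CAP[0] and srv["used"][1] + mem <= CAP[1]:
            srv["used"][0] += cpu
            srv["used"][1] += mem
            srv["vms"].append(name)
            break
    else:                             # nothing fits: power on another server
        servers.append({"used": [cpu, mem], "vms": [name]})

for i, srv in enumerate(servers):
    print(f"server {i}: {srv['vms']}, used {srv['used']}")
print(f"{len(servers)} servers stay on; the rest can be shut down")
```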

    Parallel Genetic Algorithm on the CUDA Architecture

    Full text link
    This paper deals with the mapping of the parallel island-based genetic algorithm with unidirectional ring migrations to the nVidia CUDA software model. The proposed mapping is tested using Rosenbrock's, Griewank's and Michalewicz's benchmark functions. The obtained results indicate that our approach leads to speedups of up to seven thousand times compared to one CPU thread while maintaining reasonable result quality. This clearly shows that GPUs have potential for the acceleration of GAs and allow much more complex tasks to be solved.
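    The island model assigns one sub-population to each CUDA thread block and periodically migrates the best individuals around a unidirectional ring. The sketch below is a plain-Python rendering of that island/ring logic only, with a OneMax-style fitness and assumed parameters; the paper's speedups come from the CUDA mapping, which this deliberately omits.

```python
# Island-model GA with unidirectional ring migration (plain-Python sketch of
# the logic the paper maps onto CUDA thread blocks; fitness and parameters
# are assumptions, and the GPU mapping itself is deliberately omitted).

import random

N_ISLANDS, POP, GENES, MIGRATE_EVERY = 4, 30, 40, 10

def fitness(ind):                     # OneMax stand-in for the benchmarks
    return sum(ind)

def evolve(pop):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[: POP // 2]
    children = []
    while len(survivors) + len(children) < POP:
        a, b = random.sample(survivors, 2)
        cut = random.randrange(1, GENES)
        child = a[:cut] + b[cut:]                     # one-point crossover
        i = random.randrange(GENES)
        child[i] ^= random.random() < 0.1             # occasional bit flip
        children.append(child)
    return survivors + children

random.seed(0)
islands = [[[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
           for _ in range(N_ISLANDS)]

for gen in range(1, 101):
    islands = [evolve(p) for p in islands]
    if gen % MIGRATE_EVERY == 0:
        # Unidirectional ring: island i receives the best of island i-1,
        # which replaces its current worst individual.
        best = [max(p, key=fitness) for p in islands]
        for i, p in enumerate(islands):
            p[p.index(min(p, key=fitness))] = best[i - 1][:]

print("best fitness:", max(fitness(ind) for p in islands for ind in p))
```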

    A GPU-based iterated tabu search for solving the quadratic 3-dimensional assignment problem

    Full text link