Ranking Methods for Global Optimization of Molecular Structures
This work presents heuristics for searching large sets of molecular structures for low-energy, stable systems. The goal is to find the globally optimal structures in less time or with fewer computational resources. The strategies intermittently evaluate and rank structures during molecular-dynamics optimizations, culling likely weaker solutions from evaluation early so that better solutions receive more simulation time. Although some imprecision is introduced by not allowing all structures to fully optimize before ranking, the strategies identify metrics that can make these searches more efficient when computational resources are limited.
The Application of Hybridized Genetic Algorithms to the Protein Folding Problem
The protein folding problem consists of attempting to determine the native conformation of a protein given its primary structure. This study examines various methods of hybridizing a genetic algorithm implementation in order to minimize an energy function and predict the conformation (structure) of Met-enkephalin. Genetic algorithms are semi-optimal algorithms designed to explore and exploit a search space. The genetic algorithm uses selection, recombination, and mutation operators on populations of strings which represent possible solutions to the given problem. One step in solving the protein folding problem is the design of efficient energy minimization techniques. A conjugate gradient minimization technique is described and tested with different replacement frequencies. Baldwinian, Lamarckian, and probabilistic Lamarckian evolution are all tested. Another extension of simple genetic algorithms can be accomplished with niching. Niching works by de-emphasizing solutions based on their proximity to other solutions in the space. Several variations of niching are tested. Experiments are conducted to determine the benefits of each hybridization technique versus the others and versus the genetic algorithm by itself. The experiments are geared toward finding the lowest possible energy and hence the minimum-energy conformation of Met-enkephalin. In the experiments, probabilistic Lamarckian strategies were successful in achieving energies below that of the published minimum in QUANTA.
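To make the Lamarckian/Baldwinian distinction concrete, here is a minimal, purely illustrative Python sketch: a toy quadratic "energy" stands in for the real force field, and crude coordinate descent stands in for conjugate-gradient minimization. None of the function names or parameters come from the study above; a Lamarckian step writes the locally refined genotype back into the population, while a Baldwinian step uses only the refined fitness.

```python
import random

def energy(x):
    # Toy stand-in for a molecular energy function (not the QUANTA model).
    return sum(xi * xi for xi in x)

def local_refine(x, step=0.1, iters=20):
    # Crude coordinate descent standing in for conjugate-gradient minimization.
    x = list(x)
    for _ in range(iters):
        for i in range(len(x)):
            for d in (-step, step):
                trial = list(x)
                trial[i] += d
                if energy(trial) < energy(x):
                    x = trial
    return x

def hybrid_ga(dim=4, pop_size=20, gens=30, mode="lamarckian", seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = []
        for ind in pop:
            refined = local_refine(ind)
            if mode == "lamarckian":   # refined genotype replaces the original
                scored.append((energy(refined), refined))
            else:                      # Baldwinian: refined fitness, original genes
                scored.append((energy(refined), ind))
        scored.sort(key=lambda t: t[0])
        parents = [ind for _, ind in scored[: pop_size // 2]]
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, dim)            # one-point recombination
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                 # mutation
                child[rng.randrange(dim)] += rng.gauss(0, 0.5)
            pop.append(child)
    return min(energy(local_refine(ind)) for ind in pop)
```

With the Lamarckian write-back, refinements accumulate across generations, which is why the study can also test probabilistic variants that apply the write-back only some of the time.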
GA-Par: Dependable Microservice Orchestration Framework for Geo-Distributed Clouds
Recent advances in composing Cloud applications have been driven by deployments of inter-networking heterogeneous microservices across multiple Cloud datacenters. System dependability has been of the utmost importance and criticality to both service vendors and customers. Security, a measurable attribute, is increasingly regarded as the representative example of dependability. With the growing number and dynamicity of microservice types, applications are exposed to aggravated internal security threats and external environmental uncertainties. Existing work mainly focuses on the QoS-aware composition of native VM-based Cloud application components, while ignoring uncertainties and security risks among interactive and interdependent container-based microservices. Moreover, orchestrating a set of microservices across datacenters under those constraints remains computationally intractable. This paper describes GA-Par, a new dependable microservice orchestration framework that effectively selects and deploys microservices whilst reducing the discrepancy between user security requirements and actual service provision. We adopt a hybrid (both whitebox- and blackbox-based) approach to measure the satisfaction of security requirements and the environmental impact of network QoS on system dependability. Due to the exponential growth of the solution space, we develop a parallel Genetic Algorithm framework based on Spark to accelerate the computation of optimal or near-optimal solutions. Large-scale real-world datasets are utilized to validate our models and orchestration approach. Experiments show that our solution outperforms a greedy-based security-aware method by 42.34 percent. GA-Par is roughly 4× faster than a Hadoop-based genetic algorithm solver, and its effectiveness is consistently maintained under different application scales.
Quantile regression methods in finance: the CAViaR case
The thesis investigates how quantile regression methods can be applied to measures of risk.
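As background on the core technique (a sketch of my own, not the thesis's CAViaR model): the τ-quantile of a sample minimizes the quantile-regression "pinball" loss, so even the simplest constant model fitted by this loss yields an empirical quantile, and at τ = 0.05 its negative is a crude Value-at-Risk estimate.

```python
def pinball_loss(theta, ys, tau):
    """Quantile-regression "check" loss; over a sample it is minimized
    when theta equals the tau-quantile of ys."""
    return sum(max(tau * (y - theta), (tau - 1.0) * (y - theta)) for y in ys)

def quantile_by_grid(ys, tau, grid_steps=400):
    """Fit the simplest possible quantile model (a constant) by grid search.
    With tau = 0.05 on a return series, -result is a crude Value-at-Risk."""
    lo, hi = min(ys), max(ys)
    grid = [lo + (hi - lo) * k / grid_steps for k in range(grid_steps + 1)]
    return min(grid, key=lambda th: pinball_loss(th, ys, tau))
```

CAViaR-style models replace the constant with an autoregressive specification of the quantile itself, but the loss being minimized is the same.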
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. Interest in it keeps increasing because a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time ("efficient") algorithms, while most of them are NP-hard, i.e. it has not been proved that they can be solved in polynomial time. In practice, this means that one cannot guarantee that an exact solution will be found and has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find "quickly" (in reasonable run-times), with "high" probability, provably "good" solutions (with low error from the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which basically tries to combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which mainly determine the behavior of a metaheuristic, will be pointed out. The report concludes by exploring the importance of hybridization and integration methods.
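To make the intensification/diversification trade-off concrete, here is a generic simulated-annealing loop, a minimal sketch of one classic metaheuristic (the names and parameters are illustrative, not taken from the report): a high temperature diversifies by frequently accepting worse moves, and geometric cooling progressively intensifies the search around good solutions.

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=5.0, cooling=0.95, iters=2000, seed=0):
    """Generic single-solution metaheuristic loop. Early on (high temperature t),
    worse moves are often accepted -- diversification; as t cools, the search
    accepts almost only improvements -- intensification."""
    rng = random.Random(seed)
    x, t = x0, t0
    best, best_cost = x0, cost(x0)
    for _ in range(iters):
        y = neighbor(x, rng)
        delta = cost(y) - cost(x)
        # Metropolis acceptance rule: always take improvements, sometimes worse moves.
        if delta <= 0 or rng.random() < math.exp(-delta / t):
            x = y
            c = cost(x)
            if c < best_cost:
                best, best_cost = x, c
        t = max(t * cooling, 1e-9)  # geometric cooling schedule with a floor
    return best, best_cost

# Toy usage: minimize a 1-D quadratic starting far from the optimum.
best, best_cost = simulated_annealing(
    cost=lambda x: (x - 3.0) ** 2,
    neighbor=lambda x, rng: x + rng.gauss(0.0, 0.5),
    x0=-10.0,
)
```

Other metaheuristics (tabu search, genetic algorithms, ant colony optimization) balance the same two forces with different mechanisms, which is the comparison the report draws.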
Evolutionary computation for trading systems
Evolutionary computations, also called evolutionary algorithms, consist of
several heuristics, which are able to solve optimization tasks by imitating
some aspects of natural evolution. They may use different levels of abstraction, but they are always working on populations of possible solutions for a
given task. The basic idea is that if only those individuals of a population
which meet a certain selection criteria reproduce, while the remaining individuals die, the population will converge to those individuals that best meet
the selection criteria. If imperfect reproduction is added the population can
begin to explore the search space and will move to individuals that have an
increased selection probability and that hand down this property to their
descendants. These population dynamics follow the basic rule of the Darwinian evolution theory, which can be described in short as the “survival of the fittest”.
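The population dynamics described above can be sketched, purely for illustration, as a minimal Python loop on a toy bit-string ("OneMax") fitness, where only the fittest half reproduces and reproduction is imperfect; the names and parameters are mine, not the thesis's.

```python
import random

def evolve_onemax(n_bits=20, pop_size=30, gens=60, mut_rate=0.02, seed=1):
    """Minimal population loop: truncation selection plus imperfect
    (mutated) reproduction drives the population toward high fitness."""
    rng = random.Random(seed)
    fitness = sum  # fitness of a bit-string = number of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]      # only the fittest half reproduces
        pop = []
        for parent in survivors:
            for _ in range(2):                # each survivor leaves two offspring
                # Imperfect reproduction: each bit flips with small probability.
                child = [b ^ (rng.random() < mut_rate) for b in parent]
                pop.append(child)
    return max(fitness(ind) for ind in pop)
```

The mutation step is what lets the population explore the search space rather than merely converging on the initial best individual.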
Although evolutionary computations belong to a relatively new research area,
from a computational perspective they have already shown some promising
features such as:
• evolutionary methods reveal a remarkable balance between efficiency
and efficacy;
• evolutionary computations are well suited for parameter optimisation;
• this type of algorithm allows a wide variety of extensions and constraints that traditional methods cannot accommodate;
• evolutionary methods are easily combined with other optimization
techniques and can also be extended to multi-objective optimization.
From an economic perspective, these methods appear to be particularly well
suited for a wide range of possible financial applications. In particular, in this
thesis I study evolutionary algorithms
• for time series prediction;
• to generate trading rules;
• for portfolio selection.
It is commonly believed that asset prices are not random, but are permeated by complex interrelations that often translate into asset mispricing and
may give rise to potentially profitable opportunities. Classical financial approaches, such as dividend discount models or even capital asset pricing theories, are not able to capture these market complexities. Thus, in the
last decades, researchers have employed intensive econometric and statistical
modeling that examine the effects of a multitude of variables, such as price-
earnings ratios, dividend yields, interest rate spreads and changes in foreign
exchange rates, on a broad and variegated range of stocks at the same time.
However, these models often result in complex functional forms that are difficult to
manage or interpret and, in the worst case, merely fit a given time
series without being able to predict it. In parallel with quantitative approaches,
other researchers have focused on the impact of investor psychology (in particular, herding and overreaction) and on the consequences of considering
informed signals from management and analysts, such as share repurchases
and analyst recommendations. These theories are guided by intuition and
experience, and are thus difficult to translate into a mathematical framework.
Hence the need to combine these points of view in order to
develop models that simultaneously examine hundreds of variables, including qualitative information, and that have user-friendly representations.
To this end, the thesis focuses on the study of methodologies that
satisfy these requirements by integrating economic insights, derived from
academic and professional knowledge, and evolutionary computations.
The main task of this work is to provide efficient algorithms based on the
evolutionary paradigm of biological systems in order to compute optimal
trading strategies for various profit objectives under economic and statistical constraints. The motivations for constructing such optimal strategies
are:
i) the necessity to overcome data-snooping and survivorship bias in
order to learn to predict good trading opportunities by using market
and/or technical indicators as features on which to base the forecasting;
ii) the feasibility of using these rules as benchmarks for real trading
systems;
iii) the capability of ranking quantitatively various markets with respect
to their profitability according to a given criterion, thus making possible portfolio allocations.
More precisely, I present two algorithms that use artificial expert trading
systems to predict financial time series, and a procedure to generate integrated neutral strategies for active portfolio management.
The first algorithm is an automated procedure that simultaneously selects
variables and detects outliers in a dynamic linear model, using information
criteria as objective functions and diagnostic tests as constraints for the
distributional properties of the errors. The novelties are the automatic implementation of econometric conditions in the model-selection step, which makes
possible a better exploration of the solution space on one hand, and the use
of evolutionary computations to efficiently reduce a very large number of independent variables on the other.
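The variable-selection idea can be sketched as a small genetic algorithm over binary inclusion masks scored by an information criterion. This is a simplified stand-in for the procedure described above: it uses AIC only, omits the outlier detection and diagnostic-test constraints, and all names and parameters are illustrative.

```python
import math
import random

def ols_rss(X, y, cols):
    """Residual sum of squares of OLS on the selected columns (plus intercept),
    via normal equations solved by Gaussian elimination (pure Python)."""
    n = len(y)
    A = [[1.0] + [X[i][j] for j in cols] for i in range(n)]
    k = len(cols) + 1
    ata = [[sum(A[i][p] * A[i][q] for i in range(n)) for q in range(k)] for p in range(k)]
    aty = [sum(A[i][p] * y[i] for i in range(n)) for p in range(k)]
    for p in range(k):  # forward elimination with partial pivoting
        piv = max(range(p, k), key=lambda r: abs(ata[r][p]))
        ata[p], ata[piv] = ata[piv], ata[p]
        aty[p], aty[piv] = aty[piv], aty[p]
        for r in range(p + 1, k):
            f = ata[r][p] / ata[p][p]
            for c in range(p, k):
                ata[r][c] -= f * ata[p][c]
            aty[r] -= f * aty[p]
    beta = [0.0] * k
    for p in range(k - 1, -1, -1):  # back substitution
        beta[p] = (aty[p] - sum(ata[p][c] * beta[c] for c in range(p + 1, k))) / ata[p][p]
    return sum((y[i] - sum(A[i][p] * beta[p] for p in range(k))) ** 2 for i in range(n))

def aic(X, y, mask):
    """AIC (up to an additive constant) of the model using the masked columns."""
    cols = [j for j, m in enumerate(mask) if m]
    n, k = len(y), len(cols) + 1
    return n * math.log(max(ols_rss(X, y, cols) / n, 1e-12)) + 2 * k

def ga_select(X, y, pop_size=20, gens=25, seed=0):
    """Evolve binary inclusion masks; lower AIC = fitter (elitist truncation GA)."""
    rng = random.Random(seed)
    p = len(X[0])
    pop = [[rng.randint(0, 1) for _ in range(p)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda m: aic(X, y, m))
        parents = pop[: pop_size // 2]
        pop = parents[:]                       # elitism: keep the best half
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, p)          # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.3:             # bit-flip mutation
                child[rng.randrange(p)] ^= 1
            pop.append(child)
    return min(pop, key=lambda m: aic(X, y, m))
```

The thesis's procedure additionally constrains candidate models with diagnostic tests on the error distribution, which would enter here as penalties or feasibility checks on each mask.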
In the second algorithm, the novelty is given by the definition of evolutionary
learning in financial terms and its use in a multi-objective genetic algorithm
in order to generate technical trading systems.
The last tool is based on a trading strategy on six assets, where future
movements of each variable are obtained by an evolutionary procedure that
integrates various types of financial variables. The contribution is given
by the introduction of a genetic algorithm to optimize trading-signal parameters and by the way in which different pieces of information are represented and
collected.
In order to compare the contribution of this work to “classical” techniques
and theories, the thesis is divided into three parts. The first part, titled
Background, collects Chapters 2 and 3. Its purpose is to provide an introduction to evolutionary search/optimization techniques on one hand, and to
the theories that relate predictability in financial markets to the concept of efficiency proposed over time by scholars on the other. More
precisely, Chapter 2 introduces the basic concepts and major areas of evolutionary computation. It presents a brief history of three major types of evolutionary algorithms, i.e. evolution strategies, evolutionary programming
and genetic algorithms, and points out similarities and differences among
them. Moreover it gives an overview of genetic algorithms and describes
classical and genetic multi-objective optimization techniques. Chapter 3
first presents an overview of the literature on the predictability of financial
time series. In particular, the extent to which the efficiency paradigm is
affected by the introduction of new theories, such as behavioral finance, is
described in order to justify the market forecasting methodologies developed
by practitioners and academics in the last decades. Then, a description of
the econometric and financial techniques that will be used in conjunction
with evolutionary algorithms in the successive chapters is provided. Special
attention is paid to economic implications, in order to highlight merits and
shortcomings from a practitioner perspective.
The second part of the thesis, titled Trading Systems, is devoted to the description of two procedures I have developed in order to generate artificial
trading strategies on the basis of evolutionary algorithms, and it groups
Chapters 4 and 5. In particular, Chapter 4 presents a genetic algorithm for
variable selection by minimizing the error in a multiple regression model.
Measures of error such as ME, RMSE, MAE, Theil’s inequality coefficient
and CDC are analyzed choosing models based on AIC, BIC, ICOMP and
similar criteria. Two penalty-function components are analyzed: the
significance level and the Durbin–Watson statistic. Asymptotic properties of
functions are tested on several financial variables including stocks, bonds,
returns, composite prices indices from the US and the EU economies. Variables with outliers that distort the efficiency and consistency of estimators
are removed to solve masking and smearing problems that they may cause in
estimations. Two examples complete the chapter. In both cases, models are
designed to produce short-term forecasts for the excess returns of the MSCI
Europe Energy sector on the MSCI Europe index and a recursive estimation-
window is used to shed light on their predictability performances. In the first
application the data-set is obtained by a reduction procedure from a very
large number of leading macro indicators and financial variables stacked
at various lags, while in the second the complete set of 1-month lagged
variables is considered. Results show a promising capability to predict excess sector returns through the selection, using the proposed methodology,
of the most valuable predictors. In Chapter 5 the paradigm of evolutionary
learning is defined and applied in the context of technical trading rules for
stock timing. A new genetic algorithm is developed by integrating statistical
learning methods and bootstrap into a multi-objective non-dominated sorting
algorithm with variable string length, making it possible to evaluate statistical
and economic criteria at the same time. Subsequently, the chapter discusses
a practical case, represented by a simple trading strategy where total funds
are invested in either the S&P 500 Composite Index or in 3-month Treasury
Bills. In this application, the most informative technical indicators are selected from a set of almost 5000 signals by the algorithm. Successively, these
signals are combined into a unique trading signal by a learning method. I
test the expert weighting solutions obtained by the plurality voting committee, Bayesian model averaging, and Boosting procedures with data from
the S&P 500 Composite Index, in three market phases, up-trend, down-
trend and sideways-movements, covering the period 2000–2006.
In the third part, titled Portfolio Selection, I explain how portfolio optimization models may be constructed on the basis of evolutionary algorithms and
on the signals produced by artificial trading systems. First, market neutral
strategies from an economic point of view are introduced, highlighting their
risks and benefits and focusing on their quantitative formulation. Then, a
description of the GA-Integrated Neutral tool, a MATLAB set of functions
based on genetic algorithms for active portfolio management, is given. The
algorithm specializes in the parameter optimization of trading signals for
an integrated market neutral strategy. The chapter concludes showing an
application of the tool as a support to decisions in the Absolute Return
Interest Rate Strategies sub-fund of Generali Investments.
Autonomous Recovery Of Reconfigurable Logic Devices Using Priority Escalation Of Slack
Field Programmable Gate Array (FPGA) devices offer a suitable platform for survivable hardware architectures in mission-critical systems. In this dissertation, active dynamic redundancy-based fault-handling techniques are proposed which exploit the dynamic partial reconfiguration capability of SRAM-based FPGAs. Self-adaptation is realized by employing reconfiguration in the detection, diagnosis, and recovery phases. To extend these concepts to semiconductor aging and process variation in the deep-submicron era, resilient adaptable processing systems are sought to maintain quality and throughput requirements despite the vulnerabilities of the underlying computational devices. A new approach to autonomous fault-handling which addresses these goals is developed using only a uniplex hardware arrangement. It operates by observing a health metric to achieve Fault Demotion using Reconfigurable Slack (FaDReS). Here an autonomous fault isolation scheme is employed which neither requires test vectors nor suspends the computational throughput, but instead observes the value of a health metric based on runtime input. The deterministic flow of the fault isolation scheme guarantees success in a bounded number of reconfigurations of the FPGA fabric. FaDReS is then extended to the Priority Using Resource Escalation (PURE) online redundancy scheme, which considers fault-isolation latency and throughput trade-offs under a dynamic spare arrangement. While deep-submicron designs introduce new challenges, the use of adaptive techniques is seen to provide several promising avenues for improving resilience. The scheme developed is demonstrated by hardware design of various signal processing circuits and their implementation on a Xilinx Virtex-4 FPGA device. These include a Discrete Cosine Transform (DCT) core, Motion Estimation (ME) engine, Finite Impulse Response (FIR) Filter, Support Vector Machine (SVM), and Advanced Encryption Standard (AES) blocks, in addition to MCNC benchmark circuits.
A significant reduction in power consumption is achieved, ranging from 83% for low motion-activity scenes to 12.5% for high motion-activity video scenes in a novel ME engine configuration. For a typical benchmark video sequence, PURE is shown to maintain a PSNR baseline near 32 dB. The diagnosability, reconfiguration latency, and resource overhead of each approach are analyzed. Compared to previous alternatives, PURE maintains a PSNR within a difference of 4.02 dB to 6.67 dB from the fault-free baseline by escalating healthy resources to higher-priority signal processing functions. The results indicate the benefits of priority-aware resiliency over conventional redundancy approaches in terms of fault-recovery, power consumption, and resource-area requirements. Together, these provide a broad range of strategies to achieve autonomous recovery of reconfigurable logic devices under a variety of constraints, operating conditions, and optimization criteria.
DISCOVERING INTERESTING PATTERNS FOR INVESTMENT DECISION MAKING WITH GLOWER - A GENETIC LEARNER OVERLAID WITH ENTROPY REDUCTION
Prediction in financial domains is notoriously difficult for a number of reasons. First, theories tend to be
weak or non-existent, which makes problem formulation open-ended by forcing us to consider a large
number of independent variables and thereby increasing the dimensionality of the search space. Second, the
weak relationships among variables tend to be nonlinear, and may hold only in limited areas of the search
space. Third, in financial practice, where analysts conduct extensive manual analysis of historically well
performing indicators, a key is to find the hidden interactions among variables that perform well in
combination. Unfortunately, these are exactly the patterns that the greedy search biases incorporated by
many standard rule algorithms will miss. In this paper, we describe and evaluate several variations of a new
genetic learning algorithm (GLOWER) on a variety of data sets. The design of GLOWER has been motivated
by financial prediction problems, but incorporates successful ideas from tree induction and rule learning.
We examine the performance of several GLOWER variants on two UCI data sets as well as on a standard
financial prediction problem (S&P500 stock returns), using the results to identify and use one of the better
variants for further comparisons. We introduce a new (to KDD) financial prediction problem (predicting
positive and negative earnings surprises), and experiment with GLOWER, contrasting it with tree- and rule-induction
approaches. Our results are encouraging, showing that GLOWER has the ability to uncover
effective patterns for difficult problems that have weak structure and significant nonlinearities.
Information Systems Working Papers Series
A Multiobjective Approach Applied to the Protein Structure Prediction Problem
Interest in discovering a methodology for solving the Protein Structure Prediction problem extends into many fields of study, including biochemistry, medicine, biology, and numerous engineering and science disciplines. Approaches ranging from experimental techniques, such as x-ray crystallographic studies and solution Nuclear Magnetic Resonance spectroscopy, to mathematical modeling, such as minimum-energy models, are used to solve this problem. Recent Evolutionary Algorithm studies at the Air Force Institute of Technology include the following: the Simple Genetic Algorithm (GA), messy GA, fast messy GA, and Linkage Learning GA, as approaches for potential protein energy minimization. Prepackaged software such as GENOCOP, GENESIS, and mGA is used to facilitate experimentation with these techniques. In addition to this software, a parallelized version of the fmGA, the so-called parallel fast messy GA, has been found to be good at finding semi-optimal answers in reasonable wall-clock time. The aim of this work is to apply a multiobjective approach to solving this problem using a modified fast messy GA. By dividing the CHARMm energy model into separate objectives, it should be possible to find structural configurations of a protein that yield lower energy values and ultimately more correct conformations.
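Splitting an energy model into separate objectives means candidate structures are compared by Pareto dominance rather than by a single scalar energy. A minimal sketch of that comparison (illustrative only, not taken from the AFIT software):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every objective
    and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Non-dominated subset of a list of distinct objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

In a multiobjective GA, selection would then favor members of successive non-dominated fronts instead of ranking individuals by one combined energy value.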