Data-efficient methods applied to general spectral image capture
Commercialization of spectral imaging for color reproduction will require low-bandwidth but highly accurate spectral image acquisition systems. Self-adapting systems are proposed as potential solutions. Such systems perform spectral content analysis on an encountered scene, reacting to the analysis by configuring efficient, high-quality spectral reconstruction. An experiment is reported comparing scene-derived spectral estimation transforms to static global transforms in multi-channel imaging simulations. For noise-free simulations, the adaptive approach showed clear benefit in terms of colorimetric and spectral statistics. When noise was added, the adaptive method remained superior in terms of spectral evaluations, but its colorimetric degradation exceeded that of the static approach. This provides additional evidence that spectral reconstruction methods should reference psychometrics as an integral part of spectral error management.
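The scene-derived-versus-static contrast above can be illustrated with a deliberately simplified sketch: fit a least-squares reconstruction transform once on global training data, and again on samples drawn from the current scene. All data values and function names here are hypothetical, and the scalar transform is a one-dimensional stand-in for the multi-channel matrix transforms real systems fit.

```python
# Sketch: scene-adaptive vs. static spectral reconstruction (hypothetical
# 1-D simplification; real systems solve multi-channel least-squares fits).

def fit_transform(responses, spectra):
    """Least-squares scalar transform t minimizing sum((t*r - s)**2)."""
    num = sum(r * s for r, s in zip(responses, spectra))
    den = sum(r * r for r in responses)
    return num / den

# Static transform fitted on a global training set (made-up values).
global_resp = [0.2, 0.5, 0.9]
global_spec = [0.1, 0.26, 0.44]
t_static = fit_transform(global_resp, global_spec)

# Adaptive transform re-fitted on samples from the encountered scene.
scene_resp = [0.3, 0.6, 0.8]
scene_spec = [0.21, 0.41, 0.55]
t_adaptive = fit_transform(scene_resp, scene_spec)

def reconstruct(t, responses):
    """Apply the fitted transform to new camera responses."""
    return [t * r for r in responses]
```

The adaptive variant simply re-runs the fit whenever the scene statistics change, which is the essence of the self-adapting configuration step described above.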
Self-adaptive predictor-corrector algorithm for static nonlinear structural analysis
A multiphase self-adaptive predictor-corrector algorithm was developed. This algorithm enables the solution of highly nonlinear structural responses, including kinematic, kinetic, and material effects as well as pre-/post-buckling behavior. The strategy involves three main phases: (1) the use of a warpable hyperelliptic constraint surface that serves to upper-bound dependent iterate excursions during successive incremental Newton-Raphson (INR) type iterations; (2) the use of an energy constraint to scale the generation of successive iterates so as to maintain the appropriate form of local convergence behavior; (3) the use of quality-of-convergence checks that enable various self-adaptive modifications of the algorithmic structure when necessary. The restructuring is achieved by tightening various conditioning parameters as well as switching to different algorithmic levels to improve the convergence process. The capabilities of the procedure to handle various types of static nonlinear structural behavior are illustrated.
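The multiphase idea of bounding iterate excursions and tightening conditioning parameters when convergence degrades can be sketched on a scalar Newton-Raphson iteration. This is an illustrative analogue under strong simplifying assumptions, not the INR algorithm itself: the step cap stands in for the constraint surface, and halving it stands in for the self-adaptive restructuring.

```python
def self_adaptive_newton(f, df, x0, tol=1e-10, max_step=1.0, max_iter=100):
    """Scalar Newton-Raphson with a step cap (constraint-surface analogue)
    that is tightened whenever the residual fails to decrease (a toy
    version of self-adaptive restructuring). Hypothetical sketch only."""
    x = x0
    prev = abs(f(x))
    for _ in range(max_iter):
        step = -f(x) / df(x)                        # predictor step
        step = max(-max_step, min(max_step, step))  # bound iterate excursion
        x += step                                   # corrector update
        res = abs(f(x))
        if res >= prev:                 # poor convergence: tighten the cap
            max_step *= 0.5
        prev = res
        if res < tol:
            break
    return x

# Usage: solve x**3 - 2 = 0 from a deliberately distant starting point.
root = self_adaptive_newton(lambda x: x**3 - 2, lambda x: 3 * x * x, x0=10.0)
```

The cap keeps the early iterates from overshooting; once the iterate enters the region of local convergence, the uncapped Newton step takes over.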
TVA: A Requirements Driven, Machine-Learning Approach for Addressing Tactic Volatility in Self-Adaptive Systems
From self-driving cars to self-adaptive websites, the world is increasingly becoming more reliant on autonomous systems. Similar to many other domains, the system's behavior is often determined by its requirements. For example, a self-adaptive web service is likely to have some maximum value that response time should not surpass. To maintain this requirement, the system uses tactics, which may include activating additional computing resources. In real-world environments, tactics will frequently experience volatility, known as tactic volatility. This can include unstable time required to execute the tactic or frequent fluctuations in the cost to execute the tactic. Unfortunately, current self-adaptive approaches do not account for tactic volatility in their decision-making processes, and merely assume that tactics have static attributes.
To address the limitations of current processes, we propose a Tactic Volatility Aware (TVA) solution. Our approach focuses on providing a volatility-aware solution that enables the system to properly maintain requirements. Specifically, TVA utilizes an Autoregressive Integrated Moving Average (ARIMA) model to estimate potential future values for requirements, while also using a Multiple Regression Analysis (MRA) model to predict tactic latency and tactic cost at runtime. This enables the system both to better estimate the true behavior of its tactics and to properly maintain its requirements. Using data containing real-world volatility, we demonstrate the effectiveness of TVA with both statistical analysis methods and self-adaptive experiments. In this work, we demonstrate (I) the negative impact of not accounting for tactic volatility, (II) the benefits of an ARIMA-modeling approach in monitoring system requirements, (III) the effectiveness of MRA in predicting tactic volatility, and (IV) the overall benefits of TVA to the self-adaptive process. This work also presents the first known publicly available dataset of real-world tactic volatility in terms of both cost and latency.
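The runtime-prediction step can be illustrated with a single-feature simplification of the regression model: fit ordinary least squares relating an observed system condition to tactic latency, then predict latency for the current condition. The data points, feature choice, and function names below are hypothetical; TVA's MRA model regresses on multiple features.

```python
def fit_linear(xs, ys):
    """Ordinary least squares y = a + b*x: a one-feature stand-in for the
    multiple-regression model used to predict tactic latency and cost."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical runtime observations: system load vs. observed tactic latency.
load    = [0.2, 0.4, 0.6, 0.8]
latency = [1.1, 1.9, 3.1, 3.9]   # seconds to activate extra resources
a, b = fit_linear(load, latency)

def predict_latency(current_load):
    """Estimate how long the tactic will take under the current load."""
    return a + b * current_load
```

Refitting on a sliding window of recent observations is one simple way such a predictor could track the volatility the abstract describes.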
Optimal Parameter Choices Through Self-Adjustment: Applying the 1/5-th Rule in Discrete Settings
While evolutionary algorithms are known to be very successful for a broad
range of applications, the algorithm designer is often left with many
algorithmic choices, for example, the size of the population, the mutation
rates, and the crossover rates of the algorithm. These parameters are known to
have a crucial influence on the optimization time, and thus need to be chosen
carefully, a task that often requires substantial efforts. Moreover, the
optimal parameters can change during the optimization process. It is therefore
of great interest to design mechanisms that dynamically choose best-possible
parameters. An example for such an update mechanism is the one-fifth success
rule for step-size adaption in evolutionary strategies. While in continuous
domains this principle is well understood also from a mathematical point of
view, no comparable theory is available for problems in discrete domains.
In this work we show that the one-fifth success rule can be effective also in
discrete settings. We regard the $(1+(\lambda,\lambda))$~GA proposed in
[Doerr/Doerr/Ebel: From black-box complexity to designing new genetic
algorithms, TCS 2015]. We prove that if its population size is chosen according
to the one-fifth success rule then the expected optimization time on
\textsc{OneMax} is linear. This is better than what \emph{any} static
population size can achieve and is asymptotically optimal also among
all adaptive parameter choices.
Comment: This is the full version of a paper that is to appear at GECCO 201
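The one-fifth success rule described above can be sketched on a simple (1+λ) EA for OneMax. This is an illustrative adaptation scheme under assumed update factors, not the $(1+(\lambda,\lambda))$ GA analyzed in the paper: on a successful generation λ shrinks by a factor F, on a failure it grows by F^(1/4), so λ is stable when roughly one generation in five succeeds.

```python
import random

def onemax(x):
    return sum(x)

def one_plus_lambda_selfadj(n=50, F=1.5, seed=1):
    """(1+lambda) EA with offspring-population size lambda adapted by a
    one-fifth-style rule. Illustrative sketch with assumed constants."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    lam, evals = 1.0, 0
    while onemax(x) < n:
        best = None
        for _ in range(max(1, round(lam))):
            # Standard bit mutation: flip each bit with probability 1/n.
            y = [b ^ (rng.random() < 1 / n) for b in x]
            evals += 1
            if best is None or onemax(y) > onemax(best):
                best = y
        if onemax(best) > onemax(x):        # success: exploit, shrink lambda
            x, lam = best, max(1.0, lam / F)
        else:                               # failure: search wider next time
            lam *= F ** 0.25
    return evals
```

The asymmetric update factors (divide by F on success, multiply by F^(1/4) on failure) are exactly what makes the equilibrium success rate one fifth.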
The Simulation Model Partitioning Problem: an Adaptive Solution Based on Self-Clustering (Extended Version)
This paper is about partitioning in parallel and distributed simulation. That
means decomposing the simulation model into a number of components and
properly allocating them on the execution units. An adaptive solution based on
self-clustering, that considers both communication reduction and computational
load-balancing, is proposed. The implementation of the proposed mechanism is
tested using a simulation model that is challenging both in terms of structure
and dynamicity. Various configurations of the simulation model and the
execution environment have been considered. The obtained performance results
are analyzed using a reference cost model. The results demonstrate that the
proposed approach is promising and that it can reduce the simulation execution
time in both parallel and distributed architectures.
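The tension between communication reduction and load balancing can be sketched with a simple greedy migration pass: move each model component to the execution unit it exchanges the most messages with, unless that unit is already full. This heuristic, and all names and data in it, are illustrative assumptions, not the paper's self-clustering mechanism.

```python
from collections import defaultdict

def rebalance(placement, comm, capacity):
    """One greedy migration pass (illustrative heuristic): migrate each
    component to the unit hosting most of its communication, subject to a
    per-unit capacity that stands in for computational load balancing."""
    load = defaultdict(int)
    for unit in placement.values():
        load[unit] += 1
    for comp, peers in comm.items():
        talk = defaultdict(int)
        for peer, msgs in peers.items():
            talk[placement[peer]] += msgs   # messages per hosting unit
        target = max(talk, key=talk.get)
        here = placement[comp]
        if target != here and load[target] < capacity:
            load[here] -= 1
            load[target] += 1
            placement[comp] = target
    return placement

# Three components on two units; "a" chats mostly with "b" on unit 1.
place = {"a": 0, "b": 1, "c": 1}
comm = {"a": {"b": 10, "c": 1}, "b": {"a": 10}, "c": {"a": 1}}
```

With a generous capacity the chatty pair ends up co-located; with a tight capacity the move is refused, which is precisely the communication-versus-balance trade-off the abstract evaluates.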
Runtime Analysis for Self-adaptive Mutation Rates
We propose and analyze a self-adaptive version of the $(1,\lambda)$
evolutionary algorithm in which the current mutation rate is part of the
individual and thus also subject to mutation. A rigorous runtime analysis on
the OneMax benchmark function reveals that a simple local mutation scheme for
the rate leads to an expected optimization time (number of fitness evaluations)
of $O(n\lambda/\log\lambda + n\log n)$ when $\lambda$ is at least $C\ln n$ for
some constant $C > 0$. For all values of $\lambda \ge C\ln n$, this
performance is asymptotically best possible among all $\lambda$-parallel
mutation-based unbiased black-box algorithms.
Our result shows that self-adaptation in evolutionary computation can find
complex optimal parameter settings on the fly. At the same time, it proves that
a relatively complicated self-adjusting scheme for the mutation rate proposed
by Doerr, Gie{\ss}en, Witt, and Yang~(GECCO~2017) can be replaced by our simple
endogenous scheme.
On the technical side, the paper contributes new tools for the analysis of
two-dimensional drift processes arising in the analysis of dynamic parameter
choices in EAs, including bounds on occupation probabilities in processes with
non-constant drift.
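The endogenous scheme described above can be sketched as follows: each offspring first perturbs the parent's mutation rate (here by halving or doubling it, kept in an assumed range), then mutates the bit string with that rate; comma selection keeps the best offspring together with its rate. This is a simplified illustration, not the exact algorithm analyzed in the paper.

```python
import random

def self_adaptive_ea(n=40, lam=16, seed=7, budget=200000):
    """Sketch of a self-adaptive (1,lambda) EA on OneMax: the rate r is
    part of the individual, locally mutated before bit mutation. The rate
    range [2, n/4] and tie-breaking toward smaller r are assumptions."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    r, evals = 2.0, 0
    while sum(x) < n and evals < budget:
        pool = []
        for _ in range(lam):
            # Local rate mutation: halve or double, clipped to [2, n/4].
            rr = min(n / 4, max(2.0, r * rng.choice((0.5, 2.0))))
            # Bit mutation with the offspring's own rate rr/n per bit.
            y = [b ^ (rng.random() < rr / n) for b in x]
            evals += 1
            pool.append((sum(y), -rr, y, rr))
        _, _, y, rr = max(pool)   # best fitness, ties favor smaller rate
        x, r = y, rr              # comma selection: parent always replaced
    return evals, sum(x)
```

Because the rate travels with the individual, selection on fitness implicitly selects good rates, which is the mechanism the runtime analysis makes rigorous.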