7 research outputs found
Predicting Accumulated Confirmed COVID-19 Cases in Indonesia Using Support Vector Regression
Indonesia is one of the countries hit hardest by the second wave of COVID-19. One way to raise public awareness of the spreading outbreak is to provide information on predicted new cases. Forecasting the accumulated case count several days ahead is also essential for estimating hospital capacity needs and for supporting government policy-making. At the same time, the case pattern of the second wave is difficult to capture with traditional regression approaches. This study focuses on building an information system that visualizes predictions of accumulated COVID-19 cases in Indonesia using Support Vector Regression (SVR). This learning algorithm was chosen for its strong performance on time-series prediction. Experimental results show that SVR can predict the accumulated case count 30 days ahead with an accuracy above 80%. The prediction model was then deployed in a web-based application, and the results are visualized against the latest data.
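The pipeline described above (sliding-window features, an SVR regressor, recursive 30-day forecasting) could be sketched roughly as follows. The synthetic data, window size, and SVR hyperparameters are illustrative assumptions, not the authors' actual configuration:

```python
# Hedged sketch: SVR time-series forecasting of an accumulated-cases curve.
# Synthetic data stands in for the real Indonesian case counts.
import numpy as np
from sklearn.svm import SVR
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
t = np.arange(200)
# Monotone "accumulated cases" curve with noise (illustrative only).
cases = np.cumsum(50 + 10 * np.sin(t / 15) + rng.normal(0, 5, t.size))

WINDOW = 7  # assumed look-back window, not the paper's choice

scaler = MinMaxScaler()
y_scaled = scaler.fit_transform(cases.reshape(-1, 1)).ravel()

# Build (window -> next value) training pairs.
X = np.array([y_scaled[i:i + WINDOW] for i in range(len(y_scaled) - WINDOW)])
y = y_scaled[WINDOW:]

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X, y)

# Recursive 30-day forecast: feed each prediction back into the window.
window = y_scaled[-WINDOW:].tolist()
forecast_scaled = []
for _ in range(30):
    nxt = model.predict(np.array(window[-WINDOW:]).reshape(1, -1))[0]
    forecast_scaled.append(nxt)
    window.append(nxt)

forecast = scaler.inverse_transform(
    np.array(forecast_scaled).reshape(-1, 1)).ravel()
print(forecast[:5])
```

Scaling to [0, 1] before fitting matters here because the RBF kernel is sensitive to the magnitude of the inputs; predictions are mapped back with `inverse_transform`.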
Hybridizing the 1/5-th Success Rule with Q-Learning for Controlling the Mutation Rate of an Evolutionary Algorithm
It is well known that evolutionary algorithms (EAs) achieve peak performance
only when their parameters are suitably tuned to the given problem. Even more,
it is known that the best parameter values can change during the optimization
process. Parameter control mechanisms are techniques developed to identify and
to track these values.
Recently, a series of rigorous theoretical works confirmed the superiority of
several parameter control techniques over EAs with best possible static
parameters. Among these results are examples for controlling the mutation rate
of the (1+λ) EA when optimizing the OneMax problem. However, it was
shown in [Rodionova et al., GECCO'19] that the quality of these techniques
strongly depends on the offspring population size λ.
We introduce in this work a new hybrid parameter control technique, which
combines the well-known one-fifth success rule with Q-learning. We demonstrate
that our HQL mechanism achieves equal or superior performance to all techniques
tested in [Rodionova et al., GECCO'19] and this -- in contrast to previous
parameter control methods -- simultaneously for all offspring population sizes
λ. We also show that the promising performance of HQL is not restricted
to OneMax, but extends to several other benchmark problems.
Comment: To appear in the Proceedings of Parallel Problem Solving from Nature (PPSN'2020).
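As a point of reference for the hybridization, a minimal sketch of the classic 1/5-th success rule alone, adapting the mutation strength of a (1+λ) EA on OneMax, might look as follows. The update factor F, the offspring count, and the clamping bounds are illustrative assumptions; this is not the paper's HQL mechanism:

```python
# Hedged sketch: 1/5-th success rule controlling the expected number of
# flipped bits r in a (1+lam) EA on OneMax. Parameters are illustrative.
import random

def onemax(x):
    return sum(x)

def one_fifth_ea(n=100, lam=8, F=1.2, seed=1):
    rng = random.Random(seed)
    parent = [rng.randint(0, 1) for _ in range(n)]
    r = 2.0  # expected number of flipped bits (assumed starting value)
    evals = 0
    while onemax(parent) < n:
        best = None
        for _ in range(lam):
            # Standard bit mutation with per-bit probability r/n.
            child = [bit ^ (rng.random() < r / n) for bit in parent]
            evals += 1
            if best is None or onemax(child) > onemax(best):
                best = child
        if onemax(best) > onemax(parent):
            parent = best
            r = min(n / 4, r * F)            # success: increase the rate
        else:
            r = max(1.0, r * F ** (-0.25))   # failure: decrease it slowly
    return evals

print(one_fifth_ea())
```

The asymmetric update (multiply by F on success, by F^(-1/4) on failure) makes the rate stationary at a success probability of roughly one fifth, which is where the rule gets its name.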
Benchmarking a Genetic Algorithm with Configurable Crossover Probability
We investigate a family of Genetic Algorithms (GAs) which
creates offspring either from mutation or by recombining two randomly chosen
parents. By scaling the crossover probability, we can thus interpolate from a
fully mutation-only algorithm towards a fully crossover-based GA. We analyze,
by empirical means, how the performance depends on the interplay of population
size and the crossover probability.
Our comparison on 25 pseudo-Boolean optimization problems reveals an
advantage of crossover-based configurations on several easy optimization tasks,
whereas the picture for more complex optimization problems is rather mixed.
Moreover, we observe that the ``fast'' mutation scheme with its power-law
distributed mutation strengths outperforms standard bit mutation on complex
optimization tasks when it is combined with crossover, but performs worse in
the absence of crossover.
We then take a closer look at the surprisingly good performance of the
crossover-based GAs on the well-known LeadingOnes benchmark
problem. We observe that the optimal crossover probability increases with
increasing population size. At the same time, it decreases with
increasing problem dimension, indicating that the advantages of the crossover
are not visible in the asymptotic view classically applied in runtime analysis.
We therefore argue that a mathematical investigation for fixed dimensions might
help us observe effects which are not visible when focusing exclusively on
asymptotic performance bounds.
From Understanding Genetic Drift to a Smart-Restart Parameter-less Compact Genetic Algorithm
One of the key difficulties in using estimation-of-distribution algorithms is
choosing the population size(s) appropriately: Too small values lead to genetic
drift, which can cause enormous difficulties. In the regime with no genetic
drift, however, often the runtime is roughly proportional to the population
size, which renders large population sizes inefficient.
Based on a recent quantitative analysis of which population sizes lead to
genetic drift, we propose a parameter-less version of the compact genetic
algorithm that automatically finds a suitable population size without spending
too much time in situations unfavorable due to genetic drift.
We prove a mathematical runtime guarantee for this algorithm and conduct an
extensive experimental analysis on four classic benchmark problems both without
and with additive centered Gaussian posterior noise. The former shows that
under a natural assumption, our algorithm has a performance very similar to the
one obtainable from the best problem-specific population size. The latter
confirms that missing the right population size in the original cGA can be
detrimental and that previous theory-based suggestions for the population size
can be far away from the right values; it also shows that our algorithm as well
as a previously proposed parameter-less variant of the cGA based on parallel
runs avoid such pitfalls. Comparing the two parameter-less approaches, ours
profits from its ability to abort runs which are likely to be stuck in a
genetic drift situation.
Comment: 4 figures. Extended version of a paper appearing at GECCO 202
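The smart-restart idea can be sketched roughly as follows, assuming an ad-hoc per-run budget as a function of a hypothetical population size K that doubles on each restart. The paper derives its budget from the genetic-drift analysis; the constants here are purely illustrative:

```python
# Hedged sketch: compact GA (cGA) on OneMax with capped runs and doubling
# population size K. Budget constants are illustrative assumptions.
import random

def onemax(x):
    return sum(x)

def cga_run(n, K, budget, rng):
    """One capped cGA run; returns (solved, iterations used)."""
    p = [0.5] * n  # frequency vector
    for it in range(1, budget + 1):
        x = [int(rng.random() < pi) for pi in p]
        y = [int(rng.random() < pi) for pi in p]
        if onemax(x) == n or onemax(y) == n:
            return True, it
        if onemax(y) > onemax(x):
            x, y = y, x  # x is now the better sample
        for i in range(n):
            if x[i] != y[i]:
                p[i] += (1 / K) if x[i] == 1 else -(1 / K)
                p[i] = min(1 - 1 / n, max(1 / n, p[i]))  # standard margins
    return False, budget

def smart_restart_cga(n=30, seed=5):
    rng = random.Random(seed)
    K, total = 2, 0
    while K <= 1024:
        budget = 25 * K * n  # ad-hoc budget per run (assumption)
        solved, used = cga_run(n, K, budget, rng)
        total += used
        if solved:
            return K, total
        K *= 2  # run aborted: likely stuck in genetic drift, restart larger
    return None, total

K_found, iters = smart_restart_cga()
print(K_found, iters)
```

Small K makes each aborted run cheap, so the doubling scheme wastes only a constant factor relative to starting directly at a suitable population size.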
Drift analysis with fitness levels for elitist evolutionary algorithms
The fitness level method is a popular tool for analyzing the hitting time of elitist evolutionary algorithms. Its idea is to divide the search space into multiple fitness levels and to estimate lower and upper bounds on the hitting time using transition probabilities between fitness levels. However, the lower bounds generated by this method are often loose. An open question regarding the fitness level method is what the tightest lower and upper time bounds are that can be constructed from transition probabilities between fitness levels. To answer this question, we combine drift analysis with fitness levels and define the tightest-bound problem as a constrained multi-objective optimization problem subject to fitness levels. The tightest metric bounds by fitness levels are constructed and proven for the first time. Linear bounds are then derived from the metric bounds, and a framework is established that can be used to develop different fitness level methods for different types of linear bounds. The framework is generic and promising, as it can be used to draw tight time bounds on fitness landscapes both with and without shortcuts. This is demonstrated on the example of the (1+1) EA maximizing the TwoMax1 function.
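For orientation, the basic fitness-level construction the paper builds on can be illustrated with the classic upper bound for the (1+1) EA on OneMax, summing reciprocals of per-level improvement probabilities. The paper's tightest drift-based bounds go well beyond this toy computation:

```python
# Toy illustration of the fitness level method: levels A_j = {x : OneMax(x)=j}
# and the upper bound sum_j 1/s_j, where s_j lower-bounds the probability of
# leaving level j upward in one step of the (1+1) EA.
import math

def fitness_level_upper_bound(n):
    total = 0.0
    for j in range(n):  # levels j = 0..n-1; the optimum is level n
        # Leave level j by flipping one of the n-j zero-bits and nothing else.
        s_j = (n - j) * (1 / n) * (1 - 1 / n) ** (n - 1)
        total += 1 / s_j
    return total

n = 100
bound = fitness_level_upper_bound(n)
harmonic = sum(1 / k for k in range(1, n + 1))
# The bound evaluates to (1 - 1/n)^{-(n-1)} * n * H_n, i.e. at most e*n*H_n.
print(bound, math.e * n * harmonic)
```

The loose lower bounds mentioned in the abstract arise because, in the mirror-image lower-bound construction, an algorithm may skip levels, which this simple sum cannot account for.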