Recursive Percentage based Hybrid Pattern Training for Supervised Learning
Supervised learning algorithms, often used to find the input/output relationship in data, tend to become trapped in local optima rather than the desirable global optimum. In this paper, we discuss the Recursive Percentage-based Hybrid Pattern (RPHP) learning algorithm. The algorithm uses real-coded genetic algorithm based global and local searches to find a set of pseudo-global optimal solutions. Each pseudo-global optimum is a local optimum from the point of view of all the patterns but globally optimal from the point of view of a subset of patterns. Together with RPHP, a k-nearest-neighbor algorithm is used as a second-level pattern distributor to assign each test pattern to the appropriate solution. We also show theoretically the condition under which finding several pseudo-global optimal solutions requires a shorter training time than finding a single global optimal solution. Since the difficulty of curve-fitting problems is easily estimated, we verify the capability of the RPHP algorithm on such problems and compare it with three counterparts to show the benefits of hybrid learning and active recursive subset selection; RPHP shows a clear superiority in performance. We conclude our paper by identifying possible loopholes in the RPHP algorithm and proposing possible solutions.
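The second-level pattern distributor described above can be sketched as a plain k-nearest-neighbor vote: each test pattern is routed to the pseudo-global solution that handles the subset containing its nearest training patterns. This is a minimal illustration, not the paper's implementation; the function and argument names are assumptions.

```python
import numpy as np

def knn_distribute(x, train_X, train_assign, k=3):
    """Route a test pattern `x` to a sub-solution index.

    `train_assign[i]` is the (hypothetical) index of the pseudo-global
    solution responsible for training pattern i. The test pattern is
    assigned by majority vote over its k nearest training patterns.
    """
    dists = np.linalg.norm(train_X - x, axis=1)   # Euclidean distances
    nearest = np.argsort(dists)[:k]               # indices of k closest patterns
    votes = train_assign[nearest]
    return int(np.bincount(votes).argmax())       # majority subset label
```

In the full algorithm this index would select which trained sub-network evaluates the test pattern.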
Robust Classification for Imprecise Environments
In real-world environments it is usually difficult to specify target operating conditions precisely, for example, target misclassification costs. This uncertainty makes building robust classification systems problematic. We show that it is possible to build a hybrid classifier that will perform at least as well as the best available classifier for any target conditions. In some cases, the performance of the hybrid can actually surpass that of the best known classifier. This robust performance extends across a wide variety of comparison frameworks, including the optimization of metrics such as accuracy, expected cost, lift, precision, recall, and workforce utilization. The hybrid is also efficient to build, to store, and to update. The hybrid is based on a method for the comparison of classifier performance that is robust to imprecise class distributions and misclassification costs. The ROC convex hull (ROCCH) method combines techniques from ROC analysis, decision analysis, and computational geometry, and adapts them to the particulars of analyzing learned classifiers. The method is efficient and incremental, minimizes the management of classifier performance data, and allows for clear visual comparisons and sensitivity analyses. Finally, we point to empirical evidence that a robust hybrid classifier is indeed needed for many real-world problems.
Comment: 24 pages, 12 figures. To be published in Machine Learning Journal. For related papers, see http://www.hpl.hp.com/personal/Tom_Fawcett/ROCCH
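The geometric core of the ROCCH method is the upper convex hull of classifiers' (FPR, TPR) points, together with the trivial points (0, 0) and (1, 1): any classifier whose point falls below the hull can never be optimal for any combination of class distribution and misclassification costs. A minimal sketch of that hull computation (a standard monotone-chain upper hull, not the paper's own code):

```python
def rocch(points):
    """Upper convex hull of ROC points given as (FPR, TPR) tuples.

    The trivial classifiers (0, 0) and (1, 1) are always included.
    Points strictly below the returned hull are dominated: no target
    conditions make them the optimal choice.
    """
    pts = sorted(set(points) | {(0.0, 0.0), (1.0, 1.0)})
    hull = []
    for p in pts:
        # pop while the last two hull points and p do not make a
        # strict clockwise (rightward) turn, i.e. hull[-1] is dominated
        while len(hull) >= 2:
            o, a = hull[-2], hull[-1]
            cross = (a[0] - o[0]) * (p[1] - o[1]) - (a[1] - o[1]) * (p[0] - o[0])
            if cross >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull
```

Selecting a classifier for given costs then reduces to finding the hull vertex touched first by an iso-performance line of the corresponding slope.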
Forecasting Recharging Demand to Integrate Electric Vehicle Fleets in Smart Grids
Electric vehicle fleets and smart grids are two growing technologies that provide new possibilities to reduce pollution and increase energy efficiency. In this sense, electric vehicles are used as mobile loads in the power grid. A distributed charging prioritization methodology is proposed in this paper. The solution is based on the concept of virtual power plants and the use of evolutionary computation algorithms. Additionally, several evolutionary algorithms (a genetic algorithm, a genetic algorithm with evolution control, particle swarm optimization, and a hybrid solution) are compared in order to evaluate the proposed architecture. The proposed solution aims to prevent the overload of the power grid.
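To make the charging-prioritization idea concrete, here is a toy genetic algorithm that selects which vehicles charge in the current slot, maximizing served demand under a grid-capacity constraint. This is an illustrative sketch only; the paper's virtual-power-plant architecture, operators, and fitness function are not reproduced, and all names and parameters here are assumptions.

```python
import random

def ga_schedule(demand, capacity, pop=30, gens=60, seed=0):
    """Toy GA: choose a 0/1 charging decision per vehicle so that
    total served demand is maximized without exceeding `capacity`."""
    rng = random.Random(seed)
    n = len(demand)

    def fitness(bits):
        load = sum(d for d, b in zip(demand, bits) if b)
        # overloading the grid is penalized (negative fitness)
        return load if load <= capacity else capacity - load

    popu = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=fitness, reverse=True)
        popu = popu[:pop // 2]                 # truncation selection
        while len(popu) < pop:
            a, b = rng.sample(popu[:10], 2)    # parents from the elite
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]          # one-point crossover
            child[rng.randrange(n)] ^= 1       # single-bit mutation
            popu.append(child)
    best = max(popu, key=fitness)
    return best, fitness(best)
```

In the paper's distributed setting, such an optimizer would run per virtual power plant rather than over the whole fleet at once.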
Basic Enhancement Strategies When Using Bayesian Optimization for Hyperparameter Tuning of Deep Neural Networks
Compared to traditional machine learning models, deep neural networks (DNN) are known to be highly sensitive to the choice of hyperparameters. While the required time and effort for manual tuning has been rapidly decreasing for well-developed and commonly used DNN architectures, DNN hyperparameter optimization will undoubtedly continue to be a major burden whenever a new DNN architecture needs to be designed, a new task needs to be solved, a new dataset needs to be addressed, or an existing DNN needs to be improved further. For hyperparameter optimization of general machine learning problems, numerous automated solutions have been developed, some of the most popular of which are based on Bayesian Optimization (BO). In this work, we analyze four fundamental strategies for enhancing BO when it is used for DNN hyperparameter optimization. Specifically, diversification, early termination, parallelization, and cost function transformation are investigated. Based on the analysis, we provide a simple yet robust algorithm for DNN hyperparameter optimization - DEEP-BO (Diversified, Early-termination-Enabled, and Parallel Bayesian Optimization). When evaluated over six DNN benchmarks, DEEP-BO mostly outperformed well-known solutions including GP-Hedge, BOHB, and the speed-up variants that use Median Stopping Rule or Learning Curve Extrapolation. In fact, DEEP-BO consistently provided the top, or at least close to the top, performance over all the benchmark types that we have tested. This indicates that DEEP-BO is a robust solution compared to the existing solutions. The DEEP-BO code is publicly available at https://github.com/snu-adsl/DEEP-BO
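One of the baseline early-termination strategies mentioned above, the Median Stopping Rule, is simple enough to sketch: a running trial is stopped if its best intermediate result so far is worse than the median of what previously completed trials had achieved at the same step. This is an illustrative implementation under the assumption that lower values are better, not DEEP-BO's code.

```python
def median_stopping(history, curve, step):
    """Median Stopping Rule sketch (lower is better).

    `history` is a list of learning curves from completed trials;
    `curve` is the running trial's values up to and including `step`.
    Returns True if the trial should be terminated early.
    """
    peers = sorted(h[step] for h in history if len(h) > step)
    if not peers:
        return False                  # nothing to compare against yet
    median = peers[len(peers) // 2]
    # stop if even the trial's best value so far is worse than
    # the median completed trial at this step
    return min(curve[:step + 1]) > median
```

Early termination of this kind frees the tuning budget for more promising hyperparameter configurations.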
The pseudo-compartment method for coupling PDE and compartment-based models of diffusion
Spatial reaction-diffusion models have been employed to describe many emergent phenomena in biological systems. The modelling technique most commonly adopted in the literature implements systems of partial differential equations (PDEs), which assumes there are sufficient densities of particles that a continuum approximation is valid. However, due to recent advances in computational power, the simulation, and therefore postulation, of computationally intensive individual-based models has become a popular way to investigate the effects of noise in reaction-diffusion systems in which regions of low copy numbers exist.
The stochastic models with which we shall be concerned in this manuscript are referred to as 'compartment-based'. These models are characterised by a discretisation of the computational domain into a grid/lattice of 'compartments'. Within each compartment particles are assumed to be well mixed and are permitted to react with other particles within their compartment or to transfer between neighbouring compartments.
We develop two hybrid algorithms in which a PDE is coupled to a compartment-based model. Rather than attempting to balance average fluxes, our algorithms answer a more fundamental question: 'how are individual particles transported between the vastly different model descriptions?' First, we present an algorithm derived by carefully redefining the continuous PDE concentration as a probability distribution. Whilst this first algorithm shows strong convergence to analytic solutions of test problems, it can be cumbersome to simulate. Our second algorithm is a simplified and more efficient implementation of the first; it is derived in the continuum limit over the PDE region alone. We test our hybrid methods for functionality and accuracy in a variety of different scenarios by comparing the averaged simulations to analytic solutions of PDEs for mean concentrations.
Comment: MAIN - 24 pages, 10 figures, 1 supplementary file - 3 pages, 2 figures
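The compartment-based picture described in the abstract can be illustrated with a minimal fixed-timestep update: each particle attempts to jump to a neighbouring compartment with probability d*dt per direction, where d = D/h**2 for diffusion coefficient D and compartment width h. This is a simplified sketch with reflecting boundaries; the paper's simulations are event-driven (stochastic simulation algorithm) rather than fixed-timestep, and the function name is an assumption.

```python
import random

def compartment_step(counts, d, dt, rng=None):
    """One fixed-timestep update of compartment-based diffusion.

    `counts[i]` is the number of particles in compartment i.
    Each particle jumps left or right with probability d*dt each;
    attempted jumps off either end are reflected (particle stays).
    """
    rng = rng or random.Random(1)
    new = counts[:]
    K = len(counts)
    for i, n in enumerate(counts):
        for _ in range(n):
            r = rng.random()
            if r < d * dt:                  # attempt a left jump
                if i > 0:
                    new[i] -= 1
                    new[i - 1] += 1
            elif r < 2 * d * dt:            # attempt a right jump
                if i < K - 1:
                    new[i] -= 1
                    new[i + 1] += 1
    return new
```

A hybrid scheme in the spirit of the paper would replace some of these compartments with a PDE region and specify how particles cross the interface between the two descriptions.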