SW-ELM: A summation wavelet extreme learning machine algorithm with a priori initialization.
Combining neural networks and wavelet theory into approximation or prediction models appears to be an effective solution in many application areas. However, when building such systems, one has to face the parsimony problem, i.e., to seek a compromise between the complexity of the learning phase and accuracy performance. Accordingly, the aim of this paper is to propose a new connectionist network structure, the Summation Wavelet Extreme Learning Machine (SW-ELM), which achieves good accuracy and generalization performance while limiting learning time and reducing the impact of the random initialization procedure. SW-ELM is based on the Extreme Learning Machine (ELM) algorithm for fast batch learning, but with dual activation functions in the hidden-layer nodes, which handles non-linearity more efficiently. The initialization of the wavelet parameters (of the hidden nodes) and of the neural network parameters (of the input-hidden layer) is performed a priori, before any data are presented to the model. The whole proposition is illustrated and discussed through tests on three time-series problems: an "input-output" approximation problem, a one-step-ahead prediction problem, and a multi-step-ahead prediction problem. The performance of SW-ELM is benchmarked against ELM, the Levenberg-Marquardt algorithm for Single Layer Feed Forward Networks (SLFN), and the ELMAN network on six industrial data sets. Results show the significance of the performance achieved by SW-ELM.
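The ELM core of SW-ELM — random input-to-hidden parameters kept fixed, dual wavelet/sigmoid activations averaged in the hidden layer, and output weights solved analytically by least squares — can be sketched as below. This is a minimal illustration, not the authors' implementation: the weight ranges, the Morlet wavelet, and the toy approximation target are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def morlet(z):
    # Morlet mother wavelet (one common choice; an assumption here)
    return np.cos(5 * z) * np.exp(-z ** 2 / 2)

def swelm_fit(X, y, n_hidden=50):
    # random input-to-hidden parameters (a simple stand-in for the
    # paper's a priori initialization scheme)
    W = rng.uniform(-3, 3, (X.shape[1], n_hidden))
    b = rng.uniform(-1, 1, n_hidden)
    Z = X @ W + b
    # dual activation: average a wavelet with an ordinary activation
    H = 0.5 * (morlet(Z) + np.tanh(Z))
    # ELM step: hidden parameters stay fixed; output weights are the
    # least-squares solution via the Moore-Penrose pseudoinverse
    beta = np.linalg.pinv(H) @ y
    return W, b, beta

def swelm_predict(X, W, b, beta):
    Z = X @ W + b
    return 0.5 * (morlet(Z) + np.tanh(Z)) @ beta

# toy "input-output" approximation problem
X = np.linspace(0, 1, 200).reshape(-1, 1)
y = np.sin(2 * np.pi * X[:, 0])
W, b, beta = swelm_fit(X, y)
y_hat = swelm_predict(X, W, b, beta)
```

Because training reduces to a single pseudoinverse, batch learning stays fast regardless of the activation mix — the property SW-ELM inherits from ELM.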
Extreme Learning Machine for Microarray Cancer Classification
Cancer is a disease in which a set of cells grows uncontrollably, invading and destroying nearby tissues or spreading to other locations in the body, and it has become one of the most perilous diseases of the present day. In this paper, the recently developed Extreme Learning Machine (ELM) is applied to classification problems in the cancer diagnosis area. ELM is a fast learning algorithm for single-hidden-layer feedforward neural networks. The proposed methodology performs multicategory cancer classification with ELM on microarray gene expression data, making it suitable for multi-category classification problems in cancer diagnosis. ELM avoids several difficulties commonly faced by iterative learning methods, such as an improperly chosen learning rate and overfitting, and completes training very fast. The classification performance of ELM is evaluated on three benchmark microarray data sets for cancer diagnosis: the Lymphoma, Leukemia, and SRBCT data sets. Experiments comparing RVM and ELM show that in many categories ELM still outperforms RVM.
DOI: 10.17762/ijritcc2321-8169.15018
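A hedged sketch of how ELM handles a multicategory classification task of this kind follows: labels become one-hot targets, hidden parameters are random and never trained, and the output weights come from one analytic least-squares step. The data below is a synthetic stand-in for a gene-expression matrix; all names and sizes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def elm_train(X, labels, n_classes, n_hidden=50):
    # one-hot target matrix for multicategory classification
    T = np.eye(n_classes)[labels]
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, kept fixed
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)
    # single analytic step: no learning rate to tune, no iterations
    beta = np.linalg.pinv(H) @ T
    return W, b, beta

def elm_predict(X, W, b, beta):
    # class with the largest output-layer score wins
    return np.argmax(np.tanh(X @ W + b) @ beta, axis=1)

# synthetic stand-in for microarray data: two classes in 20 dimensions
X = np.vstack([rng.normal(0.0, 1.0, (40, 20)),
               rng.normal(2.0, 1.0, (40, 20))])
y = np.array([0] * 40 + [1] * 40)
W, b, beta = elm_train(X, y, n_classes=2)
acc = float(np.mean(elm_predict(X, W, b, beta) == y))
```

The absence of iterative weight updates is exactly why ELM sidesteps the learning-rate and overfitting issues the abstract mentions.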
RMSE-ELM: Recursive Model based Selective Ensemble of Extreme Learning Machines for Robustness Improvement
Extreme learning machine (ELM) as an emerging branch of shallow networks has
shown its excellent generalization and fast learning speed. However, for
blended data the robustness of ELM is weak, because the weights and biases of its
hidden nodes are set randomly; noisy data exert a further negative
effect. To solve this problem, a new framework called RMSE-ELM is proposed in
this paper. It is a two-layer recursive model. In the first layer, the
framework trains lots of ELMs in different groups concurrently, then employs
selective ensemble to pick out an optimal set of ELMs in each group, which can
be merged into a large group of ELMs called candidate pool. In the second
layer, selective ensemble is recursively used on candidate pool to acquire the
final ensemble. In the experiments, we apply UCI blended datasets to confirm
the robustness of our new approach in two key aspects (mean square error and
standard deviation). The space complexity of our method is somewhat increased,
but the results show that RMSE-ELM significantly improves robustness at only a
slight cost in computational time compared with representative methods (ELM,
OP-ELM, GASEN-ELM, GASEN-BP and E-GASEN). It is thus a potential framework for
solving the robustness issue of ELM on high-dimensional blended data in the future.
Comment: Accepted for publication in Mathematical Problems in Engineering, 09/22/201
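The recursive two-layer scheme is not reproduced here, but its building block — selective ensembling, i.e. picking the subset of independently trained ELMs that minimizes validation error and averaging their predictions — can be sketched as follows. Greedy forward selection is used as a simple stand-in for the selection method; all names and data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def elm_fit(X, y, n_hidden=20):
    # one member ELM: random hidden layer, analytic output weights
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta = np.linalg.pinv(np.tanh(X @ W + b)) @ y
    return lambda Xn: np.tanh(Xn @ W + b) @ beta

def selective_ensemble(models, X_val, y_val, k):
    # greedy forward selection: repeatedly add the member whose
    # inclusion gives the lowest validation MSE of the running average
    preds = [m(X_val) for m in models]
    chosen, current = [], np.zeros_like(y_val, dtype=float)
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i, p in enumerate(preds):
            if i in chosen:
                continue
            avg = (current * len(chosen) + p) / (len(chosen) + 1)
            err = np.mean((avg - y_val) ** 2)
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
        current = (current * (len(chosen) - 1) + preds[best_i]) / len(chosen)
    return [models[i] for i in chosen]

# noisy toy regression data (stand-in for a blended data set)
X = rng.uniform(-1, 1, (300, 3))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.3, 300)
pool = [elm_fit(X[:200], y[:200]) for _ in range(15)]
subset = selective_ensemble(pool, X[200:], y[200:], k=5)
ens_pred = np.mean([m(X[200:]) for m in subset], axis=0)
```

In RMSE-ELM this selection is applied once per group and then recursively on the pooled survivors; the sketch shows only a single pass.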
An empirical learning-based validation procedure for simulation workflow
A simulation workflow is a top-level model for the design and control of a
simulation process. It connects multiple simulation components under timing and
interaction restrictions to form a complete simulation system. Before the
construction and evaluation of the component models, the validation of
upper-layer simulation workflow is of the most importance in a simulation
system. However, methods specifically for validating simulation workflows are
very limited; many of the existing validation techniques are domain-dependent,
relying on cumbersome questionnaire design and expert scoring. Therefore, this
paper presents an empirical learning-based validation procedure to implement a
semi-automated evaluation of simulation workflows. First, representative
features of general simulation workflow and their relations with validation
indices are proposed. The calculation process of workflow credibility based on
Analytic Hierarchy Process (AHP) is then introduced. In order to make full use
of the historical data and implement more efficient validation, four learning
algorithms, including the back-propagation neural network (BPNN), extreme learning
machine (ELM), evolving neo-fuzzy neuron (eNFN), and fast incremental Gaussian
mixture model (FIGMN), are introduced for constructing the empirical relation between
the workflow credibility and its features. A case study on a landing-process
simulation workflow is established to test the feasibility of the proposed
procedure. The experimental results also provide a useful overview of
state-of-the-art learning algorithms for the credibility evaluation of
simulation models.
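The AHP step mentioned above turns pairwise importance judgments over validation indices into a weight vector, conventionally via the principal eigenvector of the comparison matrix. A minimal sketch, assuming three hypothetical validation indices and a Saaty-style 1-9 scale; the matrix entries are invented for illustration.

```python
import numpy as np

# pairwise comparison matrix for three hypothetical validation indices:
# entry [i, j] says how much more important index i is than index j
A = np.array([
    [1.0, 3.0, 5.0],
    [1 / 3, 1.0, 3.0],
    [1 / 5, 1 / 3, 1.0],
])

# AHP priority weights: principal eigenvector, normalized to sum to 1
vals, vecs = np.linalg.eig(A)
idx = np.argmax(np.real(vals))
w = np.real(vecs[:, idx])
w = w / w.sum()

# consistency check: CI = (lambda_max - n) / (n - 1), divided by the
# random index RI (0.58 for a 3x3 matrix)
lam_max = np.real(vals[idx])
ci = (lam_max - 3) / (3 - 1)
cr = ci / 0.58
```

A consistency ratio below 0.1 is conventionally taken to mean the pairwise judgments are acceptably consistent; the resulting weights can then be combined with per-index scores to produce the workflow credibility value.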
Machine Learning and Integrative Analysis of Biomedical Big Data.
Recent developments in high-throughput technologies have accelerated the accumulation of massive amounts of omics data from multiple sources: genome, epigenome, transcriptome, proteome, metabolome, etc. Traditionally, data from each source (e.g., genome) is analyzed in isolation using statistical and machine learning (ML) methods. Integrative analysis of multi-omics and clinical data is key to new biomedical discoveries and advancements in precision medicine. However, data integration poses new computational challenges and exacerbates those associated with single-omics studies. Specialized computational approaches are required to effectively and efficiently perform integrative analysis of biomedical data acquired from diverse modalities. In this review, we discuss state-of-the-art ML-based approaches for tackling five specific computational challenges associated with integrative analysis: the curse of dimensionality, data heterogeneity, missing data, class imbalance, and scalability issues.