Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operations research, algorithm theory and computational complexity theory, and it sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. The growing interest in it arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems admit polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm for them is known. In practice this means that finding an exact solution cannot be guaranteed within reasonable time, and one often settles for an approximate solution with known performance guarantees. The goal of approximate methods is to find provably “good” solutions (with low error relative to the true optimum) “quickly” (in reasonable run-times) and with “high” probability. In the last 20 years, a new kind of algorithm, commonly called metaheuristics, has emerged in this class; these methods combine heuristics in high-level frameworks aimed at exploring the search space efficiently and effectively. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
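The trade-off between intensification and diversification that the report highlights can be made concrete with a minimal simulated annealing sketch in Python (not taken from the report; the cost function, neighbor move and cooling schedule are illustrative assumptions): the temperature controls how often worse moves are accepted, so the hot early phase diversifies the search while the cool late phase intensifies it.

    import math
    import random

    def simulated_annealing(cost, neighbor, x0, t0=1.0, alpha=0.995, steps=10_000):
        """Minimal metaheuristic loop. High temperature favors diversification
        (worse moves are often accepted); low temperature favors intensification
        (the search settles into a single basin)."""
        x, t = x0, t0
        best = x
        for _ in range(steps):
            y = neighbor(x)
            delta = cost(y) - cost(x)
            # Always accept improvements; accept worse moves with a
            # probability that shrinks as the temperature cools.
            if delta <= 0 or random.random() < math.exp(-delta / t):
                x = y
                if cost(x) < cost(best):
                    best = x
            t *= alpha  # geometric cooling schedule
        return best

    # Toy usage: minimize a one-dimensional integer cost function.
    print(simulated_annealing(cost=lambda x: (x - 7) ** 2,
                              neighbor=lambda x: x + random.choice([-1, 1]),
                              x0=100))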
Massively-Parallel Feature Selection for Big Data
We present the Parallel, Forward-Backward with Pruning (PFBP) algorithm for feature selection (FS) in Big Data settings (high dimensionality and/or sample size). To tackle the challenges of Big Data FS, PFBP partitions the data matrix both in terms of rows (samples, training examples) and columns (features). By employing the concepts of p-values of conditional independence tests and meta-analysis techniques, PFBP manages to rely only on computations local to a partition while minimizing communication costs. It then employs powerful and safe (asymptotically sound) heuristics to make early, approximate decisions, such as Early Dropping of features from consideration in subsequent iterations, Early Stopping of consideration of features within the same iteration, and Early Return of the winner in each iteration. PFBP provides asymptotic guarantees of optimality for data distributions faithfully representable by a causal network (Bayesian network or maximal ancestral graph). Our empirical analysis confirms a super-linear speedup of the algorithm with increasing sample size and linear scalability with respect to the number of features and processing cores, while dominating other competitive algorithms in its class.
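The combination of local p-values via meta-analysis that the abstract describes can be illustrated with Fisher's method, a classical meta-analysis technique; the Python sketch below is a stand-in under that assumption, not necessarily the exact combination rule PFBP uses.

    from math import log
    from scipy.stats import chi2

    def fisher_combine(pvalues):
        """Fisher's method: merge independent p-values, e.g. from
        conditional-independence tests run locally on each data partition,
        into a single global p-value. Under the global null hypothesis,
        -2 * sum(ln p_i) follows a chi-square law with 2k degrees of freedom."""
        k = len(pvalues)
        stat = -2.0 * sum(log(max(p, 1e-300)) for p in pvalues)  # guard log(0)
        return chi2.sf(stat, df=2 * k)

    # Toy usage: p-values computed independently on three partitions.
    print(fisher_combine([0.04, 0.01, 0.20]))

Because each partition only ships one number (its p-value) to the combiner, this style of computation keeps communication costs low, which is the property the abstract emphasizes.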
A Comparison of Nature Inspired Algorithms for Multi-threshold Image Segmentation
In the field of image analysis, segmentation is one of the most important preprocessing steps. One way to achieve segmentation is by means of threshold selection, where each pixel that belongs to a given class is labeled according to the selected threshold, yielding groups of pixels that share visual characteristics in the image. Several methods have been proposed to solve threshold selection problems; in this work, the 1D histogram of a gray-level image is approximated by a mixture of Gaussian functions whose parameters are calculated using three nature-inspired algorithms (Particle Swarm Optimization, Artificial Bee Colony Optimization and Differential Evolution). Each Gaussian function approximates the histogram, representing a pixel class and therefore a threshold point. Experimental results are presented, comparing the three algorithms in quantitative and qualitative fashion and discussing the main advantages and drawbacks of each when applied to the multi-threshold problem.
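A minimal Python sketch of the histogram-mixture idea follows; scikit-learn's EM fitter stands in for the PSO/ABC/DE parameter search used in the paper, and the midpoint threshold rule is a simplification assumed here, not taken from the article.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmm_thresholds(gray_values, n_classes=3):
        """Fit a mixture of Gaussians to the 1D intensity distribution and
        place one threshold between each pair of adjacent class means.
        The paper estimates the mixture parameters with nature-inspired
        algorithms; EM is used here only as a convenient stand-in."""
        gmm = GaussianMixture(n_components=n_classes)
        gmm.fit(np.asarray(gray_values, dtype=float).reshape(-1, 1))
        means = np.sort(gmm.means_.ravel())
        # Simplified rule: threshold at the midpoint of adjacent means.
        return (means[:-1] + means[1:]) / 2.0

    # Toy usage: synthetic pixels drawn from three intensity populations.
    rng = np.random.default_rng(0)
    pixels = np.concatenate([rng.normal(60, 10, 5000),
                             rng.normal(128, 12, 5000),
                             rng.normal(200, 8, 5000)])
    print(gmm_thresholds(pixels))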
Data, problems, heuristics and results in cognitive metaphor research
Cognitive metaphor research is characterised by the diversity of rival theories. Starting from this observation, the paper focuses on the problem of how the unity and diversity of cognitive theories of metaphor can be accounted for. The first part of the paper outlines a suitable metascientific approach, which emerges as a modification of B. von Eckardt's notion of a research framework. In the second part, with the help of this approach, some aspects of the sophisticated relationship between Lakoff and Johnson's, Glucksberg's, and Gentner's theories are discussed. The main finding is that the data, problems, heuristics and hypotheses which have been partly shaped by the rival theories contribute to a considerable extent to the development of the particular theories.
Metaheuristic design of feedforward neural networks: a review of two decades of research
Over the past two decades, feedforward neural network (FNN) optimization has been a key interest among researchers and practitioners in multiple disciplines. FNN optimization is viewed from various perspectives: the optimization of weights, network architecture, activation nodes, learning parameters, the learning environment, etc. Researchers have adopted such different viewpoints mainly to improve the FNN's generalization ability. Gradient-descent algorithms such as backpropagation have been widely applied to optimize FNNs, and their success is evident from the FNN's application to numerous real-world problems. However, due to the limitations of gradient-based optimization methods, metaheuristic algorithms, including evolutionary algorithms, swarm intelligence, etc., are still being widely explored by researchers aiming to obtain a well-generalized FNN for a given problem. This article attempts to summarize a broad spectrum of FNN optimization methodologies, including conventional and metaheuristic approaches. It also tries to connect various research directions that emerged from FNN optimization practices, such as evolving neural networks (NN), cooperative coevolution NN, complex-valued NN, deep learning, extreme learning machines, quantum NN, etc. Additionally, it provides interesting research challenges for future research to cope with the present information-processing era.
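As a concrete illustration of metaheuristic, gradient-free FNN weight optimization, here is a small Python sketch; the (1 + lambda) evolution strategy used below is just one representative of the many algorithms the article reviews, and every name in it is illustrative.

    import numpy as np

    def fnn_forward(x, w1, b1, w2, b2):
        """One-hidden-layer feedforward network with tanh activations."""
        return np.tanh(np.tanh(x @ w1 + b1) @ w2 + b2)

    def evolve_fnn(x, y, hidden=8, pop=50, gens=200, sigma=0.1, seed=0):
        """Optimize the flat weight vector with a (1 + lambda) evolution
        strategy instead of gradient descent."""
        rng = np.random.default_rng(seed)
        n_in, n_out = x.shape[1], y.shape[1]
        sizes = [n_in * hidden, hidden, hidden * n_out, n_out]

        def unpack(theta):
            w1, b1, w2, b2 = np.split(theta, np.cumsum(sizes)[:-1])
            return w1.reshape(n_in, hidden), b1, w2.reshape(hidden, n_out), b2

        def mse(theta):
            return float(np.mean((fnn_forward(x, *unpack(theta)) - y) ** 2))

        best = rng.normal(0.0, 0.5, sum(sizes))
        for _ in range(gens):
            # Mutate the incumbent, keep the best of parent and offspring.
            offspring = best + sigma * rng.normal(size=(pop, sum(sizes)))
            cand = min(offspring, key=mse)
            if mse(cand) < mse(best):
                best = cand
        return unpack(best)

    # Toy usage: learn XOR, where the gradient-free search is easy to follow.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    Y = np.array([[0], [1], [1], [0]], dtype=float)
    print(np.round(fnn_forward(X, *evolve_fnn(X, Y)), 2))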