Activity recognition from videos with parallel hypergraph matching on GPUs
In this paper, we propose a method for activity recognition from videos based
on sparse local features and hypergraph matching. We benefit from special
properties of the temporal domain in the data to derive a sequential and fast
graph matching algorithm for GPUs.
Graphs and hypergraphs are traditionally used to recognize complex and often
non-rigid patterns in computer vision, either through graph matching or
point-set matching with graphs. Most formulations resort to the minimization of
a difficult discrete energy function mixing geometric or structural terms with
data-attached terms involving appearance features. Traditional methods solve
this minimization problem approximately, for instance with spectral techniques.
In this work, instead of solving the problem approximately, the exact solution
for the optimal assignment is calculated in parallel on GPUs. The graphical
structure is simplified and regularized, which allows us to derive an efficient
recursive minimization algorithm. The algorithm distributes subproblems over
the calculation units of a GPU, which solves them in parallel, allowing the
system to run faster than real time on medium-end GPUs.
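To make the idea of a recursive, parallelizable exact minimization concrete, the sketch below shows a chain-structured matching dynamic program in Python/NumPy. It is only an illustration under assumed inputs (per-node appearance costs `unary`, per-edge geometric costs `pairwise`); the paper's actual energy terms, graph regularization and CUDA kernels are not reproduced here, and the vectorized inner step merely stands in for the subproblems a GPU would solve in parallel.

```python
# Illustrative sketch (not the authors' code): exact assignment of T model
# nodes to K scene candidates when the regularized structure reduces to a
# temporal chain.  The dynamic program below is a serial analogue of the kind
# of recursion that can be distributed over GPU threads; the vectorized inner
# step stands in for the per-candidate subproblems solved in parallel.
import numpy as np

def chain_matching(unary, pairwise):
    """unary: (T, K) appearance costs; pairwise: (T-1, K, K) geometric costs
    between consecutive assignments.  Returns the minimum-energy assignment."""
    T, K = unary.shape
    cost = unary[0].copy()                 # best cost of a prefix ending in each candidate
    back = np.zeros((T, K), dtype=int)     # back-pointers for path recovery
    for t in range(1, T):
        # All K x K transitions evaluated at once -- the step a GPU parallelizes.
        total = cost[:, None] + pairwise[t - 1] + unary[t][None, :]
        back[t] = np.argmin(total, axis=0)
        cost = np.min(total, axis=0)
    path = [int(np.argmin(cost))]          # backtrack the optimal assignment
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1], float(np.min(cost))
```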
A tabu search heuristic for the Equitable Coloring Problem
The Equitable Coloring Problem is a variant of the Graph Coloring Problem in
which the sizes of any two color classes differ by at most one. This additional
condition, called the equity constraint, arises naturally in several
applications. Due to the hardness of the problem, current exact algorithms
cannot solve large instances, which must therefore be addressed with heuristic
methods. In this paper we present a tabu search heuristic for the Equitable
Coloring Problem. The algorithm is an adaptation of the dynamic TabuCol version
of Galinier and Hao, with new local search criteria introduced to satisfy the
equity constraint. Computational experiments are carried out to find the best
combination of the parameters involved in the dynamic tenure of the heuristic.
Finally, we show the good performance of our heuristic on known benchmark
instances.
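As a rough illustration of how an equity-aware, TabuCol-style move selection might look, the following Python sketch evaluates single-vertex recolorings, discards moves that would violate the equity constraint, and applies a simple tabu tenure. The data structures, tenure rule and move filter are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch of one equity-aware tabu-search step.  A vertex involved
# in a conflict may be recolored only if the move is not tabu and the resulting
# class sizes still differ by at most one.
import random

def tabu_step(graph, coloring, k, tabu, it, tenure=10):
    """graph: dict vertex -> set of neighbours; coloring: dict vertex -> color in 0..k-1;
    tabu: dict (vertex, color) -> iteration until which that move stays forbidden."""
    sizes = [0] * k
    for c in coloring.values():
        sizes[c] += 1
    best = None                                # (conflict delta, vertex, new color)
    for v, c in coloring.items():
        conflicts_v = sum(1 for u in graph[v] if coloring[u] == c)
        if conflicts_v == 0:
            continue                           # only conflicting vertices are moved
        for d in range(k):
            if d == c or tabu.get((v, d), -1) >= it:
                continue
            new_sizes = sizes.copy()           # equity check: class sizes after the move
            new_sizes[c] -= 1
            new_sizes[d] += 1
            if max(new_sizes) - min(new_sizes) > 1:
                continue
            delta = sum(1 for u in graph[v] if coloring[u] == d) - conflicts_v
            if best is None or delta < best[0]:
                best = (delta, v, d)
    if best is not None:
        delta, v, d = best
        tabu[(v, coloring[v])] = it + tenure + random.randint(0, 5)  # forbid the reverse move
        coloring[v] = d
    return coloring
```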
Bidirectional branch and bound for controlled variable selection. Part III: local average loss minimization
The selection of controlled variables (CVs) from available measurements through
exhaustive search is computationally forbidding for large-scale processes. We
have recently proposed novel bidirectional branch and bound (B-3) approaches for
CV selection using the minimum singular value (MSV) rule and the local worst-
case loss criterion in the framework of self-optimizing control. However, the
MSV rule is approximate and the worst-case scenario may not occur frequently in
practice. Thus, CV selection by minimizing the local average loss can be deemed
the most reliable. In this work, the B-3 approach is extended to CV selection
based on the local average loss metric. Lower bounds on the local average loss,
together with fast pruning and branching algorithms, are derived for an
efficient B-3 algorithm. Random matrices and a binary distillation column case
study are used to demonstrate the computational efficiency of the proposed
method.
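The skeleton below shows the generic subset-selection branch and bound that such a method builds on, with pruning driven by a user-supplied lower bound. It is a structural sketch only: the paper's contribution is the specific bidirectional bounds and pruning/branching rules for the local average loss, which are not reproduced here, and the `loss` and `lower_bound` callables are assumed placeholders.

```python
# Structural sketch of a subset-selection branch and bound with lower-bound
# pruning.  `loss` and `lower_bound` are assumed placeholders; the paper's
# bidirectional bounds for the local average loss are not reproduced here.
def branch_and_bound(m, n, loss, lower_bound):
    """Select n of m candidate measurements minimizing loss(subset).
    lower_bound(partial, remaining) must under-estimate the loss of any
    completion of `partial` using indices from `remaining`."""
    best_loss, best_set = float("inf"), None

    def recurse(partial, start):
        nonlocal best_loss, best_set
        if len(partial) == n:
            value = loss(partial)
            if value < best_loss:
                best_loss, best_set = value, list(partial)
            return
        remaining = list(range(start, m))
        if len(partial) + len(remaining) < n:
            return                             # not enough candidates left
        if lower_bound(partial, remaining) >= best_loss:
            return                             # prune: no completion can beat the incumbent
        for i in remaining:
            recurse(partial + [i], i + 1)

    recurse([], 0)
    return best_set, best_loss
```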
Implementation of novel methods of global and nonsmooth optimization: GANSO programming library
We discuss the implementation of a number of modern methods of global and nonsmooth continuous optimization, based on the ideas of Rubinov, in the programming library GANSO. GANSO implements the derivative-free bundle method, the extended cutting angle method, dynamical system-based optimization, and various combinations and heuristics of these methods. We outline the main ideas behind each method and report on the interfacing with the Matlab and Maple packages.
Combinatorial optimization and metaheuristics
Today, combinatorial optimization is one of the youngest and most active areas of discrete mathematics. It is a branch of optimization in applied mathematics and computer science, related to operational research, algorithm theory and computational complexity theory. It sits at the intersection of several fields, including artificial intelligence, mathematics and software engineering. The growing interest in the area arises from the fact that a large number of scientific and industrial problems can be formulated as abstract combinatorial optimization problems, through graphs and/or (integer) linear programs. Some of these problems have polynomial-time (“efficient”) algorithms, while most of them are NP-hard, i.e. no polynomial-time algorithm is known for them. In practice, this means that finding an exact solution cannot be guaranteed within reasonable time, and one has to settle for an approximate solution with known performance guarantees. Indeed, the goal of approximate methods is to find, “quickly” (in reasonable run-times) and with “high” probability, provably “good” solutions (with low error relative to the true optimum). In the last 20 years, a new class of algorithms commonly called metaheuristics has emerged, which combine heuristics in high-level frameworks aimed at efficiently and effectively exploring the search space. This report briefly outlines the components, concepts, advantages and disadvantages of different metaheuristic approaches from a conceptual point of view, in order to analyze their similarities and differences. The two very significant forces of intensification and diversification, which largely determine the behavior of a metaheuristic, are pointed out. The report concludes by exploring the importance of hybridization and integration methods.
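To make the interplay of intensification and diversification concrete, the short Python sketch below shows a generic iterated local search: the inner descent intensifies around the current solution, while the perturbation and the occasional acceptance of worse solutions diversify the search. The problem-specific pieces (`cost`, `neighbours`, `perturb`) are assumed to be supplied by the user; this is a textbook skeleton, not a method taken from the report.

```python
# Textbook iterated-local-search skeleton: the inner descent intensifies,
# the perturbation (and occasional acceptance of worse solutions) diversifies.
# `cost`, `neighbours` and `perturb` are assumed, problem-specific callables.
import random

def iterated_local_search(initial, cost, neighbours, perturb, iterations=100):
    def local_search(s):                       # intensification: descend to a local optimum
        improved = True
        while improved:
            improved = False
            for t in neighbours(s):
                if cost(t) < cost(s):
                    s, improved = t, True
                    break
        return s

    best = current = local_search(initial)
    for _ in range(iterations):
        candidate = local_search(perturb(current))   # diversify, then intensify again
        if cost(candidate) < cost(best):
            best = candidate
        if cost(candidate) < cost(current) or random.random() < 0.05:
            current = candidate                # occasionally accept a worse solution
    return best
```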
Bayesian Discovery of Multiple Bayesian Networks via Transfer Learning
Bayesian network structure learning algorithms with limited data are being
used in domains such as systems biology and neuroscience to gain insight into
the underlying processes that produce observed data. Learning reliable networks
from limited data is difficult; transfer learning can therefore improve the
robustness of learned networks by leveraging data from related tasks. Existing
transfer learning algorithms for Bayesian network structure learning give a
single maximum a posteriori estimate of network models. Yet, many other models
may be equally likely, and so a more informative result is provided by Bayesian
structure discovery. Bayesian structure discovery algorithms estimate posterior
probabilities of structural features, such as edges. We present transfer
learning for Bayesian structure discovery which allows us to explore the shared
and unique structural features among related tasks. Efficient computation
requires that our transfer learning objective factors into local calculations,
which we prove is satisfied by a broad class of transfer biases. Theoretically,
we show the efficiency of our approach. Empirically, we show that, compared to
single-task learning, transfer learning is better able to positively identify
true edges. We apply the method to whole-brain neuroimaging data.
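A minimal sketch of what a locally factored transfer bias could look like is given below: for a single node, every candidate parent set receives the task's own score plus a bias rewarding edges also present in a related task, and the weights are normalized into posterior edge probabilities. The notation (`local_score`, `related_edges`, the additive bias) is an assumption for illustration, not the paper's exact formulation.

```python
# Illustrative per-node calculation (assumed notation): each candidate parent
# set gets the task's own score plus a transfer bias rewarding edges present in
# a related task; weights are normalized into posterior edge probabilities.
import itertools
import math

def edge_posteriors(node, candidates, local_score, related_edges, strength=1.0):
    """candidates: variables allowed as parents of `node`.
    local_score(node, parents): log marginal-likelihood score for this task.
    related_edges: set of (parent, node) pairs believed present in a related task."""
    log_weights, parent_sets = [], []
    for r in range(len(candidates) + 1):
        for parents in itertools.combinations(candidates, r):
            bias = strength * sum(1 for p in parents if (p, node) in related_edges)
            log_weights.append(local_score(node, parents) + bias)
            parent_sets.append(parents)
    m = max(log_weights)                       # normalize in log space for stability
    weights = [math.exp(w - m) for w in log_weights]
    z = sum(weights)
    posterior = {}
    for w, parents in zip(weights, parent_sets):
        for p in parents:
            posterior[(p, node)] = posterior.get((p, node), 0.0) + w / z
    return posterior                           # (parent, node) -> posterior edge probability
```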
Heuristic algorithms for the Longest Filled Common Subsequence Problem
At CPM 2017, Castelli et al. define and study a new variant of the Longest
Common Subsequence Problem, termed the Longest Filled Common Subsequence
Problem (LFCS). In the LFCS problem, the input consists of two strings A and B
and a multiset M of characters. The goal is to insert the characters from M
into the string B, thus obtaining a new string B*, such that the Longest Common
Subsequence (LCS) between A and B* is maximized. Castelli et al. show that the
problem is NP-hard and provide a 3/5-approximation algorithm for it.
In this paper we study the problem from the experimental point of view. We
introduce, implement and test new heuristic algorithms and compare them with
the approximation algorithm of Castelli et al. Moreover, we introduce an
Integer Linear Programming (ILP) model for the problem and use the
state-of-the-art ILP solver Gurobi to obtain exact solutions for moderately
sized instances.