Activity recognition from videos with parallel hypergraph matching on GPUs
In this paper, we propose a method for activity recognition from videos based
on sparse local features and hypergraph matching. We benefit from special
properties of the temporal domain in the data to derive a sequential and fast
graph matching algorithm for GPUs.
Graphs and hypergraphs are frequently used to recognize
complex and often non-rigid patterns in computer vision, either through graph
matching or point-set matching with graphs. Most formulations resort to the
minimization of a difficult discrete energy function mixing geometric or
structural terms with data-attached terms involving appearance features.
Traditional methods solve this minimization problem approximately, for instance
with spectral techniques.
In this work, instead of solving the problem approximately, the exact
solution for the optimal assignment is calculated in parallel on GPUs. The
graphical structure is simplified and regularized, which makes it possible to
derive an efficient recursive minimization algorithm. The algorithm distributes
subproblems over the calculation units of a GPU, which solves them in parallel,
allowing the system to run faster than real time on medium-end GPUs.
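A minimal sketch of the kind of exact sequential minimization described above: if the simplified graphical structure is a chain over time, the optimal assignment can be computed exactly by dynamic programming, and each inner minimization (one per candidate at each step) is an independent subproblem of the sort that can be distributed over GPU cores. The chain simplification and all names here are illustrative assumptions, not the paper's exact formulation.

```python
def chain_matching(unary, pairwise):
    """Exact minimum-energy assignment on a chain of T model nodes,
    each matched to one of K scene candidates.
    unary[t][j]       : appearance cost of matching node t to candidate j.
    pairwise[t][i][j] : geometric cost of (i at node t, j at node t+1).
    Returns (optimal energy, assignment list of length T)."""
    T, K = len(unary), len(unary[0])
    cost, back = list(unary[0]), []
    for t in range(1, T):
        step_cost, step_ptr = [], []
        for j in range(K):  # independent subproblems: one per candidate j
            best_i = min(range(K), key=lambda i: cost[i] + pairwise[t - 1][i][j])
            step_cost.append(cost[best_i] + pairwise[t - 1][best_i][j] + unary[t][j])
            step_ptr.append(best_i)
        cost, back = step_cost, back + [step_ptr]
    # backtrack the optimal assignment from the best final state
    j = min(range(K), key=cost.__getitem__)
    energy, path = cost[j], [j]
    for step_ptr in reversed(back):
        j = step_ptr[j]
        path.append(j)
    return energy, path[::-1]
```

On a GPU, the inner loop over `j` (and the inner `min` over `i`) would run as parallel threads; the outer loop over `t` remains sequential, which is the structure the abstract exploits.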
Statistical Physics of Hard Optimization Problems
Optimization is fundamental in many areas of science, from computer science
and information theory to engineering and statistical physics, as well as
biology and the social sciences. It typically involves a large number of
variables and a cost function depending on these variables. Optimization
problems in the NP-complete class are particularly difficult: it is believed
that, in the most difficult cases, the number of operations required to
minimize the cost function grows exponentially with the system size. However,
even for an NP-complete problem, the instances arising in practice might in
fact be easy to solve. The
principal question we address in this thesis is: How to recognize if an
NP-complete constraint satisfaction problem is typically hard and what are the
main reasons for this? We adopt approaches from the statistical physics of
disordered systems, in particular the cavity method developed originally to
describe glassy systems. We describe new properties of the space of solutions
in two of the most studied constraint satisfaction problems - random
satisfiability and random graph coloring. We suggest a relation between the
existence of the so-called frozen variables and the algorithmic hardness of a
problem. Based on these insights, we introduce a new class of problems which we
named "locked" constraint satisfaction, whose statistical description is
easily solvable but which, from the algorithmic point of view, is even more
challenging than canonical satisfiability.
Comment: PhD thesis
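As a toy illustration of "typically hard" random instances (not the cavity method itself), one can generate random 3-SAT at a given clause-to-variable ratio and attack it with a simple local search; instances become dramatically harder as the ratio approaches the satisfiability threshold (roughly 4.27 for 3-SAT). The generator and WalkSAT-style solver below are standard textbook sketches, with all names our own.

```python
import random

def random_3sat(n_vars, alpha, rng):
    """Random 3-SAT instance: alpha * n_vars clauses of 3 distinct literals.
    A clause is a list of (variable index, negated?) pairs."""
    clauses = []
    for _ in range(int(alpha * n_vars)):
        trio = rng.sample(range(n_vars), 3)
        clauses.append([(v, rng.random() < 0.5) for v in trio])
    return clauses

def satisfied(clause, assign):
    # a literal (v, neg) is true when assign[v] differs from its negation flag
    return any(assign[v] != neg for v, neg in clause)

def walksat(clauses, n_vars, rng, max_flips=100000, p=0.5):
    """Simple WalkSAT local search; returns a model or None on timeout."""
    assign = [rng.random() < 0.5 for _ in range(n_vars)]
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c, assign)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:  # random-walk move
            var = rng.choice(clause)[0]
        else:                 # greedy move: flip the variable breaking fewest clauses
            def broken_after_flip(v):
                assign[v] = not assign[v]
                n_broken = sum(not satisfied(c, assign) for c in clauses)
                assign[v] = not assign[v]
                return n_broken
            var = min((v for v, _ in clause), key=broken_after_flip)
        assign[var] = not assign[var]
    return None
```

At low density (e.g. alpha = 2) such instances are solved almost instantly; near the threshold the same search stalls, which is the empirical face of the hardness this thesis analyses.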
Supervised classification and mathematical optimization
Data Mining techniques often require solving optimization problems. Supervised Classification and, in particular, Support Vector Machines can be seen as a paradigmatic instance. In this paper, some links between Mathematical Optimization methods and Supervised Classification are emphasized. It is shown that many different areas of Mathematical Optimization play a central role in off-the-shelf Supervised Classification methods. Moreover, Mathematical Optimization turns out to be extremely useful for addressing important issues in Classification, such as identifying relevant variables, improving the interpretability of classifiers, or dealing with vagueness/noise in the data.
Ministerio de Ciencia e Innovación; Junta de Andalucía
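To make the SVM-optimization link concrete: the soft-margin linear SVM minimizes lam/2 * ||w||^2 plus the average hinge loss, an optimization problem that can be attacked with a plain subgradient method. The sketch below is an illustrative toy (constant step size, no bells and whistles), not a method from the paper.

```python
def svm_sgd(X, y, lam=0.01, eta=0.1, epochs=100):
    """Soft-margin linear SVM: minimize lam/2 * ||w||^2 + average hinge loss
    by stochastic subgradient descent. Labels y must be in {-1, +1}."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            score = sum(wj * xj for wj, xj in zip(w, xi)) + b
            active = yi * score < 1  # hinge term has a nonzero subgradient here
            w = [wj - eta * (lam * wj - (yi * xj if active else 0.0))
                 for wj, xj in zip(w, xi)]
            if active:
                b += eta * yi
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

Each design choice in this tiny solver (regularizer, loss, step-size rule) corresponds to one of the optimization ingredients the paper surveys.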
Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning
Intrinsically motivated spontaneous exploration is a key enabler of
autonomous lifelong learning in human children. It enables the discovery and
acquisition of large repertoires of skills through self-generation,
self-selection, self-ordering and self-experimentation of learning goals. We
present an algorithmic approach called Intrinsically Motivated Goal Exploration
Processes (IMGEP) to enable similar properties of autonomous or self-supervised
learning in machines. The IMGEP algorithmic architecture relies on several
principles: 1) self-generation of goals, generalized as fitness functions; 2)
selection of goals based on intrinsic rewards; 3) exploration with incremental
goal-parameterized policy search and exploitation of the gathered data with a
batch learning algorithm; 4) systematic reuse of information acquired when
targeting a goal for improving towards other goals. We present a particularly
efficient form of IMGEP, called Modular Population-Based IMGEP, that uses a
population-based policy and an object-centered modularity in goals and
mutations. We provide several implementations of this architecture and
demonstrate their ability to automatically generate a learning curriculum
within several experimental setups including a real humanoid robot that can
explore multiple spaces of goals with several hundred continuous dimensions.
While no particular target goal is provided to the system, this curriculum
allows the discovery of skills that act as stepping stones for learning more
complex skills, e.g. nested tool use. We show that learning diverse spaces of
goals with intrinsic motivations is more efficient for learning complex skills
than trying to learn these complex skills directly.
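The four principles listed above can be sketched in a toy goal-exploration loop. This is a deliberately minimal illustration under our own assumptions (a 2-D parameter and outcome space, nearest-outcome policy selection, Gaussian mutation), not the paper's Modular Population-Based IMGEP.

```python
import random

def dist(a, b):
    return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5

def imgep(env, goal_space, n_iters=200, rng=None):
    """Toy goal-exploration loop illustrating principles 1-4.
    env: policy parameters -> observed outcome (both lists of floats).
    goal_space: list of (low, high) bounds per outcome dimension."""
    rng = rng or random.Random(0)
    population = []                      # stored (params, outcome) pairs
    for _ in range(10):                  # bootstrap with random policies
        params = [rng.uniform(-1, 1) for _ in range(2)]
        population.append((params, env(params)))
    for _ in range(n_iters):
        # 1) self-generate a goal in the outcome space
        goal = [rng.uniform(lo, hi) for lo, hi in goal_space]
        # 2)+3) select the stored policy whose outcome is closest to the
        #       goal and mutate it (population-based policy search)
        params, _ = min(population, key=lambda po: dist(po[1], goal))
        new_params = [p + rng.gauss(0, 0.1) for p in params]
        # 4) store the result regardless of the goal it was aimed at,
        #    so the data can be reused when targeting other goals
        population.append((new_params, env(new_params)))
    return population
```

Even this stripped-down loop exhibits the key property: the population spreads over the outcome space without any single target goal ever being specified.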
Inner Parallel Sets in Mixed-Integer Optimization
This thesis contains an extensive study of inner parallel sets in mixed-integer optimization. Inner parallel sets are a recent idea in this context and offer a way to mitigate the difficulties imposed by integrality constraints, by guaranteeing the feasibility of roundings of their (continuous) elements. To use inner parallel sets algorithmically, various modifications, such as enlargements and inner and outer approximations, are helpful and sometimes even necessary. Such ideas are introduced and investigated in this thesis, both theoretically and computationally.
From our theoretical study of inner parallel sets emerge a number of feasible rounding approaches which mainly focus on the computation of good feasible points for mixed-integer linear and nonlinear minimization problems. Good feasible points are useful in the context of solving these problems by providing tight upper bounds on the objective value. In especially difficult cases, feasible rounding approaches may also be considered as an alternative to solving a problem.
The contributions of this thesis include a thorough discussion of possibilities for enlarging inner parallel sets in the linear as well as in the nonlinear setting. Moreover, we introduce a novel cutting plane method based on inner parallel sets for mixed-integer convex minimization problems. In addition to computing a good feasible point, this method also provides a lower bound on the objective value, which is another important ingredient for solving such minimization problems. We study the possibility of dealing with equality constraints on integer variables, which at first glance seem to prevent a nonempty inner parallel set. In the presence of such constraints, we show that inner parallel sets can be nonempty in a reduced variable space, which allows the application of feasible rounding approaches. Finally, we investigate the behavior of inner parallel sets when integrated into search trees. Our study gives rise to a novel diving method which turns out to be a major improvement over standalone feasible rounding approaches.
We test the introduced methods on standard libraries for mixed-integer linear, convex and nonconvex minimization problems in several separate computational studies. The computational results illustrate the potential of our ideas.
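The core feasibility guarantee behind feasible rounding approaches can be sketched for the purely integer linear case: shrinking each constraint a_i^T x <= b_i by ||a_i||_1 / 2 yields an inner parallel set, and a component-wise rounding of any of its points remains feasible for the original constraints, since each coordinate moves by at most 1/2. The function names below are illustrative; the thesis treats the general mixed-integer case together with enlargements and approximations.

```python
def in_inner_parallel_set(A, b, x):
    """True if x satisfies A x <= b with each row shrunk by ||a_i||_1 / 2.
    Any component-wise rounding of such an x then satisfies A x <= b,
    because rounding changes x_j by at most 1/2 in each coordinate."""
    return all(
        sum(aij * xj for aij, xj in zip(row, x))
        <= bi - sum(abs(aij) for aij in row) / 2
        for row, bi in zip(A, b)
    )

def feasible_rounding(A, b, x):
    """Round a point of the inner parallel set to a feasible integer point."""
    assert in_inner_parallel_set(A, b, x)
    return [round(xj) for xj in x]
```

In the thesis's setting, a continuous point of the (possibly enlarged) inner parallel set is first computed, e.g. by solving a relaxation, and rounding it then yields the good feasible point used as an upper bound.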