DOPE: Distributed Optimization for Pairwise Energies
We formulate an Alternating Direction Method of Multipliers (ADMM) that
systematically distributes the computations of any technique for optimizing
pairwise functions, including non-submodular potentials. Such discrete
functions are very useful in segmentation and a breadth of other vision
problems. Our method decomposes the problem into a large set of small
sub-problems, each involving a sub-region of the image domain, which can be
solved in parallel. We achieve consistency between the sub-problems through a
novel constraint that can be used for a large class of pairwise functions. We
give an iterative numerical solution that alternates between solving the
sub-problems and updating consistency variables, until convergence. We report
comprehensive experiments, which demonstrate the benefit of our general
distributed solution in the case of the popular serial algorithm of Boykov and
Kolmogorov (BK algorithm) and, also, in the context of non-submodular
functions.
Comment: Accepted at CVPR 201
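The consensus mechanism that makes this kind of ADMM decomposition work can be illustrated on a deliberately simple convex problem. The sketch below is not the paper's pairwise-energy formulation; it applies consensus ADMM to min_x sum_i (x - a_i)^2, where each term plays the role of one sub-problem (analogous to an image sub-region), local copies are updated independently and in parallel, and a shared variable z enforces consistency between them:

```python
import numpy as np

def consensus_admm(targets, rho=1.0, iters=100):
    """Consensus ADMM for min_x sum_i (x - a_i)^2.

    Each quadratic term is a sub-problem with its own local copy x[i];
    the consensus variable z plays the role of the consistency
    constraint between sub-problems, and u holds the scaled duals.
    """
    a = np.asarray(targets, dtype=float)
    n = len(a)
    x = np.zeros(n)   # local copies, one per sub-problem
    z = 0.0           # global consensus variable
    u = np.zeros(n)   # scaled dual variables
    for _ in range(iters):
        # x-update: each sub-problem solved in closed form, independently
        # (this is the step that could run in parallel)
        x = (2.0 * a + rho * (z - u)) / (2.0 + rho)
        # z-update: reconcile the local solutions
        z = np.mean(x + u)
        # dual update: accumulate consistency violations
        u += x - z
    return z

# The minimiser of sum_i (x - a_i)^2 is the mean of the a_i.
print(consensus_admm([1.0, 2.0, 6.0]))  # -> approximately 3.0
```

The x-update is the only problem-specific step; in the paper's setting it would be a pairwise-energy solve over one sub-region rather than a scalar quadratic.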
Multiscale Markov Decision Problems: Compression, Solution, and Transfer Learning
Many problems in sequential decision making and stochastic control have
natural multiscale structure: sub-tasks are assembled together to accomplish
complex goals. Systematically inferring and leveraging hierarchical structure,
particularly beyond a single level of abstraction, has remained a longstanding
challenge. We describe a fast multiscale procedure for repeatedly compressing,
or homogenizing, Markov decision processes (MDPs), wherein a hierarchy of
sub-problems at different scales is automatically determined. Coarsened MDPs
are themselves independent, deterministic MDPs, and may be solved using
existing algorithms. The multiscale representation delivered by this procedure
decouples sub-tasks from each other and can lead to substantial improvements in
convergence rates both locally within sub-problems and globally across
sub-problems, yielding significant computational savings. A second fundamental
aspect of this work is that these multiscale decompositions yield new transfer
opportunities across different problems, where solutions of sub-tasks at
different levels of the hierarchy may be amenable to transfer to new problems.
Localized transfer of policies and potential operators at arbitrary scales is
emphasized. Finally, we demonstrate compression and transfer in a collection of
illustrative domains, including examples involving discrete and continuous
state spaces.
Comment: 86 pages, 15 figures
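The compression idea can be made concrete with a toy form of state aggregation. The sketch below is a naive stand-in, not the paper's homogenization procedure (whose coarsened MDPs are deterministic): it simply groups states, averages transitions and rewards within each group, and solves both the fine and the coarse MDP with standard value iteration. All names here are illustrative:

```python
import numpy as np

def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Standard value iteration. P: (A, S, S) transition tensor,
    R: (S,) state rewards."""
    V = np.zeros(P.shape[1])
    while True:
        Q = R + gamma * (P @ V)        # (A, S) action values
        V_new = Q.max(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def aggregate(P, R, groups):
    """Toy coarsening: average transition probabilities and rewards
    over each group of states. Stands in for the paper's (more
    sophisticated) multiscale homogenization step."""
    k, A = len(groups), P.shape[0]
    P_c = np.zeros((A, k, k))
    R_c = np.zeros(k)
    for gi, g in enumerate(groups):
        R_c[gi] = np.mean([R[s] for s in g])
        for gj, h in enumerate(groups):
            # mean over source states of total mass into group h
            P_c[:, gi, gj] = P[:, g, :][:, :, h].sum(axis=2).mean(axis=1)
    return P_c, R_c

# Toy 4-state chain: action 0 steps left, action 1 steps right,
# reward 1 in the last state.
P = np.zeros((2, 4, 4))
for s in range(4):
    P[0, s, max(s - 1, 0)] = 1.0
    P[1, s, min(s + 1, 3)] = 1.0
R = np.array([0.0, 0.0, 0.0, 1.0])

V_fine = value_iteration(P, R)
P_c, R_c = aggregate(P, R, groups=[[0, 1], [2, 3]])
V_coarse = value_iteration(P_c, R_c)  # 2-state coarsened problem
```

Even this crude coarsening preserves the qualitative structure (the rewarding macro-state has the higher value), which is what makes coarse solutions useful for warm-starting or transferring to the fine problem.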
Enhancing Cooperative Coevolution for Large Scale Optimization by Adaptively Constructing Surrogate Models
It has been shown that cooperative coevolution (CC) can effectively deal with
large scale optimization problems (LSOPs) through a divide-and-conquer
strategy. However, its performance is severely restricted by the current
context-vector-based sub-solution evaluation method since this method needs to
access the original high dimensional simulation model when evaluating each
sub-solution and thus consumes substantial computational resources. To alleviate this
issue, this study proposes an adaptive surrogate model assisted CC framework.
This framework adaptively constructs surrogate models for different
sub-problems by fully considering their characteristics. For the single
dimensional sub-problems obtained through decomposition, sufficiently accurate
surrogate models can be built and used to locate the optimal solutions of the
corresponding sub-problems directly. For the nonseparable sub-problems,
the surrogate models are employed to evaluate the corresponding sub-solutions,
and the original simulation model is only adopted to reevaluate some good
sub-solutions selected by the surrogate models. In this way, the computational
cost can be greatly reduced without significantly sacrificing evaluation
quality. Empirical studies on IEEE CEC 2010 benchmark functions show that the
concrete algorithm based on this framework is able to find much better
solutions than the conventional CC algorithms and a non-CC algorithm, while
consuming far fewer computational resources.
Comment: arXiv admin note: text overlap with arXiv:1802.0974
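The core trade the abstract describes, i.e. replacing most expensive true evaluations with a cheap fitted model and re-checking only the surrogate's best candidate, can be sketched for a single one-dimensional sub-problem. Everything below is hypothetical illustration (the function `true_subproblem` stands in for an expensive simulation; the paper's framework chooses surrogates adaptively per sub-problem):

```python
import numpy as np

def true_subproblem(x):
    """Stand-in for an expensive simulation-based evaluation of a
    1-D sub-solution (here a simple quadratic with known optimum)."""
    return (x - 1.7) ** 2 + 0.5

def optimize_with_surrogate(f, lo=-5.0, hi=5.0, n_samples=7):
    """Fit a cheap quadratic surrogate from a handful of expensive
    evaluations, minimise the surrogate in closed form, then re-check
    the candidate against the true model."""
    xs = np.linspace(lo, hi, n_samples)
    ys = np.array([f(x) for x in xs])   # the only expensive calls
    a, b, c = np.polyfit(xs, ys, 2)     # quadratic surrogate a*x^2 + b*x + c
    x_star = -b / (2 * a)               # analytic minimiser of the surrogate
    return x_star, f(x_star)            # one re-evaluation with the true model

x_star, f_star = optimize_with_surrogate(true_subproblem)
```

Here 8 true evaluations replace a full search; for the nonseparable sub-problems the abstract describes, the surrogate instead filters candidates and only the promising ones are re-evaluated exactly.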
High-dimensional Black-box Optimization via Divide and Approximate Conquer
Divide and Conquer (DC) is conceptually well suited to high-dimensional
optimization by decomposing a problem into multiple small-scale sub-problems.
However, appealing performance is seldom observed when the sub-problems are
interdependent. This paper suggests that the major difficulty of tackling
interdependent sub-problems lies in the precise evaluation of a partial
solution (to a sub-problem), which can be overwhelmingly costly and thus makes
sub-problems non-trivial to conquer. Thus, we propose an approximation
approach, named Divide and Approximate Conquer (DAC), which reduces the cost of
partial solution evaluation from exponential time to polynomial time.
Meanwhile, the convergence to the global optimum (of the original problem) is
still guaranteed. The effectiveness of DAC is demonstrated empirically on two
sets of non-separable high-dimensional problems.
Comment: 7 pages, 2 figures, conference
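The difficulty the abstract points to, evaluating a partial solution when variables interact, can be seen in a toy divide-and-conquer loop. The sketch below is not DAC itself: it uses the naive approach of plugging each candidate into a shared context vector (a cheap but approximate partial evaluation, with none of DAC's guarantees), on the non-separable Rosenbrock function:

```python
import numpy as np

def rosenbrock(x):
    """Classic non-separable test function: adjacent variables interact."""
    x = np.asarray(x, dtype=float)
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def divide_and_conquer(f, dim, lo=-2.0, hi=2.0, sweeps=30, grid=81):
    """Toy DC loop: each 1-D sub-problem is 'evaluated' by substituting
    candidates into the current context vector. This is the cheap,
    approximate evaluation whose inaccuracy on interdependent
    sub-problems motivates the paper's DAC approach."""
    context = np.zeros(dim)                  # shared context vector
    candidates = np.linspace(lo, hi, grid)   # grid includes 0.0 and 1.0
    for _ in range(sweeps):
        for i in range(dim):                 # one sub-problem per variable
            trial = np.tile(context, (grid, 1))
            trial[:, i] = candidates
            scores = [f(t) for t in trial]   # approximate partial evaluation
            context[i] = candidates[int(np.argmin(scores))]
    return context, f(context)

x_best, f_best = divide_and_conquer(rosenbrock, dim=4)
```

Because the context vector fixes the other variables, each sub-problem's evaluation is biased by the current (possibly poor) context; the loop improves monotonically but can stall far from the global optimum, which is exactly the failure mode DAC's polynomial-time approximate evaluation is designed to address in a principled way.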
Classification under Streaming Emerging New Classes: A Solution using Completely Random Trees
This paper investigates an important problem in stream mining, i.e.,
classification under streaming emerging new classes (SENC). The common
approach is to treat it as a classification problem and solve it using either a
supervised learner or a semi-supervised learner. We propose an alternative
approach by using unsupervised learning as the basis to solve this problem. The
SENC problem can be decomposed into three sub-problems: detecting emerging new
classes, classifying for known classes, and updating models to enable
classification of instances of the new class and detection of more emerging new
classes. The proposed method employs completely random trees which have been
shown to work well in unsupervised learning and supervised learning
independently in the literature. This is the first time, as far as we know,
that completely random trees are used as a single common core to solve all
three sub-problems: unsupervised learning, supervised learning, and model update
in data streams. We show that the proposed unsupervised-learning-focused method
often achieves significantly better outcomes than existing
classification-focused methods.
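Completely random trees are close cousins of isolation trees: splits use no labels, and the depth at which a point lands carries information about how isolated it is from the training data. The sketch below shows only the first sub-problem (detecting an emerging new class) and is an assumption-laden simplification of the paper's method; the classification and model-update sub-problems are omitted:

```python
import random

def build_tree(points, depth=0, max_depth=30):
    """Completely random tree: split attribute and cut point are chosen
    uniformly at random, with no use of labels (unsupervised growth)."""
    if depth >= max_depth or len(points) <= 1:
        return {"depth": depth}
    dim = random.randrange(len(points[0]))
    vals = [p[dim] for p in points]
    if min(vals) == max(vals):
        return {"depth": depth}
    cut = random.uniform(min(vals), max(vals))
    return {"dim": dim, "cut": cut,
            "left": build_tree([p for p in points if p[dim] < cut],
                               depth + 1, max_depth),
            "right": build_tree([p for p in points if p[dim] >= cut],
                                depth + 1, max_depth)}

def path_length(tree, p):
    """Depth at which p reaches a leaf; short average paths suggest the
    point is easily isolated from the known classes, i.e. a candidate
    instance of an emerging new class."""
    while "dim" in tree:
        tree = tree["left"] if p[tree["dim"]] < tree["cut"] else tree["right"]
    return tree["depth"]

random.seed(0)
# Known-class instances form a dense cluster near the origin.
known = [(random.gauss(0, 0.1), random.gauss(0, 0.1)) for _ in range(200)]
trees = [build_tree(known) for _ in range(25)]
score = lambda p: sum(path_length(t, p) for t in trees) / len(trees)
# A point inside the known cluster should, on average, land deeper
# than a far-away point from an unseen class.
```

In the full method the same forest would also carry class information at the leaves (for classifying known classes) and be updated as confirmed new-class instances arrive, which is what lets one structure serve all three sub-problems.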
