Extremal Optimization at the Phase Transition of the 3-Coloring Problem
We investigate the phase transition of the 3-coloring problem on random
graphs, using the extremal optimization heuristic. 3-coloring is among the
hardest combinatorial optimization problems and is closely related to a 3-state
anti-ferromagnetic Potts model. Like many other such optimization problems, it
has been shown to exhibit a phase transition in its ground state behavior under
variation of a system parameter: the graph's mean vertex degree. This phase
transition is often associated with the instances of highest complexity. We use
extremal optimization to measure the ground state cost and the "backbone", an
order parameter related to ground state overlap, averaged over a large number
of instances near the transition for random graphs of size up to 512. For
graphs up to this size, benchmarks show that extremal optimization reaches
ground states and explores a sufficient number of them to give the correct
backbone value after about update steps. Finite size scaling gives
a critical mean degree value . Furthermore, the
exploration of the degenerate ground states indicates that the backbone order
parameter, measuring the constrainedness of the problem, exhibits a first-order
phase transition.
Comment: RevTeX4, 8 pages, 4 PostScript figures; related information available at http://www.physics.emory.edu/faculty/boettcher
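A minimal sketch of the tau-EO heuristic for 3-coloring, assuming the standard rank-based power-law selection; the function name `eo_3coloring` and all parameter defaults are illustrative, not the authors' implementation:

```python
import random

def eo_3coloring(edges, n, steps=5000, tau=1.4, seed=1):
    """tau-EO sketch: rank vertices by local conflicts and re-color a
    poorly ranked one, chosen with probability ~ rank^(-tau)."""
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    color = [rng.randrange(3) for _ in range(n)]

    def conflicts(v):
        return sum(color[v] == color[w] for w in adj[v])

    def total_cost():
        return sum(color[u] == color[v] for u, v in edges)

    best_cost, best = total_cost(), color[:]
    for _ in range(steps):
        # rank vertices from worst (most conflicts) to best
        ranked = sorted(range(n), key=conflicts, reverse=True)
        # inverse-CDF sample of a rank k in [1, n] with P(k) ~ k^(-tau)
        u01 = rng.random()
        k = int((1 + (n ** (1 - tau) - 1) * u01) ** (1 / (1 - tau)))
        k = min(max(k, 1), n)
        v = ranked[k - 1]
        # unconditionally accept: re-color with a different random color
        color[v] = rng.choice([c for c in range(3) if c != color[v]])
        c = total_cost()
        if c < best_cost:
            best_cost, best = c, color[:]
    return best_cost, best
```

On a 5-cycle (chromatic number 3) this quickly finds a zero-cost coloring; measuring the backbone as in the abstract would additionally require sampling many degenerate ground states.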
Predict or classify: The deceptive role of time-locking in brain signal classification
Several experimental studies claim to be able to predict the outcome of
simple decisions from brain signals measured before subjects are aware of their
decision. Often, these studies use multivariate pattern recognition methods
with the underlying assumption that the ability to classify the brain signal is
equivalent to predicting the decision itself. Here we show instead that it is
possible to correctly classify a signal even if it does not contain any
predictive information about the decision. We first define a simple stochastic
model that mimics the random decision process between two equivalent
alternatives, and generate a large number of independent trials that contain no
choice-predictive information. The trials are first time-locked to the time
point of the final event and then classified using standard machine-learning
techniques. The resulting classification accuracy is above chance level long
before the time point of time-locking. We then analyze the same trials using
information theory. We demonstrate that the high classification accuracy is a
consequence of time-locking and that its time behavior is simply related to the
large relaxation time of the process. We conclude that when time-locking is a
crucial step in the analysis of neural activity patterns, both the emergence
and the timing of the classification accuracy are affected by structural
properties of the network that generates the signal.
Comment: 23 pages, 5 figures
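The effect described above can be reproduced with a toy stochastic model, in the spirit of (but not identical to) the paper's: an unbiased random walk whose "decision" is which boundary it hits first, trials time-locked to the hit, and a trivial sign classifier applied a fixed lag before the event. All names and defaults here are illustrative:

```python
import random

def timelocked_accuracy(n_trials=2000, thr=10, lag=20, seed=0):
    """Driftless random walk; the 'decision' is which boundary (+thr or
    -thr) is hit first, so no early sample is predictive on average.
    Trials are time-locked to the hit, then 'classified' from the sign
    of the sample `lag` steps before it."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        x, path = 0, [0]
        while abs(x) < thr:
            x += rng.choice((-1, 1))
            path.append(x)
        label = 1 if x >= thr else -1
        probe = path[max(0, len(path) - 1 - lag)]
        if probe == 0:
            pred = rng.choice((-1, 1))  # no information: guess
        else:
            pred = 1 if probe > 0 else -1
        correct += pred == label
    return correct / n_trials
```

For lags shorter than the walk's relaxation time the accuracy sits well above 0.5 even though the process contains no choice-predictive information; for lags much longer than any trial it falls back to chance, which is exactly the time-locking artifact the abstract warns about.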
Large-scale Binary Quadratic Optimization Using Semidefinite Relaxation and Applications
In computer vision, many problems such as image segmentation, pixel
labelling, and scene parsing can be formulated as binary quadratic programs
(BQPs). For submodular problems, cut-based methods can be employed to
efficiently solve large-scale problems. However, general nonsubmodular problems
are significantly more challenging to solve. Finding a solution when the
problem is large enough to be of practical interest, however, typically
requires relaxation. Two standard relaxation methods are widely used for
solving general BQPs: spectral methods and semidefinite programming (SDP), each
with their own advantages and disadvantages. Spectral relaxation is simple and
easy to implement, but its bound is loose. Semidefinite relaxation has a
tighter bound, but its computational complexity is high, especially for large
scale problems. In this work, we present a new SDP formulation for BQPs, with
two desirable properties. First, it has a similar relaxation bound to
conventional SDP formulations. Second, compared with conventional SDP methods,
the new SDP formulation leads to a significantly more efficient and scalable
dual optimization approach, which has the same degree of complexity as spectral
methods. We then propose two solvers, namely, quasi-Newton and smoothing Newton
methods, for the dual problem. Both are significantly more efficient
than standard interior-point methods. In practice, the smoothing Newton solver
is faster than the quasi-Newton solver for dense or medium-sized problems,
while the quasi-Newton solver is preferable for large sparse/structured
problems. Our experiments on a few computer vision applications including
clustering, image segmentation, co-segmentation and registration show the
potential of our SDP formulation for solving large-scale BQPs.
Comment: Fixed some typos. 18 pages. Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence
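As a concrete point of contrast with the SDP route, here is a minimal sketch of the spectral relaxation the abstract calls simple but loose: for max x^T A x over x in {-1,+1}^n, relax the constraint to ||x||^2 = n, take the leading eigenvector (plain power iteration on a diagonally shifted matrix), and round entries to signs. The function name and tiny example are illustrative, not from the paper:

```python
import random

def spectral_bqp(A, iters=500, seed=0):
    """Spectral relaxation sketch for max x^T A x, x in {-1,+1}^n."""
    n = len(A)
    rng = random.Random(seed)
    # Gershgorin shift makes A + shift*I positive semidefinite, so power
    # iteration converges to the eigenvector of A's largest eigenvalue.
    shift = max(sum(abs(a) for a in row) for row in A)
    v = [rng.uniform(-1, 1) for _ in range(n)]
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) + shift * v[i]
             for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    x = [1 if vi >= 0 else -1 for vi in v]  # round relaxed solution
    obj = sum(A[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
    return x, obj
```

On this tiny instance the rounded eigenvector happens to be exactly optimal; in general the spectral bound is loose, which is the motivation for the tighter SDP relaxation the paper develops.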
A Domain-Independent Algorithm for Plan Adaptation
The paradigms of transformational planning, case-based planning, and plan
debugging all involve a process known as plan adaptation - modifying or
repairing an old plan so it solves a new problem. In this paper we provide a
domain-independent algorithm for plan adaptation, demonstrate that it is sound,
complete, and systematic, and compare it to other adaptation algorithms in the
literature. Our approach is based on a view of planning as searching a graph of
partial plans. Generative planning starts at the graph's root and moves from
node to node using plan-refinement operators. In planning by adaptation, a
library plan - an arbitrary node in the plan graph - is the starting point for
the search, and the plan-adaptation algorithm can both apply the same
refinement operators available to a generative planner and retract
constraints and steps from the plan. Our algorithm's completeness ensures that
the adaptation algorithm will eventually search the entire graph and its
systematicity ensures that it will do so without redundantly searching any
parts of the graph.
Comment: See http://www.jair.org/ for any accompanying files
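A toy sketch of that search view, under a drastic simplification (a "plan" is just a tuple of step names): refinement appends a step, retraction drops the last one, and a visited set gives the systematicity the abstract claims, since no partial plan is ever expanded twice. All names are illustrative, not the paper's algorithm:

```python
from collections import deque

def adapt_plan(library_plan, operators, satisfies_goal, max_nodes=10000):
    """Breadth-first search over the graph of partial plans, starting
    from a library plan rather than the empty root."""
    start = tuple(library_plan)
    frontier = deque([start])
    visited = {start}
    while frontier and len(visited) <= max_nodes:
        plan = frontier.popleft()
        if satisfies_goal(plan):
            return list(plan)
        successors = [plan + (op,) for op in operators]  # refinement
        if plan:
            successors.append(plan[:-1])                 # retraction
        for nxt in successors:
            if nxt not in visited:  # systematicity: never revisit
                visited.add(nxt)
                frontier.append(nxt)
    return None  # graph exhausted (or budget hit) without a solution
```

Starting from the library plan `['a', 'b']` with goal `('a', 'c')`, the search retracts `'b'` and refines with `'c'`, reusing the shared prefix instead of replanning from scratch.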
Newton-Raphson Consensus for Distributed Convex Optimization
We address the problem of distributed unconstrained convex optimization
under separability assumptions, i.e., the framework where each agent of a
network is endowed with a local private multidimensional convex cost, is
subject to communication constraints, and wants to collaborate to compute the
minimizer of the sum of the local costs. We propose a design methodology that
combines average consensus algorithms and separation of time-scales ideas. This
strategy is proved, under suitable hypotheses, to be globally convergent to the
true minimizer. Intuitively, the procedure lets the agents distributedly
compute and sequentially update an approximated Newton-Raphson direction by
means of suitable average consensus ratios. We show with numerical simulations
that the speed of convergence of this strategy is comparable with alternative
optimization strategies such as the Alternating Direction Method of
Multipliers. Finally, we propose some alternative strategies which trade-off
communication and computational requirements with convergence speed.
Comment: 18 pages, preprint with proofs
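A minimal sketch of the idea for scalar quadratic local costs f_i(x) = a_i (x - b_i)^2 / 2. In this special case the Newton quantities g_i = f_i''(x) x - f_i'(x) = a_i b_i and h_i = f_i'' = a_i are constants, so a single linear-consensus phase followed by the ratio g/h already yields the global minimizer sum(a_i b_i)/sum(a_i); the paper's general algorithm instead interleaves consensus and Newton updates on two time scales. Names are illustrative:

```python
def nr_consensus(costs, neighbors, rounds=200, eps=0.25):
    """costs: list of (a_i, b_i) for f_i(x) = a_i*(x - b_i)**2 / 2.
    neighbors[i]: agents adjacent to agent i (undirected graph).
    Runs linear average consensus x <- (I - eps*L)x on g and h;
    needs eps * max_degree < 1 to converge."""
    n = len(costs)
    g = [a * b for a, b in costs]  # f_i''*x - f_i' (constant here)
    h = [a for a, _ in costs]      # f_i''
    for _ in range(rounds):
        g = [g[i] + eps * sum(g[j] - g[i] for j in neighbors[i])
             for i in range(n)]
        h = [h[i] + eps * sum(h[j] - h[i] for j in neighbors[i])
             for i in range(n)]
    # each agent's Newton-Raphson estimate of the global minimizer
    return [g[i] / h[i] for i in range(n)]
```

On a 4-agent ring, every agent converges to the same minimizer using only neighbor-to-neighbor exchanges, which is the point of the consensus construction.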
The structure of Inter-Urban traffic: A weighted network analysis
We study the structure of the network representing the interurban commuting
traffic of the Sardinia region, Italy, which comprises 375 municipalities and
1,600,000 inhabitants. We use a weighted network representation where vertices
correspond to towns and the edges to the actual commuting flows among them. We
characterize quantitatively both the topological and weighted properties of the
resulting network. Interestingly, the statistical properties of commuting
traffic exhibit complex features and non-trivial relations with the underlying
topology. We characterize quantitatively the traffic backbone among large
cities and give evidence of a very high heterogeneity of the commuter
flows around large cities. We also discuss the interplay between the
topological and dynamical properties of the network as well as their relation
with socio-demographic variables such as population and monthly income. This
analysis may be useful at various stages in environmental planning and provides
analytical tools for a wide spectrum of applications ranging from impact
evaluation to decision-making and planning support.
Comment: 12 pages, 12 figures, 4 tables; 1 missing ref added and minor revision
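The basic weighted-network quantities behind such an analysis fit in a few lines: node strength s_i = sum_j w_ij, and the disparity Y_2(i) = sum_j (w_ij / s_i)^2, a standard measure of flow heterogeneity around a node (Y_2 near 1 means one dominant flow, near 1/k means k comparable flows). The function and variable names are illustrative:

```python
def strength_and_disparity(wedges):
    """wedges: iterable of weighted undirected edges (u, v, w).
    Returns {node: (degree, strength, disparity Y2)}."""
    adj = {}
    for u, v, w in wedges:
        adj.setdefault(u, {})[v] = w
        adj.setdefault(v, {})[u] = w
    stats = {}
    for node, nbrs in adj.items():
        s = sum(nbrs.values())          # strength: total flow at node
        y2 = sum((w / s) ** 2 for w in nbrs.values())  # disparity
        stats[node] = (len(nbrs), s, y2)
    return stats
```

A hub whose traffic is dominated by one neighbor has Y_2 close to 1 despite a large degree, which is one way the heterogeneity of commuter flows around large cities shows up quantitatively.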