1,068 research outputs found
A Combinatorial, Strongly Polynomial-Time Algorithm for Minimizing Submodular Functions
This paper presents the first combinatorial polynomial-time algorithm for
minimizing submodular set functions, answering an open question posed in 1981
by Grötschel, Lovász, and Schrijver. The algorithm employs a scaling scheme
that uses a flow in the complete directed graph on the underlying set with each
arc capacity equal to the scaled parameter. The resulting algorithm runs in
time bounded by a polynomial in the size of the underlying set and the
length of the largest absolute function value. The paper also presents a strongly
polynomial-time version that runs in time bounded by a polynomial in the size
of the underlying set, independent of the function value.
Comment: 17 pages
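The scaling algorithm itself is intricate; as a minimal illustration of the problem it solves (a toy sketch, not the paper's method), the snippet below minimizes a small submodular function by exhaustive search and verifies the defining submodular inequality. The particular function — a graph cut minus a modular term, both of which preserve submodularity — is an illustrative choice, not taken from the paper.

```python
from itertools import combinations

# Toy submodular function: graph cut minus a modular term (both parts
# keep the function submodular). Cut functions are the textbook example.
EDGES = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
GROUND = [0, 1, 2, 3]

def f(S):
    S = set(S)
    cut = sum(1 for u, v in EDGES if (u in S) != (v in S))
    return cut - len(S)

def brute_force_minimize(ground, f):
    """Exhaustive minimization over all 2^n subsets: feasible only for tiny n.
    The paper's contribution is a combinatorial algorithm that instead runs
    in time polynomial in n."""
    best_S, best_v = set(), f(set())
    for r in range(1, len(ground) + 1):
        for S in combinations(ground, r):
            if f(set(S)) < best_v:
                best_S, best_v = set(S), f(set(S))
    return best_S, best_v

# Check the defining inequality f(S) + f(T) >= f(S | T) + f(S & T).
subsets = [set(c) for r in range(len(GROUND) + 1)
           for c in combinations(GROUND, r)]
assert all(f(S) + f(T) >= f(S | T) + f(S & T)
           for S in subsets for T in subsets)

print(brute_force_minimize(GROUND, f))  # ({0, 1, 2, 3}, -4)
```

The exhaustive loop makes the contrast concrete: it touches all 2^n subsets, whereas the paper's algorithm needs only polynomially many flow computations.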
Polytopal realizations of finite type g-vector fans
This paper shows the polytopality of any finite type g-vector fan, acyclic or
not. In fact, for any finite Dynkin type, we construct a universal
associahedron with the property that any g-vector fan of that type is the
normal fan of a suitable projection of the universal associahedron.
Comment: 27 pages, 9 figures; Version 2: Minor changes in the introduction
A New Framework for Distributed Submodular Maximization
A wide variety of problems in machine learning, including exemplar
clustering, document summarization, and sensor placement, can be cast as
constrained submodular maximization problems. Much recent effort has been
devoted to developing distributed algorithms for these problems. However, these
results suffer from a high number of rounds, suboptimal approximation ratios, or
both. We develop a framework for bringing existing algorithms in the sequential
setting to the distributed setting, achieving near-optimal approximation ratios
for many settings in only a constant number of MapReduce rounds. Our techniques
also give a fast sequential algorithm for non-monotone maximization subject to
a matroid constraint.
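For context, the sequential building block that distributed frameworks of this kind typically parallelize is the standard greedy algorithm for monotone submodular maximization under a cardinality constraint. The sketch below illustrates it on a made-up coverage objective (the element names and sets are illustrative, not from the paper):

```python
def greedy_max(ground, f, k):
    """Standard greedy for monotone submodular f under |S| <= k; achieves a
    (1 - 1/e)-approximation. Distributed MapReduce approaches typically run
    a routine like this on each machine's data shard, then merge results."""
    S = set()
    for _ in range(k):
        best, best_gain = None, 0.0
        for e in sorted(ground - S):  # sorted only for deterministic ties
            gain = f(S | {e}) - f(S)
            if gain > best_gain:
                best, best_gain = e, gain
        if best is None:  # no remaining element adds value
            break
        S.add(best)
    return S

# Toy coverage objective (monotone and submodular): number of distinct
# items covered by the chosen sets.
coverage = {'a': {1, 2, 3}, 'b': {3, 4}, 'c': {4, 5, 6}, 'd': {1, 6}}
f = lambda S: len(set().union(*(coverage[e] for e in S))) if S else 0

print(greedy_max(set(coverage), f, 2))  # picks {'a', 'c'}, covering all six items
```

Each iteration costs one pass over the ground set, which is exactly the per-round work that distributed schemes shard across machines.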
Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
The ability to integrate information in the brain is considered to be an
essential property for cognition and consciousness. Integrated Information
Theory (IIT) hypothesizes that the amount of integrated information (Φ) in
the brain is related to the level of consciousness. IIT proposes that to
quantify information integration in a system as a whole, integrated information
should be measured across the partition of the system at which information loss
caused by partitioning is minimized, called the Minimum Information Partition
(MIP). The computational cost for exhaustively searching for the MIP grows
exponentially with system size, making it difficult to apply IIT to real neural
data. It has been previously shown that if a measure of Φ satisfies a
mathematical property, submodularity, the MIP can be found in polynomial
time by an optimization algorithm. However, although the first version of Φ
is submodular, the later versions are not. In this study, we empirically
explore to what extent the algorithm can be applied to the non-submodular
measures of Φ by evaluating the accuracy of the algorithm in simulated
data and real neural data. We find that the algorithm identifies the MIP in a
nearly perfect manner even for the non-submodular measures. Our results show
that the algorithm allows us to measure Φ in large systems within a
practical amount of time.
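The exponential blow-up is easy to see concretely: a system of n nodes has 2^(n-1) - 1 distinct bipartitions. The toy sketch below (with a made-up cut-weight loss standing in for an actual Φ-based loss) enumerates all of them to find the MIP of a 4-node system:

```python
from itertools import combinations

def bipartitions(nodes):
    """Yield all 2^(n-1) - 1 unordered bipartitions (A, B) of the system;
    this count is what makes exhaustive MIP search intractable for large n."""
    nodes = sorted(nodes)
    anchor, rest = nodes[0], nodes[1:]
    for r in range(len(rest)):            # B must remain non-empty
        for extra in combinations(rest, r):
            A = {anchor, *extra}
            yield A, set(nodes) - A

# Hypothetical stand-in for the information loss caused by partitioning:
# total connection weight severed between the two parts (NOT a real Φ measure).
weights = {(0, 1): 3.0, (1, 2): 0.2, (2, 3): 3.0, (0, 3): 0.1}

def loss(A, B):
    return sum(w for (u, v), w in weights.items() if (u in A) != (v in A))

parts = list(bipartitions(range(4)))
assert len(parts) == 2 ** 3 - 1  # 7 bipartitions of a 4-node system

mip = min(parts, key=lambda ab: loss(*ab))
print(mip)  # ({0, 1}, {2, 3}): the partition severing the weakest links
```

For 4 nodes there are only 7 candidates, but the count doubles with every added node, which is why a polynomial-time search (available when the measure is submodular) matters for real neural data.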