Efficient Algorithms for Searching the Minimum Information Partition in Integrated Information Theory
The ability to integrate information in the brain is considered to be an
essential property for cognition and consciousness. Integrated Information
Theory (IIT) hypothesizes that the amount of integrated information (Φ) in
the brain is related to the level of consciousness. IIT proposes that to
quantify information integration in a system as a whole, integrated information
should be measured across the partition of the system at which information loss
caused by partitioning is minimized, called the Minimum Information Partition
(MIP). The computational cost for exhaustively searching for the MIP grows
exponentially with system size, making it difficult to apply IIT to real neural
data. It has been previously shown that if a measure of Φ satisfies a
mathematical property called submodularity, the MIP can be found in
polynomial time by an optimization algorithm. However, although the first
version of Φ is submodular, the later versions are not. In this study, we empirically
explore to what extent the algorithm can be applied to the non-submodular
measures of Φ by evaluating the accuracy of the algorithm in simulated
data and real neural data. We find that the algorithm identifies the MIP in a
nearly perfect manner even for the non-submodular measures. Our results show
that the algorithm allows us to measure Φ in large systems within a
practical amount of time.
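The exponential blow-up that motivates the abstract can be made concrete with a toy exhaustive search. This is a minimal sketch, not the paper's algorithm or its Φ measure: `phi` below is a hypothetical stand-in loss function, used only to show why enumerating all bipartitions scales as O(2^n) in the number of elements.

```python
from itertools import combinations

def phi(subset, system):
    """Hypothetical stand-in for an integrated-information measure of the
    information lost by cutting `system` into `subset` vs. the rest.
    (Here simply the product of the part sizes, purely for illustration.)"""
    return len(subset) * (len(system) - len(subset))

def minimum_information_partition(system):
    """Exhaustive MIP search: enumerates all O(2^n) non-trivial bipartitions
    and keeps the one with the smallest information loss."""
    best, best_loss = None, float("inf")
    for k in range(1, len(system)):          # non-trivial part sizes only
        for subset in combinations(system, k):
            loss = phi(subset, system)
            if loss < best_loss:
                best, best_loss = set(subset), loss
    return best, best_loss

part, loss = minimum_information_partition(["a", "b", "c", "d"])
```

Queyranne's algorithm exploits submodularity to replace this enumeration with a polynomial number of evaluations; the abstract's empirical finding is that the same algorithm still locates the MIP almost perfectly for the non-submodular versions of Φ.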
Maximum approximate entropy and r threshold: A new approach for regularity changes detection
Approximate entropy (ApEn) has been widely used as an estimator of regularity
in many scientific fields. It has proved to be a useful tool because of its
ability to distinguish the dynamics of different systems when only
short-length noisy data are available. Incorrect parameter selection
(embedding dimension m, threshold r, and data length N) and the presence of
noise in the signal can undermine the ApEn discrimination capacity. In this
work we show that the threshold r_max at which ApEn reaches its maximum value
(ApEn_max) can also be used as a feature to discern between dynamics.
Moreover, the combined use of ApEn_max and r_max
allows a better discrimination capacity to be accomplished, even in
the presence of noise. We conducted our studies using real physiological time
series and simulated signals corresponding to both low- and high-dimensional
systems. When ApEn_max is incapable of discerning between different
dynamics because of the presence of noise, our results suggest that r_max
provides additional information that can be useful for classification purposes.
Based on cross-validation tests, we conclude that, for short length noisy
signals, the joint use of ApEn_max and r_max can significantly decrease
the misclassification rate of a linear classifier in comparison with their
isolated use.
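For reference, here is a minimal NumPy implementation of the standard ApEn(m, r, N) estimator, plus a scan over thresholds to locate ApEn_max and r_max. This is a textbook-style sketch under the usual conventions (Chebyshev distance, self-matches included, tolerance r in units of the signal's standard deviation), not the authors' code, and the threshold grid in `max_apen` is an arbitrary illustrative choice.

```python
import numpy as np

def apen(x, m=2, r=0.2):
    """Approximate entropy ApEn(m, r, N) of a 1-D signal x.
    The tolerance is r times the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()

    def _phi(mm):
        n = len(x) - mm + 1
        # Overlapping length-mm templates, compared with the Chebyshev distance.
        t = np.array([x[i:i + mm] for i in range(n)])
        dist = np.max(np.abs(t[:, None, :] - t[None, :, :]), axis=2)
        # Fraction of templates within tolerance (self-matches included).
        c = np.mean(dist <= tol, axis=1)
        return np.mean(np.log(c))

    return _phi(m) - _phi(m + 1)

def max_apen(x, m=2, rs=np.linspace(0.05, 1.0, 20)):
    """Scan thresholds to find ApEn_max and the r_max at which it occurs."""
    vals = [apen(x, m, r) for r in rs]
    i = int(np.argmax(vals))
    return vals[i], rs[i]
```

A perfectly regular signal yields ApEn near zero, while short noisy signals yield strictly positive values; the abstract's point is that the location r_max of the maximum carries discriminative information beyond the value ApEn_max itself.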
Universality of Entanglement and Quantum Computation Complexity
We study the universality of scaling of entanglement in Shor's factoring
algorithm and in adiabatic quantum algorithms across a quantum phase transition
for both the NP-complete Exact Cover problem and Grover's problem.
The analytic result for Shor's algorithm shows a linear scaling of the entropy
in terms of the number of qubits, thereby hindering the possibility of an
efficient classical simulation protocol. A similar result is obtained
numerically for the quantum adiabatic evolution Exact Cover algorithm, which
also shows universality of the quantum phase transition near which the
system evolves. On the other hand, entanglement in Grover's adiabatic algorithm remains
a bounded quantity even at the critical point. A classification of scaling of
entanglement appears as a natural grading of the computational complexity of
simulating quantum phase transitions.
Comment: 30 pages, 17 figures, accepted for publication in PR
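The quantity whose scaling is being classified above is the entanglement entropy of a bipartition. As a self-contained illustration (not tied to the specific states arising in Shor's or the adiabatic algorithms), the sketch below computes the von Neumann entropy S = -Tr(ρ_A log2 ρ_A) of a pure state via its Schmidt decomposition.

```python
import numpy as np

def entanglement_entropy(psi, n_a, n_b):
    """Von Neumann entropy S = -Tr(rho_A log2 rho_A) of the first n_a
    qubits of a pure state psi on n_a + n_b qubits, computed from the
    singular values of the bipartite amplitude matrix (Schmidt decomposition)."""
    m = np.asarray(psi).reshape(2 ** n_a, 2 ** n_b)
    s = np.linalg.svd(m, compute_uv=False)
    p = s ** 2                      # eigenvalues of the reduced state rho_A
    p = p[p > 1e-12]                # drop numerical zeros before the log
    return float(-np.sum(p * np.log2(p)))

# A maximally entangled Bell state carries S = 1 ebit; a product state, S = 0.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
```

The abstract's classification turns on how S, maximized over bipartitions, grows with the number of qubits: linear growth (Shor, Exact Cover) obstructs efficient classical simulation, while the bounded entropy of Grover's adiabatic algorithm does not.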
Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
We propose computationally efficient encoders and decoders for lossy
compression using a Sparse Regression Code. The codebook is defined by a design
matrix and codewords are structured linear combinations of columns of this
matrix. The proposed encoding algorithm sequentially chooses columns of the
design matrix to successively approximate the source sequence. It is shown to
achieve the optimal distortion-rate function for i.i.d. Gaussian sources under
the squared-error distortion criterion. For a given rate, the parameters of the
design matrix can be varied to trade off distortion performance with encoding
complexity. An example of such a trade-off as a function of the block length n
is the following. With computational resource (space or time) per source sample
of O((n/log n)^2), for a fixed distortion-level above the Gaussian
distortion-rate function, the probability of excess distortion decays
exponentially in n. The Sparse Regression Code is robust in the following
sense: for any ergodic source, the proposed encoder achieves the optimal
distortion-rate function of an i.i.d. Gaussian source with the same variance.
Simulations show that the encoder has good empirical performance, especially at
low and moderate rates.
Comment: 14 pages, to appear in IEEE Transactions on Information Theory
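The sequential column-selection idea described above can be sketched as follows. This is a simplified illustration, not the paper's encoder: the fixed per-section coefficient `coeff` and the plain inner-product selection rule stand in for the actual coefficient/power allocation and analysis.

```python
import numpy as np

def src_encode(x, A, L, M, coeff):
    """Greedy sequential encoder sketch for a Sparse Regression Code.
    A has L * M columns split into L sections; the codeword is a sum of
    one column per section, each scaled by the fixed coefficient `coeff`.
    Returns the chosen column indices and the codeword (approximation)."""
    residual = np.asarray(x, dtype=float).copy()
    approx = np.zeros_like(residual)
    chosen = []
    for l in range(L):
        section = A[:, l * M:(l + 1) * M]
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(section.T @ residual))
        chosen.append(l * M + j)
        approx = approx + coeff * section[:, j]
        residual = residual - coeff * section[:, j]
    return chosen, approx

# Illustrative use with a random Gaussian design matrix.
rng = np.random.default_rng(1)
n, L, M = 64, 8, 32
A = rng.standard_normal((n, L * M))
x = rng.standard_normal(n)
chosen, approx = src_encode(x, A, L, M, coeff=0.3)
```

The rate is (L log M)/n bits per sample, since the codeword is specified by one of M column choices per section; varying L and M at fixed rate is the distortion-versus-complexity trade-off the abstract refers to.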
Quantum Discord and Quantum Computing - An Appraisal
We discuss models of computing that are beyond classical. The primary
motivation is to unearth the cause of nonclassical advantages in computation.
Completeness results from computational complexity theory lead to the
identification of very disparate problems, and offer a kaleidoscopic view into
the realm of quantum enhancements in computation. Emphasis is placed on the
`power of one qubit' model, and the boundary between quantum and classical
correlations as delineated by quantum discord. A recent result by Eastin on the
role of this boundary in the efficient classical simulation of quantum
computation is discussed. Perceived drawbacks in the interpretation of quantum
discord as a relevant certificate of quantum enhancements are addressed.
Comment: To be published in the Special Issue of the International Journal of Quantum Information on "Quantum Correlations: entanglement and beyond." 11 pages, 4 figures
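The `power of one qubit' (DQC1) model referred to above estimates the normalized trace of a unitary using a single pure control qubit and a maximally mixed register. The density-matrix sketch below computes the circuit's exact expectation values rather than sampling them; it illustrates the model itself, not the paper's discord results.

```python
import numpy as np

def dqc1_trace(U):
    """DQC1 sketch: returns Tr(U) / d for a unitary U on a d-dimensional
    register, read off the control qubit's <X> and <Y> after a Hadamard
    and a controlled-U, with the register left maximally mixed."""
    d = U.shape[0]
    # State after the Hadamard on the control: |+><+| (tensor) I/d.
    plus = np.full((2, 2), 0.5)
    rho = np.kron(plus, np.eye(d) / d)
    # Controlled-U in block form: identity on the |0> branch, U on |1>.
    cU = np.block([[np.eye(d), np.zeros((d, d))],
                   [np.zeros((d, d)), U]])
    rho = cU @ rho @ cU.conj().T
    # Pauli X and Y measured on the control qubit.
    X = np.kron(np.array([[0, 1], [1, 0]]), np.eye(d))
    Y = np.kron(np.array([[0, -1j], [1j, 0]]), np.eye(d))
    # For this circuit, <X> + i <Y> = Tr(U) / d.
    return np.trace(rho @ X).real + 1j * np.trace(rho @ Y).real
```

The register stays maximally mixed throughout, so the computation runs with essentially no entanglement across that cut; the boundary discussed in the abstract is whether the quantum discord generated here, rather than entanglement, accounts for the speedup.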
Quantum adiabatic optimization and combinatorial landscapes
In this paper we analyze the performance of the Quantum Adiabatic Evolution
algorithm on a variant of Satisfiability problem for an ensemble of random
graphs parametrized by the ratio of clauses to variables, α. We
introduce a set of macroscopic parameters (landscapes) and put forward an
ansatz of universality for random bit flips. We then formulate the problem of
finding the smallest eigenvalue and the excitation gap as a statistical
mechanics problem. We use the so-called annealing approximation with a
refinement that a finite set of macroscopic variables (versus only energy) is
used, and are able to show the existence of a dynamic threshold starting
with some value of K -- the number of variables in each clause. Beyond the
dynamic threshold, the algorithm should take an exponentially long time to
find a solution. We compare the results for extended and
simplified sets of landscapes and provide numerical evidence in support of our
universality ansatz. We have been able to map the ensemble of random graphs
onto another ensemble with fluctuations significantly reduced. This enabled us
to obtain tight upper bounds on the satisfiability transition and to recompute
the dynamical transition using the extended set of landscapes.
Comment: 41 pages, 10 figures; added a paragraph on the paper's organization to the introduction, fixed references
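The exponential running time asserted beyond the dynamic threshold is governed by the minimum spectral gap of the interpolating Hamiltonian H(s) = (1-s)H_B + sH_P. The brute-force sketch below computes that gap for a tiny instance by exact diagonalization; the driver H_B = Σ_i (1 - X_i)/2 and the uniform s-grid are standard textbook choices, not the paper's annealing-approximation machinery, which is what makes large K and N tractable.

```python
import numpy as np
from itertools import product

def adiabatic_gap(cost, n, s_grid=None):
    """Minimum gap between the two lowest eigenvalues of
    H(s) = (1 - s) * H_B + s * H_P over a grid of s in [0, 1].
    H_P is diagonal with entries cost(bits); H_B = sum_i (1 - X_i) / 2."""
    if s_grid is None:
        s_grid = np.linspace(0.0, 1.0, 51)
    dim = 2 ** n
    # Problem Hamiltonian: the classical cost, diagonal in the computational basis.
    Hp = np.diag([float(cost(bits)) for bits in product((0, 1), repeat=n)])
    # Transverse-field driver: couples every pair of states differing in one bit.
    Hb = (n / 2.0) * np.eye(dim)
    for i in range(dim):
        for q in range(n):
            Hb[i, i ^ (1 << q)] -= 0.5
    gaps = []
    for s in s_grid:
        evals = np.linalg.eigvalsh((1 - s) * Hb + s * Hp)  # ascending order
        gaps.append(evals[1] - evals[0])
    return min(gaps)
```

For a single qubit with cost(bits) = bits[0] the gap is sqrt(s² + (1-s)²), minimized at s = 1/2. The adiabatic theorem ties the required runtime to roughly the inverse square of this minimum gap, which is why a gap closing exponentially in the problem size implies exponential time.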