Constrained Bayesian Active Learning of Interference Channels in Cognitive Radio Networks
In this paper, a sequential probing method for interference constraint
learning is proposed to allow a centralized Cognitive Radio Network (CRN)
to access the frequency band of a Primary User (PU) in an underlay cognitive
scenario with a designed PU protection specification. The main idea is that the
CRN probes the PU and subsequently eavesdrops on the reverse PU link to acquire
the binary ACK/NACK packet. This feedback indicates whether the probing-induced
interference is harmful or not and can be used to learn the PU interference
constraint. The cognitive part of this sequential probing process is the
selection of the power levels of the Secondary Users (SUs) which aims to learn
the PU interference constraint with a minimum number of probing attempts while
setting a limit on the number of harmful probing-induced interference events or
equivalently of NACK packet observations over a time window. This constrained
design problem is studied within the Active Learning (AL) framework and an
optimal solution is derived and implemented with Expectation Propagation (EP),
a sophisticated, accurate, and fast Bayesian learning method. The
performance of this solution is also demonstrated through numerical simulations
and compared with modified versions of AL techniques we developed in earlier
work.

Comment: 14 pages, 6 figures, submitted to IEEE JSTSP Special Issue on Machine
Learning for Cognition in Radio Communications and Radar
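The probe-and-observe loop described in the abstract can be illustrated with a deliberately simplified, one-dimensional sketch: a single SU power level and a hypothetical scalar PU interference threshold, learned by bisection from binary ACK/NACK feedback. The paper's actual method is a constrained Bayesian active learner solved with Expectation Propagation; this toy only conveys the sequential probing idea.

```python
# Toy sketch of sequential probing with binary ACK/NACK feedback.
# `true_threshold` is a hypothetical scalar interference limit; the real
# setting involves multiple SU power vectors and a Bayesian learner.

def probe(power, true_threshold):
    """PU feedback: ACK (True) if the probing power is not harmful."""
    return power <= true_threshold

def learn_threshold(true_threshold, lo=0.0, hi=1.0, n_probes=20):
    """Bisect on the probing power; count NACKs (harmful probes)."""
    nacks = 0
    for _ in range(n_probes):
        p = 0.5 * (lo + hi)               # next probing power level
        if probe(p, true_threshold):      # ACK: threshold lies above p
            lo = p
        else:                             # NACK: harmful, back off
            hi = p
            nacks += 1
    return 0.5 * (lo + hi), nacks

est, nacks = learn_threshold(true_threshold=0.37)
```

Each probe halves the uncertainty interval, so 20 probes localize the threshold to about one part in a million; the NACK counter mirrors the paper's constraint on harmful interference events over a time window.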
Signatures of criticality arise in simple neural population models with correlations
Large-scale recordings of neuronal activity make it possible to gain insights
into the collective activity of neural ensembles. It has been hypothesized that
neural populations might be optimized to operate at a 'thermodynamic critical
point', and that this property has implications for information processing.
Support for this notion has come from a series of studies which identified
statistical signatures of criticality in the ensemble activity of retinal
ganglion cells. What are the underlying mechanisms that give rise to these
observations? Here we show that signatures of criticality arise even in simple
feed-forward models of retinal population activity. In particular, they occur
whenever neural population data exhibits correlations, and is randomly
sub-sampled during data analysis. These results show that signatures of
criticality are not necessarily indicative of an optimized coding strategy, and
challenge the utility of analysis approaches based on equilibrium
thermodynamics for understanding partially observed biological systems.

Comment: 36 pages, LaTeX; added journal reference on page 1, added link to
code repository
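The mechanism claimed above, that correlations plus random sub-sampling are enough, can be illustrated with a minimal simulation: a population driven by a shared fluctuating input is correlated, so the summed activity of any random sub-sample is over-dispersed relative to an independent population. All model parameters below are arbitrary illustrative choices; the paper's actual analysis fits maximum entropy models and examines specific-heat signatures.

```python
import numpy as np

# Feed-forward toy model: every neuron shares a common fluctuating
# input, producing correlated spiking. A random sub-sample inherits the
# correlation, so its population count variance exceeds the independent
# (binomial) prediction.

rng = np.random.default_rng(0)
N, T = 200, 5000
common = rng.normal(size=T)                       # shared input per time bin
p_spike = 1.0 / (1.0 + np.exp(-(common - 1.5)))   # shared spiking probability
spikes = rng.random((N, T)) < p_spike             # (N, T) binary spike array

sub = spikes[rng.choice(N, size=50, replace=False)]   # random sub-sample
counts = sub.sum(axis=0)                              # population spike count
p = sub.mean()
var_indep = 50 * p * (1 - p)        # count variance if neurons were independent
overdispersion = counts.var() / var_indep
```

The over-dispersion of the sub-sampled population count is the kind of collective statistic that, when fit with equilibrium models, can masquerade as a signature of criticality.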
Regularized Optimal Transport and the Rot Mover's Distance
This paper presents a unified framework for smooth convex regularization of
discrete optimal transport problems. In this context, the regularized optimal
transport turns out to be equivalent to a matrix nearness problem with respect
to Bregman divergences. Our framework thus naturally generalizes a previously
proposed regularization based on the Boltzmann-Shannon entropy related to the
Kullback-Leibler divergence, and solved with the Sinkhorn-Knopp algorithm. We
call the regularized optimal transport distance the rot mover's distance in
reference to the classical earth mover's distance. We develop two generic
schemes, which we respectively call the alternate scaling algorithm and the
non-negative alternate scaling algorithm, to efficiently compute the
regularized optimal plans depending on whether or not the domain of the
regularizer lies within the non-negative orthant. These schemes are based on
Dykstra's algorithm with alternate Bregman projections, and further exploit the
Newton-Raphson method when applied to separable divergences. We enhance the
separable case with a sparse extension to deal with high data dimensions. We
also instantiate our proposed framework and discuss the inherent specificities
for well-known regularizers and statistical divergences in the machine learning
and information geometry communities. Finally, we demonstrate the merits of our
methods with experiments using synthetic data to illustrate the effect of
different regularizers and penalties on the solutions, as well as real-world
data for a pattern recognition application to audio scene classification.
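The special case the framework generalizes, entropic (Boltzmann-Shannon) regularization solved with the Sinkhorn-Knopp algorithm, fits in a few lines. The cost matrix, marginals, and regularization weight below are arbitrary illustrative choices.

```python
import numpy as np

# Minimal Sinkhorn-Knopp iteration for entropy-regularized optimal
# transport: alternately rescale the Gibbs kernel to match the target
# row and column marginals.

def sinkhorn(C, r, c, eps=0.5, n_iter=2000):
    """Return the entropically regularized optimal plan between r and c."""
    K = np.exp(-C / eps)               # Gibbs kernel
    u = np.ones_like(r)
    for _ in range(n_iter):
        v = c / (K.T @ u)              # match the column marginal
        u = r / (K @ v)                # match the row marginal
    return u[:, None] * K * v[None, :]

r = np.array([0.5, 0.3, 0.2])
c = np.array([0.4, 0.4, 0.2])
C = np.abs(np.subtract.outer(np.arange(3.0), np.arange(3.0)))  # cost |i - j|
P = sinkhorn(C, r, c)
```

The two rescaling steps are exactly the alternating Bregman (KL) projections onto the row- and column-marginal constraint sets, which is the structure the paper's generic schemes extend to other Bregman divergences.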
Efficiency characterization of a large neuronal network: a causal information approach
When inhibitory neurons constitute about 40% of the population, they could
play an important antinociceptive role, as they would easily regulate the
activity level of other neurons. We consider a simple network of cortical spiking
neurons with axonal conduction delays and spike timing dependent plasticity,
representative of a cortical column or hypercolumn with large proportion of
inhibitory neurons. Each neuron fires following a Hodgkin-Huxley like dynamics
and it is interconnected randomly to other neurons. The network dynamics is
investigated by estimating the Bandt and Pompe probability distribution function
associated to the interspike intervals and taking different degrees of
inter-connectivity across neurons. More specifically, we take into account the
fine temporal "structures" of the complex neuronal signals not just by using
the probability distributions associated to the interspike intervals, but
instead by considering much more subtle measures accounting for their causal
information: the Shannon permutation entropy, the Fisher permutation information,
and the permutation statistical complexity. This allows us to investigate how the
information of the system might saturate to a finite value as the degree of
inter-connectivity across neurons grows, inferring the emergent dynamical
properties of the system.

Comment: 26 pages, 3 Figures; Physica A, in press
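The Bandt and Pompe construction used above reduces to sliding a window of length d over a series, recording the rank (ordinal) pattern of each window, and taking the Shannon entropy of the pattern distribution. A minimal sketch on a synthetic series:

```python
import math
from itertools import permutations

# Bandt-Pompe permutation entropy: the Shannon entropy of the ordinal
# (rank) patterns of length-d windows, normalized by log(d!) so the
# result lies in [0, 1].

def permutation_entropy(x, d=3):
    counts = {p: 0 for p in permutations(range(d))}
    for i in range(len(x) - d + 1):
        window = x[i:i + d]
        pattern = tuple(sorted(range(d), key=lambda k: window[k]))
        counts[pattern] += 1
    total = sum(counts.values())
    probs = [c / total for c in counts.values() if c > 0]
    H = -sum(p * math.log(p) for p in probs)
    return H / math.log(math.factorial(d))

monotone = list(range(100))   # fully predictable series: only one pattern
```

A monotone series produces a single ordinal pattern and hence zero entropy, while an irregular series spreads probability across patterns and pushes the normalized entropy toward 1; the permutation Fisher information and statistical complexity mentioned in the abstract are further functionals of this same pattern distribution.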
The Bregman Variational Dual-Tree Framework
Graph-based methods provide a powerful tool set for many non-parametric
frameworks in Machine Learning. In general, the memory and computational
complexity of these methods is quadratic in the number of examples in the
data, which makes them quickly infeasible for moderate to large scale datasets. A
significant effort to find more efficient solutions to the problem has been
made in the literature. One of the state-of-the-art methods that has been
recently introduced is the Variational Dual-Tree (VDT) framework. Despite some
of its unique features, VDT is currently restricted only to Euclidean spaces
where the Euclidean distance quantifies the similarity. In this paper, we
extend the VDT framework beyond the Euclidean distance to more general Bregman
divergences that include the Euclidean distance as a special case. By
exploiting the properties of the general Bregman divergence, we show how the
new framework can maintain all the pivotal features of the VDT framework and
yet significantly improve its performance in non-Euclidean domains. We apply
the proposed framework to different text categorization problems and
demonstrate its benefits over the original VDT.

Comment: Appears in Proceedings of the Twenty-Ninth Conference on Uncertainty
in Artificial Intelligence (UAI2013)
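The Bregman divergences the extension is built on have a simple generic form, D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y> for a convex generator phi. A short sketch with two standard generators (the vectors below are arbitrary illustrative probability vectors):

```python
import numpy as np

# Generic Bregman divergence plus two classical generators:
#   phi(v) = ||v||^2 / 2   ->  half the squared Euclidean distance
#   phi(v) = sum v log v   ->  the KL divergence (for probability vectors)

def bregman(phi, grad_phi, x, y):
    """D_phi(x, y) = phi(x) - phi(y) - <grad phi(y), x - y>."""
    return phi(x) - phi(y) - grad_phi(y) @ (x - y)

sq = lambda v: 0.5 * (v @ v)
neg_entropy = lambda v: np.sum(v * np.log(v))

x = np.array([0.2, 0.5, 0.3])
y = np.array([0.4, 0.4, 0.2])

d_euc = bregman(sq, lambda v: v, x, y)                    # ||x - y||^2 / 2
d_kl = bregman(neg_entropy, lambda v: np.log(v) + 1.0, x, y)  # KL(x || y)
```

Since the Euclidean distance arises from the quadratic generator as a special case, any algorithm phrased in terms of D_phi, as the Bregman VDT is, degrades gracefully to the original Euclidean VDT.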
Asymptotically Unambitious Artificial General Intelligence
General intelligence, the ability to solve arbitrary solvable problems, is
supposed by many to be artificially constructible. Narrow intelligence, the
ability to solve a given particularly difficult problem, has seen impressive
recent development. Notable examples include self-driving cars, Go engines,
image classifiers, and translators. Artificial General Intelligence (AGI)
presents dangers that narrow intelligence does not: if something smarter than
us across every domain were indifferent to our concerns, it would be an
existential threat to humanity, just as we threaten many species despite no ill
will. Even the theory of how to maintain the alignment of an AGI's goals with
our own has proven highly elusive. We present the first algorithm we are aware
of for asymptotically unambitious AGI, where "unambitiousness" includes not
seeking arbitrary power. Thus, we identify an exception to the Instrumental
Convergence Thesis, which is roughly that by default, an AGI would seek power,
including over us.

Comment: 9 pages with 5 figures; 10 page Appendix with 2 figures