On palimpsests in neural memory: an information theory viewpoint
The finite capacity of neural memory and the
reconsolidation phenomenon suggest it is important to be able
to update stored information as in a palimpsest, where new
information overwrites old information. Moreover, changing
information in memory is metabolically costly. In this paper, we
suggest that information-theoretic approaches may inform the
fundamental limits in constructing such a memory system. In
particular, we define malleable coding, which considers not only
representation length but also ease of representation update,
thereby encouraging a form of recycling that converts an old
codeword into a new one. Malleability cost is the difficulty of
synchronizing compressed versions, and malleable codes are of
particular interest when representing information and modifying
the representation are both expensive. We examine the tradeoff
between compression efficiency and malleability cost, under a
malleability metric defined with respect to a string edit distance.
This introduces a metric topology to the compressed domain. We
characterize the exact set of achievable rates and malleability as
the solution of a subgraph isomorphism problem. This is all done
within the framework of the optimization approach to biology.
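The malleability metric above is defined with respect to a string edit distance between the old and new codewords. As an illustrative sketch, the Levenshtein distance is one standard edit distance (the paper's exact metric may differ); the cost of recycling a codeword is then the minimum number of symbol edits needed to synchronize the compressed versions:

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance: minimum number of single-character
    insertions, deletions, and substitutions turning a into b."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # delete ca
                            curr[j - 1] + 1,            # insert cb
                            prev[j - 1] + (ca != cb)))  # substitute
        prev = curr
    return prev[-1]

# Malleability cost of recycling one codeword into another:
print(edit_distance("110100", "110011"))  # 3 edits
```

A malleable code trades some compression efficiency so that typical updates to the source translate into codewords at small edit distance from the old one.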
Canonical time-frequency, time-scale, and frequency-scale representations of time-varying channels
Mobile communication channels are often modeled as linear time-varying
filters or, equivalently, as time-frequency integral operators with finite
support in time and frequency. Such a characterization inherently assumes the
signals are narrowband and may not be appropriate for wideband signals. In this
paper time-scale characterizations are examined that are useful in wideband
time-varying channels, for which a time-scale integral operator is physically
justifiable. A review of these time-frequency and time-scale characterizations
is presented. Both the time-frequency and time-scale integral operators have a
two-dimensional discrete characterization which motivates the design of
time-frequency or time-scale rake receivers. These receivers have taps for both
time and frequency (or time and scale) shifts of the transmitted signal. A
general theory of these characterizations which generates, as specific cases,
the discrete time-frequency and time-scale models is presented here. The
interpretation of these models, namely, that they can be seen to arise from
processing assumptions on the transmit and receive waveforms is discussed. Out
of this discussion a third model arises: a frequency-scale continuous channel
model with an associated discrete frequency-scale characterization.
Comment: To appear in Communications in Information and Systems, special
issue in honor of Thomas Kailath's seventieth birthday.
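The two-dimensional discrete characterization mentioned above represents the channel as a finite set of taps, each applying a delay and a frequency (Doppler) shift to the input. A minimal numerical sketch, in which tap placement, gains, and normalization are illustrative assumptions rather than the paper's model:

```python
import numpy as np

def tf_channel(x, taps):
    """Discrete time-frequency channel: y[k] = sum over (m, n) of
    h[m, n] * x[k - m] * exp(j 2 pi n k / N), i.e. each tap applies
    a delay of m samples and a Doppler shift of n frequency bins."""
    N = len(x)
    k = np.arange(N)
    y = np.zeros(N, dtype=complex)
    for (m, n), h in taps.items():
        delayed = np.concatenate([np.zeros(m), x[:N - m]])  # delay by m samples
        y += h * delayed * np.exp(2j * np.pi * n * k / N)   # Doppler shift by n bins
    return y

# Two-tap channel: direct path plus a delayed, Doppler-shifted echo
x = np.exp(2j * np.pi * 3 * np.arange(64) / 64)  # a single tone
y = tf_channel(x, {(0, 0): 1.0, (5, 2): 0.4})
```

A time-frequency rake receiver in this setting would correlate the received signal against each delayed, frequency-shifted copy of the transmitted waveform, collecting energy from every tap; the time-scale version replaces the frequency shift with a dilation.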
EZ-AG: Structure-free data aggregation in MANETs using push-assisted self-repelling random walks
This paper describes EZ-AG, a structure-free protocol for duplicate
insensitive data aggregation in MANETs. The key idea in EZ-AG is to introduce a
token that performs a self-repelling random walk in the network and aggregates
information from nodes when they are visited for the first time. A
self-repelling random walk of a token on a graph is one in which at each step,
the token moves to a neighbor that has been visited least often. While
self-repelling random walks visit all nodes in the network much faster than
plain random walks, they tend to slow down when most of the nodes are already
visited. In this paper, we show that a single step push phase at each node can
significantly speed up the aggregation and eliminate this slow down. By doing
so, EZ-AG achieves aggregation in only O(N) time and messages. In terms of
overhead, EZ-AG outperforms existing structure-free data aggregation protocols
by a factor of at least log(N) and achieves the lower bound for aggregation message
overhead. We demonstrate the scalability and robustness of EZ-AG using ns-3
simulations in networks ranging from 100 to 4000 nodes under different mobility
models and node speeds. We also describe a hierarchical extension for EZ-AG
that can produce multi-resolution aggregates at each node using only O(NlogN)
messages, which is a poly-logarithmic factor improvement over existing
techniques.
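The self-repelling walk at the core of EZ-AG is simple to sketch. In the sketch below, the adjacency-list representation and the uniform tie-breaking rule are assumptions, and the push phase and token messaging are omitted:

```python
import random
from collections import defaultdict

def self_repelling_walk(graph, start, steps):
    """Move a token for `steps` steps; at each step it goes to the
    neighbor visited least often (ties broken uniformly at random).
    Returns the set of nodes visited, i.e. aggregated, at least once."""
    visits = defaultdict(int)
    visited = set()
    node = start
    for _ in range(steps):
        visits[node] += 1
        visited.add(node)  # aggregate this node's value on first visit
        nbrs = graph[node]
        fewest = min(visits[v] for v in nbrs)
        node = random.choice([v for v in nbrs if visits[v] == fewest])
    return visited

# On a ring of 8 nodes the walk never backtracks until all are covered
ring = {i: [(i - 1) % 8, (i + 1) % 8] for i in range(8)}
covered = self_repelling_walk(ring, 0, 8)
```

The slowdown the paper addresses appears near the end of coverage, when most neighbors have equal (nonzero) visit counts; EZ-AG's one-step push lets the remaining nodes reach the token instead of waiting to be visited.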
An investigation into the Gustafsson limit for small planar antennas using optimisation
The fundamental limit for small antennas provides a guide to the
effectiveness of designs. Gustafsson et al., Yaghjian et al., and
Mohammadpour-Aghdam et al. each independently deduced a variation of the
Chu-Harrington limit for planar antennas, in different forms. Using a
multi-parameter optimisation technique based on the ant colony algorithm,
planar, meander dipole antenna designs were selected on the basis of lowest
resonant frequency and maximum radiation efficiency. The optimal antenna
designs across the spectrum from 570 to 1750 MHz occupying an area of were compared with these limits calculated using the
polarizability tensor. The results were compared with Sievenpiper's comparison
of published planar antenna properties. The optimised antennas have greater
than 90% polarizability compared to the containing conductive box in the range
, thereby verifying the optimisation algorithm. The generalized
absorption efficiency of the small meander line antennas is less than 50%, and
results are the same for both PEC and copper designs.
Comment: 6 pages, 10 figures, in-press article. IEEE Transactions on Antennas
and Propagation (2014).
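For context, the classical Chu-Harrington bound referenced above ties the minimum radiation Q of an antenna to the electrical size ka of its enclosing sphere; the planar (Gustafsson-type) limits refine this via the polarizability tensor. A minimal sketch of the spherical bound only, where the 0.05 m radius is an arbitrary example value:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def chu_limit_q(freq_hz: float, radius_m: float) -> float:
    """Chu-Harrington lower bound on radiation Q for an antenna
    enclosed in a sphere of radius a: Q >= 1/(ka)^3 + 1/(ka)."""
    ka = 2 * math.pi * freq_hz / C * radius_m
    return 1.0 / ka**3 + 1.0 / ka

# Electrically small antennas (ka << 1) are forced to high Q,
# which means narrow bandwidth
print(round(chu_limit_q(570e6, 0.05), 2))
```

Since Q bounds bandwidth from above, this is why lowering the resonant frequency of a fixed-size planar antenna is in direct tension with radiation efficiency and bandwidth in the optimisation described above.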
A Practical Method to Estimate Information Content in the Context of 4D-Var Data Assimilation. I: Methodology
Data assimilation obtains improved estimates of the state of a physical system
by combining imperfect model results with sparse and noisy observations of reality.
Not all observations used in data assimilation are equally valuable. The ability to
characterize the usefulness of different data points is important for analyzing the
effectiveness of the assimilation system, for data pruning, and for the design of future
sensor systems.
This paper focuses on the four dimensional variational (4D-Var) data assimilation
framework. Metrics from information theory are used to quantify the contribution
of observations to decreasing the uncertainty with which the system state is known.
We establish an interesting relationship between different information-theoretic metrics
and the variational cost function/gradient under Gaussian linear assumptions.
Based on this insight we derive an ensemble-based computational procedure to estimate
the information content of various observations in the context of 4D-Var. The
approach is illustrated on linear and nonlinear test problems. In the companion paper
[Singh et al. (2011)], the methodology is applied to a global chemical data
assimilation problem.
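Under the Gaussian linear assumptions mentioned above, the Shannon information content of an observation set has a closed form in terms of the background covariance B, observation operator H, and observation-error covariance R. The sketch below gives that standard linear-Gaussian expression; the paper's contribution is an ensemble-based procedure that estimates such quantities within 4D-Var without forming these matrices explicitly:

```python
import numpy as np

def shannon_info_gain(B, H, R):
    """Shannon information content of observations y = H x + noise:
    0.5 * ln(det(B) / det(A)), where the posterior (analysis)
    covariance is A = (B^-1 + H^T R^-1 H)^-1 (linear-Gaussian case)."""
    A_inv = np.linalg.inv(B) + H.T @ np.linalg.inv(R) @ H
    _, logdet_A_inv = np.linalg.slogdet(A_inv)  # ln det(A^-1) = -ln det(A)
    _, logdet_B = np.linalg.slogdet(B)
    return 0.5 * (logdet_B + logdet_A_inv)

# One state variable, one unit-noise observation: gain = 0.5 ln 2
gain = shannon_info_gain(np.eye(1), np.eye(1), np.eye(1))
```

Ranking observations by this gain supports the data-pruning and sensor-design uses described above: observations with smaller error covariance R contribute more information.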