Fundamentals of Large Sensor Networks: Connectivity, Capacity, Clocks and Computation
Sensor networks potentially feature large numbers of nodes that can sense
their environment over time, communicate with each other over a wireless
network, and process information. They differ from data networks in that the
network as a whole may be designed for a specific application. We study the
theoretical foundations of such large-scale sensor networks, addressing four
fundamental issues: connectivity, capacity, clocks, and function computation.
To begin with, a sensor network must be connected so that information can
indeed be exchanged between nodes. The connectivity graph of an ad-hoc network
is modeled as a random graph and the critical range for asymptotic connectivity
is determined, as well as the critical number of neighbors that a node needs to
connect to. Next, given connectivity, we address the issue of how much data can
be transported over the sensor network. We present fundamental bounds on
capacity under several models, as well as architectural implications for how
wireless communication should be organized.
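To make the connectivity question concrete, here is a small simulation sketch (illustrative only, not code from the paper): nodes are dropped uniformly on the unit square, two nodes are joined if they are within range r, and connectivity is checked with union-find. The scaling pi * r^2 * n = log(n) + c is a well-known critical form for asymptotic connectivity of such random geometric graphs; the values of n, c, and the seed below are arbitrary.

```python
import numpy as np

def is_connected(points, r):
    """Union-find connectivity check of the geometric graph with radius r."""
    n = len(points)
    parent = list(range(n))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    # Pairwise distances; an edge exists iff distance <= r.
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    for i in range(n):
        for j in range(i + 1, n):
            if dist[i, j] <= r:
                parent[find(i)] = find(j)
    return len({find(i) for i in range(n)}) == 1

# Critical scaling: pi * r^2 * n = log(n) + c, connected w.h.p. as c grows.
n, c = 400, 4.0
r_crit = np.sqrt((np.log(n) + c) / (np.pi * n))
rng = np.random.default_rng(0)
sample = rng.random((n, 2))  # n nodes uniform on the unit square
```

Sweeping c from negative to positive values and re-sampling exhibits the sharp threshold behavior the abstract refers to.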
Temporal information is important both for the applications of sensor
networks as well as their operation. We present fundamental bounds on the
synchronizability of clocks in networks, and also present and analyze
algorithms for clock synchronization. Finally, we turn to the gathering of
relevant information, which is what sensor networks are designed to do. One needs to
study optimal strategies for in-network aggregation of data, in order to
reliably compute a composite function of sensor measurements, as well as the
complexity of doing so. We address the issue of how such computation can be
performed efficiently in a sensor network and the algorithms for doing so, for
some classes of functions.
Comment: 10 pages, 3 figures, Submitted to the Proceedings of the IEE
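As a toy illustration of in-network aggregation (a hypothetical sketch, not an algorithm from the paper), consider computing the maximum of all sensor readings over a spanning tree: each node forwards only its subtree's running maximum, so every edge carries a single value instead of relaying raw measurements toward the root.

```python
def tree_max(children, readings, root=0):
    """Compute the max of all readings by in-network aggregation on a tree.

    children: dict node -> list of child nodes; readings: dict node -> value.
    Each node sends exactly one value (its subtree maximum) to its parent,
    so the cost is one transmission per edge rather than per raw measurement.
    """
    return max(
        [readings[root]]
        + [tree_max(children, readings, child) for child in children.get(root, [])]
    )

children = {0: [1, 2], 1: [3, 4]}
readings = {0: 3.1, 1: 7.4, 2: 0.5, 3: 9.9, 4: 2.2}
# tree_max(children, readings) -> 9.9, using n - 1 = 4 transmissions
```

The maximum is one simple symmetric function; the classes of functions the abstract studies are broader, and the optimal aggregation strategy depends on the function.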
Development and analysis of the Software Implemented Fault-Tolerance (SIFT) computer
SIFT (Software Implemented Fault Tolerance) is an experimental, fault-tolerant computer system designed to meet the extreme reliability requirements for safety-critical functions in advanced aircraft. Errors are masked by performing a majority voting operation over the results of identical computations, and faulty processors are removed from service by reassigning computations to the nonfaulty processors. This scheme has been implemented in a special architecture using a set of standard Bendix BDX930 processors, augmented by a special asynchronous-broadcast communication interface that provides direct, processor-to-processor communication among all processors. Fault isolation is accomplished in hardware; all other fault-tolerance functions, together with scheduling and synchronization, are implemented exclusively by executive system software. The system reliability is predicted by a Markov model. Mathematical consistency of the system software with respect to the reliability model has been partially verified, using recently developed tools for machine-aided proof of program correctness.
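The fault-masking step described above, majority voting over replicated results (which SIFT performs in executive software), can be sketched as follows; the function name and error handling are an illustration, not the SIFT implementation.

```python
from collections import Counter

def majority_vote(replica_results):
    """Mask faults by majority vote over results of identical computations.

    With 2f + 1 replicas, up to f faulty results are outvoted. If no value
    achieves a strict majority, the fault cannot be masked by voting alone.
    """
    value, count = Counter(replica_results).most_common(1)[0]
    if count <= len(replica_results) // 2:
        raise RuntimeError("no majority: unmaskable fault")
    return value

# majority_vote([42, 42, 7]) -> 42; the replica reporting 7 is outvoted.
```

A replica that repeatedly loses the vote would, as in the scheme above, be diagnosed as faulty and have its computations reassigned.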
Quantum Darwinism and Friends
In honor of Wojciech Zurek’s 70th birthday, this Special Issue is dedicated to recent advances in our understanding of the emergence of classical reality, and pays tribute to Zurek’s seminal contributions to our understanding of the Universe. To this end, “Quantum Darwinism and Friends” collects articles that make sense of the apparent chasm between quantum weirdness and classical perception, and provides a snapshot of this fundamental, exciting, and vivid field of theoretical physics.
Channel equalization to achieve high bit rates in discrete multitone systems
Multicarrier modulation (MCM) techniques such as orthogonal frequency division
multiplexing (OFDM) and discrete multi-tone (DMT) modulation are attractive
for high-speed data communications due to the ease with which MCM can combat
channel dispersion. Beyond the general benefits of MCM, DMT modulation has the
additional ability to perform dynamic bit loading, which has the potential to fully
exploit the available bandwidth in a slowly time-varying channel. In broadband wireline
communications, DMT modulation is standardized for asymmetric digital subscriber
line (ADSL) and very-high-bit-rate digital subscriber line (VDSL) modems. ADSL
and VDSL standards are used by telephone companies to provide high-speed data
service to residences and offices.
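Dynamic bit loading can be sketched with the standard gap approximation, b_i = floor(log2(1 + SNR_i / Gamma)) bits on subcarrier i; the 9.8 dB gap below is a common textbook value for uncoded QAM near 1e-7 error rate, and the per-tone cap is ADSL-style. None of these numbers are taken from this dissertation.

```python
import numpy as np

def bit_loading(snr_db, gap_db=9.8, max_bits=15):
    """Bits per subcarrier via the gap approximation, capped per tone.

    snr_db: per-subcarrier SNR in dB; gap_db: SNR gap Gamma in dB.
    """
    snr = 10.0 ** (np.asarray(snr_db, dtype=float) / 10.0)
    gamma = 10.0 ** (gap_db / 10.0)
    bits = np.floor(np.log2(1.0 + snr / gamma))
    return np.clip(bits, 0, max_bits).astype(int)

# A subcarrier at 30 dB SNR with a 9.8 dB gap carries 6 bits;
# a subcarrier at 0 dB carries none.
```

Because each tone is loaded according to its own SNR, a slowly time-varying channel can be tracked by re-running the loading as channel estimates are refreshed.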
In an ADSL receiver, an equalizer is required to compensate for the channel’s
dispersion in the time domain and the channel’s distortion in the frequency domain
of the transmitted waveform. This dissertation proposes design methods for linear
equalizers to increase the bit rate of the connection. The methods are amenable
to implementation on programmable fixed-point digital signal processors, which are
employed in ADSL/VDSL transceivers.
A conventional ADSL equalizer consists of a time-domain equalizer, a fast
Fourier transform, and a frequency domain equalizer. The time domain equalizer
(TEQ) is a finite impulse response filter that, when coupled with the discretized channel,
produces an equivalent channel whose impulse response is shorter than that of
the discretized channel. This channel shortening is required by the ADSL standards.
In this dissertation, I first propose a linear-phase TEQ design that exploits symmetry
in existing eigen-filter approaches such as the minimum mean square error (MMSE),
maximum shortening signal-to-noise ratio (MSSNR), and minimum intersymbol
interference (Min-ISI) equalizers. TEQs with symmetric coefficients can reach the
same performance as non-symmetric ones with much lower training complexity.
Second, I improve the Min-ISI design. I reformulate the cost function to make
the design of long TEQs feasible. I remove the dependency on the transmission delay
in order to reduce the complexity associated with delay optimization. Quantized
weighting is introduced to further lower the complexity. I also propose an iterative
optimization procedure for Min-ISI that completely avoids Cholesky decomposition
and hence is better suited for a fixed-point implementation.
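To illustrate the eigen-filter family of TEQ designs discussed above, a minimal MSSNR sketch follows: maximize the effective channel's energy inside a (nu + 1)-sample window relative to the energy outside it, which reduces to a generalized symmetric eigenproblem. The channel, filter length, window, and delay below are arbitrary illustrative choices, not values from the dissertation.

```python
import numpy as np
from scipy.linalg import eigh, toeplitz

def conv_matrix(h, taps):
    """Convolution matrix H with H @ w == np.convolve(h, w) for len(w) == taps."""
    col = np.concatenate([h, np.zeros(taps - 1)])
    row = np.zeros(taps)
    row[0] = h[0]
    return toeplitz(col, row)

def mssnr_teq(h, taps, nu, delay):
    """MSSNR TEQ: maximize in-window vs. out-of-window energy of h * w.

    The energy ratio w^T A w / w^T B w is maximized by the generalized
    eigenvector of (A, B) with the largest eigenvalue.
    """
    H = conv_matrix(h, taps)
    idx = np.arange(H.shape[0])
    inside = (idx >= delay) & (idx <= delay + nu)
    A = H[inside].T @ H[inside]    # in-window energy
    B = H[~inside].T @ H[~inside]  # "wall" energy (must be positive definite)
    _, vecs = eigh(A, B)           # eigenvalues returned in ascending order
    w = vecs[:, -1]
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
h = rng.standard_normal(32)      # a stand-in dispersive channel
w = mssnr_teq(h, taps=8, nu=4, delay=10)
shortened = np.convolve(h, w)    # effective channel, energy focused in the window
```

This is the plain eigen-filter formulation; the dissertation's contributions above concern making such designs cheaper to train and friendlier to fixed-point hardware, which this sketch does not attempt.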
Finally, I propose a dual-path TEQ structure, which designs a standard single-FIR
TEQ to achieve a good bit rate over the entire transmission bandwidth, and
designs another FIR TEQ to improve the bit rate over a subset of subcarriers. The dual-path
TEQ can be viewed as a special case of a complex-valued filter bank structure
that delivers the best bit rate of existing DMT equalizers. However, the dual-path
TEQ provides a very good tradeoff between achievable bit rate and implementation
complexity on a programmable digital signal processor.
LIPIcs, Volume 251, ITCS 2023, Complete Volume