Compression of interferometric radio-astronomical data
The volume of radio-astronomical data is a considerable burden in the
processing and storing of radio observations with high time and frequency
resolutions and large bandwidths. Lossy compression of interferometric
radio-astronomical data is considered to reduce the volume of visibility data
and to speed up processing.
A new compression technique named "Dysco" is introduced that consists of two
steps: a normalization step, in which grouped visibilities are normalized to
have a similar distribution; and a quantization and encoding step, which rounds
values onto a given set of quantization levels using dithering. Several
non-linear quantization schemes are tested and combined with different methods
for normalizing the data. Four data sets with observations from the LOFAR and
MWA telescopes are processed with different processing strategies and different
combinations of normalization and quantization. The effects of compression are
measured in the image plane.
The noise added by the lossy compression technique acts like normal system
noise. The accuracy of Dysco depends on the signal-to-noise ratio of the
data: noisy data can be compressed with a smaller loss of image quality. Data
with typical correlator time and frequency resolutions can be compressed by a
factor of 6.4 for LOFAR and 5.3 for MWA observations with less than 1% added
system noise. An implementation of the compression technique is released that
provides a Casacore storage manager and allows transparent encoding and
decoding. Encoding and decoding are faster than the read/write speed of typical
disks.
The technique can be used for LOFAR and MWA to reduce the archival space
requirements for storing observed data. Data from SKA-low will likely be
compressible by the same amount as LOFAR. The same technique can be used to
compress data from other telescopes, but a different bit-rate might be
required.

Comment: Accepted for publication in A&A. 13 pages, 8 figures. Abstract was
abridged.
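The two-step scheme described above can be sketched as a toy subtractive-dither quantizer. This is an illustrative reconstruction, not the actual Dysco implementation; the function names, the 255-level linear scheme, and the per-group max-normalization are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress(vis, levels=255):
    """Toy two-step compressor in the spirit of the abstract:
    (1) normalize a group of visibilities to a common scale,
    (2) quantize with subtractive dithering, so the quantization
    error behaves like additive noise rather than correlated distortion.
    Illustrative only, not the real Dysco code."""
    scale = np.max(np.abs(vis))            # per-group normalization factor
    half = levels // 2
    norm = vis / scale                     # values now lie in [-1, 1]
    dither = rng.uniform(-0.5, 0.5, vis.shape)
    q = np.round(norm * half + dither).astype(np.int16)
    return q, scale, dither

def decompress(q, scale, dither, levels=255):
    # In practice the decoder would regenerate the dither from a shared
    # PRNG seed instead of storing it alongside the data.
    return (q - dither) / (levels // 2) * scale

vis = rng.normal(0.0, 1.0, 1000) + 0.1     # toy stand-in for visibilities
q, scale, dither = compress(vis)
rec = decompress(q, scale, dither)
err = np.std(rec - vis)                    # residual acts like added system noise
```

With subtractive dithering the per-sample error is bounded by half a quantization step, which is why the added error looks like ordinary system noise in the image plane.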
Survey propagation at finite temperature: application to a Sourlas code as a toy model
In this paper we investigate a finite temperature generalization of survey
propagation, by applying it to the problem of finite temperature decoding of a
biased finite connectivity Sourlas code for temperatures lower than the
Nishimori temperature. We observe that the result is a shift of the location of
the dynamical critical channel noise to larger values than the corresponding
dynamical transition for belief propagation, as suggested recently by
Migliorini and Saad for LDPC codes. We show how the finite temperature 1-RSB SP
gives accurate results in the regime where competing approaches fail to
converge or fail to recover the retrieval state.
Progress of analog-hybrid computation
Review of fast analog/hybrid computer systems, integrated operational amplifiers, electronic mode-control switches, digital attenuators, and packaging techniques.
Topological quantum memory
We analyze surface codes, the topological quantum error-correcting codes
introduced by Kitaev. In these codes, qubits are arranged in a two-dimensional
array on a surface of nontrivial topology, and encoded quantum operations are
associated with nontrivial homology cycles of the surface. We formulate
protocols for error recovery, and study the efficacy of these protocols. An
order-disorder phase transition occurs in this system at a nonzero critical
value of the error rate; if the error rate is below the critical value (the
accuracy threshold), encoded information can be protected arbitrarily well in
the limit of a large code block. This phase transition can be accurately
modeled by a three-dimensional Z_2 lattice gauge theory with quenched disorder.
We estimate the accuracy threshold, assuming that all quantum gates are local,
that qubits can be measured rapidly, and that polynomial-size classical
computations can be executed instantaneously. We also devise a robust recovery
procedure that does not require measurement or fast classical processing;
however for this procedure the quantum gates are local only if the qubits are
arranged in four or more spatial dimensions. We discuss procedures for
encoding, measurement, and performing fault-tolerant universal quantum
computation with surface codes, and argue that these codes provide a promising
framework for quantum computing architectures.

Comment: 39 pages, 21 figures, REVTeX.
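A minimal sketch of the plaquette-syndrome idea behind these codes, assuming qubits on the edges of an L x L periodic grid (a torus) with independent X errors; this is illustrative only and not tied to any quantum computing library.

```python
import numpy as np

rng = np.random.default_rng(1)

# Qubits live on the edges of an L x L periodic grid. A plaquette
# (Z-stabilizer) syndrome fires when an odd number of its four edges
# carry an X error. Names and layout are ours, for illustration.
L = 8
p = 0.05
# two edge sets: horizontal[i, j] and vertical[i, j] hold X-error flags
horizontal = rng.random((L, L)) < p
vertical = rng.random((L, L)) < p

def plaquette_syndrome(h, v):
    # plaquette (i, j) touches edges h[i, j], h[i+1, j] (wrapping),
    # v[i, j], and v[i, j+1] (wrapping); XOR gives the parity check
    return h ^ np.roll(h, -1, axis=0) ^ v ^ np.roll(v, -1, axis=1)

syn = plaquette_syndrome(horizontal, vertical)
```

Because every edge error flips exactly two plaquettes on a torus, the total number of fired syndromes is always even; the decoder's job is to pair up these defects, which is what makes the problem map onto a disordered statistical-mechanics model.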
H2B: Heartbeat-based Secret Key Generation Using Piezo Vibration Sensors
We present Heartbeats-2-Bits (H2B), a system for securely pairing
wearable devices by generating a shared secret key from the skin vibrations
caused by the heartbeat. This work is motivated by the potential power-saving
opportunity arising from the fact that heartbeat intervals can be detected
energy-efficiently using inexpensive and power-efficient piezo sensors, which
obviates the need to employ complex heartbeat monitors such as an
electrocardiogram or a photoplethysmogram. Indeed, our experiments show that
piezo sensors can measure heartbeat intervals on many different body locations
including chest, wrist, waist, neck and ankle. Unfortunately, we also discover
that the heartbeat interval signal captured by piezo vibration sensors has a low
Signal-to-Noise Ratio (SNR), because these sensors are not designed as precision
heartbeat monitors; this becomes the key challenge for H2B. To overcome this
problem, we first apply a quantile function-based quantization method to fully
extract the useful entropy from the noisy piezo measurements. We then propose a
novel Compressive Sensing-based reconciliation method to correct the high bit
mismatch rates between the two independently generated keys caused by low SNR.
We prototype H2B using off-the-shelf piezo sensors and evaluate its performance
on a dataset collected from different body positions of 23 participants. Our
results show that H2B achieves a pairing success rate of 95.6%. We
also analyze and demonstrate H2B's robustness against three types of attacks.
Finally, our power measurements show that H2B is very power-efficient.
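The quantile-based quantization step can be sketched roughly as follows. The binning, bit mapping, and parameters here are our assumptions, not the exact H2B pipeline (which additionally applies the compressive-sensing reconciliation step to correct the remaining bit mismatches).

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize_ibis(ibis, bits=2):
    """Map noisy inter-beat intervals (IBIs) to bits using each device's
    own empirical quantiles, so symbol frequencies are roughly uniform
    and entropy is extracted even from a low-SNR signal. Illustrative
    sketch only, not the exact H2B algorithm."""
    n_bins = 2 ** bits
    # interior bin edges at the empirical quantiles (e.g. 25/50/75%)
    edges = np.quantile(ibis, np.linspace(0, 1, n_bins + 1)[1:-1])
    symbols = np.digitize(ibis, edges)          # symbol index 0..n_bins-1
    # unpack each symbol into its fixed-width bit string
    return [int(b) for s in symbols for b in format(int(s), f"0{bits}b")]

true_ibi = rng.normal(0.8, 0.08, 128)           # shared heartbeat signal (s)
# each device observes the same heartbeat through independent sensor noise
alice = quantize_ibis(true_ibi + rng.normal(0, 0.005, 128))
bob = quantize_ibis(true_ibi + rng.normal(0, 0.005, 128))
mismatch = np.mean(np.array(alice) != np.array(bob))
```

Samples falling near a quantile edge can land in different bins on the two devices, which is the bit-mismatch problem that the paper's reconciliation step is designed to correct.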
Hybrid computer Monte-Carlo techniques
Hybrid analog-digital computer systems for applications of the Monte Carlo method.
Experimental analysis of computer system dependability
This paper reviews an area which has evolved over the past 15 years: experimental analysis of computer system dependability. Methodologies and advances are discussed for three basic approaches used in the area: simulated fault injection, physical fault injection, and measurement-based analysis. The three approaches are suited, respectively, to dependability evaluation in the three phases of a system's life: the design phase, the prototype phase, and the operational phase. Before the discussion of these phases, several statistical techniques used in the area are introduced. For each phase, a classification of research methods or study topics is outlined, followed by a discussion of these methods or topics as well as representative studies.

The statistical techniques introduced include the estimation of parameters and confidence intervals, probability distribution characterization, and several multivariate analysis methods. Importance sampling, a statistical technique used to accelerate Monte Carlo simulation, is also introduced.

The discussion of simulated fault injection covers electrical-level, logic-level, and function-level fault injection methods as well as representative simulation environments such as FOCUS and DEPEND. The discussion of physical fault injection covers hardware, software, and radiation fault injection methods as well as several software and hybrid tools including FIAT, FERARI, HYBRID, and FINE. The discussion of measurement-based analysis covers measurement and data processing techniques, basic error characterization, dependency analysis, Markov reward modeling, software dependability, and fault diagnosis.

The discussion involves several important issues studied in the area, including fault models, fast simulation techniques, workload/failure dependency, correlated failures, and software fault tolerance.
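Importance sampling, mentioned above as a way to accelerate Monte Carlo simulation, can be illustrated with a toy rare-event estimate; the distributions and parameters below are our choices for illustration and are not tied to any particular fault-injection tool.

```python
import math
import random

random.seed(3)

# Estimate the rare failure probability P(X > t) for X ~ Exp(1).
# Naive Monte Carlo almost never samples the event; importance sampling
# draws from a heavier-tailed proposal Exp(lam), lam < 1, and reweights
# each hit by the likelihood ratio target/proposal.

def naive_mc(t, n):
    return sum(random.expovariate(1.0) > t for _ in range(n)) / n

def importance_mc(t, n, lam=0.2):
    total = 0.0
    for _ in range(n):
        x = random.expovariate(lam)            # sample from the proposal
        if x > t:
            # likelihood ratio: target density / proposal density
            total += math.exp(-x) / (lam * math.exp(-lam * x))
    return total / n

t = 12.0                                       # true P = exp(-12), about 6.1e-6
naive = naive_mc(t, 20000)                     # almost always 0.0 at this size
est = importance_mc(t, 20000)                  # close to exp(-12)
```

The reweighting keeps the estimator unbiased while concentrating samples in the failure region, which is exactly why the technique is useful for evaluating highly dependable systems whose failures are too rare for direct simulation.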