DiFX2: A more flexible, efficient, robust and powerful software correlator
Software correlation, where a correlation algorithm written in a high-level
language such as C++ is run on commodity computer hardware, has become
increasingly attractive for small to medium-sized and/or bandwidth-constrained
radio interferometers. In particular, many long baseline arrays (which
typically have fewer than 20 elements and are restricted in observing bandwidth
by costly recording hardware and media) have utilized software correlators for
rapid, cost-effective correlator upgrades to allow compatibility with new,
wider-bandwidth recording systems and to improve correlator flexibility. The DiFX
correlator, made publicly available in 2007, has been a popular choice in such
upgrades and is now used for production correlation by a number of
observatories and research groups worldwide. Here we describe the evolution in
the capabilities of the DiFX correlator over the past three years, including a
number of new capabilities, substantial performance improvements, and a large
amount of supporting infrastructure to ease use of the code. New capabilities
include the ability to correlate a large number of phase centers in a single
correlation pass, the extraction of phase calibration tones, correlation of
disparate but overlapping sub-bands, the production of rapidly sampled
filterbank and kurtosis data at minimal cost, and many more. The latest version
of the code is at least 15% faster than the original, and in certain
situations the speedup is many times larger. Finally, we also present
detailed test results validating the correctness of the new code.
Comment: 28 pages, 9 figures, accepted for publication in PASP
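As a concrete illustration of the FX approach on which software correlators
such as DiFX are built, the following minimal Python sketch (a hypothetical
toy; the function name, channel count, and toy data are ours, not taken from
the DiFX codebase) channelizes each station's voltage stream with an FFT
("F") and cross-multiplies and accumulates the channels ("X") to form a
visibility spectrum:

    import numpy as np

    def fx_correlate(v1, v2, nchan=64):
        # Average per-segment cross-spectra over all complete FFT segments.
        nseg = min(len(v1), len(v2)) // (2 * nchan)
        acc = np.zeros(nchan, dtype=complex)
        for i in range(nseg):
            s = slice(i * 2 * nchan, (i + 1) * 2 * nchan)
            S1 = np.fft.rfft(v1[s])[:nchan]   # "F": channelize station 1
            S2 = np.fft.rfft(v2[s])[:nchan]   # "F": channelize station 2
            acc += S1 * np.conj(S2)           # "X": cross-multiply, accumulate
        return acc / nseg

    # Toy usage: a common sky signal in independent noise at two stations.
    rng = np.random.default_rng(0)
    sky = rng.normal(size=1 << 16)
    v1 = sky + rng.normal(size=sky.size)
    v2 = np.roll(sky, 3) + rng.normal(size=sky.size)  # 3-sample delay
    vis = fx_correlate(v1, v2)
    print(np.angle(vis[:8]))  # delay appears as a phase slope vs. channel

The residual delay in the toy shows up as a linear phase slope across the
visibility channels, which is what later fringe fitting solves for.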
Universal Sampling Rate Distortion
We examine the coordinated and universal rate-efficient sampling of a subset
of correlated discrete memoryless sources followed by lossy compression of the
sampled sources. The goal is to reconstruct a predesignated subset of sources
within a specified level of distortion. The combined sampling mechanism and
rate distortion code are universal in that they are devised to perform robustly
without exact knowledge of the underlying joint probability distribution of the
sources. In Bayesian as well as non-Bayesian settings, single-letter
characterizations are provided for the universal sampling rate distortion
function for fixed-set sampling, independent random sampling, and memoryless
random sampling. It is illustrated how each of these sampling mechanisms
successively improves on the previous one. Our achievability proofs bring
forth new schemes for joint source-distribution learning and lossy
compression.
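To make the ordering of the sampling mechanisms concrete, the following toy
Python simulation (our own construction under simplified assumptions, not an
example from the paper; it drops the compression stage entirely and isolates
the sampling effect) reconstructs a designated source X1 from one sampled
companion per time instant. Which companion is informative depends on an
unknown hypothesis, so a fixed-set sampler must commit and suffers in the
worst case, while a memoryless random sampler hedges:

    import numpy as np

    rng = np.random.default_rng(1)
    n, p = 200_000, 0.05

    def sources(hypothesis):
        # Under "A", X2 tracks X1 and X3 is noise; under "B" roles swap.
        x1 = rng.integers(0, 2, n)
        flips = (rng.random(n) < p).astype(int)
        junk = rng.integers(0, 2, n)
        good = x1 ^ flips
        return (x1, good, junk) if hypothesis == "A" else (x1, junk, good)

    def distortion(hypothesis, sampler):
        x1, x2, x3 = sources(hypothesis)
        picks = sampler(n)                  # which source is sampled when
        est = np.where(picks == 2, x2, x3)  # estimate X1 by copying sample
        return np.mean(est != x1)           # Hamming distortion

    fixed = lambda n: np.full(n, 2)               # fixed-set: always X2
    random_ = lambda n: rng.choice([2, 3], n)     # memoryless random

    for name, s in [("fixed-set", fixed), ("memoryless random", random_)]:
        worst = max(distortion(h, s) for h in "AB")
        print(f"{name}: worst-case distortion ~ {worst:.3f}")
    # fixed-set ~ 0.50 (fails under hypothesis "B"); random ~ 0.275.

The gap mirrors the universal setting above: randomizing the sampled set
hedges against uncertainty in the joint distribution in a way that no fixed
subset can.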
A Comprehensive Review of Distributed Coding Algorithms for Visual Sensor Network (VSN)
Since the invention of the low-cost camera, cameras have been widely incorporated into sensor nodes in Wireless Sensor Networks (WSNs) to form Visual Sensor Networks (VSNs). However, the use of cameras brings a set of new challenges, because all the sensor nodes are powered by batteries. Energy consumption is therefore one of the most critical issues that must be taken into consideration. In addition, reliance on batteries also limits the resources (memory, processor) that can be incorporated into the sensor node. The lifetime of a VSN decreases quickly as images are transferred to the destination. One solution to this problem is to reduce the data transferred through the network by using image compression. In this paper, a comprehensive survey and analysis of distributed coding algorithms that can be used to encode images in a VSN is provided, including an overview of these algorithms together with their advantages and deficiencies when implemented in a VSN. The algorithms are then compared to determine which is most suitable for VSNs.
Compression of interferometric radio-astronomical data
The volume of radio-astronomical data is a considerable burden in the
processing and storing of radio observations with high time and frequency
resolutions and large bandwidths. Lossy compression of interferometric
radio-astronomical data is considered to reduce the volume of visibility data
and to speed up processing.
A new compression technique named "Dysco" is introduced that consists of two
steps: a normalization step, in which grouped visibilities are normalized to
have a similar distribution; and a quantization and encoding step, which
rounds values onto a set of quantization levels using dithering. Several
non-linear quantization schemes are tested and combined with different methods
for normalizing the data. Four data sets with observations from the LOFAR and
MWA telescopes are processed with different processing strategies and different
combinations of normalization and quantization. The effects of compression are
measured in the image plane.
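The two-step design can be sketched in a few lines of Python (an
illustrative toy assuming per-group RMS normalization and 4-bit uniform
quantization with randomized rounding; the actual Dysco grouping, bit rates,
and non-linear quantization schemes differ):

    import numpy as np

    rng = np.random.default_rng(42)

    def encode(group, bits=4):
        # Step 1: normalize the group by its RMS so values share a
        # similar distribution across groups.
        scale = max(np.sqrt(np.mean(group ** 2)), 1e-12)
        x = np.clip(group / (3 * scale), -1.0, 1.0)
        # Step 2: dithered (randomized) rounding onto 2**bits levels,
        # which makes the quantization error unbiased and noise-like.
        levels = 2 ** bits
        q = np.round((x + 1) / 2 * (levels - 1) + rng.random(x.shape) - 0.5)
        return np.clip(q, 0, levels - 1).astype(np.uint8), scale

    def decode(q, scale, bits=4):
        levels = 2 ** bits
        return (q / (levels - 1) * 2.0 - 1.0) * 3 * scale

    # Real and imaginary visibility parts would each be coded this way.
    group = rng.normal(size=1024)
    q, scale = encode(group)
    added = np.std(decode(q, scale) - group) / np.std(group)
    print(f"noise added at 4 bits/value ~ {added:.1%} of signal RMS")

Because the dithered quantization error behaves like additional uncorrelated
noise, its effect in the image plane resembles a small increase in system
noise, consistent with the measurements described next.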
The noise added by the lossy compression technique acts like normal system
noise. The accuracy of Dysco depends on the signal-to-noise ratio of the
data: noisy data can be compressed with a smaller loss of image quality. Data
with typical correlator time and frequency resolutions can be compressed by a
factor of 6.4 for LOFAR and 5.3 for MWA observations with less than 1% added
system noise. An implementation of the compression technique is released that
provides a Casacore storage manager and allows transparent encoding and
decoding. Encoding and decoding is faster than the read/write speed of typical
disks.
The technique can be used for LOFAR and MWA to reduce the archival space
requirements for storing observed data. Data from SKA-low will likely be
compressible by the same amount as LOFAR. The same technique can be used to
compress data from other telescopes, but a different bit-rate might be
required.
Comment: Accepted for publication in A&A. 13 pages, 8 figures. The abstract
was abridged.