Metrics to evaluate compression algorithms for RAW SAR data
Modern synthetic aperture radar (SAR) systems have size, weight, power and cost (SWAP-C) limitations since platforms are becoming smaller, while SAR operating modes are becoming more complex. Due to the computational complexity of the SAR processing required for modern SAR systems, performing the processing on board the platform is not a feasible option. Thus, SAR systems are producing an ever-increasing volume of data that needs to be transmitted to a ground station for processing.
Compression algorithms are utilised to reduce the volume of the raw data. However, these algorithms can introduce losses and distortions that may degrade the effectiveness of the SAR mission. This study addresses the lack of standardised quantitative performance metrics to objectively quantify the performance of SAR data-compression algorithms. Therefore, metrics were established in two different domains, namely the data domain and the image domain. The data-domain metrics are used to determine the performance of the quantisation and the associated losses or errors it induces in the raw data samples. The image-domain metrics evaluate the quality of the SAR image after SAR processing has been performed.
In this study, three well-known SAR compression algorithms were implemented and applied to three real SAR data sets obtained from a prototype airborne SAR system. The performance of these algorithms was evaluated using the proposed metrics. Important metrics in the data domain were found to be the compression ratio, the entropy, the dynamic range, and statistical parameters such as skewness and kurtosis, which measure the deviation from the original distributions of the uncompressed data. The data histograms are an important visual representation of the effects of the compression algorithm on the data. Important error measures in the data domain are the signal-to-quantisation-noise ratio (SQNR) and, for applications where phase information is required to produce the output, the phase error. Important metrics in the image domain include the dynamic range, the impulse response function and the image contrast, as well as the error measure, the signal-to-distortion-noise ratio (SDNR).
The metrics suggested that all three algorithms performed well and are thus well suited to the compression of raw SAR data. The fast Fourier transform block-adaptive quantiser (FFT-BAQ) algorithm had the best overall performance, but an analysis of the computational complexity of its compression steps indicated that it also has the highest complexity of the three algorithms.
Since different levels of degradation are acceptable for different SAR applications, a trade-off can be made between the data reduction and the degradation caused by the algorithm. Due to SWAP-C limitations, there also remains a trade-off between the performance and the computational complexity of the compression algorithm.
Dissertation (MEng), University of Pretoria, Department of Electrical, Electronic and Computer Engineering, 2019.
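To make the data-domain metrics above concrete, here is a minimal sketch of how they could be computed for a pair of raw-sample arrays; the function name and the fixed-rate compression-ratio formula are illustrative assumptions, not taken from the dissertation.

```python
import numpy as np
from scipy.stats import skew, kurtosis

def data_domain_metrics(original, reconstructed, n_bits_in=8, n_bits_out=4):
    """Data-domain metrics for one raw SAR channel pair: `original` and
    `reconstructed` are complex arrays before compression and after
    decompression, respectively."""
    error = original - reconstructed

    # Signal-to-quantisation-noise ratio in dB.
    sqnr_db = 10 * np.log10(np.mean(np.abs(original) ** 2)
                            / np.mean(np.abs(error) ** 2))

    # Phase error, relevant where phase is needed (e.g. interferometry).
    phase_err = np.angle(original * np.conj(reconstructed))

    # Entropy of the reconstructed I-channel histogram, in bits per sample.
    counts, _ = np.histogram(reconstructed.real, bins=2 ** n_bits_out)
    p = counts[counts > 0] / counts.sum()

    return {
        "compression_ratio": n_bits_in / n_bits_out,   # fixed-rate assumption
        "sqnr_db": sqnr_db,
        "rms_phase_error_deg": np.degrees(np.sqrt(np.mean(phase_err ** 2))),
        "entropy_bits": -np.sum(p * np.log2(p)),
        "skewness_I": skew(reconstructed.real),
        "kurtosis_I": kurtosis(reconstructed.real),
    }
```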
An assessment of technology alternatives for telecommunications and information management for the space exploration initiative
On the 20th anniversary of the Apollo 11 lunar landing, President Bush set forth ambitious goals for expanding human presence in the solar system. The Space Exploration Initiative (SEI) addresses these goals beginning with Space Station Freedom, followed by a permanent return to the Moon, and a manned mission to Mars. A well-designed, adaptive Telecommunications, Navigation, and Information Management (TNIM) infrastructure is vital to the success of these missions. Utilizing initial projections of user requirements, a team under the direction of NASA's Office of Space Operations developed overall architectures and point designs to implement the TNIM functions for the Lunar and Mars mission scenarios. Based on these designs, an assessment of technology alternatives for the telecommunications and information management functions was performed. This technology assessment identifies the technology developments necessary to meet the telecommunications and information management system requirements for SEI. Technology requirements, technology needs and alternatives, the present level of technology readiness in each area, and a schedule for development are presented.
Advanced methods and deep learning for video and satellite data compression
The abstract is in the attachment.
Performance-Optimized Quantization for SAR and InSAR Applications
For the design of present and next-generation spaceborne SAR missions, constantly increasing data rates are being demanded, which impose stringent requirements in terms of onboard memory and downlink capacity. In this scenario, the efficient quantization of SAR raw data is of primary importance, since the utilized compression rate is directly related to the volume of data to be stored and transmitted to the ground and, at the same time, affects the resulting SAR imaging performance. In this article, we introduce performance-optimized block-adaptive quantization (PO-BAQ), a novel approach for SAR raw data compression that aims at optimizing the resource allocation and, at the same time, the quality of the resulting SAR and InSAR products. This goal is achieved by exploiting a priori knowledge of the local SAR backscatter statistics, which allows for the generation of high-resolution bitrate maps that can be employed to fulfill a predefined performance requirement. Analyses of experimental TanDEM-X interferometric data are presented, which demonstrate the potential of the proposed method as a helpful tool for performance budget definition and data rate optimization of present and future SAR missions.
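PO-BAQ builds on the classic block-adaptive quantiser (BAQ), in which each block of raw samples is normalised by a local statistic before fixed-rate quantisation. The sketch below is a plain uniform-quantiser BAQ baseline for a single real-valued channel; the paper's contribution of deriving per-block bitrates from a priori backscatter maps is not reproduced here, and all names and parameters are assumptions.

```python
import numpy as np

def baq_compress(raw, block_len=128, n_bits=3):
    """Classic BAQ of one real-valued raw SAR channel: each block is
    normalised by its own standard deviation, then quantised with a
    uniform mid-rise quantiser covering roughly +/-3 sigma. The per-block
    scale is kept as side information for the decoder."""
    n_levels = 2 ** n_bits
    step = 6.0 / n_levels
    usable = len(raw) // block_len * block_len
    blocks = raw[:usable].reshape(-1, block_len)
    scales = blocks.std(axis=1, keepdims=True) + 1e-12
    codes = np.clip(np.floor(blocks / scales / step) + n_levels // 2,
                    0, n_levels - 1).astype(np.uint8)
    return codes, scales

def baq_decompress(codes, scales, n_bits=3):
    n_levels = 2 ** n_bits
    step = 6.0 / n_levels
    # Reconstruct at the centre of each quantisation cell, then rescale.
    return (codes.astype(np.float32) - n_levels // 2 + 0.5) * step * scales
```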
A Framework for Evaluating Security in the Presence of Signal Injection Attacks
Sensors are embedded in security-critical applications from medical devices
to nuclear power plants, but their outputs can be spoofed through
electromagnetic and other types of signals transmitted by attackers at a
distance. To address the lack of a unifying framework for evaluating the
effects of such transmissions, we introduce a system and threat model for
signal injection attacks. We further define the concepts of existential,
selective, and universal security, which address attacker goals from mere
disruptions of the sensor readings to precise waveform injections. Moreover, we
introduce an algorithm which allows circuit designers to concretely calculate
the security level of real systems. Finally, we apply our definitions and
algorithm in practice using measurements of injections against a smartphone
microphone, and analyze the demodulation characteristics of commercial
Analog-to-Digital Converters (ADCs). Overall, our work highlights the
importance of evaluating the susceptibility of systems against signal injection
attacks, and introduces both the terminology and the methodology to do so.
Comment: This article is the extended technical report version of the paper presented at ESORICS 2019, the 24th European Symposium on Research in Computer Security, Luxembourg, Luxembourg, September 2019.
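As a toy illustration of the demodulation effect the paper measures (all parameters below are assumed, not taken from the paper), the following simulates a 1 kHz payload amplitude-modulated onto a 500 kHz carrier and recovered in the audible band through a weakly nonlinear ADC front end:

```python
import numpy as np

# A 1 kHz payload amplitude-modulates a 500 kHz carrier, far above the
# microphone passband; a small second-order nonlinearity in the ADC front
# end demodulates it back into the audible band after sampling.
fs_analog = 4_410_000                      # dense grid standing in for continuous time
t = np.arange(0, 0.02, 1 / fs_analog)
payload = np.sin(2 * np.pi * 1_000 * t)    # attacker's intended waveform
injected = (1 + 0.8 * payload) * np.sin(2 * np.pi * 500_000 * t)

a2 = 0.05                                  # assumed nonlinearity coefficient
front_end = injected + a2 * injected ** 2  # weakly nonlinear transfer curve

sampled = front_end[::100]                 # ideal 44.1 kHz sampler
spectrum = np.abs(np.fft.rfft(sampled))
freqs = np.fft.rfftfreq(len(sampled), d=1 / 44_100)

# A real front end would low-pass away the aliased carrier near 15 kHz,
# so we search only the sub-5 kHz band the attacker is targeting.
band = (freqs > 100) & (freqs < 5_000)
print(f"dominant injected tone: {freqs[band][np.argmax(spectrum[band])]:.0f} Hz")  # ~1000
```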
Proceedings of the Scientific Data Compression Workshop
Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates cause stresses in all aspects of end-to-end data systems. Technologies and techniques are needed to relieve such stresses. Potential solutions to the massive data rate problems are: data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.
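For scale, a sustained 1500 Mbit/s aggregate implies on the order of 16 TB of raw data per day, as this quick calculation shows:

```python
rate_bps = 1_500e6                  # 1500 Mbit/s aggregate sensor output
bytes_per_day = rate_bps / 8 * 86_400
print(f"{bytes_per_day / 1e12:.1f} TB/day")   # ~16.2 TB of raw data, every day
```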
Space and Earth Science Data Compression Workshop
The workshop explored opportunities for data compression to enhance the collection and analysis of space and Earth science data. The focus was on scientists' data requirements, as well as constraints imposed by the data collection, transmission, distribution, and archival systems. The workshop consisted of several invited papers: two described information systems for space and Earth science data, four depicted analysis scenarios for extracting information of scientific interest from data collected by Earth-orbiting and deep space platforms, and a final one was a general tutorial on image data compression.
The model of an anomaly detector for HiLumi LHC magnets based on Recurrent Neural Networks and adaptive quantization
This paper focuses on an examination of the applicability of Recurrent Neural
Network models for detecting anomalous behavior of the CERN superconducting
magnets. In order to conduct the experiments, the authors designed and
implemented an adaptive signal quantization algorithm and a custom GRU-based
detector, and developed a method for selecting the detector parameters. Three
different datasets were used for testing the detector. Two artificially
generated datasets were used to assess the raw performance of the system
whereas the 231 MB dataset composed of the signals acquired from HiLumi magnets
was intended for real-life experiments and model training. Several different
setups of the developed anomaly detection system were evaluated and compared
with state-of-the-art OC-SVM reference model operating on the same data. The
OC-SVM model was equipped with a rich set of feature extractors accounting for
a range of the input signal properties. It was determined in the course of the
experiments that the detector, along with its supporting design methodology,
reaches an F1 score equal to or very close to 1 for almost all test sets. Due to the
profile of the data, the best_length setup of the detector turned out to
perform the best among all five tested configuration schemes of the detection
system. The quantization parameters have the biggest impact on the overall
performance of the detector with the best values of input/output grid equal to
16 and 8, respectively. The proposed detection solution significantly
outperformed the OC-SVM-based detector in most cases, with much more stable
performance across all the datasets.
Comment: Related to arXiv:1702.0083
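As a minimal sketch of the kind of detector the paper describes (a quantisation front end feeding a GRU), the following assumes a 16-level input grid and an 8-level output grid, matching the best values reported above; the class names, hidden size, and uniform quantiser are illustrative assumptions, since the paper's adaptive quantisation algorithm is not reproduced here.

```python
import torch
import torch.nn as nn

class GRUAnomalyDetector(nn.Module):
    """Predicts the next quantised signal value; large prediction errors
    are treated as anomalies. Sizes are illustrative assumptions."""

    def __init__(self, in_levels=16, out_levels=8, emb_dim=16, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(in_levels, emb_dim)   # input grid -> vectors
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, out_levels)       # logits on output grid

    def forward(self, codes):            # codes: (batch, time) int64
        h, _ = self.gru(self.embed(codes))
        return self.head(h)              # (batch, time, out_levels)

def quantise(signal, levels):
    """Uniform quantisation onto `levels` grid points. The paper uses an
    adaptive quantisation algorithm; a fixed uniform grid is assumed here."""
    lo, hi = signal.min(), signal.max()
    return ((signal - lo) / (hi - lo + 1e-12) * (levels - 1)).round().long()
```

Training would minimise cross-entropy between the logits at step t and the quantised value at step t+1; at inference, a persistently large prediction error (low likelihood of the observed codes) would flag anomalous magnet behaviour.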
Learning from Synthetic Humans
Estimating human pose, shape, and motion from images and videos is a
fundamental challenge with many applications. Recent advances in 2D human pose
estimation use large amounts of manually-labeled training data for learning
convolutional neural networks (CNNs). Such data is time-consuming to acquire
and difficult to extend. Moreover, manual labeling of 3D pose, depth and motion
is impractical. In this work we present SURREAL (Synthetic hUmans foR REAL
tasks): a new large-scale dataset with synthetically-generated but realistic
images of people rendered from 3D sequences of human motion capture data. We
generate more than 6 million frames together with ground truth pose, depth
maps, and segmentation masks. We show that CNNs trained on our synthetic
dataset allow for accurate human depth estimation and human part segmentation
in real RGB images. Our results and the new dataset open up new possibilities
for advancing person analysis using cheap and large-scale synthetic data.
Comment: Appears in 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2017). 9 pages