NASA Tech Briefs, September 2010
Topics covered include: Instrument for Measuring Thermal Conductivity of Materials at Low Temperatures; Multi-Axis Accelerometer Calibration System; Pupil Alignment Measuring Technique and Alignment Reference for Instruments or Optical Systems; Autonomous System for Monitoring the Integrity of Composite Fan Housings; A Safe, Self-Calibrating, Wireless System for Measuring Volume of Any Fuel at Non-Horizontal Orientation; Adaptation of the Camera Link Interface for Flight-Instrument Applications; High-Performance CCSDS Encapsulation Service Implementation in FPGA; High-Performance CCSDS AOS Protocol Implementation in FPGA; Advanced Flip Chips in Extreme Temperature Environments; Diffuse-Illumination Systems for Growing Plants; Microwave Plasma Hydrogen Recovery System; Producing Hydrogen by Plasma Pyrolysis of Methane; Self-Deployable Membrane Structures; Reactivation of a Tin-Oxide-Containing Catalyst; Functionalization of Single-Wall Carbon Nanotubes by Photo-Oxidation; Miniature Piezoelectric Macro-Mass Balance; Acoustic Liner for Turbomachinery Applications; Metering Gas Strut for Separating Rocket Stages; Large-Flow-Area Flow-Selective Liquid/Gas Separator; Counterflowing Jet Subsystem Design; Water Tank with Capillary Air/Liquid Separation; True Shear Parallel Plate Viscometer; Focusing Diffraction Grating Element with Aberration Control; Universal Millimeter-Wave Radar Front End; Mode Selection for a Single-Frequency Fiber Laser; Qualification and Selection of Flight Diode Lasers for Space Applications; Plenoptic Imager for Automated Surface Navigation; Maglev Facility for Simulating Variable Gravity; Hybrid AlGaN-SiC Avalanche Photodiode for Deep-UV Photon Detection; High-Speed Operation of Interband Cascade Lasers; 3D GeoWall Analysis System for Shuttle External Tank Foreign Object Debris Events; Charge-Spot Model for Electrostatic Forces in Simulation of Fine Particulates; Hidden Statistics Approach to Quantum Simulations; Reconstituted Three-Dimensional Interactive Imaging; Determining Atmospheric-Density Profile of Titan; Digital Microfluidics Sample Analyzer; Radiation Protection Using Carbon Nanotube Derivatives; Process to Selectively Distinguish Viable from Non-Viable Bacterial Cells; and TEAMS Model Analyzer
Channel Detection and Decoding With Deep Learning
In this thesis, we investigate the design of pragmatic data detectors and channel decoders with the assistance of deep learning. We focus on three emerging and fundamental research problems: the design of message-passing algorithms for data detection in faster-than-Nyquist (FTN) signalling, soft-decision decoding algorithms for high-density parity-check codes, and user identification for massive machine-type communications (mMTC). These wireless communication problems are addressed through deep learning, and an outline of the main contributions is given below.
In the first part, we study a deep learning-assisted sum-product detection algorithm for FTN signalling. The proposed data detection algorithm works on a modified factor graph that concatenates a neural network function node to the variable nodes of the conventional FTN factor graph, to compensate for any detrimental effects that degrade the detection performance. By investigating the maximum-likelihood bit-error-rate performance of a finite-length coded FTN system, we show that the error performance of the proposed algorithm approaches the maximum a posteriori performance, which may not be approachable by employing the sum-product algorithm on the conventional FTN factor graph.
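As a rough illustration of attaching a neural function node to the variable nodes of a factor graph, the sketch below adds an MLP-computed compensation term to a conventional sum-of-LLRs variable-node update. The network shape and random weights are purely hypothetical; this is not the thesis's trained model, only the structure of the idea.

```python
import numpy as np

rng = np.random.default_rng(0)

def nn_correction(llr, W1, b1, W2, b2):
    # hypothetical small MLP acting as the extra function node: it maps
    # the vector of channel LLRs to one compensation term per variable node
    h = np.tanh(llr @ W1 + b1)
    return h @ W2 + b2

def variable_node_update(channel_llr, incoming, W1, b1, W2, b2):
    # conventional sum-product variable-node rule (channel LLR plus the
    # messages from neighbouring factor nodes), augmented by the learned
    # compensation message from the concatenated neural function node
    corr = nn_correction(channel_llr, W1, b1, W2, b2)
    return channel_llr + incoming.sum(axis=0) + corr

n = 8                                  # block of variable nodes
channel_llr = rng.normal(size=n)       # LLRs from the channel observation
incoming = rng.normal(size=(3, n))     # messages from 3 factor nodes
W1 = rng.normal(size=(n, 16)); b1 = np.zeros(16)
W2 = rng.normal(size=(16, n)); b2 = np.zeros(n)

beliefs = variable_node_update(channel_llr, incoming, W1, b1, W2, b2)
bits = (beliefs < 0).astype(int)       # hard decision from the beliefs
```

In a real system the MLP weights would be trained end-to-end so that the compensation term counteracts the intersymbol interference that the conventional factor graph models imperfectly.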
After investigating the deep learning-assisted message-passing algorithm for data detection, we move to the design of an efficient channel decoder. Specifically, we propose a node-classified redundant decoding algorithm for Bose-Chaudhuri-Hocquenghem (BCH) codes, based on the channel reliability of the received sequence. Two preprocessing steps are proposed prior to decoding, to mitigate unreliable information propagation and to improve the decoding performance. On top of the preprocessing, we propose a list decoding algorithm to further augment the decoder's performance. Moreover, we show that the node-classified redundant decoding algorithm can be transformed into a neural network framework, where multiplicative tuneable weights are attached to the decoding messages to optimise the decoding performance. We show that the node-classified redundant decoding algorithm provides a performance gain over the random redundant decoding algorithm, and that additional gains can be obtained by both the list decoding method and the neural network “learned” node-classified redundant decoding algorithm.
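The "learned" decoding idea of attaching multiplicative tuneable weights to decoding messages can be sketched on the check-node update of a min-sum decoder. This is a generic weighted-min-sum illustration, not the thesis's node-classified algorithm, and the message values and weights below are invented.

```python
import numpy as np

def weighted_min_sum_check(messages, weights):
    # check-node update of a min-sum decoder with a multiplicative
    # tuneable weight per outgoing edge, in the style of "learned" BP
    # decoders: each output excludes the corresponding input (extrinsic)
    out = np.empty_like(messages)
    for i in range(len(messages)):
        others = np.delete(messages, i)
        sign = np.prod(np.sign(others))
        out[i] = weights[i] * sign * np.min(np.abs(others))
    return out

msgs = np.array([1.2, -0.4, 2.0, -3.1])  # incoming variable->check LLRs
w = np.array([0.8, 0.9, 0.85, 0.8])      # illustrative learned weights
updated = weighted_min_sum_check(msgs, w)
```

Training adjusts the weights to damp overconfident messages, which is how such decoders recover part of the gap between min-sum/BP decoding and optimal decoding on dense graphs.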
Finally, we consider one of the practical services provided by fifth-generation (5G) wireless communication networks, mMTC. Two separate system models for mMTC are studied: the first assumes that the devices are equipped with low-resolution digital-to-analog converters; the second assumes that the devices' activities are correlated. In the first system model, two rounds of signal recovery are performed. A neural network is employed to identify the suspicious device that is most likely to be falsely alarmed during the first round of signal recovery, and that device is forced to be inactive in the second round. The proposed scheme effectively combats the interference caused by the suspicious device and thus improves the user identification performance. In the second system model, two deep learning-assisted algorithms are proposed to exploit the user activity correlation to facilitate channel estimation and user identification. We propose a deep learning-modified orthogonal approximate message passing algorithm to exploit the correlation structure among devices. In addition, we propose a neural network framework dedicated to user identification; more specifically, the neural network aims to minimise the missed detection probability under a pre-determined false alarm probability. The proposed algorithms substantially reduce the mean squared error between the estimate and the unknown sequence, and markedly improve the trade-off between the missed detection probability and the false alarm probability compared with the conventional orthogonal approximate message passing algorithm.
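The objective of minimising missed detection under a pre-determined false-alarm probability can be sketched as a Neyman-Pearson-style threshold choice on per-device activity scores. The score distributions below are synthetic stand-ins for a detector network's outputs, not data from the thesis.

```python
import numpy as np

rng = np.random.default_rng(1)

# hypothetical per-device activity scores (higher = more likely active)
inactive_scores = rng.normal(0.0, 1.0, size=10000)  # truly inactive devices
active_scores = rng.normal(4.0, 1.0, size=1000)     # truly active devices

target_pfa = 0.01
# pin the false-alarm probability at the target by taking the threshold
# as the (1 - target_pfa) quantile of the inactive-score distribution
thr = np.quantile(inactive_scores, 1.0 - target_pfa)

p_md = np.mean(active_scores <= thr)          # missed-detection probability
p_fa = np.mean(inactive_scores > thr)         # realised false-alarm rate
```

A trained network shapes the score distributions so that, for the same pinned false-alarm rate, the threshold misses as few truly active devices as possible.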
All three parts of this research demonstrate that deep learning is a powerful tool for the physical-layer design of wireless communications.
Intelligent Side Information Generation in Distributed Video Coding
Distributed video coding (DVC) reverses the traditional coding paradigm of complex encoders allied with basic decoders to one where the computational cost is largely incurred by the decoder. This is attractive because the theoretical work of Wyner-Ziv (WZ) and Slepian-Wolf (SW) shows that the performance of such a system should match that of a conventional coder. Despite the solid theoretical foundations, current DVC qualitative and quantitative performance falls short of existing conventional coders, and crucial limitations remain. A key constraint governing DVC performance is the quality of side information (SI), a coarse representation of the original video frames that are not available at the decoder. Techniques to generate SI have usually been based on linear motion-compensated temporal interpolation (LMCTI), though these do not always produce satisfactory SI quality, especially in sequences exhibiting non-linear motion.
This thesis presents an intelligent higher-order piecewise trajectory temporal interpolation (HOPTTI) framework for SI generation, with original contributions that afford better SI quality than existing LMCTI-based approaches. The major elements of this framework are: (i) a cubic trajectory interpolation algorithm that significantly improves the accuracy of motion vector estimation; (ii) an adaptive overlapped block motion compensation (AOBMC) model that reduces both the blocking and overlapping artefacts in the SI emanating from the block-matching algorithm; (iii) the development of an empirical mode-switching algorithm; and (iv) an intelligent switching mechanism that constructs the SI by automatically selecting the best macroblock from the intermediate SI generated by the HOPTTI and AOBMC algorithms. Rigorous analysis and evaluation confirm that significant quantitative and perceptual improvements in SI quality are achieved with the new framework.
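The advantage of higher-order trajectory interpolation over linear interpolation can be sketched by fitting a cubic through a block's centre positions in four reference frames and evaluating it at the side-information frame time. The positions and times below are invented for illustration; a real SI generator would do this per motion vector in two dimensions.

```python
import numpy as np

def cubic_trajectory_position(times, positions, t_interp):
    # fit a cubic through the block's centre positions in four reference
    # frames and evaluate it at the (intermediate) SI frame time; four
    # points determine the cubic exactly
    coeffs = np.polyfit(times, positions, deg=3)
    return np.polyval(coeffs, t_interp)

# hypothetical horizontal block positions (pixels) in frames t = 0..3,
# exhibiting accelerating (non-linear) motion
times = np.array([0.0, 1.0, 2.0, 3.0])
xs = np.array([10.0, 14.0, 20.0, 30.0])

x_cubic = cubic_trajectory_position(times, xs, 1.5)

# LMCTI-style linear interpolation between the two neighbouring frames
x_linear = 0.5 * (xs[1] + xs[2])
```

For accelerating motion the cubic and linear estimates diverge, which is precisely where LMCTI-based SI degrades and a higher-order trajectory model helps.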
Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography
The growth of data-driven technologies, 5G, and the Internet places enormous pressure on the underlying information infrastructure. There exist numerous proposals on how to deal with the possible capacity crunch. However, the security of both optical and wireless networks lags behind their reliable, spectrally efficient transmission. Meanwhile, significant advances have been made in quantum computing. Because most conventional cryptography systems rely on computational security, which guarantees security against an efficient eavesdropper only for a limited time, these systems can be compromised as quantum computing advances. To solve these problems, various schemes providing perfect/unconditional security have been proposed, including physical-layer security (PLS), quantum key distribution (QKD), and post-quantum cryptography. Unfortunately, it is still not clear how to integrate these different proposals with higher-level cryptography schemes. The purpose of the Special Issue entitled “Physical-Layer Security, Quantum Key Distribution and Post-quantum Cryptography” was therefore to integrate these various approaches and enable the next generation of cryptography systems whose security cannot be broken by quantum computers. This book is a reprint of the papers accepted for publication in the Special Issue.
The Interface Region Imaging Spectrograph (IRIS)
The Interface Region Imaging Spectrograph (IRIS) small explorer spacecraft
provides simultaneous spectra and images of the photosphere, chromosphere,
transition region, and corona with 0.33-0.4 arcsec spatial resolution, 2 s
temporal resolution and 1 km/s velocity resolution over a field-of-view of up
to 175 arcsec x 175 arcsec. IRIS was launched into a Sun-synchronous orbit on
27 June 2013 using a Pegasus-XL rocket and consists of a 19-cm UV telescope
that feeds a slit-based dual-bandpass imaging spectrograph. IRIS obtains
spectra in passbands from 1332-1358, 1389-1407 and 2783-2834 Angstrom including
bright spectral lines formed in the chromosphere (Mg II h 2803 Angstrom and Mg
II k 2796 Angstrom) and transition region (C II 1334/1335 Angstrom and Si IV
1394/1403 Angstrom). Slit-jaw images in four different passbands (C II 1330, Si
IV 1400, Mg II k 2796 and Mg II wing 2830 Angstrom) can be taken simultaneously
with spectral rasters that sample regions up to 130 arcsec x 175 arcsec at a
variety of spatial samplings (from 0.33 arcsec and up). IRIS is sensitive to
emission from plasma at temperatures between 5000 K and 10 MK and will advance
our understanding of the flow of mass and energy through an interface region,
formed by the chromosphere and transition region, between the photosphere and
corona. This highly structured and dynamic region not only acts as the conduit
of all mass and energy feeding into the corona and solar wind, it also requires
an order of magnitude more energy to heat than the corona and solar wind
combined. The IRIS investigation includes a strong numerical modeling component
based on advanced radiative-MHD codes to facilitate interpretation of
observations of this complex region. Approximately eight Gbytes of data (after
compression) are acquired by IRIS each day and made available for unrestricted
use within a few days of the observation.
Cross-Layer Optimization for Power-Efficient and Robust Digital Circuits and Systems
With the increasing demand for digital services, performance and power
efficiency become vital requirements for digital circuits and systems. However, the
enabling CMOS technology scaling has been facing significant challenges of
device uncertainties, such as process, voltage, and temperature variations. To
ensure system reliability, worst-case corner assumptions are usually made at
each design level. However, the over-pessimistic worst-case margin leads to
unnecessary power waste and performance loss as high as 2.2x. Since
optimizations are traditionally confined to each specific level, those safe
margins can hardly be properly exploited.
To tackle this challenge, this Ph.D. thesis proposes a cross-layer
optimization for digital signal processing circuits and systems, to achieve a
global balance between power consumption and output quality.
To conclude, the traditional over-pessimistic worst-case approach leads to
huge power waste. In contrast, the adaptive voltage scaling approach saves
power (25% for the CORDIC application) by providing a just-needed supply
voltage. The power saving is maximized (46% for CORDIC) when a more aggressive
voltage over-scaling scheme is applied. These sparsely occurring circuit errors
produced by aggressive voltage over-scaling are mitigated by higher-level
error-resilient designs. For functions like FFT and CORDIC, smart
error-mitigation schemes were proposed to enhance reliability against
soft errors and timing errors, respectively. Applications like Massive MIMO
systems are robust against lower
level errors, thanks to their intrinsically redundant antennas. This property
makes it feasible to embrace digital hardware that trades quality for power
savings.
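The kind of savings reported above follows from the quadratic dependence of dynamic CMOS power on supply voltage (P ≈ α·C·V²·f). The sketch below uses an illustrative supply voltage, not the thesis's measured 25%/46% CORDIC figures.

```python
# dynamic CMOS power scales roughly as P ≈ α·C·V²·f at fixed frequency,
# so lowering the supply from a worst-case margin to a just-needed level
# yields a quadratic power saving (illustrative numbers only)
def dynamic_power(v, v_nom=1.0, p_nom=1.0):
    # normalised dynamic power at supply voltage v
    return p_nom * (v / v_nom) ** 2

p_worst = dynamic_power(1.0)     # supply with worst-case corner margin
p_avs = dynamic_power(0.87)      # hypothetical adaptive, just-needed supply
saving = 1.0 - p_avs / p_worst   # fractional power saved by V^2 scaling
```

Over-scaling the voltage further increases the saving quadratically, at the cost of the sparse timing errors that the higher-level resilience schemes then have to absorb.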