Lightweight approximations for division, exponentiation, and logarithm
This disclosure describes low-complexity techniques for approximate division, approximate exponentiation, and approximate logarithm that are accurate to within roughly ten percent. Per the techniques, division, exponentiation, and logarithm are expressed in terms of bit-shift and addition operations, which are low-complexity operations. Division, exponentiation, and logarithm occur frequently in computing, e.g., in image-processing filters. The techniques serve to speed up computation and to reduce silicon area footprint in such compute-intensive applications.
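The disclosure above gives no concrete formulas, but the classic way to realize such approximations reads a float's IEEE-754 bit pattern as a fixed-point approximation of its base-2 logarithm, so division and exponentiation reduce to additions in the log domain. The sketch below is illustrative; the function names and the specific bit trick are assumptions, not taken from the disclosure:

```python
import struct

def approx_log2(x: float) -> float:
    """Approximate log2(x) for x > 0 from the IEEE-754 bit pattern of x.

    Reinterpreting a float32's bits as an integer gives roughly
    (log2(x) + 127) * 2**23, so one shift and one subtraction
    recover an approximate log2(x).
    """
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    return bits / (1 << 23) - 127.0

def approx_exp2(y: float) -> float:
    """Inverse of approx_log2: build a float whose log2 is about y."""
    bits = int((y + 127.0) * (1 << 23))
    return struct.unpack('<f', struct.pack('<I', bits))[0]

def approx_div(a: float, b: float) -> float:
    """Approximate a / b as 2**(log2 a - log2 b)."""
    return approx_exp2(approx_log2(a) - approx_log2(b))

def approx_pow(x: float, p: float) -> float:
    """Approximate x**p as 2**(p * log2 x)."""
    return approx_exp2(p * approx_log2(x))
```

The error comes from approximating log2(1 + m) by the mantissa m, which is off by at most about 0.086 in the log domain, keeping the relative error of the composed operations on the order of the ten percent figure quoted above.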
Energy efficient and low complexity techniques for the next generation millimeter wave hybrid MIMO systems
The fifth generation (and beyond) wireless communication systems require increased
capacity, high data rates, improved coverage and reduced energy consumption.
These demands can potentially be met by unused spectrum such as
the Millimeter Wave (mmWave) frequencies above 30 GHz. The high
bandwidths for mmWave communication compared to sub-6 GHz microwave frequency
bands must be traded off against increased path loss, which can be compensated
for by using large-scale antenna arrays such as Multiple-Input Multiple-Output
(MIMO) systems. The analog/digital Hybrid Beamforming (HBF) architectures
for mmWave MIMO systems reduce the hardware complexity and power
consumption using fewer Radio Frequency (RF) chains and support multi-stream
communication with high Spectral Efficiency (SE). Such systems can also be
optimized to achieve high Energy Efficiency (EE) gains with low complexity, but
this has not been widely studied in the literature. This PhD project focussed on
designing energy efficient and low complexity communication techniques for next
generation mmWave hybrid MIMO systems.
Firstly, a novel architecture with a framework that dynamically activates the
optimal number of RF chains was designed. Fractional programming was used
to solve an EE maximization problem and the Dinkelbach Method (DM) based
framework was exploited to optimize the number of active RF chains and the data
streams. The DM is an iterative, parametric algorithm in which a sequence of
easier problems converges to the global solution. The HBF matrices were designed
using gradient pursuit, a codebook-based algorithm introduced here as a fast and
cost-effective approximation. This work maximizes EE by exploiting the structure
of the RF chains with full-resolution sampling, unlike existing baseline
approaches that use a fixed number of RF chains and aim only for high SE.
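The Dinkelbach iteration described above can be sketched generically. The rate and power models below are illustrative placeholders, not taken from the thesis; only the iteration structure (solve an easier parametric subproblem, update the ratio parameter) is the point:

```python
import math

def dinkelbach_max_ratio(candidates, rate, power, tol=1e-9, max_iter=100):
    """Maximize rate(n)/power(n) over a finite candidate set via the DM.

    Each iteration solves the easier parametric subproblem
        max_n  rate(n) - lam * power(n)
    and updates lam to the ratio at the maximizer; the sequence of
    subproblem optima converges to the global optimum of the ratio.
    """
    candidates = list(candidates)
    lam = 0.0
    n_star = candidates[0]
    for _ in range(max_iter):
        n_star = max(candidates, key=lambda n: rate(n) - lam * power(n))
        f_val = rate(n_star) - lam * power(n_star)
        lam = rate(n_star) / power(n_star)
        if f_val < tol:   # parametric objective has reached ~0: optimal
            break
    return n_star, lam

# Illustrative models (not from the thesis): spectral efficiency grows
# logarithmically with the number of active RF chains, power linearly.
rate = lambda n: math.log2(1 + 10 * n)   # bits/s/Hz
power = lambda n: 1.0 + 0.5 * n          # watts
n_opt, ee = dinkelbach_max_ratio(range(1, 9), rate, power)
```

With these toy models the optimum activates a single RF chain, illustrating how the DM trades rate against power rather than maximizing SE alone.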
Secondly, an efficient sparse mmWave channel estimation algorithm was developed
with low resolution Analog-to-Digital Converters (ADCs) at the receiver.
The sparsity of the mmWave channel was exploited, and the estimation problem
was tackled using compressed sensing with a parametric denoiser based on
Stein's unbiased risk estimate. Expectation-maximization density estimation
was used to avoid the need to specify the channel statistics. Furthermore, an
energy efficient mmWave hybrid MIMO system was developed with Digital-to-
Analog Converters (DACs) at the transmitter where the best subset of the active
RF chains and the DAC resolution were selected. A novel technique based on the
DM and subset selection optimization was implemented for EE maximization.
This work exploits low-resolution sampling at the converters and provides
solutions that are more efficient in terms of EE and channel estimation than
existing baselines in the literature.
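The SURE-based denoiser and EM density estimation are beyond a short snippet, but the underlying compressed-sensing idea, recovering a sparse angular-domain channel from few measurements, can be sketched with plain orthogonal matching pursuit. This is a deliberately simpler recovery algorithm than the one used in the thesis, and all sizes below are illustrative:

```python
import numpy as np

def omp(A, y, sparsity):
    """Orthogonal matching pursuit: recover a `sparsity`-sparse x with y ≈ A x."""
    residual = y.astype(complex)
    support = []
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(sparsity):
        # Greedily pick the dictionary column most correlated with the residual.
        support.append(int(np.argmax(np.abs(A.conj().T @ residual))))
        # Least-squares refit on the chosen support, then update the residual.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ x_s
    x[support] = x_s
    return x

# Toy setup: a 3-path sparse angular-domain channel observed through a
# random measurement matrix (noiseless for clarity).
rng = np.random.default_rng(0)
m, n, k = 48, 64, 3                      # measurements, angular bins, paths
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[[5, 20, 41]] = [1.0, -0.7, 0.5]
y = A @ x_true
x_hat = omp(A, y, k)
```

The sparsity of the angular-domain channel is what lets m = 48 measurements recover an n = 64 dimensional vector here.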
Thirdly, the DAC and ADC bit resolutions and the HBF matrices were jointly
optimized for EE maximization. The flexibility to choose the bit resolution
of each DAC and ADC individually was considered, and the resolutions were
optimized on a frame-by-frame basis, unlike existing approaches that rely on
fixed-resolution sampling.
A novel decomposition of the HBF matrices into three parts was introduced to
represent the analog beamformer matrix, the DAC/ADC bit resolution matrix and
the baseband beamformer matrix. The alternating direction method of multipliers
was used to solve this matrix factorization problem as it has been successfully
applied to other non-convex matrix factorization problems in the literature. This
work considers EE maximization with low resolution sampling at both the DACs
and the ADCs simultaneously, and jointly optimizes the HBF and DAC/ADC bit
resolution matrices, unlike the existing baselines that use fixed bit resolution or
otherwise optimize either the DAC/ADC bit resolution or the HBF matrices, but
not both.
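The alternating direction method of multipliers itself is standard; as a minimal illustration of its splitting structure, here it is applied to the classic convex lasso problem rather than the thesis' non-convex HBF factorization (all data and parameters below are illustrative):

```python
import numpy as np

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iter=200):
    """Solve min_x 0.5*||Ax - b||^2 + lam*||x||_1 with scaled-form ADMM.

    The split: x carries the smooth quadratic term, z carries the
    non-smooth l1 term, and the scaled dual u enforces x = z.
    """
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    AtA_rhoI = A.T @ A + rho * np.eye(n)   # x-update is a ridge solve
    Atb = A.T @ b
    for _ in range(n_iter):
        x = np.linalg.solve(AtA_rhoI, Atb + rho * (z - u))
        v = x + u
        z = np.sign(v) * np.maximum(np.abs(v) - lam / rho, 0.0)  # soft-threshold
        u = u + x - z
    return z

# Illustrative sparse recovery instance (sizes and data are arbitrary).
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 10))
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]
b = A @ x_true
x_hat = admm_lasso(A, b)
```

The appeal for the thesis' problem is the same as here: each subproblem (a linear solve, a proximal step, a dual ascent) is easy even when the joint problem is not.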
A Multistage Method for SCMA Codebook Design Based on MDS Codes
Sparse Code Multiple Access (SCMA) has been recently proposed for the future
generation of wireless communication standards. SCMA system design involves
specifying several parameters. In order to simplify the procedure, most works
consider a multistage design approach. Two main stages are usually emphasized
in these methods: sparse signature design (equivalently, resource allocation)
and codebook design. In this paper, we present a novel SCMA codebook design
method. The proposed method considers SCMA codebooks structured with an
underlying vector space obtained from classical block codes. In particular,
when using maximum distance separable (MDS) codes, our proposed design provides
maximum signal-space diversity with a relatively small alphabet. The use of
small alphabets also helps to maintain desired properties in the codebooks,
such as low peak-to-average power ratio and low-complexity detection.
Comment: Submitted to IEEE Wireless Communications Letters
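As a toy illustration of the signal-space-diversity argument (not the paper's actual construction): a single parity-check code over Z_4 is MDS with minimum distance n - k + 1 = 2, so any two codewords differ in at least two coordinates, giving diversity order 2 with a small alphabet once each coordinate is mapped to a QPSK symbol:

```python
import itertools, cmath

q, n, k = 4, 3, 2
# Single parity-check code over Z_4: codewords (a, b, -(a+b) mod 4).
# It is MDS, so its minimum Hamming distance is n - k + 1 = 2.
codebook = [(a, b, (-a - b) % q)
            for a, b in itertools.product(range(q), repeat=k)]

def hamming(c1, c2):
    return sum(s1 != s2 for s1, s2 in zip(c1, c2))

d_min = min(hamming(c1, c2)
            for c1, c2 in itertools.combinations(codebook, 2))

# Mapping each Z_4 symbol to a QPSK point keeps the alphabet small; any
# two transmitted codewords then differ in at least d_min complex
# dimensions, i.e. the signal-space diversity equals d_min.
qpsk = [cmath.exp(1j * cmath.pi * (2 * s + 1) / 4) for s in range(q)]
signals = [[qpsk[s] for s in c] for c in codebook]
```

A longer MDS code (e.g. Reed-Solomon, as the paper's framework allows) raises the diversity order to n - k + 1 while the per-coordinate alphabet stays small.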
Low-Density Code-Domain NOMA: Better Be Regular
A closed-form analytical expression is derived for the limiting empirical
squared singular value density of a spreading (signature) matrix corresponding
to sparse low-density code-domain (LDCD) non-orthogonal multiple-access (NOMA)
with regular random user-resource allocation. The derivation relies on
associating the spreading matrix with the adjacency matrix of a large
semiregular bipartite graph. For a simple repetition-based sparse spreading
scheme, the result directly follows from a rigorous analysis of spectral
measures of infinite graphs. Turning to random (sparse) binary spreading, we
harness the cavity method from statistical physics, and show that the limiting
spectral density coincides in both cases. Next, we use this density to compute
the normalized input-output mutual information of the underlying vector channel
in the large-system limit. The latter may be interpreted as the achievable
total throughput per dimension with optimum processing in a corresponding
multiple-access channel setting or, alternatively, in a fully-symmetric
broadcast channel setting with full decoding capabilities at each receiver.
Surprisingly, the total throughput of regular LDCD-NOMA is found to be not only
superior to that achieved with irregular user-resource allocation, but also to
the total throughput of dense randomly-spread NOMA, for which optimum
processing is computationally intractable. In contrast, the superior
performance of regular LDCD-NOMA can be potentially achieved with a feasible
message-passing algorithm. This observation may advocate employing regular,
rather than irregular, LDCD-NOMA in 5G cellular physical layer design.
Comment: Accepted for publication in the IEEE International Symposium on
Information Theory (ISIT), June 201
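The analytical density requires the cavity-method machinery, but the object it describes is easy to simulate: a sparse spreading matrix where each of K users occupies exactly d of N resources, viewed as the biadjacency matrix of a sparse bipartite user-resource graph. The sketch below (illustrative sizes; column-regular only, whereas the paper's regular scheme fixes the degree on both sides) exhibits the matrix and its empirical squared singular values:

```python
import numpy as np

rng = np.random.default_rng(7)
N, K, d = 128, 192, 3   # resources, users, spreading degree (load K/N = 1.5)

# Column-regular random allocation: every user (column) occupies exactly
# d resources; each unit-energy signature is a column of the biadjacency
# matrix of a sparse bipartite user-resource graph.
S = np.zeros((N, K))
for user in range(K):
    S[rng.choice(N, size=d, replace=False), user] = 1.0 / np.sqrt(d)

sq_singular = np.linalg.svd(S, compute_uv=False) ** 2  # empirical spectrum

col_degrees = (S > 0).sum(axis=0)   # every user spreads over exactly d resources
mean_sq = sq_singular.sum() / N     # Frobenius identity: equals the load K/N
```

The histogram of `sq_singular` is the finite-size counterpart of the limiting density derived in the paper; the mean squared singular value equals the load K/N exactly, by the Frobenius-norm identity.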
Turbo-like Iterative Multi-user Receiver Design for 5G Non-orthogonal Multiple Access
Non-orthogonal multiple access (NoMA), an efficient way of sharing radio
resources, has been identified as a promising technology for 5G to help
improve system capacity, user connectivity, and service latency.
This paper provides a brief overview of the progress of NoMA transceiver study
in 3GPP, with special focus on the design of turbo-like iterative multi-user
(MU) receivers. There are various types of MU receivers depending on the
combinations of MU detectors and interference cancellation (IC) schemes.
Link-level simulations show that expectation propagation algorithm (EPA) with
hybrid parallel interference cancellation (PIC) is a promising MU receiver,
which can achieve fast convergence and similar performance as message passing
algorithm (MPA) with much lower complexity.
Comment: Accepted by the IEEE 88th Vehicular Technology Conference (IEEE
VTC-2018 Fall), 5 pages, 6 figures
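EPA and MPA detectors are too involved for a snippet, but the interference-cancellation loop they plug into can be sketched for a toy two-user BPSK multiple-access channel with hard-decision parallel IC. This is far simpler than the 3GPP receivers discussed above, and the channel model and sizes are illustrative:

```python
import numpy as np

def pic_receiver(y, H, n_iter=5):
    """Hard-decision parallel interference cancellation for BPSK users.

    Each pass re-detects every user after subtracting the current
    estimates of all *other* users' contributions, in parallel.
    """
    n_users = H.shape[1]
    x_hat = np.zeros(n_users)               # first pass: no cancellation
    for _ in range(n_iter):
        x_new = np.empty(n_users)
        for k in range(n_users):
            interference = H @ x_hat - H[:, k] * x_hat[k]
            r = y - interference            # cancel the other users
            x_new[k] = 1.0 if H[:, k] @ r >= 0 else -1.0  # MF + hard decision
        x_hat = x_new
    return x_hat

# Toy two-user multiple-access channel (illustrative sizes and noise level).
rng = np.random.default_rng(3)
H = rng.standard_normal((16, 2))            # per-user channel signatures
x = np.array([1.0, -1.0])                   # transmitted BPSK symbols
y = H @ x + 0.05 * rng.standard_normal(16)
x_hat = pic_receiver(y, H)
```

The turbo-like receivers in the paper replace the hard decisions here with soft symbol estimates exchanged between the MU detector and the channel decoder across iterations.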
Low complexity in-loop perceptual video coding
Broadcast video is today complemented by user-generated content, as portable devices support video coding. Similarly, computing is becoming ubiquitous, with the Internet of Things (IoT) incorporating heterogeneous networks to communicate with personal and/or infrastructure devices. In both cases the emphasis is on bandwidth and processor efficiency, which means increasing the signalling options in video encoding. Consequently, assessment of pixel differences applies a uniform cost so as to be processor efficient; in contrast, the Human Visual System (HVS) has non-uniform sensitivity that depends on lighting, edges and textures. Existing perceptual assessments are natively incompatible and processor demanding, making perceptual video coding (PVC) unsuitable for these environments. This research enables perceptual assessment at the native level using low-complexity techniques, before producing new pixel-based image quality assessments (IQAs). To manage these IQAs, a framework was developed and implemented in the High Efficiency Video Coding (HEVC) encoder. This resulted in bit redistribution, where more bits and smaller partitioning were allocated to perceptually significant regions. Using an HEVC-optimised processor, the timing increase was < +4% and < +6% for video streaming and recording applications respectively, one third of that of an existing low-complexity PVC solution. Future work should be directed towards perceptual quantisation, which offers the potential for perceptual coding gain.
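The thesis' IQA framework is not reproduced here; the sketch below only illustrates the general idea of cheaply estimating per-block perceptual significance (texture masking via local variance) and nudging quantization accordingly. The thresholds, block size, and QP offsets are hypothetical, not taken from the thesis:

```python
import numpy as np

def perceptual_qp_offsets(frame, block=16, base_qp=32):
    """Assign per-block QP offsets from a cheap activity measure.

    Flat blocks (where the HVS notices coding errors) get a lower QP,
    i.e. more bits; busy textured blocks (where errors are masked) get
    a higher QP. The variance thresholds are illustrative only.
    """
    h, w = frame.shape
    qp = np.full((h // block, w // block), base_qp)
    for i in range(h // block):
        for j in range(w // block):
            blk = frame[i * block:(i + 1) * block, j * block:(j + 1) * block]
            activity = blk.var()            # low-complexity texture estimate
            if activity < 25:
                qp[i, j] -= 2               # flat: protect, spend more bits
            elif activity > 400:
                qp[i, j] += 2               # textured: errors are masked
    return qp

# Synthetic frame: flat left half, heavily textured (noisy) right half.
rng = np.random.default_rng(5)
frame = np.full((64, 64), 128.0)
frame[:, 32:] += rng.normal(0, 30, (64, 32))
qp = perceptual_qp_offsets(frame)
```

A per-block variance costs one pass over the pixels, which is the kind of low-complexity, natively compatible assessment the abstract argues for, in contrast to heavyweight perceptual metrics.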