On the Convergence Speed of Turbo Demodulation with Turbo Decoding
Iterative processing is widely adopted nowadays in modern wireless receivers
for advanced channel codes like turbo and LDPC codes. Extension of this
principle with an additional iterative feedback loop to the demapping function
has proven to provide substantial error performance gains. However, the adoption
of iterative demodulation with turbo decoding is constrained by the additional
implementation complexity it implies, which heavily impacts latency and power
consumption. In this paper, we analyze the convergence speed of these combined
two iterative processes in order to determine the exact required number of
iterations at each level. Extrinsic information transfer (EXIT) charts are used
for a thorough analysis at different modulation orders and code rates. An
original iteration scheduling is proposed that eliminates two demapping
iterations at a modest performance loss of less than 0.15 dB. Analyzing and
normalizing the computational and memory access complexity, which directly
impact latency and power consumption, demonstrates the considerable gains of
the proposed scheduling and the practical value of the proposed analysis.
Comment: Submitted to IEEE Transactions on Signal Processing on April 27, 201
What's Cooking? – Cognitive Training of Executive Function in the Elderly
Executive function involves the efficient and adaptive engagement of the control processes of updating, shifting, and inhibition (Miyake, 2000) to guide behavior toward a goal. It is implicated in age-related decrements in many other cognitive functions (West, 1996; Raz, 2000) and is itself particularly vulnerable to the effects of aging (Treitz et al., 2007). Cognitive training in the form of structured experience with executive coordination demands has been shown to be an effective enhancement in the elderly (Hertzog et al., 2008). The current study was therefore aimed at the development and evaluation of a training regime for executive function in the elderly. The breakfast cooking task of Craik and Bialystok (2006) was adapted into a multitasking training task in a session (pre-test vs. post-test) by group (control vs. training) design. In the training condition, participants constantly switched, updated, and planned in order to control the cooking of several foods while concurrently performing a table-setting secondary task. Training gains were exhibited on task-related measures. Transfer effects were selectively observed on the letter–number sequencing and digit symbol coding tests. The cooking training produced a short-term increase in the efficiency of executive control processing. These effects are interpreted in terms of the process overlap between the training and transfer tasks.
Probabilistic Shaping for Finite Blocklengths: Distribution Matching and Sphere Shaping
In this paper, we provide for the first time a systematic comparison of
distribution matching (DM) and sphere shaping (SpSh) algorithms for short
blocklength probabilistic amplitude shaping. For asymptotically large
blocklengths, constant composition distribution matching (CCDM) is known to
generate the target capacity-achieving distribution. As the blocklength
decreases, however, the resulting rate loss diminishes the efficiency of CCDM.
We claim that for such short blocklengths and over the additive white Gaussian
noise (AWGN) channel, the objective of shaping should be reformulated as obtaining
the most energy-efficient signal space for a given rate (rather than matching
distributions). In light of this interpretation, multiset-partition DM (MPDM),
enumerative sphere shaping (ESS), and shell mapping (SM) are reviewed as
energy-efficient shaping techniques. Numerical results show that MPDM and SpSh
have smaller rate losses than CCDM. SpSh, whose sole objective is to maximize
energy efficiency, is shown to have the minimum rate loss among all. We
provide simulation results of the end-to-end decoding performance showing that
up to 1 dB improvement in power efficiency over uniform signaling can be
obtained with MPDM and SpSh at blocklengths around 200. Finally, we present a
discussion on the complexity of these algorithms from the perspective of
latency, storage and computations.
Comment: 18 pages, 10 figures
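The rate loss discussed above can be made concrete for a constant-composition code: a blocklength-n sequence with fixed symbol counts n_1..n_k can address log2 of a multinomial coefficient worth of messages, and the rate loss is the gap to the entropy of the empirical distribution. The following is an illustrative sketch of that computation (not code from the paper):

```python
from math import comb, log2

def ccdm_rate_loss(counts):
    """Rate loss (bits/symbol) of a constant-composition code with
    symbol counts n_1..n_k: R_loss = H(P) - (1/n) * log2(n! / prod(n_i!))."""
    n = sum(counts)
    # Entropy of the empirical (target) distribution
    h = -sum((c / n) * log2(c / n) for c in counts if c)
    # log2 of the multinomial coefficient, as a product of binomials
    log_m = 0.0
    rem = n
    for c in counts:
        log_m += log2(comb(rem, c))
        rem -= c
    return h - log_m / n

# Same shaping distribution at two blocklengths: the rate loss shrinks
# as the blocklength grows, matching the trend the abstract describes.
short = ccdm_rate_loss([5, 3, 1, 1])            # n = 10
longer = ccdm_rate_loss([500, 300, 100, 100])   # n = 1000
print(f"rate loss: n=10 -> {short:.3f} bits, n=1000 -> {longer:.4f} bits")
```

The monotone decrease of this gap with blocklength is exactly why CCDM's efficiency suffers at short blocklengths.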
Investigations into the feasibility of an on-line test methodology
This thesis aims to understand how information coding and the protocol that it
supports can affect the characteristics of electronic circuits. More specifically, it
investigates an on-line test methodology called IFIS (If it Fails It Stops) and its
impact on the design, implementation and subsequent characteristics of circuits
intended for application-specific integrated circuit (ASIC) technology.
The first study investigates the influences of information coding and protocol on the
characteristics of IFIS systems. The second study investigates methods of circuit
design applicable to IFIS cells and identifies the technique possessing the
characteristics most suitable for on-line testing. The third study investigates the
characteristics of a 'real-life' commercial UART re-engineered using the techniques
resulting from the previous two studies. The final study investigates the effects of the
halting properties endowed by the protocol on failure diagnosis within IFIS systems.
The outcome of this work is an identification and characterisation of the factors that
influence behaviour, implementation costs and the ability to test and diagnose IFIS
designs.
Gbit/second lossless data compression hardware
This thesis investigates how to improve the performance of lossless data compression hardware
as a tool to reduce the cost per bit stored in a computer system or transmitted over a
communication network.
Lossless data compression allows the exact reconstruction of the original data after
decompression. Its deployment in some high-bandwidth applications has been hampered by
performance limitations in the compression hardware, which must match the performance of the
original system to avoid becoming a bottleneck. Advancing the area of lossless data compression
hardware, hence, offers a valid motivation with the potential of doubling the performance of the
system that incorporates it with minimum investment.
This work starts by presenting an analysis of current compression methods with the objective of
identifying the factors that limit performance and also the factors that increase it.
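The defining property stated above, exact reconstruction of the original data after decompression, can be demonstrated with any general-purpose lossless codec. A minimal sketch using Python's zlib (a software stand-in, purely to illustrate losslessness and redundancy-dependent compression ratios, not the thesis's hardware):

```python
import zlib

data = b"abracadabra " * 1000          # highly redundant input compresses well
compressed = zlib.compress(data, level=9)

# Lossless: decompression reproduces the original bytes exactly
restored = zlib.decompress(compressed)
assert restored == data

print(f"{len(data)} -> {len(compressed)} bytes")
```

Hardware implementations pursue the same guarantee while sustaining line rates (Gbit/s) that software codecs like this cannot.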
Advanced space communications architecture study. Volume 2: Technical report
The technical feasibility and economic viability of satellite system architectures that are suitable for customer premise service (CPS) communications are investigated. System evaluation is performed at 30/20 GHz (Ka-band); however, the system architectures examined are equally applicable to 14/11 GHz (Ku-band). Emphasis is placed on systems that permit low-cost user terminals. Frequency division multiple access (FDMA) is used on the uplink, with typically 10,000 simultaneous accesses per satellite, each of 64 kbps. Bulk demodulators onboard the satellite, in combination with a baseband multiplexer, convert the many narrowband uplink signals into a small number of wideband data streams for downlink transmission. Single-hop network interconnectivity is accomplished via downlink scanning beams. Each satellite is estimated to weigh 5600 lb and consume 6850 W of power; the corresponding payload totals are 1000 lb and 5000 W. Nonrecurring satellite cost is estimated at $113 million. In large quantities, the user terminal cost estimate is $25,000. For an assumed traffic profile, the required system revenue has been computed as a function of the internal rate of return (IRR) on invested capital. The equivalent user charge per minute of 64-kbps channel service has also been determined.
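The uplink figures quoted above imply a sizeable aggregate throughput per satellite; a quick back-of-the-envelope check (the access count and per-channel rate are from the abstract, the totals are simple arithmetic):

```python
# Figures from the abstract: 10,000 simultaneous FDMA accesses, 64 kbps each
accesses = 10_000
rate_bps = 64_000

aggregate_bps = accesses * rate_bps
print(aggregate_bps / 1e6, "Mbps")  # 640.0 Mbps aggregate uplink per satellite
```

This aggregate is what the onboard bulk demodulators and baseband multiplexer must repackage into a small number of wideband downlink streams.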