18 research outputs found

    Cooperative Binning for Semi-deterministic Channels with Non-causal State Information

    The capacity of the semi-deterministic relay channel (SD-RC) with non-causal channel state information (CSI) only at the encoder and decoder is characterized. The capacity is achieved by a scheme based on cooperative-bin-forward, which allows cooperation between the transmitter and the relay without requiring the relay to decode part of the message. The transmission is divided into blocks, and each deterministic output of the channel (observed by the relay) is mapped to a bin. The bin index is used by the encoder and the relay to choose the cooperation codeword in the next transmission block. In causal settings the cooperation is independent of the state. In non-causal settings, dependency between the relay's transmission and the state can increase the transmission rates. The encoder implicitly conveys partial state information to the relay: it uses the states of the next block to select a cooperation codeword, so the relay's transmission depends on the cooperation codeword and therefore also on the states. We also consider the multiple access channel with partial cribbing as a semi-deterministic channel; the capacity region of this channel with non-causal CSI is achieved by the new scheme. Examining the result in several cases, we introduce a new problem of a point-to-point (PTP) channel where the state is provided to the transmitter by a state encoder. Interestingly, even though the CSI is also available at the receiver, we provide an example showing that the capacity with non-causal CSI at the state encoder is strictly larger than the capacity with causal CSI.
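    To make the binning step concrete, here is a minimal toy sketch (in Python) of how an encoder and a relay can agree on a cooperation codeword without the relay decoding any part of the message. Everything here is illustrative: the hash-based bin mapping, the codebook size, and the block contents stand in for the random binning and codebooks used in the actual proof.

```python
import hashlib

NUM_BINS = 8  # illustrative codebook size for the cooperation codewords

def bin_index(deterministic_output: bytes, num_bins: int = NUM_BINS) -> int:
    """Map a block's deterministic channel output to a bin index.

    Both the encoder and the relay observe (or can compute) this
    deterministic output, so they agree on the bin without the relay
    decoding any part of the message.
    """
    digest = hashlib.sha256(deterministic_output).digest()
    return digest[0] % num_bins

# Toy block-Markov flow: the bin chosen from block b's deterministic
# output selects the cooperation codeword both nodes use in block b+1.
cooperation_codebook = [f"u_{i}" for i in range(NUM_BINS)]

blocks = [b"y1_block_1", b"y1_block_2", b"y1_block_3"]
coop_codeword = cooperation_codebook[0]  # agreed-upon initial codeword
for b, det_out in enumerate(blocks, start=1):
    idx = bin_index(det_out)
    print(f"block {b}: transmit with coop codeword {coop_codeword}, "
          f"deterministic output falls in bin {idx}")
    coop_codeword = cooperation_codebook[idx]  # used in the next block
```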

    Causal cognitive radio: An information theoretic perspective

    Master's thesis, Master of Engineering

    Joint source and channel coding


    3D Medical Image Lossless Compressor Using Deep Learning Approaches

    The ever-increasing importance of accelerated information processing, communication, and storage is a defining requirement of the big-data era. With the extensive rise in data availability, ease of information acquisition, and growing data rates, efficient data handling emerges as a critical challenge. Even with advanced hardware and the availability of multiple Graphics Processing Units (GPUs), effectively utilising these technologies remains in high demand. Healthcare systems are among the domains yielding explosive data growth, especially considering modern scanners, which annually produce higher-resolution and more densely sampled medical images with correspondingly massive storage requirements. The bottleneck in data transmission and storage can essentially be handled with an effective compression method. Since medical information is critical and plays an influential role in diagnostic accuracy, exact reconstruction with no loss in quality is strongly encouraged; this is the main objective of any lossless compression algorithm. Given the revolutionary impact of Deep Learning (DL) methods in solving many tasks with state-of-the-art results, including data compression, tremendous opportunities for contributions open up. While considerable efforts have been made to address lossy compression using learning-based approaches, less attention has been paid to lossless compression. This PhD thesis investigates and proposes novel learning-based approaches for compressing 3D medical images losslessly.

    Firstly, we formulate the lossless compression task as a supervised sequential prediction problem, whereby a model learns a projection function to predict a target voxel given a sequence of samples from its spatially surrounding voxels. Such 3D local sampling efficiently exploits spatial similarities and redundancies in a volumetric medical context. The proposed NN-based predictor is trained to minimise the differences from the original data values, while the residual errors are encoded using arithmetic coding to allow lossless reconstruction.

    Following this, we explore the effectiveness of Recurrent Neural Networks (RNNs) as 3D predictors for learning the mapping function from the spatial medical domain (16 bit-depths). We analyse the generalisability and robustness of Long Short-Term Memory (LSTM) models in capturing the 3D spatial dependencies of a voxel's neighbourhood, utilising samples taken from various scanning settings. We evaluate our proposed MedZip models in losslessly compressing unseen Computerized Tomography (CT) and Magnetic Resonance Imaging (MRI) modalities, in comparison with other state-of-the-art lossless compression standards.

    This work then investigates input configurations and sampling schemes for a many-to-one sequence prediction model, specifically for losslessly compressing 3D medical images (16 bit-depths). The main objective is to determine the optimal practice for enabling the proposed LSTM model to achieve a high compression ratio and fast encoding-decoding performance. A solution to the problem of non-deterministic environments is also proposed, allowing models to run in parallel without much drop in compression performance. Experimental evaluations against well-known lossless codecs were carried out on datasets acquired by different hospitals, representing different body segments and distinct scanning modalities (i.e. CT and MRI).

    To conclude, we present a novel data-driven sampling scheme utilising weighted gradient scores for training LSTM prediction-based models. The objective is to determine whether some training samples are significantly more informative than others, specifically in medical domains where samples are available on a scale of billions. The effectiveness of models trained with the presented importance-sampling scheme was evaluated against alternative strategies such as uniform, Gaussian, and slice-based sampling.
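    As a concrete illustration of the prediction-plus-residual paradigm described above, here is a minimal PyTorch sketch of a many-to-one LSTM voxel predictor. It is a toy stand-in, not the thesis's MedZip code: the neighbourhood length, hidden size, and training data are illustrative assumptions, and the arithmetic coding of the integer residuals is only indicated in a comment.

```python
import torch
import torch.nn as nn

class VoxelPredictor(nn.Module):
    """Many-to-one LSTM: predict a target voxel from a causal sequence
    of spatially neighbouring voxels (toy stand-in for the MedZip idea)."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, neighbours: torch.Tensor) -> torch.Tensor:
        # neighbours: (batch, seq_len, 1), intensities normalised from 16-bit range
        out, _ = self.lstm(neighbours)
        return self.head(out[:, -1])  # prediction for the target voxel

model = VoxelPredictor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Toy data: 16 causal neighbours per target voxel, 16-bit intensities.
x = torch.randint(0, 2**16, (32, 16, 1)).float() / 2**16
y = torch.randint(0, 2**16, (32, 1)).float() / 2**16

pred = model(x)
loss = nn.functional.l1_loss(pred, y)  # minimise prediction residuals
loss.backward()
opt.step()

# Lossless reconstruction: the *integer* residuals are what an arithmetic
# coder would compress; the decoder re-runs the same predictor on already
# reconstructed voxels and adds the decoded residual back.
residual = (y * 2**16).long() - (pred.detach() * 2**16).round().long()
```

    Because the decoder must re-run an identical predictor, any non-determinism in the model's execution breaks lossless reconstruction, which appears to be the parallel-execution problem the abstract alludes to.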

    Causal Sampling, Compressing, and Channel Coding of Streaming Data

    With the emergence of the Internet of Things, communication systems, such as those employed in distributed control and tracking scenarios, are becoming increasingly dynamic, interactive, and delay-sensitive. The data in such real-time systems arrive at the encoder progressively in a streaming fashion. An intriguing question is: what codes can transmit streaming data with both high reliability and low latency? Classical non-causal (block) encoding schemes can transmit data reliably, but under the assumption that the encoder knows the entire data block before the transmission. While this is a realistic assumption in delay-tolerant systems, it is ill-suited to real-time systems due to the delay introduced by collecting data into a block. This thesis studies causal encoding: the encoder transmits information based on the causally received data while the data is still streaming in, and immediately incorporates newly received data into a continuing transmission on the fly. This thesis investigates causal encoding of streaming data in three scenarios: causal sampling, causal lossy compressing, and causal joint source-channel coding (JSCC). In the causal sampling scenario, a sampler observes a continuous-time source process and causally decides when to transmit real-valued samples of it under a constraint on the average number of samples per second; an estimator uses the causally received samples to approximate the source process in real time. We propose a causal sampling policy that achieves the best tradeoff between the sampling frequency and the end-to-end real-time estimation distortion for a class of continuous Markov processes. In the causal lossy compressing scenario, the sampling frequency constraint of the causal sampling scenario is replaced by a rate constraint on the average number of bits per second. We propose a causal code that achieves the best causal distortion-rate tradeoff for the same class of processes. In the causal JSCC scenario, the noiseless channel and the continuous-time process in the previous scenarios are replaced by a discrete memoryless channel with feedback and a sequence of streaming symbols, respectively. We propose a causal joint source-channel code that achieves the maximum exponentially decaying rate of the error probability compatible with a given rate. Remarkably, the fundamental limits in the causal lossy compressing and causal JSCC scenarios achieved by our causal codes are no worse than those achieved by the best non-causal codes. In addition to deriving the fundamental limits and presenting the causal codes that achieve them, we also show that our codes apply to control systems, are resilient to system deficiencies such as channel delay and noise, and have low complexity.
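    The causal sampling scenario can be illustrated with a small simulation. The sketch below is a minimal, assumption-laden example rather than the thesis's construction: it uses a Wiener process and a symmetric threshold-triggered policy, sending a sample whenever the estimation error magnitude reaches a threshold while the estimator holds the last received sample.

```python
import numpy as np

rng = np.random.default_rng(0)
dt, T, beta = 1e-3, 100.0, 1.0  # step size, horizon, threshold (all illustrative)

n = int(T / dt)
w = np.cumsum(rng.normal(0.0, np.sqrt(dt), n))  # Wiener process sample path

est = 0.0          # estimator holds the most recent sample
num_samples = 0
sq_err = 0.0
for x in w:
    if abs(x - est) >= beta:   # causal policy: sample on threshold crossing
        est = x
        num_samples += 1
    sq_err += (x - est) ** 2 * dt

print(f"sampling frequency ~ {num_samples / T:.3f} samples/s")
print(f"time-averaged MSE   ~ {sq_err / T:.3f}")
```

    Sweeping the threshold beta trades sampling frequency against real-time estimation distortion, which is the tradeoff the thesis characterizes.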

    Proceedings of the 1st Virtual Control Conference VCC 2010


    Gaussian Interference Channels: Examining the Achievable Rate Region

    Interference is considered one of the main barriers to improving the throughput of communication systems. Consequently, interference management plays an integral role in wireless communications. Although the importance of interference has prompted numerous studies on the interference channel, the capacity region of this channel is still unknown. The focus of this thesis is on Gaussian interference channels. The two-user Gaussian Interference Channel (GIC) represents the standard model of a wireless system in which two independent transmitter-receiver pairs share the bandwidth. Three important problems are investigated: the boundary of the best-known achievable rate region, the complexity of sum-rate optimal codes, and the role of causal cooperation in enlarging the achievable rate region. The best-known achievable rate region for the two-user GIC is due to the Han-Kobayashi (HK) scheme. The HK achievable rate region includes the rate regions achieved by all other known schemes. However, the mathematical expressions that characterize the HK rate region are complicated and involve a time-sharing variable and two arbitrary power-splitting variables. Accordingly, the boundary points of the HK rate region, and in particular the maximum HK sum-rate, are not known in general. The second chapter of this thesis studies the sum-rate of the HK scheme with Gaussian inputs when time sharing is not used. Note that the optimal input distribution is unknown; however, in all cases where the sum-capacity is known, it is achieved by Gaussian inputs. In this thesis, we examine the HK scheme with Gaussian inputs. For the weak interference class, this study fully characterizes the maximum achievable sum-rate and shows that the weak interference class is partitioned into five parts. For each part, the optimal power splitting and the corresponding maximum achievable sum-rate are expressed in closed form. In the third chapter, we show that the same approach can be adopted to characterize an arbitrary weighted sum-rate. Moreover, when time sharing is used, we express the entire boundary in terms of the upper concave envelope of a function. Consequently, the entire boundary of the HK rate region with Gaussian inputs is fully characterized. The decoding complexity of a given coding scheme is of paramount importance in wireless communications. Most coding schemes proposed for the interference channel take advantage of joint decoding to achieve a larger rate region. However, decoding complexity escalates considerably when joint decoding is used. The fourth chapter studies the achievable sum-rate of the two-user GIC when joint decoding is replaced by successive decoding. This achievable sum-rate is known when interference is mixed; however, when interference is strong or weak, it is not well understood. First, this study proves that when interference is strong and the transmitters' powers satisfy certain conditions, the sum-capacity can be achieved by successive decoding. Second, when interference is weak, a novel rate-splitting scheme is proposed that does not use joint decoding. It is proved that the difference between the sum-rate of this scheme and that of the HK scheme is bounded. This study sheds light on the structure of sum-rate optimal codes. Causal cooperation among nodes in a communication system is a promising approach to increasing overall system performance. To guarantee causality, delay is inevitable in cooperative communication systems. Traditionally, delay granularity has been limited to one symbol; however, channel delay is in fact governed by channel memory and can be shorter. For example, the delay requirement in Orthogonal Frequency-Division Multiplexing (OFDM), captured in the cyclic prefix, is typically much shorter than the OFDM symbol itself. This perspective is used in the fifth chapter to study the two-user GIC with full-duplex transmitters. Among other results, it is shown that under a mild condition, the maximum multiplexing gain of this channel is in fact two.
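    The successive-decoding question studied in the fourth chapter can be made concrete with a short numerical sketch for the standard two-user GIC with unit-variance noise, y1 = x1 + sqrt(a) x2 + z1 and y2 = sqrt(b) x1 + x2 + z2. In the sketch below each receiver first decodes the interfering message, treating its own signal as noise, and then decodes its own message interference-free; the powers and gains are arbitrary examples, not values from the thesis.

```python
import numpy as np

def C(snr: float) -> float:
    """Gaussian capacity function, bits per channel use."""
    return 0.5 * np.log2(1.0 + snr)

P1, P2 = 5.0, 5.0     # transmit powers (illustrative)
a, b = 2.0, 2.0       # cross-link power gains; a, b >= 1 is strong interference

# Successive decoding: each receiver first decodes the *interfering*
# message treating its own signal as noise, then its own message cleanly.
R1 = min(C(P1), C(b * P1 / (1.0 + P2)))
R2 = min(C(P2), C(a * P2 / (1.0 + P1)))
sd_sum = R1 + R2

# Sum-capacity of the strong-interference GIC (achieved with joint decoding):
sum_cap = min(C(P1 + a * P2), C(b * P1 + P2))

print(f"successive-decoding sum-rate:     {sd_sum:.3f} bits/use")
print(f"strong-interference sum-capacity: {sum_cap:.3f} bits/use")
```

    With these particular powers the successive-decoding sum-rate falls short of the sum-capacity, consistent with the abstract's point that successive decoding achieves the sum-capacity only when the transmitters' powers satisfy certain conditions.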

    Advanced receivers for distributed cooperation in mobile ad hoc networks

    Mobile ad hoc networks (MANETs) are rapidly deployable wireless communications systems, operating with minimal coordination in order to avoid the spectral-efficiency losses caused by overhead. Cooperative transmission schemes are attractive for MANETs, but the distributed nature of such protocols comes with an increased level of interference, whose impact is further amplified by the need to push the limits of energy and spectral efficiency. Hence, the impact of interference has to be mitigated through the use of PHY-layer signal processing algorithms with reasonable computational complexity. Recent advances in iterative digital receiver design exploit approximate Bayesian inference, and the message-passing techniques derived from it, to improve the capabilities of well-established turbo detectors. In particular, expectation propagation (EP) is a flexible technique which offers attractive complexity-performance trade-offs in situations where conventional belief propagation is limited by computational complexity. Moreover, thanks to emerging techniques in deep learning, such iterative structures can be cast into deep detection networks, where learning the algorithmic hyper-parameters further improves receiver performance. In this thesis, EP-based finite-impulse-response decision-feedback equalizers are designed; they achieve significant improvements over more conventional turbo-equalization techniques, especially in high-spectral-efficiency applications, while having the advantage of being asymptotically predictable. A framework for designing frequency-domain EP-based receivers is proposed, in order to obtain detection architectures with low computational complexity. This framework is theoretically and numerically analysed with a focus on channel equalization, and it is then extended to handle detection for time-varying channels and multiple-antenna systems. The design of multi-user detectors and the impact of channel estimation are also explored to understand the capabilities and limits of this framework. Finally, a finite-length performance prediction method is presented for carrying out link abstraction for the EP-based frequency-domain equalizer. The impact of accurate physical-layer modelling is evaluated in the context of cooperative broadcasting in tactical MANETs, thanks to a flexible MAC-level simulator.
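    The EP machinery mentioned above rests on a moment-matching update. The following minimal sketch, an illustration rather than the thesis's receiver, performs one such update for a single BPSK symbol observed in Gaussian noise: the exact discrete posterior is projected onto a Gaussian by matching its first two moments, and the incoming "cavity" message is divided out to form the extrinsic Gaussian message passed onward.

```python
import numpy as np

def ep_update(cavity_mean: float, cavity_var: float,
              obs: float, noise_var: float,
              symbols=(-1.0, 1.0)):
    """One EP moment-matching step for a single BPSK symbol.

    Combine a Gaussian cavity message N(cavity_mean, cavity_var) with a
    Gaussian observation over the discrete alphabet, match the first two
    moments of the exact (discrete) posterior, then divide out the cavity
    to obtain the extrinsic Gaussian message.
    """
    s = np.asarray(symbols)
    # Unnormalised posterior over the discrete alphabet
    logp = (-(obs - s) ** 2 / (2 * noise_var)
            - (s - cavity_mean) ** 2 / (2 * cavity_var))
    p = np.exp(logp - logp.max())
    p /= p.sum()
    # Moment matching: project the discrete posterior onto a Gaussian
    post_mean = float(np.dot(p, s))
    post_var = max(float(np.dot(p, (s - post_mean) ** 2)), 1e-9)
    # Gaussian division: extrinsic = posterior / cavity
    ext_prec = max(1.0 / post_var - 1.0 / cavity_var, 1e-9)
    ext_var = 1.0 / ext_prec
    ext_mean = ext_var * (post_mean / post_var - cavity_mean / cavity_var)
    return ext_mean, ext_var

m, v = ep_update(cavity_mean=0.2, cavity_var=1.0, obs=0.8, noise_var=0.5)
print(f"extrinsic message: mean={m:.3f}, var={v:.3f}")
```

    Iterating such updates across all symbols, with damping and the numerical safeguards shown above, is what gives EP-based equalizers their favourable complexity-performance trade-off relative to joint detection.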