
    Interactive Joint Transfer of Energy and Information

    In some communication networks, such as passive RFID systems, the energy used to transfer information between a sender and a recipient can be reused for successive communication tasks. In fact, from known results in physics, any system that exchanges information via the transfer of physical resources, such as radio waves, particles, and qubits, can conceivably reuse at least part of the received resources. This paper illustrates some of the new challenges that arise in the design of communication networks in which the signals exchanged by the nodes carry both information and energy. To this end, a baseline two-way communication system is considered in which two nodes communicate in an interactive fashion. In this system, a node can either send an "on" symbol (or "1"), which costs one unit of energy, or an "off" symbol (or "0"), which requires no energy expenditure. Upon reception of a "1" signal, the recipient node "harvests", with some probability, the energy contained in the signal and stores it for future communication tasks. Inner and outer bounds on the achievable rates are derived. Numerical results demonstrate the effectiveness of the proposed strategies and illustrate some key design insights.
    Comment: 29 pages, 11 figures. Submitted to IEEE Transactions on Communications. arXiv admin note: substantial text overlap with arXiv:1204.192
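
    The baseline model described above (an "on" symbol costs one energy unit, and the recipient harvests that unit with some probability) can be illustrated with a minimal simulation sketch. This is not the paper's analysis; the alternating-turns protocol, the `harvest_prob` parameter, and the greedy "send whenever energy is available" policy are illustrative assumptions.

```python
import random

def simulate_two_way(num_slots, harvest_prob, initial_battery, seed=0):
    """Toy simulation of the on/off two-way link: sending a '1' costs one
    energy unit, and the recipient harvests that unit with probability
    harvest_prob; a '0' is free. Roles alternate each slot (an assumed
    interaction pattern). Returns the final battery levels of both nodes."""
    rng = random.Random(seed)
    battery = [initial_battery, initial_battery]  # node A, node B
    sender = 0
    for _ in range(num_slots):
        receiver = 1 - sender
        # Greedy policy: transmit a '1' whenever energy is available,
        # otherwise an implicit (free) '0'.
        if battery[sender] > 0:
            battery[sender] -= 1              # one unit spent on the 'on' symbol
            if rng.random() < harvest_prob:
                battery[receiver] += 1        # unit harvested and stored
        sender = receiver                     # interactive: roles alternate
    return battery
```

    With `harvest_prob = 1.0` the total system energy is conserved, while with `harvest_prob = 0.0` every transmitted unit is lost, which is the tension the inner and outer bounds quantify.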

    Joint Interference Alignment and Bi-Directional Scheduling for MIMO Two-Way Multi-Link Networks

    By means of the emerging technique of dynamic Time Division Duplex (TDD), the switching point between uplink and downlink transmissions can be optimized across a multi-cell system in order to reduce the impact of inter-cell interference. It has been recently recognized that also optimizing the order in which uplink and downlink transmissions, or more generally the two directions of a two-way link, are scheduled can lead to significant benefits in terms of interference reduction. In this work, the optimization of bi-directional scheduling is investigated in conjunction with the design of linear precoding and equalization for a general multi-link MIMO two-way system. A simple algorithm is proposed that performs the joint optimization of the ordering of the transmissions in the two directions of the two-way links and of the linear transceivers, with the aim of minimizing the interference leakage power. Numerical results demonstrate the effectiveness of the proposed strategy.
    Comment: To be presented at ICC 2015. 6 pages, 7 figures
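
    The scheduling half of this joint optimization can be sketched for the simplest scalar (single-antenna) case, where there are no transceiver matrices to design and only the direction of each two-way link is chosen. The `cross` table of pairwise leakage powers and the exhaustive search are illustrative assumptions; the paper's algorithm handles the MIMO case by alternating scheduling with linear transceiver updates.

```python
from itertools import product

def min_leakage_schedule(cross):
    """Illustrative sketch of the bi-directional scheduling subproblem for
    scalar links: cross[i][j][di][dj] is the assumed interference power
    leaked from link i onto link j when they operate in directions
    di, dj in {0, 1}. Exhaustively pick the direction tuple minimizing
    total leakage power."""
    L = len(cross)
    best_dirs, best_val = None, float("inf")
    for dirs in product((0, 1), repeat=L):
        total = sum(cross[i][j][dirs[i]][dirs[j]]
                    for i in range(L) for j in range(L) if i != j)
        if total < best_val:
            best_dirs, best_val = dirs, total
    return best_dirs, best_val
```

    The exhaustive search over the 2^L direction tuples is only viable for small numbers of links; this is precisely why a low-complexity joint algorithm is of interest.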

    A Statistical Learning Approach to Ultra-Reliable Low Latency Communication

    Mission-critical applications require Ultra-Reliable Low Latency (URLLC) wireless connections, where the packet error rate (PER) goes down to 10^{-9}. Fulfillment of such bold reliability figures becomes meaningful only if it can be related to a statistical model in which the URLLC system operates. However, this model is generally not known and needs to be learned by sampling the wireless environment. In this paper we treat this fundamental problem in the simplest possible communication-theoretic setting: selecting a transmission rate over a dynamic wireless channel in order to guarantee high transmission reliability. We introduce a novel statistical framework for the design and assessment of URLLC systems, consisting of three key components: (i) channel model selection; (ii) learning the model using training; (iii) selecting the transmission rate to satisfy the required reliability. As it is insufficient to specify the URLLC requirements through PER alone, two types of statistical constraints are introduced: Averaged Reliability (AR) and Probably Correct Reliability (PCR). The analysis and the evaluations show that adequate model selection and learning are indispensable for designing a consistent physical layer that asymptotically behaves as if the channel were known perfectly, while maintaining the reliability requirements of URLLC systems.
    Comment: Submitted for publication
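
    The flavor of a PCR-style guarantee can be sketched nonparametrically: pick the rate from an empirical quantile of sampled channel gains, backed off so that the guarantee holds with high probability over the training draw. The Dvoretzky-Kiefer-Wolfowitz back-off and the log2(1+g) outage model are illustrative assumptions, not the paper's exact construction.

```python
import math

def select_rate_pcr(gain_samples, target_per, confidence):
    """Sketch of rate selection under a Probably Correct Reliability style
    constraint: choose the rate from the empirical distribution of channel
    gains, backing off the target quantile by the DKW bound so that, with
    probability >= confidence over the training samples, the true outage
    probability stays below target_per. Assumes outage occurs when the
    instantaneous gain cannot support the rate (log2(1+g) model)."""
    n = len(gain_samples)
    # DKW inequality: empirical CDF is within eps_n of the true CDF
    # with probability at least `confidence`.
    eps_n = math.sqrt(math.log(2.0 / (1.0 - confidence)) / (2.0 * n))
    quantile = max(target_per - eps_n, 0.0)   # conservative back-off
    sorted_gains = sorted(gain_samples)
    g = sorted_gains[int(quantile * n)]       # conservative order statistic
    return math.log2(1.0 + g)                 # rate supported at that gain
```

    Note how the back-off term shrinks only as 1/sqrt(n): meeting a PER target with high confidence from few samples forces a very conservative rate, which is the learning cost the paper quantifies.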

    Delay and Communication Tradeoffs for Blockchain Systems With Lightweight IoT Clients


    Non-Orthogonal Multiplexing of Ultra-Reliable and Broadband Services in Fog-Radio Architectures


    Ultra Reliable Low Latency Communications in Massive Multi-Antenna Systems


    Progressive feature transmission for split classification at the wireless edge

    We consider the scenario of inference at the wireless edge, in which devices are connected to an edge server and ask the server to carry out remote classification, that is, to classify data samples available at the edge devices. This requires the edge devices to upload high-dimensional features of samples over resource-constrained wireless channels, which creates a communication bottleneck. The conventional feature-pruning solution would require the device to have access to the inference model, which is not available in the split inference scenario considered here. To address this issue, we propose the progressive feature transmission (ProgressFTX) protocol, which minimizes the overhead by progressively transmitting features until a target confidence level is reached. A control policy is proposed to accelerate inference, comprising two key operations: importance-aware feature selection at the server and transmission-termination control. For the former, it is shown that selecting the most important features, characterized by the largest discriminant gains of the corresponding feature dimensions, achieves a sub-optimal performance. For the latter, the proposed policy is shown to exhibit a threshold structure: specifically, transmission is stopped when the incremental uncertainty reduction from further feature transmission is outweighed by its communication cost. The indices of the selected features and the transmission decision are fed back to the device in each slot. The control policy is first derived for the tractable case of linear classification and then extended to the more complex case of classification using a convolutional neural network. Both Gaussian and fading channels are considered. Experimental results are obtained for both a statistical data model and a real dataset. It is shown that ProgressFTX can substantially reduce the communication latency compared to conventional feature-pruning and random feature transmission strategies.
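
    The two operations of the control policy, importance-aware selection and threshold-based termination, can be sketched together in a few lines. The scalar `discriminant_gains` list and the fixed `cost_per_feature` threshold are illustrative stand-ins for the paper's uncertainty-reduction and channel-cost quantities.

```python
def progressive_feature_schedule(discriminant_gains, cost_per_feature):
    """Illustrative sketch (not the paper's exact policy): transmit features
    in decreasing order of discriminant gain and stop once the next
    feature's incremental gain no longer outweighs its communication cost,
    mirroring the threshold structure of the termination rule.
    Returns the indices of the features scheduled for transmission."""
    order = sorted(range(len(discriminant_gains)),
                   key=lambda i: discriminant_gains[i], reverse=True)
    selected = []
    for i in order:
        if discriminant_gains[i] <= cost_per_feature:
            break  # incremental benefit outweighed by channel cost: stop
        selected.append(i)
    return selected
```

    Because the gains are visited in decreasing order, the first feature that fails the threshold test guarantees all remaining ones fail it too, which is what makes a simple stopping rule sufficient.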

    Outage Analysis of Downlink URLLC in Massive MIMO systems with Power Allocation

    Massive MIMO is seen as a main enabler for low-latency communications, thanks to its high number of spatial degrees of freedom. The channel hardening and favorable propagation properties of Massive MIMO are particularly important for multiplexing several URLLC devices. However, the actual utility of channel hardening and spatial multiplexing depends critically on the accuracy of the channel knowledge. When several low-latency devices are multiplexed, the cost of acquiring accurate channel knowledge becomes critical, and it is not evident how many devices can be served under a given latency-reliability requirement or how many pilot symbols should be allocated. This paper investigates the trade-off between achieving high spectral efficiency and high reliability in the downlink by employing various power allocation strategies, for maximum-ratio and minimum mean square error precoders. The results show that max-min SINR power allocation achieves the best reliability, at the expense of a lower sum spectral efficiency.
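
    The max-min SINR allocation highlighted in the conclusion has a closed form in the idealized interference-free case (perfect channel hardening, orthogonal users): equalizing all users' SINRs maximizes the minimum, so power is split inversely proportionally to each channel gain. This toy sketch ignores the imperfect-CSI and interference effects that the paper's outage analysis is actually about.

```python
def max_min_power_allocation(gains, noise, total_power):
    """Toy max-min SINR power allocation for interference-free downlink
    channels: with SINR_k = gains[k] * p_k / noise, equalizing all SINRs
    maximizes the minimum one. Returns the per-user powers and the common
    SINR every user attains."""
    inv = [noise / g for g in gains]          # power needed per unit SINR
    scale = total_power / sum(inv)            # common SINR level
    powers = [scale * x for x in inv]         # weaker channels get more power
    return powers, scale
```

    The inverse-proportional split makes the reliability cost visible: one user in a deep fade absorbs most of the budget, which is why max-min allocation trades sum spectral efficiency for reliability.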

    Stochastic Geometric Coverage Analysis in mmWave Cellular Networks With Realistic Channel and Antenna Radiation Models

    Millimeter-wave (mmWave) bands will play an important role in 5G wireless systems. The system performance can be assessed using models from stochastic geometry that cater for the directivity in the desired signal transmissions as well as the interference, and by calculating the signal-to-interference-plus-noise ratio (SINR) coverage. Nonetheless, the accuracy of the existing coverage expressions derived through stochastic geometry may be questioned, as it is not clear whether they capture the impact of the detailed mmWave channel and antenna features. In this paper, we propose an SINR coverage analysis framework that includes a realistic channel model and antenna element radiation patterns. We introduce and estimate two parameters, aligned gain and misaligned gain, associated with the desired signal beam and the interfering signal beam, respectively. The distributions of these gains are used to determine the distribution of the SINR, which is compared with the corresponding SINR coverage calculated through system-level simulations. The results show that both aligned and misaligned gains can be modeled as exponential-logarithmically distributed random variables with the highest accuracy, and can further be approximated as exponentially distributed random variables with reasonable accuracy. These approximations can be used as a tool to evaluate the system-level performance of various 5G connectivity scenarios in the mmWave band.
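
    The simpler of the two approximations, fitting an exponential distribution to measured gain samples, is a one-line maximum-likelihood estimate: the rate is the reciprocal of the sample mean. This sketch shows that fit and the resulting tail function; it is a standard estimator, not the paper's estimation procedure.

```python
import math

def fit_exponential_gain(samples):
    """Maximum-likelihood fit of an exponential distribution to beam-gain
    samples: rate = 1 / sample mean. Returns the rate and the tail
    function P(G > x) = exp(-rate * x), the quantity that enters
    stochastic-geometry SINR coverage expressions."""
    mean = sum(samples) / len(samples)
    rate = 1.0 / mean
    def tail(x):
        return math.exp(-rate * x)
    return rate, tail
```

    The exponential tail is what makes the coverage integrals tractable in closed form, which is the practical payoff of accepting the coarser of the two fits.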

    Goal-Oriented Scheduling in Sensor Networks With Application Timing Awareness

    Taking inspiration from linguistics, the communication-theoretic community has recently shown significant interest in pragmatic, or goal-oriented, communication. In this paper, we tackle the problem of pragmatic communication with multiple clients with different, and potentially conflicting, objectives. We capture the goal-oriented aspect through the metric of Value of Information (VoI), which considers the estimation of the remote process as well as the timing constraints. However, the most common definition of VoI is simply the Mean Square Error (MSE) of the whole system state, regardless of its relevance for a specific client. Our work aims to overcome this limitation by including different summary statistics, i.e., value functions of the state, for separate clients, and a diversified query process on the client side, expressed through the fact that different applications may request different functions of the process state at different times. A query-aware Deep Reinforcement Learning (DRL) solution based on statically defined VoI can outperform naive approaches by 15-20%.