
    Ultra-Reliable and Low Latency Communication in mmWave-Enabled Massive MIMO Networks

    Ultra-reliability and low latency are two key requirements in 5G networks. In this letter, we investigate the problem of ultra-reliable and low-latency communication (URLLC) in millimeter wave (mmWave)-enabled massive multiple-input multiple-output (MIMO) networks. The problem is cast as a network utility maximization subject to probabilistic latency and reliability constraints. To solve this problem, we resort to the Lyapunov technique, whereby we propose a utility-delay control approach that adapts to channel variations and queue dynamics. Numerical results demonstrate that our proposed approach ensures reliable communication with a guaranteed probability of 99.99%, and reduces latency by 28.41% and 77.11% as compared to baselines with and without probabilistic latency constraints, respectively. Comment: Accepted May 12, 2017 by IEEE Communications Letters. Topic is Ultra-Reliable and Low Latency Communication in 5G mmWave Network
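The drift-plus-penalty form of Lyapunov optimization that the abstract alludes to can be illustrated with a toy single-queue controller. The sketch below is not the authors' algorithm: the rate model, power levels, utility function, and trade-off parameter V are all assumptions chosen for illustration.

```python
# Toy Lyapunov drift-plus-penalty controller for a single queue.
# All quantities (rate model, power levels, utility, V) are illustrative
# assumptions, not taken from the paper.
import math
import random

V = 10.0                          # utility-delay trade-off parameter (assumed)
POWER_LEVELS = [0.0, 0.5, 1.0]    # hypothetical transmit power choices
SLOTS = 10_000

def rate(p, h):
    """Hypothetical achievable rate for transmit power p and channel gain h."""
    return math.log2(1.0 + p * h)

def utility(p):
    """Hypothetical utility: penalize transmit power."""
    return -p

Q = 0.0                           # queue backlog (bits)
avg_Q = 0.0
for t in range(SLOTS):
    a = random.expovariate(1.0)   # random arrivals (assumed)
    h = random.expovariate(1.0)   # random channel gain (assumed fading model)
    # Drift-plus-penalty rule: maximize Q * service + V * utility each slot.
    p_star = max(POWER_LEVELS, key=lambda p: Q * rate(p, h) + V * utility(p))
    Q = max(Q + a - rate(p_star, h), 0.0)   # queue update
    avg_Q += Q / SLOTS

print(f"time-average backlog approx. {avg_Q:.2f} (a latency proxy via Little's law)")
```

A larger V trades longer queues (hence higher delay) for better utility; probabilistic latency and reliability constraints, as in the letter, are typically handled in this framework with additional virtual queues.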

    Saccadic latency in amblyopia.

    We measured saccadic latencies in a large sample (total n = 459) of individuals with amblyopia or risk factors for amblyopia, e.g., strabismus or anisometropia, and in normal control subjects. We presented an easily visible target randomly to the left or right, 3.5° from fixation. The interocular difference in saccadic latency is highly correlated with the interocular difference in LogMAR (Snellen) acuity: as the acuity difference increases, so does the latency difference. Strabismic and strabismic-anisometropic amblyopes have, on average, a larger interocular difference in LogMAR acuity than anisometropic amblyopes, and thus their interocular latency difference is, on average, significantly larger as well. Despite its relation to LogMAR acuity, the longer latency in strabismic amblyopes cannot be attributed either to poor resolution or to reduced contrast sensitivity, because their interocular differences in grating acuity and in contrast sensitivity are roughly the same as those of anisometropic amblyopes. The correlation between LogMAR acuity and saccadic latency arises from the confluence of two separable effects in the strabismic amblyopic eye: poor letter recognition impairs LogMAR acuity, while an intrinsic sluggishness delays reaction time. We speculate that the frequent microsaccades, and the accompanying attentional shifts, made while strabismic amblyopes struggle to maintain fixation with their amblyopic eyes, result in all types of reactions being irreducibly delayed.
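To make the reported relationship concrete, the following sketch shows how an interocular-difference correlation of this kind could be computed; the data below are synthetic placeholders, not the study's measurements.

```python
# Illustrative sketch (not the study's analysis code or data): correlating the
# interocular LogMAR acuity difference with the interocular saccadic latency
# difference. The arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
n = 50
acuity_diff = rng.uniform(0.0, 1.0, n)                    # interocular LogMAR difference (placeholder)
latency_diff = 80.0 * acuity_diff + rng.normal(0, 15, n)  # interocular latency difference in ms (placeholder)

r = np.corrcoef(acuity_diff, latency_diff)[0, 1]          # Pearson correlation
print(f"Pearson r between acuity difference and latency difference: {r:.2f}")
```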

    Frameless ALOHA with Reliability-Latency Guarantees

    One of the novelties brought by 5G is that wireless system design has increasingly turned its focus to guaranteeing reliability and latency. This shifts the design objective of random access protocols from throughput optimization towards constraints based on reliability and latency. For this purpose, we use frameless ALOHA, which relies on successive interference cancellation (SIC), and derive an exact finite-length analysis of the statistics of the unresolved users (reliability) as a function of the contention period length (latency). The presented analysis can be used to derive reliability-latency guarantees. We also optimize the scheme parameters in order to maximize the reliability within a given latency. Our approach represents an important step towards the general area of design and analysis of access protocols with reliability-latency guarantees. Comment: Accepted for presentation at IEEE Globecom 201
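A minimal Monte Carlo sketch of frameless ALOHA with SIC can illustrate how reliability (here, the fraction of contention periods in which all users are resolved) depends on the contention period length. The number of users, slot access probability, and period length below are assumptions for illustration, not the parameters or the exact finite-length analysis from the paper.

```python
# Monte Carlo sketch of frameless ALOHA with successive interference
# cancellation (SIC) on an idealized collision channel. Parameter values are
# illustrative assumptions only.
import random

def simulate(num_users=50, num_slots=75, p_access=0.06, trials=500):
    unresolved_counts = []
    for _ in range(trials):
        # slots[s] = set of users that transmitted a replica in slot s
        slots = [set() for _ in range(num_slots)]
        for u in range(num_users):
            for s in range(num_slots):
                if random.random() < p_access:
                    slots[s].add(u)
        resolved = set()
        progress = True
        while progress:                        # iterative SIC decoding
            progress = False
            for s in range(num_slots):
                remaining = slots[s] - resolved
                if len(remaining) == 1:        # singleton slot: decode this user
                    resolved.add(remaining.pop())
                    progress = True
        unresolved_counts.append(num_users - len(resolved))
    return unresolved_counts

counts = simulate()
reliability = sum(c == 0 for c in counts) / len(counts)
print(f"P(all users resolved within the contention period) approx. {reliability:.3f}")
```

Sweeping num_slots while keeping the other parameters fixed gives an empirical reliability-versus-latency curve of the kind the analysis characterizes exactly.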

    Quantifying the latency benefits of near-edge and in-network FPGA acceleration

    Transmitting data to cloud datacenters in distributed IoT applications introduces significant communication latency, but is often the only feasible solution when source nodes are computationally limited. To address latency concerns, cloudlets, in-network computing, and more capable edge nodes are all being explored as ways of moving processing capability towards the edge of the network. Hardware acceleration using Field Programmable Gate Arrays (FPGAs) is also seeing increased interest due to reduced computation latency and improved efficiency. This paper evaluates the implications of these offloading approaches using a neural-network-based image classification application as a case study, quantifying both the computation and communication latency resulting from different platform choices. We consider communication latency, including the ingestion of packets for processing on the target platform, and show that this varies significantly with the choice of platform. We demonstrate that emerging in-network accelerator approaches offer much improved and predictable performance, as well as better scaling to support multiple data sources.
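The latency breakdown that such an evaluation quantifies can be mirrored by a simple measurement harness that times the ingestion, computation, and return phases separately. The send/compute/receive callables below are hypothetical stand-ins, not the paper's tooling; in a real evaluation they would wrap the actual transport and the accelerator invocation.

```python
# Minimal latency-breakdown harness, assuming hypothetical send/compute/receive
# callables. It separates communication (ingest + return) from computation,
# mirroring the distinction drawn in the evaluation above.
import time

def measure(send, compute, receive, payload):
    t0 = time.perf_counter()
    handle = send(payload)          # packet ingestion / transfer to the platform
    t1 = time.perf_counter()
    result = compute(handle)        # on-platform inference
    t2 = time.perf_counter()
    receive(result)                 # return path
    t3 = time.perf_counter()
    return {"ingest_ms": (t1 - t0) * 1e3,
            "compute_ms": (t2 - t1) * 1e3,
            "return_ms": (t3 - t2) * 1e3}

# Example with local stand-in functions (replace with real transport/compute):
stats = measure(send=lambda p: p, compute=lambda h: h, receive=lambda r: None,
                payload=b"image bytes")
print(stats)
```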