
    A predictive resource allocation algorithm in the LTE uplink for event based M2M applications

    Some M2M applications, such as event monitoring, involve a group of devices in a vicinity that act in a coordinated manner. An LTE network can exploit the correlated traffic characteristics of such devices by proactively assigning resources to devices based upon the activity of neighboring devices in the same group. This can reduce latency compared to waiting for each device in the group to request resources reactively per the standard LTE protocol. In this paper, we specify a new low-complexity predictive resource allocation algorithm, known as the one-way algorithm, for use with delay-sensitive event-based M2M applications in the LTE uplink. This algorithm requires minimal incremental processing power and memory resources at the eNodeB, yet can reduce the mean uplink latency below the minimum possible value for a non-predictive resource allocation algorithm. We develop mathematical models for the probability of a prediction, the probability of a successful prediction, the probability of an unsuccessful prediction, the resource usage/wastage probabilities, and the mean uplink latency. The validity of these models is demonstrated by comparison with results from a simulation. The models can be used offline by network operators or online in real time by the eNodeB scheduler to optimize performance.
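    The abstract does not reproduce the analytical models themselves, so the following Python sketch only illustrates, by Monte Carlo, the kind of quantities they cover (the probability of a successful or wasted prediction and the mean uplink latency). The correlated-trigger model and every parameter value are assumptions made purely for illustration, not figures from the paper.

```python
import random

SR_PERIOD_MS = 40        # assumed scheduling-request period
GRANT_DELAY_MS = 5       # assumed delay from grant to the first possible transmission
TRIGGER_PROB = 0.7       # assumed chance that a neighboring device also triggers
TRIGGER_SPREAD_MS = 30   # assumed window in which a neighbor's trigger occurs
GRANT_HOLD_MS = 20       # assumed time a predictive grant is held before it expires

def simulate(num_runs=100_000, seed=1):
    """Estimate success/wastage probabilities and mean uplink latency."""
    random.seed(seed)
    successes = wastes = 0
    latencies = []
    for _ in range(num_runs):
        # The first device in the group has just requested resources; a predictive
        # grant is issued to one neighbor, which may or may not trigger afterwards.
        triggers = random.random() < TRIGGER_PROB
        trigger_time = random.uniform(0, TRIGGER_SPREAD_MS) if triggers else None
        if triggers and trigger_time <= GRANT_HOLD_MS:
            successes += 1
            latencies.append(GRANT_DELAY_MS)      # data rides the proactive grant
        else:
            wastes += 1
            if triggers:
                # Fall back to the reactive path: wait for the next SR opportunity.
                latencies.append(random.uniform(0, SR_PERIOD_MS) + GRANT_DELAY_MS)
    return successes / num_runs, wastes / num_runs, sum(latencies) / len(latencies)

if __name__ == "__main__":
    p_success, p_waste, mean_latency = simulate()
    print(f"P(successful prediction)={p_success:.2f}  "
          f"P(wasted prediction)={p_waste:.2f}  mean latency={mean_latency:.1f} ms")
```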

    Predictive resource allocation in the LTE uplink for event based M2M applications

    For certain event-based M2M applications, it is possible to predict when devices will or may need to send data on the LTE uplink. For example, in a wireless sensor network, the fact that one sensor has triggered may increase the probability that other sensors in the vicinity will also trigger in quick succession. The existing reactive LTE uplink access protocol, in which a device with pending data sends a scheduling request to the eNodeB at its next scheduled opportunity and the eNodeB responds with an uplink grant, can lead to high latencies. This is particularly the case when the system uses a long scheduling request period (up to 80 ms) to support a large number of devices in a cell, which is characteristic of M2M deployments. In this paper, we introduce, analyze, and simulate a new predictive/proactive resource allocation scheme for the LTE uplink for use with event-based M2M applications. In this scheme, when one device in a group sends a scheduling request, the eNodeB identifies neighboring devices in the same group that may benefit from a predictive resource allocation, instead of waiting for those neighbors to send a scheduling request at their next scheduled opportunity. We demonstrate how the minimum uplink latency can be reduced from 6 ms to 5 ms and how the mean uplink latency can be reduced by more than 50% (in certain scenarios) using this method.
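    A minimal sketch of the group-based proactive grant idea described above, with invented class and function names; the real eNodeB scheduler logic (grant sizing, HARQ, interaction with reactive requests) is not modeled here.

```python
from collections import defaultdict

class ProactiveUplinkScheduler:
    """Toy eNodeB-side scheduler that grants a device's group neighbors proactively."""

    def __init__(self):
        self.groups = defaultdict(set)   # group_id -> set of device ids
        self.device_group = {}           # device id -> group_id

    def register(self, device_id, group_id):
        self.groups[group_id].add(device_id)
        self.device_group[device_id] = group_id

    def on_scheduling_request(self, device_id, issue_grant):
        """Grant the requester reactively and its group neighbors proactively."""
        issue_grant(device_id, proactive=False)
        group_id = self.device_group.get(device_id)
        if group_id is None:
            return
        for neighbor in self.groups[group_id] - {device_id}:
            issue_grant(neighbor, proactive=True)

# Example usage with a stub grant function.
if __name__ == "__main__":
    sched = ProactiveUplinkScheduler()
    for dev in ("sensor-1", "sensor-2", "sensor-3"):
        sched.register(dev, group_id="east-field")
    sched.on_scheduling_request(
        "sensor-1",
        issue_grant=lambda dev, proactive: print(f"grant -> {dev} (proactive={proactive})"),
    )
```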

    2D Proactive Uplink Resource Allocation Algorithm for Event Based MTC Applications

    We propose a two-dimensional (2D) proactive uplink resource allocation (2D-PURA) algorithm that aims to reduce the delay/latency in event-based machine-type communications (MTC) applications. Specifically, when an event of interest occurs at a device, it tends to spread to the neighboring devices. Consequently, when a device has data to send to the base station (BS), its neighbors are highly likely to transmit shortly afterwards. Thus, we propose to cluster devices in the neighborhood around the event, also referred to as the disturbance region, into rings based on the distance from the original event. To reduce the uplink latency, we then proactively allocate resources for these rings. To evaluate the proposed algorithm, we analytically derive the mean uplink delay, the proportion of resource conservation due to successful allocations, and the proportion of uplink resource wastage due to unsuccessful allocations for the 2D-PURA algorithm. Numerical results demonstrate that the proposed method can save over 16.5 and 27 percent of mean uplink delay compared with the 1D algorithm and the standard method, respectively. Comment: 6 pages, 6 figures; published in the 2018 IEEE Wireless Communications and Networking Conference (WCNC).
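    The ring-clustering step lends itself to a short illustration. The sketch below groups devices into concentric rings by distance from the device that first reported the event; the ring width and the coordinates are assumed values, not parameters from the paper.

```python
import math

RING_WIDTH_M = 50.0   # assumed ring width around the disturbance region

def cluster_into_rings(event_xy, devices):
    """Return {ring_index: [device_id, ...]} ordered by distance from the event."""
    rings = {}
    ex, ey = event_xy
    for device_id, (x, y) in devices.items():
        distance = math.hypot(x - ex, y - ey)
        ring = int(distance // RING_WIDTH_M)
        rings.setdefault(ring, []).append(device_id)
    return dict(sorted(rings.items()))

if __name__ == "__main__":
    devices = {"d1": (10, 5), "d2": (60, 0), "d3": (120, 40), "d4": (30, 20)}
    # Inner rings would be served with proactive grants before outer ones.
    for ring, members in cluster_into_rings((0.0, 0.0), devices).items():
        print(f"ring {ring}: proactively allocate to {members}")
```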

    Delay models for static and adaptive persistent resource allocations in wireless systems

    A variety of scheduling strategies can be employed in wireless systems to satisfy different system objectives and to cater for different traffic types. Static persistent resource allocations can be employed to transfer small M2M data packets efficiently compared to dynamic packet-by-packet scheduling, even when the M2M traffic model is non-deterministic. Recently, adaptive persistent allocations have been proposed in which the volume of allocated resources can change in step with the instantaneous queue size at the M2M device, without expensive signaling on control channels. This increases the efficiency of resource usage at the expense of a (typically small) increase in packet delay. In this paper, we derive a statistical model for the device queue size and packet delay in static and adaptive persistent allocations which can be used for any arrival process (i.e., Poisson or otherwise). The primary motivation is to assist with the dimensioning of persistent allocations given a set of QoS requirements (such as a prescribed delay budget). We validate the statistical model via comparison with queue size and delay statistics obtained from a discrete event simulation of a persistent allocation system. The validation is performed for both exponential and gamma distributed packet inter-arrivals to demonstrate the generality of the model.
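    As a companion to the validation described above, the following toy script simulates a static persistent allocation with exponential and gamma distributed inter-arrivals and reports empirical delay statistics. The period, grant size, and arrival parameters are illustrative assumptions; the script does not implement the paper's analytical model or the adaptive variant.

```python
import random
from collections import deque

def simulate_persistent_allocation(inter_arrival, period_ms=40.0, packets_per_grant=2,
                                   num_packets=50_000, seed=7):
    """Return (mean delay, max delay) in ms for a static persistent allocation."""
    random.seed(seed)
    arrivals, t = [], 0.0
    for _ in range(num_packets):
        t += inter_arrival()
        arrivals.append(t)

    queue, delays = deque(), []
    next_grant, i = period_ms, 0
    while i < len(arrivals) or queue:
        # Queue every packet that arrives before the next grant instant.
        while i < len(arrivals) and arrivals[i] <= next_grant:
            queue.append(arrivals[i])
            i += 1
        # Serve up to packets_per_grant packets at the grant instant.
        for _ in range(min(packets_per_grant, len(queue))):
            delays.append(next_grant - queue.popleft())
        next_grant += period_ms
    return sum(delays) / len(delays), max(delays)

if __name__ == "__main__":
    generators = {
        "exponential": lambda: random.expovariate(1 / 30.0),    # mean 30 ms inter-arrival
        "gamma":       lambda: random.gammavariate(2.0, 15.0),  # same mean, shape k = 2
    }
    for name, gen in generators.items():
        mean_d, max_d = simulate_persistent_allocation(gen)
        print(f"{name}: mean delay {mean_d:.1f} ms, max delay {max_d:.1f} ms")
```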

    Statistical priority-based uplink scheduling for M2M communications

    Currently, the worldwide network is witnessing major efforts to transform it from the Internet of humans only into the Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with the huge volumes of data they collect from their environments and send to other remote MTCDs for processing, forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation, and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic: a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, such as the need to transmit a huge amount of traffic within certain deadlines using scarce radio resources. This is the main motivation behind this thesis. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term the "statistical priority" of data packets in determining the importance of the information they carry. Statistical priority is calculated based on statistical features of the data such as value similarity, trend similarity, and auto-correlation. These calculations are made by the MTCDs and reported to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on redundant or repetitive data, which is a common situation in M2M communications. To validate our technique, we perform a simulation-based comparison between the main scheduling techniques and our proposed statistical priority-based scheduling technique. The comparison is conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras, and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers, with the lowest loss of alarm data packets and the highest rate of delivering critical data packets that carry non-redundant information for both environmental monitoring and video traffic. This indicates that the proposed technique makes the most efficient use of the limited radio resources compared to the other techniques.
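    The thesis abstract names the statistical features (value similarity, trend similarity, auto-correlation) but not the exact scoring formula, so the sketch below combines them in one plausible but invented way, purely to illustrate how a "statistical priority" score could be computed on a device from two successive measurement windows.

```python
import statistics

def lag1_autocorr(xs):
    """Lag-1 auto-correlation of a window; returns 1.0 for a constant window."""
    mean = statistics.fmean(xs)
    den = sum((x - mean) ** 2 for x in xs)
    if den == 0:
        return 1.0
    num = sum((a - mean) * (b - mean) for a, b in zip(xs, xs[1:]))
    return num / den

def statistical_priority(new_window, prev_window):
    """Higher score = less redundant data = higher scheduling priority (illustrative)."""
    mean_new, mean_prev = statistics.fmean(new_window), statistics.fmean(prev_window)
    value_sim = 1.0 - min(1.0, abs(mean_new - mean_prev) / (abs(mean_prev) + 1e-9))
    trend_new = new_window[-1] - new_window[0]
    trend_prev = prev_window[-1] - prev_window[0]
    trend_sim = 1.0 if (trend_new >= 0) == (trend_prev >= 0) else 0.0
    redundancy = (value_sim + trend_sim + max(0.0, lag1_autocorr(new_window))) / 3.0
    return 1.0 - redundancy

if __name__ == "__main__":
    # A steady temperature sensor produces redundant data; a rising one does not.
    steady = statistical_priority([20.1, 20.0, 20.2, 20.1], [20.0, 20.1, 20.0, 20.2])
    alarm = statistical_priority([20.1, 24.0, 31.5, 40.2], [20.0, 20.1, 20.0, 20.2])
    print(f"steady sensor priority = {steady:.2f}, alarming sensor priority = {alarm:.2f}")
```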

    SymbioCity: Smart Cities for Smarter Networks

    The "Smart City" (SC) concept revolves around the idea of embodying cutting-edge ICT solutions in the very fabric of future cities, in order to offer new and better services to citizens while lowering the city management costs, both in monetary, social, and environmental terms. In this framework, communication technologies are perceived as subservient to the SC services, providing the means to collect and process the data needed to make the services function. In this paper, we propose a new vision in which technology and SC services are designed to take advantage of each other in a symbiotic manner. According to this new paradigm, which we call "SymbioCity", SC services can indeed be exploited to improve the performance of the same communication systems that provide them with data. Suggestive examples of this symbiotic ecosystem are discussed in the paper. The dissertation is then substantiated in a proof-of-concept case study, where we show how the traffic monitoring service provided by the London Smart City initiative can be used to predict the density of users in a certain zone and optimize the cellular service in that area.Comment: 14 pages, submitted for publication to ETT Transactions on Emerging Telecommunications Technologie

    Predictive Pre-allocation for Low-latency Uplink Access in Industrial Wireless Networks

    Driven by mission-critical applications in modern industrial systems, the 5th generation (5G) communication system is expected to provide ultra-reliable low-latency communications (URLLC) services to meet the quality of service (QoS) demands of industrial applications. However, these stringent requirements cannot be guaranteed by its conventional dynamic access scheme due to the complex signaling procedure. A promising solution for reducing the access delay is the pre-allocation scheme based on the semi-persistent scheduling (SPS) technique, which, however, may lead to low spectrum utilization if the allocated resource blocks (RBs) are not used. In this paper, we aim to address this issue by developing DPre, a predictive pre-allocation framework for uplink access scheduling of delay-sensitive applications in industrial process automation. The basic idea of DPre is to explore and exploit the correlation of data acquisition and access behavior between nodes through static and dynamic learning mechanisms in order to make judicious resource pre-allocation decisions. We evaluate the effectiveness of DPre based on several monitoring applications in a steel rolling production process. Simulation results demonstrate that DPre achieves better prediction accuracy, which effectively increases the reward obtained from the reserved resources. Comment: Full version (accepted by INFOCOM 2018).
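    DPre's learning mechanisms are not detailed in the abstract; the sketch below shows one simple way a scheduler could learn which nodes tend to transmit shortly after a given node and pre-allocate resource blocks to them, using an empirical conditional probability and a threshold. The class, data, and threshold are all invented for illustration and are not the actual DPre design.

```python
from collections import defaultdict

class CorrelationPreAllocator:
    """Toy correlation learner: pre-allocate to nodes that usually follow a transmitter."""

    def __init__(self, threshold=0.6):
        self.threshold = threshold
        self.follow_counts = defaultdict(lambda: defaultdict(int))  # i -> j -> count
        self.trigger_counts = defaultdict(int)                      # i -> count

    def observe(self, first_node, followers):
        """Record one access event: first_node transmitted, then `followers` did."""
        self.trigger_counts[first_node] += 1
        for node in followers:
            self.follow_counts[first_node][node] += 1

    def pre_allocate_for(self, first_node):
        """Return nodes whose estimated P(follow | first_node transmits) >= threshold."""
        total = self.trigger_counts[first_node]
        if total == 0:
            return []
        return [node for node, cnt in self.follow_counts[first_node].items()
                if cnt / total >= self.threshold]

if __name__ == "__main__":
    dpre = CorrelationPreAllocator(threshold=0.6)
    # Offline (static) learning phase from logged access traces (illustrative data).
    dpre.observe("temp-1", ["vib-1", "press-1"])
    dpre.observe("temp-1", ["vib-1"])
    dpre.observe("temp-1", ["vib-1", "press-1"])
    # Online decision: temp-1 just accessed the channel; who gets a proactive RB?
    print("pre-allocate RBs to:", dpre.pre_allocate_for("temp-1"))
```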