
    Random Access Procedure for Machine Type Communication in Mobile Networks

    Machine-type communication (MTC) in mobile networks can generate numerous connection requests and bring an explosive load within a small time interval. A massive number of simultaneous random access attempts results in a high collision probability and intolerable access delay, because many devices contend on the shared Random Access Channel (RACH), whose capacity is limited. This thesis therefore proposes a novel mechanism, denoted the Two-Phase Random Access (TPRA) procedure, for MTC in mobile networks to relieve the load on the RACH. The proposed TPRA reduces the probability of collision among MTC devices when they access radio resources by separating the massive number of devices into small groups. The proposed concept allows a base station to adjust the number of additional access channels according to their current load. Furthermore, we propose an analytical model to evaluate the performance of the proposed TPRA in terms of the access success probability and average access delay. The simulation results validate the accuracy of the analytically derived performance metrics. The results further demonstrate that, for a high density of MTC devices, the proposed TPRA improves the access success probability by 9% and reduces the access delay by 50% compared to the standard LTE-A random access procedure.
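    As an illustration of the grouping idea behind such a procedure, the following Monte-Carlo sketch (Python) compares a conventional single-channel preamble contention with a variant that first spreads the contending devices over a few additional access channels. The preamble pool size, group count and device counts below are assumed values for illustration only, not the parameters used in the thesis.

        import random

        def single_phase(num_devices, num_preambles):
            """Baseline RACH: each device picks one preamble; a preamble chosen
            by exactly one device is a successful (collision-free) access."""
            picks = [random.randrange(num_preambles) for _ in range(num_devices)]
            counts = {}
            for p in picks:
                counts[p] = counts.get(p, 0) + 1
            return sum(1 for c in counts.values() if c == 1)

        def grouped_access(num_devices, num_preambles, num_groups):
            """Grouped sketch: devices are first spread over num_groups access
            channels and then contend separately inside each group."""
            groups = [0] * num_groups
            for _ in range(num_devices):
                groups[random.randrange(num_groups)] += 1
            return sum(single_phase(n, num_preambles) for n in groups if n > 0)

        if __name__ == "__main__":
            random.seed(1)
            trials, devices, preambles, groups = 1000, 300, 54, 4
            base = sum(single_phase(devices, preambles) for _ in range(trials)) / trials
            grp = sum(grouped_access(devices, preambles, groups) for _ in range(trials)) / trials
            print(f"average successful accesses, single channel: {base:.1f}")
            print(f"average successful accesses, {groups} groups: {grp:.1f}")

    Splitting the contenders across extra access channels raises the chance that a device is alone on its preamble, which is the effect the analytical model above quantifies through the access success probability and access delay.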

    A Simple Model of MTC in Smart Factories


    Statistical priority-based uplink scheduling for M2M communications

    Currently, the worldwide network is witnessing major efforts to transform it from an Internet of humans only into the Internet of Things (IoT). It is expected that Machine Type Communication Devices (MTCDs) will overwhelm cellular networks with the huge traffic of data they collect from their environments and send to remote MTCDs for processing, forming what is known as Machine-to-Machine (M2M) communications. Long Term Evolution (LTE) and LTE-Advanced (LTE-A) appear to be the best technologies to support M2M communications due to their native IP support. LTE can provide high capacity, flexible radio resource allocation and scalability, which are the required pillars for supporting the expected large numbers of deployed MTCDs. Supporting M2M communications over LTE faces many challenges, including medium access control and the allocation of radio resources among MTCDs. The problem of radio resource allocation, or scheduling, originates from the nature of M2M traffic: a large number of small data packets, with specific deadlines, generated by a potentially massive number of MTCDs. M2M traffic is therefore mostly in the uplink direction, i.e. from MTCDs to the base station (known as the eNB in LTE terminology). These characteristics impose design requirements on M2M scheduling techniques, such as the need to carry a huge amount of traffic within certain deadlines over limited radio resources. This is the main motivation behind this thesis. In this thesis, we introduce a novel M2M scheduling scheme that utilizes what we term "statistical priority" to determine the importance of the information carried by data packets. Statistical priority is calculated based on statistical features of the data such as value similarity, trend similarity and auto-correlation. These calculations are made and then reported by the MTCDs to the serving eNBs along with other reports such as channel state. Statistical priority is then used to assign priorities to data packets so that the scarce radio resources are allocated to the MTCDs that are sending statistically important information. This helps avoid spending limited radio resources on redundant or repetitive data, a common situation in M2M communications. To validate our technique, we perform a simulation-based comparison between the main scheduling techniques and our proposed statistical priority-based scheduling technique. The comparison is conducted in a network that includes different types of MTCDs, such as environmental monitoring sensors, surveillance cameras and alarms. The results show that our proposed statistical priority-based scheduler outperforms the other schedulers, with the lowest loss of alarm data packets and the highest rate of delivering critical data packets that carry non-redundant information, for both environmental monitoring and video traffic. This indicates that the proposed technique makes the most efficient use of the limited radio resources compared to the other techniques.
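    A toy sketch of how a statistical priority score could be computed from the three features named above (value similarity, trend similarity and auto-correlation). The feature definitions, weights and example data are assumptions made for illustration; they are not the formulas used in the thesis.

        import numpy as np

        def statistical_priority(window, previous_window, w=(0.4, 0.3, 0.3)):
            """Toy priority score in [0, 1]: data that resembles what was already
            reported (high value/trend similarity, high auto-correlation) is
            treated as redundant and receives a lower scheduling priority."""
            window = np.asarray(window, dtype=float)
            previous_window = np.asarray(previous_window, dtype=float)
            # Value similarity: closeness of the current mean to the previous mean.
            value_sim = 1.0 / (1.0 + abs(window.mean() - previous_window.mean()))
            # Trend similarity: fraction of first-difference signs that agree.
            trend_sim = np.mean(np.sign(np.diff(window)) == np.sign(np.diff(previous_window)))
            # Lag-1 auto-correlation of the current window (high => predictable).
            x = window - window.mean()
            denom = np.dot(x, x)
            auto_corr = abs(np.dot(x[:-1], x[1:]) / denom) if denom > 0 else 1.0
            redundancy = w[0] * value_sim + w[1] * trend_sim + w[2] * auto_corr
            return 1.0 - redundancy  # higher score = more important to schedule

        # A slowly drifting sensor stream versus one with a sudden rise (alarm-like).
        steady = statistical_priority([20.0, 20.1, 20.2, 20.3], [19.6, 19.7, 19.8, 19.9])
        rising = statistical_priority([20.0, 25.5, 31.2, 36.8], [19.6, 19.7, 19.8, 19.9])
        print(f"steady stream priority: {steady:.2f}, rising stream priority: {rising:.2f}")

    In such a scheme each MTCD would report its score to the serving eNB alongside its channel state report, and the uplink scheduler would rank grants by the score, so that repetitive readings are the first to be deferred when resources run short.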

    Prioritised Random Access Channel Protocols for Delay Critical M2M Communication over Cellular Networks

    With the ever-increasing pace of technological evolution, current and future generation communication systems are geared towards accommodating Machine-to-Machine (M2M) communication as a necessary prerequisite for the Internet of Things (IoT). Machine Type Communication (MTC) can sustain many promising applications by connecting a huge number of devices into one network. Current studies indicate that the number of devices is escalating at a high rate. Consequently, the network becomes congested when this massive number of devices attempts simultaneous connection through the Random Access Channel (RACH), whose capacity is limited. This results in a RACH resource shortage, which can lead to high collision probability and massive access delay. Hence, it is critical to upgrade conventional Random Access (RA) techniques to support a massive number of MTC devices, including Delay-Critical (DC) MTC. This thesis tackles the problem by modelling and optimising the access throughput and access delay of massive random access for M2M communications in Long-Term Evolution (LTE) networks, and investigates the performance of different random access schemes in different scenarios. The study begins with the design and analysis of a group-based 2-step Slotted-Aloha RACH (SA-RACH) scheme considering the coexistence of Human-to-Human (H2H) and M2M communication, the latter of which is categorised into Delay-Critical user equipments (DC-UEs) and Non-Delay-Critical user equipments (NDC-UEs). Next, a novel RACH scheme termed Priority-based Dynamic RACH (PD-RACH) is proposed, which utilises a coded-preamble-based collision probability model. Finally, Machine Learning, a key enabler of IoT, is adopted in the form of a Q-learning based approach, and a learning-assisted Prioritised RACH scheme is developed and investigated to prioritise a specific user group. The performance analysis of these novel RACH schemes shows promising results compared to the conventional RACH.
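    The following sketch illustrates only the last of the three ideas, a learning-assisted prioritisation of a user group, as a stateless (bandit-style) Q-learning table whose action is the number of preambles reserved for the delay-critical group and whose reward counts successful accesses. The state/action/reward design, preamble pool size and load ranges are assumptions for illustration and do not reproduce the thesis's models.

        import random

        PREAMBLES = 54                              # total preambles per RACH opportunity (assumed)
        ACTIONS = list(range(0, PREAMBLES + 1, 6))  # candidate reservations for the DC group

        def successes(num_devices, num_preambles):
            """Devices that picked a preamble nobody else picked succeed."""
            if num_preambles == 0 or num_devices == 0:
                return 0
            picks = [random.randrange(num_preambles) for _ in range(num_devices)]
            return sum(1 for p in set(picks) if picks.count(p) == 1)

        def reward(reserved, dc_load, ndc_load):
            """Weight delay-critical successes more heavily than the rest."""
            return 2 * successes(dc_load, reserved) + successes(ndc_load, PREAMBLES - reserved)

        random.seed(0)
        q = {a: 0.0 for a in ACTIONS}               # one Q-value per reservation level
        alpha, epsilon = 0.1, 0.2
        for episode in range(5000):
            dc_load, ndc_load = random.randint(20, 40), random.randint(80, 120)
            a = random.choice(ACTIONS) if random.random() < epsilon else max(q, key=q.get)
            q[a] += alpha * (reward(a, dc_load, ndc_load) - q[a])

        print("learned reservation for delay-critical devices:", max(q, key=q.get), "preambles")

    Over the episodes the table gravitates towards the reservation level that best trades DC-UE successes against NDC-UE successes under the assumed loads; a full scheme would add state (e.g. observed load) and a discounted update.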

    Optimization of Mobility Parameters using Fuzzy Logic and Reinforcement Learning in Self-Organizing Networks

    In this thesis, several optimization techniques for next-generation wireless networks are proposed to solve different problems in the field of Self-Organizing Networks and heterogeneous networks. The common basis of these problems is that network parameters are automatically tuned to deal with the specific problem. As the set of network parameters is extremely large, this work mainly focuses on parameters involved in mobility management. The proposed self-tuning schemes are based on Fuzzy Logic Controllers (FLC), whose potential lies in their capability to express knowledge in a way similar to human perception and reasoning. In those cases in which a mathematical approach has been required to optimize the behavior of the FLC, the selected solution has been Reinforcement Learning, since this methodology is especially appropriate for learning from interaction, which becomes essential in complex systems such as wireless networks. Taking this into account, firstly, a new Mobility Load Balancing (MLB) scheme is proposed to solve persistent congestion problems in next-generation wireless networks, in particular those due to an uneven spatial traffic distribution, which typically leads to inefficient usage of resources. A key feature of the proposed algorithm is that not only the parameters but also the parameter tuning strategy is optimized. Secondly, a novel MLB algorithm for enterprise femtocell scenarios is proposed. Such scenarios are characterized by deployments of these low-cost nodes that are not thoroughly planned, meaning that a more efficient use of radio resources can be achieved by applying effective MLB schemes. As in the previous problem, the optimization of the self-tuning process is also studied in this case. Thirdly, a new self-tuning algorithm for Mobility Robustness Optimization (MRO) is proposed. This study includes the impact of context factors such as system load and user speed, as well as a proposal for coordination between the designed MLB and MRO functions. Fourthly, a novel self-tuning algorithm for Traffic Steering (TS) in heterogeneous networks is proposed. The main features of the proposed algorithm are its flexibility to support different operator policies and its capability to adapt to network variations. Finally, with the aim of validating the proposed techniques, a dynamic system-level simulator for Long-Term Evolution (LTE) networks has been designed.
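    A minimal sketch of the kind of fuzzy-rule-based self-tuning described above, applied to a handover margin for load balancing. The membership functions, rule base, step sizes and clamping range are invented for illustration and are not the controllers designed in the thesis.

        def triangular(x, a, b, c):
            """Triangular membership function peaking at b."""
            if x <= a or x >= c:
                return 0.0
            return (x - a) / (b - a) if x < b else (c - x) / (c - b)

        def tune_handover_margin(margin_db, cell_load, neighbour_load):
            """One fuzzy tuning step: if this cell is much more loaded than its
            neighbour, lower the margin so edge users hand over earlier, and
            raise it in the opposite case."""
            diff = cell_load - neighbour_load                    # roughly in [-1, 1]
            # Fuzzify the load difference.
            much_lower  = triangular(diff, -1.5, -1.0, 0.0)
            balanced    = triangular(diff, -0.5,  0.0, 0.5)
            much_higher = triangular(diff,  0.0,  1.0, 1.5)
            # Rules: much_higher -> decrease margin, balanced -> keep, much_lower -> increase.
            weight = much_lower + balanced + much_higher
            delta_db = (much_lower * 1.0 + much_higher * -1.0) / weight if weight else 0.0
            return min(max(margin_db + delta_db, -6.0), 6.0)     # clamp to a plausible range

        margin = 3.0
        for loads in [(0.9, 0.3), (0.9, 0.4), (0.5, 0.5)]:
            margin = tune_handover_margin(margin, *loads)
            print(f"cell/neighbour loads {loads} -> handover margin {margin:+.1f} dB")

    The Reinforcement Learning component mentioned above would then adjust parts of such a controller, for example the rule outputs or the membership function breakpoints, based on the observed effect of each tuning step; that learning loop is omitted from this sketch.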