Neural network design for intelligent mobile network optimisation
This thesis was submitted for the award of Doctor of Philosophy and was awarded by Brunel University London.

Mobile network users' demand for data services is increasing exponentially, due to two main factors: the evolution of smartphones and their applications, and emerging technologies such as the Internet of Things and smart cities, which keep pumping more data into the network, even though most of the data routed in current mobile networks is non-live data. This growing demand compels mobile network operators to keep improving their networks, either by adding hardware, increasing resources, or a combination of both. Radio resources are strictly limited by spectrum licensing and availability, so efficient spectrum utilization is a major goal for both network operators and developers. Simultaneous and multiple channel access, and adding more cells to the network, are common ways to increase the data exchanged between network nodes. The current 4G mobile system is based on Orthogonal Frequency Division Multiple Access (OFDMA) for medium access, and intercell interference degrades link quality at the cell edge; with the introduction of the heterogeneity concept to LTE in Release 10 of the 3GPP specifications, the handover process became even more complex. To mitigate intercell interference at the cell edge, coordinated multipoint and carrier aggregation techniques are utilized for dual connectivity. This work focuses on designing and proposing features to improve network performance and sustainability: distributing small cells for data-only transmission, evaluating the performance of handover schemes at the cell edge with dual connectivity, and applying Artificial Intelligence for load balancing and prediction.
In the proposed model design, the data and control of the Small eNodeB (SeNodeB) are processed at the network edge using a Mobile Edge Computing (MEC) server, and the SeNodeBs are used to boost the services provided to users. The concept of data caching has also been investigated, with caching units implemented at different network levels. The proposed system and its resource management are simulated using the OPNET Modeler and evaluated through multiple scenarios, with and without full load. The UE is reconfigured to accommodate dual connectivity, with separate connections for uplink and downlink: the uplink connection to the Macro cell is maintained, while the downlink is dedicated to small cells when content is requested from the cache. The results clearly show that the proposed system decreases latency, while the total throughput delivered by the network improves substantially when SeNodeBs are deployed. Rising throughput increases overall capacity, which in turn allows better services for existing users, or more users to join and benefit from the network. Handover improvement is also considered in this work; with the help of two Artificial Intelligence (AI) entities, better handover performance is achieved. A balanced load over the SeNodeBs results in less frequent handovers. The proposed load balancer is based on an artificial neural network clustering model with a self-organizing map as a hidden layer; it is trained to forecast network conditions and learns to reduce the number of handovers, especially for UEs at the cell edge, by performing only the necessary ones and avoiding handovers to the Macro cell in the downlink direction. The examined handovers concern downlinks routing non-live video stored in the small cells' caches, and a reduction in frequent handovers was achieved when running the balancer.
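The load balancer above clusters network conditions with a self-organizing map. As a rough illustration only (not the thesis implementation, and with purely synthetic data), a minimal SOM training loop over per-UE load vectors might look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "network condition" samples: per-UE feature vectors,
# e.g. observed load on 3 nearby SeNodeBs (illustrative data only).
samples = rng.random((200, 3))

# 4x4 SOM grid of weight vectors living in the same feature space.
grid_w, grid_h, dim = 4, 4, 3
weights = rng.random((grid_w * grid_h, dim))
coords = np.array([(x, y) for x in range(grid_w) for y in range(grid_h)], float)

def best_matching_unit(w, v):
    """Index of the SOM node closest to sample v."""
    return int(np.argmin(np.linalg.norm(w - v, axis=1)))

for t, v in enumerate(samples):
    lr = 0.5 * np.exp(-t / len(samples))       # decaying learning rate
    radius = 2.0 * np.exp(-t / len(samples))   # shrinking neighbourhood
    bmu = best_matching_unit(weights, v)
    grid_dist = np.linalg.norm(coords - coords[bmu], axis=1)
    influence = np.exp(-(grid_dist ** 2) / (2 * radius ** 2))
    weights += lr * influence[:, None] * (v - weights)

# Each sample maps to a cluster (its BMU); cells with similar load
# profiles could then inform handover and load-balancing decisions.
clusters = [best_matching_unit(weights, v) for v in samples]
```

In a real deployment the feature vectors, grid size, and decay schedules would be tuned to the network being modelled; the sketch only shows the clustering mechanism itself.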
Staying in the handover orbit, another way to preserve and utilize network resources is to predict handovers before they occur and pre-allocate the required data at the target SeNodeB. The predictor entity in the proposed system architecture combines the features of a Radial Basis Function neural network and a neural network time-series tool to create and update a prediction list from the system's collected data, learning to predict the next SeNodeB to associate with. The prediction entity is simulated using MATLAB, and the results show that the system was able to deliver up to 92% correct handover predictions, which led to an overall throughput improvement of 75%.
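The thesis uses MATLAB tooling; purely as an illustration of the RBF idea (not the author's model), the sketch below trains a toy RBF predictor on a hypothetical mobility trace: Gaussian basis functions centred on past association windows, with a least-squares linear readout predicting the next SeNodeB ID.

```python
import numpy as np

# Toy mobility trace: a UE cycling through SeNodeB IDs 0..3
# (a stand-in for real association logs).
trace = [i % 4 for i in range(100)]

# Training pairs: window of the last 3 visited cells -> next cell ID.
X = np.array([trace[i:i + 3] for i in range(len(trace) - 3)], float)
y = np.array([trace[i + 3] for i in range(len(trace) - 3)])

# RBF hidden layer: Gaussian activations around a subset of windows.
centers = X[::7]
gamma = 1.0

def rbf_features(X):
    """Gaussian kernel activations of each window against the centers."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# One-hot targets and least-squares readout weights.
T = np.eye(4)[y]
Phi = rbf_features(X)
W, *_ = np.linalg.lstsq(Phi, T, rcond=None)

pred = rbf_features(X) @ W
accuracy = (pred.argmax(1) == y).mean()
```

On this deterministic trace the readout fits exactly; real traces would need held-out evaluation and online updates, which is where the time-series component of the thesis comes in.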
Applications of the Internet of Medical Things to Type 1 Diabetes Mellitus
Type 1 Diabetes Mellitus (DM1) is a condition of the metabolism typified by persistent hyperglycemia as a result of insufficient pancreatic insulin synthesis. This requires patients to be aware of their blood glucose level oscillations every day in order to deduce a pattern, anticipate future glycemia, and hence decide the amount of insulin that must be exogenously injected to keep glycemia within the target range. This approach often suffers from relatively high imprecision, which can be dangerous. Nevertheless, current developments in Information and Communication Technologies (ICT) and innovative sensors for biological signals that might enable a continuous, complete assessment of the patient's health provide a fresh viewpoint on treating DM1. With this, we observe that current biomonitoring devices and Continuous Glucose Monitoring (CGM) units can easily obtain data that allow us to know at all times the state of glycemia and the other variables that influence its oscillations. A complete review has been made of the variables that influence glycemia in a DM1 patient and that can be measured by the above means. The communication systems necessary to transfer the collected information to a more powerful computational environment, which can adequately handle the amounts of data collected, have also been described. From this point, intelligent data analysis extracts knowledge from the data and allows predictions to be made in order to anticipate risk situations. With all of the above, it is necessary to build a holistic proposal that allows the complete and smart management of DM1. This review also assesses the shortage of such proposals and the obstacles that future intelligent IoMT-DM1 management systems must surmount.
Lastly, we provide an outline of a comprehensive IoMT-based proposal for DM1 management that aims to address the limits of prior studies while also using the disruptive technologies highlighted before. Partial funding for open access charge: Universidad de Málaga.
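The prediction step described above can take many forms; as a deliberately naive illustration (not a clinically validated method, and with hypothetical CGM values), a short-horizon forecast could simply extrapolate the recent rate of change of the glucose signal:

```python
# Illustrative only: linear extrapolation of recent CGM samples.
# Real systems use far richer models (meals, insulin on board, activity).

def forecast_glucose(readings_mgdl, step_min=5, horizon_min=30):
    """Extrapolate the mean rate of change of the recent readings."""
    deltas = [b - a for a, b in zip(readings_mgdl, readings_mgdl[1:])]
    rate_per_min = (sum(deltas) / len(deltas)) / step_min
    return readings_mgdl[-1] + rate_per_min * horizon_min

# CGM samples every 5 minutes showing a rising trend (hypothetical).
cgm = [110, 114, 119, 123, 128]
predicted = forecast_glucose(cgm)  # ~155 mg/dL in 30 minutes
```

A forecast like this crossing a hyperglycemia threshold is exactly the kind of risk situation the intelligent analysis layer is meant to anticipate.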
Internet of Things 2.0: Concepts, Applications, and Future Directions
Applications and technologies of the Internet of Things are in high demand with the increase of network devices. With the development of technologies such as 5G, machine learning, edge computing, and Industry 4.0, the Internet of Things has evolved. This survey article discusses the evolution of the Internet of Things and presents the vision for Internet of Things 2.0. The development of Internet of Things 2.0 is discussed across seven major fields: machine learning intelligence, mission-critical communication, scalability, energy harvesting-based energy sustainability, interoperability, user-friendly IoT, and security. Beyond these major fields, the architectural development of the Internet of Things and the major types of applications are also reviewed. Finally, the article ends with the vision and current limitations of the Internet of Things in future network environments.
Intelligent Sensor Networks
In the last decade, wireless and wired sensor networks have attracted much attention. However, most designs target general sensor network issues, including the protocol stack (routing, MAC, etc.) and security. This book focuses on the close integration of sensing, networking, and smart signal processing via machine learning. Based on their world-class research, the authors present the fundamentals of intelligent sensor networks. They cover sensing and sampling, distributed signal processing, and intelligent signal learning. In addition, they present cutting-edge research results from leading experts.
Channel Access in Wireless Networks: Protocol Design of Energy-Aware Schemes for the IoT and Analysis of Existing Technologies
The design of channel access policies has been an object of study since the deployment of the first wireless networks, as the Medium Access Control (MAC) layer is responsible for coordinating transmissions to a shared channel and plays a key role in the network performance. While the original target was the system throughput, over the years the focus switched to communication latency, Quality of Service (QoS) guarantees, energy consumption, spectrum efficiency, and any combination of such goals.
The basic mechanisms for using a shared channel, such as ALOHA and TDMA- and FDMA-based policies, were introduced decades ago. Nonetheless, the continuous evolution of wireless networks and the emergence of new communication paradigms demand the development of new strategies to adapt and optimize the standard approaches so as to satisfy the requirements of applications and devices.
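For the classical schemes mentioned above, the throughput is well characterised analytically: slotted ALOHA with Poisson offered load G achieves S = G·e^(−G) successful transmissions per slot, peaking at G = 1 with S = 1/e ≈ 0.368. A few lines suffice to evaluate the curve:

```python
import math

def slotted_aloha_throughput(G):
    """Expected successful transmissions per slot at offered load G."""
    return G * math.exp(-G)

# Throughput rises up to G = 1, then collisions dominate.
for G in (0.5, 1.0, 2.0):
    print(f"G={G}: S={slotted_aloha_throughput(G):.3f}")

peak = slotted_aloha_throughput(1.0)  # 1/e, about 0.368
```

This hard ceiling on uncoordinated access is precisely what motivates the adapted and optimized strategies the thesis develops.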
This thesis proposes several channel access schemes for novel wireless technologies, in particular Internet of Things (IoT) networks, the Long-Term Evolution (LTE) cellular standard, and mmWave communication with the IEEE802.11ad standard.
The first part of the thesis concerns energy-aware channel access policies for IoT networks, which typically include several battery-powered sensors.
In scenarios with energy restrictions, traditional protocols that do not consider energy consumption may lead to the premature death of the network and unreliable performance expectations. The proposed schemes show the importance of accurately characterizing all the sources of energy consumption (and inflow, in the case of energy harvesting), which need to be included in the protocol design. In particular, the schemes presented in this thesis exploit data processing and compression techniques to trade off QoS for lifetime. We investigate contention-free and contention-based channel access policies for different scenarios and application requirements.
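The QoS-for-lifetime trade-off can be made concrete with a back-of-envelope energy budget. All figures below are purely illustrative assumptions, not values from the thesis; the point is only that compression pays off when the CPU energy it costs is smaller than the radio energy it saves:

```python
# Hypothetical per-node energy budget for a battery-powered sensor.
E_TX_PER_BIT = 200e-9    # J per transmitted bit (assumed radio cost)
E_CPU_PER_BIT = 5e-9     # J per input bit to run the compressor (assumed)
BITS_PER_REPORT = 8_000  # raw payload per sensing report
BATTERY_J = 50.0         # usable battery energy

def reports_until_death(compression_ratio):
    """Number of reports the node can send before draining the battery."""
    cpu = E_CPU_PER_BIT * BITS_PER_REPORT if compression_ratio < 1.0 else 0.0
    tx = E_TX_PER_BIT * BITS_PER_REPORT * compression_ratio
    return int(BATTERY_J / (cpu + tx))

baseline = reports_until_death(1.0)    # send raw data
compressed = reports_until_death(0.5)  # halve the payload (a QoS cost)
```

Under these assumed costs, halving the payload nearly doubles the node's lifetime, which is the kind of accounting the proposed schemes build directly into the protocol design.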
While the energy-aware schemes proposed for IoT networks are based on a clean-slate approach that is agnostic of the communication technology used, the second part of the thesis is focused on the LTE and IEEE802.11ad standards.
As regards LTE, the study proposed in this thesis shows how to use machine-learning techniques to infer the collision multiplicity in the channel access phase, information that can be used to detect when the network is congested and to improve the contention resolution mechanism. This is especially useful for massive access scenarios; in recent years, in fact, the research community has been investigating the use of LTE for Machine-Type Communication (MTC).
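The quantity being inferred here, collision multiplicity, is the number of devices that picked the same random access preamble in a slot. A small Monte Carlo sketch (with an assumed configuration of 54 contention-based preambles, a common LTE setting, and an arbitrary device count) shows what its distribution looks like:

```python
import random
from collections import Counter

random.seed(42)

N_PREAMBLES = 54   # contention-based preambles (assumed RACH config)
N_DEVICES = 30     # simultaneous access attempts (arbitrary load)
N_TRIALS = 10_000

def collided_preambles(n_devices):
    """Multiplicities (>= 2) of preambles chosen by more than one device."""
    picks = Counter(random.randrange(N_PREAMBLES) for _ in range(n_devices))
    return [m for m in picks.values() if m >= 2]

# Empirical distribution of collision multiplicities across trials.
multiplicity_counts = Counter()
for _ in range(N_TRIALS):
    for m in collided_preambles(N_DEVICES):
        multiplicity_counts[m] += 1
```

At moderate load most collisions involve exactly two devices; estimating this multiplicity from the received signal, as the thesis does with machine learning, tells the eNodeB how congested the access phase actually is.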
The IEEE802.11ad standard, instead, provides a hybrid MAC layer with contention-based and contention-free scheduled allocations, and a dynamic channel time allocation mechanism built on top of that schedule. Although this hybrid scheme is expected to meet heterogeneous requirements, it is still not clear how to build a schedule from the various traffic flows and their demands. A mathematical model is necessary to understand the performance and limits of the possible types of allocations and to guide the scheduling process. In this thesis, we propose a model for the contention-based access periods that is aware of the interleaving of the available channel time with contention-free allocations.
12th EASN International Conference on "Innovation in Aviation & Space for opening New Horizons"
Epoxy resins show a combination of thermal stability, good mechanical performance, and durability that makes these materials suitable for many applications in the aerospace industry. Different types of curing agents can be utilized for curing epoxy systems. The use of aliphatic amines as curing agents is preferable over the toxic aromatic ones, though their incorporation increases the flammability of the resin. Recently, we have developed different hybrid strategies, in which the sol-gel technique has been exploited in combination with two DOPO-based flame retardants and other synergists, or with humic acid and ammonium polyphosphate, to achieve non-dripping V-0 classification in UL 94 vertical flame spread tests at low phosphorus loadings (e.g., 1-2 wt%). These strategies improved the flame retardancy of the epoxy matrix without any detrimental impact on the mechanical and thermal properties of the composites. Finally, the formation of a hybrid silica-epoxy network accounted for the establishment of tailored interphases, due to a better dispersion of the more polar additives in the hydrophobic resin.