Application of machine learning for energy efficiency in mobile networks
Future generation networks (5G) will bring a new paradigm to network management, as the networks themselves will undergo evident changes that imply new requirements in upper layers. The 5G-XHaul project, framed under the Horizon 2020 European research and innovation programme, focuses on providing dynamically reconfigurable optical-wireless backhaul and fronthaul architectures with a cognitive control plane for small cells and cloud-RANs. One of the objectives under that premise is the design of new network management strategies for mobile networks, to which this thesis contributes. Making use of new technologies and techniques, we can deploy a multi-tier network with a lower layer of small cell deployments managed by a dynamic system that can automatically perform certain operations over the network. Machine Learning, an increasing trend in this field, can help with this process by making use of the data collected from the network to obtain useful knowledge and create predictive models that can tell us the state of the network in the near future.

For the development of this project, we have collaborated with COSMOTE, one of the main telecommunications companies in Greece, which provided us with several data sets from a real network deployment in the centre of Athens. With these data, several predictive models have been created to predict the state of the network during certain time intervals and act accordingly. Many different applications can be found for those models, and one that is a hot topic nowadays is energy efficiency. In that field, the prediction models were used to create a dynamic system that turns cells on and off, depending on the expected traffic, in order to achieve notable energy savings. Finally, a simulation environment was developed, based on the real traces from the COSMOTE network, to test the proposed network management techniques in a large number of different scenarios. This simulator generates realistic random scenarios from which several statistics can be extracted, with the aim of measuring the performance of the algorithms developed during the earlier stages of the project.

Working with different tools and environments, this project studies the best data analysis and Machine Learning techniques for network usage data. From those data, prediction models are created, which can be used for many different applications; the one chosen for this thesis is the design of an energy-efficient management system for dense small cell deployments. Finally, results are collected, and the validity of the proposed strategies is proven.
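As a rough illustration of the kind of pipeline described above, the following Python sketch trains a per-cell traffic predictor and applies a simple on/off policy. The synthetic data, the feature set, and the 10% load threshold are illustrative assumptions, not values from the thesis.

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
hours = np.arange(24 * 28)                    # four weeks of hourly samples
rows = []
for cell in range(5):
    # a daily load profile plus noise stands in for the real traces
    load = (0.5 + 0.4 * np.sin(2 * np.pi * (hours % 24) / 24)
            + rng.normal(0, 0.05, hours.size))
    rows.append(pd.DataFrame({"cell": cell, "hour": hours % 24,
                              "load": load.clip(0, 1)}))
df = pd.concat(rows, ignore_index=True)
df["load_next"] = df.groupby("cell")["load"].shift(-1)   # prediction target
df = df.dropna()

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(df[["cell", "hour", "load"]], df["load_next"])

# Policy: switch a small cell off when its predicted load is below 10%
# of capacity, letting the macro layer absorb the residual traffic.
latest = df.groupby("cell").tail(1)
pred = model.predict(latest[["cell", "hour", "load"]])
for cell, p in zip(latest["cell"], pred):
    print(f"cell {cell}: predicted load {p:.2f} ->",
          "OFF" if p < 0.10 else "ON")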
AN EFFICIENT INTERFERENCE AVOIDANCE SCHEME FOR DEVICE-TO-DEVICE ENABLED FIFTH GENERATION NARROWBAND INTERNET OF THINGS NETWORKS
Narrowband Internet of Things (NB-IoT) is a low-power wide-area (LPWA) technology built on long-term evolution (LTE) functionalities and standardized by the 3rd Generation Partnership Project (3GPP). Due to its support for massive machine-type communication (mMTC) and for diverse IoT use cases with rigorous requirements in terms of connectivity, energy efficiency, reachability, reliability, and latency, NB-IoT has attracted the research community. However, as the capacity needs of various IoT use cases expand, the numerous functionalities of the LTE evolved packet core (EPC) system may become overburdened and suboptimal. Several research efforts are currently in progress to address these challenges. Accordingly, this thesis discusses an overview of these efforts, with a specific focus on the optimized architecture of the LTE EPC functionalities, the 5G architectural design for NB-IoT integration, the enabling technologies necessary for 5G NB-IoT, the coexistence of 5G new radio (NR) with NB-IoT, and feasible architectural schemes for deploying NB-IoT with cellular networks. This thesis also presents cloud-assisted relaying with backscatter communication as part of a detailed study of the technical performance attributes and channel communication characteristics of the NB-IoT physical (PHY) and medium access control (MAC) layers, with a focus on 5G. The numerous drawbacks that come with simulating these systems are explored. The enabling market for NB-IoT, the benefits for a few use cases, and the potential critical challenges associated with their deployment are all highlighted.

Fortunately, the cyclic prefix orthogonal frequency division multiplexing (CP-OFDM) waveform adopted by 3GPP NR for enhanced mobile broadband (eMBB) services does not prohibit the use of other waveforms in other services, such as the NB-IoT service for mMTC. Consequently, the coexistence of 5G NR and NB-IoT must be kept orthogonal (or quasi-orthogonal) to minimize mutual interference, which constrains the degrees of freedom in the waveform's overall design. Coexistence with 5G will therefore introduce a new interference challenge, distinct from that of the legacy network, even though NR's coexistence with NB-IoT is expected to improve network capacity, expand the coverage of the user data rate, and strengthen communication robustness through frequency reuse. Interference may make channel estimation difficult for NB-IoT devices, limiting user performance and spectral efficiency. Various existing interference mitigation solutions either add to the network's overhead, computational complexity, and delay, or are hampered by low data rates and coverage. These algorithms are unsuitable for an NB-IoT network owing to the low-complexity nature of its devices. As a result, a device-to-device (D2D) communication-based interference-control technique becomes an effective strategy for addressing this problem.
This thesis uses D2D communication to relieve the network bottleneck in dense 5G NB-IoT networks prone to interference. For D2D-enabled 5G NB-IoT systems, the thesis presents an interference-avoidance resource allocation scheme that considers the less favourable cell-edge NUEs. To reduce computational complexity and interference power, the scheme divides the optimization problem into three sub-problems. First, in an orthogonal deployment using channel state information (CSI), the channel gain factor is leveraged by selecting a probable reuse channel with higher QoS control. Second, a bisection search is used to find the power control that maximizes the network sum rate. Third, the Hungarian algorithm is used to build a maximum bipartite matching that chooses the optimal pairing pattern between the sets of NUEs and D2D pairs. According to the numerical results, the proposed approach improves the D2D sum rate and the overall network signal-to-interference-plus-noise ratio (SINR) of the 5G NB-IoT system. The maximum power constraint of the D2D pair, the D2D pair's location, the pico base station (PBS) cell radius, the number of potential reuse channels, and the cluster distance all affect the D2D pair's performance. In simulation, the scheme achieves SINR performance 28.35%, 31.33%, and 39% higher than the ARSAD, DCORA, and RRA algorithms when the number of NUEs is twice the number of D2D pairs, and 2.52%, 14.80%, and 39.89% higher than the ARSAD, RRA, and DCORA when the numbers of NUEs and D2D pairs are equal. Likewise, the D2D sum rate increases by 9.23%, 11.26%, and 13.92% over the ARSAD, DCORA, and RRA when the number of NUEs is twice the number of D2D pairs, and by 1.18%, 4.64%, and 15.93% over the ARSAD, RRA, and DCORA, respectively, with equal numbers of NUEs and D2D pairs. These results demonstrate the efficacy of the proposed scheme.
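The following Python sketch illustrates the shape of this three-step allocation under a deliberately simplified model; the gains, the interference cap, and the placeholder rate formula are assumptions for demonstration, and the bisection here enforces a simple interference cap rather than the thesis's exact sum-rate power search.

import numpy as np
from scipy.optimize import linear_sum_assignment

rng = np.random.default_rng(0)
n_d2d, n_chan = 4, 6
g_d2d = rng.rayleigh(1.0, (n_d2d, n_chan))   # D2D link gains per channel
g_int = rng.rayleigh(0.2, (n_d2d, n_chan))   # interference gains toward NUEs
NOISE, P_MAX, I_MAX = 1e-3, 1.0, 0.05

def best_power(d, c):
    """Bisection: largest power <= P_MAX meeting interference cap I_MAX."""
    lo, hi = 0.0, P_MAX
    for _ in range(50):
        mid = (lo + hi) / 2
        if mid * g_int[d, c] <= I_MAX:
            lo = mid
        else:
            hi = mid
    return lo

# Steps 1-2: per (pair, channel), find an admissible power and the rate.
rate = np.zeros((n_d2d, n_chan))
for d in range(n_d2d):
    for c in range(n_chan):
        p = best_power(d, c)
        rate[d, c] = np.log2(1 + p * g_d2d[d, c] / NOISE)

# Step 3: the Hungarian algorithm gives the max-sum-rate bipartite matching.
rows, cols = linear_sum_assignment(-rate)    # negate to maximize
for d, c in zip(rows, cols):
    print(f"D2D pair {d} -> channel {c}, rate {rate[d, c]:.2f} b/s/Hz")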
The thesis also addresses the case in which the cell-edge NUE's QoS is undermined by challenges such as long-distance transmission, delay, low bandwidth utilization, and high system overhead, all of which degrade 5G NB-IoT network performance. In this case, most cell-edge NUEs boost their transmit power to maximize network throughput. Integrating a cooperative D2D relaying technique into 5G NB-IoT heterogeneous network (HetNet) uplink spectrum sharing increases the system's spectral efficiency but also the interference power, further degrading the network. Using a max-max SINR (Max-SINR) approach, this thesis proposes an interference-aware D2D relaying strategy for improving the QoS of a cell-edge NUE in 5G NB-IoT so as to achieve optimal system performance. The Lagrangian-dual technique is used to optimize the transmit power from the cell-edge NUE to the relay under an average interference power constraint, while the relay employs a fixed transmit power towards the NB-IoT base station (NBS). To choose an optimal D2D relay node, the channel-to-interference-plus-noise ratio (CINR) of all available D2D relays is used to maximize the minimum cell-edge NUE data rate while ensuring that the cellular NUEs' QoS requirements are satisfied. Best harmonic mean, best-worst, and half-duplex relay selection, as well as a direct D2D communication scheme, were among the other relay selection strategies studied. The simulation results reveal that, owing to the high channel gain between the two communicating devices, the Max-SINR selection scheme outperforms all other selection schemes except the D2D communication scheme. The proposed algorithm achieves 21.27% SINR performance, nearly identical to the half-duplex scheme, but outperforms the best-worst and harmonic selection techniques by 81.27% and 40.29%, respectively. Accordingly, as the number of D2D relays increases, the capacity increases by 14.10% and 47.19% over the harmonic and half-duplex techniques, respectively. Finally, the thesis presents future research on interference control, together with open research directions on PHY and MAC properties and a SWOT (Strengths, Weaknesses, Opportunities, and Threats) analysis in Chapter 2, to encourage further study of 5G NB-IoT.
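As a companion illustration, the sketch below selects a relay by the bottleneck of the two hop CINRs, in the spirit of the Max-SINR criterion described above; all gains, powers, and the interference term are illustrative placeholders, not the thesis's parameters.

import numpy as np

rng = np.random.default_rng(1)
n_relays = 8
P_NUE, P_RELAY, NOISE = 0.2, 0.5, 1e-3

g_hop1 = rng.rayleigh(1.0, n_relays)        # NUE -> relay channel gains
g_hop2 = rng.rayleigh(1.0, n_relays)        # relay -> NBS channel gains
interf = rng.uniform(0.0, 0.01, n_relays)   # interference seen at each relay

# CINR of each hop; a two-hop link is limited by its weaker hop.
cinr1 = P_NUE * g_hop1 / (interf + NOISE)
cinr2 = P_RELAY * g_hop2 / NOISE
bottleneck = np.minimum(cinr1, cinr2)

best = int(np.argmax(bottleneck))           # Max-SINR style selection
rate = 0.5 * np.log2(1 + bottleneck[best])  # 1/2 for two-hop relaying
print(f"selected relay {best}, end-to-end rate {rate:.2f} b/s/Hz")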
Control of Energy Storage
Energy storage can provide numerous beneficial services and cost savings within the electricity grid, especially when facing future challenges like renewable and electric vehicle (EV) integration. Public bodies, private companies, and individuals are deploying storage facilities for several purposes, including arbitrage, grid support, renewable generation, and demand-side management. Storage deployment can therefore yield benefits like reduced frequency fluctuation, better asset utilisation, and more predictable power profiles. Such uses of energy storage can reduce the cost of energy, reduce the strain on the grid, reduce the environmental impact of energy use, and prepare the network for future challenges. This Special Issue of Energies explores the latest developments in the control of energy storage in support of the wider energy network, focusing on the control of storage rather than on the storage technology itself.
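As a toy illustration of one such control task, the arbitrage use mentioned above can be sketched as a simple threshold policy; the prices, battery parameters, and thresholds below are invented for demonstration and do not come from any paper in the Issue.

# A battery charges when electricity is cheap and discharges when it
# is expensive; profit accumulates from the price spread.
prices = [30, 25, 20, 22, 45, 60, 55, 35]  # $/MWh over 8 hours (made up)
CAPACITY, RATE, EFF = 10.0, 2.0, 0.9       # MWh, MW, round-trip efficiency

soc, profit = 0.0, 0.0
for p in prices:
    if p < 28 and soc < CAPACITY:          # cheap hour: charge
        e = min(RATE, CAPACITY - soc)
        soc += e
        profit -= p * e
    elif p > 50 and soc > 0:               # expensive hour: discharge
        e = min(RATE, soc)
        soc -= e
        profit += p * e * EFF
print(f"final SoC {soc:.1f} MWh, profit ${profit:.0f}")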
Using MapReduce Streaming for Distributed Life Simulation on the Cloud
Distributed software simulations are indispensable in the study of large-scale life models but often require the use of technically complex lower-level distributed computing frameworks, such as MPI. We propose to overcome the complexity challenge by applying the emerging MapReduce (MR) model to distributed life simulations and by running such simulations on the cloud. Technically, we design optimized MR streaming algorithms for discrete and continuous versions of Conway’s life according to a general MR streaming pattern. We chose life because it is simple enough to serve as a testbed for MR’s applicability to a-life simulations and general enough to make our results applicable to various lattice-based a-life models. We implement and empirically evaluate our algorithms’ performance on Amazon’s Elastic MR cloud. Our experiments demonstrate that a single MR optimization technique called strip partitioning can reduce the execution time of continuous life simulations by 64%. To the best of our knowledge, we are the first to propose and evaluate MR streaming algorithms for lattice-based simulations. Our algorithms can serve as prototypes in the development of novel MR simulation algorithms for large-scale lattice-based a-life models.
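To make the MR streaming pattern concrete, here is a minimal Python sketch of one generation of Conway's life in Hadoop Streaming style (a mapper and a reducer reading stdin and writing stdout). This is not the authors' implementation: the key format is an assumption, the reducer buffers in memory for brevity, and the strip-partitioning optimization they evaluate would further group rows into strips to cut shuffle volume.

import sys
from collections import defaultdict

def mapper():
    """For each live cell 'x y' on stdin, emit the cell itself and a
    neighbour count of 1 to each of its eight neighbours."""
    for line in sys.stdin:
        if not line.strip():
            continue
        x, y = map(int, line.split())
        print(f"{x},{y}\tLIVE")
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                if dx or dy:
                    print(f"{x + dx},{y + dy}\t1")

def reducer():
    """Apply the B3/S23 rule per cell; keys arrive grouped after the
    shuffle, but a dict keeps this sketch simple."""
    cells = defaultdict(lambda: [0, False])   # key -> [neighbours, alive]
    for line in sys.stdin:
        if not line.strip():
            continue
        key, val = line.rstrip("\n").split("\t")
        if val == "LIVE":
            cells[key][1] = True
        else:
            cells[key][0] += int(val)
    for key, (n, alive) in cells.items():
        if n == 3 or (alive and n == 2):      # birth or survival
            print(key.replace(",", " "))

if __name__ == "__main__":
    {"map": mapper, "reduce": reducer}[sys.argv[1]]()

A round would then be driven as: python life_mr.py map < gen0.txt | sort | python life_mr.py reduce > gen1.txt, mirroring how a streaming job chains map, shuffle, and reduce.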
Bibliography of Lewis Research Center technical publications announced in 1992
This compilation of abstracts describes and indexes the technical reporting that resulted from the scientific and engineering work performed and managed by the Lewis Research Center in 1992. All the publications were announced in the 1992 issues of STAR (Scientific and Technical Aerospace Reports) and/or IAA (International Aerospace Abstracts). Included are research reports, journal articles, conference presentations, patents and patent applications, and theses.
Spatial stochastic models for network analysis
This thesis proposes new stochastic interacting particle models for networks and studies some fundamental properties of these models. It considers two application areas of networking: engineering design questions in future wireless systems, and algorithmic tasks on large-scale graph-structured data. The key innovation introduced in this thesis is to bring tools and ideas from stochastic geometry to bear on problems in both of these application domains. We identify certain fundamental questions in designing and engineering both wireless systems and large-scale graph-structured data processing systems. Subsequently, we identify novel stochastic geometric models that capture the fundamental properties of these networks; this forms the first research contribution. We then rigorously study these models, bringing to bear new tools from stochastic geometry, random graphs, percolation, and Markov processes to establish structural results and fundamental phase transitions. Using the developed mathematical methodology, we identify design insights and develop algorithms, which we demonstrate to be instructive in many practical settings.

In the setting of wireless systems, this thesis studies both ad-hoc and cellular networks. In the ad-hoc setting, we aim to understand the fundamental limits of the simplest possible protocol for accessing the spectrum: a link transmits whenever it has data to send, treating all interference as noise. Surprisingly, even this basic question was not understood, as the system dynamics are coupled spatially, through the interference that links cause one another, and temporally, through randomness in traffic arrivals. We propose a novel interacting particle model, the spatial birth-death wireless network model, to understand the stability properties of this simple spectrum access protocol. Using tools from Palm calculus and fluid limit theory, we establish a tight characterization of when this model is stable. Furthermore, we show that whenever the model is stable, the links in steady state exhibit a form of clustering. Leveraging these structural results, we propose two mean-field heuristics to obtain formulas for key performance metrics such as the average delay experienced by a link. We empirically find that the proposed delay formulas predict the system behavior accurately.

We subsequently study scalability properties by introducing an appropriate infinite-dimensional version of the model, which we call the interference queueing networks model. The model consists of a queue located at each grid point of an infinite regular integer lattice, with the queues interacting with each other in a translation-invariant fashion. We prove several structural properties of this model, namely tight conditions for the existence of stationary solutions and sufficient conditions for their uniqueness. Remarkably, we obtain an exact formula for the mean delay in this model, unlike in the continuum model, where we relied on mean-field-type heuristics to obtain insights.
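To give a feel for such lattice dynamics, the toy discrete-time simulation below couples queues on a torus through their neighbours' backlogs; the service rule is a simplified stand-in invented for illustration, not the exact dynamics analyzed in the thesis.

import numpy as np

rng = np.random.default_rng(2)
N, LAMBDA, STEPS = 20, 0.2, 5000
q = np.zeros((N, N))                      # queue lengths on an N x N torus

for _ in range(STEPS):
    q += rng.random((N, N)) < LAMBDA      # Bernoulli arrivals
    # interference: sum of the four neighbouring backlogs (torus wraps)
    nbr = (np.roll(q, 1, 0) + np.roll(q, -1, 0)
           + np.roll(q, 1, 1) + np.roll(q, -1, 1))
    serve_prob = q / (q + nbr + 1.0)      # more interference, slower service
    q -= (rng.random((N, N)) < serve_prob) & (q > 0)

print(f"mean queue length after {STEPS} steps: {q.mean():.2f}")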
In the setting of cellular networks, we study optimal association schemes for mobile phones when several base station technologies operate on orthogonal bands. We show that this choice leads to a performance gain we term technology diversity. Interestingly, the gain depends on the amount of instantaneous information a user has about the various base station technologies, which it can leverage to make the association decision. We outline optimal association schemes under various information settings that a user may have on the network. Moreover, we propose simple association heuristics that rely on a user obtaining minimal instantaneous information and are thus practical to implement. We prove that, in a certain natural asymptotic regime of parameters, our proposed heuristic policy is also optimal, thus quantifying the value of fine-grained information at a user for association. We empirically observe that the asymptotic result remains valid at finite parameter regimes typical of today's networks.

In the application of analyzing large-scale graph-structured data, we consider the graph clustering problem with side information. Graph clustering is a standard and widely used task that consists of partitioning the set of nodes of a graph into underlying clusters, where nodes in the same cluster are similar to each other and nodes across different clusters are different. Motivated by applications in social and biological networks, we consider the task of clustering nodes of a graph when there is side information on the nodes other than that contained in the graph. For instance, in social networks one has access to metadata about a person (a node in the social graph), such as age, location, and income, along with the combinatorial data of who their friends are on the social graph. Similarly, in biological networks there is often metadata about an experiment that provides additional contextual data about a node, in addition to the combinatorial data.

In this thesis, we propose a generative model for such graph-structured data with side information, inspired by random graph models in stochastic geometry, such as the random connection model, and by generative models for networks with clusters but without contexts, such as the stochastic block model and the planted partition model. We call this novel graph model the planted partition random connection model. Roughly speaking, each node has two labels: an observable R^d-valued feature label (for some fixed d) and an unobservable binary-valued community label. Conditional on the node labels, edges are drawn at random depending on both the feature and community labels of the two endpoints. The clustering task consists of recovering the underlying partition of nodes corresponding to the community labels better than a random assignment, given an observation of the generated graph and the features of all nodes. We show that if the 'density of nodes', i.e., the average number of nodes with features in a unit volume of R^d, is small, then no algorithm can asymptotically beat a random assignment of community labels. On the contrary, if the density of nodes is sufficiently high, we give a simple algorithm that recovers the true underlying partition strictly better than a random assignment. We then apply the proposed algorithm to a problem in computational biology called haplotype phasing and observe empirically that it obtains state-of-the-art results, demonstrating both the validity of our generative model and the effectiveness of our new algorithm.
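To make the generative model concrete, the sketch below samples nodes with planar features and hidden binary communities and draws edges with a distance- and community-dependent probability; the specific connection function is an assumption for illustration, not the one studied in the thesis.

import numpy as np

rng = np.random.default_rng(3)
n, d = 200, 2
features = rng.uniform(0, 10, (n, d))        # observable feature labels in R^d
labels = rng.choice([-1, 1], n)              # hidden community labels

def connect_prob(dist, same_community):
    """Edges are short-range; same-community pairs connect more often."""
    base = np.exp(-dist)                     # random-connection-model decay
    return (0.9 if same_community else 0.2) * base

edges = []
for i in range(n):
    for j in range(i + 1, n):
        dist = np.linalg.norm(features[i] - features[j])
        if rng.random() < connect_prob(dist, labels[i] == labels[j]):
            edges.append((i, j))

print(f"{len(edges)} edges among {n} nodes")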