83 research outputs found
Interference mitigation in cognitive femtocell networks
“A thesis submitted to the University of Bedfordshire, in partial fulfilment of the requirements for the degree of Doctor of Philosophy”. Femtocells have been introduced as a solution to poor indoor coverage in cellular communication, and have attracted strong interest from network operators and stakeholders. Femtocells are designed to co-exist alongside macrocells, providing benefits such as improved spatial frequency reuse and higher spectrum efficiency. However, when they are deployed in a two-tier architecture with macrocells, the inherent co-tier and cross-tier interference must be mitigated. The integration of cognitive radio (CR) into femtocells enables them to dynamically adapt to varying network conditions through learning and reasoning.
This research focuses on exploiting cognitive radio in femtocells to mitigate the mutual interference that arises in the two-tier architecture. It presents original contributions to interference mitigation in femtocells through practical approaches, including a power control scheme in which each femtocell adaptively controls its transmit power to reduce the interference it causes in the network. This is especially useful because femtocells are user-deployed, so the scheme mitigates interference arising from their blind placement in indoor environments. Hybrid interference mitigation schemes that combine power control with resource allocation and scheduling are also implemented. In a joint threshold-power-based admittance and contention-free resource allocation scheme, the mutual interference between a Femtocell Access Point (FAP) and nearby User Equipments (UEs) is mitigated through admission control. In addition, a hybrid scheme is employed in which FAPs opportunistically use the Resource Blocks (RBs) of Macrocell User Equipments (MUEs) based on traffic load. Simulation analyses show improvements when these schemes are applied, with emphasis on Long Term Evolution (LTE) networks, especially in terms of Signal to Interference plus Noise Ratio (SINR).
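The adaptive power control idea described above can be illustrated with a minimal sketch. The thresholds, step size, and power limits below are illustrative assumptions, not values from the thesis; the sketch only shows the general pattern of stepping a FAP's transmit power against a victim user's SINR target.

```python
import math

def sinr_db(signal_dbm, interference_dbm, noise_dbm=-104.0):
    """SINR in dB from signal, aggregate interference, and thermal noise (all in dBm)."""
    lin = lambda dbm: 10 ** (dbm / 10.0)
    return 10 * math.log10(lin(signal_dbm) / (lin(interference_dbm) + lin(noise_dbm)))

def adapt_power(tx_dbm, measured_sinr_db, target_sinr_db,
                step_db=1.0, min_dbm=-10.0, max_dbm=20.0):
    """Step the femtocell transmit power down when the SINR target is met with
    margin (reducing cross-tier interference), up when it is missed, clamped
    to an allowed range. All numeric defaults are illustrative assumptions."""
    if measured_sinr_db > target_sinr_db + step_db:
        tx_dbm -= step_db   # back off: target exceeded, reduce interference caused
    elif measured_sinr_db < target_sinr_db:
        tx_dbm += step_db   # boost: protect the femtocell user's own link
    return max(min_dbm, min(max_dbm, tx_dbm))
```

Run iteratively, such a loop drives the transmit power toward the smallest level that still meets the target, which is the intuition behind SINR-driven power control.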
Performance Modeling and Resource Management for Mapreduce Applications
Big Data analytics is increasingly performed using the MapReduce paradigm and its open-source implementation Hadoop as a platform choice. Many applications associated with live business intelligence are written as complex data analysis programs defined by directed acyclic graphs of MapReduce jobs. An increasing number of these applications have additional requirements for completion time guarantees. The advent of cloud computing brings a competitive alternative solution for data analytic problems while it also introduces new challenges in provisioning clusters that provide best cost-performance trade-offs.
In this dissertation, we aim to develop a performance evaluation framework that enables automatic resource management for MapReduce applications in achieving different optimization goals. It consists of the following components: (1) a performance modeling framework that estimates the completion time of a given MapReduce application when executed on a Hadoop cluster, according to its input data sets, the job settings, and the amount of resources allocated for processing it; (2) a resource allocation strategy for deadline-driven MapReduce applications that automatically tailors and controls the resource allocation on a shared Hadoop cluster across different applications to achieve their (soft) deadlines; (3) a simulator-based solution to the resource provisioning problem in public cloud environments that guides users in determining the types and amount of resources they should lease from the service provider to achieve different goals; (4) an optimization strategy to automatically determine the optimal job settings within a MapReduce application for efficient execution and resource usage. We validate the accuracy, efficiency, and performance benefits of the proposed framework using a set of realistic MapReduce applications on both private clusters and public cloud environments.
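Completion-time estimation for a MapReduce stage is often built on simple makespan bounds for n tasks running on k slots. The sketch below is a simplified version of such bounds, not the dissertation's actual model: the stage takes at least the total work divided by the slot count, and at most the average-work schedule plus one longest task that may run alone at the end.

```python
def completion_time_bounds(task_durations, num_slots):
    """Lower/upper bounds on the makespan of one stage of n tasks on k slots:
    lower = n * avg / k          (perfectly balanced schedule)
    upper = (n - 1) * avg / k + max   (longest task finishes last, alone).
    A simplified textbook bound, used here purely for illustration."""
    n = len(task_durations)
    avg = sum(task_durations) / n
    longest = max(task_durations)
    lower = n * avg / num_slots
    upper = (n - 1) * avg / num_slots + longest
    return lower, upper
```

Composing such per-stage bounds over map and reduce phases yields a completion-time estimate that can be inverted to answer "how many slots do I need to meet a deadline", which is the shape of the resource allocation problem described above.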
Interference mitigation in D2D communication underlaying LTE-A network
Mobile data traffic has risen exponentially in recent years due to the emergence of data-intensive applications, such as online gaming and video sharing. This is driving the telecommunication industry, as well as the research community, to come up with new paradigms that will support such high data rate requirements within the existing wireless access network in an efficient and effective manner. To respond to this challenge, device-to-device (D2D) communication in cellular networks is viewed as a promising solution, which is expected to operate either within the coverage area of the existing eNB and under the same cellular spectrum (in-band) or in separate spectrum (out-band). D2D provides the opportunity for users located in close proximity to each other to communicate directly, without routing data traffic through the eNB. This yields several transmission gains, such as improved throughput, energy gain, hop gain, and reuse gain. However, integrating D2D communication into cellular systems also introduces new technical challenges that need to be addressed. Containment of the interference between D2D nodes and cellular users is one of the major problems. A D2D transmission radiates in all directions, generating undesirable interference to primary cellular users and to other D2D users sharing the same radio resources, resulting in severe performance degradation. Efficient interference mitigation schemes are a principal requirement for optimizing system performance. This paper presents a comprehensive review of the existing interference mitigation schemes in the open literature. Based on subjective and objective analysis of the work available to date, it is also envisaged that adopting a multi-antenna beamforming mechanism with power control, such that the transmit power is maximized toward the direction of the intended D2D receiver node and limited in all other directions, will minimize the interference in the network. This could maximize the sum throughput and hence guarantee the reliability of both the D2D and cellular connections.
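The envisaged beamforming-with-power-control idea can be sketched with an idealised sector antenna pattern. The gains, beamwidth, and path-loss values below are illustrative assumptions only; the point is that a victim cellular user outside the main lobe sees interference attenuated by the side-lobe gain.

```python
import math

def beam_gain(angle_rad, beamwidth_rad, main_gain=10.0, side_gain=0.1):
    """Idealised sector beam: full gain within half the beamwidth of boresight
    (toward the intended D2D receiver), heavily attenuated side lobes elsewhere.
    Gain values are illustrative, not measured antenna figures."""
    return main_gain if abs(angle_rad) <= beamwidth_rad / 2 else side_gain

def received_interference(tx_power, angle_to_victim, beamwidth, path_loss):
    """Linear-scale interference power seen by a cellular user located at the
    given angle off the D2D transmitter's boresight."""
    return tx_power * beam_gain(angle_to_victim, beamwidth) / path_loss
```

With this model, steering the main lobe away from cellular users reduces their received interference by the main-to-side-lobe gain ratio, which is the mechanism the review argues will protect both tiers.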
Control designs and reinforcement learning-based management for software defined networks
In this thesis, we focus our investigations on the novel software defined networking (SDN) paradigm. The central goal of SDN is to smoothly introduce centralised control capabilities into otherwise distributed computer networks. This is achieved by abstracting and concentrating network control functionalities in a logically centralised control unit, referred to as the SDN controller. To balance centralised control against scalability and reliability considerations, distributed SDN is introduced to enable the coexistence of multiple physical SDN controllers. In distributed SDN, networking elements are grouped into domains, each managed by an SDN controller. In such a setting, the SDN controllers of all domains synchronise with each other to maintain logically centralised network views, which is referred to as controller synchronisation. Centred on the problem of SDN controller synchronisation, this thesis addresses two aspects of the subject. First, we model and analyse the performance enhancements brought by controller synchronisation in distributed SDN from a theoretical perspective. Second, we design intelligent controller synchronisation policies by leveraging existing, and creating new, Reinforcement Learning (RL) and Deep Learning (DL)-based approaches.
In order to understand the performance gains of SDN controller synchronisation from a fundamental and analytical perspective, we propose a two-layer network model based on graphs to capture various characteristics of distributed SDN networks. We then develop two families of analytical methods to investigate the performance of distributed SDN in relation to network structure and the level of SDN controller synchronisation. The significance of our analytical results is that they can be used to quantify the contribution of the controller synchronisation level to improving network performance under different network parameters. They therefore serve as fundamental guidelines for future SDN performance analyses and protocol designs.
For the design of SDN controller synchronisation policies, most existing works focus on the engineering-centred system design aspect of the problem, ensuring anomaly-free synchronisation. Instead, we emphasise the performance improvements with respect to (w.r.t.) various networking tasks when designing controller synchronisation policies. Specifically, we investigate various scenarios with diverse control objectives, ranging from routing-related performance metrics to more sophisticated optimisation goals involving communication and computation resources in networks. We also take into consideration factors such as the scalability and robustness of the policies developed. To this end, we employ machine learning techniques to assist our policy designs. In particular, we model SDN controller synchronisation as a sequential decision-making process and resort to RL-based techniques for developing the synchronisation policy. We leverage a combination of various RL and DL methods, tailored to the specific characteristics and requirements of different scenarios. Evaluation results show that our designed policies consistently outperform some already in-use controller synchronisation policies, in certain cases by considerable margins. While exploring existing RL algorithms for solving our problems, we identify some critical issues embedded within these algorithms, such as the enormity of the state-action space, which can cause inefficiency in learning. We therefore propose a novel RL algorithm to address these issues, named state action separable reinforcement learning (sasRL). The sasRL approach constitutes another major contribution of this thesis in the field of RL research.
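Framing controller synchronisation as a sequential decision process can be sketched with tabular Q-learning over a toy problem: at each step the controller picks one peer domain to synchronise with and observes a performance reward. This is not the thesis's sasRL algorithm, only a minimal RL baseline; the reward function, single-state formulation, and all hyperparameters are illustrative assumptions.

```python
import random

def q_learning_sync(num_domains, episodes, reward_fn,
                    alpha=0.5, gamma=0.9, epsilon=0.2, seed=0):
    """Tabular Q-learning for a toy synchronisation problem with a single
    state: the action is which peer domain to synchronise with, and
    reward_fn(domain) returns the resulting performance gain (e.g. fewer
    stale routing entries). Returns the learned Q-value per domain."""
    rng = random.Random(seed)
    q = [0.0] * num_domains
    for _ in range(episodes):
        # epsilon-greedy action selection
        a = (rng.randrange(num_domains) if rng.random() < epsilon
             else max(range(num_domains), key=q.__getitem__))
        r = reward_fn(a)
        # standard Q-learning update (next state == same single state)
        q[a] += alpha * (r + gamma * max(q) - q[a])
    return q
```

A real policy would condition on network state (e.g. staleness per domain, traffic demands), which is exactly where the state-action space blow-up mentioned above arises.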
Artificial Intelligence based multi-agent control system
Artificial Intelligence (AI) is a science that deals with the problem of having machines perform intelligent, complex actions with the aim of helping human beings. It is thus possible to assert that Artificial Intelligence brings into machines characteristics and abilities that were once limited to human intervention. In the field of AI there are several tasks that could ideally be delegated to machines, such as environment-aware perception, visual perception, and complex decision-making in various fields.
Recent research trends in this field have produced remarkable advances, mainly in complex engineering systems such as multi-agent systems, networked systems, manufacturing, vehicular and transportation systems, and health care; a portion of these engineering systems is discussed in this PhD thesis, as most of them are typical fields of application for traditional control systems.
The main purpose of this work is to present my recent research activities in the field of complex systems, bringing artificial intelligence methodologies into different environments such as telecommunication networks, transportation systems, and health care for Personalized Medicine.
The approaches designed and developed in the field of telecommunication networks are presented in Chapter 2, where a multi-agent reinforcement learning algorithm was designed to implement a model-free control approach to regulate and improve user satisfaction. The research activities in the field of transportation systems are presented at the end of Chapter 2 and in Chapter 3, where two approaches, a Reinforcement Learning algorithm and a Deep Learning algorithm, were designed and developed to provide tailored travel solutions and automatic identification of transportation modalities. Finally, the research activities in the field of Personalized Medicine are presented in Chapter 4, where a Deep Learning and Model Predictive Control based approach is presented to address the problem of controlling biological factors in diabetic patients.
Optimization and Management of Large-scale Scientific Workflows in Heterogeneous Network Environments: From Theory to Practice
Next-generation computation-intensive scientific applications feature large-scale computing workflows of various structures, which can be modeled as simply as linear pipelines or as complexly as Directed Acyclic Graphs (DAGs). Supporting such computing workflows and optimizing their end-to-end network performance are crucial to the success of scientific collaborations that require fast system response, smooth data flow, and reliable distributed operation. We construct analytical cost models and formulate a class of workflow mapping problems with different mapping objectives and network constraints. The difficulty of these mapping problems arises essentially from the topological matching nature of the spatial domain, which is further compounded by the complexity of resource sharing in the temporal dimension. We provide detailed computational complexity analysis and design optimal or heuristic algorithms with rigorous correctness proofs or performance analysis. We decentralize the proposed mapping algorithms and also investigate these optimization problems in unreliable network environments for fault tolerance. To examine and evaluate the performance of the workflow mapping algorithms before actual deployment and implementation, we implement a simulation program that simulates the execution dynamics of distributed computing workflows. We also develop a scientific workflow automation and management platform, based on an existing workflow engine, for experimentation in real environments. The performance superiority of the proposed mapping solutions is illustrated by extensive simulation-based comparisons with existing algorithms and further verified by large-scale experiments on real-life scientific workflow applications through effective system implementation and deployment in real networks.
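The flavour of the analytical cost models above can be shown for the simplest workflow structure, a linear pipeline mapped one stage per node. This is an illustrative toy cost model, not the dissertation's formulation: total latency is each stage's compute time plus the transfer time of its output to the next node.

```python
def pipeline_latency(workloads, speeds, data_out, bandwidths):
    """End-to-end delay of a linear pipeline mapped one stage per node.

    workloads[i] / speeds[i]  : processing time of stage i on its node
    data_out[i] / bandwidths[i]: time to ship stage i's output to node i+1
    (the last stage produces no inter-node transfer).
    All units are abstract (work, work/s, bytes, bytes/s)."""
    assert len(workloads) == len(speeds) == len(data_out) + 1 == len(bandwidths) + 1
    compute = sum(w / s for w, s in zip(workloads, speeds))
    transfer = sum(d / b for d, b in zip(data_out, bandwidths))
    return compute + transfer
```

Mapping then becomes the combinatorial problem of choosing which node runs each stage to minimise this sum, and the DAG case generalises it to longest-path (critical path) costs, which is where the topological matching difficulty comes from.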
Layered performance modelling and evaluation for cloud topic detection and tracking based big data applications
“Big Data”, best characterized by its three features, namely “Variety”, “Volume” and “Velocity”, is revolutionizing nearly every aspect of our lives, ranging from enterprises to consumers, from science to government. A fourth characteristic, namely “Value”, is delivered via the use of smart data analytics over Big Data. One such Big Data analytics application considered in this thesis is Topic Detection and Tracking (TDT). The characteristics of Big Data bring with them unprecedented challenges: data too large for traditional devices to process and store (volume), too fast for traditional methods to scale (velocity), and heterogeneous (variety). In recent times, cloud computing has emerged as a practical and technical solution for processing Big Data. However, when deploying Big Data analytics applications such as TDT in the cloud (called cloud-based TDT), the challenge is to cost-effectively orchestrate and provision cloud resources to meet performance Service Level Agreements (SLAs). Although there exists limited work on performance modeling of cloud-based TDT applications, none of these methods can be directly applied to guarantee the performance SLAs of cloud-based TDT applications. For instance, the current literature lacks a systematic, reliable and accurate methodology to measure, predict and ultimately guarantee the performance of TDT applications. Furthermore, existing performance models fail to consider the end-to-end complexity of TDT applications and focus only on individual processing components (e.g. MapReduce).
To tackle this challenge, in this thesis we develop a layered performance model of cloud-based TDT applications that takes into account Big Data characteristics, the data and event flow across myriad cloud software and hardware resources, and diverse SLA considerations. In particular, we propose and develop models to accurately capture the factors that play a pivotal role in the performance of cloud-based TDT applications, identify the ways in which these factors affect performance, and determine the dependencies between the factors. Further, we have developed models to predict the performance of cloud-based TDT applications under the uncertain conditions imposed by Big Data characteristics. The model developed in this thesis is designed to be generic, allowing its application to other cloud-based data analytics applications. We have demonstrated the feasibility, efficiency, validity and prediction accuracy of the proposed models via experimental evaluations using a real-world flu detection use case on the Apache Hadoop MapReduce, HDFS and Mahout frameworks.
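The layered structure described above can be sketched as a composition of per-layer cost models, where each layer (e.g. ingestion, MapReduce processing, clustering) maps an input data size to a processing time and an output size fed to the next layer. This is an illustrative abstraction only; the layer functions below are hypothetical stand-ins, not the thesis's calibrated models.

```python
def layered_completion_time(data_size, layer_models):
    """Compose per-layer cost models end to end.

    Each element of layer_models is a callable: input_size -> (time, output_size).
    Total completion time is the sum of layer times as data flows through,
    mirroring a layered end-to-end performance model."""
    total_time, size = 0.0, data_size
    for layer_cost in layer_models:
        t, size = layer_cost(size)
        total_time += t
    return total_time
```

Because each layer exposes both time and data-size scaling, SLA questions ("what happens at 10x input volume?") reduce to re-evaluating the composition with a larger initial size.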
Prompt Tuned Embedding Classification for Multi-Label Industry Sector Allocation
Prompt Tuning is emerging as a scalable and cost-effective method to fine-tune Pretrained Language Models (PLMs), which are often referred to as Large Language Models (LLMs). This study benchmarks the performance and computational efficiency of Prompt Tuning and baselines for multi-label text classification. This is applied to the challenging task of classifying companies into an investment firm's proprietary industry taxonomy, supporting their thematic investment strategy. Text-to-text classification is frequently reported to outperform task-specific classification heads, but has several limitations when applied to a multi-label classification problem where each label consists of multiple tokens: (a) generated labels may not match any label in the label taxonomy; (b) the fine-tuning process lacks permutation invariance and is sensitive to the order of the provided labels; (c) the model provides binary decisions rather than appropriate confidence scores. Limitation (a) is addressed by applying constrained decoding using Trie Search, which slightly improves classification performance. All limitations (a), (b), and (c) are addressed by replacing the PLM's language head with a classification head, which is referred to as Prompt Tuned Embedding Classification (PTEC). This improves performance significantly, while also reducing computational costs during inference. In our industrial application, the training data is skewed towards well-known companies. We confirm that the model's performance is consistent across both well-known and less-known companies. Our overall results indicate the continuing need to adapt state-of-the-art methods to domain-specific tasks, even in the era of PLMs with strong generalization abilities. We release our codebase and a benchmarking dataset at https://github.com/EQTPartners/PTEC
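The Trie Search constraint that addresses limitation (a) can be sketched as follows: a trie built over the tokenised taxonomy labels restricts generation at each step to tokens that can still complete a valid label. The token strings below are hypothetical examples, and this is a structural sketch rather than the repository's implementation.

```python
class LabelTrie:
    """Trie over tokenised taxonomy labels; during constrained decoding the
    model may only emit tokens allowed by the trie, so every decoded sequence
    is guaranteed to be a label from the taxonomy."""

    def __init__(self, label_token_seqs):
        self.root = {}
        for seq in label_token_seqs:
            node = self.root
            for tok in seq:
                node = node.setdefault(tok, {})
            node[None] = {}  # end-of-label marker

    def allowed_next(self, prefix):
        """Tokens permitted after the given prefix; None means the label may
        legally end here. Returns the empty set for an invalid prefix."""
        node = self.root
        for tok in prefix:
            if tok not in node:
                return set()
            node = node[tok]
        return set(node.keys())
```

At decode time, logits of tokens outside `allowed_next(prefix)` are masked out before sampling, which is what forces generated labels back into the taxonomy.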