A survey of machine learning techniques applied to self organizing cellular networks
In this paper, a survey of the literature of the past fifteen years on Machine Learning (ML) algorithms applied to self-organizing cellular networks is performed. For future networks to overcome the limitations of current cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each surveyed paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines on when to choose each ML algorithm for each SON function are proposed. Lastly, this work also outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.
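Reinforcement learning is one of the ML families such surveys commonly map to SON self-optimization functions. As a toy illustration only (not a method from the survey), a stateless Q-learning loop tuning a single cell parameter, with an invented reward that peaks at an unknown optimal power level, might look like:

```python
import random

random.seed(0)

# Toy environment: a cell picks a power level (0-4); the reward peaks at
# the (unknown to the agent) optimal level 2. This stands in for a SON
# function such as coverage/capacity optimization.
def reward(level):
    return -abs(level - 2) + random.uniform(-0.1, 0.1)

LEVELS = range(5)
q = {a: 0.0 for a in LEVELS}          # Q-value per action (bandit-style, no state)
alpha, epsilon = 0.1, 0.2             # learning rate, exploration rate

for step in range(2000):
    if random.random() < epsilon:
        a = random.choice(list(LEVELS))    # explore a random power level
    else:
        a = max(q, key=q.get)              # exploit the best known level
    q[a] += alpha * (reward(a) - q[a])     # incremental Q-value update

best = max(q, key=q.get)
print("learned best power level:", best)   # converges to level 2
```

The exploration/exploitation trade-off sketched here is one of the SON-relevant properties (convergence time, robustness to noisy feedback) on which such surveys compare ML algorithms.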
Improving System Reliability for Cyber-Physical Systems
Cyber-physical systems (CPS) are systems featuring a tight combination of, and coordination between, the system's computational and physical elements. Cyber-physical systems range from critical infrastructure, such as the power grid and transportation systems, to health and biomedical devices. System reliability, i.e., the ability of a system to perform its intended function under a given set of environmental and operational conditions for a given period of time, is a fundamental requirement of cyber-physical systems. An unreliable system often leads to service disruption, financial cost, and even loss of human life. An important and prevalent type of cyber-physical system meets the following criteria: it processes large amounts of data; employs software as a system component; runs online continuously; and keeps an operator in the loop because safety-critical systems demand human judgment and accountability. This thesis aims to improve system reliability for this type of cyber-physical system. To that end, I present a system evaluation approach entitled automated online evaluation (AOE), a data-centric runtime monitoring and reliability evaluation approach that works in parallel with the cyber-physical system, conducting automated evaluation continuously along the workflow of the system using computational intelligence and self-tuning techniques, and providing operator-in-the-loop feedback on reliability improvement. For example, abnormal input and output data at or between the multiple stages of the system can be detected and flagged through data quality analysis, and alerts can be sent to the operator-in-the-loop. The operator can then take actions and make changes to the system based on the alerts in order to achieve minimal system downtime and increased system reliability.
One technique used by the approach is data quality analysis using computational intelligence, which evaluates data quality in an automated and efficient way to ensure that the running system performs reliably as expected. Another technique is self-tuning, which automatically self-manages and self-configures the evaluation system so that it adapts to changes in the system and to feedback from the operator. To implement the proposed approach, I further present a system architecture called autonomic reliability improvement system (ARIS). This thesis investigates three hypotheses. First, I claim that automated online evaluation empowered by data quality analysis using computational intelligence can effectively improve system reliability for cyber-physical systems in the domain of interest indicated above. To prove this hypothesis, a prototype system needs to be developed and deployed in various cyber-physical systems, with reliability metrics to measure the improvement in system reliability quantitatively. Second, I claim that self-tuning can effectively self-manage and self-configure the evaluation system, based on changes in the system and feedback from the operator-in-the-loop, to improve system reliability. Third, I claim that the approach is efficient: it should not have a large impact on overall system performance and should introduce only minimal extra overhead to the cyber-physical system. Performance metrics should be used to measure the efficiency and added overhead quantitatively. Additionally, in order to conduct efficient and cost-effective automated online evaluation for data-intensive CPS, which require large volumes of data and devote much of their processing time to I/O and data manipulation, this thesis presents COBRA, a cloud-based reliability assurance framework.
COBRA provides automated multi-stage runtime reliability evaluation along the CPS workflow using data relocation services, a cloud data store, data quality analysis, and process scheduling with self-tuning to achieve scalability, elasticity, and efficiency. Finally, in order to provide a generic way to compare and benchmark system reliability for CPS and to extend the approach described above, this thesis presents FARE, a reliability benchmark framework that employs a CPS reliability model and a set of methods and metrics for evaluation environment selection, failure analysis, and reliability estimation. The main contributions of this thesis include validation of the above hypotheses and empirical studies of the ARIS automated online evaluation system, the COBRA cloud-based reliability assurance framework for data-intensive CPS, and the FARE framework for benchmarking the reliability of cyber-physical systems. This work has advanced the state of the art in CPS reliability research, expanded the body of knowledge in this field, and provided useful studies for further research.
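The data quality analysis idea described above (detecting and flagging abnormal data between pipeline stages, then alerting the operator) can be illustrated with a minimal sketch. This is not the ARIS implementation; it is a generic z-score check against a sliding window of recent readings, with invented example data:

```python
import statistics

# Illustrative sketch (not the actual ARIS mechanism): flag abnormal values
# flowing between two CPS pipeline stages using a z-score check against a
# sliding window of recent readings.
def flag_anomalies(readings, window=20, z_threshold=3.0):
    """Return indices of readings that deviate strongly from recent history."""
    alerts = []
    for i, value in enumerate(readings):
        history = readings[max(0, i - window):i]
        if len(history) < 5:                      # not enough history yet
            continue
        mean = statistics.fmean(history)
        stdev = statistics.stdev(history) or 1e-9  # guard a constant window
        if abs(value - mean) / stdev > z_threshold:
            alerts.append(i)                      # would trigger an operator alert
    return alerts

# Steady sensor stream with one corrupted sample at index 30.
stream = [10.0 + 0.1 * (i % 5) for i in range(50)]
stream[30] = 99.9
print(flag_anomalies(stream))  # → [30]
```

In a real deployment the flagged indices would feed the operator-in-the-loop alerting path rather than a print statement.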
Detecting feature influences to quality attributes in large and partially measured spaces using smart sampling and dynamic learning
Emergent application domains (e.g., Edge Computing, Cloud, and B5G systems) are too complex to build manually. They are characterised by high variability and are modelled by large Variability Models (VMs), leading to large configuration spaces. Due to the high number of variants present in such systems, it is challenging to find the best-ranked product regarding particular Quality Attributes (QAs) in a short time.
Moreover, measuring QAs is sometimes nontrivial, requiring considerable time and resources, as is the case for the energy footprint of software systems, the focus of this paper. Hence, we need a mechanism to analyse how features and their interactions influence energy footprint without measuring all configurations. While practical, sampling and predictive techniques base their accuracy on uniform spaces or some initial domain knowledge, which is not always available. Indeed, analysing the energy footprint of products in large configuration spaces raises specific requirements that we explore in this work. This paper presents SAVRUS (Smart Analyser of Variability Requirements in Unknown Spaces), an approach for sampling and dynamic statistical learning over large and partially QA-measured spaces that does not rely on initial domain knowledge. SAVRUS reports the degree to which features and pairwise interactions influence a particular QA, such as energy efficiency. We validate and evaluate SAVRUS on a selection of similar systems, which define large search spaces containing scattered measurements. Funding for open access charge: Universidad de Málaga / CBUA.
This work is supported by the European Union’s H2020 research and innovation programme under grant agreement DAEMON H2020-101017109, by the projects IRIS PID2021-122812OB-I00 (co-financed by FEDER funds), Rhea P18-FR-1081 (MCI/AEI/FEDER, UE), and LEIA UMA18-FEDERIA-157, and by the PRE2019-087496 grant from the Ministerio de Ciencia e Innovación, Spain.
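The general idea behind reporting feature and pairwise-interaction influence on a QA (the paper's actual SAVRUS algorithm is more elaborate) can be sketched with mean-difference estimators over measured configurations. The features, the energy model, and the measurements below are all synthetic, for illustration only:

```python
# Hedged sketch of influence estimation over measured configurations (not
# the SAVRUS algorithm itself). A configuration is a set of enabled
# features paired with a measured QA value (here: energy).
def main_effect(samples, feat):
    """Mean QA difference between configurations with and without `feat`."""
    with_f = [qa for cfg, qa in samples if feat in cfg]
    without = [qa for cfg, qa in samples if feat not in cfg]
    return sum(with_f) / len(with_f) - sum(without) / len(without)

def interaction(samples, f1, f2):
    """Two-way interaction: effect of enabling both beyond the sum of parts."""
    def mean(pred):
        vals = [qa for cfg, qa in samples if pred(cfg)]
        return sum(vals) / len(vals)
    return (mean(lambda c: f1 in c and f2 in c)
            - mean(lambda c: f1 in c and f2 not in c)
            - mean(lambda c: f2 in c and f1 not in c)
            + mean(lambda c: f1 not in c and f2 not in c))

def energy(cfg):
    # Synthetic ground truth: base 10, feature A adds 5, B adds 2, C adds
    # nothing, and A together with B adds a further 3 (their interaction).
    return 10 + 5 * ("A" in cfg) + 2 * ("B" in cfg) + 3 * ("A" in cfg and "B" in cfg)

space = [set(), {"A"}, {"B"}, {"C"}, {"A", "B"}, {"A", "C"}, {"B", "C"}, {"A", "B", "C"}]
samples = [(cfg, energy(cfg)) for cfg in space]

print("main effect of A:", main_effect(samples, "A"))      # 5 plus half the A-B interaction
print("A-B interaction:", interaction(samples, "A", "B"))  # recovers the 3
```

For clarity the toy space above is fully measured; the point of SAVRUS is to produce such influence estimates when only a scattered subset of configurations has been measured.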
Towards Artificial General Intelligence (AGI) in the Internet of Things (IoT): Opportunities and Challenges
Artificial General Intelligence (AGI), possessing the capacity to comprehend,
learn, and execute tasks with human cognitive abilities, engenders significant
anticipation and intrigue across scientific, commercial, and societal arenas.
This fascination extends particularly to the Internet of Things (IoT), a
landscape characterized by the interconnection of countless devices, sensors,
and systems, collectively gathering and sharing data to enable intelligent
decision-making and automation. This research embarks on an exploration of the
opportunities and challenges towards achieving AGI in the context of the IoT.
Specifically, it starts by outlining the fundamental principles of IoT and the
critical role of Artificial Intelligence (AI) in IoT systems. Subsequently, it
delves into AGI fundamentals, culminating in the formulation of a conceptual
framework for AGI's seamless integration within IoT. The application spectrum
for AGI-infused IoT is broad, encompassing domains ranging from smart grids,
residential environments, manufacturing, and transportation to environmental
monitoring, agriculture, healthcare, and education. However, adapting AGI to
resource-constrained IoT settings necessitates dedicated research efforts.
Furthermore, the paper addresses the constraints imposed by limited computing resources, the intricacies of large-scale IoT communication, and the critical concerns pertaining to security and privacy.
Learning Representations on Logs for AIOps
AI for IT Operations (AIOps) is a powerful platform that Site Reliability
Engineers (SREs) use to automate and streamline operational workflows with
minimal human intervention. Automated log analysis is a critical task in AIOps
as it provides key insights for SREs to identify and address ongoing faults.
Tasks such as log format detection, log classification, and log parsing are key
components of automated log analysis. Most of these tasks require supervised
learning; however, there are multiple challenges due to limited labelled log
data and the diverse nature of log data. Large Language Models (LLMs) such as BERT and GPT-3 are trained using self-supervision on vast amounts of unlabeled data. These models provide generalized representations that can be effectively used for various downstream tasks with limited labelled data. Motivated by the success of LLMs in specific domains like science and biology, this paper introduces an LLM for log data which is trained on public and proprietary log data. The results of our experiments demonstrate that the proposed LLM outperforms existing models on multiple downstream tasks. In summary, AIOps powered by LLMs offers an efficient and effective solution for automating log analysis tasks, enabling SREs to focus on higher-level tasks. Our proposed LLM, trained on public and proprietary log data, offers superior performance on multiple downstream tasks, making it a valuable addition to the AIOps platform.
Comment: 11 pages; 2023 IEEE 16th International Conference on Cloud Computing (CLOUD)
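The downstream-use pattern the abstract describes (embed logs with a pretrained model, then solve a task with only a handful of labels) can be sketched as follows. A real system would call the pretrained log LLM for the embeddings; here a hashed character-trigram vector is a stand-in, and the log lines and class names are invented:

```python
import hashlib
import math

# Stand-in for an LLM embedding: hash character trigrams into a fixed-size
# vector. Purely illustrative; a real pipeline would use the pretrained
# log model's representations instead.
def embed(log_line, dim=64):
    vec = [0.0] * dim
    text = log_line.lower()
    for i in range(len(text) - 2):
        h = int(hashlib.md5(text[i:i + 3].encode()).hexdigest(), 16)
        vec[h % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a, b):
    return sum(x * y for x, y in zip(a, b))  # vectors are already unit-norm

# Only a few labelled examples per class: nearest-centroid classification.
labelled = {
    "error": ["ERROR disk write failed on /dev/sda", "ERROR out of memory in worker"],
    "info":  ["INFO request served in 12ms", "INFO user session started"],
}
centroids = {
    cls: [sum(col) / len(col) for col in zip(*map(embed, lines))]
    for cls, lines in labelled.items()
}

def classify(line):
    return max(centroids, key=lambda c: cosine(embed(line), centroids[c]))

print(classify("ERROR disk read failed on /dev/sdb"))  # classifies as "error"
```

With genuinely generalized representations from a pretrained model, this few-label pattern is what lets tasks like log classification work despite scarce labelled log data.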
NLP Methods in Host-based Intrusion Detection Systems: A Systematic Review and Future Directions
Host-based Intrusion Detection Systems (HIDS) are an effective last line of defense against cyber security attacks after perimeter defenses (e.g., Network-based Intrusion Detection Systems and firewalls) have failed or been bypassed. HIDS are widely adopted in industry, ranking among the top two most used security tools by the Security Operation Centers (SOC) of organizations. Although effective and efficient HIDS are highly desirable for industrial organizations, the evolution of increasingly complex attack patterns causes several challenges that degrade HIDS performance (e.g., high false alert rates creating alert fatigue for SOC staff). Since Natural Language Processing (NLP) methods are better suited to identifying complex attack patterns, an increasing number of HIDS leverage advances in NLP, which have shown effective and efficient performance in precisely detecting low-footprint, zero-day attacks and predicting the next steps of attackers. This active research trend of using NLP in HIDS demands a synthesized and comprehensive body of knowledge of NLP-based HIDS. Thus, we conducted a systematic review of the literature on the end-to-end pipeline of NLP use in HIDS development. For this pipeline, we identify, taxonomically categorize, and systematically compare the state of the art of NLP method usage in HIDS, the attacks detected by these NLP methods, and the datasets and evaluation metrics used to evaluate NLP-based HIDS. We highlight the relevant prevalent practices, considerations, advantages, and limitations to support HIDS developers. We also outline future research directions for NLP-based HIDS development.
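A common way NLP methods are applied in HIDS is to treat system-call traces as text and model normal behavior with a language model. As a generic sketch (not a method from any particular surveyed paper), a smoothed bigram model can score traces and flag those with unusually low likelihood; the traces below are invented:

```python
import math
from collections import Counter

# Illustrative NLP-style HIDS component: model normal system-call traces
# with smoothed bigram probabilities and flag traces whose average
# log-likelihood is low (unseen transitions look suspicious).
def bigrams(trace):
    return list(zip(trace, trace[1:]))

def train(normal_traces, smoothing=1.0):
    uni, bi = Counter(), Counter()
    for t in normal_traces:
        uni.update(t[:-1])          # contexts (all but the last call)
        bi.update(bigrams(t))
    vocab = {s for t in normal_traces for s in t}

    def score(trace):
        total = 0.0
        for a, b in bigrams(trace):  # add-one smoothing keeps unseen pairs finite
            p = (bi[(a, b)] + smoothing) / (uni[a] + smoothing * len(vocab))
            total += math.log(p)
        return total / max(1, len(trace) - 1)   # per-bigram average log-likelihood
    return score

normal = [["open", "read", "read", "close"]] * 20 + [["open", "write", "close"]] * 10
score = train(normal)

benign = score(["open", "read", "close"])
suspicious = score(["open", "exec", "write", "exec"])  # transitions never seen
print(f"benign: {benign:.2f}  suspicious: {suspicious:.2f}")
```

Thresholding such scores is exactly where the false-alert-rate trade-off mentioned above arises: a threshold tight enough to catch low-footprint attacks also risks alert fatigue for SOC staff.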