666 research outputs found

    A cell outage management framework for dense heterogeneous networks

    In this paper, we present a novel cell outage management (COM) framework for heterogeneous networks with split control and data planes, a candidate architecture for meeting future capacity, quality-of-service, and energy efficiency demands. In such an architecture, the control and data functionalities are not necessarily handled by the same node. The control base stations (BSs) manage the transmission of control information and user equipment (UE) mobility, whereas the data BSs handle UE data. An implication of this split architecture is that an outage of a BS in one plane has to be compensated for by other BSs in the same plane. Our COM framework addresses this challenge by incorporating two distinct cell outage detection (COD) algorithms to cope with the idiosyncrasies of the data and control planes. The COD algorithm for control cells leverages the relatively larger number of UEs in the control cell to gather large-scale minimization-of-drive-test report data and detects an outage by applying machine learning and anomaly detection techniques. To improve outage detection accuracy, we also investigate and compare the performance of two anomaly detectors within the control COD, i.e., k-nearest-neighbor- and local-outlier-factor-based detectors. For data cell COD, on the other hand, we propose a heuristic Grey-prediction-based approach that can work with the small number of UEs in the data cell, exploiting the fact that the control BS manages UE-data BS connectivity and receives periodic updates of the reference signal received power (RSRP) statistics between the UEs and the data BSs in its coverage. The detection accuracy of the heuristic data COD algorithm is further improved by exploiting the Fourier series of the residual error that is inherent to a Grey prediction model. Our COM framework integrates these two COD algorithms with a cell outage compensation (COC) algorithm that can be applied to both planes. Our COC solution utilizes an actor-critic-based reinforcement learning algorithm, which optimizes the capacity and coverage of the identified outage zone in a plane by adjusting the antenna gain and transmission power of the surrounding BSs in that plane. The simulation results show that the proposed framework can detect both data and control cell outages and compensate for them reliably.
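    As a rough illustration of the control-plane detector described above, the sketch below scores minimization-of-drive-test reports with a plain k-nearest-neighbor distance; the feature vectors, the value of k, and the decision by maximum score are illustrative assumptions, not the paper's actual pipeline.

```python
import math

def knn_anomaly_scores(points, k=3):
    """Score each point by the mean distance to its k nearest neighbours;
    larger scores mark reports that sit far from the bulk of the data."""
    scores = []
    for i, p in enumerate(points):
        dists = sorted(math.dist(p, q) for j, q in enumerate(points) if j != i)
        scores.append(sum(dists[:k]) / k)
    return scores

# Hypothetical MDT feature vectors (RSRP, RSRQ) from UEs in a control cell;
# the last report sits far from the cluster and should score highest.
reports = [(-80.0, -10.0), (-81.0, -11.0), (-79.5, -10.5), (-80.5, -9.5), (-120.0, -25.0)]
scores = knn_anomaly_scores(reports, k=3)
print(scores.index(max(scores)))  # index of the most anomalous report: 4
```

    A local-outlier-factor detector would additionally normalize each score by the density of the point's neighbourhood, which is the comparison the paper investigates.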

    One-Class Classification: Taxonomy of Study and Review of Techniques

    One-class classification (OCC) algorithms aim to build classification models when the negative class is either absent, poorly sampled, or not well defined. This unique situation constrains the learning of efficient classifiers by defining the class boundary with knowledge of the positive class alone. The OCC problem has been considered and applied under many research themes, such as outlier/novelty detection and concept learning. In this paper, we present a unified view of the general problem of OCC through a taxonomy of OCC studies based on the availability of training data, the algorithms used, and the application domains. We further delve into each category of the proposed taxonomy and present a comprehensive literature review of OCC algorithms, techniques, and methodologies with a focus on their significance, limitations, and applications. We conclude by discussing some open research problems in the field of OCC and present our vision for future research. Comment: 24 pages + 11 pages of references, 8 figures.
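    To make the boundary-from-positives idea concrete, here is a minimal toy one-class classifier, assuming a hypersphere decision boundary around the positive training data; the centroid-plus-radius rule and the 0.95 quantile are illustrative choices, not taken from the survey.

```python
import math

class CentroidOneClass:
    """Toy one-class classifier: learns a hypersphere around the positive
    training points and labels anything outside it as negative (-1)."""

    def fit(self, X, quantile=0.95):
        n, dim = len(X), len(X[0])
        # Centroid of the positive class, the only class we ever see.
        self.centroid = tuple(sum(x[d] for x in X) / n for d in range(dim))
        # Radius chosen so the given quantile of training points fall inside.
        radii = sorted(math.dist(x, self.centroid) for x in X)
        self.radius = radii[min(int(quantile * n), n - 1)]
        return self

    def predict(self, x):
        return 1 if math.dist(x, self.centroid) <= self.radius else -1

clf = CentroidOneClass().fit([(0.0, 0.0), (0.1, 0.2), (-0.1, 0.1), (0.2, -0.1)])
print(clf.predict((0.05, 0.05)), clf.predict((5.0, 5.0)))  # 1 -1
```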

    A survey of machine learning techniques applied to self organizing cellular networks

    In this paper, we survey fifteen years of literature on Machine Learning (ML) algorithms applied to self-organizing cellular networks. For future networks to overcome current limitations and address the issues of today's cellular systems, it is clear that more intelligence needs to be deployed, so that a fully autonomous and flexible network can be enabled. This paper focuses on the learning perspective of Self Organizing Networks (SON) solutions and provides not only an overview of the most common ML techniques encountered in cellular networks, but also a classification of each paper in terms of its learning solution, together with examples. The authors also classify each paper in terms of its self-organizing use case and discuss how each proposed solution performed. In addition, the most commonly found ML algorithms are compared in terms of certain SON metrics, and general guidelines are proposed on when to choose each ML algorithm for each SON function. Lastly, this work also outlines future research directions and the new paradigms that more robust and intelligent algorithms, together with data gathered by operators, can bring to the cellular networks domain, fully enabling the concept of SON in the near future.

    Machine Learning in Wireless Sensor Networks: Algorithms, Strategies, and Applications

    Wireless sensor networks monitor dynamic environments that change rapidly over time. This dynamic behavior is either caused by external factors or initiated by the system designers themselves. To adapt to such conditions, sensor networks often adopt machine learning techniques to eliminate the need for unnecessary redesign. Machine learning also inspires many practical solutions that maximize resource utilization and prolong the lifespan of the network. In this paper, we present an extensive literature review, over the period 2002-2013, of machine learning methods that were used to address common issues in wireless sensor networks (WSNs). The advantages and disadvantages of each proposed algorithm are evaluated against the corresponding problem. We also provide a comparative guide to aid WSN designers in developing suitable machine learning solutions for their specific application challenges. Comment: Accepted for publication in IEEE Communications Surveys and Tutorials.

    Detecting malicious data injections in event detection wireless sensor networks


    A case study: Failure prediction in a real LTE network

    Mobile traffic and the number of connected devices have been increasing exponentially, and customer expectations of mobile operators in terms of quality and reliability are ever higher. This places pressure on operators to invest in, as well as operate, their growing infrastructures, making telecom network management an essential problem. To reduce cost and maintain network performance, operators need to bring more automation and intelligence into their management systems. Self-Organizing Networks (SON) is an automation technology aiming to maximize performance in mobile networks by bringing autonomous adaptability and reducing human intervention in network management and operations. The three main areas of SON are self-configuration (auto-configuration when new elements enter the network), self-optimization (optimization of network parameters during operation) and self-healing (maintenance). The main purpose of the thesis is to illustrate how anomaly detection methods can be applied to SON functions, in particular self-healing functions such as fault detection and cell outage management. The thesis is illustrated by a case study in which the anomalies (in this case, failure alarms) are predicted in advance using performance measurement (PM) data collected from a real LTE network within a certain timeframe. Failure prediction and anomaly detection can help reduce cost and maintenance time in mobile network base stations. The author aims to answer the research questions: which anomaly detection models could detect the anomalies in advance, and which types of anomalies can be well detected using those models. Using cross-validation, the thesis shows that the random forest method is the best-performing model among those chosen, with F1-scores of 0.58, 0.96 and 0.52 for the anomalies Failure in Optical Interface, Temperature alarm, and VSWR minor alarm, respectively. These are also the anomalies that the model detects well.
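    Since the thesis reports its results as F1-scores per alarm type, a small sketch of that metric may help; the toy label vectors below are invented for illustration.

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall, the per-alarm metric the
    thesis reports (e.g. 0.96 for temperature alarms)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Toy predictions against ground-truth alarm labels (1 = alarm raised).
print(round(f1_score([1, 0, 1, 1, 0, 0], [1, 0, 0, 1, 1, 0]), 2))  # 0.67
```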

    NeuDetect: A neural network data mining system for wireless network intrusion detection

    This thesis proposes an Intrusion Detection System, NeuDetect, which applies a neural network technique to wireless network packets captured through hardware sensors for real-time detection of anomalous packets. To address the high false alarm rate of current wireless intrusion detection systems, this thesis presents a method of applying artificial neural networks to wireless network intrusion detection. The proposed approach finds normal and anomalous patterns in preprocessed wireless packet records by comparing them with training data using the back-propagation algorithm. An anomaly score is assigned to each packet by calculating the difference between the output error and a threshold. If the anomaly score is positive, the packet is flagged as anomalous; if it is negative, the packet is flagged as normal. If the anomaly score is zero or close to zero, the packet is flagged as an unknown attack and sent back to the training process for re-evaluation.
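    The decision rule described above can be sketched directly; the eps band used to decide that a score is "close to zero" is an assumed parameter, not taken from the thesis.

```python
def classify_packet(output_error, threshold, eps=1e-3):
    """NeuDetect-style rule as described in the abstract: score = network
    output error minus threshold; positive -> anomalous, negative -> normal,
    near zero -> unknown attack (re-queued for retraining)."""
    score = output_error - threshold
    if abs(score) <= eps:
        return "unknown"
    return "anomalous" if score > 0 else "normal"

print(classify_packet(0.9, 0.5))     # anomalous
print(classify_packet(0.2, 0.5))     # normal
print(classify_packet(0.5001, 0.5))  # unknown
```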

    Behaviour based anomaly detection system for smartphones using machine learning algorithm

    In this research, we propose a novel, platform-independent behaviour-based anomaly detection system for smartphones. The fundamental premise of this system is that every smartphone user has unique usage patterns, and that by modelling these patterns into a profile we can uniquely identify users. To evaluate this hypothesis, we conducted an experiment in which a data collection application was developed to accumulate a real-life dataset consisting of application usage statistics, various system metrics, and contextual information from smartphones. Descriptive statistical analysis was performed on our dataset to identify patterns of dissimilarity in the smartphone usage of the participants of our experiment. Following this analysis, a machine learning algorithm was applied to the dataset to create a baseline usage profile for each participant. These profiles were compared to monitor deviations from the baseline in a series of tests that we conducted to determine the profiling accuracy. In the first test, seven days of smartphone usage data consisting of eight features and an observation interval of one hour were used, and an accuracy range of 73.41% to 100% was achieved; 8 out of 10 user profiles were more than 95% accurate. The second test utilised the entire dataset and achieved accuracies of 44.50% to 95.48%. Not only are these results very promising for differentiating participants based on their usage, but the implications of this research are also far reaching, as our system can be extended to provide transparent, continuous user authentication on smartphones or to work as a risk scoring engine for other intrusion detection systems.
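    As a rough sketch of checking observations against a baseline profile, the toy function below counts how many features stay within a relative tolerance of the user's baseline; the features, values, and tolerance are all invented for illustration and are not the thesis's ML model.

```python
def match_fraction(baseline, observed, tolerance=0.2):
    """Fraction of features whose observed value lies within a relative
    tolerance of the baseline profile; 1.0 means a perfect profile match."""
    within = sum(
        1 for b, o in zip(baseline, observed)
        if abs(o - b) <= tolerance * max(abs(b), 1e-9)
    )
    return within / len(baseline)

# Hypothetical hourly usage features (screen-on minutes, app launches, ...).
baseline = [30.0, 12.0, 5.0, 200.0]
same_user = [33.0, 11.0, 5.5, 190.0]
other_user = [5.0, 40.0, 1.0, 20.0]
print(match_fraction(baseline, same_user))   # 1.0
print(match_fraction(baseline, other_user))  # 0.0
```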

    Anomaly detection in unknown environments using wireless sensor networks

    This dissertation addresses the problem of distributed anomaly detection in Wireless Sensor Networks (WSN). A challenge of designing such systems is that the sensor nodes are battery powered, often have different capabilities, and generally operate in dynamic environments. Programming such sensor nodes at a large scale can be a tedious job if the system is not carefully designed. Data modeling in distributed systems is important for determining the normal operation mode of the system. Being able to model the expected sensor signatures for typical operations greatly simplifies the human designer’s job by enabling the system to autonomously characterize the expected sensor data streams. This, in turn, allows the system to perform autonomous anomaly detection to recognize when unexpected sensor signals are detected. This type of distributed sensor modeling can be used in a wide variety of sensor networks, such as detecting the presence of intruders, detecting sensor failures, and so forth. The advantage of this approach is that the human designer does not have to characterize the anomalous signatures in advance. The contributions of this approach include: (1) providing a way for a WSN to autonomously model sensor data with no prior knowledge of the environment; (2) enabling a distributed system to detect anomalies in both sensor signals and temporal events online; (3) providing a way to automatically extract semantic labels from temporal sequences; (4) providing a way for WSNs to save communication power by transmitting compressed temporal sequences; (5) enabling the system to detect time-related anomalies without prior knowledge of abnormal events; and (6) providing a novel missing data estimation method that utilizes temporal and spatial information to replace missing values. The algorithms have been designed, developed, evaluated, and validated experimentally on synthesized data and in real-world sensor network applications.
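    Contribution (6), estimating a missing reading from temporal and spatial information, can be sketched as a simple weighted blend; the equal weighting and the mean estimators are assumptions for illustration, not the dissertation's actual method.

```python
def estimate_missing(temporal_history, spatial_neighbors, alpha=0.5):
    """Blend a temporal estimate (mean of the node's recent readings) with a
    spatial estimate (mean of neighbouring nodes' current readings)."""
    t_est = sum(temporal_history) / len(temporal_history)
    s_est = sum(spatial_neighbors) / len(spatial_neighbors)
    return alpha * t_est + (1 - alpha) * s_est

# Node's last three temperature readings vs. two neighbours' current readings.
print(estimate_missing([20.0, 21.0, 22.0], [23.0, 21.0]))  # 21.5
```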

    Deviation Point Curriculum Learning for Trajectory Outlier Detection in Cooperative Intelligent Transport Systems

    Cooperative Intelligent Transport Systems (C-ITS) are emerging in the field of transportation systems and can be used to provide safety, sustainability, efficiency, communication, and cooperation between vehicles, roadside units, and traffic command centres. With improved network structure and traffic mobility, a large amount of trajectory-based data is generated. Trajectory-based knowledge graphs help to give semantic and interconnection capabilities to intelligent transport systems. Prior works consider a trajectory's deviation from a single point as marking an individual outlier. However, in real-world transportation systems, trajectory outliers can occur in groups, e.g., a group of vehicles that all deviate at a single point because of street maintenance in the vicinity of the intelligent transportation system. In this paper, we propose a trajectory deviation point embedding and deep clustering method for outlier detection. We first use the network structure and each node's neighbours to construct a structural embedding that preserves node relationships. We then implement a method to learn the latent representation of deviation points in road network structures. A hierarchical multilayer graph is designed with a biased random walk to generate a set of sequences, which are used to tune the node embeddings. The embedding values of the nodes are then averaged to obtain the trip embedding. Finally, an LSTM-based pairwise classification method is used to cluster the embeddings with similarity-based measures. The results obtained from the experiments indicate that the proposed learned trajectory embedding captures structural identity and increases the F-measure by 5.06% and 2.4% compared with the generic Node2Vec and Struct2Vec methods.
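    The biased random walk used to generate sequences can be sketched in the style of Node2Vec's second-order walk; the toy graph, the return parameter p, and the in-out parameter q below are illustrative assumptions, not the paper's configuration.

```python
import random

def biased_walk(graph, start, length, p=1.0, q=0.5, rng=None):
    """Node2Vec-style second-order random walk: 1/p weights the step back to
    the previous node, 1/q weights steps to nodes two hops from it, so q < 1
    biases the walk outward (exploration) and q > 1 keeps it local."""
    rng = rng or random.Random(0)  # seeded for reproducibility
    walk = [start]
    while len(walk) < length:
        cur = walk[-1]
        nbrs = graph[cur]
        if len(walk) == 1:
            walk.append(rng.choice(nbrs))  # first step is unbiased
            continue
        prev = walk[-2]
        weights = []
        for n in nbrs:
            if n == prev:
                weights.append(1.0 / p)        # return to previous node
            elif n in graph[prev]:
                weights.append(1.0)            # one hop from previous node
            else:
                weights.append(1.0 / q)        # two hops from previous node
        walk.append(rng.choices(nbrs, weights=weights)[0])
    return walk

# Tiny road-network fragment (adjacency lists); such walks would feed the
# embedding step that is later averaged into a trip embedding.
g = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1], 3: [1]}
print(len(biased_walk(g, 0, 8)))  # 8
```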