
    Accessibility Degradation Prediction on LTE/SAE Network Using Discrete Time Markov Chain (DTMC) Model

    In this paper, an algorithm is proposed for predicting accessibility performance on an LTE/SAE network from relevant historical key performance indicator (KPI) data. Since three KPIs relate to accessibility, each representing a different segment, a method is proposed to map these three KPI values onto a single accessibility status. The network condition is categorized as high, acceptable, or low for each observation interval: the first state indicates that the system is running optimally, the second that performance has deteriorated and requires full attention, and the third that the system has degraded to an intolerable level. Once the state sequence has been obtained, a transition probability matrix can be derived and used to predict future conditions with a DTMC model. The result is the predicted probability of each state at a specified future time; these values are required for proactive health monitoring and fault management. Accessibility degradation prediction is then conducted using one month of measurement data from an eNodeB in the LTE network.
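    As a rough illustration of the approach described in this abstract, the sketch below estimates a transition probability matrix from a state sequence and propagates it k steps ahead. The example sequence, interval length, and uniform fallback for unobserved rows are assumptions for illustration, not details from the paper.

    ```python
    import numpy as np

    # States as described in the abstract: 0 = high, 1 = acceptable, 2 = low.
    STATES = ["high", "acceptable", "low"]

    def estimate_transition_matrix(state_seq, n_states=3):
        """Estimate the DTMC transition probability matrix by counting
        observed transitions and row-normalizing the counts."""
        counts = np.zeros((n_states, n_states))
        for s, s_next in zip(state_seq[:-1], state_seq[1:]):
            counts[s, s_next] += 1
        row_sums = counts.sum(axis=1, keepdims=True)
        # Rows with no observed transitions fall back to a uniform distribution.
        return np.where(row_sums > 0,
                        counts / np.maximum(row_sums, 1e-12),
                        1.0 / n_states)

    def predict_state_probabilities(p0, P, k):
        """Distribution over states k intervals ahead: p0 @ P^k."""
        return p0 @ np.linalg.matrix_power(P, k)

    # Hypothetical state sequence mapped from a month of KPI observations.
    seq = [0, 0, 1, 0, 1, 2, 1, 0, 0, 1, 1, 0]
    P = estimate_transition_matrix(seq)
    p0 = np.zeros(3)
    p0[seq[-1]] = 1.0  # start from the most recently observed state
    print(dict(zip(STATES, predict_state_probabilities(p0, P, k=4))))
    ```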

    Detecting and predicting outages in mobile networks with log data

    Modern cellular networks are complex systems that offer a wide range of services, and detecting anomalous events in them when they do occur is challenging. These networks are engineered for high reliability, so the data they produce is predominantly normal, with only a small proportion being anomalous. From an operations perspective, it is important to detect these anomalies in a timely manner in order to correct vulnerabilities in the network and preclude major failure events. The objective of our work is anomaly detection in cellular networks in near real time to improve network performance and reliability. We use performance data from a 4G LTE network to develop a methodology for anomaly detection in such networks. Two rigorous prediction models are proposed: a non-parametric approach (a chi-square test) and a parametric one (Gaussian mixture models). These models are trained to detect differences between distributions and to classify a target distribution as belonging to a normal or an abnormal period with high accuracy. We discuss the relative merits of the two approaches and show that both provide a more nuanced view of the network than the simple success/failure thresholds used by operators in production networks today.
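    A minimal sketch of the two kinds of detector this abstract describes, using SciPy's chi-square goodness-of-fit test and scikit-learn's GaussianMixture. The histogram bins, feature vectors, significance level, and likelihood threshold here are illustrative assumptions, not the authors' actual configuration.

    ```python
    import numpy as np
    from scipy.stats import chisquare
    from sklearn.mixture import GaussianMixture

    # Non-parametric test: does the target window's KPI histogram differ
    # from the histogram observed during a known-normal period?
    def chi_square_abnormal(normal_counts, target_counts, alpha=0.05):
        expected = normal_counts / normal_counts.sum() * target_counts.sum()
        _, p_value = chisquare(f_obs=target_counts, f_exp=expected)
        return p_value < alpha  # True -> distributions differ

    # Parametric model: fit a GMM to feature vectors from normal periods and
    # flag windows whose log-likelihood falls below a low quantile of the
    # training scores.
    def fit_gmm_detector(normal_features, n_components=3, quantile=0.01):
        gmm = GaussianMixture(n_components=n_components, random_state=0)
        gmm.fit(normal_features)
        threshold = np.quantile(gmm.score_samples(normal_features), quantile)
        return gmm, threshold

    print("chi-square abnormal:",
          chi_square_abnormal(np.array([50, 30, 20]), np.array([10, 15, 75])))

    rng = np.random.default_rng(0)
    normal = rng.normal(0.0, 1.0, size=(500, 2))  # stand-in for normal features
    gmm, thr = fit_gmm_detector(normal)
    window = rng.normal(4.0, 1.0, size=(1, 2))    # a suspicious window
    print("GMM abnormal:", bool(gmm.score_samples(window)[0] < thr))
    ```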

    Developing reliable anomaly detection system for critical hosts: a proactive defense paradigm

    Current host-based anomaly detection systems have limited accuracy and incur high processing costs. This is due to the need to process massive audit data from the critical host(s) while detecting complex zero-day attacks, which can leave only minor, stealthy and dispersed artefacts. In this research study, this observation is validated using existing datasets and state-of-the-art algorithms for constructing features from a host's audit data, such as the popular semantic-based extraction methods, and decision engines including Support Vector Machines, Extreme Learning Machines and hidden Markov models. There is a challenging trade-off between achieving accuracy at a minimum processing cost and processing massive amounts of audit data that can include complex attacks. There is also a lack of a realistic experimental dataset reflecting the normal and abnormal activities of current real-world computers. This thesis investigates new methodologies for host-based anomaly detection systems, with the specific aim of improving accuracy at a minimum processing cost. It addresses several challenges: complex attacks that, in some cases, are visible only through a quantified computing resource such as program execution times; the processing of massive amounts of audit data; the unavailability of a realistic experimental dataset; and the automatic minimization of the false positive rate in the face of dynamic normal activities. The study provides three original and significant contributions that represent a marked advance in the field's body of knowledge. The first major contribution is the generation and release of a realistic intrusion detection dataset, together with a metric based on fuzzy qualitative modeling for embedding realism in a dataset's design process and for assessing that quality in existing or future datasets. The second key contribution is the construction and evaluation of hidden host features that identify the subtle differences between the normal and abnormal artefacts of host activity at a minimum processing cost. For Linux, these features include the frequencies and ranges, frequency-domain representations and Gaussian interpretations of system call identifiers together with their execution times; for Windows, a count of the distinct core Dynamic Link Library calls is identified as a hidden host feature. The final key contribution is the development of two new anomaly-based statistical decision engines that capitalize on some of the suggested hidden features to detect anomalies reliably. The first engine, which includes a forensic module, is based on stochastic theories including hierarchical hidden Markov models, while the second is modeled using Gaussian mixture modeling and correntropy. The results demonstrate that the proposed host features and engines meet the identified challenges.
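    Of the techniques this abstract names, correntropy is the most self-contained to illustrate. The sketch below computes a correntropy similarity score between an observed window of execution times and a normal profile; the profile values, kernel bandwidth, and decision threshold are hypothetical, and the thesis's actual engine combines this measure with Gaussian mixture modeling.

    ```python
    import numpy as np

    def gaussian_kernel(u, sigma):
        """Gaussian kernel used by the correntropy similarity measure."""
        return np.exp(-u**2 / (2.0 * sigma**2)) / (np.sqrt(2.0 * np.pi) * sigma)

    def correntropy(x, y, sigma=0.5):
        """Sample estimate of correntropy V(X, Y) = E[k_sigma(X - Y)].
        Higher values indicate the two sequences are more similar."""
        x = np.asarray(x, dtype=float)
        y = np.asarray(y, dtype=float)
        return float(gaussian_kernel(x - y, sigma).mean())

    # Hypothetical use: compare a window of system-call execution times (ms)
    # against a normal-profile template; a low score suggests an anomaly.
    normal_profile = np.array([1.0, 1.2, 0.9, 1.1, 1.0])
    observed = np.array([1.1, 1.0, 3.5, 1.1, 1.2])  # one unusually long call
    score = correntropy(observed, normal_profile)
    # The 0.7 threshold is an assumption; in practice it would be calibrated
    # on correntropy scores computed over known-normal windows.
    print("anomalous window:", score < 0.7)
    ```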