1,127 research outputs found

    Automatic Detection of Mass Outages in Radio Access Networks

    Fault management in mobile networks is required for detecting, analysing, and fixing problems appearing in the mobile network. When a large problem appears in the mobile network, multiple alarms are generated from the network elements. Traditionally, the Network Operations Center (NOC) processes the reported failures, creates trouble tickets for problems, and performs a root cause analysis. However, alarms do not reveal the root cause of the failure, and the correlation of alarms is often complicated to determine. If the network operator can correlate alarms and manage clustered groups of alarms instead of separate ones, it saves costs, preserves the availability of the mobile network, and improves the quality of service. Operators may have several electricity providers, and the network topology is not correlated with the electricity topology. Additionally, network sites and other network elements are not evenly distributed across the network. Hence, we investigate the suitability of density-based clustering methods to detect mass outages and perform alarm correlation to reduce the number of created trouble tickets. This thesis focuses on assisting root cause analysis and detecting correlated power and transmission failures in the mobile network. We implement a Mass Outage Detection Service and design a custom density-based algorithm. Our service performs alarm correlation and creates clusters of possible power and transmission mass outage alarms. We have filed a patent application based on the work done in this thesis. Our results show that we are able to detect mass outages in real time from the data streams. The results also show that the detected clusters reduce the number of created trouble tickets and help reduce the costs of running the network. The number of trouble tickets decreases by 4.7-9.3% for the alarms we process in the service in the tested networks. When we consider only alarms included in the mass outage groups, the reduction is over 75%. Therefore, continuing to use, test, and develop the implemented Mass Outage Detection Service is beneficial for operators and automated NOCs.
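
    As an illustration of the general approach (not the thesis's custom algorithm or its parameters), a density-based clusterer such as DBSCAN can group alarms that are close in both space and time into mass-outage candidates; the coordinates, time scaling, and eps/min_samples values below are assumptions for the sketch.

```python
# Illustrative sketch: grouping alarms into candidate mass-outage clusters
# with DBSCAN. The feature scaling and eps/min_samples values are assumptions,
# not the thesis's custom algorithm or its parameters.
import numpy as np
from sklearn.cluster import DBSCAN

# Each alarm: (site_x_km, site_y_km, timestamp_s) -- synthetic example data.
alarms = np.array([
    [10.0, 20.0, 0],
    [10.2, 20.1, 30],
    [10.1, 19.9, 45],
    [55.0, 40.0, 500],   # isolated alarm, should stay unclustered
])

# Scale time so that roughly 60 s counts like 1 km of distance.
features = alarms.copy()
features[:, 2] /= 60.0

# eps: neighbourhood radius; min_samples: alarms needed to form a dense core.
labels = DBSCAN(eps=1.0, min_samples=3).fit_predict(features)

for alarm, label in zip(alarms, labels):
    tag = "noise" if label == -1 else f"mass-outage candidate {label}"
    print(alarm, "->", tag)
```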

    Fade Depth Prediction Using Human Presence for Real Life WSN Deployment

    A current problem in real-life WSN deployment is determining the fade depth in an indoor propagation scenario for link power budget analysis (the fade margin parameter). Because human presence impacts the performance of wireless networks, this paper proposes a statistical approach for shadow fading prediction using various real-life parameters. The parameters considered in this paper include statistically mapped human presence and the number of people over time compared to the received signal strength. The paper proposes an empirical fade depth prediction model derived from a comprehensive set of measured data in an indoor propagation scenario. It is shown that the measured fade depth has a high correlation with the number of people in non-line-of-sight conditions, giving a solid foundation for the fade depth prediction model. In line-of-sight conditions this correlation is significantly lower. By using the proposed model in real-life WSN deployment scenarios, data loss and power consumption can be reduced by means of intelligent planning and design of the Wireless Sensor Network.
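
    For context, a simplified link power budget shows where the fade margin enters and why a larger predicted fade depth matters; all numbers below are assumed example values, not measurements or model coefficients from the paper.

```python
# Illustrative link budget sketch: the fade margin (fade depth) is the term the
# paper aims to predict from human presence. All numbers below are assumed
# example values, not measurements or coefficients from the paper.

def received_power_dbm(tx_power_dbm, tx_gain_dbi, rx_gain_dbi,
                       path_loss_db, fade_margin_db):
    """Received power after subtracting path loss and a fade margin."""
    return tx_power_dbm + tx_gain_dbi + rx_gain_dbi - path_loss_db - fade_margin_db

RX_SENSITIVITY_DBM = -95.0  # typical low-power radio sensitivity (assumed)

# Hypothetical margins: a larger fade depth when more people are present (NLOS).
for people, fade_margin_db in [(0, 6.0), (5, 10.0), (15, 16.0)]:
    p_rx = received_power_dbm(tx_power_dbm=0.0, tx_gain_dbi=2.0, rx_gain_dbi=2.0,
                              path_loss_db=80.0, fade_margin_db=fade_margin_db)
    ok = p_rx >= RX_SENSITIVITY_DBM
    print(f"{people:2d} people: margin {fade_margin_db:4.1f} dB, "
          f"Prx {p_rx:6.1f} dBm, link {'OK' if ok else 'FAILS'}")
```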

    Cell fault management using machine learning techniques

    This paper surveys the literature relating to the application of machine learning to fault management in cellular networks from an operational perspective. We summarise the main issues as 5G networks evolve, and their implications for fault management. We describe the relevant machine learning techniques through to deep learning, and survey the progress which has been made in their application, based on the building blocks of a typical fault management system. We review recent work to develop the abilities of deep learning systems to explain and justify their recommendations to network operators. We discuss forthcoming changes in network architecture which are likely to impact fault management and offer a vision of how fault management systems can exploit deep learning in the future. We identify a series of research topics for further study in order to achieve this.

    A New Paradigm for Proactive Self-Healing in Future Self-Organizing Mobile Cellular Networks

    Mobile cellular network operators spend nearly a quarter of their revenue on network management and maintenance. Remarkably, a significant proportion of that budget is spent on resolving outages that degrade or disrupt cellular services. Historically, operators have mainly relied on human expertise to identify, diagnose and resolve such outages while also compensating for them in the short term. However, with ambitious quality of experience expectations from 5th generation and beyond mobile cellular networks spurring research towards technologies such as ultra-dense heterogeneous networks and millimeter wave spectrum utilization, discovering and compensating coverage lapses in future networks will be a major challenge. Numerous studies have explored heuristic, analytical and machine learning-based solutions to autonomously detect, diagnose and compensate cell outages in legacy mobile cellular networks, a branch of research known as self-healing. This dissertation focuses on self-healing techniques for future mobile cellular networks, with special focus on the outage detection and avoidance components of self-healing. Network outages can be classified into two primary types: 1) full and 2) partial. Full outages result from failed soft or hard components of network entities, while partial outages are generally a consequence of parametric misconfiguration. To this end, chapter 2 of this dissertation is dedicated to a detailed survey of research on detecting, diagnosing and compensating full outages, as well as a detailed analysis of studies on proactive outage avoidance schemes and their challenges. A key observation from the analysis of state-of-the-art outage detection techniques is their dependence on full network coverage data, susceptibility to noise or randomness in the data, and inability to characterize outages in both the spatial and temporal domains. To overcome these limitations, chapters 3 and 4 present two unique and novel outage detection techniques. Chapter 3 presents an outage detection technique based on entropy field decomposition, which combines information field theory and entropy spectrum pathways theory and is robust to noise variance. Chapter 4 presents a deep learning neural network algorithm which is robust to data sparsity and compares it with entropy field decomposition and other state-of-the-art machine learning-based outage detection algorithms, including support vector machines, K-means clustering, independent component analysis and deep auto-encoders. Based on the insights obtained regarding the impact of partial outages, chapter 5 presents a complete framework for 5th generation and beyond mobile cellular networks that is designed to avoid partial outages caused by parametric misconfiguration. The power of the proposed framework is demonstrated by leveraging it to design a solution that tackles one of the most common problems associated with ultra-dense heterogeneous networks, namely imbalanced load among small and macro cells, and poor resource utilization as a consequence. The optimization problem is formulated as a function of two hard parameters, namely antenna tilt and transmit power, and a soft parameter, cell individual offset, that directly affect coverage, capacity and load. The resulting solution is a combination of the otherwise conflicting coverage and capacity optimization and load balancing self-organizing network functions.
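
    As a rough illustration of the machine-learning baselines mentioned above (not the dissertation's entropy field decomposition or deep learning detectors), per-cell KPI vectors can be clustered and the degraded cluster flagged as a possible outage; the synthetic data and K-means setup below are assumptions.

```python
# Illustrative sketch of ML-based cell outage detection: cluster per-cell KPI
# vectors and flag the degraded cluster. This stands in for a simple baseline
# (K-means), not for the dissertation's methods. All data are synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Per-cell features: [mean reported RSRP (dBm), number of user reports].
healthy = np.column_stack([rng.normal(-85, 3, 50), rng.normal(200, 20, 50)])
outage = np.column_stack([rng.normal(-110, 3, 5), rng.normal(15, 5, 5)])
cells = np.vstack([healthy, outage])

# Normalise features so both contribute comparably, then cluster into 2 groups.
z = (cells - cells.mean(axis=0)) / cells.std(axis=0)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(z)

# The cluster with the lower mean RSRP is treated as the outage candidate set.
outage_cluster = int(np.argmin([cells[labels == k, 0].mean() for k in range(2)]))
print("cells flagged as possible outage:", np.where(labels == outage_cluster)[0])
```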

    Outage-Watch: Early Prediction of Outages using Extreme Event Regularizer

    Cloud services are omnipresent, and critical cloud service failure is a fact of life. In order to retain customers and prevent revenue loss, it is important to provide high reliability guarantees for these services. One way to do this is by predicting outages in advance, which can help in reducing the severity as well as the time to recovery. It is difficult to forecast critical failures due to the rarity of these events. Moreover, critical failures are ill-defined in terms of observable data. Our proposed method, Outage-Watch, defines critical service outages as deteriorations in the Quality of Service (QoS) captured by a set of metrics. Outage-Watch detects such outages in advance by using the current system state to predict whether the QoS metrics will cross a threshold and initiate an extreme event. A mixture of Gaussians is used to model the distribution of the QoS metrics for flexibility, and an extreme event regularizer helps improve learning in the tail of the distribution. An outage is predicted if the probability of any one of the QoS metrics crossing its threshold changes significantly. Our evaluation on a real-world SaaS company dataset shows that Outage-Watch significantly outperforms traditional methods with an average AUC of 0.98. Additionally, Outage-Watch detects all the outages exhibiting a change in service metrics and reduces the Mean Time To Detection (MTTD) of outages by up to 88% when deployed in an enterprise cloud-service system, demonstrating the efficacy of our proposed method. Comment: Accepted to ESEC/FSE 202
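
    A minimal sketch of the core idea, assuming synthetic latency data, a two-component mixture, and an arbitrary threshold: fit a Gaussian mixture to a QoS metric and score the probability of exceeding the threshold. The extreme event regularizer and the paper's full model are not reproduced here.

```python
# Minimal sketch of the core idea behind Outage-Watch: model a QoS metric with
# a Gaussian mixture and estimate the probability that it exceeds a threshold.
# The extreme-event regularizer and the paper's full model are not reproduced;
# the data, threshold, and number of components are assumptions.
import numpy as np
from scipy.stats import norm
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Synthetic latency samples (ms): mostly nominal, occasionally elevated.
latency = np.concatenate([rng.normal(120, 10, 950), rng.normal(300, 40, 50)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(latency.reshape(-1, 1))

THRESHOLD_MS = 250.0  # assumed QoS threshold for an "extreme" latency event

# P(latency > threshold) under the fitted mixture: weighted sum of component tails.
means = gmm.means_.ravel()
stds = np.sqrt(gmm.covariances_).ravel()
tail_prob = float(np.sum(gmm.weights_ * norm.sf(THRESHOLD_MS, loc=means, scale=stds)))
print(f"P(latency > {THRESHOLD_MS:.0f} ms) ~ {tail_prob:.3f}")
```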

    Predicting Electrical Faults in Power Distribution Network

    Electricity is becoming increasingly important in modern civilization, and as a result, the emphasis on and use of power infrastructure is gradually expanding. Simultaneously, investment and distribution modes are shifting from large-scale centralized generation and pure consumption towards decentralized generators and highly sophisticated consumers. This transformation puts further strain on old infrastructure, necessitating significant expenditures in future years to ensure a consistent supply. Modern diagnostic and prediction technologies can help to maximize the use of the current grid while lowering the probability of faults. This study discusses some of the local grid difficulties as well as a prospective probabilistic model for maintenance and failure. To provide an effective and convenient power supply to consumers, the high-voltage network must be protected and maintained under fault conditions. Most fault identification and localization approaches rely on observations of electrical quantities such as real and reactive power at converters, obtained from metering and ground measurements delivered over communication networks. This paper provides a thorough examination of the mechanisms for fault detection, diagnosis, and localization in overhead lines. It then makes suggestions about approaches that can be incorporated to predict foreseeable faults in the electrical network. Of the classifiers evaluated, Random Forest, XGBoost, and Decision Tree produce high accuracies, while Logistic Regression and SVM produce more modest, realistic accuracy results.
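
    As a hedged illustration of such a classifier comparison (not the paper's data or pipeline), the models named above can be cross-validated on a synthetic fault dataset; XGBoost is omitted here to keep the sketch dependency-free, and all features and labels are assumptions.

```python
# Illustrative comparison of fault-prediction classifiers on synthetic data.
# The features, labels, and class balance are assumptions, not the paper's data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for grid measurements (e.g. currents, voltages) vs. fault label.
X, y = make_classification(n_samples=1000, n_features=10, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)

models = {
    "Random Forest": RandomForestClassifier(random_state=0),
    "Decision Tree": DecisionTreeClassifier(random_state=0),
    "Logistic Regression": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name:20s} mean accuracy: {scores.mean():.3f}")
```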

    Endless Data

    Small and Medium Enterprises (SMEs), as well as micro teams, face an uphill task when delivering software to the Cloud. While rapid release methods such as Continuous Delivery can speed up the delivery cycle, software quality, application uptime and information management remain key concerns. This work looks at four aspects of software delivery: crowdsourced testing, Cloud outage modelling, collaborative chat discourse modelling, and collaborative chat discourse segmentation. For each aspect, we consider business-related questions around how to improve software quality and gain more significant insights into collaborative data while respecting the rapid release paradigm.