
    A framework for modelling mobile radio access networks for intelligent fault management


    An intelligent alarm management system for large-scale telecommunication companies

    This paper introduces an intelligent system that performs alarm correlation and root cause analysis. The system is designed to operate in large-scale heterogeneous networks from telecommunications operators. The proposed architecture includes a rules management module based on data mining (to generate the rules) and reinforcement learning (to improve rule selection) algorithms. In this work, we focus on the design and development of the rule generation part and test it using a large real-world dataset containing alarms from a Portuguese telecommunications company. The correlation engine achieved promising results, measured by a compression rate of 70% and assessed in real time by experienced network administration staff.
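    To make the correlation and compression-rate ideas concrete, here is a minimal sketch; it is not the paper's implementation, and the field names ("device", "alarm_type"), the 30-second window, and the grouping rule are all illustrative assumptions.

```python
# Sketch: group alarms from the same device that arrive within a short
# window of the group's first alarm, then report the compression rate
# (fraction of raw alarms absorbed into correlated incidents).
from datetime import datetime, timedelta

alarms = [
    {"device": "bts-17", "alarm_type": "LINK_DOWN",   "time": datetime(2023, 1, 1, 10, 0, 5)},
    {"device": "bts-17", "alarm_type": "CELL_OUTAGE", "time": datetime(2023, 1, 1, 10, 0, 9)},
    {"device": "bts-17", "alarm_type": "CELL_OUTAGE", "time": datetime(2023, 1, 1, 10, 0, 12)},
    {"device": "rnc-02", "alarm_type": "HIGH_TEMP",   "time": datetime(2023, 1, 1, 10, 3, 0)},
]

WINDOW = timedelta(seconds=30)  # assumed correlation window

def correlate(alarms):
    """Group alarms raised on the same device within WINDOW of the
    group's first alarm; each group is presented as one incident."""
    groups = []
    open_groups = {}  # device -> (group start time, group)
    for a in sorted(alarms, key=lambda a: a["time"]):
        entry = open_groups.get(a["device"])
        if entry and a["time"] - entry[0] <= WINDOW:
            entry[1].append(a)
        else:
            group = [a]
            groups.append(group)
            open_groups[a["device"]] = (a["time"], group)
    return groups

groups = correlate(alarms)
compression = 1 - len(groups) / len(alarms)
print(f"{len(alarms)} alarms -> {len(groups)} incidents "
      f"(compression rate {compression:.0%})")
```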

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining, and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, 'programmed' AI, and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the inability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to 'learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even update each other to increase detection rates and lower false positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule- or state-based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems so that learning, generalisation, and adaptation are more readily facilitated.
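    The two approaches the report contrasts, and the hybridisation it favours, can be illustrated with a toy sketch (an assumption of ours, not anything from the report): a signature component screens for one known misuse pattern, while an anomaly component flags event n-grams never seen during normal operation.

```python
# Toy hybrid misuse detector: signature check first (known misuse,
# avoids false negatives for programmed cases), then anomaly check
# against learned normal behaviour (catches unknown cases, at the
# cost of possible false positives). All event names are invented.
KNOWN_MISUSE = {("login_fail", "login_fail", "login_fail")}  # one known signature

def train_normal(event_log, n=2):
    """Learn the set of n-grams of events observed during normal operation."""
    return {tuple(event_log[i:i + n]) for i in range(len(event_log) - n + 1)}

def detect(window, normal_ngrams, n=2):
    if tuple(window[-3:]) in KNOWN_MISUSE:
        return "alert: known misuse (signature match)"
    if tuple(window[-n:]) not in normal_ngrams:
        return "alert: anomaly (behaviour unseen in training)"
    return "ok"

normal = train_normal(["login_ok", "read", "write", "logout", "login_ok", "read"])
print(detect(["login_fail", "login_fail", "login_fail"], normal))  # signature hit
print(detect(["login_ok", "format_disk"], normal))                 # anomaly
print(detect(["login_ok", "read"], normal))                        # ok
```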

    Supporting Telecommunication Alarm Management System with Trouble Ticket Prediction

    Fault alarm data emanating from heterogeneous telecommunication network services and infrastructures are exploding with network expansion. Managing and tracking the alarms with Trouble Tickets using manual or expert rule-based methods has become challenging due to the increasing complexity of Alarm Management Systems and the demand for highly trained experts. As the size and complexity of networks grow immensely, identifying semantically identical alarms, generated by heterogeneous network elements from diverse vendors, with data-driven methodologies has become imperative to enhance efficiency. In this paper, data-driven Trouble Ticket prediction models are proposed to support Alarm Management Systems. To improve performance, feature extraction from related historical alarm streams, using a sliding time window and feature engineering, is also introduced. The models were trained and validated with a dataset provided by the largest telecommunications provider in Italy. The experimental results showed the promising efficacy of the proposed approach in suppressing false positive alarms through Trouble Ticket prediction.
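    A minimal sketch of the sliding time-window feature extraction step follows; it is our assumption of the general technique, not the paper's pipeline, and the field names ("element", "severity"), the 10-minute window, and the particular features are invented for illustration.

```python
# Sketch: summarise the alarms that preceded a given alarm within a
# sliding time window into a feature vector. Labelled with "ticket
# opened" / "no ticket", such vectors would train a standard classifier.
from datetime import datetime, timedelta

def window_features(alarm, history, window=timedelta(minutes=10)):
    """Extract features for `alarm` from the preceding alarm stream."""
    recent = [h for h in history
              if timedelta(0) <= alarm["time"] - h["time"] <= window]
    same_element = [h for h in recent if h["element"] == alarm["element"]]
    return {
        "n_recent": len(recent),                # burst size in the window
        "n_same_element": len(same_element),    # locality of the fault
        "n_critical": sum(h["severity"] == "critical" for h in recent),
        "distinct_elements": len({h["element"] for h in recent}),
    }

history = [
    {"element": "olt-3",   "severity": "critical", "time": datetime(2023, 5, 1, 9, 55)},
    {"element": "olt-3",   "severity": "minor",    "time": datetime(2023, 5, 1, 9, 58)},
    {"element": "dslam-8", "severity": "major",    "time": datetime(2023, 5, 1, 9, 59)},
]
alarm = {"element": "olt-3", "severity": "critical", "time": datetime(2023, 5, 1, 10, 0)}
print(window_features(alarm, history))
```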

    FILTERING FALSE ALARMS: AN APPROACH BASED ON EPISODE MINING

    The security of computer networks is a prime concern today. Various devices and methods have been developed to offer different kinds of protection (firewalls, IDSs, antiviruses, etc.). By centrally storing and processing the signals of these devices, it is possible to detect more threats and attacks than by analysing the logs independently. The most difficult and still unsolved problem in centralized systems is the vast number of false alarms. If a harmless pattern caused by a safe operation is identified as an alarm, it is a nuisance and requires human intervention to be handled properly. In this paper we show how data mining can be used to discover the patterns that frequently cause false alarms. Due to the new requirements (events with many attributes, invertible parametric predicates), none of the previously published algorithms can be applied to our problem directly. We present the algorithm ABAMSEP, which discovers frequent alert-ended episodes. We prove that the algorithm is correct in the sense that it finds all episodes that meet the requirements of the specification.
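    The notion of a frequent alert-ended episode can be illustrated with a deliberately naive sketch; this is not the ABAMSEP algorithm (whose attribute-rich events and parametric predicates are beyond a few lines), and the stream, window, and support threshold are invented.

```python
# Sketch: count ordered event subsequences that fall within a time
# window and terminate in an alert. Frequently recurring ones are
# candidate explanations for false alarms. Brute-force enumeration is
# exponential in the window's event count; fine here, not in general.
from collections import Counter
from itertools import combinations

# (timestamp, event) stream; "ALARM" marks the alert episodes must end with
stream = [(1, "backup_start"), (2, "port_scan"), (3, "ALARM"),
          (10, "backup_start"), (11, "cfg_reload"), (12, "ALARM"),
          (20, "backup_start"), (22, "port_scan"), (23, "ALARM")]

WINDOW = 5       # max episode span in time units
MIN_SUPPORT = 2  # episode must precede at least this many alerts

episodes = Counter()
for i, (t_alert, ev) in enumerate(stream):
    if ev != "ALARM":
        continue
    # events preceding this alert inside the window, in temporal order
    prefix = [e for t, e in stream[:i] if t_alert - t <= WINDOW and e != "ALARM"]
    # every ordered subsequence of the prefix, extended with the alert,
    # is one occurrence of an alert-ended episode
    for k in range(1, len(prefix) + 1):
        for combo in combinations(prefix, k):
            episodes[combo + ("ALARM",)] += 1

for ep, support in episodes.items():
    if support >= MIN_SUPPORT:
        print(support, "x", " -> ".join(ep))
```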

    A log mining approach for process monitoring in SCADA

    SCADA (Supervisory Control and Data Acquisition) systems are used for controlling and monitoring industrial processes. We propose a methodology to systematically identify potential process-related threats in SCADA. Process-related threats occur when an attacker gains user access rights and performs actions that look legitimate but are intended to disrupt the SCADA process. To detect such threats, we propose a semi-automated approach to log processing. We conduct experiments on a real-life water treatment facility. A preliminary case study suggests that our approach is effective in detecting anomalous events that might alter the regular process workflow.
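    One simple way to realise such semi-automated log processing, sketched below under our own assumptions (the paper's exact method, user names, and actions are not specified here): mine historical logs for frequent (user, action) pairs, then flag rare combinations in new logs for an analyst to review.

```python
# Sketch: profile normal (user, action) frequencies from historical
# SCADA logs, then mark combinations rarely or never seen before as
# candidates for human review (semi-automated, not fully automatic).
from collections import Counter

history = [
    ("operator1", "set_valve_21"), ("operator1", "set_valve_21"),
    ("operator1", "ack_alarm"),    ("operator2", "set_pump_speed"),
    ("operator2", "set_pump_speed"), ("operator1", "set_valve_21"),
]
profile = Counter(history)
THRESHOLD = 2  # seen fewer times than this in history => suspicious

new_events = [("operator1", "set_valve_21"),
              ("operator2", "disable_safety_interlock")]
for user, action in new_events:
    if profile[(user, action)] < THRESHOLD:
        print(f"REVIEW: {user} performed rare action '{action}'")
    else:
        print(f"ok: {user} {action}")
```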

    Data mining in support of analysing telecommunications network logs (Tiedonlouhinta televerkkojen lokien analysoinnin tukena)

    Telecommunications network management is based on huge amounts of data that are continuously collected from elements and devices all around the network. The data is monitored and analysed to provide information for decision making in all operation functions. Knowledge discovery and data mining methods can support fast-paced decision making in network operations. In this thesis, I analyse decision making on different levels of network operations. I identify the requirements that decision making sets for knowledge discovery and data mining tools and methods, and I study the resources that are available to them. I then propose two methods for augmenting and applying frequent sets to support everyday decision making. The proposed methods are Comprehensive Log Compression for log data summarisation and Queryable Log Compression for semantic compression of log data. Finally, I suggest a model for a continuous knowledge discovery process and outline how it can be implemented and integrated into the existing network operations infrastructure.

    Data mining methods are used to analyse large volumes of data collected, for example, from retail customers, telecommunications network devices, or process-industry production plants, or extracted from genes and other objects of study. These methods efficiently detect connections between items, such as patterns of behaviour and operation and deviations from them, and the information they produce is used in business and industry to make operations more efficient, and in science to find new research results. A problem with data mining methods is their complexity and difficulty of use: to apply them, one must master their theoretical foundations and be able to correctly set tens of input parameters that affect the results. This is impractical in operational tasks, such as telecommunications network monitoring, where the monitored data volumes are enormous and the time available for a decision is short: minutes rather than hours. What should data mining methods look like so that they could be integrated, for example, into the toolset of a network administrator?

    To identify the requirements placed on data mining methods, I examine decision making at the different stages and levels of telecommunications network operation and maintenance. I build a model of this decision making and study the data mining tasks that support it and the inputs they require. I describe the expertise, resources, and tools available in an industrial environment for applying data mining methods, and derive a list of requirements for data mining methods used to support decision making. I then present two methods for analysing large event-log databases. The CLC method, without any prior training or predefined patterns, summarises a given large set of events by detecting and describing the events and event chains that frequently recur in similar form, leaving singular and rarely occurring events in the log for an expert to inspect. The QLC method, in turn, provides compact storage of logs: in some cases it stores logs in a third less space than commonly used compression methods, and the compressed log files can be queried without first being decompressed. Both CLC and QLC satisfy the identified requirements well.

    Finally, I present a process model for continuous data mining in support of industrial decision making and outline how the methods and the process can be integrated into a company's information systems. Although telecommunications network maintenance has been the research environment, both the identified requirements and the developed methods are applicable in other, similar environments where a continuous stream of log events is monitored and analysed, and where decisions must be made continuously but cannot be automated because the events and process states involved are rare or ambiguous. Examples include security logs, monitoring of the use of network services, maintenance of industrial processes, and monitoring of large-scale logistics services.
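    The core CLC idea, as the abstract describes it, can be sketched in a few lines; the templating by masking numbers and the frequency threshold below are naive assumptions of ours, standing in for the method's actual pattern detection.

```python
# Sketch: summarise frequent, similar log lines into one pattern line
# with an occurrence count; rare lines are kept verbatim so the expert
# only reads what the summary could not absorb.
from collections import Counter
import re

log = [
    "link down on port 3",
    "link down on port 7",
    "link down on port 12",
    "fan failure in rack B",
    "link down on port 3",
]

def template(line):
    """Naively templatise a line by masking numeric fields."""
    return re.sub(r"\d+", "<N>", line)

counts = Counter(template(line) for line in log)
MIN_FREQ = 3  # assumed threshold for "frequently recurring"

for tmpl, n in counts.items():
    if n >= MIN_FREQ:
        print(f"[{n}x] {tmpl}")   # summarised frequent pattern
for line in log:
    if counts[template(line)] < MIN_FREQ:
        print(line)               # rare event left for expert inspection
```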