
    Information spreading during emergencies and anomalous events

    The most critical time for information to spread is in the aftermath of a serious emergency, crisis, or disaster. Individuals affected by such situations can now turn to an array of communication channels, from mobile phone calls and text messages to social media posts, when alerting social ties. These channels drastically improve the speed at which information spreads during a time-sensitive event, and they provide extant records of human dynamics during and after the event. Retrospective analysis of such anomalous events provides researchers with a class of "found experiments" that may be used to better understand social spreading. In this chapter, we study information spreading due to a number of emergency events, including the Boston Marathon Bombing and a plane crash at a western European airport. We also contrast the information that may be gleaned from social media data with that from mobile phone data, and we estimate the rate of anomalous events in a mobile phone dataset using a proposed anomaly detection method.
    Comment: 19 pages, 11 figures
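    The abstract does not describe the proposed anomaly detection method itself. A common baseline for spotting anomalous events in aggregate mobile-phone activity is to flag time bins whose call volume deviates strongly from the typical level; the sketch below is only a hypothetical illustration of that idea, with hourly binning and a z-score threshold as assumptions rather than anything taken from the chapter.

```python
import numpy as np

def detect_anomalous_hours(call_counts, z_threshold=3.0):
    """Flag hours whose call volume deviates strongly from the typical level.

    call_counts : 1-D array of calls per hour (hypothetical aggregate data).
    z_threshold : number of standard deviations treated as anomalous
                  (an assumed value, not taken from the chapter).
    """
    counts = np.asarray(call_counts, dtype=float)
    mean, std = counts.mean(), counts.std()
    if std == 0:
        return np.array([], dtype=int)        # flat series: nothing to flag
    z_scores = (counts - mean) / std
    return np.where(np.abs(z_scores) > z_threshold)[0]

# Example: one simulated emergency-driven surge in an otherwise ordinary week
rng = np.random.default_rng(0)
counts = rng.poisson(lam=200, size=168)        # one week of hourly call counts
counts[30] += 800                              # spike at hour 30
print(detect_anomalous_hours(counts))          # -> [30]
```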

    A Grammatical Inference Approach to Language-Based Anomaly Detection in XML

    False positives are a problem in anomaly-based intrusion detection systems. To counter this issue, we discuss anomaly detection for the eXtensible Markup Language (XML) from a language-theoretic viewpoint. We argue that many XML-based attacks target the syntactic level, i.e. the tree structure or element content, and that syntax validation of XML documents reduces the attack surface. XML offers so-called schemas for validation, but in the real world schemas are often unavailable, ignored, or too general. In this work-in-progress paper we describe a grammatical inference approach that learns an automaton from example XML documents for detecting documents with anomalous syntax. We discuss properties and the expressiveness of XML to understand the limits of learnability. Our contributions are an XML Schema compatible lexical datatype system to abstract content in XML and an algorithm that learns visibly pushdown automata (VPA) directly from a set of examples. The proposed algorithm does not require the tree representation of XML, so it can process large documents or streams. The resulting deterministic VPA then allows stream validation of documents to recognize deviations in the underlying tree structure or datatypes.
    Comment: Paper accepted at First Int. Workshop on Emerging Cyberthreats and Countermeasures ECTCM 201
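    The paper's contribution is an algorithm that learns a visibly pushdown automaton from examples; the sketch below is not that learning algorithm. It is a minimal illustration of the stream-validation idea the VPA enables: XML events are consumed one at a time, element nesting is tracked with a stack (the pushdown part), and each element is checked against a parent-to-children map standing in for the learned model. The function name, the dictionary-based model, and the example tags are all hypothetical.

```python
import io
import xml.etree.ElementTree as ET

def stream_validate(xml_bytes, allowed_children):
    """Toy stream validation of XML structure against a learned parent->children map.

    allowed_children : dict mapping an element tag to the set of child tags seen
                       in example documents (a stand-in for the learned VPA).
    Returns the first structural anomaly found, or None if the stream conforms.
    """
    stack = []                                    # the "pushdown" part: open elements
    for event, elem in ET.iterparse(io.BytesIO(xml_bytes), events=("start", "end")):
        if event == "start":
            if stack:
                parent = stack[-1]
                if elem.tag not in allowed_children.get(parent, set()):
                    return f"unexpected <{elem.tag}> inside <{parent}>"
            stack.append(elem.tag)
        else:                                     # "end": close tag pops the stack
            stack.pop()
    return None

# Structure "learned" from benign examples: <order> contains <item> and <total>
model = {"order": {"item", "total"}, "item": set(), "total": set()}
print(stream_validate(b"<order><item/><total/></order>", model))    # -> None
print(stream_validate(b"<order><script/><total/></order>", model))  # -> unexpected <script> inside <order>
```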

    Polymorphism and danger susceptibility of system call DASTONs

    We have proposed the metaphor “DAnger Susceptible daTa codON” (DASTON) for data subject to processing by a Danger Theory (DT) based Artificial Immune System (DAIS). DASTONs are data chunks or data point sets that actively take part in producing “danger”; here we abstract “danger” as the required outcome. To take a closer look at the metaphor, this paper develops further biological abstractions for the DASTON. The susceptibility of a DASTON is an important parameter for generating a dangerous outcome. In biology, the susceptibility of a host to pathogenic activities (potentially dangerous activities) is related to polymorphism. Interestingly, the results of experiments conducted on system call DASTONs are in close accordance with the biological theory of polymorphism and susceptibility. This shows that computational data (system calls in this case) exhibit biological properties when processed from a DT point of view

    AI Solutions for MDS: Artificial Intelligence Techniques for Misuse Detection and Localisation in Telecommunication Environments

    This report considers the application of Artificial Intelligence (AI) techniques to the problem of misuse detection and misuse localisation within telecommunications environments. A broad survey of techniques is provided that covers, inter alia, rule-based systems, model-based systems, case-based reasoning, pattern matching, clustering and feature extraction, artificial neural networks, genetic algorithms, artificial immune systems, agent-based systems, data mining and a variety of hybrid approaches. The report then considers the central issue of event correlation, which is at the heart of many misuse detection and localisation systems. The notion of being able to infer misuse by the correlation of individual temporally distributed events within a multiple data stream environment is explored, along with a range of techniques covering model-based approaches, `programmed' AI and machine learning paradigms. It is found that, in general, correlation is best achieved via rule-based approaches, but that these suffer from a number of drawbacks, such as the difficulty of developing and maintaining an appropriate knowledge base, and the lack of ability to generalise from known misuses to new, unseen misuses. Two distinct approaches are evident. One attempts to encode knowledge of known misuses, typically within rules, and uses this to screen events. This approach cannot generally detect misuses for which it has not been programmed, i.e. it is prone to issuing false negatives. The other attempts to `learn' the features of event patterns that constitute normal behaviour and, by observing patterns that do not match expected behaviour, detect when a misuse has occurred. This approach is prone to issuing false positives, i.e. inferring misuse from innocent patterns of behaviour that the system was not trained to recognise. Contemporary approaches are seen to favour hybridisation, often combining detection or localisation mechanisms for both abnormal and normal behaviour, the former to capture known cases of misuse, the latter to capture unknown cases. In some systems, these mechanisms even work together to update each other to increase detection rates and lower false-positive rates. It is concluded that hybridisation offers the most promising future direction, but that a rule or state based component is likely to remain, being the most natural approach to the correlation of complex events. The challenge, then, is to mitigate the weaknesses of canonical programmed systems such that learning, generalisation and adaptation are more readily facilitated
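    The report's conclusion about hybridisation, pairing a rule-based component for known misuses with a learned model of normal behaviour for unknown ones, can be illustrated with a very small sketch. The rules, the single duration feature, and the z-score threshold below are hypothetical placeholders chosen for the example, not anything taken from the report.

```python
from statistics import mean, stdev

# Hypothetical rule base for known misuses (the "programmed" component).
KNOWN_MISUSE_RULES = [
    lambda e: e["failed_logins"] > 10,             # brute-force style pattern
    lambda e: e["destination"] == "premium-rate",  # classic toll-fraud indicator
]

class HybridDetector:
    """Toy hybrid misuse detector: rules catch known misuses, while an anomaly
    score over call duration flags behaviour that deviates from the learned norm."""

    def __init__(self, normal_durations, z_threshold=3.0):
        # "Learn" normal behaviour from historical call durations (assumed data).
        self.mu = mean(normal_durations)
        self.sigma = stdev(normal_durations)
        self.z_threshold = z_threshold

    def classify(self, event):
        if any(rule(event) for rule in KNOWN_MISUSE_RULES):
            return "known misuse"                  # rule path: few false negatives for known cases
        z = abs(event["duration"] - self.mu) / self.sigma
        if z > self.z_threshold:
            return "possible unknown misuse"       # anomaly path: may be a false positive
        return "normal"

detector = HybridDetector(normal_durations=[60, 90, 120, 80, 110, 95])
print(detector.classify({"failed_logins": 20, "destination": "local", "duration": 70}))   # known misuse
print(detector.classify({"failed_logins": 0, "destination": "local", "duration": 900}))   # possible unknown misuse
print(detector.classify({"failed_logins": 0, "destination": "local", "duration": 100}))   # normal
```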

    Unfolding the procedure of characterizing recorded ultra low frequency, kHz and MHz electromagnetic anomalies prior to the L'Aquila earthquake as pre-seismic ones. Part I

    Ultra low frequency, kHz and MHz electromagnetic anomalies were recorded prior to the L'Aquila catastrophic earthquake that occurred on April 6, 2009. The main aims of this contribution are: (i) To suggest a procedure for the designation of detected EM anomalies as seismogenic ones. We do not expect it to be possible to provide a succinct and solid definition of a pre-seismic EM emission. Instead, we attempt, through a multidisciplinary analysis, to provide elements of a definition. (ii) To link the detected MHz and kHz EM anomalies with the corresponding last stages of the L'Aquila earthquake preparation process. (iii) To put forward physically meaningful arguments to support a way of quantifying the time to global failure and the identification of distinguishing features beyond which the evolution towards global failure becomes irreversible. The whole effort is unfolded in two consecutive parts. We clarify that we try to specify not only whether a single EM anomaly is pre-seismic in itself, but mainly whether a combination of kHz, MHz, and ULF EM anomalies can be characterized as pre-seismic