    Multi-paradigm frameworks for scalable intrusion detection

    Research in network security and intrusion detection systems (IDSs) has typically focused on small or artificial data sets. Tools are developed that work well on these data sets but have trouble meeting the demands of real-world, large-scale network environments. Addressing this problem requires improving the foundations of intrusion detection systems: data management, IDS accuracy, and alert volume.

    We address data management of network security and intrusion detection information by presenting a database mediator system that provides single-query access via a domain-specific query language. Results are returned as XML using web services, allowing analysts to access information from remote networks in a uniform manner. The system also provides scalable capture of log data for multi-terabyte datasets.

    Next, we address IDS alert accuracy by building an agent-based framework that uses web services to make the system easy to deploy and capable of spanning network boundaries. Agents in the framework process IDS alerts managed by a central alert broker. The broker can define processing hierarchies by assigning dependencies among agents to achieve scalability. The framework can also be used for event correlation, that is, gathering information relevant to an IDS alert.

    Lastly, we address alert volume by presenting an approach to alert correlation that is IDS independent. Using correlated events gathered in our agent framework, we build a feature vector for each IDS alert representing the network traffic profile of the internal host at the time of the alert. This feature vector is used as a statistical fingerprint in a clustering algorithm that groups related alerts. We analyze our results with a combination of domain expert evaluation and feature selection.
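    The clustering step is the most concrete part of this approach, so a brief sketch may help. The Python below is a minimal illustration under assumed details: the fingerprint features (flow rate, mean bytes per flow, distinct peers and ports) and the choice of DBSCAN are stand-ins, since the abstract does not name the exact features or clustering algorithm.

```python
# Hedged sketch: group IDS alerts by the traffic profile of the internal
# host at alert time. Feature names and the clustering choice (DBSCAN)
# are illustrative assumptions, not the thesis's implementation.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import DBSCAN

alerts = [
    # one synthetic traffic-profile fingerprint per alert:
    # [flows/min, mean bytes per flow, distinct peers, distinct ports]
    {"id": "a1", "profile": [120, 540.0, 8, 3]},
    {"id": "a2", "profile": [118, 555.0, 9, 3]},
    {"id": "a3", "profile": [5, 90.0, 1, 1]},
]

# standardize so no single feature dominates the distance metric
X = StandardScaler().fit_transform(np.array([a["profile"] for a in alerts]))
labels = DBSCAN(eps=0.8, min_samples=2).fit_predict(X)

for alert, label in zip(alerts, labels):
    print(alert["id"], "cluster" if label >= 0 else "noise", label)
```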

    Archiving the Relaxed Consistency Web

    The historical, cultural, and intellectual importance of archiving the web has been widely recognized. Today, all countries with high Internet penetration rates have established high-profile archiving initiatives to crawl and archive the fast-disappearing web content for long-term use. As web technologies evolve, established web archiving techniques face challenges. This paper focuses on the potential impact of relaxed-consistency web design on crawler-driven web archiving. Relaxed-consistency websites may disseminate, albeit ephemerally, inaccurate and even contradictory information. If captured and preserved in web archives as historical records, such information will degrade the overall archival quality. To assess the extent of such quality degradation, we build a simplified feed-following application and simulate its operation with synthetic workloads. The results indicate that a non-trivial portion of a relaxed-consistency web archive may contain observable inconsistency, and the inconsistency window may extend significantly longer than that observed at the data store. We discuss the nature of such quality degradation and propose a few possible remedies.
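    The measurement idea lends itself to a compact sketch. The simulation below is a hedged approximation of the paper's setup, not its actual workload: the replication lag, write rate, and crawl rate are invented parameters, and a capture counts as inconsistent whenever some committed write has not yet reached the serving replica.

```python
# Hedged sketch: simulate a crawler archiving a feed whose replicas
# converge lazily, and measure how often a capture contains observably
# inconsistent (stale) entries. All numeric parameters are synthetic.
import random

random.seed(1)
REPLICATION_LAG = 5.0    # seconds until a write reaches the serving replica
SIM_END = 10_000.0

# generate write times at the data store (Poisson arrivals)
writes, t = [], 0.0
while t < SIM_END:
    t += random.expovariate(1 / 30.0)   # one write every ~30 s on average
    writes.append(t)

stale_captures, captures, t = 0, 0, 0.0
while t < SIM_END:
    t += random.expovariate(1 / 60.0)   # crawler visits every ~60 s on average
    captures += 1
    # inconsistent if any write is committed but not yet visible to the crawler
    if any(w <= t < w + REPLICATION_LAG for w in writes):
        stale_captures += 1

print(f"{stale_captures}/{captures} captures observed inconsistency")
```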

    Intrusion Alert Quality Framework For Security False Alert Reduction

    This thesis investigates the design and implementation of a framework to prepare security alerts with verified data quality metrics, enrich alerts with these metrics, and finally format the alerts in a standard format suitable for consumption by high-level alert analysis procedures.
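    A minimal sketch of the pipeline the abstract describes (verify, enrich, normalize) might look as follows. The metric definitions (field completeness, timeliness) and the output schema are illustrative assumptions; the thesis's actual metrics and target format are not specified here.

```python
# Hedged sketch of the framework's flow: compute quality metrics for a
# raw alert, enrich the alert with them, and emit a normalized record.
# Metric definitions and output fields are assumptions for illustration.
from datetime import datetime, timezone

REQUIRED = ("signature", "src_ip", "dst_ip", "timestamp")

def quality_metrics(alert: dict) -> dict:
    # completeness: fraction of required fields actually present
    present = sum(1 for f in REQUIRED if alert.get(f) is not None)
    # timeliness: how old the alert is when processed, in seconds
    age = (datetime.now(timezone.utc) - alert["timestamp"]).total_seconds()
    return {"completeness": present / len(REQUIRED), "timeliness_s": age}

def normalize(alert: dict) -> dict:
    return {
        "analyzer": alert.get("sensor", "unknown"),
        "classification": alert.get("signature"),
        "source": alert.get("src_ip"),
        "target": alert.get("dst_ip"),
        "create_time": alert["timestamp"].isoformat(),
        "quality": quality_metrics(alert),  # enrichment for later analysis
    }

raw = {"signature": "ET SCAN nmap", "src_ip": "10.0.0.5",
       "dst_ip": "10.0.0.9", "timestamp": datetime.now(timezone.utc),
       "sensor": "snort-edge"}
print(normalize(raw))
```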

    SOA-enabled compliance management: Instrumenting, assessing, and analyzing service-based business processes

    Facilitating compliance management, that is, assisting a company's management in conforming to laws, regulations, standards, contracts, and policies, is a pressing but non-trivial task. The service-oriented architecture (SOA) has evolved traditional, manual business practices into modern, service-based IT practices that ease part of the problem: the systematic definition and execution of business processes. This, in turn, facilitates the online monitoring of system behaviors and the enforcement of allowed behaviors, all ingredients that can be used to assist compliance management on the fly during process execution. In this paper, instead of focusing on monitoring and runtime enforcement of rules or constraints, we pursue an alternative approach to compliance management in SOAs that aims at assessing and improving compliance. We propose two ingredients: (i) a model and tool to design compliant service-based processes and to instrument them in order to generate evidence of how they are executed, and (ii) a reporting and analysis suite to create awareness of a company's compliance state and to enable understanding of why and where compliance violations have occurred. Together, these ingredients result in an approach that is close to how the real stakeholders, compliance experts and auditors, actually assess the state of compliance in practice, and that is less intrusive than enforcing compliance.
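    As a rough illustration of ingredient (i), the sketch below instruments process activities so each execution emits an evidence event, then derives a simple compliance report from the log. The rule checked (an approval must precede a payment) is an invented example of the kind of constraint such tooling assesses; it is not taken from the paper.

```python
# Hedged sketch: instrument service-based process activities so each
# execution appends an evidence event, then summarize compliance state.
evidence_log = []

def instrumented(activity):
    # wrap an activity so every execution leaves an evidence record
    def run(case_id, **data):
        evidence_log.append({"case": case_id, "activity": activity, **data})
    return run

approve = instrumented("approve_invoice")
pay = instrumented("pay_invoice")

approve("case-1")
pay("case-1")
pay("case-2")  # violation: payment without a prior approval

def compliance_report(log):
    by_case = {}
    for ev in log:
        by_case.setdefault(ev["case"], []).append(ev["activity"])
    report = {}
    for case, acts in by_case.items():
        if "pay_invoice" not in acts:
            continue  # rule only constrains cases that reach payment
        ok = ("approve_invoice" in acts
              and acts.index("approve_invoice") < acts.index("pay_invoice"))
        report[case] = "compliant" if ok else "violation"
    return report

print(compliance_report(evidence_log))
```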

    Estimating the Processing Time of Process Instances in Semi-Structured Processes - A Case Study

    Performance analysis of web applications is rather difficult, since people can perform parts of an activity outside the application or get interrupted while performing an activity in the system. The lack of a performance model makes it hard to plan resources or to understand how available resources are used. In this paper, an approach for determining a performance model for semi-structured processes is applied to a case study.
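    One plausible ingredient of such a performance model can be sketched briefly: estimate an activity's effective processing time from its event timestamps while discarding long idle gaps, which likely correspond to work outside the application or interruptions. The idle threshold below is an assumed parameter, not a value from the case study.

```python
# Hedged sketch: derive effective processing time for one activity
# instance from its user-action timestamps, dropping gaps that probably
# mean the user left the application or was interrupted.
IDLE_CUTOFF = 300  # seconds; gaps longer than this are not counted as work

def processing_time(event_times):
    """event_times: sorted timestamps (seconds) of actions in one activity."""
    total = 0.0
    for prev, cur in zip(event_times, event_times[1:]):
        gap = cur - prev
        if gap <= IDLE_CUTOFF:
            total += gap  # count only plausibly continuous work
    return total

# one activity instance: steady work, a 40-minute interruption, more work
events = [0, 30, 95, 150, 2550, 2600, 2660]
print(f"estimated processing time: {processing_time(events)} s")
```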

    A Machine Learning Enhanced Scheme for Intelligent Network Management

    Versatile networking services strongly influence daily life, while the amount and diversity of services make network systems highly complex. Network scale and complexity grow with increasing infrastructure, networking functions, network slices, and the evolution of the underlying architecture. The conventional way to maintain such a large and complex platform is manual administration, which makes effective and insightful management troublesome. A feasible and promising alternative is to extract insightful information from the large volumes of network data the platform produces. The goal of this thesis is to use learning-based algorithms from the machine learning community to discover valuable knowledge in substantial network data, directly promoting intelligent management and maintenance. The thesis focuses on two management and maintenance schemes: network anomaly detection with root cause localization, and critical traffic resource control and optimization.

    First, abundant network data carries informative messages, but its heterogeneity and complexity make diagnosis challenging. For unstructured logs, abstract, formatted log templates are extracted to regularize log records. An in-depth analysis framework based on heterogeneous data is proposed to detect the occurrence of faults and anomalies. It employs representation learning to map unstructured data into numerical features and fuses the extracted features for network anomaly and fault detection; the representation learning uses word2vec-based embeddings for semantic expression.

    Next, fault and anomaly detection only reveals that an event occurred, without identifying its root cause, so fault localization is introduced to narrow down the source of systematic anomalies. The extracted features are formed into an anomaly degree, coupled with an importance-ranking method, to highlight the locations of anomalies in network systems. Two ranking modes, instantiated by PageRank and by operation errors, jointly highlight latent problem locations.

    Beyond fault and anomaly detection, network traffic engineering manages communication and computation resources to optimize the efficiency of data transfer. Especially when network traffic is constrained by communication conditions, a proactive path-planning scheme supports efficient traffic control actions. A learning-based traffic-planning algorithm is therefore proposed, based on a sequence-to-sequence model, to discover hidden, reasonable paths from abundant traffic history over a Software Defined Network architecture.

    Finally, traffic engineering based merely on empirical data is likely to produce stale, sub-optimal solutions, and may even make the situation worse. A resilient mechanism is required to adapt network flows to a dynamic environment based on context. Thus, a reinforcement learning-based scheme is put forward for dynamic data forwarding that takes network resource status into account, and it shows a clear performance improvement.

    In summary, the proposed anomaly-processing framework strengthens analysis and diagnosis for network system administrators by combining fault detection and root cause localization, and the learning-based traffic engineering improves network flow management using experience data, pointing to a promising direction for flexible traffic adjustment in ever-changing environments.
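    The root cause localization step can be sketched compactly. The snippet below illustrates the PageRank-based ranking mode under assumed inputs: the service dependency graph and the per-component anomaly degrees are synthetic, and personalized PageRank stands in for the thesis's full importance-ranking method.

```python
# Hedged sketch of root-cause ranking: score each network component by
# its measured anomaly degree, then let a personalized PageRank over an
# assumed service-dependency graph concentrate blame on likely sources.
import networkx as nx

# edge u -> v means "u depends on v"; rank mass flows toward dependencies
deps = nx.DiGraph([("web", "app"), ("app", "db"), ("app", "cache"),
                   ("web", "cache")])

# anomaly degree per component, e.g. extracted from logs (synthetic values)
anomaly = {"web": 0.9, "app": 0.7, "db": 0.1, "cache": 0.8}

ranking = nx.pagerank(deps, personalization=anomaly)
for node, score in sorted(ranking.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```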

    Multi-protocol correlation: data record analyses and correlator design

    This thesis has two main goals. The first is to design a user-configurable multi-protocol correlator and implement a prototype of that design. The second is to identify and propose a method for matching different data records from different protocols. In essence, this thesis is about the correlation of records that contain information about protocols or services generally used in telecommunication networks. Reaching these goals requires combining knowledge from the programming world with knowledge from the networking world. Correlation can be done on multiple levels: you can correlate protocol messages, and you can correlate whole calls or transactions, which allows you to perform correlation across sections of a network. We approach this problem by gathering protocol signaling data, specifications of how the protocols work, and log files with examples. With this knowledge we were able to identify many of the problem areas related to correlating the main protocols used in this thesis. We designed a configurable correlator that can be configured to overcome these problem areas, provided enough data is given. The prototype correlator was tested for both correctness and performance. Then, to validate the correctness and precision of our prototype correlator, we compared its correlation results with those obtained using Utel Systems' STINGA NGN Monitor. The comparison shows that the correlation results from our prototype correlator are satisfactory.
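    A minimal sketch of the configurable correlation idea: a rule names the fields that must match between records of two protocols and the time window within which the records must occur. The protocol names (SIP, RTP), field names, and window value below are invented for illustration and do not come from the thesis.

```python
# Hedged sketch: correlate records of two protocols according to a
# user-supplied rule (matching key fields plus a time window).
from dataclasses import dataclass

@dataclass
class Rule:
    proto_a: str    # first protocol to correlate
    proto_b: str    # second protocol to correlate
    key_a: str      # field of proto_a records that must match...
    key_b: str      # ...this field of proto_b records
    window_s: float # maximum timestamp distance between the two records

def correlate(records, rule):
    """records: list of dicts with 'proto', 'ts', and protocol fields."""
    a_side = [r for r in records if r["proto"] == rule.proto_a]
    b_side = [r for r in records if r["proto"] == rule.proto_b]
    pairs = []
    for a in a_side:
        for b in b_side:
            if (a.get(rule.key_a) == b.get(rule.key_b)
                    and abs(a["ts"] - b["ts"]) <= rule.window_s):
                pairs.append((a, b))
    return pairs

records = [
    {"proto": "SIP", "ts": 10.0, "call_id": "abc@host"},
    {"proto": "RTP", "ts": 10.4, "session": "abc@host"},
    {"proto": "RTP", "ts": 99.0, "session": "zzz@host"},
]
rule = Rule("SIP", "RTP", "call_id", "session", window_s=2.0)
print(correlate(records, rule))  # matches only the first SIP/RTP pair
```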

    Software failure prediction based on patterns of multiple-event failures

    A fundamental need in software reliability engineering is to comprehend how software systems fail, that is, to understand the dynamics that govern different types of failure manifestation. In this research, I present an exploratory study on multiple-event failures, a failure manifestation characterized by sequences of failure events that vary in length, duration, and combination of failure types. This study aims to (i) improve the understanding of multiple-event failures in real software systems, investigating their occurrences, associations, and causes; (ii) propose analysis protocols that take multiple-event failure manifestations into account; and (iii) take advantage of the sequential nature of this type of software failure to perform predictions. The failures analyzed in this research were observed empirically: in total, I analyzed 42,209 real software failures from 644 computers used in different workplaces. The major contributions of this study are a protocol to investigate the existence of patterns of failure associations; a protocol to discover patterns of failure sequences; and a prediction approach whose main concept is to calculate the probability that a certain failure event occurs within a time interval upon the occurrence of a particular pattern of preceding failures. I used three methods to tackle the prediction problem: Multinomial Logistic Regression (with and without Ridge regularization), Decision Tree, and Random Forest. These methods were chosen due to the nature of the failure data, in which failure types must be handled as categorical variables. Initially, I performed a failure association discovery analysis that included only failures from a widely used commercial off-the-shelf operating system (OS). As a result, I discovered 45 OS failure association patterns with 153,511 occurrences, composed of the same or different failure types and occurring systematically within well-established time intervals. The observed associations suggest the existence of underlying mechanisms governing these failure occurrences, which motivated improving the previous method by creating a protocol to discover patterns of failure sequences using flexible time thresholds, together with a failure prediction approach. To obtain a comprehensive view of how different software failures may affect each other, both methods were applied to three samples: the first contained only OS failures, the second only user application failures, and the third both OS and user application failures. As a result, I found 165, 480, and 640 different failure sequences, respectively, with thousands of occurrences. Finally, the proposed approach was able to predict failures with good to high accuracy (86% to 93%).
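    A hedged sketch of the prediction setup: encode the pattern of the k preceding failure types as categorical features and fit a multinomial logistic regression that predicts the next failure type. The toy failure taxonomy and the omission of the time-interval handling are simplifications; only the general formulation follows the abstract.

```python
# Hedged sketch: predict the next failure type from the pattern of the
# K preceding failure types, using one of the thesis's three methods
# (multinomial logistic regression). Data below is synthetic.
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import OneHotEncoder
from sklearn.pipeline import make_pipeline

K = 2  # length of the preceding-failure pattern (assumed)
seq = ["app_hang", "app_crash", "os_crash", "app_hang", "app_crash",
       "os_crash", "app_hang", "app_crash", "os_crash", "app_hang"]

# sliding window: features are the K previous failures, label is the next
X = [seq[i:i + K] for i in range(len(seq) - K)]
y = [seq[i + K] for i in range(len(seq) - K)]

model = make_pipeline(OneHotEncoder(handle_unknown="ignore"),
                      LogisticRegression(max_iter=1000))
model.fit(X, y)

pattern = [["app_hang", "app_crash"]]
for cls, p in zip(model.classes_, model.predict_proba(pattern)[0]):
    print(f"P(next = {cls} | pattern) = {p:.2f}")
```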