
    Network anomaly detection using management information base (MIB) network traffic variables

    In this dissertation, a hierarchical, multi-tier, multiple-observation-window network anomaly detection system (NADS) is introduced, namely the MIB Anomaly Detection (MAD) system, which is capable of detecting and diagnosing network anomalies (including network faults and Denial of Service computer network attacks) proactively and adaptively. The MAD system uses statistical models and a neural network classifier to detect network anomalies by monitoring subtle changes in network traffic patterns. Network traffic patterns are measured by monitoring the Management Information Base (MIB) II variables supplied by the Simple Network Management Protocol (SNMP). The MAD system converts the values of each monitored MIB variable, collected during each observation window, into a Probability Density Function (PDF), processes them statistically, intelligently combines the results for the individual variables, and derives a final decision. The MAD system has a distributed, hierarchical, multi-tier architecture, based on which it can report the health status of each individual network element. The inter-tier communication requires low network bandwidth, making the system usable on capacity-challenged wireless networks as well as wired networks. Efficiently and accurately modeling network traffic behavior is essential for building a NADS. In this work, a novel approach to statistically modeling network traffic measurements with high variability is introduced: the network traffic measurements are divided into three frequency segments and the data in each segment is modeled separately. Also in this dissertation, a new network traffic statistical model, i.e., the one-dimensional hyperbolic distribution, is introduced
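    As a rough sketch of the observation-window idea, the Python snippet below turns a window of MIB counter samples into an empirical PDF and compares it with a baseline window using a simple distance measure. The function name, synthetic data and total variation distance are illustrative assumptions; the dissertation's actual statistical processing and decision fusion are not reproduced here.

```python
import numpy as np

def mib_window_pdf(samples, bins):
    """Convert one observation window of a MIB counter (e.g. ifInOctets deltas)
    into an empirical probability density function (a normalized histogram)."""
    density, edges = np.histogram(np.asarray(samples, dtype=float), bins=bins, density=True)
    return density, edges

# Illustrative use: compare the current window's PDF against a baseline PDF with a
# simple distance measure; a large distance would flag the window for closer inspection.
rng = np.random.default_rng(0)
baseline = rng.normal(1000, 50, 500)    # stand-in for a "normal" traffic window
current = rng.normal(1200, 150, 500)    # stand-in for a suspicious window
edges = np.histogram_bin_edges(np.concatenate([baseline, current]), bins=32)
p, _ = mib_window_pdf(baseline, edges)
q, _ = mib_window_pdf(current, edges)
tv_distance = 0.5 * np.sum(np.abs(p - q)) * np.diff(edges)[0]  # total variation distance
print(f"distance between window PDFs: {tv_distance:.3f}")
```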

    An Investigation of the Effects of Modeling Application Workloads and Path Characteristics on Network Performance

    Network testbeds and simulators remain the dominant platforms for evaluating networking technologies today. Central to network emulation and simulation is the problem of modeling and generating realistic, synthetic Internet traffic, since the results of such experiments are valid only to the extent that the traffic generated to drive them accurately represents the traffic carried in real production networks. Modeling and generating realistic Internet traffic remains a complex and poorly understood problem in empirical networking research. When modeling production network traffic, researchers lack a clear understanding of which characteristics of the traffic must be modeled and how these traffic characteristics affect the results of their experiments. In this dissertation, we developed and analyzed a spectrum of empirically derived traffic models with varying degrees of realism. For TCP traffic, we examined several choices for modeling the internal structure of TCP connections (the pattern of request/response exchanges) and the round-trip times of connections. Using measurements from two different production networks, we constructed nine different traffic models, each embodying different choices in the modeling space, and conducted extensive experiments to evaluate these choices on a 10 Gbps laboratory testbed. As a result of this study, we demonstrate that the old adage of garbage-in, garbage-out applies to empirical networking research. We conclude that the structure of the traffic driving an experiment significantly affects the results of the experiment, and we demonstrate this by showing the effects on four key network performance metrics: connection durations, response times, router queue lengths, and the number of active connections in the network
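    The notion of modeling the internal structure of TCP connections can be illustrated with a small Python sketch that resamples request/response exchange sizes and think times to generate a synthetic connection. The distributions and field names below are made-up placeholders; the traffic models in the dissertation are derived from packet-header traces of production networks.

```python
import random

# Hypothetical empirical distributions (sizes in bytes, think times in seconds);
# the real models in the study are built from measured traces, not these stand-ins.
REQUEST_SIZES = [320, 480, 512, 700, 1460]
RESPONSE_SIZES = [1460, 4380, 8760, 14600, 64240]
THINK_TIMES = [0.01, 0.05, 0.2, 1.0, 3.0]

def synth_connection(max_epochs=10):
    """Generate one synthetic TCP connection as a list of request/response
    exchanges ("epochs"), resampled from the distributions above."""
    n_epochs = random.randint(1, max_epochs)
    return [{
        "request_bytes": random.choice(REQUEST_SIZES),
        "response_bytes": random.choice(RESPONSE_SIZES),
        "think_time_s": random.choice(THINK_TIMES),
    } for _ in range(n_epochs)]

if __name__ == "__main__":
    for exchange in synth_connection():
        print(exchange)
```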

    Wavelet methods and statistical applications: network security and bioinformatics

    Wavelet methods possess versatile properties for statistical applications. We explore the advantages of using wavelets in analyses in two different research areas. First, we develop an integrated tool for online detection of network anomalies. We consider statistical change point detection algorithms, both for local changes in the variance and for jump detection, and propose modified versions of these algorithms based on moving-window techniques. We investigate their performance on simulated data and on network traffic data with several superimposed attacks. All detection methods are based on wavelet packet transformations. We also propose a Bayesian model for the analysis of high-throughput data where the outcome of interest has a natural ordering. The method provides a unified approach for identifying relevant markers and predicting class memberships. This is accomplished by building a stochastic search variable selection method into an ordinal model. We apply the methodology to the analysis of proteomic studies in prostate cancer. We explore wavelet-based techniques to remove noise from the protein mass spectra. The goal is to identify protein markers associated with prostate-specific antigen (PSA) level, an ordinal diagnostic measure currently used to stratify patients into different risk groups
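    A minimal sketch of a moving-window change detector on wavelet coefficients is shown below (Python with PyWavelets). It scans the finest-scale detail coefficients with two adjacent windows and flags positions where their variance ratio exceeds a threshold. This is a simplified stand-in, assuming a plain discrete wavelet transform and a variance-ratio rule; the thesis's algorithms operate on wavelet packet transformations and use more refined change point statistics.

```python
import numpy as np
import pywt  # PyWavelets

def moving_window_variance_shift(signal, wavelet="db4", window=64, threshold=3.0):
    """Flag candidate variance change points in the finest-scale wavelet
    detail coefficients using two adjacent moving windows."""
    details = pywt.wavedec(signal, wavelet, level=1)[-1]  # finest detail coefficients
    alarms = []
    for i in range(window, len(details) - window):
        left = np.var(details[i - window:i])
        right = np.var(details[i:i + window])
        ratio = max(left, right) / max(min(left, right), 1e-12)
        if ratio > threshold:
            alarms.append(i)
    return alarms

# Illustrative run: white noise whose variance jumps halfway through the record.
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(0, 1, 2048), rng.normal(0, 4, 2048)])
print("candidate change points (coefficient index):", moving_window_variance_shift(x)[:5])
```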

    Certificate status information distribution and validation in vehicular networks

    Vehicular ad hoc networks (VANETs) are emerging as a functional technology for providing a wide range of applications to vehicles and passengers. Ensuring secure functioning is one of the prerequisites for deploying reliable VANETs. The basic solution envisioned to achieve these requirements is to use digital certificates linked to a user by a trusted third party. These certificates can then be used to sign information. Most of the existing solutions manage these certificates by means of a central Certification Authority (CA). According to the IEEE 1609.2 standard, vehicular networks will rely on the public key infrastructure (PKI). In a PKI, a CA issues an authentic digital certificate for each node in the network. Therefore, efficient certificate management is crucial for the robust and reliable operation of any PKI. A critical part of any certificate-management scheme is the revocation of certificates. The process of distributing certificate status information, as well as the revocation process itself, is an open research problem for VANETs. In this thesis, firstly, we analyze the revocation process itself and develop an accurate and rigorous model for certificate revocation. One of the key findings of our analysis is that the certificate revocation process is statistically self-similar. As none of the currently common formal models for revocation is able to capture the self-similar nature of real revocation data, we develop an ARFIMA model that recreates this pattern. We show that traditional mechanisms that aim to scale could benefit from this model to improve their updating strategies. Secondly, we analyze how to deploy a certificate status checking service for mobile networks and propose a new criterion based on a risk metric to evaluate cached status data. With this metric, the PKI is able to encode information about the revocation process in the standard certificate revocation lists. Thus, users can evaluate a risk function in order to estimate whether a certificate has been revoked while there is no connection to a status checking server. Moreover, we also propose a systematic methodology to build a fuzzy system that assists users in the decision-making process related to certificate status checking. Thirdly, we propose two novel mechanisms for distributing and validating certificate status information (CSI) in VANETs. The first mechanism is a collaborative certificate status checking mechanism based on the use of an extended CRL. The main advantage of this extended CRL is that the road-side units and repository vehicles can build an efficient structure based on an authenticated hash tree to respond to status checking requests inside the VANET, saving time and bandwidth. The second mechanism aims to optimize the trade-off between the bandwidth necessary to download the CSI and the freshness of the CSI. This mechanism is based on the use of a hybrid delta-CRL scheme and Merkle hash trees, so that the risk of operating with unknown revoked certificates remains below a threshold during the validity interval of the base CRL, and CAs have the ability to manage this risk by setting the size of the delta-CRLs. For each of these mechanisms, we carry out a security analysis and a performance evaluation to demonstrate their security and efficiency. Finally, we also analyze the impact of the revocation service on certificate prices. We model the behavior of an oligopoly of risk-averse certificate providers that issue digital certificates to clients facing identical independent risks. We find the equilibrium of the Bertrand game. In this equilibrium, we prove that certificate providers that offer better revocation information are able to charge higher prices for their certificates without sacrificing market share in favor of the other oligopolists.
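    A toy sketch of the authenticated (Merkle) hash tree idea behind the extended-CRL mechanism is given below in Python: a repository builds a hash tree over revoked certificate serial numbers and answers a status checking request with a short membership proof that can be verified against the tree root. The serial numbers, hash choice (SHA-256) and tree layout are illustrative assumptions and do not reproduce the exact construction or message formats proposed in the thesis.

```python
import hashlib

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_merkle_tree(leaves):
    """Build a Merkle hash tree bottom-up; returns a list of levels
    (level 0 = leaf hashes, last level = [root])."""
    level = [_h(leaf) for leaf in leaves]
    levels = [level]
    while len(level) > 1:
        if len(level) % 2:                              # duplicate last node on odd levels
            level = level + [level[-1]]
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def merkle_proof(levels, index):
    """Collect the sibling hashes needed to authenticate leaf `index`."""
    proof = []
    for level in levels[:-1]:
        if len(level) % 2:
            level = level + [level[-1]]
        sibling = index ^ 1
        proof.append((sibling < index, level[sibling]))  # (sibling is on the left?, hash)
        index //= 2
    return proof

def verify(leaf, proof, root):
    node = _h(leaf)
    for sibling_is_left, sibling in proof:
        node = _h(sibling + node) if sibling_is_left else _h(node + sibling)
    return node == root

# Illustrative use with made-up revoked certificate serial numbers.
revoked = [b"serial-0001", b"serial-0042", b"serial-0137", b"serial-0999"]
levels = build_merkle_tree(revoked)
root = levels[-1][0]
proof = merkle_proof(levels, 1)
print("proof verifies:", verify(b"serial-0042", proof, root))
```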

    Pattern Recognition

    Pattern recognition is a very wide research field. It involves factors as diverse as sensors, feature extraction, pattern classification, decision fusion, applications and others. The signals processed are commonly one-, two- or three-dimensional; the processing is done in real time or takes hours and days; some systems look for one narrow object class, while others search huge databases for entries with at least a small amount of similarity. No single person can claim expertise across the whole field, which develops rapidly, updates its paradigms and encompasses several philosophical approaches. This book reflects this diversity by presenting a selection of recent developments within the area of pattern recognition and related fields. It covers theoretical advances in classification and feature extraction as well as application-oriented works. The authors of these 25 works present and advocate recent achievements of their research related to the field of pattern recognition

    Telecommunications Networks

    This book guides readers from the basics of rapidly emerging networks to more advanced concepts and future expectations of telecommunications networks. It identifies and examines the most pressing research issues in telecommunications, and it contains chapters written by leading researchers, academics and industry professionals. Telecommunications Networks - Current Status and Future Trends covers surveys of recent publications that investigate key areas of interest such as IMS, eTOM, 3G/4G, optimization problems, modeling, simulation, quality of service, etc. This book, which is suitable for both PhD and master's students, is organized into six sections: New Generation Networks, Quality of Services, Sensor Networks, Telecommunications, Traffic Engineering and Routing

    Workload Modeling for Computer Systems Performance Evaluation


    Closing the loop: the integration of long-term ambient vibration monitoring in structural engineering design

    This study investigated the integration of long-term monitoring into the structural engineering design process to improve the design and operation of civil structures. A survey of civil and structural engineering professionals, conducted as part of this research, identified the cost and complexity of in-situ monitoring as key barriers to its implementation in practice. The research therefore focused on the use of ambient vibration monitoring, as it offers a low-cost and unobtrusive method for instrumenting new and existing structures. The research was structured around the stages of analysing ambient vibration data using operational modal analysis (OMA), defined in this study as: i) pre-selection of analysis parameters, ii) pre-processing of the data, iii) estimation of the modal parameters, iv) identification of modes of vibration within the modal estimates, and v) using modal parameter estimates as a basis for understanding and quantifying in-service structural behaviour. A method was developed for automating the selection of the model order, the number of modes of vibration assumed to be identifiable within the measured dynamic response. This method allowed modal estimates from different structures, monitoring periods or analysis parameters to be compared, and removed part of the subjectivity identified within current OMA methods. Pre-processing of ambient acceleration responses through filtering was identified as a source of bias within OMA modal estimates. It was shown that this biasing was a result of filtering artefacts within the processed data. Two methods were proposed for removing or reducing the bias of modal estimates induced by filtering artefacts, based either on excluding sections of the response corrupted by the artefacts or on fitting the artefacts as part of the modal analysis. A new OMA technique, the short-time random decrement technique (ST-RDT), was developed on the basis of the survey of industry perceptions of long-term monitoring and the limitations of existing structural monitoring techniques identified within the literature. Key advantages of the ST-RDT are that it allows the uncertainty of modal estimates, and any changes in modal behaviour, to be quantified through subsampling theory. The ST-RDT has been extensively validated with numerical, experimental and real-world case studies, including multi-storey timber buildings and the world's first 3D-printed steel bridge. Modal estimates produced using the ST-RDT were used as a basis for developing an automated method of identifying modes of vibration using a probabilistic mixture model; identification of modes of vibration within OMA estimates was previously a specialized skill. The procedure accounts for the inherent noise associated with ambient vibration monitoring and allows the uncertainty within the modal estimates associated with each mode of vibration to be quantified. Methods of identifying, isolating and quantifying weak non-linear modal behaviour and changes in dynamic behaviour associated with changes in the distribution of mass or stiffness within a structure have been developed based on the fundamental equations of structural dynamics. These methods allow changes in dynamic behaviour associated with thermally induced changes in stiffness or changes in static loading to be incorporated within the automated identification of modes of vibration. These methods also allow ambient vibration monitoring to be used to estimate structural parameters usually measured by more complex, expensive or delicate sensors; examples include estimating the change in elastic modulus of simple structures with temperature or estimating the location and magnitude of static loads applied to a structure in service. The methods developed in this study are applicable to a wide range of structural monitoring technologies, are accessible to non-specialist audiences and may be adapted for the monitoring of any civil structure
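    As a rough illustration of the random decrement idea underlying the ST-RDT, the Python sketch below averages segments of an ambient acceleration record that start at up-crossings of a trigger level, which approximates the structure's free-decay response. It omits the short-time windowing and subsampling-based uncertainty quantification described above, and the single-degree-of-freedom simulation, trigger level and segment length are illustrative assumptions rather than values from the study.

```python
import numpy as np

def random_decrement_signature(acc, trigger, segment_len):
    """Classical random decrement: average all segments of the ambient
    acceleration record that start where the response up-crosses `trigger`.
    The average approximates the free-decay response, from which natural
    frequency and damping can then be estimated."""
    acc = np.asarray(acc, dtype=float)
    starts = np.where((acc[:-1] < trigger) & (acc[1:] >= trigger))[0] + 1
    starts = starts[starts + segment_len <= len(acc)]
    if len(starts) == 0:
        raise ValueError("no trigger crossings found")
    segments = np.stack([acc[s:s + segment_len] for s in starts])
    return segments.mean(axis=0), len(starts)

# Illustrative run: a lightly damped 2 Hz oscillator driven by white noise.
fs, n = 100.0, 60_000
rng = np.random.default_rng(1)
x = np.zeros(n)
v = 0.0
wn, zeta = 2 * np.pi * 2.0, 0.02
for i in range(1, n):  # crude semi-implicit Euler integration of the SDOF system
    a = rng.normal() - 2 * zeta * wn * v - wn**2 * x[i - 1]
    v += a / fs
    x[i] = x[i - 1] + v / fs
sig, n_triggers = random_decrement_signature(x, trigger=x.std(), segment_len=400)
print(f"averaged {n_triggers} segments; signature length {len(sig)} samples")
```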

    Vol. 15, No. 1 (Full Issue)
