
    An Immunity-Based Anomaly Detection System with Sensor Agents

    This paper proposes an immunity-based anomaly detection system with sensor agents, built on the specificity and diversity of the immune system. Each agent is specialized to react to the behavior of a specific user, and multiple diverse agents jointly decide whether observed behavior is normal or abnormal. Conventional systems have used only a single sensor to detect anomalies, whereas the immunity-based system makes use of multiple sensors, which leads to improvements in detection accuracy. In addition, we propose an evaluation framework for the anomaly detection system that is capable of evaluating the differences in detection accuracy between internal and external anomalies. This paper focuses on anomaly detection in users' command sequences on UNIX-like systems. In experiments, the immunity-based system outperformed some of the best conventional systems.
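
    The multi-sensor voting idea can be illustrated with a minimal sketch (not the paper's implementation): each hypothetical SensorAgent profiles one user's command frequencies, and an ensemble of such agents takes a majority vote. The threshold, user names, and sample command lists below are invented purely for illustration.

```python
from collections import Counter

class SensorAgent:
    """Profiles one user's command frequencies and flags unfamiliar behaviour."""
    def __init__(self, user, threshold=0.5):
        self.user = user
        self.profile = Counter()
        self.threshold = threshold  # fraction of unseen commands tolerated

    def train(self, commands):
        self.profile.update(commands)

    def is_anomalous(self, commands):
        # Fraction of commands this agent has never seen from "its" user
        unseen = sum(1 for c in commands if self.profile[c] == 0)
        return unseen / max(len(commands), 1) > self.threshold

def ensemble_decision(agents, commands):
    """Majority vote across diverse agents, as the abstract describes."""
    votes = sum(agent.is_anomalous(commands) for agent in agents)
    return votes > len(agents) / 2

# Hypothetical usage: agents trained on different users' command histories
agents = [SensorAgent("alice"), SensorAgent("bob"), SensorAgent("carol")]
agents[0].train(["ls", "cd", "vim", "make", "git"])
agents[1].train(["ls", "grep", "awk", "ssh"])
agents[2].train(["cd", "ls", "python", "git"])
print(ensemble_decision(agents, ["nc", "chmod", "wget", "curl"]))  # likely True
```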

    Hidden Markov Models with Random Restarts vs Boosting for Malware Detection

    Effective and efficient malware detection is at the forefront of research into building secure digital systems. As with many other fields, malware detection research has seen a dramatic increase in the application of machine learning algorithms. One machine learning technique that has been widely used in pattern matching in general, and malware detection in particular, is hidden Markov models (HMMs). HMM training is based on a hill climb, and hence we can often improve a model by training multiple times with different initial values. In this research, we compare boosted HMMs (using AdaBoost) to HMMs trained with multiple random restarts, in the context of malware detection. These techniques are applied to a variety of challenging malware datasets. We find that random restarts perform surprisingly well in comparison to boosting. Only in the most difficult "cold start" cases (where training data is severely limited) does boosting appear to offer sufficient improvement to justify its higher computational cost in the scoring phase.
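
    The random-restart strategy is straightforward to sketch. The snippet below is a minimal illustration under assumptions: it uses the hmmlearn package with Gaussian emissions on a placeholder feature matrix, whereas the paper trains discrete-observation HMMs on malware-derived sequences; only the restart-and-keep-best mechanic is the point here.

```python
import numpy as np
from hmmlearn import hmm

def train_with_restarts(X, lengths, n_states=2, n_restarts=10):
    """Run Baum-Welch from several random initialisations and keep the
    model with the highest log-likelihood on the training data."""
    best_model, best_score = None, -np.inf
    for seed in range(n_restarts):
        model = hmm.GaussianHMM(n_components=n_states, n_iter=100,
                                random_state=seed)
        model.fit(X, lengths)
        score = model.score(X, lengths)
        if score > best_score:
            best_model, best_score = model, score
    return best_model, best_score

# Hypothetical usage on toy data standing in for per-sample malware features
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))   # 200 observations, 4 features
lengths = [50, 50, 50, 50]      # four sequences of 50 observations each
model, score = train_with_restarts(X, lengths)
print(f"best training log-likelihood: {score:.1f}")
```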

    MALDI-TOF: A Rapid Identification of Dairy Pathogens

    This is a field validation study that benchmarks a new methodology for identifying microorganisms isolated from dairy farms and critical for safety and quality (Matrix-Assisted Laser Desorption Ionization Time-of-Flight Mass Spectrometry, or MALDI-TOF) against proven methods. MALDI-TOF is a relatively new molecular technique that is highly advantageous in terms of cost effectiveness, ease of sample preparation, turnaround time, and accessibility of result analysis. Although already successfully deployed in clinical diagnostics, it has not yet been evaluated for agricultural applications. In the dairy industry, mastitis causes the greatest financial loss, and a rapid diagnostic method such as MALDI-TOF will assist mastitis control and prevention programs, as well as the sanitation and safety of dairy farms and processing facilities. In the present study, we prospectively compared MALDI-TOF MS to the conventional 16S rRNA sequencing method for the identification of environmental mastitis isolates (481) and thermoduric isolates of pasteurized milk (248). Among the 481 environmental isolates, MALDI-TOF MS putatively identified 454 (94.4%) to the genus level and 426 (88.6%) to the species level; no reliable identification was obtained for 17 (3.5%), and 27 (5.6%) results were discordant. Future studies can help overcome the limitations of the MALDI database, and additional sample preparation steps might reduce the number of discordant identifications. In conclusion, our results show that MALDI-TOF MS is a fast and reliable technique with the potential to replace conventional identification methods for most dairy pathogens routinely isolated from milk and dairy products. Its adoption will strengthen the capacity, quality, and possibly the scope of diagnostic services supporting the dairy industry.

    Supervised fault detection using unstructured server-log data to support root cause analysis

    Fault detection is one of the most important aspects of telecommunication networks. Given the growing scale and complexity of communication networks, maintenance and debugging have become extremely complicated and expensive. In complex systems, a higher rate of failure, due to the large number of components, has increased the importance of both fault detection and root cause analysis. Fault detection for communication networks is based on analyzing system logs from servers or other components in a network in order to determine whether there is any unusual activity. However, detecting and diagnosing problems in such huge systems are challenging tasks for humans, since the amount of information that needs to be processed goes far beyond what can be handled manually. Therefore, there is an immense demand for automatic processing of datasets to extract the data relevant for detecting anomalies. In a Big Data world, using machine learning techniques to analyze log data automatically has become increasingly popular. Machine-learning-based fault detection does not require prior knowledge about the types of problems and does not rely on explicit programming (such as rule-based systems); it can improve its performance automatically by learning from experience. In this thesis, we investigate supervised machine learning approaches as a fast and efficient way to detect known faults from unstructured log data. As the aim is to distinguish abnormal cases from normal ones, anomaly detection is treated as binary classification. As a first step, numerical features are extracted from event logs using windowing together with bag-of-words representations, given their textual characteristics (high dimensionality and sparseness). We focus on linear classifiers such as the single-layer perceptron and Support Vector Machines as promising candidates for supervised fault detection on network-based server-log data. To build an approach that generalizes well for detecting known faults, two important factors are investigated: dataset size and fault duration. Based on the experimental results concerning these two factors, a two-layer classification is proposed to overcome the windowing and feature-extraction challenges posed by long-lasting faults. The thesis proposes a novel approach for collecting feature vectors for the two layers: the first layer detects the starting line of each fault repetition as well as the fault duration, and the models obtained from the first layer are then used to create feature vectors for the second layer. To evaluate the learning algorithms and select the best detection model, cross-validation and F-scores are used, because traditional metrics such as accuracy and error rate are not well suited to imbalanced datasets. The experimental results show that the proposed SVM classifier provides the best performance independent of fault duration, while factors such as the labelling rule and feature-space reduction have no significant effect on performance. In addition, the results show that the two-layer classification system can improve fault-detection performance; however, a better-suited approach for collecting feature vectors over smaller time spans needs further investigation.
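
    The first-layer pipeline the abstract describes (windowing, bag-of-words features, a linear SVM, F-score under cross-validation) can be sketched as follows. This is an illustrative sketch rather than the thesis code: the window size, toy log lines, and labels are invented, and scikit-learn stands in for whatever tooling the thesis actually used.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def windows(log_lines, size=3):
    """Join consecutive log lines into fixed-size windows (one document each)."""
    return [" ".join(log_lines[i:i + size])
            for i in range(0, len(log_lines) - size + 1, size)]

# Toy labelled data: 1 = window overlaps a known fault, 0 = normal operation
log_lines = (["link up on eth0", "heartbeat ok", "sync complete"] * 20 +
             ["link down on eth0", "retransmit timeout", "session dropped"] * 20)
X_text = windows(log_lines)
y = [0] * 20 + [1] * 20

vectorizer = CountVectorizer()   # sparse bag-of-words features
X = vectorizer.fit_transform(X_text)

clf = LinearSVC()                # linear SVM suits high-dimensional sparse text
scores = cross_val_score(clf, X, y, cv=5, scoring="f1")
print("mean F1 across folds:", scores.mean())
```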

    Modeling Deception for Cyber Security

    In the era of software-intensive, smart and connected systems, the growing power and sophistication of cyber attacks poses increasing challenges to software security. The reactive posture of traditional security mechanisms, such as anti-virus and intrusion detection systems, has not been sufficient to combat the wide range of advanced persistent threats that currently jeopardize systems operation. To mitigate these threats, more active defensive approaches are necessary. Such approaches rely on the concept of actively hindering and deceiving attackers. Deceptive techniques provide additional defense by thwarting attackers' advances through the manipulation of their perceptions. Manipulation is achieved through the use of deceitful responses, feints, misdirection, and other falsehoods in a system. Of course, such deception mechanisms may result in side effects that must be handled. Current methods for planning deception chiefly attempt to bridge military deception to cyber deception, providing only high-level instructions that largely ignore deception as part of the software security development life cycle. Consequently, little practical guidance is provided on how to engineer deception-based techniques for defense. This PhD thesis contributes a systematic approach to specifying and designing cyber deception requirements, tactics, and strategies. The approach consists of (i) multi-paradigm modeling for representing deception requirements, tactics, and strategies, (ii) a reference architecture to support the integration of deception strategies into system operation, and (iii) a method to guide engineers in deception modeling. A tool prototype, a case study, and an experimental evaluation show encouraging results for the application of the approach in practice. Finally, a conceptual coverage mapping was developed to assess the expressivity of the deception modeling language created.

    The Use of Litigation Screenings in Mass Torts: A Formula for Fraud


    Anomaly recognition for intrusion detection on emergent monitoring environments

    Unpublished doctoral thesis, Universidad Complutense de Madrid, Facultad de Informática, Departamento de Ingeniería del Software e Inteligencia Artificial, defended on 19/12/2018. The security of information and cyberspace has become an essential component of the support that guarantees progress towards the main challenges posed by the information society and new technologies. Despite progress in this research field, the effectiveness of attacks against information systems has increased dramatically in recent years. This is due to several reasons: firstly, more and more users make use of information technologies to carry out activities that involve the exchange of sensitive data. On the other hand, attackers have an increasing number of means at their disposal for executing intrusion attempts. Finally, the evolution of the monitored environments is of particular relevance; fostered by technological advances, it gives rise to much more sophisticated computer systems, with greater processing capacity, which are able to handle massive information provided by sources of varying nature...

    Serous business: delineating the broad spectrum of diseases with subretinal fluid in the macula

    A wide range of ocular diseases can present with serous subretinal fluid in the macula and therefore clinically mimic central serous chorioretinopathy (CSC). In this manuscript, we categorise the diseases and conditions that are part of the differential diagnosis into 12 main pathogenic subgroups: neovascular diseases, vitelliform lesions, inflammatory diseases, ocular tumours, haematological malignancies, paraneoplastic syndromes, genetic diseases, ocular developmental anomalies, medication-related conditions and toxicity-related diseases, rhegmatogenous retinal detachment and tractional retinal detachment, retinal vascular diseases, and miscellaneous diseases. In addition, we describe 2 new clinical pictures associated with macular subretinal fluid accumulation, namely serous maculopathy with absence of retinal pigment epithelium (SMARPE) and serous maculopathy due to aspecific choroidopathy (SMACH). Differentiating between these various diseases and CSC can be challenging, and obtaining the correct diagnosis can have immediate therapeutic and prognostic consequences. Here, we describe the key differential diagnostic features of each disease within this clinical spectrum, including representative case examples. Moreover, we discuss the pathogenesis of each disease in order to facilitate the differentiation from typical CSC.