434 research outputs found

    Enhancing Failure Propagation Analysis in Cloud Computing Systems

    Full text link
    In order to plan for failure recovery, the designers of cloud systems need to understand how their systems can potentially fail. Unfortunately, analyzing the failure behavior of such systems can be very difficult and time-consuming, due to the large volume of events, non-determinism, and reuse of third-party components. To address these issues, we propose a novel approach that combines fault injection with anomaly detection to identify the symptoms of failures. We evaluated the proposed approach in the context of the OpenStack cloud computing platform. We show that our model can significantly improve the accuracy of failure analysis in terms of false positives and negatives, with a low computational cost. Comment: 12 pages, The 30th International Symposium on Software Reliability Engineering (ISSRE 2019).
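
    A minimal sketch of this fault-injection-plus-anomaly-detection idea, not the paper's actual model: event traces from fault-free runs establish a per-event-type frequency baseline, and a fault-injected run is scanned for event types whose counts deviate from it. The traces, threshold k, and all names below are illustrative assumptions.

        from collections import Counter
        from statistics import mean, stdev

        def baseline_stats(fault_free_traces):
            """Per-event-type (mean, stdev) of counts across fault-free runs."""
            counts = [Counter(trace) for trace in fault_free_traces]
            event_types = set().union(*counts)
            return {e: (mean(c[e] for c in counts), stdev(c[e] for c in counts))
                    for e in event_types}

        def failure_symptoms(stats, injected_trace, k=3.0):
            """Event types whose count in a fault-injected run is anomalous."""
            observed = Counter(injected_trace)
            symptoms = []
            for event in set(stats) | set(observed):
                mu, sigma = stats.get(event, (0.0, 0.0))
                if abs(observed[event] - mu) > k * max(sigma, 1e-9):
                    symptoms.append(event)
            return symptoms

        # Toy usage: timeouts appear, and heartbeats vanish, only under
        # injection; both deviations are reported as failure symptoms.
        normal = [["boot", "rpc_ok", "rpc_ok"],
                  ["boot", "rpc_ok", "rpc_ok", "rpc_ok"]]
        faulty = ["boot", "rpc_timeout", "rpc_timeout", "rpc_timeout"]
        print(failure_symptoms(baseline_stats(normal), faulty))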

    A Review of Rule Learning Based Intrusion Detection Systems and Their Prospects in Smart Grids

    Get PDF

    A Review of Intrusion Detection using Deep Learning

    Get PDF
    As network applications grow rapidly, network security mechanisms require more attention to improve speed and accuracy. The development of new types of intruders poses a serious threat to network security: although many tools for network security have been developed, the rapid growth of intrusion activity remains a serious problem. Intrusion Detection Systems (IDS) are used to detect intrusive network activity. Preventing and detecting unauthorized access to a computer is an IT security concern; therefore, network security provides a measure of the level of prevention and detection that can be used to avoid suspicious users. Deep learning has been used extensively in recent years to improve network intruder detection. These techniques allow for automatic detection of network traffic anomalies. This paper presents a literature review of intrusion detection techniques.
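
    One recurring deep-learning design in this literature is the autoencoder trained on benign traffic only, so that flows it reconstructs poorly are flagged as anomalous. The sketch below illustrates that idea under the assumption of pre-normalized flow features; the random data, layer sizes, and 3-sigma threshold are placeholders, not a reference implementation.

        import torch
        import torch.nn as nn

        class FlowAutoencoder(nn.Module):
            def __init__(self, n_features=8):
                super().__init__()
                self.encoder = nn.Sequential(nn.Linear(n_features, 4), nn.ReLU(),
                                             nn.Linear(4, 2))
                self.decoder = nn.Sequential(nn.Linear(2, 4), nn.ReLU(),
                                             nn.Linear(4, n_features))

            def forward(self, x):
                return self.decoder(self.encoder(x))

        model = FlowAutoencoder()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.MSELoss()

        benign = torch.rand(256, 8)   # stand-in for normalized flow features
        for _ in range(200):          # learn to reconstruct benign traffic
            opt.zero_grad()
            loss = loss_fn(model(benign), benign)
            loss.backward()
            opt.step()

        # Score flows by reconstruction error; in practice this would run on
        # unseen traffic rather than the training set.
        with torch.no_grad():
            errors = ((model(benign) - benign) ** 2).mean(dim=1)
            threshold = errors.mean() + 3 * errors.std()
            print("flows flagged:", int((errors > threshold).sum()))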

    Malware Resistant Data Protection in Hyper-connected Networks: A survey

    Full text link
    Data protection is the process of securing sensitive information from being corrupted, compromised, or lost. A hyperconnected network, on the other hand, is a computer networking trend in which communication occurs over a network. However, what about malware? Malware is malicious software meant to penetrate private data, threaten a computer system, or gain unauthorised network access without the user's consent. Due to the increasing use of computers and dependency on electronically saved private data, malware attacks on sensitive information have become a dangerous issue for individuals and organizations across the world. Hence, malware defense is critical for keeping our computer systems and data protected. Many recent survey articles have focused on either malware detection systems or individual attack strategies. To the best of our knowledge, no survey paper covers malware attack patterns and defense strategies together. Through this survey, this paper aims to address this issue by merging diverse malicious attack patterns and machine learning (ML) based detection models for modern and sophisticated malware. In doing so, we focus on a taxonomy of malware attack patterns based on four fundamental dimensions: the primary goal of the attack, the method of attack, the targeted exposure and execution process, and the types of malware that perform each attack. Detailed information on malware analysis approaches is also investigated. In addition, existing malware detection techniques employing feature extraction and ML algorithms are discussed extensively. Finally, the paper discusses research difficulties and unsolved problems, including future research directions. Comment: 30 pages, 9 figures, 7 tables, nowhere submitted yet.
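
    The feature-extraction-plus-ML pipeline discussed in such surveys can be sketched as follows. This is an illustrative toy, not a technique from the paper: 256-bin byte histograms (a common static malware feature) feed a random forest, and the "binaries" are synthetic stand-ins generated so the two classes are separable.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split

        def byte_histogram(blob: bytes) -> np.ndarray:
            """Normalized frequency of each byte value in a binary."""
            counts = np.bincount(np.frombuffer(blob, dtype=np.uint8),
                                 minlength=256)
            return counts / max(len(blob), 1)

        rng = np.random.default_rng(0)
        # Synthetic stand-ins: "benign" blobs skew low, "malware" blobs skew high.
        benign = [bytes(rng.integers(0, 128, 512, dtype=np.uint8))
                  for _ in range(100)]
        malware = [bytes(rng.integers(64, 256, 512, dtype=np.uint8))
                   for _ in range(100)]

        X = np.array([byte_histogram(b) for b in benign + malware])
        y = np.array([0] * len(benign) + [1] * len(malware))
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                  random_state=0)

        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X_tr, y_tr)
        print("held-out accuracy:", clf.score(X_te, y_te))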

    Intelligent network intrusion detection using an evolutionary computation approach

    Get PDF
    With the enormous growth of users' reliance on the Internet, the need for secure and reliable computer networks also increases. The availability of effective automatic tools for carrying out different types of network attacks raises the need for effective intrusion detection systems. Generally, a comprehensive defence mechanism consists of three phases, namely preparation, detection and reaction. In the preparation phase, network administrators aim to find and fix security vulnerabilities (e.g., insecure protocols and vulnerable computer systems or firewalls) that can be exploited to launch attacks. Although the preparation phase increases the level of security in a network, it will never completely remove the threat of network attacks. A good security mechanism requires an Intrusion Detection System (IDS) in order to monitor security breaches when the prevention schemes in the preparation phase are bypassed. To be able to react to network attacks as fast as possible, an automatic detection system is of paramount importance: the later an attack is detected, the less time network administrators have to update their signatures and reconfigure their detection and remediation systems. An IDS is a tool for monitoring a system with the aim of detecting and alerting on intrusive activities in networks. These tools fall into two major categories: signature-based and anomaly-based. A signature-based IDS stores the signatures of known attacks in a database and discovers occurrences of attacks by monitoring and comparing each communication in the network against the database of signatures. On the other hand, mechanisms that deploy anomaly detection have a model of the normal behaviour of the system, and any significant deviation from this model is reported as an anomaly. This thesis aims at addressing the major issues in the process of developing signature-based IDSs, namely: i) their dependency on experts to create signatures, ii) the complexity of their models, iii) the inflexibility of their models, and iv) their inability to adapt to changes in the real environment and detect new attacks. To meet the requirements of a good IDS, computational intelligence methods have attracted considerable interest from the research community. This thesis explores a solution to automatically generate compact rulesets for network intrusion detection utilising evolutionary computation techniques. The proposed framework is called ESR-NID (Evolving Statistical Rulesets for Network Intrusion Detection). Using an interval-based structure, this method can be deployed for any continuous-valued input data. Therefore, by choosing appropriate statistical measures (i.e., continuous-valued features) of network traffic as the input to ESR-NID, it can effectively detect varied types of attacks, since it is not dependent on the signatures of network packets. In ESR-NID, several innovations in the genetic algorithm were developed to keep the ruleset small. A two-stage evaluation component in the evolutionary process takes the cooperation of rules into consideration and results in very compact, easily understood rulesets. The effectiveness of this approach is evaluated against several sources of data for the detection of both normal and abnormal behaviour. The results are found to be comparable to those achieved using other machine learning methods, from both GA-based and non-GA-based categories.

    One of the significant advantages of ESR-NID is that it can be tailored to specific problem domains and the characteristics of the dataset through the use of different fitness and performance functions. This makes the system a more flexible model compared to other learning techniques. Additionally, an IDS must adapt itself to a changing environment with the least amount of reconfiguration. ESR-NID uses an incremental learning approach as new flows of traffic become available. The incremental learning approach requires less storage because it only keeps the generated rules in its database, in contrast to the infinitely growing repository of raw training data required by traditional learning approaches.
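
    The interval-based structure can be pictured with a small sketch; this is an illustration of the representation described above, not the ESR-NID implementation, and the features, intervals, and fitness weighting are assumptions. Each rule is a conjunction of [low, high] intervals over continuous features, a ruleset flags a record if any rule matches, and a GA-style fitness rewards detections while punishing false alarms:

        def rule_matches(rule, record):
            """rule: {feature_index: (low, high)}; record: feature values."""
            return all(low <= record[i] <= high
                       for i, (low, high) in rule.items())

        def classify(ruleset, record):
            return any(rule_matches(rule, record) for rule in ruleset)

        def fitness(ruleset, records, labels):
            """Detection rate minus false-alarm rate, a typical GA objective."""
            preds = [classify(ruleset, r) for r in records]
            tp = sum(p and l for p, l in zip(preds, labels))
            fp = sum(p and not l for p, l in zip(preds, labels))
            attacks = max(sum(labels), 1)
            normals = max(len(labels) - sum(labels), 1)
            return tp / attacks - fp / normals

        # Toy usage over two statistical features (packet rate, mean size):
        ruleset = [{0: (100.0, 1e9)}]           # "packet rate >= 100"
        records = [[5.0, 400.0], [250.0, 60.0]]
        labels = [False, True]
        print(fitness(ruleset, records, labels))  # 1.0 on this toy data

    Because each individual in the population is a set of such human-readable intervals, the evolved rulesets stay easy to inspect, which is what the compactness objective above is meant to preserve.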

    Detecção de anomalias na partilha de ficheiros em ambientes empresariais (Anomaly Detection in File Sharing in Enterprise Environments)

    Get PDF
    File sharing is the activity of making archives (documents, videos, photos) available to other users. Enterprises use file sharing to make archives available to their employees or clients. These files can be made available through an internal network, a cloud service (external), or even Peer-to-Peer (P2P). Most of the time, the files within the file sharing service hold sensitive information that cannot be disclosed. The Equifax data breach exploited a zero-day vulnerability that allowed arbitrary code execution, leading to a huge breach in which the information of over 143 million users was presumed compromised. Ransomware is a type of malware that encrypts computer data (documents, media, ...), making it inaccessible to the user and demanding a ransom for its decryption. This type of malware has been a serious threat to enterprises: WannaCry and NotPetya are examples of ransomware that had a huge impact on enterprises, with WannaCry alone collecting more than $142,361.51 in ransoms. In this dissertation, we propose a system that can detect file sharing anomalies such as ransomware (WannaCry, NotPetya) and data theft (the Equifax breach), as well as their propagation. The solution consists of monitoring the enterprise network, creating communication profiles for each user/machine, analysing the data with a machine learning algorithm, and applying a countermeasure that blocks the affected machine when an anomaly is detected. Mestrado em Engenharia de Computadores e Telemática (Master's in Computer and Telematics Engineering).
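
    A minimal sketch of the per-machine profiling idea, under assumptions the abstract does not specify (the interval granularity, a simple mean-plus-k-sigma test, and all names are illustrative): each machine accumulates a baseline of file-share operations per interval, and a burst far above its own history, like a ransomware process rewriting every reachable file, triggers the blocking countermeasure.

        from statistics import mean, stdev

        class ShareProfile:
            """Baseline of file-share operations per interval, one machine."""
            def __init__(self, k=4.0):
                self.history = []
                self.k = k

            def update(self, ops):
                self.history.append(ops)

            def is_anomalous(self, ops):
                if len(self.history) < 5:     # not enough baseline yet
                    return False
                mu, sigma = mean(self.history), stdev(self.history)
                return ops > mu + self.k * max(sigma, 1.0)

        profiles = {}

        def observe(machine, ops):
            prof = profiles.setdefault(machine, ShareProfile())
            if prof.is_anomalous(ops):
                print(f"blocking {machine}: {ops} ops/interval vs baseline")
            else:
                prof.update(ops)

        for ops in [12, 9, 15, 11, 10, 13]:   # normal activity builds profile
            observe("ws-42", ops)
        observe("ws-42", 900)                 # ransomware-like burst is blocked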

    End-to-end anomaly detection in stream data

    Get PDF
    Nowadays, huge volumes of data are generated with increasing velocity through various systems, applications, and activities. This increases the demand for stream and time series analysis to react to changing conditions in real-time for enhanced efficiency and quality of service delivery, as well as upgraded safety and security in private and public sectors. Despite its very rich history, time series anomaly detection is still one of the vital topics in machine learning research and is receiving increasing attention. Identifying hidden patterns and selecting an appropriate model that fits the observed data well and also carries over to unobserved data is not a trivial task. Due to the increasing diversity of data sources and associated stochastic processes, this pivotal data analysis topic is loaded with various challenges, like complex latent patterns, concept drift, and overfitting, that may mislead the model and cause a high false alarm rate. Handling these challenges leads advanced anomaly detection methods to develop sophisticated decision logic, which turns them into mysterious and inexplicable black boxes. Contrary to this trend, end-users expect transparency and verifiability to trust a model and the outcomes it produces. Also, pointing users to the most anomalous/malicious areas of a time series and its causal features could save them time, energy, and money. For these reasons, this thesis addresses the crucial challenges in an end-to-end pipeline of stream-based anomaly detection through the three essential phases of behavior prediction, inference, and interpretation. The first step is focused on devising a time series model that leads to high average accuracy as well as small error deviation. On this basis, we propose higher-quality anomaly detection and scoring techniques that utilize the related contexts to reclassify observations and post-prune unjustified events. Last but not least, we make the predictive process transparent and verifiable by providing meaningful reasoning behind its generated results, based on concepts understandable by a human. The provided insight can pinpoint the anomalous regions of a time series and explain why the current status of a system has been flagged as anomalous. Stream-based anomaly detection research is a principal area of innovation to support our economy, security, and even the safety and health of societies worldwide. We believe our proposed analysis techniques can contribute to building a situational awareness platform and open new perspectives in a variety of domains like cybersecurity and health.
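
    The predict-then-score pipeline can be illustrated with a deliberately simple stand-in; the thesis's actual models are more sophisticated, and the EWMA forecast, smoothing factor, and z-score threshold below are assumptions. Points whose forecast error falls far outside the running residual variance are reported, which is also what lets such a detector pinpoint where the stream deviates:

        import random

        def stream_anomalies(stream, alpha=0.2, threshold=4.0):
            level = None      # EWMA forecast of the next value
            var = 1.0         # running estimate of residual variance
            for t, x in enumerate(stream):
                if level is None:
                    level = x
                    continue
                residual = x - level               # forecast error at time t
                z = abs(residual) / (var ** 0.5)
                if z > threshold:
                    yield t, x, z                  # anomalous point and score
                var = (1 - alpha) * var + alpha * residual ** 2
                level = (1 - alpha) * level + alpha * x

        # Toy usage: a level shift at t=60 in otherwise stable data is flagged.
        random.seed(0)
        data = [10 + random.gauss(0, 0.3) for _ in range(60)] + [20] * 5
        for t, x, z in stream_anomalies(data):
            print(f"t={t}: value={x:.1f}, score={z:.1f}")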