66 research outputs found

    Applying Bag of System Calls for Anomalous Behavior Detection of Applications in Linux Containers

    Full text link
    In this paper, we present the results of using bags of system calls for learning the behavior of Linux containers, for use in an anomaly-detection-based intrusion detection system. By using the containers' system calls, monitored from the host kernel, for anomaly detection, the system requires no prior knowledge of the container's nature, nor does it require altering the container or the host kernel. Comment: Published version available on IEEE Xplore (http://ieeexplore.ieee.org/document/7414047/). arXiv admin note: substantial text overlap with arXiv:1611.0305
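
    The bag-of-system-calls representation lends itself to a short sketch: each trace is reduced to a vector of per-syscall counts, and a detector trained on normal traces flags outliers. The Python example below is illustrative only; the syscall vocabulary, the sample traces, and the choice of scikit-learn's IsolationForest as the anomaly detector are assumptions for demonstration, not the paper's exact setup.

```python
from collections import Counter
from sklearn.ensemble import IsolationForest

# Fixed vocabulary of system calls (assumed for illustration).
SYSCALLS = ["read", "write", "open", "close", "mmap", "futex", "clone"]

def bag_of_syscalls(trace):
    """Reduce a sequence of syscall names to a frequency vector."""
    counts = Counter(trace)
    return [counts.get(name, 0) for name in SYSCALLS]

# Hypothetical traces of normal container behavior, monitored from the host.
normal_traces = [
    ["read", "write", "open", "close", "read"],
    ["read", "mmap", "read", "write", "close"],
    ["open", "read", "read", "write", "close"],
    ["read", "write", "read", "close", "mmap"],
]
X_train = [bag_of_syscalls(t) for t in normal_traces]

# Learn the normal profile; no knowledge of the container's workload is needed.
detector = IsolationForest(random_state=0).fit(X_train)

# predict returns 1 for traces consistent with the profile, -1 for anomalies.
suspect = ["clone", "clone", "futex", "clone", "futex"]
print(detector.predict([bag_of_syscalls(suspect)]))
```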

    Intrusion Detection Systems in Cloud Computing: A Contemporary Review of Techniques and Solutions

    Get PDF
    The rapid growth of resources and the escalating cost of infrastructure are leading organizations to adopt cloud computing. Cloud computing provides high performance, efficient utilization, and on-demand availability of resources. However, the cloud environment is vulnerable to different kinds of intrusion attacks, which involve installing malicious software and creating backdoors. In a cloud environment where businesses host important and critical data, the security of the underlying technologies becomes crucial. To mitigate the threat to cloud environments, Intrusion Detection Systems (IDS) provide a layer of defense. The aim of this survey paper is to review IDS techniques proposed for the cloud. To achieve this objective, the first step is defining the limitations and unique characteristics of each technique. The second step is establishing the criteria to evaluate IDS architectures. In this paper, the criteria used are derived from the basic characteristics of the cloud. The next step is a comparative analysis of various existing intrusion detection techniques against those criteria. The last step discusses the drawbacks and open issues, drawn from the evaluation, that hinder the implementation of IDS in cloud environments.

    Contributions to Edge Computing

    Get PDF
    Efforts related to Internet of Things (IoT), Cyber-Physical Systems (CPS), Machine to Machine (M2M) technologies, the Industrial Internet, and Smart Cities aim to improve society through the coordination of distributed devices and analysis of the resulting data. By the year 2020 there will be an estimated 50 billion network-connected devices globally and 43 trillion gigabytes of electronic data. Current practices of moving data directly from end-devices to remote and potentially distant cloud computing services will not be sufficient to manage future device and data growth. Edge Computing is the migration of computational functionality to the sources of data generation. The importance of edge computing increases with the size and complexity of devices and the resulting data. In addition, the coordination of global edge-to-edge communications, shared resources, high-level application scheduling, monitoring, measurement, and Quality of Service (QoS) enforcement will be critical to address the rapid growth of connected devices and associated data. We present a new distributed agent-based framework designed to address the challenges of edge computing. This actor-model framework implementation is designed to manage large numbers of geographically distributed services, composed of heterogeneous resources and communication protocols, in support of low-latency, real-time streaming applications. As part of this framework, an application description language was developed and implemented. Using the application description language, a number of high-order management modules were implemented, including solutions for resource and workload comparison, performance observation, scheduling, and provisioning. A number of hypothetical and real-world use cases are described to support the framework implementation.
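
    As a rough illustration of how an application description plus QoS-aware placement might fit together, here is a minimal Python sketch. The JSON description, node attributes, and latency-first placement rule are all assumptions for demonstration; they are not the framework's actual description language or scheduler.

```python
import json

# Hypothetical application description; a stand-in for the framework's
# own description language, shown here as plain JSON.
app_desc = json.loads("""
{
  "pipeline": "camera-feed",
  "stages": [
    {"name": "ingest", "cpu": 1, "max_latency_ms": 20},
    {"name": "detect", "cpu": 4, "max_latency_ms": 50}
  ]
}
""")

# Registered edge nodes with measured latency and free CPUs (assumed values).
nodes = [
    {"id": "edge-a", "latency_ms": 5, "free_cpu": 2},
    {"id": "edge-b", "latency_ms": 15, "free_cpu": 8},
]

def schedule(stage, nodes):
    """Place a stage on the lowest-latency node that meets its QoS needs."""
    candidates = [n for n in nodes
                  if n["free_cpu"] >= stage["cpu"]
                  and n["latency_ms"] <= stage["max_latency_ms"]]
    return min(candidates, key=lambda n: n["latency_ms"]) if candidates else None

for stage in app_desc["stages"]:
    node = schedule(stage, nodes)
    print(stage["name"], "->", node["id"] if node else "unschedulable")
```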

    An adaptive and distributed intrusion detection scheme for cloud computing

    Get PDF
    Cloud computing has enormous potential but still suffers from numerous security issues. Hence, there is a need to safeguard cloud resources to ensure the security of clients' data in the cloud. Existing cloud Intrusion Detection Systems (IDS) suffer from poor detection accuracy due to the dynamic nature of the cloud as well as frequent Virtual Machine (VM) migration, which causes network traffic patterns to change. This necessitates an adaptive IDS capable of coping with dynamic network traffic patterns. Therefore, this research developed an adaptive cloud intrusion detection scheme that uses the Binary Segmentation change-point detection algorithm to track changes in the normal profile of cloud network traffic and updates the IDS Reference Model when a change is detected. Besides, the research addressed the issue of poor detection accuracy due to insignificant features and coordinated attacks such as Distributed Denial of Service (DDoS). Insignificant features were addressed using feature selection, while coordinated attacks were addressed using a distributed IDS. Ant Colony Optimization and correlation-based feature selection were used for feature selection, while distributed Stochastic Gradient Descent and Support Vector Machine (SGD-SVM) were used for the distributed IDS. The distributed IDS comprised detection units and an aggregation unit. The detection units detected attacks using distributed SGD-SVM to create a Local Reference Model (LRM) on various computer nodes; the LRM was then sent to the aggregation unit to create a Global Reference Model. This adaptive and distributed scheme was evaluated using two datasets: a simulated dataset collected using the VMware hypervisor and the Network Security Laboratory-Knowledge Discovery Database (NSL-KDD) benchmark intrusion detection dataset. To ensure that the scheme can cope with the dynamic nature of VM migration in the cloud, performance was evaluated before and during a VM migration scenario. On the simulated dataset, the scheme achieved an overall classification accuracy of 99.4% before VM migration, while a related scheme achieved 83.4%; during VM migration, the scheme achieved 99.1% versus 85% for the related scheme. On the NSL-KDD dataset, the scheme achieved 99.6% versus 83% for the related scheme. These comparisons show that the developed adaptive and distributed scheme achieved superior performance.
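
    The adaptive loop described above (detect a change point in the traffic profile, then rebuild the reference model) can be sketched briefly. The Python fragment below is an illustration, not the thesis's implementation: the synthetic traffic signal, the ruptures library for Binary Segmentation, and scikit-learn's hinge-loss SGDClassifier standing in for the distributed SGD-SVM are all assumptions.

```python
import numpy as np
import ruptures as rpt  # change-point detection, including Binary Segmentation
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)

# Synthetic per-window traffic feature (e.g., mean packet rate); the mean
# shifts at window 200, mimicking a VM migration altering the normal profile.
signal = np.concatenate([rng.normal(10, 1, 200), rng.normal(25, 1, 200)])

# Binary Segmentation reports where the normal profile changes.
breakpoints = rpt.Binseg(model="l2").fit(signal.reshape(-1, 1)).predict(n_bkps=1)
print("change detected at window:", breakpoints[0])  # ~200

# Hinge-loss SGD approximates a linear SVM; retrain the Local Reference
# Model on post-change traffic whenever a change point is reported.
X_new = signal[breakpoints[0]:].reshape(-1, 1)
y_new = rng.integers(0, 2, len(X_new))  # placeholder normal/attack labels
lrm = SGDClassifier(loss="hinge").fit(X_new, y_new)
```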

    IPCFA: A Methodology for Acquiring Forensically-Sound Digital Evidence in the Realm of IAAS Public Cloud Deployments

    Get PDF
    Cybercrimes and digital security breaches are on the rise: savvy businesses and organizations of all sizes must ready themselves for the worst. Cloud computing has become the new normal, opening even more doors for cybercriminals to commit crimes that are not easily traceable. The fast pace of technology adoption exceeds the speed at which the cybersecurity community and law enforcement agencies (LEAs) can invent countermeasures to investigate and prosecute such criminals. While presenting defensible digital evidence in courts of law is already complex, it gets more complicated if the crime is tied to public cloud computing, where storage, network, and computing resources are shared and dispersed over multiple geographical areas. Investigating such crimes involves collecting evidence from the public cloud in a manner that is sound enough for court. The court admissibility of digital evidence in the U.S. is governed predominantly by the Federal Rules of Evidence and the Federal Rules of Civil Procedure. Evidence authenticity can be challenged by the Daubert test, which evaluates the forensic process that took place to generate the presented evidence. Existing digital forensics models, methodologies, and processes have not adequately addressed crimes that take place in the public cloud. It was only in late 2020 that the Scientific Working Group on Digital Evidence (SWGDE) published a document that shed light on best practices for collecting evidence from cloud providers. Yet SWGDE's publication does not address the gap between the technology and the legal system when it comes to evidence admissibility. The document is high-level, with more focus on law enforcement processes such as issuing subpoenas and preservation orders to the cloud provider. This research proposes IaaS Public Cloud Forensic Acquisition (IPCFA), a methodology for acquiring forensically-sound evidence from public cloud IaaS deployments. IPCFA focuses on bridging the gap between the legal and technical sides of evidence authenticity to help produce admissible evidence that can withstand scrutiny in U.S. courts. Grounded in design science research (DSR), the methodology is rigorously evaluated using two hypothetical scenarios for crimes that take place in the public cloud. The first scenario takes place in AWS and is walked through hypothetically. The second scenario demonstrates IPCFA's applicability and effectiveness on Azure Cloud. Both cases are evaluated using a rubric built from the federal and civil digital evidence requirements and the international best practices for digital evidence, to show the effectiveness of IPCFA in generating cloud evidence sound enough to be considered admissible in court.
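
    A recurring technical requirement behind this abstract is evidence integrity: acquired artifacts must be hashed and logged at acquisition time so their authenticity can survive a Daubert challenge. The Python sketch below illustrates only that generic step, not IPCFA itself; the file name, case fields, and record format are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def hash_evidence(path, chunk_size=1 << 20):
    """Compute a SHA-256 digest of an acquired image without loading it whole."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def custody_record(path, case_id, examiner):
    """Minimal chain-of-custody entry; real procedures capture far more."""
    return {
        "case": case_id,
        "examiner": examiner,
        "artifact": path,
        "sha256": hash_evidence(path),
        "acquired_utc": datetime.now(timezone.utc).isoformat(),
    }

# Create a stand-in artifact so the example runs end to end.
with open("disk-snapshot.img", "wb") as f:
    f.write(b"\x00" * 4096)

print(json.dumps(custody_record("disk-snapshot.img", "CASE-001", "jdoe"), indent=2))
```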

    A manifesto for future generation cloud computing: research directions for the next decade

    Get PDF
    The Cloud computing paradigm has revolutionised the computer science horizon during the past decade and has enabled the emergence of computing as the fifth utility. It has captured significant attention from academia, industry, and government bodies. It has now emerged as the backbone of the modern economy by offering subscription-based services anytime, anywhere, following a pay-as-you-go model. This has instigated (1) shorter establishment times for start-ups, (2) the creation of scalable global enterprise applications, (3) better cost-to-value associativity for scientific and high performance computing applications, and (4) different invocation/execution models for pervasive and ubiquitous applications. Recent technological developments and paradigms such as serverless computing, software-defined networking, the Internet of Things, and processing at the network edge are creating new opportunities for Cloud computing. However, they are also posing several new challenges and creating the need for new approaches and research strategies, as well as the re-evaluation of the models that were developed to address issues such as scalability, elasticity, reliability, security, sustainability, and application models. The proposed manifesto addresses them by identifying the major open challenges in Cloud computing, emerging trends, and impact areas. It then offers research directions for the next decade, thus helping in the realisation of Future Generation Cloud Computing.

    METHODS FOR HIGH-THROUGHPUT COMPARATIVE GENOMICS AND DISTRIBUTED SEQUENCE ANALYSIS

    Get PDF
    High-throughput sequencing has accelerated applications of genomics throughout the world. The increased production and decentralization of sequencing have also created bottlenecks in computational analysis. In this dissertation, I provide novel computational methods to improve analysis throughput in three areas: whole genome multiple alignment, pan-genome annotation, and bioinformatics workflows. To aid in the study of populations, tools are needed that can quickly compare multiple genome sequences, millions of nucleotides in length. I present a new multiple alignment tool for whole genomes, named Mugsy, that implements a novel method for identifying syntenic regions. Mugsy is computationally efficient, does not require a reference genome, and is robust in identifying a rich complement of genetic variation, including duplications, rearrangements, and large-scale gain and loss of sequence in mixtures of draft and completed genome data. Mugsy is evaluated on the alignment of several dozen bacterial chromosomes on a single computer and was the fastest program evaluated for the alignment of assembled human chromosome sequences from four individuals. A distributed version of the algorithm is also described and provides increased processing throughput using multiple CPUs. Numerous individual genomes are sequenced to study diversity and evolution and to classify pan-genomes. Pan-genome annotations contain inconsistencies and errors that hinder comparative analysis, even within a single species. I introduce a new tool, Mugsy-Annotator, that identifies orthologs and anomalous gene structure across a pan-genome using whole genome multiple alignments. Identified anomalies include inconsistently located translation initiation sites and genes disrupted by draft genome sequencing or pseudogenes. An evaluation of pan-genomes indicates that such anomalies are common and that the alternative annotations suggested by the tool can improve annotation consistency and quality. Finally, I describe the Cloud Virtual Resource, CloVR, a desktop application for automated sequence analysis that improves the usability and accessibility of bioinformatics software and cloud computing resources. CloVR is installed on a personal computer as a virtual machine and requires minimal installation, addressing challenges in deploying bioinformatics workflows. CloVR also seamlessly accesses remote cloud computing resources for improved processing throughput. In a case study, I demonstrate the portability and scalability of CloVR and evaluate the costs and resources for microbial sequence analysis.