14 research outputs found

    Applying Bag of System Calls for Anomalous Behavior Detection of Applications in Linux Containers

    In this paper, we present the results of using bags of system calls to learn the behavior of Linux containers for use in an anomaly-based intrusion detection system. Because the system uses container system calls monitored from the host kernel for anomaly detection, it requires no prior knowledge of the container's nature, nor does it require altering the container or the host kernel. Comment: Published version available on IEEE Xplore (http://ieeexplore.ieee.org/document/7414047/). arXiv admin note: substantial text overlap with arXiv:1611.0305
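The bag-of-system-calls representation described above can be sketched as a fixed-length frequency vector per trace window, compared against a profile learned from benign traces. This is a minimal illustration, not the paper's implementation; the syscall vocabulary, traces, and threshold are invented for the example.

```python
from collections import Counter

# Illustrative syscall vocabulary; in practice syscalls would be traced
# from the host kernel without modifying the container or the kernel.
SYSCALLS = ["read", "write", "open", "close", "mmap", "execve"]

def bag_of_syscalls(trace, vocab=SYSCALLS):
    """Count each syscall in a trace window -> fixed-length frequency vector."""
    counts = Counter(trace)
    total = len(trace) or 1
    return [counts[s] / total for s in vocab]

def l1_distance(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

# Profile of "normal" behaviour learned from a benign trace window.
normal = bag_of_syscalls(["read", "write", "read", "close", "read", "write"])

def is_anomalous(trace, threshold=0.5):
    """Flag a window whose frequency profile deviates past the threshold."""
    return l1_distance(bag_of_syscalls(trace), normal) > threshold

print(is_anomalous(["read", "write", "read", "close"]))      # profile close to normal
print(is_anomalous(["execve", "mmap", "execve", "execve"]))  # profile far from normal
```

Real detectors would use many training windows and a learned threshold rather than a single benign profile.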

    On A Simpler and Faster Derivation of Single Use Reliability Mean and Variance for Model-Based Statistical Testing

    Markov chain usage-based statistical testing has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper continues our earlier work on a simpler and faster derivation of the single use reliability mean, and proposes a new derivation of the single use reliability variance by applying a well-known theorem and eliminating the need to compute the second moments of arc failure probabilities. Our new results complete an analysis that is simpler, faster, and more direct, and that yields a more intuitive explanation. The new theory is illustrated with three simple Markov chain usage models, with manual derivations and experimental results.
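The single use reliability mean in such a usage model can be sketched with a small fixed-point computation: each arc carries a transition probability p and a mean arc reliability r, and the state reliabilities satisfy R(s) = sum over arcs s->t of p * r * R(t), with R(sink) = 1. This is a minimal sketch of the general idea, not the paper's derivation; the chain structure and numbers are invented.

```python
# Invented three-state usage model: (next state, transition prob, arc reliability).
chain = {
    "Enter": [("Work", 1.0, 0.99)],
    "Work":  [("Work", 0.6, 0.98), ("Exit", 0.4, 0.97)],
    "Exit":  [],  # sink: the use has completed successfully
}

def single_use_reliability(chain, sink="Exit", iters=2000):
    """Fixed-point iteration on R(s); converges since arc reliabilities < 1."""
    R = {s: 0.0 for s in chain}
    R[sink] = 1.0
    for _ in range(iters):
        for s, arcs in chain.items():
            if s != sink:
                R[s] = sum(p * r * R[t] for (t, p, r) in arcs)
    return R

R = single_use_reliability(chain)
print(round(R["Enter"], 4))  # mean probability that a randomly selected use succeeds
```

Here the self-loop on "Work" gives R(Work) = 0.388 / (1 - 0.588) by hand, which the iteration reproduces.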

    A Measurement System for Science and Engineering Research Center Performance Evaluation

    This research provides performance metrics for cooperative research centers that enhance translational research formed through partnerships of government, industry, and academia. Centers are part of complex ecosystems that vary greatly in the type of science conducted, organizational structures, and expected outcomes. Their ability to realize their objectives depends on transparent measurement systems that assist decision making in research translation. A generalizable, hierarchical decision model that uses both quantitative and qualitative metrics is developed based upon program goals. Mission-oriented metrics are used to compare the effectiveness of the cooperative research centers through case studies. The US National Science Foundation (NSF) Industry-University Cooperative Research Center (IUCRC) program is chosen as the domain for evaluating organizational effectiveness because of its longevity, clear organizational structure, repeated use, and availability of data. Not unlike a franchise business model, the program has been replicated numerous times, gaining recognition as one of the most successful federally funded collaborative research center (CRC) programs. Understanding IUCRCs is important because they are a key US policy lever for enhancing translational research. While the program model is somewhat unique, this research project begins to close the gap in comparing CRCs by introducing a generalizable model and method into the literature. Through a literature review, program objectives, goals, and outputs are linked together to construct a four-level hierarchical decision model (HDM). A structured model development process shows how experts validate the content and construct of the model using these linked concepts. A subjective data collection approach is discussed, showing how collection, analysis, and quantification of expert pairwise-comparison data are used to establish weights for each of the decision criteria.
Several methods are discussed showing how inconsistency and disagreement are measured and analyzed until acceptable levels are reached. Six case studies are used to compare results, evaluate the impact of expert disagreement, and conduct criterion-related validation. Comparative analysis demonstrates the ability of the model to efficiently ascertain the criteria that matter most to each center's performance score. Applying this information, specific performance improvement recommendations for each center are presented. Upon review, experts generally agreed with the results. The discussion of criterion-related validity shows how the performance measurement scoring system can be used for comparative analysis among science- and engineering-focused research centers. Dendrograms highlight where experts disagree and provide a method for further disagreement analysis. Judgment quantification values for different expert clusters are substituted into the model one at a time (OAT), providing a method to analyze how decisions based on these disagreements impact the model's output. This research project contributes to the field by introducing a generalizable model and measurement system that compares the performance of NSF-supported science- and engineering-focused research centers.
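Quantifying expert pairwise comparisons into criterion weights, as described above, can be sketched with the row geometric-mean method over a judgment matrix. This is a generic illustration of the technique, not the dissertation's actual model; the three criteria, the matrix entries, and the performance scores are invented.

```python
import math

criteria = ["publications", "student placement", "industry funding"]
# M[i][j]: how much more important criterion i is than criterion j,
# elicited from one expert's pairwise comparisons (invented numbers).
M = [
    [1.0, 3.0, 2.0],
    [1/3, 1.0, 0.5],
    [0.5, 2.0, 1.0],
]

def geometric_mean_weights(M):
    """Row geometric means, normalised so the criterion weights sum to 1."""
    gms = [math.prod(row) ** (1 / len(row)) for row in M]
    total = sum(gms)
    return [g / total for g in gms]

weights = geometric_mean_weights(M)

# A center's score is the weighted sum of its normalised performance on
# each criterion; [0.8, 0.6, 0.9] stands in for one center's performance.
center_score = sum(w * s for w, s in zip(weights, [0.8, 0.6, 0.9]))
```

In a full HDM, weights from multiple experts would be aggregated and checked for inconsistency and disagreement before scoring centers.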

    A Simpler and More Direct Derivation of System Reliability Using Markov Chain Usage Models

    Markov chain usage-based statistical testing has been in use for more than two decades and has proved sound and effective in providing audit trails of evidence in certifying software-intensive systems. The system end-to-end reliability is derived analytically in closed form, following an arc-based Bayesian model. System reliability is represented by an important statistic called single use reliability, defined as the probability of a randomly selected use being successful. This paper reviews the analytical derivation of the single use reliability mean, and proposes a simpler, faster, and more direct way to compute the expected value that yields an intuitive explanation. The new derivation is illustrated with two examples.

    TOOLS TO STIMULATE BLOCKCHAIN: APPLICATION OF REGULATORY SANDBOXES, SPECIAL ECONOMIC ZONES AND PUBLIC PRIVATE PARTNERSHIPS

    Blockchain technology has significant, almost limitless potential. Today, however, its industrial use is hampered by the lack of high-quality legal regulation of the technology, of technical standards for its application, and of the investment required for its development. These problems, and the search for solutions to them, are especially relevant now, in the context of the financial crisis. The purpose of the article is therefore to analyse the legal mechanisms and tools that make up special and experimental regimes whose use has contributed to the introduction of Blockchain technology into industrial production, identifying their features in individual countries, the problems associated with their implementation, and possible solutions. The research is based on comparative legal and system analysis, as well as methods of legal modelling and content analysis. The author concludes that, in order to increase the attractiveness of the legal climate for the implementation of Blockchain technology, it is necessary, first, to develop “high-quality” legal regulation, which will be possible if an innovative product (service) based on this technology is first tested under an experimental legal regime (regulatory sandbox); second, to develop standards for the normative and technical regulation of the technology; and third, to improve legislation on the main tools aimed at stimulating investment in the creation and implementation of digital innovations and Blockchain technology, including special economic zones, public-private partnerships, and state support for companies developing Blockchain services for industrial production.

    Anomaly-based insider threat detection with expert feedback and descriptions

    Abstract. Insider threat is one of the most significant security risks for organizations, so insider threat detection is an important task. Anomaly detection is one approach to insider threat detection. Anomaly detection techniques can be categorized into three categories with respect to how much labelled data is needed: unsupervised, semi-supervised, and supervised. Obtaining accurate labels of all kinds of incidents for supervised learning is often expensive and impractical. Unsupervised methods do not require labelled data, but they have a high false positive rate because they operate on the assumption that anomalies are rarer than nominals. This can be mitigated by introducing feedback, known as expert feedback or active learning, which allows the analyst to label a subset of the data. Another problem is that models often are not interpretable, so it is unclear why a model decided that a data instance is an anomaly. This thesis presents a literature review of insider threat detection and of unsupervised and semi-supervised anomaly detection. The performance of various unsupervised anomaly detectors is evaluated. Knowledge is introduced into the system by using a state-of-the-art feedback technique for ensembles, known as active anomaly discovery, which is incorporated into the anomaly detector, known as isolation forest. Additionally, to improve interpretability, techniques for creating rule-based descriptions for the isolation forest are evaluated. Experiments were performed on the CMU-CERT dataset, which is the only publicly available insider threat dataset with logon, removable-device, and HTTP log data. Models use usage-count and session-based features that are computed for each user on each day. The results show that active anomaly discovery helps in ranking true positives higher on the list, lowering the amount of data analysts have to analyse.
Results also show that both compact descriptions and Bayesian rulesets have the potential to be used in generating decision rules that aid in analysing incidents; however, these rules are not correct in every instance.
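The feedback loop described above can be illustrated with a much simpler stand-in: a weighted ensemble over per-user daily count features, where analyst labels re-weight ensemble members, loosely mirroring how active anomaly discovery re-weights isolation forest components. Everything here (users, features, scoring) is an invented toy example, not the thesis's method.

```python
# Per-user daily counts (logons, removable-device events, HTTP requests) - invented.
data = {
    "alice":   (5, 0, 120),
    "bob":     (6, 1, 150),
    "carol":   (4, 0, 110),
    "mallory": (30, 9, 2000),
}

features = list(zip(*data.values()))
means = [sum(f) / len(f) for f in features]

def member_scores(x):
    """One score per feature: absolute deviation from the population mean."""
    return [abs(v - m) for v, m in zip(x, means)]

weights = [1.0, 1.0, 1.0]

def ensemble_score(x):
    return sum(w * s for w, s in zip(weights, member_scores(x)))

def feedback(x, is_anomaly, lr=0.1):
    """Analyst label: boost members that scored a confirmed anomaly high."""
    for i, s in enumerate(member_scores(x)):
        if is_anomaly and s > 0:
            weights[i] *= 1 + lr

# Rank users so the analyst inspects the most anomalous first.
ranked = sorted(data, key=lambda u: ensemble_score(data[u]), reverse=True)
print(ranked[0])
feedback(data[ranked[0]], is_anomaly=True)  # confirmed -> re-weight the ensemble
```

The real technique optimises weights against all accumulated labels rather than applying a single multiplicative nudge.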

    Cyber security of smart building ecosystems

    Abstract. Building automation systems are used to create energy-efficient and customisable commercial and residential buildings. Over the last two decades, these systems have become increasingly interconnected to reduce expenses and expand their capabilities, allowing vendors to perform maintenance and letting building users control the machines remotely. This interconnectivity has brought new opportunities for how building data can be collected and put to use, but it has also increased the attack surface of smart buildings by introducing security challenges that need to be addressed. Traditional building automation systems, with their proprietary communication protocols and interfaces, are giving way to interoperable systems utilising open technologies. This interoperability is an important aspect in streamlining the data collection process by ensuring that different components of the environment are able to exchange information and operate in a coordinated manner. Turning these opportunities into actual products and platforms requires multi-sector collaboration and joint research projects, so that the buildings of tomorrow can become reality with as few compromises as possible. This work examines one of these experimental project platforms, the KEKO ecosystem, focusing on assessing the cyber security challenges faced by the platform using the well-recognised MITRE ATT&CK knowledge base of adversary tactics and techniques. The assessment provides a detailed categorisation of the identified challenges and recommendations on how they should be addressed. This work also presents one possible solution for improving the detection of offensive techniques targeting building automation: implementing a monitoring pipeline within the experimental platform, together with a security event API that can be integrated with a remote SIEM system to increase visibility into the platform's data processing operations.
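A security event API like the one mentioned above typically forwards structured, ATT&CK-tagged events to the SIEM. The sketch below shows one plausible payload shape; the field names, the device name, and the schema are assumptions for illustration, not the KEKO platform's actual API (T1190 is the real ATT&CK ID for Exploit Public-Facing Application).

```python
import json
from datetime import datetime, timezone

def make_event(source, technique_id, description, severity="medium"):
    """Build one security event, tagged with a MITRE ATT&CK technique ID."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "source": source,                    # hypothetical device name
        "attack_technique": technique_id,    # e.g. T1190: Exploit Public-Facing Application
        "severity": severity,
        "description": description,
    }

event = make_event(
    source="bacnet-gateway-01",
    technique_id="T1190",
    description="Repeated malformed requests to the management interface",
)
payload = json.dumps(event)  # shipped to the remote SIEM's ingestion endpoint
```

Tagging events with technique IDs lets the SIEM correlate building-automation alerts with the same ATT&CK taxonomy used in the assessment.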