
    Cyber Threat Intelligence Model: An Evaluation of Taxonomies, Sharing Standards, and Ontologies within Cyber Threat Intelligence

    Cyber threat intelligence is the provision of evidence-based knowledge about existing or emerging threats. Benefits of threat intelligence include increased situational awareness, efficiency in security operations, and improved prevention, detection, and response capabilities. Processing, analyzing, and correlating vast amounts of threat information to derive highly contextual intelligence that can be shared and consumed in a timely manner requires machine-understandable knowledge representation formats that embed the industry-required expressivity and are unambiguous. To a large extent, this is achieved by technologies like ontologies, interoperability schemas, and taxonomies. This research evaluates existing cyber-threat-intelligence-relevant ontologies, sharing standards, and taxonomies for the purpose of measuring their high-level conceptual expressivity with regard to the who, what, why, where, when, and how elements of an adversarial attack, in addition to courses of action and technical indicators. The results confirm that little emphasis has been given to developing a comprehensive cyber threat intelligence ontology, with existing efforts not thoroughly designed, not interoperable, ambiguous, and lacking semantic reasoning capability.
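The eight evaluation elements named in the abstract (who, what, why, where, when, how, courses of action, technical indicators) lend themselves to a simple coverage metric. The sketch below is illustrative only: the standard names and their coverage sets are hypothetical placeholders, not the paper's actual findings.

```python
# Hedged sketch: scoring how many of the eight attack-description
# elements a CTI representation format covers. Coverage sets below
# are invented placeholders, not results from the evaluated paper.
ELEMENTS = {"who", "what", "why", "where", "when", "how",
            "course_of_action", "indicator"}

# Hypothetical coverage maps for two unnamed formats.
coverage = {
    "standard_A": {"what", "when", "how", "indicator"},
    "standard_B": {"who", "what", "how", "course_of_action", "indicator"},
}

def expressivity(covered: set) -> float:
    """Fraction of the eight elements a format can represent."""
    return len(covered & ELEMENTS) / len(ELEMENTS)

for name, elems in sorted(coverage.items()):
    print(f"{name}: {expressivity(elems):.2f}")
```

A real evaluation would of course weight elements and check semantic (not just syntactic) support, but the coverage fraction captures the paper's notion of high-level conceptual expressivity.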

    Proceedings, MSVSCC 2018

    Proceedings of the 12th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 19, 2018 at VMASC in Suffolk, Virginia. 155 pp.

    Dagstuhl News January - December 2008

    "Dagstuhl News" is a publication edited especially for the members of the Foundation "Informatikzentrum Schloss Dagstuhl" to thank them for their support. The News gives a summary of the scientific work being done in Dagstuhl. Each Dagstuhl Seminar is presented by a small abstract describing the contents and scientific highlights of the seminar as well as the perspectives or challenges of the research topic.

    Applications of Factorization Theorem and Ontologies for Activity Modeling, Recognition and Anomaly Detection

    In this thesis, two approaches for activity modeling and suspicious activity detection are examined. The first is the application of a factorization theorem extension for deformable models in two different contexts: human activity detection from joint position information, and suspicious activity detection for tarmac security. It is shown that the first basis vector from the factorization theorem is sufficient to differentiate activities in the human data and to distinguish suspicious activities in the tarmac security data. The second approach differentiates individual components of those activities using semantic methodology. Although ontologies are currently used mainly for improving search and information retrieval, we show that they are applicable to video surveillance. We evaluate the domain ontologies from the Challenge Project on Video Event Taxonomy sponsored by ARDA from the perspective of general ontology design principles. We also focus on the effect of the domain on the granularity of the ontology for suspicious activity detection.
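The "first basis vector" in a factorization-theorem approach corresponds to the dominant right-singular vector of the measurement matrix (frames × stacked joint coordinates). The sketch below approximates it with power iteration on synthetic data; the data and the 1-D "activity signature" projection are illustrative assumptions, not the thesis's actual pipeline.

```python
# Hedged sketch: dominant basis vector of a measurement matrix W via
# power iteration on W^T W, standing in for the first right-singular
# vector used in factorization-based activity analysis.
import random

random.seed(0)
frames, joints = 40, 6
# Synthetic measurement matrix: rows are frames, columns are stacked
# joint coordinates (illustrative data, not the thesis dataset).
W = [[random.gauss(0, 1) for _ in range(joints)] for _ in range(frames)]

def first_basis_vector(W, iters=200):
    """Power iteration for the dominant eigenvector of W^T W."""
    n = len(W[0])
    v = [1.0] * n
    for _ in range(iters):
        u = [sum(row[j] * v[j] for j in range(n)) for row in W]      # u = W v
        v = [sum(W[i][j] * u[i] for i in range(len(W))) for j in range(n)]  # v = W^T u
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

basis = first_basis_vector(W)
# Per-frame projection onto the dominant basis vector: a 1-D signature
# that could be thresholded or compared across activity classes.
signature = [sum(row[j] * basis[j] for j in range(joints)) for row in W]
print(len(basis), len(signature))
```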

    PDID: Database of molecular-level putative protein-drug interactions in the structural human proteome

    © The Author 2015. Published by Oxford University Press. All rights reserved. Motivation: Many drugs interact with numerous proteins besides their intended therapeutic targets, and a substantial portion of these interactions is yet to be elucidated. The Protein-Drug Interaction Database (PDID) addresses the incompleteness of these data by providing access to putative protein-drug interactions that cover the entire structural human proteome. Results: PDID covers 9652 structures from 3746 proteins and houses 16,800 putative interactions generated from close to 1.1 million accurate, all-atom structure-based predictions for several dozen popular drugs. The predictions were generated with three modern methods: ILbind, SMAP and eFindSite. They are accompanied by propensity scores that quantify the likelihood of interactions, and by coordinates of the putative location of the binding drugs in the corresponding protein structures. PDID complements current databases that focus on curated interactions, and the BioDrugScreen database that relies on docking to find putative interactions. Moreover, we also include experimentally curated interactions, which are linked to their sources: DrugBank, BindingDB and the Protein Data Bank. Our database can be used to facilitate studies related to the polypharmacology of drugs, including repurposing and explaining side effects of drugs. Availability and implementation: The PDID database is freely available at http://biomine.ece.ualberta.ca/PDID/
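Since the abstract describes per-method propensity scores (ILbind, SMAP, eFindSite) for the same putative interactions, one natural downstream use is a normalized consensus ranking. The sketch below is an assumption about how a user might combine such scores; the raw values and the min-max normalization are illustrative, not PDID's actual scoring scheme.

```python
# Hedged sketch: combining per-method propensity scores into one
# consensus score per (protein, drug) pair. Scores are invented.
def normalize(scores):
    """Min-max normalize a {pair: score} map to [0, 1]."""
    lo, hi = min(scores.values()), max(scores.values())
    return {k: (v - lo) / (hi - lo) if hi > lo else 0.0
            for k, v in scores.items()}

# Hypothetical raw scores for three protein-drug pairs per method.
ilbind = {"P1-aspirin": 0.9, "P2-aspirin": 0.4, "P3-aspirin": 0.1}
smap   = {"P1-aspirin": 12.0, "P2-aspirin": 5.0, "P3-aspirin": 3.0}
efind  = {"P1-aspirin": 0.7, "P2-aspirin": 0.6, "P3-aspirin": 0.2}

methods = [normalize(m) for m in (ilbind, smap, efind)]
consensus = {k: sum(m[k] for m in methods) / len(methods)
             for k in ilbind}
print(max(consensus, key=consensus.get))
```

Normalizing first matters because the three methods report scores on different scales, as the differing magnitudes above suggest.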

    Metaleptic Transgression and Traumatic Experience: The "empty rooms, long hallways, and dead ends" of House of Leaves

    Mark Z. Danielewski's House of Leaves is a stunningly complex work, blending elements of the traditional haunted house tale, postmodernism, and film analysis with innovative approaches to textuality and to the format of the novel. This thesis explores House of Leaves with regard to many of these elements, presenting a reading which unifies its various modes of discourse by relating them back to the labyrinth at its centre. Using Genette's concepts of diegetic level and metalepsis, it is argued that the narrative structure of House of Leaves echoes the qualities of the labyrinth (infinite space, shifting dimensions, emptiness), in that the heterarchical natures of both labyrinth and text confront the reader with instances of logical paradox. This violation of physical spaces and narratological conventions, moreover, is reflected in the complexity of the novel with regard to narrative unreliability, textual manipulation, and the dismantling of the concepts of authorship and the sacred text. Finally, it is argued that the labyrinth and its effects on the narrative represent traumatic experience, that the absence at its centre and the violations of physical laws, narrative coherence, and semantic meaning are related to the ontological uncertainty which suffering or grief engenders

    Bioinformatics Systems And Mathematical Models For Improved Understanding Of Malaria Transmission, Control, And Elimination

    The leading malaria vector control strategies (i.e., long-lasting insecticidal nets and indoor residual spraying) can reduce indoor transmission, but these tools alone are insufficient to eliminate it. Strategies that target adult mosquitoes when they feed on humans or animals outdoors, or that target mosquito immature stages, are also needed to achieve malaria elimination. Improved data systems for integrating diverse experimental observations and research groups, as well as process-explicit mathematical models for evaluating them, are both essential to achieving these goals. We have developed a generic schema and data repositories for studies of malaria vectors that encompass a wide variety of experimental designs that rapidly generate large data volumes. We extended a malaria transmission model to examine the relationship between transmission, control, and the proportion of blood meals a vector population obtains from humans: assuming the lower limit for this indicator of human feeding preference enabled the derivation of simplified models for zoophagic vectors. We present differential equation models describing the biological processes that mediate a novel strategy to control malaria vectors by autodissemination of pyriproxyfen (PPF), as it is transferred from treated stations to gravid mosquitoes and then to the aquatic habitats where it inhibits mosquito emergence. Data from most of the mosquito studies we reviewed conformed to our generic schema, with four tables recording the experimental design, sorting of collections, details of samples, and additional observations. Our corresponding online repository includes 20 experiments, 8 projects, and 15 users at two institutes, resulting in 10 peer-reviewed publications. For zoophagic vectors, the results from the model can be used to forecast the likely immediate and delayed impacts of an intervention using only three field-measurable parameters. For the autodissemination of PPF, sensitivity analysis indicates that success of the strategy is plausible, because the ≥ 80% coverage of aquatic habitats with PPF appears achievable with modest, biologically plausible values of field-measurable input parameters. We have therefore applied two aspects of the computational sciences (i.e., research data preparation using computer systems and scenario analysis with mathematical models) to address obstacles to the control and elimination of malaria.
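The generic four-table schema described above (experimental design, sorting of collections, sample details, additional observations) could be sketched as follows. All table and column names are assumptions for illustration; the repository's actual schema is not reproduced in the abstract.

```python
# Hedged sketch of a four-table schema for mosquito field studies.
# Names are illustrative placeholders, not the repository's schema.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE experiment (
    experiment_id INTEGER PRIMARY KEY,
    design        TEXT NOT NULL          -- experimental design description
);
CREATE TABLE collection (
    collection_id INTEGER PRIMARY KEY,
    experiment_id INTEGER REFERENCES experiment(experiment_id),
    sorting       TEXT                   -- how the catch was sorted
);
CREATE TABLE sample (
    sample_id     INTEGER PRIMARY KEY,
    collection_id INTEGER REFERENCES collection(collection_id),
    species       TEXT,
    count         INTEGER
);
CREATE TABLE observation (
    observation_id INTEGER PRIMARY KEY,
    sample_id      INTEGER REFERENCES sample(sample_id),
    note           TEXT                  -- additional observations
);
""")
```

The appeal of such a schema is that very different experimental designs all reduce to the same experiment → collection → sample → observation chain, which is what lets one repository host many studies.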

    Enhancing Trust – A Unified Meta-Model for Software Security Vulnerability Analysis

    Over the last decade, a globalization of the software industry has taken place, facilitating the sharing and reuse of code across existing project boundaries. At the same time, such global reuse introduces new challenges to the Software Engineering community: not only is code implementation shared across systems, but so are any vulnerabilities it is exposed to. Hence, vulnerabilities found in APIs no longer affect only individual projects but may spread across projects and even across global software ecosystem borders. Tracing such vulnerabilities on a global scale is an inherently difficult task, with many of the resources required for the analysis not only growing at unprecedented rates but also being spread across heterogeneous sources. Software developers struggle to identify and locate the data required to take full advantage of these resources. The Semantic Web and its supporting technology stack have been widely promoted to model, integrate, and support interoperability among heterogeneous data sources. This dissertation introduces four major contributions to address these challenges: (1) It provides a literature review of the use of software vulnerability databases (SVDBs) in the Software Engineering community. (2) Based on findings from this literature review, we present SEVONT, a Semantic Web based modeling approach to support a formal and semi-automated approach for unifying vulnerability information resources. SEVONT introduces a multi-layer knowledge model which not only provides a unified knowledge representation, but also captures software vulnerability information at different abstraction levels to allow for seamless integration, analysis, and reuse of the modeled knowledge. The modeling approach takes advantage of Formal Concept Analysis (FCA) to guide knowledge engineers in identifying reusable knowledge concepts and modeling them.
(3) A Security Vulnerability Analysis Framework (SV-AF) is introduced, which is an instantiation of the SEVONT knowledge model to support evidence-based vulnerability detection. The framework integrates vulnerability ontologies (and data) with existing Software Engineering ontologies, allowing the use of Semantic Web reasoning services to trace and assess the impact of security vulnerabilities across project boundaries. Several case studies are presented to illustrate the applicability and flexibility of our modeling approach, demonstrating that the presented knowledge modeling approach can not only unify heterogeneous vulnerability data sources but also enable new types of vulnerability analysis.
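The cross-project tracing idea can be sketched over a tiny in-memory triple store: a vulnerability "affects" a component, projects "depend on" components, and a query joins the two. The predicate and entity names below are placeholders for illustration, not SEVONT's actual vocabulary or data.

```python
# Hedged sketch of cross-project vulnerability tracing over
# subject-predicate-object triples. All names are placeholders.
triples = {
    ("CVE-0001", "affects",   "libFoo"),
    ("projectA", "dependsOn", "libFoo"),
    ("projectB", "dependsOn", "libFoo"),
    ("projectC", "dependsOn", "libBar"),
}

def exposed_projects(store):
    """Projects depending on a component affected by some vulnerability."""
    vulnerable = {o for s, p, o in store if p == "affects"}
    return sorted(s for s, p, o in store
                  if p == "dependsOn" and o in vulnerable)

print(exposed_projects(triples))  # projectC stays clean: libBar unaffected
```

In the dissertation's setting this join would be a SPARQL query with ontology-backed reasoning (e.g., inferring transitive dependencies) rather than a hand-rolled set intersection, but the tracing pattern is the same.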

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety, and reliability, which become more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic, and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency, and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above-listed challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 programme (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques, and systems for trusted, ethical, and secure AI in manufacturing.
The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities.

    Proceedings, MSVSCC 2017

    Proceedings of the 11th Annual Modeling, Simulation & Visualization Student Capstone Conference, held on April 20, 2017 at VMASC in Suffolk, Virginia. 211 pp.