642 research outputs found

    A Relevance Model for Threat-Centric Ranking of Cybersecurity Vulnerabilities

    The relentless and often haphazard process of tracking and remediating vulnerabilities is a top concern for cybersecurity professionals. The key challenge they face is identifying a remediation scheme tailored to in-house organizational objectives. Without a strategy, the result is a patchwork of fixes applied to a tide of vulnerabilities, any one of which could be the single point of failure in an otherwise formidable defense. Prioritization is therefore one of the biggest challenges in vulnerability management. Given that so few vulnerabilities are the focus of real-world attacks, a practical remediation strategy is to identify the vulnerabilities most likely to be exploited and remediate those first. The goal of this research is to demonstrate that aggregating and synthesizing readily accessible public data sources into personalized, automated recommendations for prioritizing an organization's vulnerability management strategy offers significant improvements over what is currently achieved using the Common Vulnerability Scoring System (CVSS). We provide a framework for vulnerability management specifically focused on mitigating threats using adversary criteria derived from MITRE ATT&CK. We identify the data mining steps needed to acquire, standardize, and integrate publicly available cyber intelligence data sets into a robust knowledge graph from which stakeholders can infer business logic related to known threats. We tested our approach by identifying vulnerabilities in academic and common software associated with six universities and four government facilities. Ranking policy performance was measured using Normalized Discounted Cumulative Gain (nDCG). Our results show an average improvement of 71.5% to 91.3% in identifying vulnerabilities likely to be targeted and exploited by cyber threat actors, and patching according to our policies yielded savings of 23.3% to 25.5% in annualized unit costs. These results demonstrate the efficiency of knowledge graphs for linking large data sets, facilitating semantic queries, and creating data-driven, flexible ranking policies. Additionally, our framework uses only open standards, making implementation and improvement feasible for cyber practitioners and academia.
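
    For readers unfamiliar with the metric, the sketch below shows how nDCG is typically computed for a ranked list. It is a generic Python illustration; the relevance labels and cutoff are assumptions chosen for demonstration, not data or code from the study.

        import math

        def dcg(relevances):
            """Discounted cumulative gain of relevance scores in ranked order."""
            return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

        def ndcg(ranked_relevances, k=None):
            """nDCG: DCG of the ranking divided by DCG of the ideal (re-sorted) ranking."""
            k = k or len(ranked_relevances)
            ideal_dcg = dcg(sorted(ranked_relevances, reverse=True)[:k])
            return dcg(ranked_relevances[:k]) / ideal_dcg if ideal_dcg > 0 else 0.0

        # Hypothetical example: relevance is 1 if a ranked vulnerability was later
        # observed being exploited in the wild, 0 otherwise.
        policy_ranking = [1, 0, 1, 1, 0, 0]  # order produced by a candidate ranking policy
        print(f"nDCG@5 = {ndcg(policy_ranking, k=5):.3f}")

    Comparing this value for a CVSS-based ordering and a threat-centric ordering of the same vulnerabilities is the kind of head-to-head comparison the reported improvements refer to.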

    Vertically Integrated Projects (VIP) Programs: Multidisciplinary Projects with Homes in Any Discipline

    A survey of papers in the ASEE Multidisciplinary Engineering Division over the last three years shows three main areas of emphasis: individual courses; profiles of specific projects; and capstone design courses. However, propagating multidisciplinary education across the vast majority of disciplines offered at educational institutions with varying missions requires models that are independent of the disciplines, programs, and institutions in which they were originally conceived. Further, models that can propagate must be cost-effective and scalable, and must engage and benefit participating faculty. Since 2015, a consortium of twenty-four institutions has come together around one such model, the Vertically Integrated Projects (VIP) Program. VIP unites undergraduate education and faculty research in a team-based context, with students earning academic credits toward their degrees, and faculty and graduate students benefitting from the design/discovery efforts of their multidisciplinary teams. VIP integrates rich student learning experiences with faculty research, transforming both contexts for undergraduate learning and concepts of faculty research as isolated from undergraduate teaching. It provides a rich, cost-effective, scalable, and sustainable model for multidisciplinary project-based learning. (1) It is rich because students participate for multiple years as they progress through their curriculum; (2) It is cost-effective since students earn academic credit instead of stipends; (3) It is scalable because faculty can work with teams of students instead of individual undergraduate research fellows, and typical teams consist of fifteen or more students from different disciplines; (4) It is sustainable because faculty benefit from the research and design efforts of their teams, with teams becoming integral parts of their research. While VIP programs share key elements, approaches and implementations vary by institution. This paper shows how the VIP model works across sixteen different institutions with different missions, sizes, and student profiles. The sixteen institutions represent new and long-established VIP programs, varying levels of research activity, two Historically Black Colleges and Universities (HBCUs), a Hispanic-Serving Institution (HSI), and two international universities. These sixteen profiles illustrate the adaptability of the VIP model across different academic settings.

    Laboratory Directed Research and Development FY-10 Annual Report

    Efficient Implementation of Stochastic Inference on Heterogeneous Clusters and Spiking Neural Networks

    Neuromorphic computing refers to brain-inspired algorithms and architectures. This paradigm can solve complex problems that were not tractable with traditional computing methods, because such implementations learn to identify the required features and classify them based on their training, akin to how brains function. This task involves performing computation on large quantities of data. With this inspiration, a comprehensive, multi-pronged approach is employed to study and efficiently implement neuromorphic inference models, both on heterogeneous clusters built from traditional von Neumann architectures and by developing spiking neural networks (SNNs) for native, ultra-low-power implementation. In this regard, an extendable high-performance computing (HPC) framework and accompanying optimizations are proposed for heterogeneous clusters to modularize complex neuromorphic applications in a distributed manner. To achieve the best possible throughput and load balancing for such modularized architectures, a set of algorithms is proposed to suggest the optimal mapping of the different modules, as an asynchronous pipeline, onto the available cluster resources while considering the complex data dependencies between stages. SNNs, on the other hand, are more biologically plausible and can achieve ultra-low-power implementation owing to their sparse, spike-based communication, which is possible with emerging non-von Neumann computing platforms. As a significant step in this direction, spiking neuron models capable of distributed online learning are proposed. A high-performance SNN simulator (SpNSim) is developed for simulating large-scale networks of mixed neuron models. An accompanying digital hardware neuron RTL design is also proposed for efficient real-time implementation of SNNs capable of online learning. Finally, a methodology is presented for mapping probabilistic graphical models onto an off-the-shelf neurosynaptic processor (IBM TrueNorth) as a stochastic SNN with ultra-low power consumption.
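
    To make the sparse, spike-based communication concrete, the following is a minimal leaky integrate-and-fire neuron sketch in Python. It is a generic illustration only, not the online-learning neuron models, the SpNSim simulator, or the TrueNorth mapping described above, and all parameter values are assumptions.

        import numpy as np

        def simulate_lif(input_current, v_rest=0.0, v_thresh=1.0, tau=20.0, dt=1.0):
            """Integrate an input current trace; emit a spike and reset whenever
            the membrane potential crosses threshold."""
            v = v_rest
            spike_times = []
            for t, i_in in enumerate(input_current):
                # Leaky integration: decay toward rest, driven by the input current.
                v += dt * (-(v - v_rest) / tau + i_in)
                if v >= v_thresh:
                    spike_times.append(t)  # record the spike time step
                    v = v_rest             # reset the membrane potential
            return spike_times

        rng = np.random.default_rng(0)
        drive = rng.uniform(0.0, 0.15, size=200)  # noisy input drive (arbitrary units)
        print("spike times:", simulate_lif(drive))

    Information is carried entirely by the timing of these discrete events, which is what lets event-driven hardware stay idle, and therefore save power, between spikes.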

    Deliverable D2.1 - Ecosystem analysis and 6G-SANDBOX facility design

    This document provides a comprehensive overview of the core aspects of the 6G-SANDBOX project. It outlines the project's vision, objectives, and the Key Performance Indicators (KPIs) and Key Value Indicators (KVIs) targeted for achievement. The functional and non-functional requirements of the 6G-SANDBOX Facility are extensively presented, based on a proposed reference blueprint. A detailed description of the updated reference architecture of the facility is provided, taking these requirements into account. The document explores the experimentation framework, including the lifecycle of experiments and the methodology for validating KPIs and KVIs. It presents the key technologies and use case enablers toward 6G that will be offered within the trial networks. Each of the platforms constituting the 6G-SANDBOX Facility is described, along with the enhancements needed to align it with the project's vision in terms of hardware, software updates, and functional improvements.

    Toward a Unified Theory of Access to Local Telephone Systems

    One of the most distinctive developments in telecommunications policy over the past few decades has been the increasingly broad array of access requirements regulatory authorities have imposed on local telephone providers. In so doing, policymakers did not fully consider whether the justifications for regulating telecommunications remained valid. They also allowed each access regime to be governed by its own pricing methodology and set access prices in a way that treated each network component as if it existed in isolation. The result was a regulatory regime that was internally inconsistent, vulnerable to regulatory arbitrage, and unable to capture the interactions among network elements that give networks their distinctive character. In this Article, Professors Daniel Spulber and Christopher Yoo trace the development of these access regimes and evaluate the extent to which the emergence of competition among local telephone providers has undercut the rationales traditionally invoked to justify regulating local telephone networks (e.g., natural monopoly, network economic effects, vertical exclusion, and ruinous competition). They then apply a five-part framework for classifying different types of access that models the interactions among different network components. This framework demonstrates the impact of different types of access on network configuration, capacity, reliability, and cost. The framework also demonstrates how mandated access can increase transaction costs by forcing local telephone providers to externalize functions that would be more efficiently provided within the boundaries of the firm.

    Toward a Unified Theory of Access to Local Telephone Networks

    'The Enduring Lessons of the Breakup of AT&T: A Twenty-Five Year Retrospective.' Conference held at the University of Pennsylvania Law School on April 18-19, 2008. Over the past several decades, regulatory authorities have imposed an increasingly broad array of access requirements on local telephone providers. In so doing, policymakers typically applied previous approaches to access regulation without fully considering whether the regulatory justifications offered in favor of those previous access requirements remained valid. They also allowed each access regime to be governed by a different pricing methodology and set access prices in a way that treated each network component as if it existed in isolation. The result was a regulatory regime that was internally inconsistent and vulnerable to regulatory arbitrage. In this Article, Professors Daniel Spulber and Christopher Yoo trace the development of these access regimes and evaluate the continuing validity of the rationales traditionally invoked to justify mandating access to local telephone networks (e.g., natural monopoly, network economic effects, vertical exclusion, and ruinous competition) in a world in which competition among local telephone providers is a real possibility. They then apply a five-part framework, based on the branch of mathematics known as graph theory, for classifying different types of access that models the interactions among different components. This framework shows how different types of access can have a differential impact on network configuration, capacity, reliability, and cost. It also captures the extent to which networks constitute complex systems in which network components interact with one another in ways that can make network behavior quite unpredictable. In addition, the framework demonstrates how mandated access can increase transaction costs by forcing local telephone providers to externalize functions that would be more efficiently provided within the boundaries of the firm.