
    Risk and Business Goal Based Security Requirement and Countermeasure Prioritization

    Companies are under pressure to be in control of their assets while operating as efficiently as possible. This means they aim to implement “good-enough security” but must be able to justify their security investment plans. Currently, companies achieve this through checklist-based security assessments, but such methods build consensus without justifying countermeasures in terms of business goals. Yet such justifications are needed to operate securely and effectively in networked businesses. In this paper, we first compare the Risk-Based Requirements Prioritization method (RiskREP) with several requirements engineering and risk assessment methods with respect to their requirements elicitation and prioritization properties. RiskREP extends misuse-case-based requirements engineering methods with IT-architecture-based risk assessment and with countermeasure definition and prioritization. We then present how RiskREP prioritizes countermeasures by linking business goals to countermeasure specifications. Prioritizing countermeasures based on business goals is especially important for providing stakeholders with structured arguments for choosing which set of countermeasures to implement. We illustrate RiskREP and how it prioritizes the countermeasures it elicits through an application to an action case.
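    The abstract does not give RiskREP's actual scoring model, but a minimal sketch of business-goal-weighted countermeasure prioritization, assuming a simple benefit-per-cost score and entirely hypothetical goals, weights, and countermeasures, might look like this:

```python
# Hypothetical sketch of business-goal-weighted countermeasure prioritization.
# Goals, weights, costs, and risk-reduction estimates are all invented; RiskREP's
# real model derives these quantities through its own elicitation steps.
from dataclasses import dataclass

@dataclass
class Countermeasure:
    name: str
    cost: float                       # implementation cost, arbitrary units
    risk_reduction: dict[str, float]  # business goal -> estimated risk reduction (0..1)

# Stakeholder-assigned weights over business goals (sum to 1).
GOAL_WEIGHTS = {"availability": 0.5, "confidentiality": 0.3, "compliance": 0.2}

CANDIDATES = [
    Countermeasure("two-factor authentication", cost=8.0,
                   risk_reduction={"availability": 0.1, "confidentiality": 0.6, "compliance": 0.4}),
    Countermeasure("redundant web server", cost=12.0,
                   risk_reduction={"availability": 0.7, "confidentiality": 0.0, "compliance": 0.1}),
]

def priority(cm: Countermeasure) -> float:
    """Benefit per unit cost: goal-weighted risk reduction divided by cost."""
    benefit = sum(w * cm.risk_reduction.get(goal, 0.0) for goal, w in GOAL_WEIGHTS.items())
    return benefit / cm.cost

for cm in sorted(CANDIDATES, key=priority, reverse=True):
    print(f"{cm.name}: priority score {priority(cm):.3f}")
```

    Ranking countermeasures by such a goal-linked score is the kind of structured, traceable argument for a countermeasure selection that, the paper argues, checklist-based assessments cannot provide.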

    Mapping Big Data into Knowledge Space with Cognitive Cyber-Infrastructure

    Big data research has attracted great attention in science, technology, industry, and society. It is developing alongside the evolving scientific paradigm, the fourth industrial revolution, and transformational innovation in technologies. However, its nature and fundamental challenge have not been recognized, and it has not yet formed a methodology of its own. This paper explores and answers the following questions: What is big data? What are the basic methods for representing, managing, and analyzing big data? What is the relationship between big data and knowledge? Can we find a mapping from big data into knowledge space? What kind of infrastructure is required to support not only big data management and analysis but also knowledge discovery, sharing, and management? What is the relationship between big data and the scientific paradigm? What is the nature and fundamental challenge of big data computing? A multi-dimensional perspective is presented toward a methodology of big data computing.

    Design: One, but in different forms

    This overview paper defends an augmented, cognitively oriented generic-design hypothesis: there are both significant similarities between the design activities carried out in different situations and crucial differences between these and other cognitive activities; yet characteristics of a design situation (related to the design process, the designers, and the artefact) introduce specificities in the cognitive activities and structures that are used and in the resulting designs. We thus augment the classical generic-design hypothesis with that of different forms of designing. We review the data available in the cognitive design research literature, propose a series of candidates underlying such forms of design, and outline a number of directions requiring further elaboration.

    Entering the blackboard jungle: canonical dysfunction in conscious machines

    The central paradigm of Artificial Intelligence is rapidly shifting toward biological models for both robotic devices and systems performing such critical tasks as network management and process control. Here we apply recent mathematical analyses of the necessary conditions for consciousness in humans to gain some understanding of the canonical failure modes likely to be inherent in a broad class of global workspace/blackboard machines designed to emulate biological function. Similar problems are likely to confront other possible architectures, although their mathematical description may be far less straightforward.
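    For readers unfamiliar with the architecture class under discussion, the toy sketch below shows the basic blackboard/global-workspace control pattern; the classes, facts, and network-management scenario are invented for illustration, and the paper's mathematical failure-mode analysis is not reproduced here.

```python
# Minimal illustration of the blackboard/global-workspace pattern: independent
# knowledge sources post partial conclusions to a shared workspace until no
# source has anything further to contribute. All names and facts are invented.

class Blackboard:
    """Shared workspace that knowledge sources read from and write to."""
    def __init__(self):
        self.entries: list[str] = []

    def post(self, entry: str) -> None:
        self.entries.append(entry)

class KnowledgeSource:
    def __init__(self, name: str, trigger: str, conclusion: str):
        self.name, self.trigger, self.conclusion = name, trigger, conclusion

    def contribute(self, bb: Blackboard) -> bool:
        # Fire only if the triggering fact is on the blackboard and the
        # conclusion has not been posted yet.
        if self.trigger in bb.entries and self.conclusion not in bb.entries:
            bb.post(self.conclusion)
            return True
        return False

bb = Blackboard()
bb.post("packet-loss-spike")
sources = [
    KnowledgeSource("link-monitor", "packet-loss-spike", "suspect-congested-link"),
    KnowledgeSource("rerouter", "suspect-congested-link", "reroute-traffic"),
]
# Simple control loop: keep letting sources contribute until quiescence.
while any(ks.contribute(bb) for ks in sources):
    pass
print(bb.entries)  # ['packet-loss-spike', 'suspect-congested-link', 'reroute-traffic']
```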

    On cost-effective reuse of components in the design of complex reconfigurable systems

    Design strategies that benefit from the reuse of system components can reduce costs while maintaining or increasing dependability—we use the term dependability to tie together reliability and availability. D3H2 (aDaptive Dependable Design for systems with Homogeneous and Heterogeneous redundancies) is a methodology that supports the design of complex systems with a focus on reconfiguration and component reuse. D3H2 systematizes the identification of heterogeneous redundancies and optimizes the design of fault detection and reconfiguration mechanisms, by enabling the analysis of design alternatives with respect to dependability and cost. In this paper, we extend D3H2 for application to repairable systems. The method is extended with analysis capabilities allowing dependability assessment of complex reconfigurable systems. Analysed scenarios include time-dependencies between failure events and the corresponding reconfiguration actions. We demonstrate how D3H2 can support decisions about fault detection and reconfiguration that seek to improve dependability while reducing costs via application to a realistic railway case study

    Cognitive Machine Individualism in a Symbiotic Cybersecurity Policy Framework for the Preservation of Internet of Things Integrity: A Quantitative Study

    This quantitative study examined the complex nature of modern cyber threats in order to propose establishing cyber as an interdisciplinary field of public policy, initiated through the creation of a symbiotic cybersecurity policy framework. For the public good (and to maintain ideological balance), there must be recognition that public policies are at a transition point at which the digital public square is a tangible reality that is more than a collection of technological widgets. The academic contribution of this research project is the fusion of humanistic principles with Internet of Things (IoT) technologies, which alters our perception of the machine from an instrument of human engineering into a thinking peer and elevates cyber from technical esoterism into an interdisciplinary field of public policy. The contribution to the US national cybersecurity policy body of knowledge is a unified policy framework (manifested in the symbiotic cybersecurity policy triad) that could transform cybersecurity policies from network-based to entity-based. A correlational archival data design was used: for RQ1, the frequency of malicious software attacks was the dependent variable and the diversity of intrusion techniques the independent variable; for RQ2, the frequency of detection events was the dependent variable and the diversity of intrusion techniques the independent variable. Self-Determination Theory serves as the theoretical framework, as the cognitive machine can recognize, self-endorse, and maintain its own identity based on a sense of self-motivation that is progressively shaped by the machine’s ability to learn. The transformation of cyber policies from technical esoterism into an interdisciplinary field of public policy starts with the recognition that the cognitive machine is an independent consumer of, advisor into, and entity influenced by public policy theories, philosophical constructs, and societal initiatives.
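    As an illustration of the correlational archival design described above, a minimal RQ1 analysis could be sketched as follows; the monthly counts are hypothetical placeholders, not the study's data.

```python
# Illustrative only: the study's archival dataset is not reproduced here; the
# monthly counts below are invented stand-ins for the RQ1 variables.
from scipy import stats

technique_diversity = [4, 7, 5, 9, 11, 8, 13, 10, 12, 15]                 # IV: distinct intrusion techniques observed
attack_frequency    = [120, 180, 150, 260, 310, 240, 390, 300, 360, 450]  # DV: malicious software attacks

r, p = stats.pearsonr(technique_diversity, attack_frequency)
print(f"RQ1 (hypothetical data): r = {r:.2f}, p = {p:.4f}")
```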