220 research outputs found

    Security Requirements Engineering: A Framework for Representation and Analysis

    This paper presents a framework for security requirements elicitation and analysis. The framework is based on constructing a context for the system, representing security requirements as constraints, and developing satisfaction arguments for the security requirements. The system context is described using a problem-oriented notation and then validated against the security requirements through the construction of a satisfaction argument. The satisfaction argument consists of two parts: a formal argument that the system can meet its security requirements, and a structured informal argument supporting the assumptions expressed in the formal argument. The construction of the satisfaction argument may fail, revealing either that the security requirement cannot be satisfied in the context or that the context does not contain sufficient information to develop the argument. In this case, designers and architects are asked to provide additional design information to resolve the problems. We evaluate the framework by applying it to a security requirements analysis within an air traffic control technology evaluation project.
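    The "formal argument" half of a satisfaction argument can be pictured, very loosely, as an entailment check: do the context's assumptions guarantee that the security constraint holds? The sketch below is a toy propositional version of that idea; the propositions, functions and names are hypothetical illustrations, not the paper's actual notation.

```python
# Illustrative sketch (hypothetical names, not the paper's notation):
# a security requirement as a constraint over a small propositional
# system context, checked by brute-force truth-table entailment.
from itertools import product

ATOMS = ["auth_required", "channel_encrypted", "logs_enabled"]

def context(v):
    # Domain assumptions: the system enforces authentication and encryption.
    return v["auth_required"] and v["channel_encrypted"]

def requirement(v):
    # Security requirement as a constraint: no unauthenticated access.
    return v["auth_required"]

def entails(premise, conclusion):
    """Formal part of a satisfaction argument: every model of the
    context must also satisfy the requirement."""
    for values in product([True, False], repeat=len(ATOMS)):
        v = dict(zip(ATOMS, values))
        if premise(v) and not conclusion(v):
            return False   # counterexample: argument construction fails
    return True

satisfied = entails(context, requirement)
```

    When the check fails, the counterexample plays the role the paper describes: it reveals either an unsatisfiable requirement or a context that lacks the assumptions needed to complete the argument.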

    A Framework for Representation, Validation and Implementation of Database Application Semantics

    New application domains in data-processing environments pose new requirements on the methodologies, techniques and tools used to design them. The applications' semantics should be fully represented at an increasingly high level, and the representation should be subject to rigorous validation and verification. We present a semantic representation framework (including the language, methods and tools) for the design of data-processing applications. The new features of the framework include a small number of precisely defined domain-independent concepts, high-level facilities for describing behavioural semantics (methods and constraints), and the validation and verification tools included in the framework. We present examples of the use of the framework, including the use of its tools.

    Reinforcement Learning With Simulated User For Automatic Dialog Strategy Optimization

    In this paper, we propose a solution to the problem of formulating strategies for a spoken dialog system. Our approach is based on reinforcement learning with the help of a simulated user in order to identify an optimal dialog strategy. Our method uses the Markov decision process as a framework for representing spoken dialog, in which the states represent the history and discourse context, the actions are dialog acts, and the transition strategies are decisions about which actions to take between states. We present our reinforcement learning architecture with a novel objective function based on dialog quality rather than on dialog duration.
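    As a rough illustration of this MDP framing, here is a minimal Q-learning loop with a toy simulated user. The state space (number of filled task slots), the dialog acts, and the reward values are invented for illustration; they are not the paper's actual architecture or objective function.

```python
# Illustrative sketch only: Q-learning against a simulated user, with a
# reward that reflects dialog quality (task success) rather than just
# penalizing duration. All states, actions and rewards are hypothetical.
import random

N_SLOTS = 2                      # toy dialog task: fill two information slots
ACTIONS = ["ask", "close"]       # dialog acts available to the system

def simulated_user_fills_slot(rng):
    """Simulated user answers an 'ask' successfully 80% of the time."""
    return rng.random() < 0.8

def step(state, action, rng):
    """One dialog turn: returns (next_state, reward, done)."""
    if action == "close":
        # Quality-based reward: big bonus for task success, penalty otherwise.
        return state, (10.0 if state == N_SLOTS else -10.0), True
    nxt = min(state + 1, N_SLOTS) if simulated_user_fills_slot(rng) else state
    return nxt, -1.0, False      # small per-turn cost

def train(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1, seed=0):
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in range(N_SLOTS + 1) for a in ACTIONS}
    for _ in range(episodes):
        state, done = 0, False
        while not done:
            # Epsilon-greedy action selection over dialog acts.
            a = rng.choice(ACTIONS) if rng.random() < eps else \
                max(ACTIONS, key=lambda x: Q[(state, x)])
            nxt, r, done = step(state, a, rng)
            target = r if done else r + gamma * max(Q[(nxt, x)] for x in ACTIONS)
            Q[(state, a)] += alpha * (target - Q[(state, a)])
            state = nxt
    return Q

Q = train()
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_SLOTS + 1)}
```

    With this quality-based reward, the learned greedy policy should keep asking until both slots are filled and only then close the dialog, rather than closing early merely to keep the dialog short.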

    Emergent intentionality in perception-action subsumption hierarchies

    A cognitively-autonomous artificial agent may be defined as one able to modify both its external world-model and the framework by which it represents the world, requiring two simultaneous optimization objectives. This presents deep epistemological issues centered on the question of how a framework for representation (as opposed to the entities it represents) may be objectively validated. In this summary paper, formalizing previous work in this field, it is argued that subsumptive perception-action learning has the capacity to resolve these issues by (a) building the perceptual hierarchy from the bottom up so as to ground all proposed representations and (b) maintaining a bijective coupling between proposed percepts and projected action possibilities to ensure empirical falsifiability of these grounded representations. In doing so, we will show that such subsumptive perception-action learners intrinsically incorporate a model for how intentionality emerges from randomized exploratory activity in the form of 'motor babbling'. Moreover, such a model of intentionality also naturally translates into a model for human-computer interfacing that makes minimal assumptions as to cognitive states.

    Towards Security Goals in Summative E-Assessment Security

    The general security goals of a computer system are known to include confidentiality, integrity and availability (C-I-A), which protect critical assets from potential threats. The C-I-A security goals are well-researched areas; however, they may be insufficient to address all the needs of summative e-assessment. In this paper, we do not discard the fundamental C-I-A security goals; rather, we define security goals which are specific to summative e-assessment security.