
    RiskREP: Risk-Based Security Requirements Elicitation and Prioritization (extended version)

    Today, companies are required to be in control of the security of their IT assets. This is especially challenging in the presence of limited budgets and conflicting requirements. Here, we present Risk-Based Requirements Elicitation and Prioritization (RiskREP), a method for managing IT security risks by combining the results of a top-down requirements analysis with a bottom-up threat analysis. Top-down, it prioritizes security goals and from there derives verifiable requirements. Bottom-up, it analyzes architectures in order to identify security risks in the form of critical components. Linking these critical components to security requirements helps to analyze the effects of these requirements on business goals and to prioritize the security requirements. The security requirements are also the basis for deriving test cases for security analysis and compliance monitoring.
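
    A minimal sketch of the linking idea described above, assuming invented component names, risk figures, and a simple risk-reduction-per-cost score; none of these details come from the paper:

```python
# Hypothetical sketch (not the authors' implementation): link critical
# components found bottom-up to security requirements elicited top-down,
# then rank requirements by estimated risk reduction per unit cost.
from dataclasses import dataclass, field

@dataclass
class Component:
    name: str
    risk: float  # estimated exposure (e.g., likelihood x impact), made up here

@dataclass
class Requirement:
    description: str
    cost: float                                     # implementation cost (arbitrary units)
    mitigates: list = field(default_factory=list)   # critical components it protects
    mitigation_factor: float = 0.5                  # assumed fraction of risk removed

    def risk_reduction(self) -> float:
        return sum(c.risk * self.mitigation_factor for c in self.mitigates)

    def priority(self) -> float:
        # simple cost-benefit score; a real method would also weight business goals
        return self.risk_reduction() / self.cost

db = Component("customer DB", risk=8.0)
gw = Component("payment gateway", risk=6.0)

reqs = [
    Requirement("Encrypt customer records at rest", cost=3.0, mitigates=[db]),
    Requirement("Enforce mutual TLS on the gateway", cost=2.0, mitigates=[gw]),
    Requirement("Quarterly penetration testing", cost=5.0, mitigates=[db, gw],
                mitigation_factor=0.2),
]

for r in sorted(reqs, key=Requirement.priority, reverse=True):
    print(f"{r.priority():.2f}  {r.description}")
```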

    From i* models to service oriented architecture models

    Requirements engineering and architectural design are key activities for the successful development of software systems. In service-oriented systems development in particular, there is a gap between the requirements description and architecture design and assessment. This article presents a systematic process for deriving service-oriented architecture models from goal-oriented models. The process generates candidate architectures from i* models and helps architects select a solution using service-oriented patterns at both the service and component levels. The process is exemplified by applying it to a system for metadata synthesis and learning object assembly.
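
    For illustration only, a toy sketch of how goal-model elements might be turned into candidate services; the actor, goal, and task names, and the one-service-per-goal rule, are assumptions and not the paper's actual derivation rules:

```python
# Illustrative sketch: derive candidate services from a (very simplified)
# i*-style goal model, where each actor goal yields one candidate service
# and the tasks that operationalize the goal become its operations.
goal_model = {
    "Learner": {
        "Assemble learning object": ["Search metadata", "Compose content"],
        "Obtain recommendations": ["Rank learning objects"],
    },
    "Repository": {
        "Provide metadata": ["Export metadata records"],
    },
}

candidate_services = []
for actor, goals in goal_model.items():
    for goal, tasks in goals.items():
        candidate_services.append({
            "service": goal.title().replace(" ", "") + "Service",
            "provider": actor,
            "operations": [t.replace(" ", "_").lower() for t in tasks],
        })

for s in candidate_services:
    print(f'{s["service"]} (by {s["provider"]}): {", ".join(s["operations"])}')
```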

    Contention-aware performance monitoring counter support for real-time MPSoCs

    Tasks running on MPSoCs experience contention delays when accessing the MPSoC's shared resources, which complicates timing analysis and the derivation of execution time bounds. Understanding the Actual Contention Delay (ACD) each task suffers due to co-running tasks, and the particular shared hardware resources in which contention occurs, is of prominent importance to increase confidence in the derived execution time bounds; and, whenever those bounds are violated, ACD provides information on the reasons for overruns. Unfortunately, existing MPSoC designs used in real-time domains offer limited hardware support to measure tasks' ACD, forgoing these potential benefits. In this paper we propose the Contention Cycle Stack (CCS), a mechanism that extends performance monitoring counters to track specific events from which the ACD each task suffers from every contending task on every shared hardware resource can be estimated. We build the CCS using a set of specialized low-overhead performance monitoring counters for the Cobham Gaisler GR740 (NGMP) MPSoC, used in the space domain, for which we show the benefits of CCS. The research leading to these results has received funding from the European Space Agency under contracts 4000109680, 4000110157 and NPI 4000102880, and from the Ministry of Science and Technology of Spain under contract TIN-2015-65316-P. Jaume Abella has been partially supported by the Ministry of Economy and Competitiveness under Ramon y Cajal postdoctoral fellowship RYC-2013-14717.
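
    A hedged sketch of how a contention cycle stack could be aggregated if the hardware exposed per-resource, per-contender stall-cycle counters; the counter layout and all numbers below are invented for illustration and do not reflect the GR740 implementation:

```python
# Assumed input: for the monitored task, a counter of stall cycles per
# (shared resource, contending core) pair. The sketch aggregates these into
# an overall Actual Contention Delay (ACD) estimate plus two breakdowns.
raw_counters = {
    # (shared resource, contending core) -> stall cycles charged to task A
    ("L2 cache", "core1"): 12_500,
    ("L2 cache", "core2"): 4_300,
    ("memory bus", "core1"): 8_900,
    ("memory bus", "core3"): 2_100,
}

def contention_cycle_stack(counters):
    """Aggregate stall cycles per shared resource and per contending core."""
    per_resource, per_core = {}, {}
    for (resource, core), cycles in counters.items():
        per_resource[resource] = per_resource.get(resource, 0) + cycles
        per_core[core] = per_core.get(core, 0) + cycles
    actual_contention_delay = sum(counters.values())
    return actual_contention_delay, per_resource, per_core

acd, by_resource, by_core = contention_cycle_stack(raw_counters)
print(f"Estimated ACD for task A: {acd} cycles")
print("  by shared resource:", by_resource)
print("  by contending core:", by_core)
```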

    The evolution of Tropos: Contexts, commitments and adaptivity

    Software evolution is the main research focus of the Tropos group at the University of Trento (UniTN): how do we build systems that are aware of their requirements and are able to dynamically reconfigure themselves in response to changes in context (the environment within which they operate) and in requirements? The purpose of this report is to offer an overview of ongoing work at UniTN. In particular, the report presents ideas and results from four lines of research: contextual requirements modeling and reasoning, commitments and goal models, developing self-reconfigurable systems, and requirements awareness.

    On the tailoring of CAST-32A certification guidance to real COTS multicore architectures

    The use of Commercial Off-The-Shelf (COTS) multicores in the real-time industry is on the rise due to multicores' potential for increased performance and reduced energy consumption. Yet, the unpredictable impact on timing of contention in shared hardware resources challenges certification. Furthermore, most safety certification standards target single-core architectures and do not provide explicit guidance for multicore processors. Recently, however, CAST-32A has been published, providing guidance for software planning, development and verification on multicores. In this paper, at the theoretical level, we provide a detailed review of the CAST-32A objectives and the difficulty of reaching them under current COTS multicore design trends; at the experimental level, we assess the difficulties of applying CAST-32A to a real multicore processor, the NXP P4080. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness (MINECO) under grant TIN2015-65316-P and the HiPEAC Network of Excellence. Jaume Abella has been partially supported by MINECO under Ramon y Cajal grant RYC-2013-14717.

    Machine Learning-Based Elastic Cloud Resource Provisioning in the Solvency II Framework

    The Solvency II Directive (Directive 2009/138/EC) is a European directive issued in November 2009 and effective from January 2016, enacted by the European Union to regulate the insurance and reinsurance sector through the discipline of risk management. Solvency II requires European insurance companies to conduct consistent evaluation and continuous monitoring of risks, a process which is computationally complex and extremely resource-intensive. To this end, companies are required to equip themselves with adequate IT infrastructures, facing a significant outlay. In this paper we present the design and development of a machine-learning-based approach to transparently deploy the most resource-intensive portion of the Solvency II-related computation on a cloud environment. Our proposal targets DISAR®, a Solvency II-oriented system initially designed to work on a grid of conventional computers. We show how our solution reduces the overall expenses associated with the computation, without hampering the privacy of the companies' data (making it suitable for conventional public cloud environments), while meeting the strict temporal requirements imposed by the Directive. Additionally, the system is organized as a self-optimizing loop that uses information gathered from actual (useful) computations, thus requiring a shorter training phase. We present an experimental study conducted on Amazon EC2 to assess the validity and efficiency of our proposal.
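
    A minimal sketch of the kind of self-optimizing provisioning loop the abstract describes, with an invented throughput model, deadline, and workload sizes; the real system's models, metrics, and DISAR integration are more elaborate:

```python
# Sketch of a self-optimizing loop: predict how many VMs are needed to finish
# a risk-evaluation batch within the time budget, run it, then feed the
# observed runtime back into the model. All numbers are illustrative.
import random

DEADLINE_S = 3600.0             # assumed time budget for one batch
history = [(1000, 16, 2100.0)]  # (scenarios, VMs, observed runtime) seed point

def predict_runtime(scenarios, vms):
    # crude throughput model: scenarios per VM-second, averaged over past runs
    rates = [s / (v * t) for s, v, t in history]
    rate = sum(rates) / len(rates)
    return scenarios / (rate * vms)

def choose_vms(scenarios, max_vms=256):
    # smallest VM count predicted to meet the deadline with a 10% safety margin
    for vms in range(1, max_vms + 1):
        if predict_runtime(scenarios, vms) <= 0.9 * DEADLINE_S:
            return vms
    return max_vms

def run_batch(scenarios, vms):
    # stand-in for the actual cloud execution; returns an observed runtime
    return predict_runtime(scenarios, vms) * random.uniform(0.85, 1.25)

for scenarios in (2000, 5000, 12000):          # successive evaluation jobs
    vms = choose_vms(scenarios)
    elapsed = run_batch(scenarios, vms)
    history.append((scenarios, vms, elapsed))  # feed observations back in
    print(f"{scenarios} scenarios on {vms} VMs -> {elapsed:.0f}s")
```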