
    Early Quantitative Assessment of Non-Functional Requirements

    Non-functional requirements (NFRs) of software systems are a well-known source of uncertainty in effort estimation. Yet, approaching NFRs quantitatively early in a project is hard. This paper makes a step towards reducing the impact of uncertainty due to NFRs. It offers a solution that incorporates NFRs into the functional size quantification process. The merits of our solution are twofold: first, it lets us quantitatively assess the NFR modeling process early in the project, and second, it lets us generate test cases for NFR verification purposes. We chose the NFR framework as a vehicle to integrate NFRs into the requirements modeling process and to apply quantitative assessment procedures. Our solution proposal also rests on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We extend its use for NFR testing purposes, which is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We discuss the advantages of our approach as well as the open questions related to its design.
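
    For intuition, COSMIC-FFP assigns one COSMIC Function Point (CFP) to each data movement (Entry, Exit, Read, Write) of a functional process. Below is a minimal sketch of that counting, assuming an NFR operationalization has been modeled as a functional process; the "authenticate user" process and its movements are illustrative, not taken from the paper.

```python
# Minimal sketch of COSMIC-FFP counting (ISO/IEC 19761): each data
# movement (Entry, Exit, Read, Write) contributes 1 CFP.
from dataclasses import dataclass, field

MOVEMENT_TYPES = {"Entry", "Exit", "Read", "Write"}

@dataclass
class FunctionalProcess:
    name: str
    movements: list[str] = field(default_factory=list)

    def size_cfp(self) -> int:
        # 1 COSMIC Function Point per data movement.
        assert all(m in MOVEMENT_TYPES for m in self.movements)
        return len(self.movements)

# Hypothetical NFR operationalization modeled as a functional process:
# a security NFR realized by an "authenticate user" process.
auth = FunctionalProcess("authenticate user", ["Entry", "Read", "Exit"])
print(auth.name, "=", auth.size_cfp(), "CFP")  # -> authenticate user = 3 CFP
```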

    Non-functional requirements: size measurement and testing with COSMIC-FFP

    The non-functional requirements (NFRs) of software systems are well known to add a degree of uncertainty to the process of estimating the cost of any project. This paper contributes to the achievement of more precise project size measurement through incorporating NFRs into the functional size quantification process. We report on an initial solution proposed to deal with the problem of quantitatively assessing the NFR modeling process early in the project, and of generating test cases for NFR verification purposes. The NFR framework has been chosen for the integration of NFRs into the requirements modeling process and for their quantitative assessment. Our proposal is based on the functional size measurement method COSMIC-FFP, adopted in 2003 as the ISO/IEC 19761 standard. We also extend the use of COSMIC-FFP for NFR testing purposes. This is an essential step for improving NFR development and testing effort estimates, and consequently for managing the scope of NFRs. We discuss the merits of the proposed approach and the open questions related to its design.
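
    As a rough illustration of the testing extension, one might derive one verification stub per counted data movement; the mapping and the example process below are assumptions for illustration, not the paper's actual procedure.

```python
# Illustrative sketch: one NFR verification stub per COSMIC data
# movement. The process name and movement list are hypothetical.
def test_stubs(process_name: str, movements: list[str]) -> list[str]:
    # In practice each stub would assert the NFR-relevant property
    # (e.g. response time or access control) at that movement.
    return [f"TC{i}: exercise {m} of '{process_name}'"
            for i, m in enumerate(movements, start=1)]

for stub in test_stubs("authenticate user", ["Entry", "Read", "Exit"]):
    print(stub)
```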

    Type-driven automated program transformations and cost modelling for optimising streaming programs on FPGAs

    In this paper we present a novel approach to program optimisation based on compiler-based type-driven program transformations and a fast and accurate cost/performance model for the target architecture. We target streaming programs in the problem domain of scientific computing, such as numerical weather prediction. We present our theoretical framework for type-driven program transformation, our target high-level language and intermediate representation languages, and the cost model, and we demonstrate the effectiveness of our approach by comparison with a commercial toolchain.
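
    As a rough illustration of what such a cost model might capture for a streaming pipeline, the sketch below assumes each stage is characterized by a latency, an initiation interval, and an area figure; the stage names and numbers are invented, and the paper's model for its target FPGA architecture is necessarily more detailed.

```python
# Toy cost model for a chained streaming pipeline on an FPGA-like
# target. All figures are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Stage:
    name: str
    latency: int  # cycles from stage input to stage output
    ii: int       # initiation interval: cycles between accepted inputs
    area: int     # abstract resource units (e.g. LUTs)

def pipeline_cost(stages: list[Stage]) -> dict:
    return {
        "latency": sum(s.latency for s in stages),  # stages are chained
        "ii": max(s.ii for s in stages),            # slowest stage limits throughput
        "area": sum(s.area for s in stages),
    }

pipe = [Stage("map f", 3, 1, 120), Stage("stencil", 9, 2, 800), Stage("reduce", 5, 1, 200)]
print(pipeline_cost(pipe))
```

    A candidate transformation would then be accepted when the re-costed pipeline improves the chosen objective, without running the program.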

    HARVEST FUNCTIONS: THE NORWEGIAN BOTTOM TRAWL COD FISHERIES

    A detailed and comprehensive set of catch and effort data for the cod fisheries of 18 Norwegian bottom trawlers has been obtained for the period 1971–85, a period with few binding quota restrictions on vessel operations. Harvest functions have been designed and estimated. The independent variables are hours of trawling per vessel day and biomass of the cod stock (3+). Daily biomass estimates have been calculated by polynomial interpolation of the annual estimates of the International Council for the Exploration of the Sea (ICES). Parameter estimates and performance indicators of the different models were obtained by maximizing the log-likelihood function using numerical methods. The best result was obtained for a harvest model allowing for seasonal changes and with an autocorrelated error term. For this model, the stock-output elasticity is estimated at 0.424, the effort-output elasticity at 1.232, and the technological change at about a 2% annual increase in productivity. The seasonal changes in catchability are significant, with the lowest intra-annual catchability being less than 30% of the annual maximum.
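
    The reported elasticities fit a Cobb-Douglas style harvest function; the sketch below evaluates that functional form with the paper's point estimates, while the scale constant and the seasonal multiplier are illustrative assumptions, not estimated values.

```python
# Cobb-Douglas style harvest function with the reported elasticities:
# stock 0.424, effort 1.232, and ~2% annual technological change.
import math

ALPHA_STOCK, BETA_EFFORT, TECH_GROWTH = 0.424, 1.232, 0.02
A = 1.0  # hypothetical scale (catchability) constant

def harvest(effort_hours: float, biomass: float, years_since_base: float,
            seasonal_factor: float = 1.0) -> float:
    # seasonal_factor < 1 models the intra-annual catchability dips the
    # paper reports (minimum below 30% of the annual maximum).
    return (A * math.exp(TECH_GROWTH * years_since_base) * seasonal_factor
            * effort_hours ** BETA_EFFORT * biomass ** ALPHA_STOCK)

print(harvest(effort_hours=10.0, biomass=1.5e6, years_since_base=5,
              seasonal_factor=0.3))
```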

    Models for an Ecosystem Approach to Fisheries

    This document is one outcome from a workshop held in Gizo in October 2010, attended by 82 representatives from government, NGOs, the private sector, and communities. The target audience for the document is primarily organizations planning to work with coastal communities of Solomon Islands to implement Community-Based Resource Management (CBRM). It is, however, also envisaged that the document will serve as a reference for communities to better understand what to expect from their partners, and for donors, to be informed about agreed approaches amongst Solomon Islands stakeholders. This document does not attempt to summarize all the outcomes of the workshop; rather, it focuses on the Solomon Islands Coral Triangle Initiative (CTI) National Plan of Action (NPoA), Theme 1: support and implementation of CBRM and, specifically, the scaling up of CBRM in Solomon Islands. Most of the principles given in this document are derived from experiences in coastal communities and ecosystems as, until relatively recently, these have received most attention in Solomon Islands resource management. It is recognized, however, that the majority of these principles will be applicable to both coastal and terrestrial initiatives. This document synthesizes information provided by stakeholders at the October 2010 workshop and covers some basic principles of engagement and implementation that have been learned over more than twenty years of activities by the stakeholder partners in Solomon Islands. The document updates and expands on a summary of guiding principles for CBRM which was originally prepared by the Solomon Islands Locally Managed Marine Area Network (SILMMA) in 2007.

    Efficient Simulation of Structural Faults for the Reliability Evaluation at System-Level

    In recent technology nodes, reliability is considered a part of the standard design flow at all levels of embedded system design. While techniques that use only low-level models at gate- and register transfer-level offer high accuracy, they are too inefficient to consider the overall application of the embedded system. Multi-level models with high abstraction are essential to efficiently evaluate the impact of physical defects on the system. This paper provides a methodology that leverages state-of-the-art techniques for efficient fault simulation of structural faults together with transaction-level modeling. This way it is possible to accurately evaluate the impact of the faults on the entire hardware/software system. A case study of a system consisting of hardware and software for image compression and data encryption is presented, and the method is compared to a standard gate/RT mixed-level approach.
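
    To illustrate the mixed-level idea, the toy sketch below injects a structural stuck-at fault into a "gate-level" checksum unit and observes only the transaction-level outcome; the pipeline and fault model are simplified stand-ins for the paper's image-compression and encryption case study.

```python
# Toy mixed-level fault evaluation: a stuck-at-0 fault on one output
# bit of a checksum unit, observed only at transaction level.
def checksum(data: bytes, stuck_bit: int | None = None) -> int:
    acc = 0
    for b in data:
        acc = (acc + b) & 0xFF
    if stuck_bit is not None:         # inject stuck-at-0 on one bit
        acc &= ~(1 << stuck_bit)
    return acc

def transaction_ok(payload: bytes, fault_bit: int | None) -> bool:
    # Transaction-level view: only success/failure of the transfer is
    # visible, not internal signal values.
    return checksum(payload, fault_bit) == checksum(payload, None)

payload = bytes(range(32))
effects = {f: transaction_ok(payload, f) for f in [None] + list(range(8))}
print(effects)  # entries that are False mark faults visible at system level
```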

    An Adaptive Design Methodology for Reduction of Product Development Risk

    Embedded systems' interaction with the environment inherently complicates the understanding of requirements and their correct implementation. Moreover, product uncertainty is highest during the early stages of development. Design verification is an essential step in the development of any system, especially for embedded systems. This paper introduces a novel adaptive design methodology which incorporates step-wise prototyping and verification. With each adaptive step, the product-realization level is enhanced while the level of product uncertainty decreases, thereby reducing the overall costs. The backbone of this framework is the development of a Domain Specific Operational (DOP) Model and the associated Verification Instrumentation for Test and Evaluation, developed based on the DOP model. Together they generate functionally valid test sequences for carrying out prototype evaluation. With the help of a case study, 'Multimode Detection Subsystem', the application of this method is sketched. Design methodologies can be compared by defining and computing a generic performance criterion such as Average design-cycle Risk. For the case study, by computing the Average design-cycle Risk, it is shown that the adaptive method reduces the product development risk for a small increase in the total design cycle time.
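
    As a hedged illustration of an Average design-cycle Risk style criterion (the paper defines its own), one could model the risk at each step as the remaining product uncertainty weighted by the cost committed so far, averaged over the cycle; the weighting and the numbers below are assumptions for illustration only.

```python
# Illustrative "average design-cycle risk": remaining uncertainty at
# each step weighted by cumulative committed cost, then averaged.
def average_cycle_risk(uncertainty: list[float], cumulative_cost: list[float]) -> float:
    assert len(uncertainty) == len(cumulative_cost)
    step_risk = [u * c for u, c in zip(uncertainty, cumulative_cost)]
    return sum(step_risk) / len(step_risk)

# Step-wise prototyping drives uncertainty down early, whereas a
# non-adaptive flow keeps uncertainty high until late verification.
adaptive     = average_cycle_risk([0.8, 0.4, 0.2, 0.1], [10, 25, 45, 70])
conventional = average_cycle_risk([0.8, 0.7, 0.6, 0.5], [10, 25, 45, 70])
print(adaptive, conventional)  # lower average risk for the adaptive flow
```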

    Mechanistic modeling of architectural vulnerability factor

    Reliability to soft errors is a significant design challenge in modern microprocessors owing to an exponential increase in the number of transistors on chip and the reduction in operating voltages with each process generation. Architectural Vulnerability Factor (AVF) modeling using microarchitectural simulators enables architects to make informed performance, power, and reliability tradeoffs. However, such simulators are time-consuming and do not reveal the microarchitectural mechanisms that influence AVF. In this article, we present an accurate first-order mechanistic analytical model to compute AVF, developed using the first principles of out-of-order superscalar execution. This model provides insight into the fundamental interactions between the workload and the microarchitecture that together influence AVF. We use the model to perform design space exploration, parametric sweeps, and workload characterization for AVF.
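
    For orientation, occupancy-based AVF estimation treats the AVF of a hardware structure as the time-averaged fraction of its bits holding ACE (required for architecturally correct execution) state. The per-cycle counts below are made-up inputs; the paper derives such quantities analytically from first-order models of out-of-order execution rather than from simulation traces.

```python
# Occupancy-based AVF: average ACE bits per cycle divided by the
# structure's total bit count. Inputs are hypothetical.
def avf(ace_bits_per_cycle: list[int], total_bits: int) -> float:
    return sum(ace_bits_per_cycle) / (len(ace_bits_per_cycle) * total_bits)

# Hypothetical 64-entry, 32-bit issue queue observed over 6 cycles:
ace_counts = [512, 640, 600, 480, 700, 560]
print(f"AVF = {avf(ace_counts, total_bits=64 * 32):.3f}")  # -> AVF = 0.284
```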