189 research outputs found

    Agile and Lean Systems Engineering: Kanban in Systems Engineering

    This is the second of two reports created for research on this topic funded through SERC. The first report, SERC-TR-032-1, dated March 13, 2012, constituted the 2011-2012 Annual Technical Report and the Final Technical Report of SERC Research Task RT-6: Software Intensive Systems Data Quality and Estimation Research in Support of Future Defense Cost Analysis. The overall objective of RT-6 was to use data submitted to DoD in Software Resources Data Report (SRDR) forms to provide guidance for DoD projects in estimating software costs for future DoD projects. In analyzing the data, the project found variances in productivity data that made such SRDR-based estimates highly variable. The project then performed additional analyses that provided better bases of estimate, and also identified ambiguities in the SRDR data definitions that enabled the project to help the DoD DCARC organization develop better SRDR data definitions. In SERC-TR-2012-032-1, the resulting manual provided the guidance elements for software cost estimation performers and users. Several appendices provide further related information on acronyms, sizing, nomograms, work breakdown structures, and references. SERC-TR-2013-032-2 (the current report) includes the “Software Cost Estimation Metrics Manual” and constitutes the 2012-2013 Annual Technical Report and the Final Technical Report of SERC Research Task Order 0024, RT-6: Software Intensive Systems Cost and Schedule Estimation. Estimating the cost to develop a software application is different from almost any other manufacturing process. In other manufacturing disciplines, the product is developed once and replicated many times using physical processes. Replication improves physical process productivity (duplicate machines produce more items faster), reduces learning-curve effects on people, and spreads unit cost over many items. A software application, by contrast, is a single production item: every application is unique.
    The only physical processes are the documentation of ideas, their translation into computer instructions, and their validation and verification. Production productivity decreases, rather than increases, when more people are employed to develop the software application. Savings through replication are realized only in the development processes and in the learning-curve effects on the management and technical staff. Unit cost is not reduced by creating the software application over and over again. This manual helps analysts and decision makers develop accurate, easy, and quick software cost estimates for different operating environments such as ground, shipboard, air, and space. It was developed by the Air Force Cost Analysis Agency (AFCAA) in conjunction with DoD Service Cost Agencies, assisted by the SERC through the involvement of the University of Southern California and the Naval Postgraduate School. The intent is to improve the quality and consistency of estimating methods across cost agencies and program offices through guidance, standardization, and knowledge sharing. The manual consists of chapters on metric definitions (e.g., what is meant by equivalent lines of code), examples of metric definitions from commercially available cost models, the data collection and repository form, guidelines for preparing the data for analysis, analysis results, cost estimating relationships found in the data, productivity benchmarks, future cost estimation challenges, and a very large appendix.
    U.S. Department of Defense, Systems Engineering Research Center (SERC), Contract H98230-08-D-0171
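    The notion of equivalent lines of code mentioned above can be illustrated with a minimal sketch. The adaptation weights below are illustrative assumptions in the style of COCOMO-family models, not values taken from the manual:

```python
def equivalent_sloc(new_sloc, adapted_sloc, dm, cm, im):
    """Compute equivalent SLOC from new and adapted code.

    new_sloc     -- lines written from scratch
    adapted_sloc -- pre-existing lines being modified or reused
    dm, cm, im   -- fractions (0..1) of design modified, code
                    modified, and integration effort required
    """
    # Adaptation adjustment factor (illustrative COCOMO-style weights)
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im
    return new_sloc + adapted_sloc * aaf

# Example: 10,000 new lines plus 50,000 reused lines needing
# 20% design rework, 30% code rework, 30% re-integration effort
print(round(equivalent_sloc(10_000, 50_000, 0.2, 0.3, 0.3)))  # 23000
```

    An equivalent-size measure of this kind is what allows a single cost estimating relationship to cover mixes of new, modified, and reused code.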

    Generating system reliability optimization

    Reliability optimization can be applied in both conventional and non-conventional generating system planning. This thesis is concerned with generation adequacy optimization, with emphasis on applications to wind energy penetration planning and interruptible load utilization. New models, indices and techniques for generation adequacy optimization including wind turbines and interruptible load utilization have been developed in this research work. A sequential Monte Carlo simulation technique for wind power modeling and reliability assessment of a generating system was developed in the research associated with optimum wind energy penetration planning. An auto-regressive and moving average (ARMA) time series model is used to simulate the hourly wind speeds. Two new risk-based capacity benefit indicators, designated the Load Carrying Capability Benefit Ratio (LCCBR) and the Equivalent Capacity Ratio (ECR), are introduced. These two indices are used to indicate the capacity benefit and credit associated with a wind energy conversion system. A bisection technique to assess them was further developed. The problem of determining the optimum site-matching wind turbine parameters was studied with the LCCBR and ECR as the optimization objective functions. Sensitivity studies were conducted to show the effect of wind energy penetration level on generation capacity benefit. A procedure for optimum penetration planning was formulated, which extends the methods developed for conventional generation adequacy optimization. A basic framework and techniques to conduct interruptible load analysis using sequential Monte Carlo simulation were created in the research associated with interruptible load utilization. A new index designated the Avoidable Additional Generating Capacity (AAGC) is introduced. Bisection search techniques were developed to effectively determine the Incremental Load Carrying Capability (ILCC) and AAGC.
    Case studies on suitable contractual options for interruptible load customers under given conditions are also presented in this thesis. The results show that selecting a suitable set of interruptible load contractual conditions, in which various risk conditions are well matched, will achieve enhanced interruptible load carrying capability or capacity benefits. The series of case studies described in this thesis indicates that the proposed concepts, framework, models and quantitative techniques can be applied in practical engineering situations to provide a scientific basis for generating system planning.
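    The bisection idea used for indices such as ILCC and AAGC can be sketched generically: given a risk index that is monotone in peak load, search for the largest load whose risk stays within a target criterion. The risk function below is an illustrative placeholder, not the thesis's Monte Carlo simulation model:

```python
def bisect_load_capability(risk_fn, target_risk, lo, hi, tol=1.0):
    """Find the largest peak load (MW) with risk_fn(load) <= target_risk.

    risk_fn     -- monotone non-decreasing risk index vs. peak load
                   (in practice, a sequential Monte Carlo estimate)
    target_risk -- acceptable risk criterion (e.g. LOLE in h/yr)
    lo, hi      -- load bracket in MW, with risk_fn(lo) <= target_risk
    """
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if risk_fn(mid) <= target_risk:
            lo = mid          # still acceptable: push the load up
        else:
            hi = mid          # criterion violated: back off
    return lo

# Illustrative convex risk curve standing in for a simulation run
toy_risk = lambda load: 0.001 * max(0.0, load - 2000.0) ** 2
print(round(bisect_load_capability(toy_risk, 10.0, 2000.0, 3000.0)))  # 2100
```

    Each evaluation of `risk_fn` is expensive when it is a Monte Carlo estimate, which is why a bracketing method with few evaluations, such as bisection, is attractive here.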

    Computer Aided Verification

    The open access two-volume set LNCS 12224 and 12225 constitutes the refereed proceedings of the 32nd International Conference on Computer Aided Verification, CAV 2020, held in Los Angeles, CA, USA, in July 2020.* The 43 full papers presented, together with 18 tool papers and 4 case studies, were carefully reviewed and selected from 240 submissions. The papers are organized in the following topical sections: Part I: AI verification; blockchain and security; concurrency; hardware verification and decision procedures; and hybrid and dynamic systems. Part II: model checking; software verification; stochastic systems; and synthesis. *The conference was held virtually due to the COVID-19 pandemic.

    Decomposing responses to mobile notifications

    Notifications from mobile devices frequently prompt us with information, either merely to inform us or to elicit a reaction. This has led to increasing research interest in considering an individual’s interruptibility before issuing notifications, so that they are positively received. To achieve this, predictive models need to be built from previous response behaviour where the individual’s interruptibility is known. However, there are several degrees of freedom in achieving this, from differing definitions of what it means to be interruptible and for a notification to be successful, to the various methods for collecting data and building predictive models. The primary focus of this thesis is to improve upon the typical convention used for labelling interruptibility, an area which has had limited direct attention. This includes the proposal of a flexible framework, called the decision-on-information-gain model, which passively observes response behaviour in order to support various interruptibility definitions. In contrast, previous studies have largely focused on investigating influential contextual factors for predicting interruptibility, using a broad labelling convention that relies on notifications being responded to fully and, potentially, a survey being completed. The approach is supported through two in-the-wild studies of Android notifications, one with 11,000 notifications across 90 users, and another with 32,000,000 across 3,000 users.
    Analysis of these datasets shows that: a) responding to notifications is a decision-making process, whereby individuals can be reachable but not receptive to a notification’s content, supporting the premise of the approach; b) the approach is implementable on typical Android devices and capable of adapting to different notification designs and user preferences; and c) the different labels produced by the model are predictable using data sources that do not require invasive permissions or persistent background monitoring; however, there are notable performance differences between different machine learning strategies for training and evaluation.
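    The reachable-versus-receptive distinction can be illustrated with a toy labelling function in the spirit of the decision-on-information-gain model. The event vocabulary and rules below are illustrative assumptions, not the thesis's implementation:

```python
def label_response(events):
    """Label one notification from its observed response events.

    events -- ordered list of event names; illustrative vocabulary:
              'delivered', 'seen' (glanced at in lock screen/drawer),
              'opened' (tapped through to content), 'dismissed'
    """
    if 'opened' in events:
        return 'receptive'      # engaged with the content itself
    if 'seen' in events or 'dismissed' in events:
        return 'reachable'      # noticed it, but chose not to engage
    return 'unreachable'        # no observable reaction at all

print(label_response(['delivered', 'seen', 'dismissed']))  # reachable
print(label_response(['delivered', 'seen', 'opened']))     # receptive
```

    Labelling from passively observed events like these is what avoids the broad convention of requiring a full response or a completed survey.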

    Best Practices and Methodological Guidelines for Conducting Gas Risk Assessments

    The EC Regulation concerning measures to safeguard security of gas supply (EC/994/2010) requires Member States to make a full assessment of the risks affecting the security of gas supply. According to Article 9, this risk assessment must: (a) use the infrastructure and supply standards (Articles 6 and 8); (b) take into account all relevant national and regional circumstances; (c) run various disruption scenarios; (d) identify the interaction and correlation of risks with other Member States; and (e) take into account the maximal interconnection capacity of each border entry and exit point. The objective of this report is to provide guidance and advice for performing such risk assessments. It does so by first providing a literature review and then proposing a basic structure for undertaking a gas security risk assessment, in accordance with best practices and standard procedures found in risk management. JRC.F.3 - Energy Security
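    The infrastructure standard referenced in Article 6 is the N-1 criterion, which checks whether the remaining gas infrastructure can cover exceptional demand after the loss of the single largest infrastructure item. A minimal sketch follows; the variable names mirror the regulation's formula, while the numbers are purely illustrative:

```python
def n_minus_1(ep_m, p_m, s_m, lng_m, i_m, d_max):
    """N-1 [%] = (EPm + Pm + Sm + LNGm - Im) / Dmax * 100

    ep_m  -- technical capacity of entry-point pipelines
    p_m   -- maximal domestic production capability
    s_m   -- maximal storage deliverability
    lng_m -- maximal LNG facility send-out capacity
    i_m   -- capacity of the single largest gas infrastructure
    d_max -- total daily demand on a 1-in-20-year peak cold day
    (all in mcm/day; the example values below are illustrative)
    """
    return (ep_m + p_m + s_m + lng_m - i_m) / d_max * 100

# A Member State meets the standard when the result is >= 100%
print(round(n_minus_1(ep_m=120, p_m=30, s_m=40, lng_m=20, i_m=60,
                      d_max=140), 1))  # 107.1
```

    Running disruption scenarios, as item (c) requires, then amounts to re-evaluating such indicators with the affected capacities removed.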

    Approximate Assertional Reasoning Over Expressive Ontologies

    In this thesis, approximate reasoning methods for scalable assertional reasoning are provided whose computational properties can be established in a well-understood way, namely in terms of soundness and completeness, and whose quality can be analyzed in terms of statistical measurements, namely recall and precision. The basic idea of these approximate reasoning methods is to speed up reasoning by trading off the quality of reasoning results against increased speed.
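    The recall/precision evaluation described here can be made concrete: compare the entailments an approximate reasoner returns against those of a complete reference reasoner. The answer sets below are illustrative:

```python
def precision_recall(approx, reference):
    """Measure an approximate reasoner against a complete one.

    approx    -- set of entailments the approximate method returns
    reference -- set of entailments the complete reasoner returns
    """
    tp = len(approx & reference)                 # correct answers found
    precision = tp / len(approx) if approx else 1.0
    recall = tp / len(reference) if reference else 1.0
    return precision, recall

# A sound-but-incomplete method: everything it returns is correct
# (precision 1.0), but it misses some entailments (recall < 1.0)
approx = {'C(a)', 'D(b)'}
reference = {'C(a)', 'D(b)', 'E(c)', 'F(a)'}
print(precision_recall(approx, reference))  # (1.0, 0.5)
```

    Soundness thus corresponds to precision 1.0 and completeness to recall 1.0, which is how the logical and statistical views of quality line up.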