
    Benefits and Challenges of Model-based Software Engineering: Lessons Learned based on Qualitative and Quantitative Findings

    Even though Model-based Software Engineering (MBSwE) techniques and Autogenerated Code (AGC) have been increasingly used to produce complex software systems, there is only anecdotal knowledge about the state of the practice. Furthermore, there is a lack of empirical studies that explore the potential quality improvements due to the use of these techniques. This paper presents in-depth qualitative findings about development and Software Assurance (SWA) practices and a detailed quantitative analysis of software bug reports of a NASA mission that used MBSwE and AGC. The mission's flight software is a combination of handwritten code and AGC developed by two different approaches: one based on state chart models (AGC-M) and another on specification dictionaries (AGC-D). The empirical analysis of fault proneness is based on 380 closed bug reports created by software developers. Our main findings include: (1) MBSwE and AGC provide some benefits, but also impose challenges. (2) SWA done only at the model level is not sufficient: AGC should also be tested, the models and AGC should always be kept in sync, and AGC must not be changed manually. (3) Fixes made to address an individual bug report were spread both across multiple modules and across multiple files; on average, each bug report led to fixes in 1.4 modules, that is, 3.4 files. (4) Most bug reports led to changes in more than one type of file. The majority of changes to auto-generated source code files were made in conjunction with changes to either files with state chart models or XML files derived from dictionaries. (5) For newly developed files, AGC-M and handwritten code were of similar quality, while AGC-D files were the least fault prone.
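    A minimal sketch of the fix-spread measurement in finding (3), assuming a hypothetical flat table of fix records with one row per (bug report, module, file); the schema and the numbers are invented for illustration:

    import pandas as pd

    # Hypothetical input: each closed bug report lists every file its fix changed.
    fixes = pd.DataFrame({
        "bug_id": [1, 1, 1, 2, 2, 3],
        "module": ["gnc", "gnc", "comm", "gnc", "gnc", "thermal"],
        "file":   ["a.c", "b.c", "c.xml", "a.c", "d.c", "e.c"],
    })

    per_bug = fixes.groupby("bug_id").agg(
        modules=("module", "nunique"),  # distinct modules touched by each fix
        files=("file", "nunique"),      # distinct files touched by each fix
    )
    print(per_bug.mean())  # averages comparable in kind to the reported 1.4 / 3.4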

    PEER Testbed Study on a Laboratory Building: Exercising Seismic Performance Assessment

    From 2002 to 2004 (years five and six of a ten-year funding cycle), the PEER Center organized the majority of its research around six testbeds. Two buildings, two bridges, a campus, and a transportation network were selected as case studies to “exercise” the PEER performance-based earthquake engineering methodology. All projects involved interdisciplinary teams of researchers, each producing data to be used by other colleagues in their research. The testbeds demonstrated that it is possible to create the data necessary to populate the PEER performance-based framing equation, linking the hazard analysis, the structural analysis, the development of damage measures, loss analysis, and decision variables. This report describes one of the building testbeds, the UC Science Building. The project was chosen to focus attention on the consequences of losses of laboratory contents, particularly downtime. The UC Science testbed evaluated the earthquake hazard and the structural performance of a well-designed, recently built reinforced concrete laboratory building using the OpenSees platform. Researchers conducted shake table tests on samples of critical laboratory contents in order to develop fragility curves used to analyze the probability of losses based on equipment failure. The UC Science testbed undertook an extreme case in performance assessment: linking performance of contents to operational failure. The research shows the interdependence of building structure, systems, and contents in performance assessment, and highlights where further research is needed. The Executive Summary provides a short description of the overall testbed research program, while the main body of the report includes summary chapters from individual researchers. More extensive research reports are cited in the reference section of each chapter.
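    For reference, the PEER framing equation that the testbeds exercised is commonly written as follows (notation varies slightly across PEER reports):

    % PEER performance-based earthquake engineering framing equation
    \lambda(DV) = \iiint G(DV \mid DM)\,\bigl|dG(DM \mid EDP)\bigr|\,
                  \bigl|dG(EDP \mid IM)\bigr|\,\bigl|d\lambda(IM)\bigr|
    % IM:  intensity measure (hazard analysis)
    % EDP: engineering demand parameter (structural analysis)
    % DM:  damage measure
    % DV:  decision variable (e.g., repair cost, downtime)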

    Security Vulnerability Profiles of Mission Critical Software: Empirical Analysis of Security Related Bug Reports

    While some prior research work exists on characteristics of software faults (i.e., bugs) and failures, very little work has been published on the analysis of software application vulnerabilities. This paper aims to contribute towards filling that gap by presenting an empirical investigation of application vulnerabilities. The results are based on data extracted from issue tracking systems of two NASA missions. These data were organized in three datasets: Ground mission IVV issues, Flight mission IVV issues, and Flight mission Developers issues. In each dataset, we identified security related software bugs and classified them into specific vulnerability classes. Then, we created the security vulnerability profiles, i.e., determined where and when the security vulnerabilities were introduced and which vulnerability classes dominated. Our main findings include: (1) In the IVV issues datasets, the majority of vulnerabilities were code related and were introduced in the Implementation phase. (2) For all datasets, around 90% of the vulnerabilities were located in two to four subsystems. (3) Out of 21 primary classes, five dominated: Exception Management, Memory Access, Other, Risky Values, and Unused Entities. Together, they contributed 80% to 90% of the vulnerabilities in each dataset.
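    A minimal sketch of building such a vulnerability profile, assuming the security related reports have already been labeled with a class; the class names come from the abstract, but the counts are invented:

    from collections import Counter

    # One entry per security related bug report (invented counts).
    labels = (["Exception Management"] * 30 + ["Memory Access"] * 25 +
              ["Risky Values"] * 15 + ["Unused Entities"] * 10 + ["Other"] * 8)

    profile = Counter(labels)
    total = sum(profile.values())
    for cls, n in profile.most_common():
        print(f"{cls:21s} {n:3d}  ({100 * n / total:.1f}%)")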

    Johnson Space Center's Risk and Reliability Analysis Group 2008 Annual Report

    The Johnson Space Center (JSC) Safety & Mission Assurance (S&MA) Directorate's Risk and Reliability Analysis Group provides both mathematical and engineering analysis expertise in the areas of Probabilistic Risk Assessment (PRA), Reliability and Maintainability (R&M) analysis, and data collection and analysis. The fundamental goal of this group is to provide National Aeronautics and Space Administration (NASA) decision makers with the necessary information to make informed decisions when evaluating personnel, flight hardware, and public safety concerns associated with current operating systems as well as with any future systems. The Analysis Group includes a staff of statistical and reliability experts with valuable backgrounds in the statistical, reliability, and engineering fields. This group includes JSC S&MA Analysis Branch personnel as well as S&MA support services contractors, such as Science Applications International Corporation (SAIC) and SoHaR. The Analysis Group's experience base includes the nuclear power (both commercial and Navy), manufacturing, Department of Defense, chemical, and shipping industries, as well as significant aerospace experience, specifically in the Shuttle, International Space Station (ISS), and Constellation Programs. The Analysis Group partners with project and program offices, other NASA centers, NASA contractors, and universities to provide additional resources or information to the group when performing various analysis tasks. The JSC S&MA Analysis Group is recognized as a leader in risk and reliability analysis within the NASA community. Therefore, the Analysis Group is in high demand to help the Space Shuttle Program (SSP) continue to fly safely, assist in designing the next generation spacecraft for the Constellation Program (CxP), and promote advanced analytical techniques. The Analysis Section's tasks include teaching classes and instituting personnel qualification processes to enhance the professional abilities of our analysts, as well as performing major probabilistic assessments used to support flight rationale and help establish program requirements. During 2008, the Analysis Group performed more than 70 assessments. Although all these assessments were important, some were instrumental in the decision-making processes for the Shuttle and Constellation Programs. Two of the more significant tasks were the Space Transportation System (STS)-122 Low Level Cutoff PRA for the SSP and the Orion Pad Abort One (PA-1) PRA for the CxP. These two activities, along with the numerous other tasks the Analysis Group performed in 2008, are summarized in this report. This report also highlights several ongoing and upcoming efforts to provide crucial statistical and probabilistic assessments, such as the Extravehicular Activity (EVA) PRA for the Hubble Space Telescope service mission and the first fully integrated PRAs for the CxP's Lunar Sortie and ISS missions.

    Preliminary Recommendations for the Collection, Storage, and Analysis of UAS Safety Data

    Although the use of UASs in military and public service operations is proliferating, civilian use of UASs remains limited in the United States today. With efforts underway to accommodate and integrate UASs into the NAS, a proactive understanding of safety issues, i.e., the unique hazards and the corresponding risks that UASs pose not only through their operations for commercial purposes but also to existing operations in the NAS, is especially important so as to (a) support the development of a sound regulatory basis, (b) regulate, design, and properly equip UASs, and (c) effectively mitigate the risks posed. Data, especially about system and component failures, incidents, and accidents, provides valuable insight into how performance and operational capabilities/limitations contribute to hazards. Since the majority of UAS operations today take place in a context that is significantly different from the norm in civil aviation, i.e., with different operational goals and standards, identifying what constitutes useful and sufficient data on UASs and their operations is a substantial research challenge.

    Engineering, Life Sciences, and Health/Medicine Synergy in Aerospace Human Systems Integration: The Rosetta Stone Project

    In the realm of aerospace engineering and the physical sciences, we have developed laws of physics based on empirical and research evidence that reliably guide design, research, and development efforts. For instance, an engineer designs a system based on data and experience that can be consistently and repeatedly verified. This reproducibility depends on the consistency and dependability of the materials on which the engineer works and is subject to physics, geometry, and convention. In the life sciences and medicine, these principles apply as well, but individuality introduces a host of variables into the mix, resulting in characteristics and outcomes that can be quite broad within a population of individuals. This individuality ranges from differences at the genetic and cellular level to differences in an individual's personality and abilities due to sex and gender, environment, education, etc.

    Reproducibility of environment-dependent software failures: An experience report

    We investigate the dependence of software failure reproducibility on the environment in which the software is executed. The existence of such dependence has been ascertained in the literature, but so far it has not been fully characterized. In this paper we pinpoint some of the environmental components that can affect the reproducibility of a failure and show this influence through an experimental campaign conducted on the MySQL Server software system. The set of failures of interest is drawn from MySQL's failure reports database, and an experiment is designed for each of these failures. The experiments expose the influence of disk usage and level of concurrency on MySQL failure reproducibility. Furthermore, the results show that high levels of these factors increase the probability of failure reproducibility.
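    A minimal sketch of the experimental logic described above: re-run a failure's triggering workload many times per environment configuration and estimate the probability of reproduction. run_workload() is a hypothetical stand-in for driving the server under a failure's reported conditions, and the bias it encodes is invented:

    import itertools, random

    def run_workload(disk_usage, concurrency):
        # Placeholder: returns True if the failure manifested in this run.
        # (Illustrative bias only: higher levels -> higher reproduction odds.)
        return random.random() < 0.2 + 0.4 * disk_usage + 0.3 * (concurrency / 64)

    N = 100  # trials per configuration
    for disk_usage, concurrency in itertools.product([0.2, 0.9], [4, 64]):
        reproduced = sum(run_workload(disk_usage, concurrency) for _ in range(N))
        print(f"disk={disk_usage:.0%} threads={concurrency:2d} "
              f"P(reproduce) ~ {reproduced / N:.2f}")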

    Empirical Analysis and Automated Classification of Security Bug Reports

    With the ever-expanding amount of sensitive data being placed into computer systems, the need for effective cybersecurity is of utmost importance. However, there is a shortage of detailed empirical studies of security vulnerabilities from which cybersecurity metrics and best practices could be determined. This thesis has two main research goals: (1) to explore the distribution and characteristics of security vulnerabilities based on the information provided in bug tracking systems and (2) to develop data analytics approaches for automatic classification of bug reports as security or non-security related. This work is based on using three NASA datasets as case studies. The empirical analysis showed that the majority of software vulnerabilities belong to only a small number of types. Addressing these types of vulnerabilities will consequently lead to cost-efficient improvement of software security. Since this analysis requires labeling of each bug report in the bug tracking system, we explored using machine learning to automate the classification of each bug report as security related or not (two-class classification), as well as of each security related bug report as a specific security type (multiclass classification). In addition to using supervised machine learning algorithms, a novel unsupervised machine learning approach is proposed. Of the machine learning algorithms tested, Naive Bayes was the most consistent, well-performing classifier across all datasets. The novel unsupervised approach did not perform as well as the supervised methods, but still performed well, resulting in a G-Score of 0.715 in the case of best performance, whereas the supervised approach achieved a G-Score of 0.903 in the case of best performance.
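    A minimal sketch of the two-class setup described above, assuming TF-IDF features over bug report text and a Naive Bayes classifier; the tiny dataset is invented, and the G-Score is computed here as the harmonic mean of recall and (1 - false positive rate), one common definition that may differ from the thesis's:

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics import confusion_matrix

    reports = ["buffer overflow in parser", "typo in user manual",
               "null pointer dereference on login", "update build script"]
    labels = [1, 0, 1, 0]  # 1 = security related, 0 = non-security

    X = TfidfVectorizer().fit_transform(reports)
    clf = MultinomialNB().fit(X, labels)
    pred = clf.predict(X)  # a real study would use held-out data / cross-validation

    tn, fp, fn, tp = confusion_matrix(labels, pred).ravel()
    recall, fpr = tp / (tp + fn), fp / (fp + tn)
    g_score = 2 * recall * (1 - fpr) / (recall + (1 - fpr))
    print(f"G-Score = {g_score:.3f}")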

    Rapid Mission Assurance Assessment via Sociotechnical Modeling and Simulation

    How do organizations rapidly assess command-level effects of cyber attacks? Leaders need a way of assuring themselves that their organization, people, and information technology can continue their missions in a contested cyber environment. To do this, leaders should: 1) require that assessments be more than analogical, anecdotal, or simplistic snapshots in time; 2) demand the ability to rapidly model their organizations; 3) identify their organization's structural vulnerabilities; and 4) have the ability to forecast mission assurance scenarios. Using text mining to build agent-based dynamic network models of information processing organizations, I examine the impacts of contested cyber environments on three common focus areas of information assurance: confidentiality, integrity, and availability. I find that assessing the impacts of cyber attacks is a nuanced affair dependent on the nature of the attack, the nature of the organization and its missions, and the nature of the measurements. For well-manned information processing organizations, many attacks fall in the nuisance range, and only multipronged or severe attacks cause meaningful failure. I also find that such organizations can design for resiliency, and I provide guidelines on how to do so.
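    A minimal sketch of the availability side of such an assessment: model the organization as a directed information-flow network, "attack" nodes by removing them, and check whether the mission-critical flow survives. The topology and the capability check are invented for illustration:

    import networkx as nx

    # Hypothetical information processing organization.
    org = nx.DiGraph()
    org.add_edges_from([("sensor", "analyst1"), ("sensor", "analyst2"),
                        ("analyst1", "commander"), ("analyst2", "commander")])

    def mission_capable(g):
        # The mission completes if information can still flow sensor -> commander.
        return (g.has_node("sensor") and g.has_node("commander")
                and nx.has_path(g, "sensor", "commander"))

    for attacked in [[], ["analyst1"], ["analyst1", "analyst2"]]:
        degraded = org.copy()
        degraded.remove_nodes_from(attacked)
        print(f"attack on {attacked or 'nothing'}: capable={mission_capable(degraded)}")

    Run as written, the single-node attack is a nuisance (the redundant analyst keeps the mission capable) while the multipronged attack causes failure, mirroring the resiliency finding above.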

    A Bridge and Engine Room Staffing and Scheduling Model for Robust Mission Accomplishment in the Littoral Combat Ships

    The Navy’s Littoral Combat Ships were designed to be relatively small surface vessels for operations near a littoral shore theater. These ships were envisioned to be highly automated, networked, agile, stealthy surface combatants capable of defeating anti-access and asymmetric threats in the littorals with minimum manpower. To date, however, some of these ships have experienced significant engineering and propulsion plant failures that impacted mission accomplishment and were attributable, at least in part, to understaffing and overscheduling the human component of the automation-human operational environment. The critical human components on the Littoral Combat Ship are bridge and engine room staffing. Since the engineering plant has been the source of most major failures to date, this project sought to develop an engine room staffing and scheduling model for the Littoral Combat Ship class, given a stated set of minimum mission objectives when operating under normal conditions (called “Condition III Underway Steaming”, which is used as the basis for official Navy manning calculations), and to provide recommendations for improved automation-human modeling. A survey of the crews of several LCS ships was conducted, and the results were analyzed using exploratory data analysis and multiple joint correspondence analysis. Results of the survey analysis were applied to the design of a joint physical-cognitive-automation workflow analysis of critical procedures and failure modes as they map to four dimensions: fatigue, watch tasking, maintenance tasking, and the automation-human interface. Workflow analysis results were then simulated in an IMPRINT model of a typical watch period, and the results were evaluated against the four dimensions of the survey. The project validated that the four dimensions analyzed are indeed worthy of consideration in manpower models, and that IMPRINT has the potential, with a few modifications, to model joint physical-cognitive-automation workflows as an improvement over the current manpower-only models used in Navy ship design by accounting for human factors.
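    A minimal sketch of the fatigue dimension of such a model: accumulate fatigue over a notional rotation of 4-hour underway blocks and count the blocks in which a watch stander exceeds a threshold. All rates, thresholds, and rotations are invented, and this stands in for what a tool like IMPRINT models in far more detail:

    WATCH_HOURS = 4  # length of one underway block

    def overloaded_blocks(rotation, days=2, fatigue_rate=1.0,
                          recovery_rate=2.5, threshold=10.0):
        # Step through repeating 4-hour blocks, accumulating fatigue during
        # watch/maintenance and recovering during rest; count threshold breaches.
        fatigue, overloads = 0.0, 0
        for block in range(days * 24 // WATCH_HOURS):
            activity = rotation[block % len(rotation)]
            if activity == "rest":
                fatigue = max(0.0, fatigue - recovery_rate * WATCH_HOURS)
            else:
                fatigue += fatigue_rate * WATCH_HOURS
                if fatigue > threshold:
                    overloads += 1
        return overloads

    for name, rotation in [("3-section", ["watch", "maintenance", "rest"]),
                           ("port-and-starboard", ["watch", "rest", "watch", "maintenance"])]:
        print(f"{name}: {overloaded_blocks(rotation)} overloaded blocks")

    Under these invented parameters the leaner port-and-starboard rotation produces overloaded blocks while the 3-section rotation does not, illustrating why staffing level enters the scheduling model.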