
    A Life Cycle Software Quality Model Using Bayesian Belief Networks

    Software practitioners lack a consistent approach to assessing and predicting quality within their products. This research proposes a software quality model that accounts for the influences of development team skill/experience, process maturity, and problem complexity throughout the software engineering life cycle. The model is structured using Bayesian Belief Networks and, unlike previous efforts, uses widely accepted software engineering standards and in-use industry techniques to quantify the indicators and measures of software quality. Data from 28 software engineering projects were acquired for this study and used to validate and compare the presented software quality models. Three Bayesian model structures are explored, and the structure with the highest performance in terms of accuracy of fit and predictive validity is reported. In addition, the Bayesian Belief Networks are compared to both Least Squares Regression and Neural Networks in order to identify the technique best suited to modeling software product quality. The results indicate that Bayesian Belief Networks outperform both Least Squares Regression and Neural Networks in producing modeled software quality variables that fit the distribution of actual software quality values, and in accurately forecasting 25 different indicators of software quality. Among the Bayesian model structures, the simplest structure, which relates software quality variables to their correlated causal factors, was found to be the most effective in modeling software quality. Finally, the results reveal that the collective skill and experience of the development team, more than process maturity or problem complexity, has the most significant impact on the quality of software products.
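
    To make the modeling idea concrete, the following minimal Python sketch infers software quality from the three causal factors named in the abstract using a tiny Bayesian Belief Network evaluated by enumeration. The network shape, prior probabilities, and conditional probability table are illustrative assumptions, not the values fitted in the study.

    from itertools import product

    # Illustrative priors P(factor = high); the values are assumptions.
    priors = {"team_skill": 0.6, "process_maturity": 0.5, "problem_complexity": 0.4}

    # Illustrative CPT: P(quality = high | skill, maturity, complexity),
    # keyed by (skill_high, maturity_high, complexity_high).
    cpt_quality = {
        (True, True, True): 0.70,   (True, True, False): 0.90,
        (True, False, True): 0.55,  (True, False, False): 0.80,
        (False, True, True): 0.35,  (False, True, False): 0.60,
        (False, False, True): 0.15, (False, False, False): 0.40,
    }

    def posterior_quality(evidence):
        """P(quality = high | evidence), enumerating unobserved factors."""
        num = den = 0.0
        factors = list(priors)
        for assignment in product([True, False], repeat=len(factors)):
            state = dict(zip(factors, assignment))
            if any(state[f] != v for f, v in evidence.items()):
                continue
            p = 1.0  # joint prior probability of this factor configuration
            for f, high in state.items():
                p *= priors[f] if high else 1.0 - priors[f]
            key = tuple(state[f] for f in factors)
            num += p * cpt_quality[key]
            den += p
        return num / den

    # Skilled team on a complex problem, process maturity unobserved.
    print(posterior_quality({"team_skill": True, "problem_complexity": True}))  # -> 0.625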

    Evaluating testing methods by delivered reliability

    There are two main goals in testing software: (1) to achieve adequate quality (debug testing), where the objective is to probe the software for defects so that these can be removed, and (2) to assess existing quality (operational testing), where the objective is to gain confidence that the software is reliable. Debug methods tend to ignore random selection of test data from an operational profile, while for operational methods this selection is all-important. Debug methods are thought to be good at uncovering defects so that these can be repaired, but having done so they do not provide a technically defensible assessment of the resulting reliability. Operational methods, on the other hand, provide accurate assessment but may be less effective at achieving reliability. This paper examines the relationship between the two testing goals using a probabilistic analysis. We define simple models of programs and their testing, and try to answer the question of how best to attain program reliability: is it better to test by probing for defects, as in debug testing, or to assess reliability directly, as in operational testing? Testing methods are compared in a model where program failures are detected and the software is changed to eliminate them. The “better” method delivers higher reliability after all test failures have been eliminated. Special cases are exhibited in which each kind of testing is superior. An analysis of the distribution of the delivered reliability indicates that even simple models have unusual statistical properties, suggesting caution in interpreting theoretical comparisons.
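
    The debug-versus-operational comparison can be illustrated with a toy Monte Carlo experiment: both methods spend the same test budget, every detected defect is removed, and delivered reliability is what remains under the operational profile. All quantities below (defect rates, detection power, budgets) are assumptions for illustration, not the paper's model.

    import random

    random.seed(1)

    def simulate(num_defects=10, tests=50, trials=2_000):
        """Toy comparison of delivered reliability after a fixed test budget."""
        deliv_debug = deliv_oper = 0.0
        for _ in range(trials):
            # Each defect has an operational failure rate; the range is an assumption.
            rates = [random.uniform(0.0, 0.02) for _ in range(num_defects)]
            # Debug testing: every defect is equally likely to be probed.
            found_debug = set()
            for _ in range(tests):
                d = random.randrange(num_defects)
                if random.random() < 0.5:  # assumed per-probe detection power
                    found_debug.add(d)
            # Operational testing: defects surface in proportion to their rates.
            found_oper = set()
            for _ in range(tests):
                for d, r in enumerate(rates):
                    if random.random() < r:
                        found_oper.add(d)
            # Delivered reliability: 1 minus the failure rate still in the program.
            deliv_debug += 1.0 - sum(r for d, r in enumerate(rates) if d not in found_debug)
            deliv_oper += 1.0 - sum(r for d, r in enumerate(rates) if d not in found_oper)
        return deliv_debug / trials, deliv_oper / trials

    print(simulate())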

    Space station advanced automation

    In the development of a safe, productive, and maintainable space station, Automation and Robotics (A and R) has been identified as an enabling technology that will allow efficient operation at a reasonable cost. The Space Station Freedom's (SSF) systems are very complex and interdependent. The use of Advanced Automation (AA) will help restructure and integrate system status so that station and ground personnel can operate more efficiently. Using AA technology to augment system management functions requires a development model consisting of well-defined phases: evaluation, development, integration, and maintenance. The evaluation phase weighs system management functions against traditional solutions, implementation techniques, and requirements; the end result of this phase should be a well-developed concept along with a feasibility analysis. In the development phase, the AA system is developed in accordance with a traditional Life Cycle Model (LCM) modified for Knowledge Based System (KBS) applications. A way by which both knowledge bases and reasoning techniques can be reused to control costs is explained. During the integration phase, the KBS software must be integrated with conventional software, then verified and validated. The Verification and Validation (V and V) techniques applicable to these KBS are based on the ideas of consistency, minimal competency, and graph theory. The maintenance phase is aided by having well-designed and documented KBS software.
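
    As one concrete reading of the graph-theory idea for KBS verification, the sketch below checks a rule base for circular dependencies, a classic consistency defect, using depth-first search. The rule base and fact names are invented for illustration and are not drawn from the SSF systems.

    # Hypothetical rule base: each conclusion maps to the facts it depends on.
    rules = {
        "thermal_alarm":  ["sensor_fault", "temp_high"],
        "sensor_fault":   ["self_test_fail"],
        "temp_high":      ["thermal_alarm"],  # circular dependency: an inconsistency
        "self_test_fail": [],
    }

    def find_cycle(rules):
        """Depth-first search for a circular chain of rule dependencies."""
        WHITE, GREY, BLACK = 0, 1, 2
        colour = {node: WHITE for node in rules}
        def visit(node, path):
            colour[node] = GREY
            for dep in rules.get(node, []):
                if colour.get(dep, WHITE) == GREY:
                    return path + [node, dep]  # closed the loop
                if colour.get(dep, WHITE) == WHITE:
                    cycle = visit(dep, path + [node])
                    if cycle:
                        return cycle
            colour[node] = BLACK
            return None
        for node in rules:
            if colour[node] == WHITE:
                cycle = visit(node, [])
                if cycle:
                    return cycle
        return None

    print(find_cycle(rules))  # -> ['thermal_alarm', 'temp_high', 'thermal_alarm']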

    Traffic Alert and Collision Avoidance System (TCAS): Cockpit Display of Traffic Information (CDTI) investigation. Phase 1: Feasibility study

    The possibility that the Traffic Alert and Collision Avoidance System (TCAS) traffic sensor and display could be used for meaningful Cockpit Display of Traffic Information (CDTI) applications has led the Federal Aviation Administration to initiate a project to establish the technical and operational requirements needed to realize this potential. Phase 1 of the project is presented here. Phase 1 was organized to define specific CDTI applications for the terminal area, to determine what has already been learned about CDTI technology relevant to these applications, and to define the engineering required to supply the remaining TCAS-CDTI technology needed to realize capacity benefits. The CDTI applications examined have been limited to those appropriate to the final approach and departure phases of flight.

    An Augmented Interaction Strategy for Designing Human-Machine Interfaces for Hydraulic Excavators

    Lack of adequate information feedback and work visibility, and fatigue due to repetition, have been identified as the major usability gaps in the human-machine interface (HMI) design of modern hydraulic excavators; these gaps subject operators to undue mental and physical workload and result in poor performance. To address them, this work proposed an innovative interaction strategy, termed “augmented interaction”, for enhancing the usability of the hydraulic excavator. Augmented interaction involves the embodiment of heads-up display and coordinated control schemes in an efficient, effective, and safe HMI. Augmented interaction was demonstrated using a framework consisting of three phases: Design, Implementation/Visualization, and Evaluation (D.IV.E). Guided by this framework, two alternative HMI design concepts (Design A: heads-up display and coordinated control; Design B: heads-up display and joystick controls), in addition to the existing HMI design (Design C: monitor display and joystick controls), were prototyped. A mixed reality seating buck simulator, named the Hydraulic Excavator Augmented Reality Simulator (H.E.A.R.S), was used to implement the designs and simulate a work environment along with a rock excavation task scenario. A usability evaluation was conducted with twenty participants to characterize the impact of the new HMI types using quantitative metrics (task completion time, TCT; and operating error, OER) and qualitative metrics (subjective workload and user preference). The results indicated that participants had a shorter TCT with Design A. For OER, there was a lower error probability due to collisions (PER1) with Design A, and a lower error probability due to misses (PER2) with Design B. The subjective measures showed a lower overall workload and a higher preference for Design B. It was concluded that augmented interaction provides a viable solution for enhancing the usability of the HMI of a hydraulic excavator.
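
    For illustration, the error metrics can be read as simple frequencies: each PER is the count of errors of that type divided by the number of opportunities. The sketch below computes mean TCT, PER1, and PER2 from trial logs; the numbers and log format are assumptions, not the study's data.

    from statistics import mean

    # Hypothetical trial logs per design: task completion time in seconds,
    # plus collision and miss counts over attempted bucket passes.
    trials = {
        "Design A": [{"tct": 118, "collisions": 1, "misses": 3, "passes": 20},
                     {"tct": 104, "collisions": 0, "misses": 2, "passes": 20}],
        "Design B": [{"tct": 131, "collisions": 2, "misses": 1, "passes": 20},
                     {"tct": 126, "collisions": 3, "misses": 1, "passes": 20}],
    }

    for design, logs in trials.items():
        tct = mean(t["tct"] for t in logs)
        per1 = sum(t["collisions"] for t in logs) / sum(t["passes"] for t in logs)
        per2 = sum(t["misses"] for t in logs) / sum(t["passes"] for t in logs)
        print(f"{design}: mean TCT {tct:.0f}s, PER1 {per1:.3f}, PER2 {per2:.3f}")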

    Harmonization of IEEE 1012 and IEC 60880 standards regarding verification and validation of nuclear power plant safety systems software using model-based methodology

    This paper compares two standards, IEC 60880 and IEEE 1012, and defines a harmonized core between them with regard to their verification and validation processes for nuclear power plant instrumentation and control safety system software. The problem of harmonizing standards requires a transparent representation of each standard in order to make comparison possible. A model-based methodology using SysML is used to establish this transparency. Transformation rules are a crucial part of the methodology: they enable the natural language used in a standard to be translated into structural and behavioural models in SysML. Because natural language is highly ambiguous, certainty definition rules for objects and operations are established as well. The result is a rigorously developed harmonized core that is traceable to the parent standards. The core developed using our methodology supports the argument that there is no one-to-one mapping between major IEEE and IEC standards. Nevertheless, some intersections between them do exist, which supports the opinion of other experts. The extent of the harmonization depends on the conformance or traceability. The methodology also offers promise for addressing the challenge of establishing a harmonized core and formal transferability between future standards.
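
    A minimal sketch of the harmonization idea: represent each standard's verification and validation activities as clause-keyed entries, record judged equivalences, and keep the harmonized core traceable to both parents. All clause numbers and activity names below are hypothetical placeholders, not quotations from IEEE 1012 or IEC 60880.

    # Hypothetical V&V activities extracted from each standard, keyed by clause.
    ieee_1012 = {"5.4.2": "source code verification", "5.4.3": "integration test",
                 "5.5.1": "acceptance test"}
    iec_60880 = {"8.1": "code verification", "8.2.2": "integration testing",
                 "10.3": "periodic re-validation"}

    # Manually judged equivalences between clauses (no one-to-one mapping overall).
    equivalences = [("5.4.2", "8.1"), ("5.4.3", "8.2.2")]

    # The harmonized core keeps each shared activity traceable to both parents.
    core = [{"ieee": i, "iec": j, "activity": ieee_1012[i]} for i, j in equivalences]
    for item in core:
        print(f"{item['activity']}: IEEE 1012 §{item['ieee']} ↔ IEC 60880 §{item['iec']}")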

    Factor validation and Rasch analysis of the Individual Recovery Outcomes Counter

    Objective: The Individual Recovery Outcomes Counter is a 12-item personal recovery self-assessment tool for adults with mental health problems. Although widely used across Scotland, limited research into its psychometric properties has been conducted. We tested its measurement properties to ascertain the suitability of the tool for continued use in its present form.
    Materials and methods: Anonymised data from the assessments of 1,743 adults using mental health services in Scotland were subject to tests based on principles of Rasch measurement theory, principal components analysis, and confirmatory factor analysis.
    Results: Rasch analysis revealed that the 6-point response structure of the Individual Recovery Outcomes Counter was problematic. Re-scoring on a 4-point scale revealed well-ordered items that measure a single, recovery-related construct, with acceptable fit statistics. Confirmatory factor analysis supported this. Scale items covered around 75% of the recovery continuum; individuals least far along the continuum were least well addressed.
    Conclusions: A modified tool worked well for many, but not all, service users. The study suggests specific developments are required if the Individual Recovery Outcomes Counter is to maximise its utility for service users and provide meaningful data for service providers.
    Implications for Rehabilitation: Agencies and services working with people with mental health problems aim to help them with their recovery. The Individual Recovery Outcomes Counter has been developed and is used widely in Scotland to help service users track their progress to recovery. Using a large sample of routinely collected data, we have demonstrated that a number of modifications are needed if the tool is to adequately measure recovery. This will involve consideration of the scoring system, item content and inclusion, and the theoretical basis of the tool.
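
    The re-scoring step can be sketched as a simple collapse of adjacent response categories. Which categories merge is an assumption here; in practice the mapping would be chosen from the Rasch category statistics.

    # Illustrative collapse of a 6-point response scale (0-5) onto 4 points (0-3);
    # which adjacent categories merge is an assumption, guided by category misfit.
    collapse = {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}

    def rescore(responses):
        """Re-score one respondent's 12 item ratings onto the collapsed scale."""
        return [collapse[r] for r in responses]

    print(rescore([5, 4, 4, 3, 2, 5, 1, 0, 3, 4, 2, 5]))
    # -> [3, 2, 2, 2, 1, 3, 1, 0, 2, 2, 1, 3]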