1,828 research outputs found

    Benefits and Challenges of Model-based Software Engineering: Lessons Learned based on Qualitative and Quantitative Findings

    Even though Model-based Software Engineering (MBSwE) techniques and Autogenerated Code (AGC) have been increasingly used to produce complex software systems, there is only anecdotal knowledge about the state-of-the-practice. Furthermore, there is a lack of empirical studies that explore the potential quality improvements due to the use of these techniques. This paper presents in-depth qualitative findings about development and Software Assurance (SWA) practices and a detailed quantitative analysis of software bug reports of a NASA mission that used MBSwE and AGC. The mission's flight software is a combination of handwritten code and AGC developed by two different approaches: one based on state chart models (AGC-M) and another on specification dictionaries (AGC-D). The empirical analysis of fault proneness is based on 380 closed bug reports created by software developers. Our main findings include: (1) MBSwE and AGC provide some benefits, but also impose challenges. (2) SWA done only at the model level is not sufficient: the AGC should also be tested, the models and AGC should always be kept in sync, and AGC must not be changed manually. (3) Fixes made to address an individual bug report were spread both across multiple modules and across multiple files; on average, each bug report led to fixes in 1.4 modules, corresponding to 3.4 files. (4) Most bug reports led to changes in more than one type of file. The majority of changes to auto-generated source code files were made in conjunction with changes to either files with state chart models or XML files derived from dictionaries. (5) For newly developed files, AGC-M and handwritten code were of similar quality, while AGC-D files were the least fault prone.
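As an illustration of the fix-spread measure this abstract reports (modules and files fixed per bug report), here is a minimal sketch. The record format, file paths, and the convention of deriving a module from a path's top-level directory are all hypothetical assumptions, not the authors' tooling:

```python
from collections import namedtuple

# Hypothetical bug-report record: which files were touched by the fix.
BugReport = namedtuple("BugReport", ["fixed_files"])

def fix_spread(reports):
    """Average number of modules and files fixed per closed bug report."""
    total_files = sum(len(r.fixed_files) for r in reports)
    # Illustrative convention: a file's module is its top-level directory.
    total_modules = sum(
        len({f.split("/")[0] for f in r.fixed_files}) for r in reports
    )
    n = len(reports)
    return total_modules / n, total_files / n

reports = [
    BugReport(["gnc/att.c", "gnc/att.h", "cmd/dispatch.c"]),
    BugReport(["cmd/dispatch.c"]),
]
avg_modules, avg_files = fix_spread(reports)  # 1.5 modules, 2.0 files
```

On the study's real data this style of computation would yield the reported 1.4 modules and 3.4 files per bug report.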

    Quantifying usability of domain-specific languages: An empirical study on software maintenance

    A domain-specific language (DSL) aims to support software development by offering abstractions to a particular domain. It is expected that DSLs improve the maintainability of artifacts otherwise produced with general-purpose languages. However, the maintainability of the DSL artifacts and, hence, their adoption in mainstream development, is largely dependent on the usability of the language itself. Unfortunately, it is often hard to identify their usability strengths and weaknesses early, as there is no guidance on how to objectively reveal them. Usability is a multi-faceted quality characteristic, which is challenging to quantify beforehand by DSL stakeholders. There is even less support on how to quantitatively evaluate the usability of DSLs used in maintenance tasks. In this context, this paper reports a study to compare the usability of textual DSLs under the perspective of software maintenance. A usability measurement framework was developed based on the cognitive dimensions of notations. The framework was evaluated both qualitatively and quantitatively using two DSLs in the context of two evolving object-oriented systems. The results suggested that the proposed metrics were useful: (1) to early identify DSL usability limitations, (2) to reveal specific DSL features favoring maintenance tasks, and (3) to successfully analyze eight critical DSL usability dimensions. This work was funded by a CAPES PhD Scholarship and CNPq scholarship grant number 141688/2013-0 (B. Cafeo); a FAPERJ distinguished scientist grant (number E-26/102.211/2009), CNPq productivity grants (number 305526/2009-0 and 308490/2012-6), Universal project grants (number 483882/2009-7 and 485348/2011-0), and a PUC-Rio productivity grant (A. Garcia).
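To make the idea of a quantitative usability profile concrete, here is a minimal sketch. The dimension names, the 0-1 score scale, and the threshold are hypothetical illustrations of how per-dimension metric scores might be aggregated, not the paper's actual framework:

```python
import statistics

# Hypothetical per-dimension metric scores (0-1, higher is better) for one DSL,
# keyed by a subset of the cognitive dimensions of notations.
scores = {
    "closeness_of_mapping": 0.8,
    "hidden_dependencies": 0.4,
    "viscosity": 0.6,
    "error_proneness": 0.7,
}

def usability_profile(scores, threshold=0.5):
    """Overall score plus the dimensions flagged as usability limitations."""
    overall = statistics.mean(scores.values())
    weak = sorted(d for d, s in scores.items() if s < threshold)
    return overall, weak

overall, weak = usability_profile(scores)  # 0.625, ["hidden_dependencies"]
```

A profile like this supports finding (1) above: a dimension scoring below the threshold is an early-identified usability limitation.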

    Quantifying inhibitory control as externalizing proneness: A cross-domain model

    Recent mental health initiatives have called for a shift away from purely report-based conceptualizations of psychopathology toward a biobehaviorally oriented framework. The current work illustrates a measurement-oriented approach to challenges inherent in efforts to integrate biological and behavioral indicators with psychological-report variables. Specifically, we undertook to quantify the construct of inhibitory control (inhibition-disinhibition) as the individual difference dimension tapped by self-report, task-behavioral, and brain response indicators of susceptibility to disinhibitory problems (externalizing proneness). In line with prediction, measures of each type cohered to form domain-specific factors, and these factors loaded in turn onto a cross-domain inhibitory control factor reflecting the variance in common among the domain factors. Cross-domain scores predicted behavioral-performance and brain-response criterion measures as well as clinical problems (i.e., antisocial behaviors and substance abuse). Implications of this new cross-domain model for research on neurobiological mechanisms of inhibitory control and health/performance outcomes associated with this dispositional characteristic are discussed.
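The hierarchical structure described here (indicators → domain factors → one cross-domain factor) can be sketched with unit-weighted composites. The data and the use of simple means in place of fitted factor loadings are assumptions for illustration only, not the study's actual model:

```python
import statistics

# Hypothetical data: rows are participants, columns are standardized indicators,
# grouped by measurement domain (stand-in for a fitted factor model).
data = {
    "self_report": [[0.5, 0.8], [-1.2, -0.9], [0.7, 0.1]],
    "task":        [[0.3, 0.6], [-0.8, -1.1], [0.5, 0.5]],
    "brain":       [[0.2, 0.4], [-1.0, -0.7], [0.8, 0.3]],
}

def unit_weighted_scores(rows):
    """Domain 'factor' score per participant: mean of that domain's indicators."""
    return [statistics.mean(r) for r in rows]

domain_scores = {d: unit_weighted_scores(rows) for d, rows in data.items()}

# Cross-domain inhibitory-control score: mean of the three domain scores
# per participant, reflecting the variance shared across domains.
cross_domain = [statistics.mean(s) for s in zip(*domain_scores.values())]
```

In the study itself these composites would be replaced by factor scores estimated from a structural model, but the layering is the same.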

    A Fault-Based Model of Fault Localization Techniques

    Every day, ordinary people depend on software working properly. We take it for granted; from banking software, to railroad switching software, to flight control software, to software that controls medical devices such as pacemakers or even gas pumps, our lives are touched by software that we expect to work. It is well known that the main technique/activity used to ensure the quality of software is testing. Often it is the only quality assurance activity undertaken, making it that much more important. In a typical experiment studying these techniques, a researcher will intentionally seed a fault (intentionally breaking the functionality of some source code) with the hopes that the automated techniques under study will be able to identify the fault's location in the source code. These faults are picked arbitrarily; there is potential for bias in the selection of the faults. Previous researchers have established an ontology for understanding or expressing this bias called fault size. This research captures the fault size ontology in the form of a probabilistic model. The results of applying this model to measure fault size suggest that many faults generated through program mutation (the systematic replacement of source code operators to create faults) are very large and easily found. Secondary measures generated in the assessment of the model suggest a new static analysis method, called testability, for predicting the likelihood that code will contain a fault in the future. While software testing researchers are not statisticians, they nonetheless make extensive use of statistics in their experiments to assess fault localization techniques. Researchers often select their statistical techniques without justification. This is a very worrisome situation because it can lead to incorrect conclusions about the significance of research.
This research introduces an algorithm, MeansTest, which helps automate some aspects of the selection of appropriate statistical techniques. The results of an evaluation of MeansTest suggest that MeansTest performs well relative to its peers. This research then surveys recent work in software testing using MeansTest to evaluate the significance of researchers' work. The results of the survey indicate that software testing researchers are underreporting the significance of their work.
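The core idea of automating statistical-technique selection can be sketched as a decision rule: check whether the samples look roughly normal, then pick a parametric or rank-based location test accordingly. The skewness cutoff and the two candidate tests are illustrative assumptions; this is not the actual MeansTest algorithm:

```python
import statistics

def skewness(xs):
    """Adjusted sample skewness, used here as a rough normality heuristic."""
    mu, sd, n = statistics.mean(xs), statistics.stdev(xs), len(xs)
    return sum(((x - mu) / sd) ** 3 for x in xs) * n / ((n - 1) * (n - 2))

def choose_means_test(a, b, skew_cutoff=1.0):
    """Pick a two-sample location test the way a MeansTest-style selector
    might: parametric if both samples look roughly symmetric, else rank-based."""
    if abs(skewness(a)) < skew_cutoff and abs(skewness(b)) < skew_cutoff:
        return "welch_t_test"
    return "mann_whitney_u"

symmetric = [4.8, 5.1, 5.0, 4.9, 5.2, 5.0]   # e.g. suspiciousness scores
skewed = [1.0, 1.1, 1.0, 1.2, 1.1, 9.5]      # heavy outlier -> not normal-looking
```

A real selector would use a formal normality test rather than a skewness cutoff, but the structure (diagnose the data, then dispatch to the appropriate significance test) is the point being automated.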

    Software Measures for Business Processes

    Designing a business process, which is executed by a Workflow Management System, recalls the activity of writing software source code, which is executed by a computer. Different business processes may have different qualities, e.g., size, structural complexity, some of which can be measured based on the formal descriptions of the business processes. This paper defines measures for quantifying business process qualities by drawing on concepts that have been used for defining measures for software code. Specifically, the measures we propose and apply to business processes are related to attributes of activities, control-flow, data-flow, and resources. This allows the business process designer to have a comprehensive evaluation of business processes according to several different attributes.
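Two code-derived measures of the kind described can be sketched on a process modeled as an activity graph: size as the activity count (the analogue of lines of code) and a McCabe-style control-flow measure computed from the flow graph. The process, activity names, and the specific formula E - N + 2 are illustrative assumptions, not the paper's exact definitions:

```python
# Hypothetical order-handling process as an activity graph.
activities = {"receive_order", "check_stock", "reject", "ship", "invoice"}
flows = [
    ("receive_order", "check_stock"),
    ("check_stock", "reject"),   # XOR-split: two outgoing flows
    ("check_stock", "ship"),
    ("ship", "invoice"),
]

def size(acts):
    """Size measure: number of activities (analogue of lines of code)."""
    return len(acts)

def control_flow_complexity(acts, edges):
    """Cyclomatic-style measure adapted to the flow graph: E - N + 2."""
    return len(edges) - len(acts) + 2

process_size = size(activities)                                # 5
process_cfc = control_flow_complexity(activities, flows)       # 4 - 5 + 2 = 1
```

Data-flow and resource measures would be defined analogously over the data items read/written by each activity and the roles assigned to them.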