61 research outputs found

    Do software models based on the UML aid in source-code comprehensibility? Aggregating evidence from 12 controlled experiments

    In this paper, we present the results of long-term research conducted in order to study the contribution made by software models based on the Unified Modeling Language (UML) to the comprehensibility of Java source code deprived of comments. We have conducted 12 controlled experiments in different experimental contexts and at different sites, with participants with different levels of expertise (i.e., Bachelor’s, Master’s, and PhD students and software practitioners from Italy and Spain). A total of 333 observations were obtained from these experiments. The UML models in our experiments were those produced in the analysis and design phases. The models produced in the analysis phase were created with the objective of abstracting the environment in which the software will work (i.e., the problem domain), while those produced in the design phase were created with the goal of abstracting implementation aspects of the software (i.e., the solution/application domain). Source-code comprehensibility was assessed with regard to correctness of understanding, time taken to accomplish the comprehension tasks, and efficiency in accomplishing those tasks. In order to study the global effect of UML models on source-code comprehensibility, we aggregated the results from the individual experiments using a meta-analysis, making every effort to account for the heterogeneity of the experiments. The overall results suggest that the use of UML models affects the comprehensibility of source code when it is deprived of comments. Indeed, models produced in the analysis phase might reduce source-code comprehensibility while increasing the time taken to complete comprehension tasks. That is, browsing source code together with this kind of model negatively impacts the time taken to complete comprehension tasks without having a positive effect on the comprehensibility of the source code.
One plausible justification for this is that the UML models produced in the analysis phase focus on the problem domain. That is, models produced in the analysis phase say nothing about the source code, and there should be no expectation that they would, in any way, be beneficial to its comprehensibility. On the other hand, UML models produced in the design phase improve source-code comprehensibility. One possible justification for this result is that models produced in the design phase are more focused on implementation details. Therefore, although the participants had more material to read and browse, this additional effort was paid back in the form of an improved comprehension of the source code.
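The abstract does not name the pooling model used in the meta-analysis, but a common way to aggregate effect sizes from heterogeneous experiments is a random-effects model with the DerSimonian–Laird estimator. The sketch below is illustrative only; the effect-size data shown are hypothetical, not the study's results.

```python
import math

def dersimonian_laird(effects, variances):
    """Pool per-experiment effect sizes with a random-effects model.

    effects   -- effect-size estimates (e.g. Hedges' g), one per experiment
    variances -- the corresponding within-study variances
    """
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    fixed = sum(wi * ei for wi, ei in zip(w, effects)) / sum(w)
    # Cochran's Q quantifies between-study heterogeneity
    q = sum(wi * (ei - fixed) ** 2 for wi, ei in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                          # between-study variance
    w_star = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    pooled = sum(wi * ei for wi, ei in zip(w_star, effects)) / sum(w_star)
    se = math.sqrt(1.0 / sum(w_star))                      # standard error of pooled effect
    return pooled, se, tau2

# hypothetical data: three experiments with their effect sizes and variances
pooled, se, tau2 = dersimonian_laird([0.5, 0.3, 0.7], [0.04, 0.05, 0.06])
```

When the heterogeneity statistic Q does not exceed its degrees of freedom, the between-study variance estimate is truncated to zero and the model collapses to a fixed-effect analysis, which is why accounting for heterogeneity across sites and expertise levels matters before pooling.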

    Catching up with Method and Process Practice: An Industry-Informed Baseline for Researchers

    Software development methods are usually not applied by the book. Companies are under pressure to continuously deploy software products that meet market needs and stakeholders' requests. To implement efficient and effective development processes, companies utilize multiple frameworks, methods, and practices, and combine these into hybrid methods. A common combination contains a rich management framework to organize and steer projects, complemented with a number of smaller practices providing the development teams with tools to complete their tasks. In this paper, based on 732 data points collected through an international survey, we study software development process use in practice. Our results show that 76.8% of the companies implement hybrid methods. Company size, as well as the strategy in devising and evolving hybrid methods, affects the suitability of the chosen process for reaching company or project goals. Our findings show that companies that combine planned improvement programs with process evolution can increase their process's suitability by up to 5%.

    Defining and Validating Metrics for Assessing the Maintainability of Entity-Relationship Diagrams

    Database and data model evolution is a significant problem in the highly dynamic business environment that we experience these days. To support the rapidly changing data requirements of agile companies, conceptual data models, which constitute the foundation of database design, should be sufficiently flexible to be able to incorporate changes easily and smoothly. In order to understand what factors drive the maintainability of conceptual data models and to improve conceptual modelling processes, we need to be able to assess conceptual data model properties and qualities in an objective and cost-efficient manner. The scarcity of early available and thoroughly validated maintainability measurement instruments motivated us to define a set of metrics for Entity-Relationship (ER) diagrams, which are a relevant graphical formalism of the conceptual data modelling method. In this paper we show that these objective and easily calculated metrics, measuring internal properties of ER diagrams related to their structural complexity, can be used as indirect measures (hereafter called indicators) of the maintainability of the diagrams. These metrics may replace more expensive, subjective, and hence potentially unreliable maintainability measurement instruments that are based on expert judgement. Moreover, these metrics are an alternative to direct measurements that can only be obtained during the actual process of data model maintenance. Another result is that the validation of the metrics as early maintainability indicators opens up the way for an in-depth study of structural complexity as a major determinant of conceptual data model maintainability. Apart from the definition of a metrics suite, a contribution of this study is the methodological approach that was followed to theoretically validate the proposed metrics as structural complexity measures and to empirically validate them as maintainability indicators.
This approach is based both on Measurement Theory and on an experimental research methodology, stemming mainly from current research in the field of empirical software engineering. In the paper we specifically emphasize the need to conduct a family of related experiments, improving and confirming each other, to produce relevant, empirically supported knowledge on the validity and usefulness of the metrics.
    Keywords: conceptual data model, entity-relationship diagrams, model evolution, model quality, maintainability, structural complexity, metrics, prediction, theoretical validation, empirical validation, experimentation
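The abstract does not list the concrete metric suite, but objective structural-complexity counts over an ER diagram are typically simple size measures. The sketch below uses hypothetical metric names (NE, NA, NR) and a minimal diagram representation to show how such indicators can be computed mechanically, without expert judgement.

```python
from dataclasses import dataclass, field

@dataclass
class ERDiagram:
    """Minimal, hypothetical representation of an ER diagram."""
    entities: set = field(default_factory=set)
    attributes: dict = field(default_factory=dict)      # entity -> set of attribute names
    relationships: list = field(default_factory=list)   # (entity_a, entity_b) pairs

def structural_complexity(d: ERDiagram) -> dict:
    """Objective counts over an ER diagram, usable as maintainability indicators."""
    ne = len(d.entities)                                # NE: number of entities
    na = sum(len(a) for a in d.attributes.values())     # NA: total number of attributes
    nr = len(d.relationships)                           # NR: number of relationships
    return {"NE": ne, "NA": na, "NR": nr,
            "RvsE": nr / ne if ne else 0.0}             # relationships per entity

d = ERDiagram(entities={"Customer", "Order"},
              attributes={"Customer": {"id", "name"}, "Order": {"id", "date"}},
              relationships=[("Customer", "Order")])
metrics = structural_complexity(d)   # {"NE": 2, "NA": 4, "NR": 1, "RvsE": 0.5}
```

Because each count is derived purely from the diagram's structure, it can be collected early in design and correlated against maintenance effort in controlled experiments, which is the validation strategy the paper describes.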

    Endoscopic picture in 12 children with tracheomalacia


    On the Impact of UML Analysis Models on Source Code Comprehensibility and Modifiability

    We carried out a family of experiments to investigate whether the use of UML models produced in the requirements analysis process helps in the comprehensibility and modifiability of source code. The family consists of a controlled experiment and three external replications, carried out with students and professionals from Italy and Spain. 86 participants with different abilities and levels of experience with the UML took part. The results of the experiments were integrated through the use of a meta-analysis. The results of both the individual experiments and the meta-analysis indicate that UML models produced in the requirements analysis process influence neither the comprehensibility of source code nor its modifiability.

    A Comprehensive Framework for Conceptual Modeling Quality

    The goal of any modeling activity is a complete and accurate understanding of the real world domain, within the bounds of the problem at hand and keeping in mind the goals of the actors involved. High quality representations are critical to that understanding. This paper proposes a comprehensive Conceptual Modeling Quality Framework, bringing together two well-known quality frameworks: the framework of Lindland, Sindre, and Sølvberg (LSS) and that of Wand and Weber based on Bunge’s ontology (BWW). This framework builds upon the strengths of the LSS and BWW frameworks, bringing together and organizing the various quality cornerstones and then defining the many quality dimensions that connect one to another. It presents a unified view of Conceptual Modeling Quality that can benefit both researchers and practitioners.

    Towards Improving the Navigability of Web Applications: A Model-Driven Approach

    Navigability, defined as the efficiency, effectiveness and satisfaction with which a user navigates through the system in order to fulfil her goals under specific conditions, has a definite impact on the overall success of Web applications. This quality attribute can be measured based on the navigational model provided by Web Engineering methodologies. Most of the measures currently defined for navigational models are tightly coupled with particular Web Engineering methodologies, however. Furthermore, modifications to the design of the navigational model, carried out with the aim of improving navigability, are performed manually. Both practices have seriously hampered the reusability and adoption of proposed navigability measures and improvement techniques. In this paper we present a Model-Driven Engineering approach to solving these problems. On the one hand, we propose a generic approach which defines navigability measurement models that can be integrated into any Web Engineering methodology. On the other hand, we present a model-driven improvement process for the navigational model design which incurs no increase in costs or in time-to-market of Web applications. This process is divided into two phases: evaluation (i.e. assessment of the model through objective measures) and evolution (i.e. transformation of the model when the measurement results do not accomplish certain quality decision criteria that have been defined previously).
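The two-phase evaluation/evolution process described above can be sketched as a loop that measures the navigational model, checks it against a quality decision criterion, and transforms it only when the criterion is not met. The measure, threshold, and transformation below are placeholders for illustration, not the concrete ones the paper defines.

```python
def improve_navigation(model, measure, threshold, transform, max_rounds=10):
    """Repeatedly evaluate a navigational model and transform it while the
    measured navigability falls short of the quality decision criterion."""
    for _ in range(max_rounds):
        score = measure(model)       # evaluation phase: objective measurement
        if score >= threshold:       # quality decision criterion satisfied
            return model, score
        model = transform(model)     # evolution phase: model transformation
    return model, measure(model)

# toy usage: the "model" is just the maximum click depth to reach any page,
# navigability is taken as 1/depth, and each transformation adds shortcuts
# that cut the depth by one; depth shrinks until 1/depth >= 0.5, i.e. depth == 2
model, score = improve_navigation(
    model=6,
    measure=lambda depth: 1.0 / depth,
    threshold=0.5,
    transform=lambda depth: depth - 1,
)
```

Separating the measurement function from the transformation is what makes the process methodology-independent: any Web Engineering notation can plug in its own measure and model transformation without changing the loop.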