
    What to Fix? Distinguishing between design and non-design rules in automated tools

    Technical debt---design shortcuts taken to optimize for delivery speed---is a critical driver of long-term software costs. Consequently, automatically detecting technical debt is a high priority for software practitioners. Software quality tool vendors have responded to this need by positioning their tools to detect and manage technical debt. While these tools bundle a number of rules, it is hard for users to understand which rules identify design issues, as opposed to syntactic quality. This distinction matters, since previous studies have revealed that the most significant technical debt is related to design issues. Other research has compared these tools on open source projects, but without examining whether the rules were relevant to design. We conducted an empirical study using a structured categorization approach and manually classified 466 software quality rules from three industry tools---CAST, SonarQube, and NDepend. We found that most of these rules were easily labeled as either not design (55%) or design (19%); the remainder (26%) resulted in disagreements among the labelers. Our results are a first step toward formalizing a definition of a design rule, in order to support automatic detection. Comment: Long version of a short paper accepted at the International Conference on Software Architecture 2017 (Gothenburg, SE).
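
    The paper's categorization hinges on how consistently labelers classify each rule; a standard way to quantify that is chance-corrected agreement. The sketch below is a minimal, hypothetical illustration of computing Cohen's kappa over two labelers' rule classifications; it is not the authors' tooling, and the labels are invented.

```python
# Hypothetical sketch: agreement between two labelers classifying
# quality-tool rules as "design" vs "not-design". Data is invented.
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement between two labelers."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Fraction of rules on which the two labelers agree outright.
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Agreement expected by chance, from each labeler's label frequencies.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in freq_a)
    return (observed - expected) / (1 - expected)

# Toy example: two labelers judging five rules.
a = ["design", "not-design", "not-design", "design", "not-design"]
b = ["design", "not-design", "design", "design", "not-design"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 1.0 = perfect, 0.0 = chance
```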

    A Review on Software Architectures for Heterogeneous Platforms

    Demands for computing performance keep increasing, even as devices are required to be smaller and more energy-efficient. For years, the strategy adopted by industry was to strengthen a single processor by raising its clock frequency and packing in more transistors so that more calculations could be executed. However, the physical limits of such processors are being reached, and one way to meet the increasing computing demands has been to adopt heterogeneous computing, i.e., a heterogeneous platform containing more than one type of processor, so that different types of tasks can be executed by the processors specialized for them. Heterogeneous computing, however, poses a number of challenges to software engineering, especially in the architecture and deployment phases. In this paper, we conduct an empirical study that aims at discovering the state of the art in software architecture for heterogeneous computing, with a focus on deployment. Our systematic mapping study retrieved 28 studies, which were critically assessed to obtain an overview of the research field. We identified gaps and trends that both researchers and practitioners can use as guides for further investigation of the topic.
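
    To make the core idea concrete, the sketch below shows the simplest possible deployment decision on a heterogeneous platform: a mapping from task kind to the processor type specialized for it. The device names, task kinds, and mapping are assumptions for illustration, not taken from the surveyed studies.

```python
# Illustrative sketch only: routing tasks to the processor type best
# suited for them, the core idea behind heterogeneous computing.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    kind: str  # e.g. "data-parallel", "branch-heavy", "signal"

# A deployment decision can be as simple as a kind-to-device mapping.
PLACEMENT = {
    "data-parallel": "gpu",  # wide SIMD workloads
    "branch-heavy": "cpu",   # irregular control flow
    "signal": "dsp",         # streaming filters
}

def place(task: Task) -> str:
    """Pick a target device, falling back to the CPU."""
    return PLACEMENT.get(task.kind, "cpu")

for t in [Task("matmul", "data-parallel"), Task("parser", "branch-heavy")]:
    print(t.name, "->", place(t))
```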

    Mediating Cognitive Transformation with VR 3D Sketching during Conceptual Architectural Design Process

    Communication for information synchronization during the conceptual design phase requires designers to employ more intuitive digital design tools. This paper presents the findings of a feasibility study on using a VR 3D sketching interface to replace current, non-intuitive CAD tools. We used a sequential mixed-method research methodology comprising a qualitative case study and a cognitive, protocol-analysis-based quantitative experiment. First, the case study was conducted to understand how novice designers make intuitive decisions; it documented the failure of conventional sketching methods in articulating complicated design ideas and the shortcomings of current CAD tools for intuitive ideation. The case study’s findings then became the theoretical foundation for testing the feasibility of using a VR 3D sketching interface during design. The latter phase of the study evaluated the designers’ spatial cognition and collaboration at six levels: “physical-actions”, “perceptual-actions”, “functional-actions”, “conceptual-actions”, “cognitive synchronizations”, and “gestures”. The results and confirmed hypotheses showed that the tangible 3D sketching interface improved novice designers’ cognitive and collaborative design activities. In summary, this paper presents the influence of current external representation tools on designers’ cognition and collaboration, and provides the theoretical foundations needed to implement a VR 3D sketching interface. It contributes towards transforming the conceptual architectural design phase from analogue to digital by proposing a new VR design interface, a transformation intended to fill the gap between the analogue conceptual architectural design process and the digital engineering parts of the building design process, thereby expediting the digital design process.
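
    For readers unfamiliar with protocol analysis, the sketch below illustrates the mechanical part of such an experiment: tallying coded session segments against the six action levels named above. The category list follows the abstract; the segment data is invented.

```python
# Hypothetical sketch of tallying protocol-analysis codes; the six
# categories come from the study's abstract, the segments are invented.
from collections import Counter

CATEGORIES = [
    "physical-actions", "perceptual-actions", "functional-actions",
    "conceptual-actions", "cognitive-synchronizations", "gestures",
]

# Each coded segment of a design session is tagged with one category.
segments = ["physical-actions", "gestures", "perceptual-actions",
            "gestures", "conceptual-actions"]

counts = Counter(segments)
for cat in CATEGORIES:
    print(f"{cat:28s} {counts.get(cat, 0)}")
```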

    Technical Debt Prioritization: State of the Art. A Systematic Literature Review

    Background. Software companies need to manage and refactor Technical Debt issues. It is therefore necessary to understand if and when refactoring Technical Debt should be prioritized with respect to developing features or fixing bugs. Objective. The goal of this study is to investigate the existing body of knowledge in software engineering and understand which Technical Debt prioritization approaches have been proposed in research and industry. Method. We conducted a Systematic Literature Review, following a methodology consolidated in Software Engineering, over 384 unique papers published up to 2018, and included 38 primary studies. Results. Different approaches have been proposed for Technical Debt prioritization, each having different goals and optimizing on different criteria. The proposed measures capture only a small part of the plethora of factors used to prioritize Technical Debt qualitatively in practice; we report an impact map of such factors. Moreover, there is a lack of empirically validated tools. Conclusion. Technical Debt prioritization research is still preliminary, and there is no consensus on which factors are important or how to measure them. Consequently, current research cannot be considered conclusive, and we outline directions for the necessary future investigations.
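
    The review finds no consensus on prioritization factors, so any scoring formula is necessarily a local choice. As a purely hypothetical illustration (not a method proposed in the reviewed studies), the sketch below ranks debt items by a simple benefit-per-effort score built from a few commonly cited factors.

```python
# Deliberately simple, hypothetical cost/benefit ranking of Technical
# Debt items; field names and weights are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class DebtItem:
    name: str
    interest: float         # estimated recurring cost if left unpaid
    principal: float        # estimated refactoring effort
    customer_impact: float  # 0..1 weight from qualitative factors

def priority(item: DebtItem) -> float:
    """Higher score = pay back sooner (benefit per unit of effort)."""
    return (item.interest * (1 + item.customer_impact)) / item.principal

backlog = [
    DebtItem("god-class in billing", interest=8.0, principal=5.0, customer_impact=0.9),
    DebtItem("duplicated parser", interest=3.0, principal=2.0, customer_impact=0.2),
]
for item in sorted(backlog, key=priority, reverse=True):
    print(f"{priority(item):5.2f}  {item.name}")
```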

    When Should I Use Network Emulation?

    The design and development of a complex system requires an adequate methodology and efficient instrumental support in order to detect and correct, early on, anomalies in the functional and non-functional properties of the tested protocols. Among the various tools used to provide experimental support for such developments, network emulation relies on producing impairments on real traffic, in real time, according to a communication model, whether realistic or not. This paper aims to present to newcomers to network emulation (students, engineers, ...) the basic principles and practices, illustrated with a few commonly used tools; the motivation is to fill a gap in terms of introductory and pragmatic papers in this domain. The study particularly considers centralized approaches, which allow cheap and easy implementation in the context of research labs or industrial developments. In addition, an architectural model for emulation systems is proposed, defining three complementary levels: the hardware, impairment, and model levels. With the help of this architectural framework, various existing tools are situated and described. Several approaches to modeling the emulation actions are studied, such as impairment-based scenarios and virtual architectures, real-time discrete simulation, and trace-based systems. These modeling approaches are described and compared in terms of the services they provide, and we assess their ability to respond to various designer needs, so as to establish when emulation is appropriate.
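
    As a concrete example of the impairment level described above, the sketch below drives Linux tc/netem, one of the commonly used centralized tools, to delay and drop real traffic. The interface name and impairment values are assumptions, and the commands require root privileges.

```python
# Sketch of the "impairment level" of a centralized emulator, driven
# with Linux tc/netem. Interface name and values are assumptions.
import subprocess

def apply_impairments(dev: str, delay_ms: int, jitter_ms: int, loss_pct: float) -> None:
    """Install a netem qdisc that delays and drops traffic on `dev`."""
    subprocess.run(
        ["tc", "qdisc", "add", "dev", dev, "root", "netem",
         "delay", f"{delay_ms}ms", f"{jitter_ms}ms",
         "loss", f"{loss_pct}%"],
        check=True,
    )

def clear_impairments(dev: str) -> None:
    """Remove the root qdisc, restoring the unimpaired link."""
    subprocess.run(["tc", "qdisc", "del", "dev", dev, "root"], check=True)

# Example (as root): emulate a 100 ms +/- 10 ms link with 1% loss.
# apply_impairments("eth0", 100, 10, 1.0)
# clear_impairments("eth0")
```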

    Towards guidelines for building a business case and gathering evidence of software reference architectures in industry

    Background: Software reference architectures are becoming widely adopted by organizations that need to support the design and maintenance of software applications of a shared domain. For organizations that plan to adopt this architecture-centric approach, it becomes fundamental to know the return on investment and to understand how software reference architectures are designed, maintained, and used. Unfortunately, there is little evidence-based support to help organizations with these challenges. Methods: We conducted action research in an industry-academia collaboration between the GESSI research group and everis, a multinational IT consulting firm based in Spain. Results: The results of this collaboration are being packaged into guidelines that can be used in contexts similar to that of everis. The main result of this paper is a set of empirically grounded guidelines that support organizations in deciding on the adoption of software reference architectures and in gathering evidence to improve RA-related practices. Conclusions: The guidelines could also be used by organizations outside our industry-academia collaboration; with this goal in mind, we describe them in detail.

    A requirements engineering framework for integrated systems development for the construction industry

    Computer Integrated Construction (CIC) systems are computer environments through which collaborative working can be undertaken. Although many CIC systems have been developed to demonstrate communication and collaboration within construction projects, industry uptake of CIC remains inadequate, mainly because the research methodologies of CIC development projects are insufficient to bridge the technology transfer gap. Defining comprehensive methodologies for the development of these systems, and for their effective implementation on real construction projects, is therefore vital. Requirements Engineering (RE) can contribute to the effective uptake of these systems because it drives systems development toward the needs of the targeted audience. This paper proposes a requirements engineering approach for industry-driven CIC systems development. While several CIC systems are investigated to build broad and deep contextual knowledge of the area, the EU-funded research project DIVERCITY (Distributed Virtual Workspace for Enhancing Communication within the Construction Industry) is analysed as the main case study, because its requirements engineering approach has the potential to define a framework for adapting requirements engineering so as to contribute to the uptake of CIC systems.

    Racing to hardware-validated simulation

    Processor simulators rely on detailed timing models of the processor pipeline to evaluate performance. The diversity of real-world processor designs mandates building flexible simulators that expose parts of the underlying model to the user in the form of configurable parameters. Consequently, the accuracy of modeling a real processor relies both on the accuracy of the pipeline model itself and on the accuracy of adjusting the configuration parameters to the modeled processor. Unfortunately, processor vendors publicly disclose only a subset of their design decisions, raising the probability of introducing specification inaccuracies when modeling these processors. Inaccurately tuned model parameters cause the simulated processor to deviate from the actual one; in the worst case, improper parameters may lead to imbalanced pipeline models that compromise the simulation output. Therefore, simulation models should be validated against hardware before being used for performance evaluation. As processors increase in complexity and diversity, validating a simulator model against real hardware becomes ever more challenging and time-consuming. In this work, we propose a methodology for validating simulation models against real hardware. We create a framework that relies on micro-benchmarks to collect performance statistics on real hardware, and on machine-learning-based algorithms to fine-tune the unknown parameters based on the accumulated statistics. We overhaul the Sniper simulator to support the ARM AArch64 instruction-set architecture (ISA) and introduce two new timing models for ARM-based in-order and out-of-order cores. Using our proposed validation framework, we tune the in-order and out-of-order models to match the performance of real-world implementations of the Cortex-A53 and Cortex-A72 cores with average errors of 7% and 15%, respectively, across a set of SPEC CPU2017 benchmarks.
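
    The core of the proposed validation loop is a search over unknown simulator parameters that minimizes error against hardware-measured statistics. The sketch below illustrates that loop with a stub simulator and a crude random search; the parameter names, ranges, and CPI numbers are invented, and the actual framework drives Sniper with machine-learning-based tuning rather than this toy.

```python
# Hypothetical sketch of the validation loop's core idea: search for
# simulator parameters that minimize error against hardware-measured
# micro-benchmark statistics. All names and numbers are invented.
import random

HW_CPI = {"dep-chain": 3.0, "load-loop": 4.5}  # "measured" on real core

def simulate_cpi(bench: str, params: dict) -> float:
    """Stand-in for running a micro-benchmark in the simulator."""
    base = {"dep-chain": 1.0, "load-loop": 1.5}[bench]
    return base * params["issue_penalty"] + params["l1_latency"] * 0.5

def error(params: dict) -> float:
    """Mean relative error of simulated vs. measured CPI."""
    return sum(abs(simulate_cpi(b, params) - cpi) / cpi
               for b, cpi in HW_CPI.items()) / len(HW_CPI)

best, best_err = None, float("inf")
random.seed(0)
for _ in range(1000):  # crude random search; the paper uses ML-based tuning
    cand = {"issue_penalty": random.uniform(0.5, 4.0),
            "l1_latency": random.uniform(1.0, 6.0)}
    e = error(cand)
    if e < best_err:
        best, best_err = cand, e

print(best, f"avg error {best_err:.1%}")
```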