
    VISUAL PPINOT: A Graphical Notation for Process Performance Indicators

    Process performance indicators (PPIs) allow the quantitative evaluation of business processes, providing essential information for decision making. In current practice, business processes and PPIs are usually modelled separately, using graphical notations for the former and natural language for the latter. This approach makes PPI definitions easy to read and write, but it hinders maintaining consistency between business processes and PPIs. It also requires PPIs to be manually translated into lower-level implementation languages for their operationalisation, a time-consuming and error-prone task because of the ambiguities inherent in natural-language definitions. In this article, Visual PPINOT, a graphical notation for defining PPIs together with business process models, is presented. Its underlying formal metamodel allows the automated processing of PPIs. Furthermore, it improves on current state-of-the-art proposals in terms of expressiveness and by providing an explicit visualisation of the link between PPIs and business processes, which avoids inconsistencies and promotes their co-evolution. The reference implementation, developed as a complete tool suite, has allowed its validation in a multiple-case study, in which five dimensions of Visual PPINOT were studied: expressiveness, precision, automation, understandability, and traceability.
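    As a rough illustration of what a machine-processable PPI metamodel enables, the following sketch (an editor's illustration, not taken from the article; all class and attribute names are hypothetical and do not come from Visual PPINOT) shows how a PPI could be expressed as structured data linked to concrete elements of a process model:

        # Hypothetical, simplified sketch of a PPI metamodel.
        from dataclasses import dataclass

        @dataclass
        class ProcessElement:
            element_id: str          # id of a task or event in the process model
            name: str

        @dataclass
        class TimeMeasure:
            start: ProcessElement    # measurement starts when this element begins
            end: ProcessElement      # measurement ends when this element completes

        @dataclass
        class PPI:
            name: str
            measure: TimeMeasure
            target: str              # e.g. "<= 2 working days"
            scope: str               # e.g. "instances from the last quarter"

        # Example: average resolution time of an incident-handling process
        register = ProcessElement("t1", "Register incident")
        close = ProcessElement("t7", "Close incident")
        resolution_time = PPI(
            name="Average resolution time",
            measure=TimeMeasure(start=register, end=close),
            target="<= 2 working days",
            scope="last quarter",
        )

    Because the PPI references process elements explicitly rather than describing them in free text, tools can check consistency and generate the monitoring logic automatically.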

    Applying surveys and interviews in software test tool evaluation

    Despite the multitude of available software testing tools, the literature lists the lack of suitable tools and their costs as problems for tool adoption. We conducted a case study to analyze how a group of practitioners familiar with Robot Framework (an open-source, generic test automation framework) evaluate the tool. We based the case and the unit of analysis on our academia-industry relations, i.e., availability. We used a survey (n = 68) and interviews (n = 6) with convenience sampling to develop a comprehensive view of the phenomenon. The study reveals the importance of understanding the interconnection of different evaluation criteria and the strong influence of context on them. Our results show that unconfirmed or unfocused opinions about criteria, e.g., about Costs or Programming Skills, can lead to misinterpretations or hamper strategic decisions if the required technical competence is overlooked. We conclude that surveys can serve as a useful instrument for collecting empirical knowledge about tool evaluation, but experiential reasoning collected with a complementary method is required to develop a comprehensive understanding of it.

    Missing data imputation techniques for software effort estimation: a study of recent issues and challenges

    Software effort estimation is one of the critical aspects of software engineering. It revolves around predicting the effort needed to complete a software task. Any estimation technique or model, however, relies on input data from which it predicts future values. Missing data and values are a common occurrence in the software development industry, and they lead to inaccurate predictions or misleading results. Missing data is therefore an important aspect of effort estimation models that needs to be addressed, yet its treatment is not without gaps and issues. This review elaborates on the recent issues and gaps in the field of missing data and software effort estimation, so that future researchers can gain a better grasp of the inner workings of missing-data handling and of the methods through which these challenges can be addressed.
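    To make the problem concrete, the sketch below (an editor's illustration, not from the review; the project features are invented) applies two widely used imputation techniques, mean imputation and k-nearest-neighbours imputation, to a tiny effort-estimation dataset using scikit-learn:

        # Illustrative imputation of missing values in an invented effort-estimation dataset.
        import numpy as np
        from sklearn.impute import SimpleImputer, KNNImputer

        # Rows: projects; columns: team_size, function_points, effort_person_months
        X = np.array([
            [4.0,    120.0,  10.0],
            [6.0,    np.nan, 14.0],   # missing function points
            [np.nan, 300.0,  25.0],   # missing team size
            [8.0,    280.0,  np.nan], # missing effort
        ])

        # Mean imputation: replace each missing value with the column mean.
        X_mean = SimpleImputer(strategy="mean").fit_transform(X)

        # kNN imputation: fill missing values from the k most similar projects.
        X_knn = KNNImputer(n_neighbors=2).fit_transform(X)

        print(X_mean)
        print(X_knn)

    The choice of technique matters: mean imputation ignores relationships between features, whereas kNN imputation exploits similarity between projects, which is one source of the differing results reported in the literature.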

    SEON: A pyramid of ontologies for software evolution and its applications

    The Semantic Web provides a standardized, well-established framework to define and work with ontologies. It is especially apt for machine processing. However, researchers in the field of software evolution have not really taken advantage of that so far. In this paper, we address the potential of representing software evolution knowledge with ontologies and Semantic Web technology, such as Linked Data and automated reasoning. We present SEON, a pyramid of ontologies for software evolution, which describes stakeholders, their activities, artifacts they create, and the relations among all of them. We show the use of evolution-specific ontologies for establishing a shared taxonomy of software analysis services, for defining extensible meta-models, for explicitly describing relationships among artifacts, and for linking data such as code structures, issues (change requests), bugs, and basically any changes made to a system over time. For validation, we discuss three different approaches, which are backed by SEON and enable semantically enriched software evolution analysis. These techniques have been fully implemented as tools and cover software analysis with web services, a natural language query interface for developers, and large-scale software visualization.
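    A minimal sketch of the underlying idea (an editor's illustration using rdflib; the namespace and property names below are placeholders, not SEON's actual vocabulary): evolution facts such as "this commit modifies this class and fixes this issue" become RDF triples that can then be queried with SPARQL:

        # Toy Linked Data example: software evolution facts as RDF triples,
        # queried with SPARQL. The vocabulary is a placeholder, not SEON's.
        from rdflib import Graph, Namespace, Literal, RDF

        EX = Namespace("http://example.org/evolution#")
        g = Graph()
        g.bind("ex", EX)

        # A commit that touches a class and fixes an issue.
        g.add((EX.commit42, RDF.type, EX.Commit))
        g.add((EX.commit42, EX.author, Literal("alice")))
        g.add((EX.commit42, EX.modifies, EX.PaymentService))
        g.add((EX.commit42, EX.fixes, EX.issue17))
        g.add((EX.issue17, RDF.type, EX.Issue))
        g.add((EX.issue17, EX.title, Literal("Payment rounding bug")))

        # "Which issues were fixed by commits that modified PaymentService?"
        query = """
        PREFIX ex: <http://example.org/evolution#>
        SELECT ?issue ?title WHERE {
            ?commit ex:modifies ex:PaymentService ;
                    ex:fixes ?issue .
            ?issue ex:title ?title .
        }
        """
        for row in g.query(query):
            print(row.issue, row.title)

    Linking code structures, issues, and changes in one graph is what makes cross-artifact queries like the one above possible without bespoke integration code.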

    Algorithms and Logic as Programming Primers

    To adapt to all-immersive digitalization, the Finnish National Curriculum 2014 (FNC-2014) ‘digi-jumps’ by integrating programming into elementary education. However, applying the change to mathematics teachers’ everyday praxis is hindered by an overly high-level specification. To elaborate FNC-2014 into more concrete learning targets, we review the computer science syllabi of countries that are well ahead, as well as the education recommendations set by computer science organizations such as ACM and IEEE. The whole mathematics syllabus should be critically viewed in the light of these recommendations and of feedback collected from software professionals and educators. The feedback reveals an imbalance between supply and demand, i.e., what is over-taught versus under-taught, from the point of view of the requirements of current working life. The surveyed software engineers criticize the unnecessary surplus of calculus and differential equations, i.e., continuous mathematics. In contrast, the emphasis should shift more towards algorithms and data structures, flexibility in handling multiple data representations, and logic: in short, discrete mathematics. The ground for discrete mathematics should be prepared early enough, starting already at the primary level and continuing consistently through secondary to tertiary education. This paper aims to contribute to the further refinement of the mathematics syllabus by proposing a discrete-mathematics subset that especially supports the needs of computer science education, with the focus on algorithms and data structures, and on logic in particular.
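    As one concrete example of the kind of discrete-mathematics primer argued for here (an editor's illustration, not taken from the paper), an algorithm over a data structure and a piece of elementary propositional logic can each be introduced in a few lines of code:

        # Tiny discrete-mathematics primers: an algorithm on a sorted list,
        # and material implication from propositional logic.

        def binary_search(items, target):
            """Return the index of target in the sorted list items, or -1 if absent."""
            lo, hi = 0, len(items) - 1
            while lo <= hi:
                mid = (lo + hi) // 2
                if items[mid] == target:
                    return mid
                if items[mid] < target:
                    lo = mid + 1
                else:
                    hi = mid - 1
            return -1

        def implies(p, q):
            """p -> q is false only when p is true and q is false."""
            return (not p) or q

        print(binary_search([1, 3, 5, 8, 13], 8))   # 3
        print([(p, q, implies(p, q)) for p in (False, True) for q in (False, True)])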