Long-Term Average Cost in Featured Transition Systems
A software product line is a family of software products that share a common
set of mandatory features and whose individual products are differentiated by
their variable (optional or alternative) features. Family-based analysis of
software product lines takes as input a single model of a complete product line
and analyzes all its products at the same time. As the number of products in a
software product line may be large, this is generally preferable to analyzing
each product on its own. Family-based analysis, however, requires that standard
algorithms be adapted to accommodate variability.
In this paper we adapt the standard algorithm for computing limit average
cost of a weighted transition system to software product lines. Limit average
is a useful and popular measure for the long-term average behavior of a quality
attribute such as performance or energy consumption, but has hitherto not been
available for family-based analysis of software product lines. Our algorithm
operates on weighted featured transition systems, at a symbolic level, and
computes limit average cost for all products in a software product line at the
same time. We have implemented the algorithm and evaluated it on several
examples.
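For a single product, the limit average cost of a run is the limit of the average weight of its first n transitions as n grows, and the best achievable value reduces to finding a minimum mean cycle in the weighted transition system. The sketch below (Python, illustrative only; the paper's contribution is the family-based symbolic variant, which is not reproduced here) shows Karp's algorithm, a standard choice for the single-system computation.

```python
# Illustrative sketch: Karp's minimum mean cycle algorithm, the classical
# single-system computation of the best achievable limit average cost.
def min_mean_cycle(n, edges):
    """n: number of states (0..n-1); edges: list of (u, v, weight).
    Returns the minimum cycle mean, or None if the graph is acyclic."""
    INF = float("inf")
    # D[k][v] = minimum weight of a walk with exactly k edges ending at v,
    # starting from any state (multi-source variant of Karp's recurrence)
    D = [[0.0] * n] + [[INF] * n for _ in range(n)]
    for k in range(1, n + 1):
        for (u, v, w) in edges:
            if D[k - 1][u] + w < D[k][v]:
                D[k][v] = D[k - 1][u] + w
    best = None
    for v in range(n):
        if D[n][v] == INF:
            continue  # no walk of length n ends at v
        worst = max((D[n][v] - D[k][v]) / (n - k) for k in range(n))
        best = worst if best is None else min(best, worst)
    return best

# Two cycles: 0->1->0 with mean 1.5 and 0->1->2->0 with mean 3.0
print(min_mean_cycle(3, [(0, 1, 2.0), (1, 0, 1.0), (1, 2, 4.0), (2, 0, 3.0)]))
```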
UCAnDoModels: A Context-based Model Editor for Editing and Debugging UML Class and State-Machine Diagrams
Practitioners face cognitive challenges when using model editors to edit and debug UML models, which make them reluctant to
adopt modelling. To assist practitioners in their modelling tasks, we have developed effective and easy-to-use tooling techniques
and interfaces that address some of these challenges. The principal philosophy behind our tool is to employ cognitive-based techniques such as Focus+Context interfaces and increased automation of modelling tasks, in order to provide users with valid, relevant, and meaningful contextual information that is essential to fulfil a focus task (e.g., writing a transition expression). This paper presents our approach, which we call User-Centric and Artefact-Centric Development of Models (UCAnDoModels), and discusses two use-case scenarios to demonstrate how our tooling techniques can enhance the user experience with modelling tools.
NSERC CREATE 465463-2015
NSERC Discovery Grant 155243-1
A Focus+Context Approach to Alleviate Cognitive Challenges of Editing and Debugging UML Models
Model-Driven Engineering (MDE) has been proposed to increase the productivity of developing a software system. Despite its benefits, MDE has not been fully adopted in the software industry. Research has shown that modelling tools are amongst the top barriers to the adoption of MDE by industry. Recently, researchers have conducted empirical studies to identify the most severe cognitive difficulties of modellers when using UML model editors. Their analyses show that users' prominent challenges are in remembering contextual information when performing a particular modelling task, and in locating, understanding, and fixing errors in the models. To alleviate these difficulties, we propose two Focus+Context user interfaces that provide enhanced cognitive support and automation in the user's interaction with a model editor. Moreover, we conducted two empirical studies to assess the effectiveness of our interfaces on human users. Our results reveal that our interfaces help users 1) improve their ability to successfully fulfil their tasks, 2) avoid unnecessary switches among diagrams, 3) produce more error-free models, 4) remember contextual information, and 5) reduce time on tasks.
NSERC CREATE 465463-2015
NSERC Discovery Grant 155243-1
Comprehending Variability in Analysis Results of Software Product Lines
Analyses of a software product line (SPL) typically report variable results
that are annotated with logical expressions indicating the set of product
variants for which the results hold. These expressions can get complicated and
difficult to reason about when the SPL has many features and product
variants. Previous work introduced a visualizer that supports filters for
highlighting the analysis results that apply to product variants of interest,
but this work was weakly evaluated. In this paper, we report on a controlled
user study that evaluates the effectiveness of this new visualizer in helping
the user search variable results and compare the results of multiple variants.
Our findings indicate that the use of the new visualizer significantly improves
the correctness and efficiency of the user's work and reduces the user's
cognitive load in working with variable results.
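As a minimal illustration of the kind of filtering such a visualizer supports (the feature names and the representation of presence conditions below are assumptions, not the tool's actual data model), variable results can be kept or hidden by evaluating each result's presence condition against a chosen variant's feature selection:

```python
# Toy variable analysis results, each annotated with a presence condition
# over features; the real tool's conditions and results are richer.
results = {
    "deadlock reachable in state Standby": lambda f: f["Bluetooth"] and not f["FailSafe"],
    "worst-case response time: 20 ms":     lambda f: f["FailSafe"],
}

def filter_for_variant(selection):
    """Keep only the results that hold in the given product variant."""
    return [r for r, holds in results.items() if holds(selection)]

# Highlight what applies to the variant with Bluetooth and without FailSafe
print(filter_for_variant({"Bluetooth": True, "FailSafe": False}))
```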
Big-Step Semantics
With the popularity of model-driven methodologies, and the abundance of modelling languages, a major question for a requirements engineer is: which language is suitable for modelling a system under study? We address this question from a semantic point of view for big-step modelling languages (BSMLs). BSMLs are a popular class of behavioural modelling languages in which a model can respond to an environmental input by executing multiple, possibly concurrent, transitions. We deconstruct the semantics of a large class of BSMLs into high-level, orthogonal semantic aspects and discuss the relative advantages and disadvantages of the semantic options for each of these aspects, to allow a requirements engineer to compare and choose the right BSML. We accompany our presentation with many modelling examples that illustrate the differences between a set of relevant semantic options.
Incremental and Commutative Composition of State-Machine Models of Features
In this paper, we present a technique for incremental and commutative composition of state-machine models of features, using the FeatureHouse framework. The inputs to FeatureHouse are feature state-machines (or state-machine fragments) modelled in a feature-oriented requirements modelling language called FORML, and the outputs are two state-machine models: (1) a model of the whole product line, with optional features guarded by presence conditions; this model is suitable for family-based analysis of the product line; and (2) an intermediate model of composition that facilitates incremental composition of future features. We discuss the challenges and benefits of our approach and our implementation in FeatureHouse.
NSERC / Automotive Partnership Canada, APCPJ 386797 - 09 ||
Ontario Research Fund, RE05-044 ||
NSERC Discovery Grant 155243-1
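To make the guarding idea concrete, here is a toy sketch (FORML and FeatureHouse have their own languages; the names and tuple shape below are invented for illustration): when a feature's state-machine fragment is composed into the family model, each of its transitions is guarded by the feature's presence condition, so the transition exists only in products that include that feature.

```python
# Illustrative only: guarding a composed feature's transitions with its
# presence condition. This is not FORML/FeatureHouse syntax.
def compose(family_model, fragment, feature):
    """family_model, fragment: lists of (src, event, guard, dst) transitions."""
    for (src, event, guard, dst) in fragment:
        # conjoin the presence condition so the transition is only enabled
        # in product variants that include `feature`
        family_model.append((src, event, f"({guard}) and {feature}", dst))
    return family_model

base = [("Idle", "ignitionOn", "true", "Driving")]
cruise = [("Driving", "setCruise", "speed > 30", "Cruising")]
print(compose(base, cruise, "CruiseControl"))
```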
Whodunit: Classifying Code as Human Authored or GPT-4 Generated -- A case study on CodeChef problems
Artificial intelligence (AI) assistants such as GitHub Copilot and ChatGPT,
built on large language models like GPT-4, are revolutionizing how programming
tasks are performed, raising questions about whether code is authored by
generative AI models. Such questions are of particular interest to educators,
who worry that these tools enable a new form of academic dishonesty, in which
students submit AI generated code as their own work. Our research explores the
viability of using code stylometry and machine learning to distinguish between
GPT-4 generated and human-authored code. Our dataset comprises human-authored
solutions from CodeChef and AI-authored solutions generated by GPT-4. Our
classifier outperforms baselines, with an F1-score and AUC-ROC score of 0.91. A
variant of our classifier that excludes gameable features (e.g., empty lines,
whitespace) still performs well with an F1-score and AUC-ROC score of 0.89. We
also evaluated our classifier with respect to the difficulty of the programming
problem and found that there was almost no difference between easier and
intermediate problems, and the classifier performed only slightly worse on
harder problems. Our study shows that code stylometry is a promising approach
for distinguishing between GPT-4 generated code and human-authored code.
Comment: 13 pages, 5 figures, MSR Conference
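A minimal sketch of the code-stylometry pipeline, assuming a few simple layout features in the spirit of the ones named above (the paper's actual feature set and model configuration may differ):

```python
import re
from sklearn.ensemble import RandomForestClassifier

def stylometry_features(code):
    """A few toy layout features; 'empty lines' and 'whitespace' are the
    gameable kind that the paper's robust classifier variant excludes."""
    lines = code.splitlines() or [""]
    n = len(lines)
    return [
        sum(1 for l in lines if not l.strip()) / n,            # empty-line ratio
        sum(l.count(" ") for l in lines) / max(len(code), 1),  # whitespace density
        sum(len(l) for l in lines) / n,                        # mean line length
        len(re.findall(r"[a-z]+_[a-z_]+", code)) / n,          # snake_case ids per line
    ]

# Toy training data: 0 = human-authored, 1 = GPT-4 generated
human = "def add(a,b):\n    return a+b\n"
gpt = ("def add_numbers(first_number, second_number):\n\n"
       "    # Add the two numbers together\n"
       "    result = first_number + second_number\n\n"
       "    return result\n")
X, y = [stylometry_features(human), stylometry_features(gpt)], [0, 1]
clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict([stylometry_features(gpt)]))  # [1]
```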
Continuous Variable-Specific Resolutions of Feature Interactions
Systems that are assembled from independently developed features suffer from feature interactions, in which features affect one another's behaviour in surprising ways. The Feature Interaction Problem results from trying to implement an appropriate resolution for each interaction within each possible context, because the number of possible contexts to consider increases exponentially with the number of features in the system. Resolution strategies aim to combat the Feature Interaction Problem by offering default strategies that resolve entire classes of interactions, thereby reducing the work needed to resolve large numbers of interactions. However, most such approaches employ coarse-grained resolution strategies (e.g., feature priority) or a centralized arbitrator.
Our work focuses on employing variable-specific default-resolution strategies that aim to resolve, at runtime, features' conflicting actions on a system's outputs. In this paper, we extend prior work to enable co-resolution of interactions on coupled output variables and to promote smooth, continuous resolutions over execution paths. We implemented our approach within the PreScan simulator and performed a case study involving 15 automotive features; this entailed devising and implementing three resolution strategies for three output variables. The results of the case study show that the approach produces smooth and continuous resolutions of interactions throughout interesting scenarios.
NSERC Discovery Grant, 155243-12 ||
Ontario Research Fund, RE05-044 ||
NSERC / Automotive Partnership Canada, APCPJ 386797 - 09
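As a toy illustration of a variable-specific, continuity-preserving resolution (the names and numbers are invented, and this is not the paper's PreScan implementation): features request values for a shared output variable, the resolver picks the safest request, and a rate limit keeps consecutive resolved values smooth.

```python
# Illustrative resolution strategy for one output variable (target speed):
# take the safest (minimum) request, then rate-limit the change so the
# resolved output varies continuously between control steps.
def resolve_speed(requests, previous, max_delta=0.5):
    target = min(requests.values())  # safest conflicting request wins
    step = max(-max_delta, min(max_delta, target - previous))
    return previous + step

speed = 30.0
requests = {"CruiseControl": 30.0, "CollisionAvoidance": 20.0}
for _ in range(3):
    speed = resolve_speed(requests, speed)
    print(speed)  # 29.5, 29.0, 28.5 -- easing smoothly toward 20.0
```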
Writing Distributed Programs in Polylith
Polylith is a software interconnection system that allows
programmers to configure applications from mixed-language software
components (modules), and then execute those applications in diverse
environments. In general, communication between components can be
implemented with TCP/IP or XNS protocols in a network; via shared memory
between light-weight threads on a tightly coupled multiprocessor; using
custom-hardware channels between processors; or using simply a 'branch'
instruction within the same process space. Flexibility in how components
are interconnected is made possible by a 'software bus' organization.
This document serves as a manual for programmers who wish to use one
particular software bus: the TCP/IP-based network bus.
(Also cross-referenced as UMIACS-TR-90-149)
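The following is a language-neutral sketch of the software-bus idea only (it is not Polylith's actual interface, and all names are invented): components bind handlers to bus interfaces, and the transport behind the bus decides whether a call crosses a network, uses shared memory, or stays in-process.

```python
# Illustrative software-bus organization (not Polylith's real API): the
# transport is swappable, so components need not know how they are connected.
class Bus:
    def __init__(self, transport):
        self.transport = transport   # e.g., in-process call, sockets, ...
        self.handlers = {}

    def bind(self, interface, handler):
        self.handlers[interface] = handler

    def call(self, interface, *args):
        # the transport decides how the message reaches the component
        return self.transport(self.handlers[interface], args)

def in_process(handler, args):       # the 'branch instruction' case
    return handler(*args)

bus = Bus(in_process)
bus.bind("adder", lambda a, b: a + b)
print(bus.call("adder", 2, 3))       # 5; a TCP/IP transport would serialize
```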
Trace Checking for Dynamic Software Product Lines
A key objective of self-adaptive systems is to continue to provide optimal quality of service when the environment changes. A dynamic software product line (DSPL) can benefit from knowing how its various product variants would have performed (in terms of quality of service) with respect to the recent history of inputs. We propose a family-based analysis that simulates all the product variants of a DSPL simultaneously, at runtime, on recent environmental inputs, to obtain an estimate of the quality of service that each one of the product variants would have had, provided it had been executing. We assessed the efficiency of our DSPL analysis compared to the efficiency of analyzing each product individually on three case studies. We obtained mixed results due to the explosion of quality-of-service values for the product variants of a DSPL. After introducing a simple data abstraction on the values of quality-of-service variables, our DSPL analysis is between 1.4 and 7.7 times faster than analyzing the products one at a time.
NSERC Discovery Grant, 155243-12 ||
Ontario Research Fund, RE05-044
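A minimal sketch of the value-sharing idea behind such a speed-up (the names, shapes, and abstraction below are assumptions, not the paper's implementation): rather than tracking one quality-of-service value per product variant, map each abstracted value to the set of variants that currently hold it, so variants with equal values are simulated once.

```python
from collections import defaultdict

# Illustrative only: family-based update of a quality-of-service variable,
# with a rounding abstraction that keeps many variants sharing one bucket.
def family_step(buckets, next_value, abstract):
    """buckets: {qos_value: set of variants}; next_value(variant, value)
    is the per-variant update; abstract coarsens values (e.g., rounding)."""
    nxt = defaultdict(set)
    for value, variants in buckets.items():
        for variant in variants:
            nxt[abstract(next_value(variant, value))].add(variant)
    return dict(nxt)

buckets = {0.0: {"v1", "v2", "v3"}}
buckets = family_step(buckets,
                      next_value=lambda v, x: x + (1.0 if v == "v3" else 1.02),
                      abstract=lambda x: round(x, 1))
print(buckets)  # {1.0: {'v1', 'v2', 'v3'}} -- abstraction preserves sharing
```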