Teaching the Art of Computer Programming at a Distance by Generating Dialogues using Deep Neural Networks
While teaching the art of Computer Programming, students with visual impairments (VI) are at a disadvantage, because speech is their preferred modality. Existing accessibility assistants can only read out predefined texts sequentially, word for word and sentence by sentence, whereas programming concepts could be conveyed in a more structured way. Earlier, we showed that deep neural networks such as Tree-Based Convolutional Neural Networks (TBCNN) and Gated Graph Neural Networks (GGNN) can be used to classify algorithms across different programming languages with over 90% accuracy. Furthermore, TBCNN and GGNN have been shown to be useful for generating natural, conversational dialogues from natural language texts. In this paper, we propose a novel pedagogy called “Programming Assistant”: a personal tutor that responds to voice commands, which trigger hands-free explanations of programming concepts. We generate dialogues using DNNs, which substitute code with the names of the algorithms characterising the programs, and we read aloud descriptions of the code. Furthermore, the dialogue generation can be embodied in an Alexa Skill, which renders the dialogues in fully natural voices, forming the basis of a smart assistant that can handle a large number of formative questions in teaching the Art of Computer Programming at a distance.
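The abstract's central mechanism, substituting code with the names of the algorithms that characterise it before reading a description aloud, can be illustrated with a small sketch. This is not the authors' implementation: `classify_algorithm` is a hypothetical placeholder for a trained TBCNN/GGNN classifier, reduced here to a keyword heuristic so the example runs on its own.

```python
# Minimal sketch of the "substitute code with algorithm names" idea.
# classify_algorithm is a hypothetical stand-in for a trained TBCNN/GGNN
# classifier; here it is a trivial keyword heuristic so the example runs.

def classify_algorithm(code: str) -> str:
    """Return a coarse algorithm label for a code snippet (placeholder logic)."""
    if "partition" in code:
        return "quicksort"
    if "mid" in code and "while" in code:
        return "binary search"
    return "an unrecognised algorithm"

def dialogue_turn(question: str, code: str) -> str:
    """Compose a spoken answer that names the algorithm instead of reading raw code."""
    label = classify_algorithm(code)
    return (f"You asked: {question} "
            f"This function implements {label}. "
            f"Shall I explain how {label} works step by step?")

snippet = """
def search(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        ...
"""
print(dialogue_turn("What does this function do?", snippet))
```

In a voice-assistant setting, the returned string would be handed to a text-to-speech component (such as an Alexa Skill response) rather than printed.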
Towards Automatic Support of Software Model Evolution with Large Language Models
Modeling the structure and behavior of software systems plays a crucial role in various areas of software engineering. As with other software engineering artifacts, software models are subject to evolution. Supporting modelers in evolving models through model completion facilities and high-level edit operations, such as frequently occurring editing patterns, is still an open problem. Recently, large language models (i.e., generative neural networks) have garnered significant attention in various research areas, including software engineering. In this paper, we explore the potential of large language models for supporting the evolution of software models. We propose an approach that utilizes large language models for model completion and for discovering editing patterns in the model histories of software systems. Through controlled experiments using simulated model repositories, we evaluate the potential of large language models for these two tasks. We find that large language models are indeed a promising technology for supporting software model evolution, and that further investigation in this area is worthwhile.
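As a rough illustration of what LLM-based model completion might look like, the sketch below serialises a partial model fragment into a prompt and asks a generative model to propose the missing element. All names here (`query_llm`, the textual model syntax, the prompt wording) are assumptions for illustration; the paper's actual prompts, model representation, and evaluation setup may differ.

```python
# Hedged sketch of LLM-assisted model completion: serialise a partial model,
# ask a generative model to propose the missing element, and use the reply.
# `query_llm` is a placeholder; in practice it would call an actual LLM API.

PARTIAL_MODEL = """\
class Order {
  attr String id;
  ref LineItem[*] items;
}
class LineItem {
  attr int quantity;
  // completion point
}
"""

def build_prompt(partial_model: str) -> str:
    return (
        "You are completing a structural software model.\n"
        "Suggest the next attribute or reference at the completion point.\n\n"
        + partial_model
    )

def query_llm(prompt: str) -> str:
    """Placeholder for a call to a generative language model."""
    return "ref Product product;"  # canned reply so the sketch is runnable

suggestion = query_llm(build_prompt(PARTIAL_MODEL))
print("Proposed completion:", suggestion)
```

The same prompt-and-reply loop could, in principle, be pointed at a model history instead of a single model to ask for recurring editing patterns, which is the second task the paper evaluates.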
Mining domain-specific edit operations from model repositories with applications to semantic lifting of model differences and change profiling
Model transformations are central to model-driven software development. Applications of model transformations include creating models, handling model co-evolution, model merging, and understanding model evolution. In the past, various (semi-)automatic approaches to derive model transformations from meta-models or from examples have been proposed. These approaches require time-consuming handcrafting or the recording of concrete examples, or they are unable to derive complex transformations. We propose a novel unsupervised approach, called Ockham, which is able to learn edit operations from model histories in model repositories. Ockham is based on the idea that meaningful domain-specific edit operations are the ones that compress the model differences. It employs frequent subgraph mining to discover frequent structures in model difference graphs. We evaluate our approach in two controlled experiments and one real-world case study of a large-scale industrial model-driven architecture project in the railway domain. We found that our approach is able to discover frequent edit operations that have actually been applied before. Furthermore, Ockham is able to extract edit operations that are meaningful to practitioners in an industrial setting, in the sense of explaining model differences through the edit operations they comprise. We also discuss use cases (i.e., semantic lifting of model differences and change profiling) for the discovered edit operations in this industrial setting. We find that the edit operations discovered by Ockham can be used to better understand and simulate the evolution of models.
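A much-simplified sketch of the underlying intuition: change patterns that recur across many model differences are candidates for domain-specific edit operations. Ockham itself applies frequent subgraph mining to model difference graphs; the toy example below only counts co-occurring change labels, so it should be read as an illustration of the compression idea rather than the actual algorithm, and the change labels are invented.

```python
# Highly simplified sketch of the intuition behind Ockham: changes that
# recur across many model differences are candidates for edit operations.
# Real Ockham mines frequent *subgraphs* of difference graphs; here we only
# count frequent sets of change labels as a stand-in for that step.

from collections import Counter
from itertools import combinations

# Each model difference is abstracted as a set of atomic change labels.
diffs = [
    {"add State", "add Transition", "set name"},
    {"add State", "add Transition"},
    {"delete Transition", "set guard"},
    {"add State", "add Transition", "set guard"},
]

MIN_SUPPORT = 2  # a pattern must occur in at least this many differences

pattern_counts = Counter()
for diff in diffs:
    for size in (2, 3):
        for combo in combinations(sorted(diff), size):
            pattern_counts[combo] += 1

frequent = {p: c for p, c in pattern_counts.items() if c >= MIN_SUPPORT}
for pattern, count in sorted(frequent.items(), key=lambda kv: -kv[1]):
    print(count, pattern)
```

In this toy run, the pair ("add State", "add Transition") is the only frequent pattern, i.e. the kind of recurring co-change that would be proposed as a candidate edit operation.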
Specifying and detecting meaningful changes in programs
Software developers are often interested in particular changes in programs that are relevant to their current tasks: not all changes to evolving software are equally important. However, most existing differencing tools, such as diff, notify developers of more changes than they wish to see. In this paper, we propose a technique to specify and automatically detect only those changes in programs deemed meaningful, or relevant, to a particular development task. Using four elementary annotations on the grammar of any programming language, namely Ignore, Order, Prefer and Scope, developers can specify, with limited effort, the type of change they wish to detect. Our algorithms use these annotations to transform the input programs into a normalised form, and to remove clones across different normalised programs in order to detect non-trivial and relevant differences. We evaluate our tool on a benchmark of programs to demonstrate its improved precision compared to other differencing approaches.
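To make the normalisation step concrete, here is a small sketch, not the authors' tool: annotations mark some node types as Ignore and others as order-insensitive, both programs' syntax trees are rewritten into a normalised form, and only the differences that survive normalisation count as meaningful. The node types and the annotation sets below are invented for illustration, and the Prefer and Scope annotations are not modelled.

```python
# Illustrative sketch of the normalisation step: annotations such as Ignore
# and Order tell the differencer which parts of the syntax tree to drop or
# treat as order-insensitive before comparing two programs. The node types
# and annotation sets below are invented for the example.

IGNORE = {"comment"}         # node types annotated as Ignore
UNORDERED = {"import_list"}  # node types annotated as order-insensitive

def normalise(node):
    """node is (type, [children]) for inner nodes or (type, text) for leaves."""
    kind, payload = node
    if not isinstance(payload, list):
        return node
    children = [normalise(c) for c in payload if c[0] not in IGNORE]
    if kind in UNORDERED:
        children = sorted(children)
    return (kind, children)

before = ("file", [
    ("import_list", [("import", "os"), ("import", "sys")]),
    ("comment", "TODO tidy up"),
    ("func", "main"),
])
after = ("file", [
    ("import_list", [("import", "sys"), ("import", "os")]),
    ("func", "main"),
])

# Reordered imports and comment edits are not meaningful under these annotations.
print("meaningful change:", normalise(before) != normalise(after))
```

Here the reordered imports and the deleted comment normalise away, so no meaningful change is reported; any remaining difference between the normalised trees would be flagged as relevant.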