Towards a Tool-based Development Methodology for Pervasive Computing Applications
Despite much progress, developing a pervasive computing application remains a
challenge because of a lack of conceptual frameworks and supporting tools. This
challenge involves coping with heterogeneous devices, overcoming the
intricacies of distributed systems technologies, working out an architecture
for the application, encoding it in a program, writing specific code to test
the application, and finally deploying it. This paper presents a design
language and a tool suite covering the development life-cycle of a pervasive
computing application. The design language allows developers to define a
taxonomy of area-specific building blocks, abstracting over their heterogeneity. This
language also includes a layer to define the architecture of an application,
following an architectural pattern commonly used in the pervasive computing
domain. Our underlying methodology assigns roles to the stakeholders, providing
separation of concerns. Our tool suite includes a compiler that takes design
artifacts written in our language as input and generates a programming
framework that supports the subsequent development stages, namely
implementation, testing, and deployment. Our methodology has been applied to a
wide spectrum of areas. Based on these experiments, we assess our approach
through three criteria: expressiveness, usability, and productivity.
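To make the taxonomy idea concrete, below is a minimal Python sketch; it is not the paper's design language, which is not reproduced here. The idea it illustrates is that a taxonomy entry becomes an abstract interface that heterogeneous devices implement, so application code depends only on the abstraction. All class and method names are invented for illustration.

    # Illustrative sketch only; the paper defines its own design language.
    # Idea: a taxonomy entry abstracts over heterogeneous devices, so the
    # application never depends on vendor-specific protocols.
    from abc import ABC, abstractmethod

    class TemperatureSensor(ABC):
        """Hypothetical taxonomy entry: any device reporting a temperature."""
        @abstractmethod
        def read_celsius(self) -> float: ...

    class VendorAThermometer(TemperatureSensor):
        def read_celsius(self) -> float:
            return 21.5  # stand-in for vendor A's proprietary protocol

    class VendorBThermostat(TemperatureSensor):
        def read_celsius(self) -> float:
            return 22.0  # stand-in for vendor B's networked API

    def average_temperature(sensors: list[TemperatureSensor]) -> float:
        # Application logic is written against the taxonomy, not the devices.
        return sum(s.read_celsius() for s in sensors) / len(sensors)

    print(average_temperature([VendorAThermometer(), VendorBThermostat()]))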
Analysis of Software Binaries for Reengineering-Driven Product Line Architecture: An Industrial Case Study
This paper describes a method for recovering software architectures
from a set of similar (but unrelated) software products in binary form. One
intention is to drive refactoring into software product lines and combine
architecture recovery with runtime binary analysis and existing clustering
methods. Using our runtime binary analysis, we create graphs that capture the
dependencies between different software parts. These are clustered into smaller
component graphs that group strongly interacting software parts into larger
entities. The component graphs serve as a basis for further software product
line work. In this paper, we concentrate on the analysis part of the method and
the graph clustering. We apply the graph clustering method to a real
application in the context of automation/robot configuration software tools.
Comment: In Proceedings FMSPLE 2015, arXiv:1504.0301
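The paper's own analysis and clustering pipeline is not reproduced here, but its general shape can be sketched. In the hypothetical Python fragment below, a weighted dependency graph is built from invented runtime call counts, and an off-the-shelf community-detection algorithm from networkx stands in for the clustering step that groups strongly interacting parts into component candidates.

    # Hypothetical sketch; the paper's clustering method is not shown here.
    # networkx's greedy modularity communities stands in for it.
    import networkx as nx
    from networkx.algorithms.community import greedy_modularity_communities

    # (caller, callee, observed runtime call count); data is invented.
    observed_calls = [
        ("ui", "config_core", 40), ("ui", "report_gen", 3),
        ("config_core", "robot_model", 55), ("robot_model", "kinematics", 60),
        ("report_gen", "pdf_writer", 30), ("config_core", "pdf_writer", 2),
    ]

    g = nx.Graph()
    for caller, callee, count in observed_calls:
        g.add_edge(caller, callee, weight=count)

    # Cluster software parts with high interaction into component candidates.
    communities = greedy_modularity_communities(g, weight="weight")
    for i, component in enumerate(communities):
        print(f"component {i}: {sorted(component)}")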
Ways of Applying Artificial Intelligence in Software Engineering
As Artificial Intelligence (AI) techniques have become more powerful and
easier to use, they are increasingly deployed as key components of modern
software systems. While this enables new functionality and often allows better
adaptation to user needs, it also creates additional problems for software
engineers and exposes companies to new risks. Some work has been done to better
understand the interaction between Software Engineering and AI, but we lack
methods to classify ways of applying AI in software systems and to analyse and
understand the risks this poses. Only by doing so can we devise tools and
solutions to help mitigate them. This paper presents the AI in SE Application
Levels (AI-SEAL) taxonomy that categorises applications according to their
point of AI application, the type of AI technology used, and the automation
level allowed. We show the usefulness of this taxonomy by classifying 15 papers
from previous editions of the RAISE workshop. Results show that the taxonomy
allows classification of distinct AI applications and provides insights
concerning the risks associated with them. We argue that this will be important
for companies in deciding how to apply AI in their software applications and in
creating strategies for its use.
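As a sketch of how a classification along the taxonomy's three facets might be recorded, consider the Python fragment below. The facet names come from the abstract; the enum values are invented placeholders, not AI-SEAL's actual categories.

    # Sketch only: facet names follow the abstract; the values are
    # invented placeholders, not AI-SEAL's actual categories.
    from dataclasses import dataclass
    from enum import Enum

    class ApplicationPoint(Enum):    # point of AI application
        PROCESS = "process"
        PRODUCT = "product"
        RUNTIME = "runtime"

    class AITechnology(Enum):        # type of AI technology used
        RULE_BASED = "rule-based"
        MACHINE_LEARNING = "machine learning"

    class AutomationLevel(Enum):     # automation level allowed
        ADVISORY = "advisory"
        HUMAN_IN_THE_LOOP = "human in the loop"
        FULLY_AUTONOMOUS = "fully autonomous"

    @dataclass
    class AISEALClassification:
        paper: str
        point: ApplicationPoint
        technology: AITechnology
        automation: AutomationLevel

    print(AISEALClassification(
        paper="hypothetical RAISE workshop paper",
        point=ApplicationPoint.PRODUCT,
        technology=AITechnology.MACHINE_LEARNING,
        automation=AutomationLevel.ADVISORY,
    ))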
Report from GI-Dagstuhl Seminar 16394: Software Performance Engineering in the DevOps World
This report documents the program and the outcomes of GI-Dagstuhl Seminar
16394 "Software Performance Engineering in the DevOps World".
The seminar addressed the problem of performance-aware DevOps. Both DevOps
and performance engineering have been growing trends over the past one to two
years, in no small part due to the rise in importance of identifying
performance anomalies in the operations (Ops) of cloud and big data systems and
feeding these back to the development (Dev). However, so far, the research
community has treated software engineering, performance engineering, and cloud
computing mostly as individual research areas. We aimed to identify
opportunities for cross-community collaboration and to set the path for long-lasting
collaborations towards performance-aware DevOps.
The main goal of the seminar was to bring together young researchers (PhD
students in a later stage of their PhD, as well as PostDocs or Junior
Professors) in the areas of (i) software engineering, (ii) performance
engineering, and (iii) cloud computing and big data to present their current
research projects, to exchange experience and expertise, to discuss research
challenges, and to develop ideas for future collaborations.
SPEEDY: An Eclipse-based IDE for invariant inference
SPEEDY is an Eclipse-based IDE for exploring techniques that assist users in
generating correct specifications, particularly including invariant inference
algorithms and tools. It integrates with several back-end tools that propose
invariants and will incorporate published algorithms for inferring object and
loop invariants. Though the architecture is language-neutral, the current
SPEEDY implementation targets C programs. Building and using SPEEDY has confirmed earlier experience
demonstrating the importance of showing and editing specifications in the IDEs
that developers customarily use, automating as much of the production and
checking of specifications as possible, and showing counterexample information
directly in the source code editing environment. As in previous work,
automation of specification checking is provided by back-end SMT solvers.
However, reducing the effort demanded of software developers using formal
methods also requires a GUI design that guides users in writing, reviewing, and
correcting specifications and automates specification inference.
Comment: In Proceedings F-IDE 2014, arXiv:1404.578
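To illustrate the kind of check an SMT back end performs (this is not SPEEDY's code; the Z3 Python bindings stand in for its back-end solvers), the sketch below asks Z3 whether a candidate loop invariant is inductive by searching for a counterexample to one loop iteration. The loop and invariant are invented for the example.

    # Sketch of an SMT-backed invariant check, not SPEEDY's actual code.
    # Loop under analysis: i = 0; s = 0; while i < n: s += i; i += 1
    # Candidate invariant: 0 <= i <= n and 2*s == i*(i-1)
    from z3 import Ints, And, Not, Solver, unsat

    i, s, n = Ints("i s n")
    inv = And(0 <= i, i <= n, 2 * s == i * (i - 1))

    i2, s2 = i + 1, s + i  # effect of one loop iteration
    inv_after = And(0 <= i2, i2 <= n, 2 * s2 == i2 * (i2 - 1))

    solver = Solver()
    # Counterexample query: invariant and loop guard hold, the body runs,
    # and the invariant is then violated. Unsatisfiable means the invariant
    # is inductive; otherwise the model is a counterexample that an IDE
    # like SPEEDY could surface directly in the source editor.
    solver.add(inv, i < n, Not(inv_after))
    print("inductive" if solver.check() == unsat else solver.model())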
Grand Challenges of Traceability: The Next Ten Years
In 2007, the software and systems traceability community met at the first
Natural Bridge symposium on the Grand Challenges of Traceability to establish
and address research goals for achieving effective, trustworthy, and ubiquitous
traceability. Ten years later, in 2017, the community came together to evaluate
a decade of progress towards achieving these goals. These proceedings document
some of that progress. They include a series of short position papers,
representing current work in the community organized across four process axes
of traceability practice. The sessions covered topics from Trace Strategizing,
Trace Link Creation and Evolution, Trace Link Usage, real-world applications of
Traceability, and Traceability Datasets and benchmarks. Two breakout groups
focused on the importance of creating and sharing traceability datasets within
the research community, and discussed challenges related to the adoption of
tracing techniques in industrial practice. Members of the research community
are engaged in many active, ongoing, and impactful research projects. Our hope
is that ten years from now we will be able to look back at a productive decade
of research and claim that we have achieved the overarching Grand Challenge of
Traceability, which seeks for traceability to be always present, built into the
engineering process, and for it to have "effectively disappeared without a
trace". We hope that others will see the potential that traceability has for
empowering software and systems engineers to develop higher-quality products at
increasing levels of complexity and scale, and that they will join the active
community of Software and Systems traceability researchers as we move forward
into the next decade of research.
Safe, Remote-Access Swarm Robotics Research on the Robotarium
This paper describes the development of the Robotarium -- a remotely
accessible, multi-robot research facility. The impetus behind the Robotarium is
that multi-robot testbeds constitute an integral and essential part of the
multi-agent research cycle, yet they are expensive, complex, and time-consuming
to develop, operate, and maintain. These resource constraints, in turn, limit
access for large groups of researchers and students, which is what the
Robotarium is remedying by providing users with remote access to a
state-of-the-art multi-robot test facility. This paper details the design and
operation of the Robotarium and connects these to the particular
considerations that arise when making complex hardware remotely accessible.
In particular, safety must be built in at the design phase without
overly constraining which coordinated control programs the users can upload and
execute, which calls for minimally invasive safety routines with provable
performance guarantees.
Comment: 13 pages, 7 figures, 3 code samples, 72 references
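The abstract does not spell out the safety mechanism. One standard way to realise a minimally invasive safety routine is a control barrier function (CBF) filter; the Python sketch below, which is an assumption rather than Robotarium code, shows the idea for a single-integrator robot moving on a line, where the quadratic program such filters usually solve collapses to a closed-form clipping of the user's command. All dynamics and constants are illustrative.

    # Illustrative assumption: a control barrier function (CBF) filter as
    # one way to realise a minimally invasive safety routine; this is not
    # code from the Robotarium itself.
    def safety_filter(x: float, u_desired: float,
                      x_max: float = 1.0, gamma: float = 5.0) -> float:
        """Return the input closest to u_desired that keeps |x| <= x_max.

        With dynamics x_dot = u and barriers h(x) = x_max - x and
        h(x) = x_max + x, enforcing h_dot >= -gamma * h yields linear
        bounds on u, so the usual CBF quadratic program reduces to clipping.
        """
        upper = gamma * (x_max - x)   # keeps the robot below +x_max
        lower = -gamma * (x_max + x)  # keeps the robot above -x_max
        return max(lower, min(u_desired, upper))

    # A user controller pushing hard to the right is overridden only near
    # the boundary; elsewhere its command passes through unchanged.
    x, dt = 0.0, 0.05
    for _ in range(20):
        x += dt * safety_filter(x, u_desired=10.0)
    print(f"final position: {x:.3f}")  # approaches but never exceeds x_max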