Requirements engineering for explainable systems
Information systems are ubiquitous in modern life and are powered by ever more complex algorithms that are often difficult to understand. Moreover, since such systems are part of almost every aspect of human life, the quality of interaction and communication between humans and machines has become increasingly important. Explainability has therefore become an essential element of human-machine communication and an important quality requirement for modern information systems.
However, dealing with quality requirements has never been a trivial task. To develop quality systems, software professionals have to understand how to transform abstract quality goals into real-world information system solutions. Requirements engineering provides a structured approach that aids software professionals in comprehending, evaluating, and operationalizing quality requirements. Explainability has recently regained prominence and become established as a quality requirement; however, there are currently no requirements engineering recommendations specifically focused on explainable systems.
To fill this gap, this thesis investigates explainability as a quality requirement and how it relates to the information systems context, with an emphasis on requirements engineering. To this end, the thesis proposes two theories that delineate the role of explainability and establish guidelines for the requirements engineering process of explainable systems. These theories are modeled and shaped through five artifacts. Together, the theories and artifacts should help software professionals 1) to communicate and achieve a shared understanding of the concept of explainability; 2) to comprehend how explainability affects system quality and what role it plays; 3) to translate abstract quality goals into design and evaluation strategies; and 4) to shape the software development process for the development of explainable systems.
The theories and artifacts were built and evaluated through literature studies, workshops, interviews, and a case study. The findings show that the knowledge made available helps practitioners better understand the concept of explainability, facilitating the creation of explainable systems. These results suggest that the proposed theories and artifacts are plausible and practical, and that they serve as a strong starting point for further extensions and improvements in the search for high-quality explainable systems.
An Empirical Study on Decision Making for Quality Requirements
[Context] Quality requirements are important for product success yet often
handled poorly. Problems with scope decisions lead to delayed handling and
an unbalanced scope. [Objective] This study characterizes the scope decision
process to understand influencing factors and properties affecting the scope
decision of quality requirements. [Method] We studied one company's scope
decision process over a period of five years. We analyzed the decision
artifacts and interviewed experienced engineers involved in the scope decision
process. [Results] Features addressing quality aspects explicitly are a minor
part (4.41%) of all features handled. The phase of the product line seems to
influence the prevalence and acceptance rate of quality features. Lastly,
relying on external stakeholders and upfront analysis seems to lead to long
lead-times and an insufficient quality requirements scope. [Conclusions] There
is a need to make quality more explicit in the scope decision process. We
propose a scope decision process at a strategic level and a tactical level: the
former to address long-term planning, the latter to cater for a speedy
process. Furthermore, we believe it is key to balance stakeholder input with
feedback from usage and the market in a more direct way than through a long
plan-driven process.
Applying the proto-theory of design to explain and modify the parameter analysis method of conceptual design
This article reports on the outcomes of applying the notions provided by the reconstructed proto-theory of design, based on Aristotle’s remarks, to the parameter analysis (PA) method of conceptual design. Two research questions are addressed: (1) What further clarification and explanation of the PA approach does the proto-theory provide? (2) What conclusions can be drawn from studying an empirically derived design approach through the proto-theory regarding the usefulness, validity, and range of that theory? An overview of PA and an application example illustrate its present model and unique characteristics. Then, seven features of the proto-theory are explained and demonstrated through geometrical problem solving, and analogies are drawn between these features and the corresponding ideas in modern design thinking.
Historical and current uses of the terms analysis and synthesis in design are also outlined and contrasted, showing that caution should be exercised when applying them. The resulting consequences for the design moves, process, and strategy of PA allow us to propose modifications to its model, while demonstrating how the ancient method of analysis can contribute to a better understanding of contemporary design-theoretic issues.
On the Presence of Green and Sustainable Software Engineering in Higher Education Curricula
Nowadays, software is pervasive in our everyday lives. Its sustainability and
environmental impact have become major factors to be considered in the
development of software systems. Millennials, the newer generation of university
students, are particularly keen to learn about and contribute to a more
sustainable and green society. The need for training on green and sustainable
topics in software engineering has been reflected in a number of recent
studies. The goal of this paper is to get a first understanding of the
current state of teaching sustainability in the software engineering community,
the motivations behind that state, and what can be done to improve it. To this
end, we report the findings from a targeted survey
of 33 academics on the presence of green and sustainable software engineering
in higher education. The major findings from the collected data suggest that
sustainability is under-represented in the curricula, while the current focus
of teaching is on energy efficiency delivered through a fact-based approach.
The reasons vary from lack of awareness, teaching material and suitable
technologies, to the high effort required to teach sustainability. Finally, we
provide recommendations for educators willing to teach sustainability in
software engineering that can help to suit millennial students needs.Comment: The paper will be presented at the 1st International Workshop on
Software Engineering Curricula for Millennials (SECM2017
Software Engineering for Millennials, by Millennials
Software engineers need to manage both technical and professional skills in
order to be successful. Our university offers a 5.5 year program that mixes
computer science, software and computer engineering, where the first two years
are mostly math and physics courses. As such, our students' first real teamwork
experience is during the introductory SE course, where they modify open source
projects in groups of 6-8. However, students have problems working in such
large teams, and feel that the course material and project are "disconnected".
We decided to redesign this course in 2017, trying to achieve a balance between
theory and practice, and technical and professional skills, with a maximum
course workload of 150 hrs per semester. We share our experience in this paper,
discussing the strategies we used to improve teamwork and help students learn
new technologies in a more autonomous manner. We also discuss what we learned
from the two times we taught the new course.
Comment: 8 pages, 9 tables, 4 figures, Second International Workshop on
Software Engineering Education for Millennials.
Quality-aware model-driven service engineering
Service engineering and service-oriented architecture, as integration and platform technologies, are a recent approach to software systems integration. Quality aspects ranging from interoperability to maintainability to performance are of central importance for the integration of heterogeneous, distributed service-based systems. Architecture models can substantially influence the quality attributes of the implemented software systems. Besides the benefits of explicit architectures for maintainability and reuse, architectural constraints such as styles, reference architectures, and architectural patterns can influence observable software properties such as performance. Empirical performance evaluation is a process of measuring and evaluating the performance of implemented software. We present an approach for addressing the quality of services and service-based systems at the model level in the context of model-driven service engineering. The focus on architecture-level models is a consequence of the black-box character of services.
Customising software products in distributed software development: a model for allocating customisation requirements across organisational boundaries
Requirements engineering plays a vital role in the software development process. While it is difficult to manage requirements locally, it is even more difficult to communicate them across organisational boundaries and to convey them to multiple distribution customers. This paper empirically examines the requirements of multiple distribution customers in the context of customised software products. The main purpose is to understand the challenges of communicating and allocating customisation requirements across distributed organisational boundaries. We conducted an empirical survey with 19 practitioners, which confirmed that communicating customisation requirements in a distributed software development (DSD) context is a significant challenge. We therefore propose a model for allocating customisation requirements between a local, customer-based agile team and a distributed development team that uses a traditional development approach. Our conjecture is that the model would reduce the challenge of communicating requirements across organisational boundaries, address customers’ requirements, and provide a focus for future empirical studies.
Sensor System for Rescue Robots
A majority of rescue worker fatalities are a result of on-scene responses. Existing technologies help first responders in no-light scenarios, and there are even robots that can navigate radioactive areas. However, none are both quickly deployable and able to enter hard-to-reach or unsafe areas during an emergency such as an earthquake or a storm that damages a structure. In this project we created a sensor platform system to augment existing robotic solutions so that rescue workers can search for people in danger while avoiding preventable injury or death and saving time and resources. Our results showed that we were able to produce a 2D map of the room, updated as the robot moves, on a display, while also showing a live thermal image of the area in front of the system. The system is also capable of taking a digital picture in response to a triggering event and then displaying it on the computer screen. We discovered that data transfer plays a huge role in making different programs like Arduino and Processing interact with each other, and this needs to be accounted for when improving our project. In particular, our project is wired right now but should deliver data wirelessly to be of any practical use. Furthermore, we dipped our toes into SLAM technologies; if our project were to become autonomous, more research into those algorithms would make this feasible.
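The Arduino-to-host data transfer mentioned in this abstract typically runs over a serial link carrying an ad hoc text protocol. As a minimal hypothetical sketch (the frame format, field names, and `parse_packet` helper are our own illustrative assumptions, not from the project), a host-side parser for comma-separated sensor frames might look like:

```python
# Hypothetical host-side parser for sensor frames sent by an Arduino over
# a serial link. Assumed frame format (illustrative, not from the paper):
#   "T:<temp_c>,D:<distance_cm>,A:<angle_deg>\n"

def parse_packet(line: str) -> dict:
    """Parse one comma-separated key:value sensor frame into a dict of floats."""
    fields = {}
    for part in line.strip().split(","):
        key, _, value = part.partition(":")
        fields[key] = float(value)  # raises ValueError on a corrupt frame
    return fields

# In the real system, frames would arrive via a serial-port read
# (e.g. pyserial's readline()); here we parse a sample frame directly.
sample = "T:36.6,D:120.0,A:45.0\n"
reading = parse_packet(sample)
```

Keeping the wire format this simple makes the same frames readable from Processing, Python, or any other host program, which is one way to address the cross-program data-transfer issue the authors raise.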