Requirements engineering for explainable systems
Information systems are ubiquitous in modern life and are powered by ever more complex algorithms that are often difficult to understand. Moreover, since systems are part of almost every aspect of human life, the quality of interaction and communication between humans and machines has become increasingly important. Hence, explainability has become an essential element of human-machine communication and an important quality requirement for modern information systems.
However, dealing with quality requirements has never been a trivial task. To develop quality systems, software professionals have to understand how to transform abstract quality goals into real-world information system solutions. Requirements engineering provides a structured approach that aids software professionals in better comprehending, evaluating, and operationalizing quality requirements. Explainability has recently regained prominence and been acknowledged and established as a quality requirement; however, there are currently no requirements engineering recommendations specifically focused on explainable systems.
To fill this gap, this thesis investigated explainability as a quality requirement and how it relates to the information systems context, with an emphasis on requirements engineering. To this end, it proposes two theories that delineate the role of explainability and establish guidelines for the requirements engineering process of explainable systems. These theories are modeled and shaped through five artifacts. Together, the theories and artifacts should help software professionals 1) communicate and achieve a shared understanding of the concept of explainability; 2) comprehend how explainability affects system quality and what role it plays; 3) translate abstract quality goals into design and evaluation strategies; and 4) shape the software development process for the development of explainable systems.
The theories and artifacts were built and evaluated through literature studies, workshops, interviews, and a case study. The findings show that the knowledge made available helps practitioners better understand the idea of explainability, facilitating the creation of explainable systems. These results suggest that the proposed theories and artifacts are plausible and practical, serving as a strong starting point for further extensions and improvements in the search for high-quality explainable systems.
31st International Conference on Information Modelling and Knowledge Bases
Information modelling is becoming an increasingly important topic for researchers, designers, and users of information systems. The amount and complexity of information itself, the number of abstraction levels of information, and the size of databases and knowledge bases are continuously growing. Conceptual modelling is one of the sub-areas of information modelling. The aim of this conference is to bring together experts from different areas of computer science and other disciplines who have a common interest in understanding and solving problems of information modelling and knowledge bases, as well as in applying the results of research to practice. We also aim to recognize and study new areas of modelling and knowledge bases to which more attention should be paid. Therefore philosophy and logic, cognitive science, knowledge management, linguistics, and management science are relevant areas, too. The conference will feature three categories of presentations: full papers, short papers, and position papers.
Conference on the Programming Environment for Development of Numerical Software
Systematic approaches to numerical software development and testing are presented
Design as interactions of problem framing and problem solving: a formal and empirical basis for problem framing in design
In this thesis, I present, illustrate and empirically validate a novel approach to modelling and explaining the design process. The main outcome of this work is the formal definition of problem framing and the formulation of a recursive model of framing in design. The model (code-named RFD) represents a formalisation of a grey area in the science of design, and sees the design process as a recursive interaction of problem framing and problem solving.
The proposed approach is based upon a phenomenon introduced in cognitive science and known as (reflective) solution talkback. Previously, there were no formalisations of the knowledge interactions occurring within this complex reasoning operation. The recursive model is thus an attempt to express the existing knowledge in a formal and structured manner. Despite the rather abstract knowledge level on which the model is defined, it is a firm step in the clarification of the design process. The RFD model is applied to the knowledge-level description of the conducted experimental study, which is annotated and analysed in the defined terminology. Eventually, several schemas implied by the model are identified, exemplified, and elaborated to reflect the empirical results.
The model features the mutual interaction of predicates 'specifies' and 'satisfies'. The first asserts that a certain set of explicit statements is sufficient for expressing relevant desired states the design is aiming to achieve. The validity of predicate 'specifies' might not be provable directly in any problem solving theory. A particular specification can be upheld or rejected only by drawing upon the validity of a complementary predicate 'satisfies' and the (un-)acceptability of the considered candidate solution (e.g. technological artefact, product). It is the role of the predicate 'satisfies' to find and derive such a candidate solution. The predicates 'specifies' and 'satisfies' are contextually bound and can be evaluated only within a particular conceptual frame. Thus, a solution to the design problem is sound and admissible with respect to an explicit commitment to a particular specification and design frame. The role of the predicate 'acceptable' is to compare the admissible solutions and frames against the 'real' design problem. As if it answered the question: 'Is this solution really what I wanted/intended?'
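The recursive interaction of 'specifies', 'satisfies', and 'acceptable' described above can be sketched in code. This is a minimal illustrative sketch, not the thesis's formal model: the data structures, the set-based semantics of the predicates, and the re-framing step are all simplifying assumptions made for the example.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    """A conceptual frame: the vocabulary the designer commits to."""
    terms: set

@dataclass
class Specification:
    """Explicit desired-state statements, valid only within a frame."""
    statements: set

def satisfies(candidate: set, spec: Specification) -> bool:
    # A candidate solution satisfies the specification if it covers
    # every explicitly stated desired state.
    return spec.statements <= candidate

def acceptable(candidate: set, real_problem: set) -> bool:
    # Compares an admissible solution against the 'real' design problem:
    # "Is this solution really what I wanted/intended?"
    return real_problem <= candidate

def design(spec: Specification, frame: Frame, real_problem: set,
           solver, depth: int = 0):
    """Recursive interaction of problem framing and problem solving."""
    candidate = solver(spec, frame)            # problem solving step
    if candidate is not None and satisfies(candidate, spec):
        if acceptable(candidate, real_problem):
            return candidate                   # admissible and acceptable
    if depth >= 3:
        return None                            # stop after a few re-framings
    # Solution talkback: the rejected candidate prompts a re-framing --
    # new conceptual terms are articulated and the specification refined.
    new_frame = Frame(frame.terms | {f"reframed-{depth}"})
    new_spec = Specification(spec.statements | real_problem)
    return design(new_spec, new_frame, real_problem, solver, depth + 1)
```

With a toy solver that simply realises whatever the specification states, an initially incomplete specification is rejected as unacceptable and refined through one re-framing cycle before an acceptable solution is found.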
Furthermore, I propose a set of principled schemas on the conceptual (knowledge) level with an aim to make the interactive patterns of the design process explicit. These conceptual schemas are elicited from the rigorous experiments that utilised the structured and principled approach to recording the designer's conceptual reasoning steps and decisions. They include the refinement of an explicit problem specification within a conceptual frame; the refinement of an explicit problem specification using a re-framed reference; and the conceptual re-framing (i.e. the identification and articulation of new conceptual terms).
Since the conceptual schemas reflect the sequence of the 'typical' decisions the designer may make during the design process, there is no single, symbol-level method for the implementation of these conceptual patterns. Thus, when one decides to follow the abstract patterns and schemas, this abstract model alone can foster a principled design on the knowledge level. It must be acknowledged that for the purpose of computer-based support, these abstract schemas need to be turned into operational models and consequently suitable methods. However, such an operational perspective was beyond the time and resource constraints placed on this research.
European Perspectives for Public Administration
Strategies and priorities for the public sector in Europe
The public sector in our society has undergone substantial changes over the past two decades, as has the academic field studying Public Administration (PA). Over the next twenty years, further major shifts are expected in how futures are anticipated, how different cultures are integrated, how practice is handled in a relevant way, and in the range of disciplines engaging with the field of Public Administration.
The prominent scholars contributing to this book put forward research strategies and focus on priorities in the field of Public Administration. The volume will also give guidance on how to redesign teaching programmes in the field. This book will provide useful insights to compare and contrast European PA with PA in Europe, and with developments in other parts of the world.
Contributors: Geert Bouckaert (KU Leuven), Werner Jann (University of Potsdam), Jana Bertels (University of Potsdam), Paul Joyce (University of Birmingham), Meelis Kitsing (Estonian Business School, Tallinn), Thurid Hustedt (Hertie School of Governance, Berlin), Tiina Randma-Liiv (Tallinn University of Technology), Martin Burgi (Ludwig Maximilians University of Munich), Philippe Bezès (Science Po Paris; CNRS), Salvador Parrado (Spanish Distance Learning University (UNED), Madrid), Mark Bovens (Utrecht University; WRR), Roel Jennissen (WRR), Godfried Engbersen (Erasmus University Rotterdam), Meike Bokhorst (WRR), Bogdana Neamtu (Babes Bolyai University, Cluj-Napoca), Christopher Pollitt (KU Leuven), Edoardo Ongaro (Open University UK, Milton Keynes), Raffaella Saporito (Bocconi University, Milan), Per Laegreid (University of Bergen), Philip Marcel Karré (Erasmus University Rotterdam), Thomas Schillemans (Utrecht University), Martijn Van de Steen (Nederlandse School voor Openbaar Bestuur), Zeger van de Wal (National University of Singapore), Michael Bauer (University of Speyer), Stefan Becker (University of Speyer), Jean-Michel Eymeri-Douzans (Université de Toulouse), Filipe Teles (University of Aveiro), Denita Cepiku (Tor Vergata University of Rome), Marco Meneguzzo (Tor Vergata University of Rome), Külli Sarapuu (Tallinn University of Technology), Leno Saarniit (Tallinn University of Technology), Gyorgy Hajnal (Corvinus University of Budapest; Centre for Social Research of the Hungarian Academy of Sciences)
Methods for Reducing the Spread of Misinformation on the Web
The significant growth of the internet over the past thirty years has reduced the cost of access to information for anyone with unfettered connectivity. During this period, internet users were also empowered to create new content that could instantly reach millions of people via social media platforms like Facebook and Twitter. This transformation broke down the traditional ways mass-consumed content was distributed and ultimately ushered in the era of citizen journalism and freeform content. The unrestricted ability to create and distribute information was considered a significant triumph of freedom and liberty. However, the new modes of information exchange posed new challenges for modern societies, namely trust, integrity, and the spread of misinformation.
Before the emergence of the internet, newsrooms and editorial procedures imposed minimum standards on published information; today, no such standards apply when posting content on social media platforms. This change led to the proliferation of information that attracts attention but lacks integrity and reliability. There are currently two broad approaches to solving the problem of information integrity on the internet: first, the revival of trusted and reliable sources of information; second, the creation of new mechanisms for increasing the quality of information published and spread on major social media platforms. These approaches are still in their infancy, each having its pros and cons.
In this thesis, we explore the latter and develop modern machine learning methods that can help identify (un)reliable information and its sources, efficiently prioritize content requiring human fact-checking at scale, and ultimately minimize its harm to end-users by improving the quality of the news feeds that users access. This thesis leverages the collaborative dynamics of content creation on Wikipedia to extract a grounded measure of information and source reliability. We also develop a method capable of modifying ranking algorithms used widely on social media platforms such as Facebook and Twitter to minimize the long-term harm posed by the spread of misinformation.
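The idea of modifying a feed-ranking algorithm to trade engagement against misinformation harm can be illustrated with a small sketch. This is not the thesis's actual method: the scoring formula, the `p_misinfo` field (assumed to come from an upstream reliability classifier), and the `harm_weight` parameter are all invented for illustration.

```python
def rerank(items, harm_weight=2.0):
    """Re-rank feed items, penalising likely misinformation.

    items: list of dicts with 'id', 'engagement', 'p_misinfo'.
    harm_weight: how strongly estimated misinformation risk is penalised.
    """
    def adjusted(item):
        # Down-weight raw engagement by the estimated probability that
        # the item is misinformation, raised to the harm weight.
        return item["engagement"] * (1.0 - item["p_misinfo"]) ** harm_weight
    return sorted(items, key=adjusted, reverse=True)

feed = [
    {"id": "viral-rumor", "engagement": 9.0, "p_misinfo": 0.8},
    {"id": "news-report", "engagement": 6.0, "p_misinfo": 0.1},
    {"id": "cat-photo",   "engagement": 5.0, "p_misinfo": 0.0},
]
# Adjusted scores: viral-rumor 9*0.2^2 = 0.36; news-report 6*0.9^2 = 4.86;
# cat-photo 5*1.0^2 = 5.0 -- so the highly engaging rumor drops to last.
print([item["id"] for item in rerank(feed)])
```

Raising `harm_weight` pushes risky items further down the feed without touching the engagement scores themselves, which is why this style of post-hoc re-ranking can sit on top of an existing ranking pipeline.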
The Impossibilities of the Circular Economy
The fifth Factor X publication from the Federal Environment Agency (Umweltbundesamt, UBA), The Impossibilities of the Circular Economy provides an overview of the limits to the circular economy, emphasising the relationship between integrated resource use and more systemic leadership-management approaches.
On a European level, the book ties into the recent European Green Deal and aims to empower actors across sectors and EU member countries to transition from existing linear models of value capture and expression to more systemic-circular solutions of value capture and expression. The volume provides a hands-on contribution towards building the knowledge and skill sets of current and future decision-makers who face these complex-systemic crises in their day-to-day business. The book further provides access to best practices from cutting-edge research and development findings, which will empower decision-makers to develop a more sustainable and equitable economy.
Providing solutions for a more sustainable economy, this book is essential reading for scholars and students of natural resource use, sustainable business, environmental economics and sustainable development, as well as decision-makers and experts from the fields of policy development, industry and civil society.
A proposal for a global task planning architecture using the RoboEarth cloud based framework
As robotic systems become more and more capable of assisting in human domains, methods are sought to compose robot-executable plans from abstract human instructions. To cope with the semantically rich and highly expressive nature of human instructions, Hierarchical Task Network planning is often employed along with domain knowledge to solve planning problems in a pragmatic way. Commonly, the domain knowledge is specific to the planning problem at hand, impeding re-use. Therefore, this paper conceptualizes a global planning architecture based on the worldwide accessible RoboEarth cloud framework. This architecture allows environmental state inference and plan monitoring on a global level. To enable plan re-use for future requests, the RoboEarth action language has been adapted to allow semantic matching of robot capabilities with previously composed plans.
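The core Hierarchical Task Network idea mentioned above — decomposing an abstract instruction into executable actions via domain knowledge — can be sketched minimally. The fetch domain, the task and method names, and the first-method-wins strategy are all illustrative assumptions; they are not part of the RoboEarth system, and a real HTN planner would backtrack over alternative methods and check preconditions against world state.

```python
# Compound tasks decompose via methods; primitive tasks are executable.
methods = {
    "fetch": [["navigate_to_object", "grasp", "navigate_to_user", "handover"]],
    "navigate_to_object": [["localize", "plan_path", "follow_path"]],
    "navigate_to_user": [["localize", "plan_path", "follow_path"]],
}
primitives = {"localize", "plan_path", "follow_path", "grasp", "handover"}

def decompose(task):
    """Recursively expand a task into a flat sequence of primitive actions."""
    if task in primitives:
        return [task]
    plan = []
    # Use the first applicable method; a real planner would try
    # alternatives and evaluate preconditions before committing.
    for subtask in methods[task][0]:
        plan.extend(decompose(subtask))
    return plan

# The abstract instruction "fetch" expands into eight primitive actions.
print(decompose("fetch"))
```

Storing such decompositions in a shared repository is what makes the plan re-use described in the abstract possible: a later request for "fetch" can be answered by semantically matching the requesting robot's capabilities against the primitives of a previously composed plan.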