
    Explanation and trust: what to tell the user in security and AI?

    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. This analysis draws on a literature study and concept analysis, using elements from systems theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show its benefit for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss the consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.

    Machine Understanding of Human Behavior

    A widely accepted prediction is that computing will move to the background, weaving itself into the fabric of our everyday living spaces and projecting the human user into the foreground. If this prediction is to come true, then next-generation computing, which we will call human computing, should be about anticipatory user interfaces that are human-centered, built for humans based on human models. They should transcend the traditional keyboard and mouse to include natural, human-like interactive functions, including understanding and emulating certain human behaviors such as affective and social signaling. This article discusses a number of components of human behavior, how they might be integrated into computers, and how far we are from realizing the front end of human computing, that is, how far we are from enabling computers to understand human behavior.

    Use Cases for Abnormal Behaviour Detection in Smart Homes

    While people have many ideas about how a smart home should react to particular behaviours from its inhabitant, there seems to have been relatively little attempt to organise this systematically. In this paper, we attempt to rectify this with respect to context awareness and novelty detection for a smart home that monitors its inhabitant for illness and unexpected behaviour. We do this through the concept of the Use Case, which is used in software engineering to specify the behaviour of a system. We describe a set of scenarios and the possible outputs that the smart home could give, and introduce the SHMUC Repository of Smart Home Use Cases. Based on this, we can consider how probabilistic and logic-based reasoning systems would produce different capabilities.
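
    The abstract contrasts probabilistic and logic-based reasoning for monitoring an inhabitant. As a minimal, hypothetical illustration of the probabilistic side, the sketch below flags a day's activity duration as abnormal when it drifts several standard deviations away from the inhabitant's own history; the activity names, numbers, and threshold are assumptions for illustration and are not taken from the paper or the SHMUC repository.

```python
# Hypothetical sketch (not from the paper or the SHMUC repository): flag a day's
# activity duration as abnormal when it deviates strongly from the inhabitant's
# own history. Activity names, numbers and the threshold are illustrative.
from statistics import mean, stdev

def is_abnormal(history_minutes, today_minutes, threshold=3.0):
    """Return True if today's duration lies more than `threshold` standard
    deviations away from the historical mean."""
    mu, sigma = mean(history_minutes), stdev(history_minutes)
    if sigma == 0:
        return today_minutes != mu
    return abs(today_minutes - mu) / sigma > threshold

# Toy use case: the inhabitant usually sleeps roughly 8 hours per night.
sleep_history = [470, 480, 495, 460, 485, 475, 490]
print(is_abnormal(sleep_history, 200))  # True  -> the smart home could raise an alert
print(is_abnormal(sleep_history, 482))  # False -> no action needed
```

    A logic-based counterpart would instead encode explicit rules (for instance, "alert if no kitchen activity has been observed before noon"), which is the kind of difference in capabilities the use cases are meant to expose.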

    On environments as systemic exoskeletons: Crosscutting optimizers and antifragility enablers

    Classic approaches to General Systems Theory often adopt an individual perspective and a limited number of systemic classes. As a result, those classes include a wide number and variety of systems that end up being equivalent to each other. This paper introduces a different approach. First, systems belonging to the same class are further differentiated according to five major general characteristics, which introduces a "horizontal dimension" to system classification. A second component of our approach considers systems as nested compositional hierarchies of other sub-systems. The resulting "vertical dimension" further specializes the systemic classes and makes it easier to assess similarities and differences regarding properties such as resilience, performance, and quality of experience. Our approach is exemplified by considering a telemonitoring system designed in the framework of the Flemish project "Little Sister". We show how our approach makes it possible to design intelligent environments able to closely follow a system's horizontal and vertical organization and to artificially augment its features by serving as crosscutting optimizers and as enablers of antifragile behaviors. Comment: Accepted for publication in the Journal of Reliable Intelligent Environments. Extends conference papers [10,12,15]. The final publication is available at Springer via http://dx.doi.org/10.1007/s40860-015-0006-
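
    As a loose illustration of the two classification dimensions described above, the sketch below models a system that carries a set of general characteristics (the horizontal dimension) and is composed of nested sub-systems (the vertical dimension). The class and field names are illustrative assumptions rather than the paper's formalism, and the telemonitoring example only loosely echoes the "Little Sister" setting.

```python
# Loose illustration (names and fields are assumptions, not the paper's formalism):
# a system carries general characteristics (horizontal dimension) and is composed
# of nested sub-systems (vertical dimension).
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class System:
    name: str
    characteristics: Dict[str, str] = field(default_factory=dict)  # horizontal dimension
    subsystems: List["System"] = field(default_factory=list)       # vertical dimension

    def depth(self) -> int:
        """Number of levels in the compositional hierarchy rooted at this system."""
        return 1 + max((s.depth() for s in self.subsystems), default=0)

# Echoing the telemonitoring setting only loosely:
sensor = System("fall-detection sensor", {"resilience": "low"})
home_node = System("home monitoring node", {"resilience": "medium"}, [sensor])
service = System("telemonitoring service", {"resilience": "high"}, [home_node])
print(service.depth())  # 3 levels along the vertical dimension
```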

    Microservices and Machine Learning Algorithms for Adaptive Green Buildings

    In recent years, the use of services for Open Systems development has consolidated and strengthened. Advances in the Service Science and Engineering (SSE) community, promoted by the reinforcement of Web Services and Semantic Web technologies and by new Cloud computing techniques, such as the proliferation of microservices solutions, have allowed software architects to experiment with and develop new ways of building open computer systems that adapt at runtime. Home automation, intelligent buildings, robotics, and graphical user interfaces are some of the ambient environments in which such innovative trends can be applied. This paper presents a schema for the adaptation of Dynamic Computer Systems (DCS) using interdisciplinary techniques from model-driven engineering, service engineering, and soft computing. The proposal manages an orchestrated microservices schema for adapting component-based software architectural systems at runtime. This schema has been developed as a three-layer adaptive transformation process supported by a rule-based decision-making service implemented by means of Machine Learning (ML) algorithms. The experimental development was carried out at the Solar Energy Research Center (CIESOL), applying the proposed microservices schema to adapt home architectural systems in Green Buildings.
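
    As a hedged sketch of the rule-based decision-making idea mentioned above, the snippet below trains a small decision tree that maps sensed building conditions to an adaptation action, the kind of endpoint a decision-making microservice could expose to an orchestration layer. The feature set, labels, and use of scikit-learn are assumptions for illustration, not the paper's actual implementation.

```python
# Hedged sketch, not the paper's implementation: a tiny decision service that maps
# sensed building conditions to an adaptation action. The features, labels and the
# use of scikit-learn are illustrative assumptions.
from sklearn.tree import DecisionTreeClassifier

# Features: [indoor temperature (deg C), solar irradiance (W/m2), occupancy (0/1)]
X = [
    [19.0, 100.0, 1],
    [27.0, 800.0, 1],
    [22.0, 400.0, 0],
    [30.0, 900.0, 0],
]
# Labels: the adaptation the architecture should be reconfigured towards
y = ["heat", "shade_and_cool", "standby", "shade_only"]

model = DecisionTreeClassifier().fit(X, y)

def decide(temperature: float, irradiance: float, occupied: int) -> str:
    """The kind of endpoint a rule-based decision-making microservice could expose."""
    return str(model.predict([[temperature, irradiance, occupied]])[0])

print(decide(26.5, 750.0, 1))  # e.g. "shade_and_cool" with this toy training set
```

    In the three-layer transformation process described in the abstract, such a service would sit behind the orchestration layer that reconfigures the component-based architecture at runtime.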