41 research outputs found

    Optimising user engagement in highly automated virtual assistants to improve energy management and consumption

    Get PDF
    This paper presents a multi-dimensional taxonomy of levels of automation and reparation specifically adapted to Virtual Assistants (VAs) in the context of Human-Human-Interaction (HHI). Building on this framework, the main output of this study is a method of calculation that generates a trust rating, a score that can then be used to optimise user engagement. The authors believe this framework could play a critical role in optimising energy efficiency in both management and consumption. Particular attention has been given to the relevance of contextual events and dynamism in enhancing trust: trust formation is a dynamic process that starts before the user's first contact with the system and continues long thereafter. Furthermore, as the system evolves, the factors affecting trust, and the system itself, change over the course of user interactions; systems therefore need to be able to adapt and evolve. Present work is dedicated to further understanding how contexts and their derivative unintended consequences affect trust in highly automated VAs in the area of energy consumption.
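
    The abstract describes a calculation that aggregates taxonomy dimensions into a single trust rating. A minimal sketch of that kind of weighted aggregation follows; the dimensions, weights, and scores are hypothetical illustrations, not the authors' actual method.

```python
# A minimal sketch of a taxonomy-based trust rating; the dimensions and
# weights below are hypothetical, not the paper's actual calculation.
def trust_rating(scores, weights):
    """Weighted average of per-dimension trust scores in [0, 1]."""
    total = sum(weights.values())
    return sum(scores[d] * weights[d] for d in scores) / total

# Hypothetical taxonomy dimensions for a highly automated VA.
scores = {"automation": 0.8, "reparation": 0.6, "context": 0.7}
weights = {"automation": 2.0, "reparation": 1.0, "context": 1.0}
print(trust_rating(scores, weights))  # single score used to tune engagement
```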

    Addressing accountability in highly autonomous virtual assistants

    Get PDF
    Building from a survey specifically developed to address the rising concerns around highly autonomous virtual assistants, this paper presents a multi-level taxonomy of accountability specifically adapted to virtual assistants in the context of Human-Human-Interaction (HHI). Based on the research findings, the authors recommend integrating accountability as a capital variable in the development of future applications around highly automated systems. This element introduces a sense of balance, in terms of integrity, between users and developers, enhancing trust in the interactive process. Ongoing work is dedicated to further understanding to what extent different contexts affect accountability in virtual assistants.
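
    As an illustration of how a multi-level accountability taxonomy could be encoded in software, here is a minimal sketch; the specific levels and the autonomy-to-accountability mapping are hypothetical, not the levels defined in the paper.

```python
# Hypothetical encoding of a multi-level accountability taxonomy; the
# levels below are illustrative, not the ones from the paper's survey.
from enum import IntEnum

class AccountabilityLevel(IntEnum):
    NONE = 0         # no account of actions is given
    LOGGING = 1      # actions are recorded for later audit
    EXPLANATION = 2  # the assistant explains its decisions on request
    REPARATION = 3   # errors are acknowledged and actively repaired

def required_level(autonomy: int) -> AccountabilityLevel:
    """Hypothetical rule: higher autonomy demands stronger accountability."""
    return AccountabilityLevel(min(autonomy, int(AccountabilityLevel.REPARATION)))

print(required_level(2))  # AccountabilityLevel.EXPLANATION
```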

    Inverse Abstraction of Neural Networks Using Symbolic Interpolation

    Get PDF
    Neural networks in real-world applications have to satisfy critical properties such as safety and reliability. The analysis of such properties typically requires extracting information through computing pre-images of the network transformations, but it is well-known that explicit computation of pre-images is intractable. We introduce new methods for computing compact symbolic abstractions of pre-images by computing their overapproximations and underapproximations through all layers. The abstraction of pre-images enables formal analysis and knowledge extraction without affecting standard learning algorithms. We use inverse abstractions to automatically extract simple control laws and compact representations for pre-images corresponding to unsafe outputs. We illustrate that the extracted abstractions are interpretable and can be used for analyzing complex properties.
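
    To make the pre-image idea concrete: for a single affine layer, the exact pre-image of an output box is a polytope, and a bounding box of that polytope is a simple over-approximation. The sketch below computes such a box with linear programming; it illustrates the general notion of pre-image over-approximation, not the paper's symbolic-interpolation algorithm.

```python
# Over-approximate the pre-image {x : y_lo <= Wx + b <= y_hi} of one
# affine layer by a bounding box, via one LP per input coordinate.
# Illustrative only; not the paper's symbolic-interpolation method.
import numpy as np
from scipy.optimize import linprog

def preimage_box(W, b, y_lo, y_hi, x_bounds):
    n = W.shape[1]
    # Encode y_lo <= Wx + b <= y_hi as A_ub @ x <= b_ub.
    A_ub = np.vstack([W, -W])
    b_ub = np.concatenate([y_hi - b, b - y_lo])
    box = []
    for i in range(n):
        c = np.zeros(n)
        c[i] = 1.0
        lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=x_bounds).fun
        hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=x_bounds).fun
        box.append((lo, hi))
    return box

# Which inputs in [-1, 1]^2 can produce outputs near (1, 0)?
W = np.array([[1.0, 2.0], [0.5, -1.0]])
b = np.zeros(2)
print(preimage_box(W, b, np.array([0.9, -0.1]), np.array([1.1, 0.1]),
                   [(-1, 1), (-1, 1)]))
```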

    Neuro-Symbolic Verification of Deep Neural Networks

    Get PDF

    ReluDiff: Differential Verification of Deep Neural Networks

    Full text link
    As deep neural networks are increasingly being deployed in practice, their efficiency has become an important issue. While there are compression techniques for reducing a network's size, energy consumption and computational requirements, they only demonstrate empirically that there is no loss of accuracy; they lack formal guarantees about the compressed network, e.g., in the presence of adversarial examples. Existing verification techniques such as Reluplex, ReluVal, and DeepPoly provide formal guarantees, but they are designed for analyzing a single network rather than the relationship between two networks. To fill the gap, we develop a new method for differential verification of two closely related networks. Our method consists of a fast but approximate forward interval analysis pass followed by a backward pass that iteratively refines the approximation until the desired property is verified. We have two main innovations. During the forward pass, we exploit structural and behavioral similarities of the two networks to more accurately bound the difference between the output neurons of the two networks. Then, in the backward pass, we leverage the gradient differences to more accurately compute the most beneficial refinement. Our experiments show that, compared to state-of-the-art verification tools, our method can achieve orders-of-magnitude speedup and prove many more properties than existing tools.
    Comment: Extended version of ICSE 2020 paper. This version includes an appendix with proofs for some of the content in Section 4.
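
    The forward pass's key idea, bounding the difference between the two networks directly rather than bounding each network separately, can be seen already on a single affine layer. A minimal sketch of that contrast follows; it is an illustration of the principle, not the ReluDiff implementation.

```python
# Why differential bounds beat subtracting two independent interval
# analyses: for one affine layer, the difference (W2 - W1)x + (b2 - b1)
# can be bounded directly. Illustrative; not the ReluDiff code.
import numpy as np

def interval_affine(W, b, lo, hi):
    """Interval bounds of Wx + b for x in [lo, hi]."""
    center, radius = (lo + hi) / 2.0, (hi - lo) / 2.0
    c = W @ center + b
    r = np.abs(W) @ radius
    return c - r, c + r

rng = np.random.default_rng(0)
W1 = rng.normal(size=(3, 4))
W2 = W1 + 1e-3 * rng.normal(size=(3, 4))  # closely related "compressed" net
b = np.zeros(3)
lo, hi = -np.ones(4), np.ones(4)

# Naive: bound each network separately, then subtract the intervals.
l1, u1 = interval_affine(W1, b, lo, hi)
l2, u2 = interval_affine(W2, b, lo, hi)
print("naive width:", (u2 - l1 - (l2 - u1)).max())

# Differential: bound (W2 - W1)x directly; far tighter here.
dl, du = interval_affine(W2 - W1, np.zeros(3), lo, hi)
print("differential width:", (du - dl).max())
```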