A Temporal Logic for Modelling Activities of Daily Living
Behaviour support technology is aimed at assisting people in organizing their Activities of Daily Living (ADLs). Numerous frameworks have been developed for activity recognition and for generating specific types of support actions, such as reminders. The main goal of our research is to develop a generic formal framework for representing and reasoning about ADLs and their temporal relations. This framework should facilitate modelling and reasoning about 1) durative activities, 2) relations between higher-level activities and subactivities, 3) activity instances, and 4) activity duration. In this paper we present a temporal logic that extends the logic TPTL for the specification of real-time systems. Our logic TPTL_{bih} is defined over Behaviour Identification Hierarchies (BIHs), which represent ADL structure and typical activity duration. To model the execution of ADLs, the states of the temporal traces in TPTL_{bih} comprise information about the start, stop and current execution of activities. We provide a number of constraints on these traces that we argue are required for the accurate representation of ADL execution, and investigate corresponding validities in the logic. To evaluate the expressivity of the logic, we give a formal definition of the notion of Coherence for (complex) activities, by which we mean that an activity is carried out without interruption and in a timely fashion. We show that the definition is satisfiable in our framework. In this way the logic forms the basis for a generic monitoring and reasoning framework for ADLs.
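To illustrate the intended style of specification (a minimal sketch in the spirit of TPTL's freeze quantification; the predicates start(a) and stop(a) and the bound d_a are illustrative placeholders, not the paper's exact syntax), a requirement such as "whenever activity a starts, it stops within its typical duration d_a" could be written as

    \Box\, x.\bigl(\mathit{start}(a) \rightarrow \Diamond\, y.\,(\mathit{stop}(a) \wedge y \le x + d_a)\bigr)

where x and y are freeze variables bound to the time stamps of the states at which the respective subformulas are evaluated.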
Using Psychological Characteristics of Situations for Social Situation Comprehension in Support Agents
Support agents that help users in their daily lives need to take into account not only the user's characteristics, but also the social situation of the user. Existing work on including social context uses some type of situation cue as an input to information processing techniques in order to assess the expected behavior of the user. However, research shows that it is important to also determine the meaning of a situation, a step which we refer to as social situation comprehension. We propose using psychological characteristics of situations, which have been proposed in social science for ascribing meaning to situations, as the basis for social situation comprehension. Using data from user studies, we evaluate this proposal from two perspectives. First, from a technical perspective, we show that psychological characteristics of situations can be used as input to predict the priority of social situations, and that psychological characteristics of situations can be predicted from the features of a social situation. Second, we investigate the role of the comprehension step in human-machine meaning making. We show that psychological characteristics can be successfully used as a basis for explanations given to users about the decisions of an agenda management personal assistant agent.
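As a rough illustration of the two-step pipeline described above (a sketch only, assuming scikit-learn and synthetic stand-in data; all features, targets and model choices are hypothetical, not the study's actual features or models):

    # Sketch: situation features -> psychological characteristics -> priority.
    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(0)

    # Toy stand-in data: raw features of a social situation (e.g., who is involved, setting)
    X_features = rng.random((200, 6))
    # Ratings on psychological characteristics of situations
    Y_characteristics = rng.random((200, 4))
    # Priority the support agent must estimate (synthetic target for the sketch)
    y_priority = Y_characteristics @ np.array([0.5, 0.2, 0.2, 0.1])

    # Step 1: social situation comprehension -- predict characteristics from features
    comprehension = RandomForestRegressor(n_estimators=100, random_state=0)
    comprehension.fit(X_features, Y_characteristics)

    # Step 2: use the (predicted) characteristics to predict situation priority
    priority_model = RandomForestRegressor(n_estimators=100, random_state=0)
    priority_model.fit(comprehension.predict(X_features), y_priority)

    new_situation = rng.random((1, 6))
    print(priority_model.predict(comprehension.predict(new_situation)))

The intermediate characteristics can then also serve as human-interpretable grounds for the agent's explanations.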
Context-Sensitive Sharedness Criteria for Teamwork (Extended Abstract)
Teamwork between humans and intelligent systems gains importance with the maturing of agent and robot technology. In the social sciences, sharedness of mental models is used to explain and understand teamwork. To use this concept for developing teams that include agents, we propose context-sensitive sharedness criteria. These criteria specify how much, what, and among whom knowledge in a team should be shared.
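One way to picture such a criterion (an illustrative sketch only, not the authors' formalism; the class and field names are hypothetical) is as a context-indexed check over the team members' knowledge:

    # Sketch: a context-sensitive sharedness criterion specifies what knowledge
    # must be shared, among whom, and in which context it applies.
    from dataclasses import dataclass

    @dataclass
    class SharednessCriterion:
        context: str        # when the criterion applies
        facts: set          # what should be shared
        members: set        # among whom it should be shared

        def satisfied(self, current_context, beliefs):
            if current_context != self.context:
                return True  # the criterion is only relevant in its own context
            return all(self.facts <= beliefs.get(m, set()) for m in self.members)

    beliefs = {"human": {"goal", "plan"}, "agent": {"goal"}}
    crit = SharednessCriterion("joint_task", {"goal", "plan"}, {"human", "agent"})
    print(crit.satisfied("joint_task", beliefs))  # False: 'plan' is not shared with the agent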
Automated multi-level governance compliance checking
An institution typically comprises constitutive rules, which give shape and meaning to social interactions, and regulative rules, which prescribe agent behaviour in the society. Regulative rules guide social interaction, in particular when they are coupled with reward and punishment regulations that are enforced for (non-)compliance. Examples of institutions include legislation and contracts. Formal institutional reasoning frameworks automate ascribing social meaning to agent interaction and determining whether those actions have social meanings that constitute (non-)compliant behaviour. Yet institutions do not just govern societies. Rather, in what is called multi-level governance, institutional designs at lower governance levels (e.g., national legislation at the national level) are governed by higher-level institutions (e.g., directives, human rights charters and supranational agreements). When an institution design is found to be non-compliant, punishments can be issued by annulling the legislation or imposing fines on the responsible designers (i.e., the government). In order to enforce multi-level governance, higher governance levels (e.g., courts applying human rights) must check lower-level institution designs (e.g., national legislation) for compliance; in order to avoid punishment, lower governance levels (e.g., national governments) must check that their institution designs are compliant with higher-level institutions before enactment. However, checking the (non-)compliance of institution designs in multi-level governance is non-trivial, in particular because institutions in multi-level governance operate at different levels of abstraction: lower-level institutions govern with concrete regulations, whilst higher-level institutions typically comprise increasingly vague and abstract regulations. To address this issue, in this paper we propose a formal framework with a novel semantics that defines compliance between concrete lower-level institutions and abstract higher-level institutions. The formal framework is complemented by a sound and complete computational framework that automates compliance checking, which we apply to a real-world case study.
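As a rough illustration of the abstraction gap the paper addresses (a sketch only, not the paper's formal semantics; the rules and the counts-as mapping are hypothetical):

    # Sketch: check concrete lower-level permissions against abstract
    # higher-level prohibitions via a hypothetical "counts-as" mapping.
    ABSTRACTS_TO = {                       # concrete term -> abstract term
        "collect_fingerprints": "process_personal_data",
        "publish_register": "process_personal_data",
        "issue_fine": "impose_sanction",
    }

    higher_level_prohibitions = {"process_personal_data"}
    lower_level_permissions = {"collect_fingerprints", "issue_fine"}

    def non_compliant(permissions, prohibitions, abstracts_to):
        """Concrete permissions whose abstract social meaning is prohibited above."""
        return {p for p in permissions if abstracts_to.get(p) in prohibitions}

    print(non_compliant(lower_level_permissions, higher_level_prohibitions, ABSTRACTS_TO))
    # {'collect_fingerprints'}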
Separating Agent-Functioning and Inter-Agent Coordination by Activated Modules: The DECOMAS Architecture
The embedding of self-organizing inter-agent processes in distributed software applications enables the decentralized coordination of system elements, based solely on concerted, localized interactions. The separation and encapsulation of the activities that are conceptually related to coordination is a crucial concern for systematic development practices, in order to enable the reuse and systematic integration of coordination processes in software systems. Here, we discuss a programming model that is based on the externalization of process prescriptions and their embedding in Multi-Agent Systems (MAS). One fundamental design concern for a corresponding execution middleware is the minimally invasive augmentation of the activities that affect coordination. This design challenge is approached by the activation of agent modules. Modules are converted to software elements that reason about and modify their host agent. We discuss and formalize this extension within the context of a generic coordination architecture and exemplify the proposed programming model with the decentralized management of (web) service infrastructures.
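The idea of activated modules can be pictured roughly as follows (an illustrative sketch only, not the DECOMAS middleware API; all names are hypothetical):

    # Sketch: a coordination module is activated on a host agent and may then
    # observe and modify that agent's state, keeping coordination logic
    # externalized from the agent's core functioning.
    class Agent:
        def __init__(self, name):
            self.name = name
            self.tasks = []
            self.modules = []

        def activate(self, module):
            self.modules.append(module)
            module.on_activate(self)       # the module gains access to its host

        def step(self):
            for module in self.modules:    # coordination runs alongside agent logic
                module.coordinate()

    class LoadSharingModule:
        """Hypothetical module: sheds tasks when its host agent is overloaded."""
        def on_activate(self, host):
            self.host = host

        def coordinate(self):
            while len(self.host.tasks) > 2:
                print(f"{self.host.name} delegates {self.host.tasks.pop()}")

    agent = Agent("worker-1")
    agent.activate(LoadSharingModule())
    agent.tasks = ["t1", "t2", "t3", "t4"]
    agent.step()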
A Research Agenda for Hybrid Intelligence: Augmenting Human Intellect With Collaborative, Adaptive, Responsible, and Explainable Artificial Intelligence
We define hybrid intelligence (HI) as the combination of human and machine intelligence, augmenting human intellect and capabilities instead of replacing them, and achieving goals that were unreachable by either humans or machines alone. HI is an important new research focus for artificial intelligence, and we set a research agenda for HI by formulating four challenges.
Intimate Computing: Abstract presented at the Philosophy Conference "Dimensions of Vulnerability" (Vienna, April 2018)
Digital information technologies are becoming ever more intimately interwoven with our society and with individuals' daily lives. Through developments in sensor technology and material sciences, technologies such as wearables, high-tech clothing, smart objects, and assistive technologies become embedded in our environments and on our bodies. These digital technologies have many (potential) benefits, e.g., regarding health, efficiency, safety, and human connection. Yet they also raise concerns about how we might be affected as human beings, in particular when combined with the power of Artificial Intelligence and Data Science. In this abstract I posit that the 'intimate' nature of these technologies, which collect and respond to us on the basis of highly personal data, gives rise to new human vulnerabilities that affect physical, psychological and social aspects of our identity. I argue that we should account for these vulnerabilities in engineering intimate technologies: intimate computing is computing with vulnerability.