1,872 research outputs found
Multimodal agent interfaces and system architectures for health and fitness companions
Multimodal conversational spoken dialogues using physical and virtual agents provide a potential interface to motivate and support users in the domain of health and fitness. In this paper we present how such multimodal conversational Companions can be implemented to support their owners in various pervasive and mobile settings. In particular, we focus on different forms of multimodality and system architectures for such interfaces
Ubiquitous Integration and Temporal Synchronisation (UbiITS) framework: a solution for building complex multimodal data capture and interactive systems
Contemporary Data Capture and Interactive Systems (DCIS) involve various technical
complexities, such as multimodal data types, diverse hardware and software
components, time-synchronisation issues and distributed deployment configurations. Building
these systems is inherently difficult, and these complexities must be addressed before the
intended functionality can be attained. The technical issues are often
common across diverse applications.
This thesis presents the Ubiquitous Integration and Temporal Synchronisation (UbiITS)
framework, a generic solution to address the technical complexities in building DCISs. The
proposed solution is an abstract software framework that can be extended and customised to
any application requirements. UbiITS includes the fundamental software components,
techniques, system-level layer abstractions and a reference architecture that together enable
the systematic construction of complex DCISs.
This work details four case studies that showcase the versatility and extensibility of the UbiITS
framework's functionalities and demonstrate how it was employed to successfully solve a
range of technical requirements. In each case UbiITS operated as the core element of the
application. Additionally, these case studies are novel systems in their own right within their
respective domains. Longstanding technical issues, such as flexible integration and interoperation of
multimodal tools and precise time synchronisation, were resolved in each application by
employing UbiITS. The framework provided a functional system infrastructure in
these cases, opening up new lines of research in disciplines where these
research approaches would not have been possible without it. The thesis further presents a
sample implementation of the framework in device firmware, exhibiting its capability to be
implemented directly on a hardware platform.
Summary metrics are also produced to establish the complexity, reusability, extensibility,
implementation and maintainability characteristics of the framework. Engineering and Physical Sciences Research Council (EPSRC) grants: EP/F02553X/1, 114433 and 11394.
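The temporal-synchronisation problem the abstract refers to can be pictured with a small sketch: given two independently timestamped sensor streams, pair each sample in one stream with the nearest-in-time sample in the other. This is a generic illustration of the class of problem UbiITS addresses, not code from the framework itself; all names are invented.

```python
import bisect

def align_streams(reference, other, tolerance=0.05):
    """For each (timestamp, value) sample in `reference`, find the nearest
    sample in `other` within `tolerance` seconds; return matched pairs.
    Both inputs must be sorted by timestamp."""
    other_times = [t for t, _ in other]
    pairs = []
    for t_ref, v_ref in reference:
        i = bisect.bisect_left(other_times, t_ref)
        # Candidates: the neighbour on each side of the insertion point.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(other):
                dt = abs(other[j][0] - t_ref)
                if dt <= tolerance and (best is None or dt < best[0]):
                    best = (dt, other[j])
        if best is not None:
            pairs.append(((t_ref, v_ref), best[1]))
    return pairs
```

For example, matching 25 Hz video frames against an accelerometer stream with a 20 ms tolerance pairs only the samples that genuinely coincide and drops the rest.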
A Smart Kitchen for Ambient Assisted Living
The kitchen is one of the places in the home where users can benefit from Ambient Assisted Living (AAL) applications. Moreover, it is where older people suffer most of their domestic injuries. This paper presents a novel design, implementation and assessment of a Smart Kitchen that provides Ambient Assisted Living services: a smart environment that increases elderly and disabled people's autonomy in their kitchen-related activities through context and user awareness, appropriate user interaction and artificial intelligence. It is based on a modular architecture which integrates a wide variety of home technology (household appliances, sensors, user interfaces, etc.) and the associated communication standards and media (power line, radio frequency, infrared and cable). Its software architecture is based on the Open Services Gateway initiative (OSGi), which allows a complex system to be built from small modules, each providing the specific functionality required, and to be easily scaled as needs grow. The system has been evaluated by a large number of real users (63) and carers (31) in two living labs in Spain and the UK. Results show the large potential of the system's functionalities combined with good usability and physical, sensory and cognitive accessibility
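OSGi itself is a Java service framework; as a language-neutral illustration of the publish/find model that makes this kind of modular scaling possible, a service registry can be sketched in a few lines (all names here are invented for illustration):

```python
class ServiceRegistry:
    """Toy version of OSGi-style service registration and lookup."""
    def __init__(self):
        self._services = {}  # interface name -> list of providers

    def register(self, interface, provider):
        # A module publishes a provider under an interface name.
        self._services.setdefault(interface, []).append(provider)

    def lookup(self, interface):
        # A consumer binds to the first available provider.
        providers = self._services.get(interface, [])
        if not providers:
            raise LookupError(f"no provider registered for {interface!r}")
        return providers[0]

# Each appliance module registers the services it provides.
registry = ServiceRegistry()
registry.register("TemperatureSensor", lambda: 21.5)
registry.register("UserInterface", lambda msg: f"DISPLAY: {msg}")
```

Adding a new appliance then only requires registering another provider; no existing module has to change, which is the property the abstract's "easily scaled" claim rests on.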
A knowledge-based approach towards human activity recognition in smart environments
It has long been known that the population of older persons is on the rise. A recent report estimates that, globally, the share of the population aged 65 years or over is expected to increase from 9.3 percent in 2020 to around 16.0 percent in 2050 [1]. This has been one of the main sources of motivation for active research in the domain of human
activity recognition in smart-homes. The ability to perform ADL without assistance from
other people can be considered as a reference for the estimation of the independent living
level of the older person. Conventionally, this has been assessed by health-care domain
experts via a qualitative evaluation of the ADL. Since this evaluation is qualitative, it can
vary based on the person being monitored and the caregiver's experience. A significant
amount of research work is implicitly or explicitly aimed at augmenting the health-care
domain expert's qualitative evaluation with quantitative data or knowledge obtained from
HAR. From a medical perspective, there is a lack of evidence about the technology readiness
level of smart home architectures supporting older persons by recognizing ADL [2]. We
hypothesize that this may be due to a lack of effective collaboration between smart-home
researchers/developers and health-care domain experts, especially when considering HAR.
We foresee an increase in HAR systems being developed in close collaboration with caregivers
and geriatricians to support their qualitative evaluation of ADL with explainable quantitative
outcomes of the HAR systems. This has been a motivation for the work in this thesis. The
recognition of human activities – in particular ADL – need not be limited to supporting
the health and well-being of older people; it can be relevant to home users in general. For
instance, HAR could support digital assistants or companion robots to provide contextually
relevant and proactive support to the home users, whether young adults or old. This has also
been a motivation for the work in this thesis.
Given our motivations, namely (i) facilitating iterative development and collaboration between HAR system researchers/developers and health-care domain experts in ADL,
and (ii) robust HAR that can support digital assistants or companion robots, there is a need
for a HAR framework that is modular and flexible at its core, facilitating
an iterative development process [3], which is an integral part of collaborative work that involves develop-test-improve phases. At the same time, the framework should be intelligible
for the sake of enriched collaboration with health-care domain experts. Furthermore, it
should be scalable, online, and accurate for having robust HAR, which can enable many
smart-home applications. The goal of this thesis is to design and evaluate such a framework.
This thesis contributes to the domain of HAR in smart-homes. In particular, the contribution can be divided into three parts. The first contribution is Arianna+, a framework for developing
networks of ontologies - for knowledge representation and reasoning - that enables smart
homes to perform human activity recognition online. The second contribution is OWLOOP,
an API that supports the development of HAR system architectures based on Arianna+. It
enables the use of the Web Ontology Language (OWL) by means of Object-Oriented
Programming (OOP). The third contribution is the evaluation and exploitation of Arianna+
using the OWLOOP API, which has resulted in four
HAR system implementations. The evaluations and results of these HAR systems emphasize
the novelty of Arianna+
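OWLOOP is a Java API built on OWL technologies; the following pure-Python sketch (all class, property and rule names invented) is only meant to illustrate the general idea of handling ontology individuals and their assertions through object-oriented code rather than raw axioms:

```python
class Individual:
    """Toy OOP view of an ontology individual: property assertions are
    added and queried through methods instead of raw OWL axioms."""
    def __init__(self, name):
        self.name = name
        self._assertions = {}  # property name -> set of values

    def assert_property(self, prop, value):
        self._assertions.setdefault(prop, set()).add(value)

    def values(self, prop):
        return self._assertions.get(prop, set())

class ActivityRule:
    """Toy rule in the spirit of ontology-based HAR: someone located in
    the kitchen who uses the stove is classified as cooking."""
    def classify(self, person):
        if "kitchen" in person.values("isIn") and "stove" in person.values("uses"):
            person.assert_property("performs", "Cooking")
        return person.values("performs")
```

The point of the OOP view is that a rule like this stays readable to non-ontologists, which is exactly the intelligibility the thesis argues health-care collaborators need.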
A High-Fidelity Open Embodied Avatar with Lip Syncing and Expression Capabilities
Embodied avatars as virtual agents have many applications and provide
benefits over disembodied agents, allowing non-verbal social and interactional
cues to be leveraged, in a similar manner to how humans interact with each
other. We present an open embodied avatar built upon the Unreal Engine that can
be controlled via a simple python programming interface. The avatar has lip
syncing (phoneme control), head gesture and facial expression (using either
facial action units or cardinal emotion categories) capabilities. We release
code and models to illustrate how the avatar can be controlled like a puppet or
used to create a simple conversational agent using public application
programming interfaces (APIs). GitHub link: https://github.com/danmcduff/AvatarSim
Comment: International Conference on Multimodal Interaction (ICMI 2019)
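The phoneme-level lip-sync control mentioned above can be pictured with a small scheduling sketch. The phoneme-to-viseme table and timings below are invented for illustration and are not the repository's actual interface:

```python
# Hypothetical phoneme -> viseme (mouth-shape) table; a real system
# would cover a full phoneme inventory such as ARPAbet.
PHONEME_TO_VISEME = {
    "HH": "breath", "EH": "open", "L": "tongue-up", "OW": "round",
}

def viseme_track(phonemes, duration=0.08):
    """Turn a phoneme sequence into (start_time, viseme, duration)
    keyframes that a blendshape controller could play back."""
    track, t = [], 0.0
    for ph in phonemes:
        track.append((round(t, 3), PHONEME_TO_VISEME.get(ph, "neutral"), duration))
        t += duration
    return track
```

Driving the mouth from a keyframe track like this, rather than from raw audio, is what makes the "controlled like a puppet" usage in the abstract possible.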
Bringing together commercial and academic perspectives for the development of intelligent AmI interfaces
The users of Ambient Intelligence systems expect intelligent behavior from their environment, receiving adapted and easily accessible services and functionality. This is only possible if communication between the user and the system is carried out through an interface that is simple (i.e. does not have a steep learning curve), fluid (i.e. the communication takes place rapidly and effectively), and robust (i.e. the system understands the user correctly). Natural language interfaces such as dialog systems combine these three requirements, as they are based on a spoken conversation between the user and the system that resembles human communication. Current industrial development of commercial dialog systems deploys robust interfaces in strictly defined application domains. However, commercial systems have not yet adopted the new perspective proposed in academic settings, which would allow straightforward adaptation of these interfaces to various application domains. This would be highly beneficial for their use in AmI settings, as the same interface could be used in varying environments. In this paper, we propose a new approach to bridge the gap between the academic and industrial perspectives in order to develop dialog systems using an academic paradigm while employing industrial standards, which makes it possible to obtain new-generation interfaces without changing the already existing commercial infrastructures. Our proposal has been evaluated with the successful development of a real dialog system that follows the proposed approach to manage dialog and generates code compliant with the industry-wide standard VoiceXML. Research funded by projects CICYT TIN2011-28620-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485), and DPS2008-07029-C02-02.
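The final code-generation step described above can be illustrated with a minimal sketch that renders one slot-filling dialog step as a VoiceXML form. The dialog content is invented, and a real generated document would also carry grammars, event handlers and transitions:

```python
import xml.etree.ElementTree as ET

def dialog_step_to_vxml(field_name, prompt_text):
    """Render a single slot-filling dialog step as a VoiceXML <form>."""
    vxml = ET.Element("vxml", version="2.1", xmlns="http://www.w3.org/2001/vxml")
    form = ET.SubElement(vxml, "form", id="step")
    field = ET.SubElement(form, "field", name=field_name)
    prompt = ET.SubElement(field, "prompt")
    prompt.text = prompt_text
    return ET.tostring(vxml, encoding="unicode")
```

Because the dialog logic lives in the academic-paradigm layer and only this serialisation step is standard-specific, the same abstract dialog description can target any VoiceXML-compliant commercial platform.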
- …