dWatch: a Personal Wrist Watch for Smart Environments
Intelligent environments, such as smart homes or domotic systems, have the potential to support people in many of their ordinary activities by allowing complex control strategies for managing various capabilities of a house or a building: lights, doors, temperature, power and energy, music, etc. Such environments typically provide these control strategies by means of computers, touch screen panels, mobile phones, tablets, or In-House Displays. An unobtrusive and typically wearable device, like a bracelet or a wrist watch, that lets users perform various operations in their homes and receive notifications from the environment could strengthen the interaction with such systems, in particular for people not accustomed to computer systems (e.g., the elderly) or in contexts where they are not in front of a screen. Moreover, such wearable devices reduce the technological gap introduced in the environment by home automation systems, thus permitting a higher level of acceptance in daily activities and improving the interaction between the environment and its inhabitants. In this paper, we introduce the dWatch, an off-the-shelf personal wearable notification and control device, integrated into an intelligent platform for domotic systems, designed to optimize the way people use the environment, and built as a wrist watch so that it is easily accessible, worn by people on a regular basis and unobtrusive.
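As a rough illustration of the notification-and-control exchange such a wearable implies (not the dWatch protocol itself; the message format, class names and fields below are invented), a minimal Python sketch:

```python
# Minimal sketch of a wrist-worn client exchanging JSON messages with a
# home-automation platform. Hypothetical throughout: HomeEvent, WatchClient
# and the message fields are not taken from the dWatch paper.
import json
from dataclasses import dataclass

@dataclass
class HomeEvent:
    source: str     # e.g. "door.main"
    kind: str       # e.g. "opened", "temperature"
    payload: dict

class WatchClient:
    def __init__(self, send):
        self.send = send  # transport callable that ships bytes to the platform

    def on_message(self, raw: bytes):
        """Render an environment notification on the watch face."""
        event = HomeEvent(**json.loads(raw))
        print(f"[watch] {event.source}: {event.kind} {event.payload}")

    def command(self, device: str, action: str):
        """Send a control command back to the environment."""
        self.send(json.dumps({"device": device, "action": action}).encode())

client = WatchClient(send=lambda b: None)  # stub transport for the example
client.on_message(b'{"source": "door.main", "kind": "opened", "payload": {}}')
client.command("livingroom.lamp", "off")
```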
Integrating Recognition and Decision Making to Close the Interaction Loop for Autonomous Systems
Intelligent systems are becoming increasingly ubiquitous in daily life. Mobile devices are providing machine-generated support to users, robots are coming out of their cages in manufacturing to interact with co-workers, and cars with various degrees of self-driving capability operate amongst pedestrians and drivers. However, these interactive intelligent systems' effectiveness depends on their understanding and recognition of human activities and goals, as well as on their responding to people in a timely manner. The average person does not follow instructions step-by-step or act in a formulaic manner, but instead varies the order of actions and timing when performing a given task. People explore their surroundings, make mistakes, and may interrupt an activity to handle more urgent matters. The decisions that an autonomous intelligent system makes should account for such noise and variance regardless of the form of interaction, which includes adapting action choices and possibly its own goals. While most people take these aspects of interaction for granted, they are complex and involve many specific tasks that have primarily been studied independently within artificial intelligence. This results in open-loop interactive experiences where the user must perform a fixed input command or the intelligent system performs a hard-coded output response: one of the components of the interaction cannot adapt with respect to the other for longer-term back-and-forth interactions. This dissertation explores how developments in plan recognition, activity recognition, intent recognition, and autonomous planning can work together to develop more adaptive interactive experiences between autonomous intelligent systems and the people around them. In particular, we consider a unifying perspective of recognition algorithms that provides sufficient information to dynamically produce short-term automated planning problems, and we present ways to run these algorithms faster for the real-time needs of interaction. This exploration leads to the introduction of the Planning and Recognition Together Close the Interaction Loop (PReTCIL) framework, which serves as a first step towards closing the interaction loop, and raises new questions that need to be considered.
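To make the recognition-feeds-planning idea concrete, here is a hedged Python sketch, not the PReTCIL implementation: goal recognition as a simple Bayesian filter over candidate goals, whose belief is handed to a short-horizon "planner". The goals, observation model and numbers are all invented for illustration.

```python
# Illustrative recognition-to-planning loop: reweight candidate goals by the
# observed action (Bayesian filtering step), then act for the likeliest goal.
OBSERVATION_MODEL = {          # P(observed action | goal), invented numbers
    "make_coffee": {"grab_mug": 0.6, "open_fridge": 0.1},
    "make_tea":    {"grab_mug": 0.5, "open_fridge": 0.05},
    "get_snack":   {"grab_mug": 0.05, "open_fridge": 0.7},
}
ASSIST = {"make_coffee": "start_kettle", "make_tea": "start_kettle",
          "get_snack": "turn_on_counter_light"}

def recognize(belief, observed_action):
    """One filtering step; the 0.01 floor tolerates noisy or mistaken acts."""
    post = {g: p * OBSERVATION_MODEL[g].get(observed_action, 0.01)
            for g, p in belief.items()}
    z = sum(post.values())
    return {g: p / z for g, p in post.items()}

def plan(belief):
    """Short-horizon 'planning': choose the action keyed to the likeliest goal."""
    goal = max(belief, key=belief.get)
    return ASSIST[goal]

belief = {g: 1 / 3 for g in OBSERVATION_MODEL}   # uniform prior over goals
for act in ["grab_mug", "open_fridge"]:          # a noisy stream of actions
    belief = recognize(belief, act)
    print(act, "->", plan(belief))
```

Closing the loop amounts to running this recognize/plan cycle continuously, so each side adapts to the other rather than replaying fixed commands and responses.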
Crossroads: Interactive Music Systems Transforming Performance, Production and Listening
We discuss several state-of-the-art systems that propose new paradigms and user workflows for music composition, production, performance, and listening. We focus on a selection of systems that exploit recent advances in semantic and affective computing, music information retrieval (MIR) and semantic web, as well as insights from fields such as mobile computing and information visualisation. These systems offer the potential to provide transformative experiences for users, which is manifested in creativity, engagement, efficiency, discovery and affect.
Towards connecting people, locations and real-world events in a cellular network
The success of personal mobile communication technologies has led to an expansion of the telecommunication infrastructure, but also to an explosion in mobile broadband data traffic, as more and more people rely on their mobile devices for work or entertainment. The continuous interaction of these devices with the mobile network infrastructure creates digital traces that network operators can easily log. Apart from billing and resource management, these digital traces can be used for large-scale population monitoring through mobile traffic analysis. They could be integrated into intelligent systems that detect exceptional events such as riots or protests, or support disaster prevention, at minimal cost, improving people's safety and security or even saving lives. In this paper we study the use of fully anonymized and highly aggregated cellular network data, such as Call Detail Records (CDRs), to analyze telecommunication traffic and connect people, locations and events. The results show that, by analyzing CDR data, exceptional spatio-temporal patterns of mobile activity can be correlated with real-world events. For example, high user network activity was mapped to religious festivals such as Ramadan, Le Grand Magal de Touba and the Tivaouane Maouloud festival. During Ramadan, the communication volume doubled during the night, with a slow start in the morning and throughout the day. Furthermore, a peak in the number and duration of voice calls in the area of Kafoutine was mapped to the Casamance conflict, which resulted in four deaths in that area. These observations could be used to develop an intelligent system that detects exceptional events in real time from CDR monitoring. Such a system could support intelligent transportation management, urban planning, emergency response, network resource allocation, performance optimization, etc.
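A toy version of the detection idea, assuming hourly call counts aggregated per cell: flag hours whose volume deviates strongly from the cell's typical level. Real CDR pipelines are far richer; the data, threshold and function names below are illustrative only.

```python
# Flag hours whose aggregated call volume is a statistical outlier for the
# cell, using a simple z-score test over one week of hourly counts.
import statistics

def exceptional_hours(hourly_calls, z_threshold=3.0):
    """Return (hour_index, count) pairs whose z-score exceeds the threshold."""
    mean = statistics.fmean(hourly_calls)
    stdev = statistics.pstdev(hourly_calls) or 1.0   # avoid divide-by-zero
    return [(h, c) for h, c in enumerate(hourly_calls)
            if abs(c - mean) / stdev > z_threshold]

# A week of ordinary day/night traffic with one festival-like spike
# on day 3 at 21:00.
baseline = [120 if 8 <= h % 24 <= 22 else 30 for h in range(24 * 7)]
baseline[24 * 3 + 21] = 900
print(exceptional_hours(baseline))    # -> [(93, 900)]
```

A production system would model each cell's daily and weekly seasonality separately (so Ramadan's night-time doubling registers as a pattern shift, not noise), but the thresholding principle is the same.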
Incorporating Android conversational agents in m-learning apps
Smart mobile devices have fostered new learning scenarios that demand sophisticated interfaces. Multimodal conversational agents have become a strong alternative for developing human-machine interfaces that provide a more engaging and human-like relationship between students and the system. The main developers of operating systems for such devices provide application programming interfaces that developers can use to implement their own applications, including different solutions for developing graphical interfaces, sensor control and voice interaction. Despite the usefulness of such resources, there are no defined strategies for coupling the multimodal interface with the possibilities these devices offer to enhance mobile educative apps with intelligent communicative capabilities and adaptation to user needs. In this paper, we present a practical m-learning application that integrates features of the Android application programming interfaces in a modular architecture that emphasizes interaction management and context-awareness to foster user adaptivity, robustness and maintainability. This work was supported in part by Projects MINECO TEC2012-37832-C02-01, CICYT TEC2011-28626-C02-02, CAM CONTEXTS (S2009/TIC-1485).
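The modular organisation the abstract argues for can be sketched as follows. This is an invented illustration in plain Python (a real app would route input through Android's speech and UI APIs): a context model tracks coarse learner state, and the interaction manager consults it to adapt its prompts.

```python
# Hypothetical sketch of interaction management with context-awareness for an
# m-learning agent; class names and adaptation rule are invented.
class ContextModel:
    """Tracks coarse user state that the dialog manager can adapt to."""
    def __init__(self):
        self.errors = 0

    def update(self, answered_correctly: bool):
        self.errors = 0 if answered_correctly else self.errors + 1

class InteractionManager:
    def __init__(self, context: ContextModel):
        self.context = context

    def next_prompt(self) -> str:
        # Context-aware adaptation: simplify after repeated mistakes.
        if self.context.errors >= 2:
            return "Let's try an easier exercise. What is 2 + 3?"
        return "Solve: what is 12 x 7?"

ctx = ContextModel()
agent = InteractionManager(ctx)
for correct in [False, False, True]:   # simulated learner answers
    ctx.update(correct)
    print(agent.next_prompt())
```

Keeping the context model and the interaction manager as separate modules is what makes the architecture maintainable: either can be swapped (e.g., richer learner models, different dialog policies) without touching the other.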