2,947 research outputs found

    Acquiring in situ training data for context-aware ubiquitous computing applications

    Ubiquitous, context-aware computer systems may ultimately enable computer applications that naturally and usefully respond to a user's everyday activity. Although new algorithms that can automatically detect context from wearable and environmental sensor systems show promise, many of the most flexible and robust systems use probabilistic detection algorithms that require extensive libraries of training data with labeled examples. In this paper, we describe the need for such training data and some challenges we have identified when trying to collect it while testing three context-detection systems for ubiquitous computing and mobile applications. Author Keywords: context-aware, ubiquitous computing, supervised learning, experience sampling, user interface design. ACM Classification Keywords: H5.m Information interfaces and presentation (e.g. HCI): Miscellaneous
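The in situ labeling problem described above is commonly tackled with experience sampling: prompting users at semi-random moments to label their current activity. A minimal sketch of such a prompt scheduler follows; the function name, parameter values, and minimum-gap rule are illustrative assumptions, not the paper's protocol.

```python
import random

def esm_schedule(start_hour=9, end_hour=21, prompts=6, min_gap_min=45, seed=None):
    """Draw one day's randomized experience-sampling prompt times,
    enforcing a minimum gap so labels are spread across the day."""
    rng = random.Random(seed)
    total = (end_hour - start_hour) * 60   # minutes in the sampling window
    day = []
    while len(day) < prompts:
        t = rng.randrange(total)
        # Reject draws that fall too close to an already scheduled prompt.
        if all(abs(t - u) >= min_gap_min for u in day):
            day.append(t)
    day.sort()
    return [f"{start_hour + t // 60:02d}:{t % 60:02d}" for t in day]
```

In practice each prompt would trigger a short on-device questionnaire whose answer becomes one labeled training example for the probabilistic detector.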

    Survey of context provisioning middleware

    In the scope of ubiquitous computing, one of the key issues is awareness of context, which spans diverse aspects of the user's situation: their activities, physical surroundings, location, emotions and social relations, as well as device and network characteristics and their interaction with each other. This contextual knowledge is typically acquired from physical, virtual or logical sensors. To overcome problems of heterogeneity and hide complexity, a significant number of middleware approaches have been proposed for systematic and coherent access to manifold context parameters. These frameworks deal particularly with context representation, context management and reasoning, i.e. deriving abstract knowledge from raw sensor data. This article surveys not only related work in these three categories but also the required evaluation principles. © 2009-2012 IEEE

    Interoperable services based on activity monitoring in ambient assisted living environments

    Ambient Assisted Living (AAL) is considered the main technological solution that will enable the aged and people in recovery to maintain their independence, and a consequently high quality of life, for a longer period of time than would otherwise be the case. This goal is achieved by monitoring human activities and deploying the appropriate collection of services to set environmental features and satisfy user preferences in a given context. However, both human monitoring and service deployment are particularly hard to accomplish due to the uncertainty and ambiguity characterising human actions, and the heterogeneity of hardware devices composing an AAL system. This research addresses both of the aforementioned challenges by introducing 1) an innovative system, based on the Self-Organising Feature Map (SOFM), for automatically classifying the resting location of a moving object in an indoor environment and 2) a strategy able to generate context-aware Fuzzy Markup Language (FML) services in order to maximise users' comfort and the hardware interoperability level. The overall system runs on a distributed embedded platform with a specialised ceiling-mounted video sensor for intelligent activity monitoring. The system has the ability to learn resting locations, to measure overall activity levels, to detect specific events such as potential falls, and to deploy the right sequence of fuzzy services, modelled through FML, for supporting people in that particular context. Experimental results show less than 20% classification error in monitoring human activities and providing the right set of services, showing the robustness of our approach over others in the literature, with minimal power consumption.
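The SOFM-based location classifier described above can be illustrated with a minimal sketch. This is not the authors' implementation: the grid size, learning-rate and neighbourhood schedules, and the use of raw 2-D positions as features are all assumptions. The idea is that a small self-organising map is trained on position samples, and a "resting location" is identified by the index of the best-matching unit.

```python
import numpy as np

def train_sofm(data, grid_w=4, grid_h=4, epochs=50, lr0=0.5, sigma0=2.0, seed=0):
    """Train a small Self-Organising Feature Map on 2-D position samples."""
    rng = np.random.default_rng(seed)
    weights = rng.random((grid_w * grid_h, data.shape[1]))
    # Grid coordinate of each neuron, used by the neighbourhood function.
    coords = np.array([(i % grid_w, i // grid_w)
                       for i in range(grid_w * grid_h)], dtype=float)
    for epoch in range(epochs):
        lr = lr0 * (1 - epoch / epochs)              # decaying learning rate
        sigma = sigma0 * (1 - epoch / epochs) + 0.5  # shrinking neighbourhood
        for x in rng.permutation(data):
            bmu = np.argmin(np.linalg.norm(weights - x, axis=1))  # best-matching unit
            d2 = np.sum((coords - coords[bmu]) ** 2, axis=1)
            h = np.exp(-d2 / (2 * sigma ** 2))       # Gaussian neighbourhood kernel
            weights += lr * h[:, None] * (x - weights)
    return weights

def classify(weights, x):
    """Map a position to the index of its best-matching unit (a 'resting location')."""
    return int(np.argmin(np.linalg.norm(weights - np.asarray(x, dtype=float), axis=1)))
```

After training on tracked positions, nearby resting spots fall onto the same or neighbouring map units, which is what makes the unit index usable as a discrete location class.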

    An Empirical Study Comparing Unobtrusive Physiological Sensors for Stress Detection in Computer Work

    Several unobtrusive sensors have been tested in studies to capture physiological reactions to stress in workplace settings. Lab studies tend to focus on assessing sensors during a specific computer task, while in situ studies tend to offer a generalized view of sensors' efficacy for workplace stress monitoring, without discriminating between tasks. Given the variation in workplace computer activities, this study investigates the efficacy of unobtrusive sensors for stress measurement across a variety of tasks. We present a comparison of five physiological measurements obtained in a lab experiment in which participants completed six different computer tasks while we measured their stress levels using a chest band (ECG, respiration), a wristband (PPG and EDA), and an emerging thermal imaging method (perinasal perspiration). We found that thermal imaging can detect increased stress for most participants across all tasks, while wrist and chest sensors were less generalizable across tasks and participants. We summarize the costs and benefits of each sensor stream, and show how some computer use scenarios present usability and reliability challenges for stress monitoring with certain physiological sensors. We provide recommendations for researchers and system builders for measuring stress with physiological sensors during workplace computer use.

    Situation inference and context recognition for intelligent mobile sensing applications

    The usage of smart devices is an integral part of our daily life. With the richness of data streaming from sensors embedded in these smart devices, the applications of ubiquitous computing are limitless for future intelligent systems. Situation inference is a non-trivial issue in the domain of ubiquitous computing research due to the challenges of mobile sensing in unrestricted environments. There are various advantages to having robust and intelligent situation inference from data streamed by mobile sensors. For instance, we would be able to gain a deeper understanding of human behaviours in certain situations via a mobile sensing paradigm. This understanding can then be used to recommend resources or actions for enhanced cognitive augmentation, such as improved productivity and better human decision making. Sensor data can be streamed continuously from heterogeneous sources with different frequencies in a pervasive sensing environment (e.g., a smart home). It is difficult and time-consuming to build a model capable of recognising multiple activities, which can be performed simultaneously and at different granularities. We investigate the separability of multiple activities in time-series data and develop OPTWIN, a technique to determine the optimal time window size to be used in a segmentation process. This novel technique reduces the need for sensitivity analysis, an inherently time-consuming task. To achieve an effective outcome, OPTWIN leverages multi-objective optimisation, minimising impurity (the number of windows overlapping multiple human activity labels in the time-series data) while maximising class separability. The next issue is to effectively model and recognise multiple activities based on the user's contexts. Hence, an intelligent system should address the problem of multi-activity and context recognition prior to the situation inference process in mobile sensing applications.
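The window-size selection idea behind OPTWIN can be sketched as follows. This is a simplified stand-in, not the thesis's formulation: the per-window features (mean, standard deviation), the impurity definition, the Fisher-style separability ratio, and the scalarisation weight are all assumptions made for illustration.

```python
import numpy as np

def choose_window(signal, labels, candidates):
    """Pick the window size that minimises label impurity while maximising
    class separability (a simple weighted scalarisation of the two objectives)."""
    best, best_score = None, -np.inf
    for w in candidates:
        feats, labs, mixed = [], [], 0
        for s in range(0, len(signal) - w + 1, w):
            seg, seg_lab = signal[s:s + w], labels[s:s + w]
            if len(set(seg_lab)) > 1:      # window straddles a label boundary
                mixed += 1
                continue
            feats.append([seg.mean(), seg.std()])
            labs.append(seg_lab[0])
        impurity = mixed / max(len(signal) // w, 1)
        feats, labs = np.array(feats), np.array(labs)
        classes = sorted(set(labs.tolist()))
        if len(classes) < 2:
            continue                        # no separability to measure
        centroids = np.array([feats[labs == c].mean(axis=0) for c in classes])
        within = np.mean([feats[labs == c].std(axis=0).mean() for c in classes])
        between = np.linalg.norm(centroids[0] - centroids[1])  # two classes assumed
        separability = between / (within + 1e-9)
        score = separability - 10 * impurity   # weight of 10 is an assumption
        if score > best_score:
            best, best_score = w, score
    return best
```

A window that is too long mixes labels (high impurity), while one that is too short yields noisy features (low separability); the scored search lands in between without a manual sensitivity sweep.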
The performance of simultaneous recognition of human activities and contexts can easily be affected by the choice of modelling approach used to build an intelligent model. We investigate the associations between these activities and contexts at multiple levels of the mobile sensing perspective to reveal the dependency property of the multi-context recognition problem. We design a Mobile Context Recognition System, which incorporates a Context-based Activity Recognition (CBAR) modelling approach to produce effective outcomes from both multi-stage and multi-target inference processes that recognise human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneous inference of transportation mode and human activity of mobile users. The accuracy of activity and context recognition is also progressively influenced by how reliable user annotations are; such annotations are usually acquired during data capture in the wild. We investigate how to effectively reduce user burden during mobile sensor data collection through experience sampling of these annotations in the wild. To this end, we design CoAct-nnotate --- a technique that aims to improve the sampling of human activities and contexts by providing accurate annotation prediction and facilitating interactive user feedback acquisition for ubiquitous sensing. CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to perform more accurate annotation prediction. It also includes a progressive learning process (i.e., model retraining based on co-training and active learning) to improve its predictive performance over time. Moving beyond context recognition of mobile users, human activities can be related to essential tasks that users perform in daily life.
However, the boundaries between types of tasks are inherently difficult to establish, as they can be defined differently from individuals' perspectives. Consequently, we investigate the implications of contextual signals for user tasks in mobile sensing applications. To define the boundaries of tasks and hence recognise them, we incorporate this situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework to learn users' Cyber-Physical-Social activities from their mobile sensing data. By accurately recognising the engaged tasks at a given time via mobile sensing, an intelligent system can then offer proactive support to its user to progress and complete those tasks. Finally, for robust and effective learning of mobile sensing data from heterogeneous sources (e.g., Internet-of-Things devices in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning their storage and design QDaS --- an application-agnostic framework for quality-driven data summarisation. QDaS summarises data effectively by performing density-based clustering on multivariate time-series data from a selected source (i.e., data provider), where source selection is determined by a measure of data quality. This framework allows intelligent systems to retain comparable predictive results by learning effectively on compact representations of mobile sensing data, while achieving a higher space-saving ratio. This thesis contains novel contributions in terms of the techniques that can be employed for mobile situation inference and context recognition, especially in the domain of ubiquitous computing and intelligent assistive technologies. This research implements and extends the capabilities of machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation and situation inference from mobile sensing.
We firmly believe that the contributions of this research will help future studies move forward in building more intelligent systems and applications.
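The summarisation step described for QDaS can be approximated with a short sketch. A greedy leader-style clustering stands in here for the density-based clustering the thesis uses, and the function name, `eps` threshold, and Euclidean distance are assumptions: each cluster of similar samples is stored once as a representative plus a count, which is where the space saving comes from.

```python
import numpy as np

def summarise(points, eps=1.0):
    """Compress a set of multivariate samples to one representative per
    eps-neighbourhood, keeping a count of how many raw samples it stands for."""
    reps, counts = [], []
    for p in points:
        for i, r in enumerate(reps):
            if np.linalg.norm(p - r) <= eps:   # dense region already represented
                counts[i] += 1
                break
        else:
            reps.append(p.copy())              # new representative ("leader")
            counts.append(1)
    ratio = 1 - len(reps) / len(points)        # space-saving ratio
    return np.array(reps), np.array(counts), ratio
```

A downstream model then trains on the weighted representatives instead of the raw stream, trading a small amount of fidelity for a large reduction in storage.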

    Workload-aware systems and interfaces for cognitive augmentation

    In today's society, our cognition is constantly influenced by information intake, attention switching, and task interruptions. This increases the difficulty of a given task, adding to the existing workload and leading to compromised cognitive performance. The human body expresses the use of cognitive resources through physiological responses when confronted with high cognitive workload. This temporarily mobilizes additional resources to deal with the workload, at the cost of accelerated mental exhaustion. We predict that recent developments in physiological sensing will increasingly enable user interfaces that are aware of the user's cognitive capacity and hence able to intervene when high or low states of cognitive workload are detected. In this thesis, we initially focus on determining opportune moments for cognitive assistance. Subsequently, we investigate, in a user-centric design process, which feedback modalities are desirable for cognitive assistance. We present design requirements for how cognitive augmentation can be achieved using interfaces that sense cognitive workload. We then investigate different physiological sensing modalities to enable suitable real-time assessments of cognitive workload. We provide empirical evidence that the human brain is sensitive to fluctuations in cognitive resting states, hence making cognitive effort measurable. Firstly, we show that electroencephalography is a reliable modality for assessing the mental workload generated during user interface operation. Secondly, we use eye tracking to evaluate changes in eye movements and pupil dilation to quantify different workload states. The combination of machine learning and physiological sensing resulted in suitable real-time assessments of cognitive workload. The use of physiological sensing enables us to derive when cognitive augmentation is suitable. Based on our inquiries, we present applications that regulate cognitive workload in home and work settings.
We deployed an assistive system in a field study to investigate the validity of our derived design requirements. Finding that workload was indeed mitigated, we investigated how cognitive workload can be visualized to the user. We present an implementation of a biofeedback visualization that helps to improve the understanding of brain activity. A final study shows how cognitive workload measurements can be used to predict the efficiency of information intake through reading interfaces. Here, we conclude with use cases and applications which benefit from cognitive augmentation. This thesis investigates how assistive systems can be designed to implicitly sense and utilize cognitive workload for input and output. To do so, we measure cognitive workload in real time by collecting behavioral and physiological data from users and analyze this data to support users through assistive systems that adapt their interfaces according to the currently measured workload. Our overall goal is to extend new and existing context-aware applications with cognitive workload as an additional factor. We envision Workload-Aware Systems and Workload-Aware Interfaces as an extension of the context-aware paradigm. To this end, we conducted eight research inquiries during this thesis to investigate how to design and create workload-aware systems. Finally, we present our vision of future workload-aware systems and workload-aware interfaces. Due to the scarce availability of open physiological data sets, reference implementations, and methods, previous context-aware systems were limited in their ability to utilize cognitive workload for user interaction. Together with the collected data sets, we expect this thesis to pave the way for methodical and technical tools that integrate workload-awareness as a factor in context-aware systems.
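The pupil-dilation route to workload assessment described above can be sketched minimally: z-score a task recording against a resting baseline, smooth it, and flag elevated segments. The function name, threshold, and smoothing window are assumptions for illustration, not the thesis's actual classifier (which combines machine learning with several physiological signals).

```python
import numpy as np

def workload_from_pupil(baseline, task, win=30, z_thresh=1.0):
    """Flag samples of elevated cognitive workload from pupil diameter:
    z-score the task recording against a resting baseline, then smooth."""
    mu, sd = baseline.mean(), baseline.std() + 1e-9
    z = (np.asarray(task, dtype=float) - mu) / sd
    # Moving average suppresses blinks and momentary measurement noise.
    kernel = np.ones(win) / win
    smooth = np.convolve(z, kernel, mode="same")
    return smooth > z_thresh                  # boolean mask of high-workload samples
```

In a workload-aware interface, the flagged spans would mark opportune moments to defer notifications or offer assistance rather than interrupt the user.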