
    A multi-perspective analysis of social context and personal factors in office settings for the design of an effective mobile notification system

    In this study, we investigate the effects of social context, personal factors, and mobile phone usage on the inference of work engagement/challenge levels of knowledge workers and their responsiveness to well-being related notifications. Our results show that mobile application usage is associated with the responsiveness and work engagement/challenge levels of knowledge workers. We also developed multi-level (within- and between-subjects) models for the inference of attentional states and engagement/challenge levels, using mobile application usage indicators as inputs, such as the number of applications used prior to notifications, the number of switches between applications, and application category usage. Our analysis shows that the following features are effective for inferring attentional states and engagement/challenge levels: the number of switches between mobile applications in the last 45 minutes and the duration of application usage in the last 5 minutes before users' responses to ESM messages.
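    A minimal sketch of the two highlighted indicators, assuming a simple (start, end, app) usage log; this is illustrative only and is not the authors' code, and the event format and function names are assumptions (the thesis itself uses multi-level models on these features):

```python
from datetime import timedelta

def usage_features(events, notified_at):
    """events: list of (start, end, app) tuples of datetimes, sorted by start time.
    Returns the two features named in the abstract."""
    switch_window = notified_at - timedelta(minutes=45)
    usage_window = notified_at - timedelta(minutes=5)
    recent = [e for e in events if e[1] > switch_window and e[0] < notified_at]
    # Number of switches between distinct applications in the last 45 minutes.
    switches = sum(1 for a, b in zip(recent, recent[1:]) if a[2] != b[2])
    # Seconds of application usage overlapping the last 5 minutes before the response.
    usage = sum(
        (min(e[1], notified_at) - max(e[0], usage_window)).total_seconds()
        for e in events
        if e[1] > usage_window and e[0] < notified_at
    )
    return [switches, usage]
```

    A per-user (within-subject) classifier could then be fitted on these features against ESM labels; the abstract's multi-level modelling is not reproduced here.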

    A framework for intelligent mobile notifications

    Mobile notifications give real-time information delivery systems a unique mechanism for reaching users and increasing their effectiveness. However, delivering notifications to users' mobile phones in real time does not always translate into awareness of the delivered information, because these alerts might arrive at inappropriate times and in inappropriate situations. Moreover, notifications that demand users' attention at inopportune moments are more likely to have adverse effects and become a cause of disruption rather than a benefit. To address these challenges, it is of paramount importance to devise intelligent notification mechanisms that monitor and learn users' behaviour in order to maximise their receptivity to the delivered information, and that adapt accordingly. The central goal of this dissertation is to build a framework for intelligent notifications that relies on awareness of users' context and preferences. More specifically, we first investigate the impact of physical and cognitive contextual features on users' attentiveness and receptivity to notifications. Second, we construct and evaluate a series of models for predicting opportune moments to deliver notifications and for mining users' notification delivery preferences in different situations. Finally, we design and evaluate a model for anticipating the right device for notification delivery in cross-platform environments.
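    One way the "predict opportune moments" step could be realised with off-the-shelf tools is sketched below; the feature set, threshold, and random-forest choice are assumptions for illustration, not the dissertation's models:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical context features, each encoded as an integer or float.
FEATURES = ["activity", "location_type", "screen_on", "hour_of_day"]

def train_receptivity_model(X, y):
    """X: rows of past context snapshots (ordered as FEATURES);
    y: 1 if the user attended to the notification promptly, 0 otherwise."""
    return RandomForestClassifier(n_estimators=100).fit(X, y)

def should_deliver(model, context, threshold=0.6):
    # Defer the notification when predicted receptivity is below the threshold.
    return model.predict_proba([context])[0][1] >= threshold
```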

    Sensing and Interactive Intelligence in Mobile Context Aware Systems

    The ever-increasing capabilities of mobile devices such as smartphones and their ubiquity in daily life have resulted in a large and interesting body of research into context awareness (the 'awareness of a situation') and how it could make people's lives easier. There are, however, difficulties involved in realising and implementing context aware systems in the real world, particularly in a mobile environment. To address these difficulties, this dissertation tackles the broad problem of designing and implementing mobile context aware systems in the field. Spanning the fields of Artificial Intelligence (AI) and Human Computer Interaction (HCI), the problem is broken down and scoped into two key areas: context sensing and interactive intelligence. Using a simple design model, the dissertation makes a series of contributions within each area in order to improve the knowledge of mobile context aware systems engineering. At the sensing level, we review mobile sensing capabilities and use a case study to show that the everyday calendar is a noisy 'sensor' of context. We also show that its 'signal', i.e. useful context, can be extracted using logical data fusion with context supplied by mobile devices. For interactive intelligence, there are two fundamental components: the intelligence, which is concerned with context inference and machine learning; and the interaction, which is concerned with user interaction. For the intelligence component, we use the case of semantic place awareness to address the problems of real time context inference and learning on mobile devices. We show that raw device motion (a common metric used in activity recognition research) is a poor indicator of transition between semantically meaningful places, but that real time transition detection performance can be improved with the application of basic machine learning and time series processing techniques. We also develop a context inference and learning algorithm that incorporates user feedback into the inference process, a form of active machine learning. We compare various implementations of the algorithm for the semantic place awareness use case, and observe its performance using a simulation study of user feedback. For the interaction component, we study various approaches for eliciting user feedback in the field. We deploy the mobile semantic place awareness system in the field and show how different elicitation approaches affect user feedback behaviour. Moreover, we report on the user experience of interacting with the intelligent system and show how performance in the field compares with the earlier simulation. We also analyse the resource usage of the system and report on the use of a simple SMS place awareness application that uses our system. The dissertation presents original research on key components for designing and implementing mobile context aware systems, and contributes new knowledge to the field of mobile context awareness.
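    The kind of logical fusion described above, treating the calendar as a noisy sensor that is only trusted when device context agrees with it, might look roughly like the following sketch; the field names, helper inputs, and decision rule are assumptions, not the thesis implementation:

```python
from datetime import datetime

def fused_context(calendar_entry, device_place, device_motion, now: datetime):
    """calendar_entry: dict with 'start', 'end', 'location', 'title';
    device_place: place label inferred from the phone (e.g., Wi-Fi/GPS);
    device_motion: 'still' or 'moving'."""
    in_slot = calendar_entry["start"] <= now <= calendar_entry["end"]
    same_place = calendar_entry["location"].lower() == device_place.lower()
    # Report the calendar activity as current context only when the time slot,
    # the sensed place, and a stationary device all agree; otherwise fall back
    # to the device-sensed context.
    if in_slot and same_place and device_motion == "still":
        return calendar_entry.get("title", "calendar event")
    return device_place
```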

    Automatic Difficulty Detection

    Previous work has suggested that the productivity of developers increases when they help each other, and that as distance increases, help is offered less. One way to make the amount of help independent of distance is to develop a system that automatically determines and communicates developers' difficulty. It is our thesis that automatic difficulty detection is possible and useful. To provide evidence to support this thesis, we developed six novel components:
    * programming-activity difficulty-detection
    * multimodal difficulty-detection
    * integrated workspace-difficulty awareness
    * difficulty-level detection
    * barrier detection
    * reusable difficulty-detection framework
    Programming-activity difficulty-detection mines developers' interactions. It is based on the insight that when developers are having difficulty, their edit ratio decreases while other ratios such as the debug and navigation ratios increase. This component has a low false positive rate but a high false negative rate. The high false negative rate is addressed by multimodal difficulty-detection. This component mines both programmers' interactions and Kinect camera data. It is based on the insight that when developers are having difficulty, both edit ratios and postures often change. Integrated workspace-difficulty awareness combines continuous knowledge of remote users' workspace with continuous knowledge of when developers are having difficulty. Two variations of this component are possible, based on whether potential helpers can replay developers' screen recordings. One limitation of this component is that potential helpers sometimes spend a large amount of time trying to determine whether they can offer help. Difficulty-level and barrier detection address this limitation. The former is based on the insight that when developers are having surmountable difficulties they tend to perform a cycle of editing and debugging their code, and when they are having insurmountable difficulties they tend to spend a large amount of time (a) between actions and (b) outside of the programming environment. Barrier detection infers two kinds of difficulties: incorrect output and design. This component is based on the insight that when developers have incorrect output, their debug ratios increase; and when they have difficulty designing algorithms, they spend a large amount of time outside of the programming environment. The reusable difficulty-detection framework uses standard design patterns to enable programming-activity difficulty-detection to be used in two programming environments, Eclipse and Visual Studio. These components have been validated using lab and/or field studies.
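    A rough sketch of the edit-ratio insight behind programming-activity difficulty-detection, assuming a stream of categorised IDE events; the thresholds and event categories are illustrative, not the dissertation's values:

```python
from collections import Counter

def activity_ratios(events):
    """events: list of strings in {'edit', 'debug', 'navigate', 'other'}."""
    counts = Counter(events)
    total = max(len(events), 1)
    return {kind: counts[kind] / total for kind in ("edit", "debug", "navigate")}

def having_difficulty(events, edit_floor=0.2, other_ceiling=0.4):
    # Difficulty signature from the abstract: the edit ratio drops while
    # debug and/or navigation ratios rise.
    r = activity_ratios(events)
    return r["edit"] < edit_floor and (r["debug"] > other_ceiling or r["navigate"] > other_ceiling)
```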

    Improving availability awareness with relationship filtering

    Awareness servers provide information about a person to help observers determine whether that person is available for contact. A trade-off exists in these systems: more sources of information, and higher fidelity in those sources, can improve people's decisions, but each increase in information reduces privacy. In this thesis, we look at whether the type of relationship between the observer and the person being observed can be used to manage this trade-off. We conducted a survey that asked people how much information from different sources they would disclose to seven different relationship types. We found that in more than half of the cases, people would give different amounts of information to different relationships. We then constructed a prototype system and conducted a Wizard of Oz experiment in which we took the system into the real world and observed individuals using it. Our results suggest that awareness servers can be improved by allowing finer-grained control than what is currently available.
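    A toy sketch of relationship filtering, assuming a per-relationship disclosure policy over information sources; the relationship types, sources, and levels here are invented for illustration, whereas the thesis derives them from survey data:

```python
# Disclosure policy: what each relationship type may see from each source.
DISCLOSURE = {
    "close_friend": {"location": "exact", "calendar": "full", "webcam": "snapshot"},
    "coworker":     {"location": "city",  "calendar": "busy/free", "webcam": "none"},
    "stranger":     {"location": "none",  "calendar": "none", "webcam": "none"},
}

def filtered_view(relationship, raw_info):
    """raw_info: dict mapping source name to its full-fidelity value."""
    policy = DISCLOSURE.get(relationship, DISCLOSURE["stranger"])
    # Hide any source the policy does not allow for this relationship;
    # a real server would also coarsen values (e.g., exact location -> city).
    return {src: val for src, val in raw_info.items() if policy.get(src, "none") != "none"}
```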

    Mobile Service Awareness via Auditory Notifications

    Placed within the realm of Human Computer Interaction, this thesis contributes towards the goals of Ubiquitous Computing, where mobile devices can provide anywhere, anytime support to people's everyday activities. With interconnected computing devices distributed in our habitat, services relevant to any situation may always be available to address our needs. However, despite the enhanced capabilities of mobile phones, users had been reluctant to adopt any services other than calls and messaging. This has been changing more recently, especially since the launch of the iPhone, with users gaining access to hundreds of services. The original question motivating the research presented in this thesis, "How can we improve mobile service usage?", is of interest to enthusiasts of mobile services as well as slow adopters. We propose the concept of 'mobile service awareness' and operationalise it through the more focused research question: "How can we design for non-intrusive yet informative auditory mobile service notifications?" We design and conduct a series of surveys, laboratory experiments and longitudinal field studies to address this question. Our results, also informed by the literature on context-aware computing, awareness, notification systems and auditory interface design, produce two distinct major contributions. First, we provide a set of conclusions on the relative efficiency of auditory icons and earcons as auditory notifications. Second, we produce a set of design guidelines for the two types of notifications, based on critical evaluation of the methodologies we develop and adapt from the literature. Although these contributions were made with mobile service notifications in mind, they are arguably useful for designers of any auditory interfaces that convey complex concepts (such as mobile services) and are used in attention-demanding contexts.

    Situation inference and context recognition for intelligent mobile sensing applications

    The use of smart devices is an integral part of our daily life. With the richness of data streaming from sensors embedded in these smart devices, the applications of ubiquitous computing are limitless for future intelligent systems. Situation inference is a non-trivial issue in the domain of ubiquitous computing research due to the challenges of mobile sensing in unrestricted environments. There are various advantages to having robust and intelligent situation inference from data streamed by mobile sensors. For instance, we would be able to gain a deeper understanding of human behaviours in certain situations via a mobile sensing paradigm. This understanding can then be used to recommend resources or actions for enhanced cognitive augmentation, such as improved productivity and better human decision making. Sensor data can be streamed continuously from heterogeneous sources with different frequencies in a pervasive sensing environment (e.g., a smart home). It is difficult and time-consuming to build a model that is capable of recognising multiple activities, which can be performed simultaneously and at different granularities. We investigate the separability aspect of multiple activities in time-series data and develop OPTWIN, a technique to determine the optimal time window size to be used in a segmentation process. This novel technique reduces the need for sensitivity analysis, an inherently time-consuming task. To achieve an effective outcome, OPTWIN leverages multi-objective optimisation by minimising impurity (the number of windows over the time series that overlap more than one human activity label in a label space) while maximising class separability. The next issue is to effectively model and recognise multiple activities based on the user's contexts. Hence, an intelligent system should address the problem of multi-activity and context recognition prior to the situation inference process in mobile sensing applications. The performance of simultaneous recognition of human activities and contexts can be easily affected by the choice of modelling approach used to build an intelligent model. We investigate the associations between these activities and contexts at multiple levels of mobile sensing perspectives to reveal the dependency property in the multi-context recognition problem. We design a Mobile Context Recognition System, which incorporates a Context-based Activity Recognition (CBAR) modelling approach to produce effective outcomes from both multi-stage and multi-target inference processes, recognising human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneous inference of transportation mode and human activity for mobile users. The accuracy of activity and context recognition is also progressively influenced by how reliable user annotations are. Essentially, reliable user annotation is required for activity and context recognition, and these annotations are usually acquired during data capture in the wild. We investigate how to effectively reduce user burden during mobile sensor data collection through experience sampling of these annotations in the wild. To this end, we design CoAct-nnotate, a technique that aims to improve the sampling of human activities and contexts by providing accurate annotation prediction and facilitating interactive user feedback acquisition for ubiquitous sensing. CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to perform more accurate annotation prediction. It also includes a progressive learning process (i.e., model retraining based on co-training and active learning) to improve its predictive performance over time. Moving beyond context recognition of mobile users, human activities can be related to essential tasks that users perform in daily life. However, the boundaries between types of tasks are inherently difficult to establish, as they can be defined differently from individuals' perspectives. Consequently, we investigate the implication of contextual signals for user tasks in mobile sensing applications. To define the boundaries of tasks and hence recognise them, we incorporate this situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework to learn users' Cyber-Physical-Social activities from their mobile sensing data. By recognising the engaged tasks accurately at a given time via mobile sensing, an intelligent system can then offer proactive support to its users to help them progress and complete their tasks. Finally, for robust and effective learning of mobile sensing data from heterogeneous sources (e.g., Internet-of-Things in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning their storage and design QDaS, an application-agnostic framework for quality-driven data summarisation. QDaS performs effective data summarisation via density-based clustering on multivariate time series data from a selected source (i.e., data provider), where source selection is determined by a measure of data quality. This framework allows intelligent systems to retain comparable predictive results by learning effectively from compact representations of mobile sensing data, while achieving a higher space-saving ratio. This thesis contains novel contributions in terms of the techniques that can be employed for mobile situation inference and context recognition, especially in the domain of ubiquitous computing and intelligent assistive technologies. This research implements and extends the capabilities of machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation and situation inference from mobile sensing. We firmly believe that the contributions of this research will help future studies move forward in building more intelligent systems and applications.
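    A very rough sketch of the window-selection idea attributed to OPTWIN above, scoring candidate window sizes by an impurity term against a crude separability term; the scoring formulas here are placeholders and assumptions, not the thesis's multi-objective formulation:

```python
import numpy as np

def impurity(labels, win):
    """Fraction of non-overlapping windows containing more than one activity label."""
    windows = [labels[i:i + win] for i in range(0, len(labels) - win + 1, win)]
    mixed = sum(1 for w in windows if len(set(w)) > 1)
    return mixed / max(len(windows), 1)

def separability(features, labels, win):
    """Crude between/within-class variance ratio of window-mean features
    (a stand-in for a proper class-separability measure)."""
    per_class = {}
    for i in range(0, len(labels) - win + 1, win):
        window_labels = labels[i:i + win]
        majority = max(set(window_labels), key=window_labels.count)
        per_class.setdefault(majority, []).append(float(np.mean(features[i:i + win])))
    between = np.var([np.mean(v) for v in per_class.values()])
    within = np.mean([np.var(v) for v in per_class.values() if len(v) > 1] or [1e-9])
    return float(between / (within + 1e-9))

def optwin(features, labels, candidates=(25, 50, 100, 200)):
    """features: 1-D numpy array; labels: list of activity labels per sample.
    Pick the window size that best trades off separability against impurity."""
    return max(candidates, key=lambda w: separability(features, labels, w) - impurity(labels, w))
```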