
    Towards a Practical Pedestrian Distraction Detection Framework using Wearables

    Pedestrian safety continues to be a significant concern in urban communities, and pedestrian distraction is emerging as one of the main causes of grave and fatal accidents involving pedestrians. The advent of sophisticated mobile and wearable devices, equipped with high-precision on-board sensors capable of measuring fine-grained user movements and context, provides a tremendous opportunity for designing effective pedestrian safety systems and applications. Accurate and efficient recognition of pedestrian distractions in real time, given the memory, computation and communication limitations of these devices, however, remains the key technical challenge in the design of such systems. Earlier research efforts in pedestrian distraction detection using data available from mobile and wearable devices have primarily focused on achieving high detection accuracy, resulting in designs that are either resource-intensive and unsuitable for implementation on mainstream mobile devices, computationally slow and not useful for real-time pedestrian safety applications, or dependent on specialized hardware and thus less likely to be adopted by most users. In the quest for a pedestrian safety system that achieves a favorable balance between computational efficiency, detection accuracy, and energy consumption, this paper makes the following main contributions: (i) the design of a novel complex activity recognition framework which employs motion data available from users' mobile and wearable devices and a lightweight frequency matching approach to accurately and efficiently recognize complex distraction-related activities, and (ii) a comprehensive comparative evaluation of the proposed framework against well-known complex activity recognition techniques from the literature, using data collected from human subject pedestrians and prototype implementations on commercially available mobile and wearable devices.
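
    As a toy illustration of the lightweight frequency matching idea above, the sketch below labels an accelerometer window by its dominant DFT frequency and the nearest stored template. The template frequencies, sampling rate and activity names are hypothetical, not taken from the paper.

```python
import math

def dominant_frequency(samples, rate_hz):
    """Return the non-DC frequency (Hz) with the largest DFT magnitude."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2 + 1):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = sum(-s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mag = math.hypot(re, im)
        if mag > best_mag:
            best_k, best_mag = k, mag
    return best_k * rate_hz / n

def match_activity(window, rate_hz, templates):
    """Label the window with the template whose dominant frequency is closest."""
    f = dominant_frequency(window, rate_hz)
    return min(templates, key=lambda label: abs(templates[label] - f))

# Hypothetical templates: dominant motion frequencies (Hz) per activity.
TEMPLATES = {"standing": 0.2, "walking": 1.8, "running": 3.0}

rate = 32  # Hz (assumed device sampling rate)
window = [math.sin(2 * math.pi * 2.0 * i / rate) for i in range(64)]  # 2 Hz gait-like signal
print(match_activity(window, rate, TEMPLATES))  # walking
```

    Matching on a single dominant frequency keeps the per-window cost to one small DFT, which is the kind of lightweight computation a wearable can sustain continuously.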

    MHARS: A mobile system for human activity recognition and inference of health situations in ambient assisted living

    This paper presents MHARS (Mobile Human Activity Recognition System), a mobile system designed to monitor patients in the context of Ambient Assisted Living (AAL), which allows the recognition of the activities performed by the user as well as the detection of their intensity in real time. MHARS was designed to gather data from different sensors, recognize activities, and measure their intensity in different user mobility scenarios. The system allows the inference of situations regarding the health status of the patient and provides support for executing actions, reacting to events that deserve attention from the patient's caregivers and family members. Experiments demonstrate that MHARS presents good accuracy and has an affordable consumption of mobile resources. Keywords: Ambient Assisted Living, Human Activity Recognition, situation inference, mobile computing

    Activity recognition in a Physical Interactive RoboGame

    In this paper, we investigate the possibility of human physical activity recognition in a robot game scenario. Being able to recognize types of activity is essential to enable robot behavior adaptation that supports player engagement. The introduction of this recognition system will also allow the development of better models for prediction, planning and problem solving in Physical Interactive RoboGames (PIRGs), which can foster human-robot interaction. The experiments reported in this paper were performed on data collected from real in-game activity, where a human player faces a mobile robot. We use a custom single tri-axial accelerometer module attached to the player's chest to capture motion information. The main characteristic of our approach is the extraction of features from patterns found in the motion variance rather than in the raw data. Furthermore, we allow for the recognition of unconstrained motion, given that we do not ask the players to perform target activities beforehand: all detectable activities are derived from the free player motion during the game itself. To the best of our knowledge, this is the first paper to consider activity recognition in a Physical Interactive RoboGame.
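
    The variance-first feature idea above can be sketched as follows: compute the variance of each sliding window and classify on that profile instead of the raw samples. The window size, thresholds and class names are illustrative assumptions, not the paper's actual pipeline.

```python
from statistics import pvariance

def variance_profile(signal, win, step):
    """Variance of each sliding window -- the pattern source, not raw samples."""
    return [pvariance(signal[i:i + win]) for i in range(0, len(signal) - win + 1, step)]

def label_windows(profile, low=0.05, high=0.5):
    """Hypothetical thresholds mapping variance level to coarse motion classes."""
    out = []
    for v in profile:
        if v < low:
            out.append("idle")
        elif v < high:
            out.append("moderate")
        else:
            out.append("vigorous")
    return out

still = [0.01 * (i % 2) for i in range(16)]   # near-constant accelerometer trace
shake = [(-1) ** i * 2.0 for i in range(16)]  # high-swing trace
prof = variance_profile(still + shake, win=8, step=8)
print(label_windows(prof))  # ['idle', 'idle', 'vigorous', 'vigorous']
```

    Working on the variance profile rather than raw samples makes the features invariant to the signal's absolute level, which suits unconstrained, free-form player motion.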

    Human activity recognition on smartphones using a multiclass hardware-friendly support vector machine

    Activity-Based Computing aims to capture the state of the user and their environment by exploiting heterogeneous sensors in order to provide adaptation to exogenous computing resources. When these sensors are attached to the subject's body, they permit continuous monitoring of numerous physiological signals. This has appealing uses in healthcare applications, e.g. the exploitation of Ambient Intelligence (AmI) in daily activity monitoring for elderly people. In this paper, we present a system for human physical Activity Recognition (AR) using smartphone inertial sensors. As mobile phones are limited in terms of energy and computing power, we propose a novel hardware-friendly approach to multiclass classification. This method adapts the standard Support Vector Machine (SVM) and exploits fixed-point arithmetic to reduce computational cost. A comparison with the traditional SVM shows a significant improvement in computational cost while maintaining similar accuracy, which can contribute to developing more sustainable systems for AmI.
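
    A minimal sketch of the fixed-point idea, assuming a plain linear decision function in Q10 format; the weights, bias and shift width are hypothetical, and the paper's actual multiclass SVM adaptation is not reproduced here.

```python
SHIFT = 10  # fixed-point scale: values stored as round(x * 2**SHIFT)

def to_fixed(x):
    return int(round(x * (1 << SHIFT)))

def svm_decide_fixed(weights_q, bias_q, features_q):
    """Linear SVM decision using only integer multiply-accumulate.
    The product of two Q10 numbers carries 20 fractional bits, so we
    shift right once to return to Q10 before adding the bias."""
    acc = sum(w * f for w, f in zip(weights_q, features_q))
    score_q = (acc >> SHIFT) + bias_q
    return 1 if score_q >= 0 else -1

# Hypothetical trained parameters, quantized once offline.
w = [0.8, -0.3]
b = -0.1
wq, bq = [to_fixed(v) for v in w], to_fixed(b)

sample = [to_fixed(v) for v in [0.9, 0.2]]
print(svm_decide_fixed(wq, bq, sample))  # 1
```

    Replacing floating-point multiply-adds with integer ones is what makes the classifier cheap on phone-class hardware; the quantization error only matters near the decision boundary.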

    Situation inference and context recognition for intelligent mobile sensing applications

    The usage of smart devices is an integral element of our daily life. With the richness of data streaming from sensors embedded in these smart devices, the applications of ubiquitous computing are limitless for future intelligent systems. Situation inference is a non-trivial issue in the domain of ubiquitous computing research due to the challenges of mobile sensing in unrestricted environments. There are various advantages to having robust and intelligent situation inference from data streamed by mobile sensors. For instance, we would be able to gain a deeper understanding of human behaviours in certain situations via a mobile sensing paradigm. This understanding can then be used to recommend resources or actions for enhanced cognitive augmentation, such as improved productivity and better human decision making. Sensor data can be streamed continuously from heterogeneous sources with different frequencies in a pervasive sensing environment (e.g., a smart home). It is difficult and time-consuming to build a model capable of recognising multiple activities, which can be performed simultaneously and with different granularities. We investigate the separability of multiple activities in time-series data and develop OPTWIN, a technique to determine the optimal time window size to be used in a segmentation process. This novel technique reduces the need for sensitivity analysis, an inherently time-consuming task. To achieve an effective outcome, OPTWIN leverages multi-objective optimisation by minimising the impurity (the number of overlapped windows of human activity labels on one label space over the time series) while maximising class separability. The next issue is to effectively model and recognise multiple activities based on the user's contexts. Hence, an intelligent system should address the problem of multi-activity and context recognition prior to the situation inference process in mobile sensing applications.
The performance of simultaneous recognition of human activities and contexts can easily be affected by the choice of modelling approach used to build an intelligent model. We investigate the associations between these activities and contexts at multiple levels of the mobile sensing perspective to reveal the dependency property in the multi-context recognition problem. We design a Mobile Context Recognition System, which incorporates a Context-based Activity Recognition (CBAR) modelling approach to produce effective outcomes from both multi-stage and multi-target inference processes that recognise human activities and their contexts simultaneously. In our empirical evaluation on real-world datasets, the CBAR modelling approach significantly improved the overall accuracy of simultaneous inference of the transportation mode and human activity of mobile users. The accuracy of activity and context recognition can also be influenced progressively by how reliable user annotations are. Essentially, reliable user annotation is required for activity and context recognition, and these annotations are usually acquired during data capture in the wild. We address the need to reduce user burden effectively during mobile sensor data collection, through experience sampling of these annotations in-the-wild. To this end, we design CoAct-nnotate --- a technique that aims to improve the sampling of human activities and contexts by providing accurate annotation prediction and facilitating interactive user feedback acquisition for ubiquitous sensing. CoAct-nnotate incorporates a novel multi-view multi-instance learning mechanism to perform more accurate annotation prediction. It also includes a progressive learning process (i.e., model retraining based on co-training and active learning) to improve its predictive performance over time. Moving beyond context recognition of mobile users, human activities can be related to essential tasks that the users perform in daily life.
However, the boundaries between types of tasks are inherently difficult to establish, as they can be defined differently from individuals' perspectives. Consequently, we investigate the implication of contextual signals for user tasks in mobile sensing applications. To define the boundaries of tasks and hence recognise them, we incorporate such a situation inference process (i.e., task recognition) into the proposed Intelligent Task Recognition (ITR) framework to learn users' Cyber-Physical-Social activities from their mobile sensing data. By recognising the engaged tasks accurately at a given time via mobile sensing, an intelligent system can then offer proactive support to its user to progress and complete their tasks. Finally, for robust and effective learning of mobile sensing data from heterogeneous sources (e.g., Internet-of-Things in a mobile crowdsensing scenario), we investigate the utility of sensor data in provisioning their storage and design QDaS --- an application-agnostic framework for quality-driven data summarisation. QDaS performs effective data summarisation through density-based clustering of multivariate time series data from a selected source (i.e., data provider), where the source selection process is determined by a measure of data quality. This framework allows intelligent systems to retain comparable predictive results through effective learning on compact representations of mobile sensing data, while achieving a higher space saving ratio. This thesis contains novel contributions in terms of the techniques that can be employed for mobile situation inference and context recognition, especially in the domain of ubiquitous computing and intelligent assistive technologies. This research implements and extends the capabilities of machine learning techniques to solve real-world problems in multi-context recognition, mobile data summarisation and situation inference from mobile sensing.
We firmly believe that the contributions of this research will help future studies move forward in building more intelligent systems and applications.
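
    The impurity term that OPTWIN minimises can be sketched on a toy labelled stream: count the fraction of windows that mix more than one activity label. The candidate window sizes and the tie-breaking rule standing in for the separability objective are illustrative assumptions, not the thesis's actual multi-objective optimiser.

```python
def impurity(labels, win):
    """Fraction of size-`win` windows that mix more than one activity label."""
    windows = [labels[i:i + win] for i in range(0, len(labels) - win + 1, win)]
    mixed = sum(1 for w in windows if len(set(w)) > 1)
    return mixed / len(windows)

def optimal_window(labels, candidates):
    """Toy stand-in for OPTWIN's search: among candidate sizes, pick the one
    minimising impurity; ties go to the larger window, a crude proxy for the
    separability objective (longer windows carry more of each activity's
    signature)."""
    return min(candidates, key=lambda w: (impurity(labels, w), -w))

labels = ["walk"] * 12 + ["sit"] * 12 + ["walk"] * 12
print(optimal_window(labels, candidates=[4, 6, 9]))  # 6
```

    Here both 4 and 6 produce zero mixed windows (the activity boundaries fall on window edges), while 9 straddles two boundaries; the tie-break then prefers the larger clean window.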

    Culture as a Sensor? A Novel Perspective on Human Activity Recognition

    Human Activity Recognition (HAR) systems are devoted to identifying, amidst the sensory stream provided by one or more sensors located so that they can monitor the actions of a person, portions related to the execution of a number of a priori defined activities of interest. Improving the performance of systems for Human Activity Recognition is a long-standing research goal: solutions include more accurate sensors, more sophisticated algorithms for the extraction and analysis of relevant information from the sensory data, and the enhancement of the sensory analysis with general or person-specific knowledge about the execution of the activities of interest. Following the latter trend, in this article we propose the enhancement of the sensory data analysis with cultural information, which can be seen as an estimate of person-specific information, relieved of the burden of a long or complex setup phase. We propose a culture-aware Human Activity Recognition system which combines the recognition response provided by a state-of-the-art, culture-unaware HAR system with culture-specific information about where and when activities are most likely performed in different cultures, encoded in an ontology. The merging of the cultural information with the culture-unaware responses is done by a Bayesian Network, whose probabilistic approach avoids stereotypical representations. Experiments performed offline and online, using images acquired by a mobile robot in an apartment, show that the culture-aware HAR system consistently outperforms the culture-unaware one.
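
    The Bayesian merging step can be sketched as a posterior proportional to the culture-unaware recognition score times a culture-specific prior. The activities and probabilities below are invented for illustration; the article's full Bayesian Network is not reproduced.

```python
def fuse(har_scores, cultural_prior):
    """Posterior over activities: culture-unaware score times culture-specific
    prior, renormalised -- a one-node analogue of the article's Bayesian merge."""
    post = {a: har_scores[a] * cultural_prior.get(a, 0.0) for a in har_scores}
    z = sum(post.values())
    return {a: p / z for a, p in post.items()}

# Hypothetical numbers: the vision-based HAR system is unsure between two
# activities, but the ontology says one is far more likely at this place
# and time in this cultural context.
har = {"eating": 0.45, "reading": 0.55}
prior = {"eating": 0.8, "reading": 0.2}
posterior = fuse(har, prior)
print(max(posterior, key=posterior.get))  # eating
```

    Because the prior is probabilistic rather than a hard rule, a strong sensory response can still override it, which is how the probabilistic formulation avoids turning cultural information into a stereotype.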

    Inferring Transportation Mode and Human Activity from Mobile Sensing in Daily Life

    In this paper, we focus on the simultaneous inference of transportation modes and human activities in daily life via modelling and inference from multivariate time series data streamed from off-the-shelf mobile sensors (e.g. embedded in smartphones) in real-world dynamic environments. The transportation mode is inferred from the structured hierarchical contexts associated with human activities. Through our mobile context recognition system, an accurate and robust solution can be obtained to infer the transportation mode, human activity and their associated contexts (e.g. whether the user is in a moving or stationary environment) simultaneously. There are many challenges in analysing and modelling human mobility patterns within urban areas due to the ever-changing environments of mobile users. For instance, a user could stay at a particular location and then travel to various destinations depending on the tasks they carry out within a day. Consequently, there is a need to reduce the reliance on location-based sensors (e.g. GPS), since they consume a significant amount of energy on smart devices, for the purpose of intelligent mobile sensing (i.e. automatic inference of transportation mode, human activity and associated contexts). Our system outperforms the simplistic approach that only considers independent classifications of multiple context label sets on data streamed from low-energy sensors.
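
    A minimal sketch of the structured, hierarchical idea: infer the coarse context first, then condition the activity decision on it, instead of classifying both labels independently. The features, thresholds and rule stubs below are hypothetical stand-ins for the paper's trained models.

```python
def infer(features, env_rules, act_rules):
    """Two-stage inference: decide the coarse context (moving vs stationary)
    first, then run the activity rule conditioned on that context."""
    env = "moving" if features["speed_var"] > env_rules["speed_var_threshold"] else "stationary"
    return env, act_rules[env](features)

# Hypothetical thresholds and decision stubs standing in for trained models.
ENV_RULES = {"speed_var_threshold": 0.5}
ACT_RULES = {
    "moving": lambda f: "vehicle" if f["accel_energy"] < 1.0 else "walking",
    "stationary": lambda f: "sitting" if f["accel_energy"] < 0.2 else "standing",
}

print(infer({"speed_var": 2.3, "accel_energy": 0.4}, ENV_RULES, ACT_RULES))  # ('moving', 'vehicle')
```

    Conditioning the activity classifier on the inferred context is what captures the dependency between the two label sets; an independent classifier would have to disambiguate "low acceleration energy" without knowing whether the user is moving.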

    A Novel Energy-Efficient Approach for Human Activity Recognition

    In this paper, we propose a novel energy-efficient approach for a mobile activity recognition system (ARS) to detect human activities. The proposed energy-efficient ARS, using low sampling rates, can achieve high recognition accuracy and low energy consumption. A novel classifier that integrates a hierarchical support vector machine and context-based classification (HSVMCC) is presented to achieve high activity recognition accuracy when the sampling rate is lower than the activity frequency, i.e., when the Nyquist sampling theorem is not satisfied. We tested the proposed energy-efficient approach with data collected from 20 volunteers (14 males and 6 females), and an average recognition accuracy of around 96.0% was achieved. Results show that using a low sampling rate of 1 Hz can save 17.3% and 59.6% of energy compared with sampling rates of 5 Hz and 50 Hz, respectively. The proposed low sampling rate approach can greatly reduce power consumption while maintaining high activity recognition accuracy. The composition of power consumption in an online ARS is also investigated in this paper.
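
    Why a 1 Hz rate can still work can be sketched with a toy downsampling experiment: below the Nyquist rate the waveform shape is aliased away, but a coarse amplitude-spread feature may still separate static from dynamic activities. The rates, signals and threshold are illustrative assumptions, not the paper's HSVMCC classifier.

```python
import math
from statistics import pstdev

def downsample(signal, factor):
    """Keep every `factor`-th sample, emulating a lower sensor sampling rate."""
    return signal[::factor]

def coarse_label(window, threshold=0.3):
    """Below the Nyquist rate the waveform shape is lost, but the amplitude
    spread (standard deviation) still separates static from dynamic motion.
    The threshold is a hypothetical calibration value."""
    return "dynamic" if pstdev(window) > threshold else "static"

rate = 50
walk_50hz = [math.sin(2 * math.pi * 1.3 * i / rate) for i in range(200)]  # 4 s of "walking"
still_50hz = [0.02] * 200                                                 # 4 s of "standing"

walk_1hz = downsample(walk_50hz, 50)    # 4 samples instead of 200
still_1hz = downsample(still_50hz, 50)
print(coarse_label(walk_1hz), coarse_label(still_1hz))  # dynamic static
```

    The 50x reduction in samples is where the energy saving comes from; the paper's contribution is keeping accuracy high despite it, via the hierarchical classifier and context.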

    Recognition of activities of daily living

    Activities of daily living (ADL) are the things we normally do in daily living, including any daily activity such as feeding ourselves, bathing, dressing, grooming, work, homemaking, and leisure. The ability or inability to perform ADLs can be used as a very practical measure of human capability in many types of disorder and disability. Oftentimes in a health care facility, professional staff manually collect ADL data, with the help of observations by nurses and self-reporting by residents, and enter them into the system. Technologies in smart homes can provide solutions for detecting and monitoring a resident's ADL. Typically, multiple sensors are deployed, such as surveillance cameras in the smart home environment and contact sensors affixed to the resident's body. These traditional technologies incur costly and laborious sensor deployment, and body-worn contact sensors are uncomfortable and inconvenient for the resident. This work presents a novel system facilitated via mobile devices to collect and analyze mobile data pertaining to the human users' ADL. By employing only one smartphone, this system, named the ADL recognition system, significantly reduces set-up costs and saves manpower. It encapsulates rather sophisticated technologies under the hood, such as an agent-based information management platform integrating both the mobile end and the cloud, observer patterns, and a time-series based motion analysis mechanism over sensory data. As a single-point deployment system, the ADL recognition system provides the further benefit of enabling the replay of users' daily ADL routines, in addition to the timely assessment of their life habits.
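
    The replay capability implies some compact episode representation of the sensed day. A sketch of how a per-minute activity stream might be collapsed into replayable episodes is shown below; the activity names and one-minute granularity are invented for illustration.

```python
def episodes(samples):
    """Collapse a per-minute activity stream into (activity, start, end)
    episodes -- the kind of representation that lets a system replay a
    user's daily ADL routine."""
    out = []
    for minute, act in enumerate(samples):
        if out and out[-1][0] == act:
            out[-1] = (act, out[-1][1], minute)  # extend the current episode
        else:
            out.append((act, minute, minute))    # start a new episode
    return out

day = ["sleep"] * 3 + ["groom"] * 2 + ["eat"] * 2 + ["sleep"]
print(episodes(day))  # [('sleep', 0, 2), ('groom', 3, 4), ('eat', 5, 6), ('sleep', 7, 7)]
```

    Run-length episodes like these are also what a habit-assessment layer would compare day over day, e.g. to flag a grooming routine that is shortening over time.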