
    Group activity recognition and analysis: concept, model and service architecture

    This thesis presents a new method for representing and recognizing human group activities using sensors, showing how real-world group activity scenarios can be automatically detected and understood by combining sensor data from smartphones, smartwatches, and Internet-of-Things devices. This research also introduces a generic service-oriented architecture for cloud-based group activity recognition.

    Multi-view stacking for activity recognition with sound and accelerometer data

    Many Ambient Intelligence (AmI) systems rely on automatic human activity recognition for getting crucial context information, so that they can provide personalized services based on the current users’ state. Activity recognition provides core functionality to many types of systems, including Ambient Assisted Living, fitness trackers, behavior monitoring, security, and so on. The advent of wearable devices, along with their diverse set of embedded sensors, opens new opportunities for ubiquitous context sensing. Recently, wearable devices such as smartphones and smartwatches have been used for activity recognition and monitoring. Most previous works use inertial sensors (accelerometers, gyroscopes) for activity recognition and combine them using an aggregation approach, i.e., extract features from each sensor and aggregate them to build the final classification model. This is not optimal, since each sensor data source has its own statistical properties. In this work, we propose the use of a multi-view stacking method to fuse the data from heterogeneous types of sensors for activity recognition. Specifically, we used sound and accelerometer data collected with a smartphone and a wrist-band while performing home task activities. The proposed method is based on multi-view learning and stacked generalization, and consists of training a model for each of the sensor views and combining them with stacking. Our experimental results showed that the multi-view stacking method outperformed the aggregation approach in terms of accuracy, recall, and specificity.
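    The per-view training plus stacked generalization described in this abstract can be sketched as follows. The data, view dimensionalities, and classifier choices below are illustrative placeholders (synthetic Gaussian features standing in for accelerometer and sound views), not the paper's actual dataset or models:

```python
# Sketch of multi-view stacking: one first-level classifier per sensor view,
# combined by a meta-learner over out-of-fold class probabilities.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n = 300
y = rng.integers(0, 3, n)  # three hypothetical home-task activities

# Two heterogeneous "views": accelerometer-like and sound-like features.
X_accel = rng.normal(size=(n, 6)) + y[:, None] * 0.5
X_sound = rng.normal(size=(n, 12)) + y[:, None] * 0.3

# First level: train a model per view; out-of-fold predictions avoid leakage.
meta_features = []
for X in (X_accel, X_sound):
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    probs = cross_val_predict(clf, X, y, cv=5, method="predict_proba")
    meta_features.append(probs)

# Second level: a meta-learner stacks the per-view class probabilities.
Z = np.hstack(meta_features)  # shape (n, views * classes)
meta = LogisticRegression(max_iter=1000).fit(Z, y)
acc = meta.score(Z, y)
print(f"stacked accuracy on meta-features: {acc:.2f}")
```

    The key design point, mirrored from the abstract, is that each view keeps its own statistical properties at the first level; only the resulting class probabilities are fused, rather than concatenating raw features.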

    Advanced Internet of Things for Personalised Healthcare System: A Survey

    As a new revolution of the Internet, the Internet of Things (IoT) is rapidly gaining ground as a research topic in many academic and industrial disciplines, especially in healthcare. Remarkably, due to the rapid proliferation of wearable devices and smartphones, IoT-enabled technology is evolving healthcare from a conventional hub-based system to a more personalised healthcare system (PHS). However, empowering the utility of advanced IoT technology in PHS remains significantly challenging, considering many issues such as the shortage of cost-effective and accurate smart medical sensors, unstandardized IoT system architectures, the heterogeneity of connected wearable devices, the multi-dimensionality of the data generated, and the high demand for interoperability. In an effort to understand advances in IoT technologies for PHS, this paper gives a systematic review of advanced IoT-enabled PHS. It reviews the current research on IoT-enabled PHS, key enabling technologies, major IoT-enabled applications, and successful case studies in healthcare, and finally points out future research trends and challenges.

    Developing a Home Service Robot Platform for Smart Homes

    The purpose of this work is to develop a testbed for a smart home environment integrated with a home service robot (ASH Testbed), as well as to build home service robot platforms. The architecture of the ASH Testbed was proposed and implemented based on ROS (Robot Operating System). In addition, two robot platforms, ASCCHomeBots, were developed using an iRobot Create base and a Pioneer base. They are equipped with capabilities such as mapping and autonomous navigation, as well as natural human interfaces including hand-gesture recognition using an RGB-D camera, online speech recognition through cloud computing services provided by Google, and local speech recognition based on PocketSphinx. Furthermore, the Pioneer-based ASCCHomeBot was developed along with an open audition system, allowing the robot to serve the elderly living alone at home. We successfully implemented the software for this system, which realizes robot services and audition services for high-level applications such as telepresence video conferencing, sound source position estimation, multiple-source speech recognition, and human-assisted sound classification. Our experimental results validated the proposed framework and the effectiveness of the developed robots as well as the proposed testbed.

    An Analysis of Audio Features to Develop a Human Activity Recognition Model Using Genetic Algorithms, Random Forests, and Neural Networks

    This work presents a human activity recognition (HAR) model based on audio features. The use of sound as an information source for HAR models represents a challenge because sound wave analyses generate very large amounts of data. However, feature selection techniques may reduce the amount of data required to represent an audio signal sample. Some of the audio features that were analyzed include Mel-frequency cepstral coefficients (MFCC). Although MFCC are commonly used in voice and instrument recognition, their utility within HAR models is yet to be confirmed, and this work validates their usefulness. Additionally, statistical features were extracted from the audio samples to generate the proposed HAR model. The amount of information needed to build a HAR model directly impacts the model's accuracy; this problem was also tackled in the present work. Our results indicate that the proposed HAR model recognizes human activities with an accuracy of 85%. This means that minimal computational cost is needed, thus allowing portable devices to identify human activities using audio as an information source.
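    The statistical-feature branch of such a pipeline can be sketched with a compact, dependency-light example. Note this is an illustrative stand-in: the signals are synthetic tones, the features are simple per-frame statistics rather than the paper's MFCC, and the frame length and class setup are arbitrary choices:

```python
# Sketch: per-frame statistical audio features + random forest classifier.
# Synthetic signals and features; the paper's model additionally uses MFCC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def frame_features(signal, frame_len=256):
    """Aggregate per-frame mean, std, energy, and zero-crossing rate."""
    frames = signal[: len(signal) // frame_len * frame_len].reshape(-1, frame_len)
    zcr = np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)
    feats = np.column_stack([frames.mean(1), frames.std(1),
                             (frames ** 2).mean(1), zcr])
    return feats.mean(axis=0)  # one compact vector per audio sample

rng = np.random.default_rng(1)
X, y = [], []
for label, freq in enumerate([5, 20, 60]):  # three synthetic "activity" sounds
    for _ in range(40):
        t = np.arange(4096)
        sig = np.sin(2 * np.pi * freq * t / 4096) + rng.normal(0, 0.3, 4096)
        X.append(frame_features(sig))
        y.append(label)

X, y = np.array(X), np.array(y)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25,
                                      random_state=0, stratify=y)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print(f"test accuracy: {clf.score(Xte, yte):.2f}")
```

    The point illustrated is the abstract's data-reduction argument: each 4096-sample waveform collapses to a four-value feature vector before classification, which is what keeps the computational cost low enough for portable devices.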

    Human centric situational awareness

    Context awareness is an approach that has received increasing focus in recent years. A context-aware device can understand surrounding conditions and adapt its behavior accordingly to meet user demands. Mobile handheld devices offer a motivating platform for context-aware applications as a result of their rapidly growing set of features and sensing abilities. This research aims at building a situational awareness model that utilizes multimodal sensor data provided through the various sensing capabilities available on a wide range of current handheld smartphones. The model makes use of seven different virtual and physical sensors commonly available on mobile devices to gather a large set of parameters that identify the occurrence of a situation for one of five predefined context scenarios: In Meeting, Driving, In Party, In Theatre, and Sleeping. As a means of gathering the wisdom of the crowd, and in an effort to reach a habitat-sensitive awareness model, a survey was conducted to understand the user perception of each context situation. The data collected was used to build the inference engine of a prototype context awareness system, utilizing the context weights introduced in [39] and the confidence metric in [26], with some variation, as a means for reasoning. The developed prototype's results were benchmarked against two existing context awareness platforms, Darwin Phones [17] and Smart Profile [11], where the prototype achieved 5% and 7.6% higher accuracy levels than the two systems respectively, while performing tasks of higher complexity. The detailed results and evaluation are highlighted further in section 6.4.
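    The weighted-evidence reasoning this abstract describes can be sketched as a simple scoring engine. The sensors, the weight values, and the scenario subset below are hypothetical illustrations, not the context weights of [39] or the confidence metric of [26]:

```python
# Sketch of context inference by weighted sensor evidence: each predefined
# context scores sensor readings (normalized to [0, 1]) by per-context weights.
from typing import Dict

# Hypothetical per-context sensor weights (each context's weights sum to 1).
CONTEXT_WEIGHTS: Dict[str, Dict[str, float]] = {
    "In Meeting": {"calendar": 0.4, "microphone": 0.3, "accelerometer": 0.3},
    "Driving":    {"gps_speed": 0.6, "accelerometer": 0.2, "microphone": 0.2},
    "Sleeping":   {"clock": 0.4, "light": 0.3, "accelerometer": 0.3},
}

def infer_context(evidence: Dict[str, float]) -> str:
    """Return the context with the highest weighted evidence score."""
    scores = {
        ctx: sum(w * evidence.get(sensor, 0.0) for sensor, w in weights.items())
        for ctx, weights in CONTEXT_WEIGHTS.items()
    }
    return max(scores, key=scores.get)

# Strong speed evidence with moderate motion and sound favors "Driving".
reading = {"gps_speed": 0.9, "accelerometer": 0.4, "microphone": 0.5, "clock": 0.1}
print(infer_context(reading))
```

    A survey-driven engine like the one described would replace the hand-set weights above with values derived from the crowd responses, but the inference step (a weighted sum per candidate context, then an argmax) keeps the same shape.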