
    A Lifelogging Platform Towards Detecting Negative Emotions in Everyday Life using Wearable Devices

    Repeated experiences of negative emotions, such as stress, anger or anxiety, can have long-term consequences for health. These episodes of negative emotion can be associated with inflammatory changes in the body, which are clinically relevant to the development of disease in the long term. However, the development of effective coping strategies can mediate this causal chain. The proliferation of ubiquitous and unobtrusive sensor technology supports an increased awareness of the physiological states associated with negative emotion and supports the development of effective coping strategies. Smartphones and wearable devices utilise multiple on-board sensors capable of capturing daily behaviours in a permanent and comprehensive manner, which can be used as the basis for self-reflection and insight. However, there are a number of inherent challenges in this application, including unobtrusive monitoring, data processing and analysis. This paper proposes a mobile lifelogging platform that utilises wearable technology to monitor and classify levels of stress. A pilot study was undertaken with six participants, who completed up to ten days of data collection. During this time, they wore a wearable device on the wrist during waking hours to collect heart rate (HR) and Galvanic Skin Resistance (GSR) data. Preliminary data analysis was undertaken using three supervised machine learning algorithms: Linear Discriminant Analysis (LDA), Quadratic Discriminant Analysis (QDA) and Decision Tree (DT). An accuracy of 70% was achieved using the Decision Tree algorithm.
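    As a rough illustration of the comparison described above, the sketch below trains the same three classifiers on synthetic HR/GSR features using scikit-learn; the feature values, labels and thresholds are invented stand-ins for the pilot data, not the study's actual pipeline.

```python
# Illustrative sketch: comparing LDA, QDA and Decision Tree classifiers
# on wrist-worn HR/GSR features, as in the study above. The synthetic
# data below stands in for the (unpublished) pilot dataset.
import numpy as np
from sklearn.discriminant_analysis import (
    LinearDiscriminantAnalysis, QuadraticDiscriminantAnalysis)
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 600
# Hypothetical features: mean heart rate (bpm) and GSR level per window.
X = np.column_stack([rng.normal(75, 10, n), rng.normal(2.0, 0.5, n)])
# Hypothetical binary labels: 1 = stressed window, 0 = relaxed window.
y = (X[:, 0] + 20 * X[:, 1] + rng.normal(0, 10, n) > 120).astype(int)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("QDA", QuadraticDiscriminantAnalysis()),
                  ("DT", DecisionTreeClassifier(max_depth=5))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.2f}")
```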

    Detecting Negative Emotions During Real-Life Driving via Dynamically Labelled Physiological Data

    Driving is an activity that can induce significant levels of negative emotion, such as stress and anger. These negative emotions occur naturally in everyday life, but frequent episodes can be detrimental to cardiovascular health in the long term. The development of monitoring systems to detect negative emotions often relies on labels derived from subjective self-report. However, this approach is burdensome, intrusive and of low fidelity (i.e. scales are administered infrequently), and it places heavy reliance on the veracity of subjective self-report. This paper explores an alternative approach that provides greater fidelity by using psychophysiological data (e.g. heart rate) to dynamically label data derived from the driving task (e.g. speed, road type). Two techniques for generating labels for machine learning were compared: (1) deriving labels from subjective self-report, and (2) labelling data via psychophysiological activity (e.g. heart rate (HR), pulse transit time (PTT), etc.) to create dynamic labels of high vs. low anxiety for each participant. The classification accuracy associated with both labelling techniques was evaluated using Linear Discriminant Analysis (LDA) and Support Vector Machines (SVM). Results indicated that classification of driving data using subjectively labelled data (1) achieved a maximum AUC of 73%, whilst the labels derived from psychophysiological data (2) achieved an equivalent performance of 74%. Whilst classification performance was similar, labelling driving data via psychophysiology offers a number of advantages over self-report: it is implicit, dynamic, objective and of high fidelity.
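    The dynamic-labelling idea can be sketched as follows. The per-participant median split on heart rate used here is an assumed labelling rule (the paper's exact rule is not given), and the synthetic driving features merely stand in for real telemetry.

```python
# Sketch of dynamic labelling: label each driving window as high/low
# anxiety from physiology (here, a median split on heart rate, which is
# an assumed rule), then classify driving features against those labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 500
speed = rng.normal(50, 15, n)                 # driving feature: speed
road_type = rng.integers(0, 3, n)             # driving feature: road class
hr = 60 + 0.3 * speed + rng.normal(0, 8, n)   # synthetic heart rate

X = np.column_stack([speed, road_type])
y = (hr > np.median(hr)).astype(int)          # dynamic high/low anxiety label

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC())]:
    auc = cross_val_score(clf, X, y, cv=5, scoring="roc_auc").mean()
    print(f"{name}: AUC = {auc:.2f}")
```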

    IoT Maps: Charting the Internet of Things

    Internet of Things (IoT) devices are becoming increasingly ubiquitous in our everyday environments. While the number of devices and the degree of connectivity are growing, it is striking that as a society we are increasingly unaware of the locations and purposes of such devices. Indeed, much of the IoT technology being deployed is invisible and does not communicate its presence or purpose to the inhabitants of the spaces within which it is deployed. In this paper, we explore the potential benefits and challenges of constructing IoT maps that record the location of IoT devices. To illustrate the need for such maps, we draw on our experiences from multiple deployments of IoT systems.

    A smart water metering deployment based on the fog computing paradigm

    In this paper, we look into smart water metering infrastructures that enable continuous, on-demand and bidirectional data exchange between metering devices, water flow equipment, utilities and end-users. We focus on the design, development and deployment of such infrastructures as part of larger smart city infrastructures. Until now, such critical smart city infrastructures have been developed following a cloud-centric paradigm, where all the data are collected and processed centrally using cloud services to create real business value. Cloud-centric approaches must address several performance issues at all levels of the network, as massive metering datasets are transferred to distant machine clouds, while also respecting concerns such as security and data privacy. Our solution uses the fog computing paradigm to provide a system in which the computational resources already available throughout the network infrastructure are utilized to greatly facilitate the analysis of fine-grained water consumption data collected by the smart meters, thus significantly reducing the overall load on network and cloud resources. Details of the system's design are presented, along with a pilot deployment in a real-world environment. The performance of the system is evaluated in terms of network utilization and computational performance. Our findings indicate that the fog computing paradigm can be applied to a smart grid deployment to effectively reduce the data volume exchanged between the different layers of the architecture and to provide better overall computational, security and privacy capabilities to the system.
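    A minimal sketch of the fog-side aggregation idea follows, assuming a simple windowed summary; the class, field and method names are hypothetical, not the system's actual API.

```python
# Sketch of the fog-computing idea described above: a fog node buffers
# fine-grained smart-meter readings locally and forwards only compact
# summaries to the cloud, reducing upstream data volume.
from dataclasses import dataclass, field
from statistics import mean
from typing import Optional

@dataclass
class FogNode:
    window_size: int = 60                 # readings per aggregation window
    buffer: list = field(default_factory=list)

    def ingest(self, litres: float) -> Optional[dict]:
        """Buffer one raw reading; emit a summary when the window fills."""
        self.buffer.append(litres)
        if len(self.buffer) < self.window_size:
            return None                   # nothing sent upstream yet
        summary = {"count": len(self.buffer),
                   "total_l": sum(self.buffer),
                   "mean_l": mean(self.buffer),
                   "peak_l": max(self.buffer)}
        self.buffer.clear()
        return summary                    # one message instead of sixty

node = FogNode()
for reading in [0.4, 0.7, 0.2] * 20:      # 60 simulated raw readings
    out = node.ingest(reading)
    if out:
        print("forward to cloud:", out)
```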

    Predicting Temporal Aspects of Movement for Predictive Replication in Fog Environments

    To fully exploit the benefits of the fog environment, efficient management of data locality is crucial. Blind or reactive data replication falls short of harnessing the potential of fog computing, necessitating more advanced techniques for predicting where and when clients will connect. While spatial prediction has received considerable attention, temporal prediction remains understudied. Our paper addresses this gap by examining the advantages of incorporating temporal prediction into existing spatial prediction models. We also provide a comprehensive analysis of spatio-temporal prediction models, such as Deep Neural Networks and Markov models, in the context of predictive replication. We propose a novel model using Holt-Winters exponential smoothing for temporal prediction, leveraging sequential and periodical user movement patterns. In a fog network simulation with real user trajectories, our model achieves a 15% reduction in excess data with a marginal 1% decrease in data availability.
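    The temporal-prediction component might look roughly like the sketch below, which fits Holt-Winters exponential smoothing (via statsmodels) to a synthetic daily-periodic series of client counts; the data and parameter choices are illustrative assumptions, not the paper's configuration.

```python
# Sketch of temporal prediction with Holt-Winters exponential smoothing:
# forecast how many clients will connect to a fog node over the next day,
# using a synthetic series with a 24-hour seasonal pattern.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

rng = np.random.default_rng(2)
hours = 24 * 14                                  # two weeks of hourly counts
t = np.arange(hours)
clients = 50 + 20 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 3, hours)

model = ExponentialSmoothing(clients, trend="add",
                             seasonal="add", seasonal_periods=24)
fit = model.fit()
forecast = fit.forecast(24)                      # predict the next day
print("predicted clients, next 24 h:", np.round(forecast, 1))
```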

    Talk, text, tag? Understanding self-annotation of smart home data from a user’s perspective

    Delivering effortless interactions and appropriate interventions through pervasive systems requires making sense of multiple streams of sensor data. This is particularly challenging when these concern people’s natural behaviours in the real world. This paper takes a multidisciplinary perspective on annotation and draws on an exploratory study of 12 people, who were encouraged to use a multi-modal annotation app while living in a prototype smart home. Analysis of the app usage data and of semi-structured interviews with the participants revealed strengths and limitations of self-annotation in a naturalistic context. Handing control of the annotation process to research participants enabled them to reason about their own data, while generating accounts that were appropriate and acceptable to them. Self-annotation provided participants with an opportunity to reflect on themselves and their routines, but it was also a means to express themselves freely and sometimes even a backchannel to communicate playfully with the researchers. However, self-annotation may not be an effective way to capture accurate start and finish times for activities, or the location associated with activity information. This paper offers new insights and recommendations for the design of self-annotation tools for deployment in the real world.

    Wireless Channel Assessment of Auditoriums for the Deployment of Augmented Reality Systems for Enhanced Show Experience of Impaired Persons

    Auditoriums and theaters are buildings in which concerts, shows, and conferences are held, offering a diverse and dynamic cultural program to citizens. Unfortunately, people with impairments often have difficulty fully experiencing the cultural activities on offer, since such environments are not fully adapted to their needs. For example, in an auditorium, visually impaired users have to be accompanied to their seats by staff, and again if they wish to leave in the middle of the show (e.g., to go to the toilet) or to move around during breaks. This work aims to improve the autonomy of disabled people in such environments and to enhance their show experience by deploying wireless sensor networks and wireless body area networks connected to an augmented reality device (Microsoft HoloLens smart glasses). For that purpose, intensive measurements were taken in a real scenario, the Baluarte Congress Center and Auditorium of Navarre in the city of Pamplona. The results show that this kind of environment presents high wireless interference at different frequency bands, due to the wireless systems already deployed within it, such as multiple WiFi access points, wireless microphones, and wireless communication systems used by the show staff. Radio channel simulations have therefore also been performed to assess the potential deployment of the proposed solution. The presented work can lead to the deployment of augmented reality systems within auditoriums and theaters, boosting the development of new applications. Funding: Xunta de Galicia (ED431C 2016-045; ED431G/01); Ministerio de Ciencia, Innovación y Universidades (RTI2018-095499-B-C3).
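    The paper's site-specific simulations are not reproduced here, but a toy log-distance path-loss estimate conveys the flavour of such a channel assessment; the path-loss exponent, frequency, transmit power and distances below are all assumptions.

```python
# Toy link-budget sketch in the spirit of the channel assessment above.
# This is a simple log-distance path-loss model, not the site-specific
# simulation used in the study; all parameter values are assumptions.
import math

def path_loss_db(d_m: float, f_hz: float, n: float = 2.8,
                 d0_m: float = 1.0) -> float:
    """Log-distance path loss: free-space loss at d0 plus 10*n*log10(d/d0)."""
    c = 3e8
    fspl_d0 = 20 * math.log10(4 * math.pi * d0_m * f_hz / c)
    return fspl_d0 + 10 * n * math.log10(d_m / d0_m)

tx_dbm = 0.0                      # e.g. a low-power wearable sensor node
for d in (5, 15, 30):             # distances across an auditorium (metres)
    rx = tx_dbm - path_loss_db(d, 2.4e9)
    print(f"{d:>2} m: received ~{rx:.1f} dBm")
```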

    Analysis & Numerical Simulation of Indian Food Image Classification Using Convolutional Neural Network

    Recognition of Indian food can be treated as a fine-grained visual task, owing to the subtle visual distinctions between food classes. It is therefore important to provide an optimized approach to segmentation and classification for the different applications based on food recognition. Food computation mainly takes a computer science approach, which needs food data from various outlets, such as real-time images, social platforms, food journaling and food datasets, for different modalities. In order to use Indian food images in a range of applications, we need a proper analysis of food images with state-of-the-art techniques. Appropriate segmentation and classification methods are required to support relevant and improved analysis. As accurate segmentation leads to proper recognition and identification, we first consider the segmentation of food items from images. In the basic convolutional neural network (CNN) model, edge and shape constraints influence the quality of segmentation at object boundaries, so approaches that can handle edges are needed; an edge-adaptive CNN (EA-CNN) is proposed for this purpose. Having addressed food segmentation with the CNN, there remains the difficulty of classifying food, which is important for various types of applications. Food analysis is the primary component of health-related applications and is needed in our day-to-day life. A CNN can directly predict the score function from image pixels: the input layer produces the tensor outputs, and the convolution layers learn their kernels through back-propagation. In this method, feature extraction and max-pooling are applied over multiple layers, and outputs are obtained using a softmax function. The proposed implementation achieves 92.89% accuracy on data drawn from the Yummly dataset and our own prepared dataset. Consequently, further improvement is still needed in food image classification. We therefore take the segmented features of the EA-CNN and concatenate them with the features of our custom Inception-V3 to provide an optimized classification; this enhances the contribution of important features to the subsequent classification process. As an extension, we considered South Indian food classes with our own collected food image dataset and obtained 96.27% accuracy. The accuracy obtained for this dataset compares very well with our foregoing method and with state-of-the-art techniques.
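    The feature-concatenation step might be sketched as follows in Keras. The EA-CNN branch is stubbed with a placeholder feature input (its architecture is not given above), and the class count and layer sizes are assumptions rather than the paper's actual configuration.

```python
# Sketch of the feature-fusion idea above: concatenate features from a
# segmentation branch (stubbed as a 256-d input, since the EA-CNN design
# is not specified) with Inception-V3 features, then classify via softmax.
import tensorflow as tf

NUM_CLASSES = 20                         # assumed number of food classes

image_in = tf.keras.Input(shape=(299, 299, 3))
backbone = tf.keras.applications.InceptionV3(
    include_top=False, pooling="avg", weights=None)  # weights="imagenet" in practice
inception_feat = backbone(image_in)      # (None, 2048) pooled features

# Placeholder for the EA-CNN segmentation features described in the text.
seg_feat_in = tf.keras.Input(shape=(256,))

fused = tf.keras.layers.Concatenate()([inception_feat, seg_feat_in])
x = tf.keras.layers.Dense(512, activation="relu")(fused)
out = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(x)

model = tf.keras.Model([image_in, seg_feat_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```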

    Recognition of Activities of Daily Living and Environments Using Acoustic Sensors Embedded on Mobile Devices

    The identification of Activities of Daily Living (ADL) is intrinsically linked to the recognition of the user’s environment. This detection can be executed through standard sensors present in everyday mobile devices. The main proposal is to recognize the user’s environment and standing activities, and to include these features in a framework for ADL and environment identification. This paper is therefore divided into two parts: firstly, acoustic sensors are used to collect data for environment recognition; secondly, the recognized environment is fused with the information gathered by motion and magnetic sensors. Environment and ADL recognition are performed by pattern recognition techniques within a system that spans data collection, processing, fusion and classification procedures. The classification techniques include distinct types of Artificial Neural Networks (ANN); various ANN implementations were analyzed and the most suitable chosen for inclusion in the subsequent stages of the developed system. The results show 85.89% accuracy using Deep Neural Networks (DNN) with normalized data for ADL recognition and 86.50% accuracy using Feedforward Neural Networks (FNN) with non-normalized data for environment recognition. Furthermore, the tests conducted show 100% accuracy for standing-activity recognition using DNN with normalized data, which is best suited to the intended purpose. Funding: this work is funded by FCT/MEC through national funds and co-funded by FEDER under the PT2020 partnership agreement, project UID/EEA/50008/2019.
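    A minimal sketch of the two-stage fusion described above, using scikit-learn MLPs on synthetic data; the feature dimensions, class counts and network sizes are assumptions, not the study's configuration.

```python
# Sketch of the two-stage fusion: predict the environment from acoustic
# features, then feed that prediction alongside motion and magnetic
# features into an ADL classifier. All data here is synthetic.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 400
acoustic = rng.normal(size=(n, 13))     # e.g. MFCC-like acoustic features
motion = rng.normal(size=(n, 6))        # accelerometer + magnetometer stats
env = rng.integers(0, 4, n)             # 4 environment classes (assumed)
adl = rng.integers(0, 5, n)             # 5 ADL classes (assumed)

# Stage 1: environment recognition from acoustic data.
env_clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
env_clf.fit(acoustic, env)

# Stage 2: fuse the predicted environment with motion/magnetic features.
X_fused = np.column_stack([motion, env_clf.predict(acoustic)])
adl_clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500)
adl_clf.fit(X_fused, adl)
print("train accuracy (ADL):", adl_clf.score(X_fused, adl))
```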