10,105 research outputs found

    Automatic Stress Detection in Working Environments from Smartphones' Accelerometer Data: A First Step

    Increase in workload across many organisations, and the consequent increase in occupational stress, is negatively affecting the health of the workforce. Measuring stress and other human psychological dynamics is difficult due to the subjective nature of self-reporting and variability between and within individuals. With the advent of smartphones it is now possible to monitor diverse aspects of human behaviour, including objectively measured behaviour related to psychological state and, consequently, stress. We have used data from the smartphone's built-in accelerometer to detect behaviour that correlates with subjects' stress levels. The accelerometer sensor was chosen because it raises fewer privacy concerns (in comparison to location, video or audio recording, for example) and because its low power consumption makes it suitable for embedding in smaller wearable devices, such as fitness trackers. 30 subjects from two different organizations were provided with smartphones. The study lasted for 8 weeks and was conducted in real working environments, with no constraints whatsoever placed upon smartphone usage. The subjects reported their perceived stress levels three times during their working hours. Using a combination of statistical models to classify self-reported stress levels, we achieved a maximum overall accuracy of 71% for user-specific models and an accuracy of 60% for similar-users models, relying solely on data from a single accelerometer.
    Comment: in IEEE Journal of Biomedical and Health Informatics, 201
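
    A minimal sketch of the kind of pipeline the abstract describes: summarize fixed-length accelerometer windows with simple statistics and train a per-user classifier on self-reported labels. The window length, feature set, and RandomForest choice are illustrative assumptions, not the paper's actual method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(acc, win=512):
    """Per-window statistics of an (n, 3) accelerometer stream."""
    feats = []
    for start in range(0, len(acc) - win + 1, win):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)           # movement intensity
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),        # per-axis mean / std
            [mag.var(), np.abs(np.diff(mag)).mean()],
        ]))
    return np.array(feats)

# Stand-ins for one subject's raw stream and self-reported labels
rng = np.random.default_rng(0)
acc = rng.normal(size=(512 * 40, 3))
X = window_features(acc)                          # 40 windows x 8 features
y = rng.integers(0, 2, size=len(X))               # 0 = calm, 1 = stressed

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X[:30], y[:30])                           # user-specific training
print("held-out accuracy:", clf.score(X[30:], y[30:]))
```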

    Anticipatory Mobile Computing: A Survey of the State of the Art and Research Challenges

    Today's mobile phones are far from the mere communication devices they were ten years ago. Equipped with sophisticated sensors and advanced computing hardware, phones can be used to infer users' location, activity, social setting and more. As devices become increasingly intelligent, their capabilities evolve beyond inferring context to predicting it, and then to reasoning and acting upon the predicted context. This article provides an overview of the current state of the art in mobile sensing and context prediction, paving the way for full-fledged anticipatory mobile computing. We present a survey of phenomena that mobile phones can infer and predict, and offer a description of machine learning techniques used for such predictions. We then discuss proactive decision making and decision delivery via the user-device feedback loop. Finally, we discuss the challenges and opportunities of anticipatory mobile computing.
    Comment: 29 pages, 5 figures
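
    As a toy illustration of context prediction, the sketch below fits a first-order Markov model over discrete context labels and predicts the most likely next context. The labels and counting scheme are hypothetical; the survey covers far richer techniques.

```python
from collections import Counter, defaultdict

def fit_transitions(history):
    """Count observed context-to-context transitions."""
    trans = defaultdict(Counter)
    for prev, nxt in zip(history, history[1:]):
        trans[prev][nxt] += 1
    return trans

def predict_next(trans, current):
    """Most likely next context given the current one."""
    if current not in trans:
        return None
    return trans[current].most_common(1)[0][0]

history = ["home", "commute", "office", "gym", "home",
           "commute", "office", "home"]
trans = fit_transitions(history)
print(predict_next(trans, "commute"))  # -> "office"
```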

    ConXsense - Automated Context Classification for Context-Aware Access Control

    We present ConXsense, the first framework for context-aware access control on mobile devices based on context classification. Previous context-aware access control systems often require users to laboriously specify detailed policies, or they rely on pre-defined policies that do not adequately reflect the true preferences of users. We present the design and implementation of a context-aware framework that uses a probabilistic approach to overcome these deficiencies. The framework utilizes context sensing and machine learning to automatically classify contexts according to their security- and privacy-related properties. We apply the framework to two important smartphone-related use cases: protection against device misuse using a dynamic device lock, and protection against sensory malware. We ground our analysis on a sociological survey examining the perceptions and concerns of users related to contextual smartphone security, and analyze the effectiveness of our approach with real-world context data. We also demonstrate the integration of our framework with the FlaskDroid architecture for fine-grained access control enforcement on the Android platform.
    Comment: Recipient of the Best Paper Award
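
    A hedged sketch of the core idea: classify a sensed context as familiar or unfamiliar from simple features and relax or tighten the device-lock timeout accordingly. The features, labels, and Naive Bayes classifier are assumptions for illustration, not ConXsense's actual design.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Each row: [familiar Bluetooth devices nearby, past visits to this
#            GPS cell, minutes since last unlock]
X = np.array([[5, 40, 2], [4, 35, 10], [0, 1, 30], [1, 0, 45],
              [6, 50, 5], [0, 2, 60]], dtype=float)
y = np.array([1, 1, 0, 0, 1, 0])     # 1 = familiar/safe, 0 = unfamiliar

model = GaussianNB().fit(X, y)

def lock_timeout(context):
    """Longer auto-lock timeout in contexts classified as safe."""
    p_safe = model.predict_proba([context])[0][1]
    return 300 if p_safe > 0.8 else 30    # seconds before auto-lock

print(lock_timeout([5, 45, 3]))   # familiar context -> relaxed lock
print(lock_timeout([0, 1, 50]))   # unfamiliar -> aggressive lock
```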

    A survey of comics research in computer science

    Graphic novels such as comics and manga are well known all over the world. The digital transition has started to change the way people read comics: more and more on smartphones and tablets, and less and less on paper. In recent years, a wide variety of research about comics has been proposed and might change the way comics are created, distributed and read in the coming years. Early work focused on low-level document image analysis: indeed, comic books are complex documents that contain text, drawings, balloons, panels, onomatopoeia, etc. Different fields of computer science have covered research on user interaction and content generation, such as multimedia, artificial intelligence and human-computer interaction, each with different sets of values. In this paper we review previous research about comics in computer science, state what has been done, and give some insights into the main outlooks.
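
    As a speculative illustration of the low-level document image analysis mentioned above, the sketch below locates panels on a synthetic comic page by thresholding and extracting large contours. The thresholds and heuristics are assumptions, not a method from any surveyed paper.

```python
import cv2
import numpy as np

# Synthetic stand-in for a scanned page: white gutters, two dark panels
page = np.full((300, 200), 255, np.uint8)
cv2.rectangle(page, (10, 10), (90, 140), 0, -1)
cv2.rectangle(page, (110, 10), (190, 140), 0, -1)

# Invert so panel content is foreground, then find outer contours
_, binary = cv2.threshold(page, 128, 255, cv2.THRESH_BINARY_INV)
contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)
panels = [cv2.boundingRect(c) for c in contours
          if cv2.contourArea(c) > 0.02 * page.size]  # drop small blobs
print(sorted(panels, key=lambda r: (r[1], r[0])))    # rough reading order
```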

    Self-Adaptive and Lightweight Real-Time Sleep Recognition With Smartphone

    It is widely recognized that sleep is a basic physiological process having fundamental effects on human health, performance and well-being. Such evidence stimulates the research of solutions to foster self-awareness of personal sleeping habits, and of correct living-environment management policies to encourage sleep. In this context, the use of mobile technologies powered with automatic sleep recognition capabilities can be helpful, and ubiquitous computing devices like smartphones can be leveraged as proxies to unobtrusively analyse human behaviour. To this aim, we propose a real-time sleep recognition methodology relying on a smartphone equipped with a mobile app that exploits contextual and usage information to infer sleep habits. During an initial training stage, the selected features are processed by k-Nearest Neighbors, Decision Tree, Random Forest, and Support Vector Machine classifiers to select the best performing one. Moreover, a 1st-order Markov Chain is applied to improve the recognition performance. Experimental results, both offline in a Matlab environment and online through a fully functional Android app, demonstrate the effectiveness of the proposed approach, achieving acceptable results in terms of Precision, Recall, and F1-score.
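
    An illustrative sketch of the two-stage scheme the abstract outlines: compare several scikit-learn classifiers on held-out data, keep the best, then smooth the predicted awake/asleep sequence, mimicking the effect of a sticky first-order Markov prior. All features and data are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Per-time-slot features: [screen-off minutes, ambient light, motion]
X = rng.random((400, 3))
y = (X[:, 0] > 0.5).astype(int)                  # 1 = asleep (toy rule)
Xtr, Xte, ytr, yte = train_test_split(X, y, random_state=1)

models = {"kNN": KNeighborsClassifier(), "DT": DecisionTreeClassifier(),
          "RF": RandomForestClassifier(), "SVM": SVC()}
scores = {n: m.fit(Xtr, ytr).score(Xte, yte) for n, m in models.items()}
best = max(scores, key=scores.get)               # keep the best performer

def markov_smooth(labels):
    """Remove isolated flips: a one-slot change of sleep state is far
    less likely than staying, under a sticky transition prior."""
    out = list(labels)
    for i in range(1, len(out) - 1):
        if out[i] != out[i - 1] and out[i] != out[i + 1]:
            out[i] = out[i - 1]
    return out

pred = markov_smooth(models[best].predict(Xte))
print(best, scores[best], pred[:20])
```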

    User-adaptive models for activity and emotion recognition using deep transfer learning and data augmentation

    May only be used in a research context, not commercially. Read more here: https://www.springernature.com/gp/open-research/policies/accepted-manuscript-terms
    Building predictive models for human-interactive systems is a challenging task. Every individual has unique characteristics and behaviors. A generic human–machine system will not perform equally well for each user, given the between-user differences. Alternatively, a system built specifically for each particular user will perform closer to the optimum. However, such a system would require more training data for every specific user, thus hindering its applicability to real-world scenarios. Collecting training data can be time-consuming and expensive. For example, in clinical applications it can take weeks or months until enough data is collected to start training machine learning models. End users expect to start receiving quality feedback from a given system as soon as possible, without having to rely on time-consuming calibration and training procedures. In this work, we build and test user-adaptive models (UAM), which are predictive models that adapt to each user's characteristics and behaviors with reduced training data. Our UAM are trained using deep transfer learning and data augmentation, and were tested on two public datasets. The first is an activity recognition dataset built from accelerometer data. The second is an emotion recognition dataset built from speech recordings. Our results show that the UAM achieve a significant increase in recognition performance with reduced training data, with respect to a general model. Furthermore, we show that individual characteristics such as gender can influence the models' performance.
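
    A hedged PyTorch sketch of the adaptation recipe described above: freeze a pretrained general model's feature extractor and fine-tune only a fresh head on a small, augmented set of user-specific samples. Layer sizes and the jitter augmentation are illustrative choices, not the paper's architecture.

```python
import torch
import torch.nn as nn

# A "general" model pretrained on many users (weights here are random
# stand-ins): a small feature extractor followed by a classification head.
general = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),    # feature extractor
    nn.Linear(32, 4),                # head: 4 activity/emotion classes
)

for p in general[:2].parameters():   # freeze the shared feature extractor
    p.requires_grad = False
general[2] = nn.Linear(32, 4)        # fresh head for the target user

# A handful of user-specific samples, augmented with jittered copies
x_user = torch.randn(8, 16)
y_user = torch.randint(0, 4, (8,))
x_aug = torch.cat([x_user + 0.05 * torch.randn_like(x_user)
                   for _ in range(4)])
y_aug = y_user.repeat(4)

opt = torch.optim.Adam(general[2].parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
for _ in range(100):                 # fine-tune only the new head
    opt.zero_grad()
    loss = loss_fn(general(x_aug), y_aug)
    loss.backward()
    opt.step()
print("final loss:", loss.item())
```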

    BeCAPTCHA: Behavioral bot detection using touchscreen and mobile sensors benchmarked on HuMIdb

    In this paper we study the suitability of a new generation of CAPTCHA methods based on smartphone interactions. The heterogeneous flow of data generated during interaction with a smartphone can be used to model human behavior when interacting with the technology, and to improve bot detection algorithms. To this end, we propose BeCAPTCHA, a CAPTCHA method based on the analysis of the touchscreen information obtained during a single drag-and-drop task, in combination with accelerometer data. The goal of BeCAPTCHA is to determine whether the drag-and-drop task was performed by a human or a bot. We evaluate the method by generating fake samples synthesized with Generative Adversarial Neural Networks and handcrafted methods. Our results suggest the potential of mobile sensors to characterize human behavior and to develop a new generation of CAPTCHAs. The experiments are evaluated with HuMIdb (Human Mobile Interaction database), a novel multimodal mobile database that comprises 14 mobile sensors acquired from 600 users. HuMIdb is freely available to the research community.
    This work has been supported by projects: PRIMA, Spain (H2020-MSCA-ITN-2019-860315), TRESPASS-ETN, Spain (H2020-MSCA-ITN-2019-860813), BIBECA RTI2018-101248-B-I00 (MINECO/FEDER), and BioGuard, Spain (Ayudas Fundación BBVA a Equipos de Investigación Científica 2017). Spanish Patent Application P20203006.
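
    An illustrative sketch of the underlying classification task: summarize one drag-and-drop trajectory into velocity and curvature features and classify human versus bot. Here the "bot" samples are perfect straight lines; the paper instead synthesizes fakes with GANs and handcrafted methods.

```python
import numpy as np
from sklearn.svm import SVC

def traj_features(xy):
    """Velocity and curvature statistics of an (n, 2) touch trajectory."""
    v = np.diff(xy, axis=0)
    speed = np.linalg.norm(v, axis=1)
    angles = np.arctan2(v[:, 1], v[:, 0])
    return [speed.mean(), speed.std(), np.abs(np.diff(angles)).mean()]

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 50)[:, None]
humans = [np.hstack([t, t**2]) + 0.01 * rng.normal(size=(50, 2))
          for _ in range(30)]                      # noisy curved drags
bots = [np.hstack([t, t]) for _ in range(30)]      # perfect straight lines
X = np.array([traj_features(s) for s in humans + bots])
y = np.array([1] * 30 + [0] * 30)                  # 1 = human

clf = SVC().fit(X, y)
probe = np.hstack([t, np.sqrt(t)]) + 0.01 * rng.normal(size=(50, 2))
print("human" if clf.predict([traj_features(probe)])[0] else "bot")
```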