
    The Internet of Things Will Thrive by 2025

    This report is the latest in a sustained effort throughout 2014 by the Pew Research Center Internet Project to mark the 25th anniversary of the creation of the World Wide Web by Sir Tim Berners-Lee. It analyzes opinions about the likely expansion of the Internet of Things (sometimes called the Cloud of Things), a catchall phrase for the array of devices, appliances, vehicles, wearable material, and sensor-laden parts of the environment that connect to each other and feed data back and forth. It covers the more than 1,600 responses offered specifically to our question about where the Internet of Things would stand by the year 2025. The report is the next in a series of eight Pew Research and Elon University analyses to be issued this year in which experts share their expectations about the future of such things as privacy, cybersecurity, and net neutrality. It includes some of the best and most provocative predictions survey respondents made when asked to share their views about the evolution of embedded and wearable computing and the Internet of Things.

    Cost aware Inference for IoT Devices

    Networked embedded devices (IoTs) of limited CPU, memory and power resources are revolutionizing data gathering, remote monitoring and planning in many consumer and business applications. Nevertheless, resource limitations place a significant burden on their service life and operation, warranting cost-aware methods that are capable of distributively screening redundancies in device information and transmitting informative data. We propose to train a decentralized gated network that, given an observed instance at test-time, allows for activation of select devices to transmit information to a central node, which then performs inference. We analyze our proposed gradient descent algorithm for Gaussian features and establish convergence guarantees under good initialization. We conduct experiments on a number of real-world datasets arising in IoT applications and show that our model results in over 1.5X service life with negligible accuracy degradation relative to a performance achievable by a neural network. Published version: http://proceedings.mlr.press/v89/zhu19d/zhu19d.pdf
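
    A minimal sketch in Python of the gating idea described above, using hypothetical weights, dimensions, and threshold (not the paper's trained architecture or its gradient-descent analysis): each device applies a local gate to its own observation, only devices whose gate opens transmit, and the central node performs inference on the fused features.

# Minimal sketch (hypothetical parameters; not the paper's exact model): each
# device decides locally whether its observation is informative enough to send;
# the central node performs inference only on the transmitted features.
import numpy as np

rng = np.random.default_rng(0)

N_DEVICES, FEAT_DIM = 4, 3
gate_w = rng.normal(size=(N_DEVICES, FEAT_DIM))        # per-device gate weights (hypothetical)
gate_b = np.zeros(N_DEVICES)
central_w = rng.normal(size=(N_DEVICES * FEAT_DIM,))   # central linear classifier (hypothetical)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def infer(device_obs, threshold=0.5):
    """device_obs: (N_DEVICES, FEAT_DIM) local observations at test time."""
    fused = np.zeros(N_DEVICES * FEAT_DIM)
    active = []
    for d in range(N_DEVICES):
        gate = sigmoid(gate_w[d] @ device_obs[d] + gate_b[d])
        if gate > threshold:                            # only "open" devices transmit
            fused[d * FEAT_DIM:(d + 1) * FEAT_DIM] = device_obs[d]
            active.append(d)
    score = sigmoid(central_w @ fused)                  # central node's prediction
    return score, active

obs = rng.normal(size=(N_DEVICES, FEAT_DIM))
prob, transmitting = infer(obs)
print(f"predicted probability {prob:.2f} using devices {transmitting}")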

    Statistical models for meal-level estimation of mass and energy intake using features derived from video observation and a chewing sensor

    Accurate and objective assessment of energy intake remains an ongoing problem. We used features derived from annotated video observation and a chewing sensor to predict mass and energy intake during a meal without participant self-report. 30 participants each consumed 4 different meals in a laboratory setting and wore a chewing sensor while being videotaped. Subject-independent models were derived from bite, chew, and swallow features obtained from either video observation or information extracted from the chewing sensor. With multiple regression analysis, a forward selection procedure was used to choose the best model. The best estimates of meal mass and energy intake had (mean ± standard deviation) absolute percentage errors of 25.2% ± 18.9% and 30.1% ± 33.8%, respectively, and mean ± standard deviation estimation errors of −17.7 ± 226.9 g and −6.1 ± 273.8 kcal using features derived from both video observations and sensor data. Both video annotation and sensor-derived features may be utilized to objectively quantify energy intake. DK10079604 - Foundation for the National Institutes of Health (Foundation for the National Institutes of Health, Inc.). Published version
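
    The forward-selection regression step can be illustrated with a short sketch. The feature names, synthetic data, and stopping rule below are assumptions for demonstration only, not the study's actual features or dataset: the best remaining bite/chew/swallow feature is greedily added as long as it reduces cross-validated error in predicting meal mass.

# Illustrative forward-selection multiple regression (hypothetical features/data).
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
feature_names = ["n_bites", "n_chews", "n_swallows", "chew_rate", "meal_duration"]
X = rng.normal(size=(120, len(feature_names)))                    # stand-in feature matrix
y = X @ np.array([40.0, 5.0, 30.0, 0.0, 10.0]) + rng.normal(scale=20.0, size=120)  # meal mass (g)

selected, remaining = [], list(range(X.shape[1]))
best_err = np.inf
while remaining:
    errs = {}
    for j in remaining:
        cols = selected + [j]
        score = cross_val_score(LinearRegression(), X[:, cols], y,
                                cv=5, scoring="neg_mean_absolute_error").mean()
        errs[j] = -score
    j_best = min(errs, key=errs.get)
    if errs[j_best] >= best_err:          # stop when no candidate improves the model
        break
    best_err = errs[j_best]
    selected.append(j_best)
    remaining.remove(j_best)

print("selected features:", [feature_names[j] for j in selected])
print(f"cross-validated MAE: {best_err:.1f} g")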

    New HCI techniques for better living through technology

    In the Human-Computer Interaction community, researchers work on many projects that investigate the efficacy of new technologies for better living, but unlike other research fields, their approach is typically multi-disciplinary. Technology is constantly developing and improving our lives in areas such as education, health, and communication, because it is meant to make life easier. This dissertation explores three main aspects: the first is learning with new technologies, the second is the improvement of everyday life through innovative devices, and the third is the use of mobile devices in combination with image processing algorithms and computer graphics techniques. We first describe the progress on the state of the art and the related work that were necessary to implement such tools on commodity hardware and deploy them in both mobile and desktop settings. We propose the use of different technologies in different settings, comparing these solutions for enhancing the interaction experience by introducing virtual/augmented reality tools to support these activities. We also applied well-known gamification techniques from different mobile applications to demonstrate how users can be entertained and motivated in their workouts. We describe our design and prototypes of several integrated systems created to improve the educational process, enhance the shopping experience, provide new experiences for travellers, and even improve fitness and wellness activities. Finally, we discuss our findings and frame them in the broader context of better living through technology, drawing the lessons learnt from each work while also proposing related future work.

    Dietary Monitoring Through Sensing Mastication Dynamics

    Unhealthy dietary habits (such as eating disorders, eating too fast, excessive energy intake, and chewing side preference) are major causes of some chronic diseases, including obesity, heart disease, digestive system disease, and diabetes. Dietary monitoring is necessary and important for patients to change their unhealthy diet and eating habits. However, the existing monitoring methods are either intrusive or not accurate enough. In this dissertation, we present our efforts to use wearable motion sensors to sense mastication dynamics for continuous dietary monitoring. First, we study how to detect a subject's eating activity and count the number of chews. We observe that during eating the mastication muscles contract and hence bulge to some degree. This bulge of the mastication muscles has the same frequency as chewing. These observations motivate us to detect eating activity and count chews by attaching a triaxial accelerometer to the temporalis. The proposed method does not record any private personal information (audio, video, etc.). Because the accelerometer is embedded into a headband, this method is less intrusive to the user's daily living than previously used methods. Experiments are conducted and the results are promising. For eating activity detection, the average accuracy and F-score of five classifiers are 94.4% and 87.2%, respectively, in a 10-fold cross-validation test using only 5 seconds of acceleration data. For chew counts, the average error rate of four users is 12.2%. Second, we study how to recognize different food types. We observe that each type of food has its own intrinsic properties, such as hardness, elasticity, fracturability, adhesiveness, and size, which result in different mastication dynamics. Accordingly, we propose to use wearable motion sensors to sense mastication dynamics and infer food types. We specifically define six mastication dynamics parameters to represent these food properties: chewing speed, the number of chews, chewing time, chewing force, chewing cycle duration, and skull vibration. We embed motion sensors in a headband worn over the temporalis muscles to sense mastication dynamics accurately and less intrusively than other methods. In addition, we extract 37 hand-crafted features from each chewing sequence to explicitly characterize the mastication dynamics using motion sensor data. A real-world evaluation dataset of 11 food categories (20 types of food in total) is collected from 15 human subjects. The average recognition accuracy reaches 74.3%. The highest recognition accuracy for a single subject is up to 86.7%. Third, we study how to detect chewing sides. We observe that the temporalis muscle bulge and skull vibration of the chewing side are different from those of the non-chewing side. This observation motivates us to deploy motion sensors on the left and right temporalis muscles to detect chewing sides. We utilize a heuristic-rule-based method to exclude non-chewing data and segment each chew accurately. Then, the relative difference series of the left and right sensors is calculated to characterize the difference in muscle bulge and skull vibration between the chewing side and the non-chewing side. To accurately detect chewing sides, we train a two-class classifier using long short-term memory (LSTM), an artificial recurrent neural network that is especially suitable for temporal data with unequal input lengths. A real-world evaluation dataset of eight food types is collected from eight human subjects. The average detection accuracy reaches 84.8%. The highest detection accuracy for a single subject is up to 97.4%.
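
    As an illustration of the chew-counting idea from the first study, the sketch below band-pass filters a triaxial accelerometer signal around typical chewing frequencies and counts peaks. The sampling rate, band edges, and thresholds are illustrative assumptions, not the dissertation's actual parameters or pipeline.

# Hedged sketch of chew counting from a triaxial accelerometer on the temporalis:
# chews appear as periodic muscle bulges, so we band-pass the signal magnitude
# around assumed chewing frequencies and count peaks.
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 50.0                                   # assumed sampling rate (Hz)

def count_chews(acc_xyz, low=0.5, high=3.0):
    """acc_xyz: (n_samples, 3) raw acceleration; returns an estimated chew count."""
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    magnitude -= magnitude.mean()                            # remove gravity offset
    b, a = butter(2, [low / (FS / 2), high / (FS / 2)], btype="band")
    filtered = filtfilt(b, a, magnitude)                     # keep chewing-band motion
    peaks, _ = find_peaks(filtered, height=filtered.std(),   # one peak per chew cycle
                          distance=int(FS / high))
    return len(peaks)

# Synthetic example: 10 s of ~1.5 Hz chewing plus noise -> roughly 15 chews.
t = np.arange(0, 10, 1 / FS)
acc = np.stack([0.3 * np.sin(2 * np.pi * 1.5 * t),
                0.1 * np.random.default_rng(0).normal(size=t.size),
                9.81 + 0.05 * np.sin(2 * np.pi * 1.5 * t)], axis=1)
print("estimated chews:", count_chews(acc))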

    Methods for monitoring the human circadian rhythm in free-living

    Our internal clock, the circadian clock, determines at which time we have our best cognitive abilities, are physically strongest, and when we are tired. Circadian clock phase is influenced primarily through exposure to light. A direct pathway from the eyes to the suprachiasmatic nucleus, where the circadian clock resides, is used to synchronise the circadian clock to external light-dark cycles. In modern society, with the ability to work anywhere at any time and a full social agenda, many struggle to keep internal and external clocks synchronised. Living against our circadian clock makes us less efficient and poses serious health risks, especially when sustained over a long period of time, e.g. in shift workers. Assessing circadian clock phase is a cumbersome and uncomfortable task. A common method, dim light melatonin onset testing, requires a series of eight saliva samples taken at hourly intervals while the subject stays in dim light conditions from 5 hours before until 2 hours past their habitual bedtime. At the same time, sensor-rich smartphones have become widely available and wearable computing is on the rise. The hypothesis of this thesis is that smartphones and wearables can be used to record sensor data to monitor human circadian rhythms in free-living. To test this hypothesis, we conducted research on specialised wearable hardware and smartphones to record relevant data, and developed algorithms to monitor circadian clock phase in free-living. We first introduce our smart eyeglasses concept, which can be personalised to the wearer's head and 3D-printed. Furthermore, hardware was integrated into the eyewear to recognise typical activities of daily living (ADLs). A light sensor integrated into the eyeglasses bridge was used to detect screen use. In addition to wearables, we also investigate whether sleep-wake patterns can be revealed from smartphone context information. We introduce novel methods to detect sleep opportunity, which incorporate expert knowledge to filter and fuse classifier outputs. Furthermore, we estimate light exposure from smartphone sensor and weather information. We applied the Kronauer model to compare the phase shift resulting from head light measurements, wrist measurements, and smartphone estimations. We found it was possible to monitor circadian phase shift from light estimation based on smartphone sensor and weather information with a weekly error of 32 ± 17 min, which outperformed wrist measurements in 11 out of 12 participants. Sleep could be detected from smartphone use with an onset error of 40 ± 48 min and a wake error of 42 ± 57 min. Screen use could be detected with the smart eyeglasses with 0.9 ROC AUC for ambient light intensities below 200 lux. Nine clusters of ADLs were distinguished using Gaussian mixture models with an average accuracy of 77%. In conclusion, a combination of the proposed smartphone and smart eyeglasses applications could support users in synchronising their circadian clock to external clocks, thus living a healthier lifestyle.
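
    A much-simplified illustration of detecting a sleep opportunity window from smartphone context: the sketch below takes the longest gap between screen-use events whose start falls in a plausible night-time window. The timestamps and the onset rule are assumptions for demonstration; the thesis's method fuses classifier outputs with expert knowledge rather than relying on a single heuristic.

# Simplified sleep-opportunity heuristic (illustrative only, not the thesis's method).
from datetime import datetime

screen_events = (
    [datetime(2024, 1, 1, h, m) for h, m in [(18, 5), (19, 40), (21, 10), (22, 55), (23, 20)]]
    + [datetime(2024, 1, 2, h, m) for h, m in [(6, 45), (7, 10), (8, 0)]]
)

def sleep_opportunity(events, earliest_onset=20, latest_onset=4):
    """Return (onset, wake, duration) for the longest night-time gap between events."""
    events = sorted(events)
    best = None
    for prev, nxt in zip(events, events[1:]):
        onset_ok = prev.hour >= earliest_onset or prev.hour < latest_onset
        gap = nxt - prev
        if onset_ok and (best is None or gap > best[2]):
            best = (prev, nxt, gap)
    return best

onset, wake, duration = sleep_opportunity(screen_events)
print(f"sleep opportunity: {onset:%H:%M} -> {wake:%H:%M} ({duration})")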

    Deep Multi Temporal Scale Networks for Human Motion Analysis

    The movement of human beings appears to respond to a complex motor system that contains signals at different hierarchical levels. For example, an action such as "grasping a glass on a table" represents a high-level action, but to perform this task the body needs several motor inputs that include the activation of different joints of the body (shoulder, arm, hand, fingers, etc.). Each of these joints/muscles has a different size, responsiveness, and precision, with a complex non-linearly stratified temporal dimension in which every muscle has its own temporal scale. Parts such as the fingers respond much faster to brain input than more voluminous body parts such as the shoulder. The cooperation of these parts when we perform an action produces smooth, effective, and expressive movement in a complex, multiple-temporal-scale cognitive task. Following this layered structure, the human body can be described as a kinematic tree consisting of connected joints. Although it is nowadays well known that human movement and its perception are characterised by multiple temporal scales, very few works in the literature focus on studying this particular property. In this thesis, we will focus on the analysis of human movement using data-driven techniques. In particular, we will focus on the non-verbal aspects of human movement, with an emphasis on full-body movements. Data-driven methods can interpret the information in the data by searching for rules, associations or patterns that represent the relationships between input (e.g. the human action acquired with sensors) and output (e.g. the type of action performed). Furthermore, these models may represent a new research frontier, as they can analyse large masses of data and focus on aspects that even an expert user might miss. The literature on data-driven models proposes two families of methods that can process time series and human movement. The first family, called shallow models, extracts features from the time series that can help the learning algorithm find associations in the data. These features are identified and designed by domain experts who can identify the best ones for the problem faced. The second family avoids this extraction phase by the human expert, since the models themselves can identify the best set of features to optimise the learning of the model. In this thesis, we will provide a method that can apply the multi-temporal-scale property of the human motion domain to deep learning models, the only data-driven models that can be extended to handle this property. We will ask ourselves two questions: what happens if we apply knowledge about how human movements are performed to deep learning models? Can this knowledge improve current automatic recognition standards? In order to prove the validity of our study, we collected data and tested our hypothesis in specially designed experiments. Results support both the proposal and the need for deep multi-scale models as a tool to better understand human movement and its multiple time-scale nature.
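
    One common way to encode multiple temporal scales in a deep model is to process the same skeleton sequence with parallel temporal-convolution branches at different dilations. The sketch below is an illustrative architecture under that assumption, not the specific models developed in the thesis; the joint count, channel sizes, and dilations are hypothetical.

# Illustrative multi-temporal-scale network for skeleton sequences (hypothetical design).
import torch
import torch.nn as nn

class MultiTemporalScaleNet(nn.Module):
    def __init__(self, n_joints=25, n_coords=3, n_classes=10, dilations=(1, 2, 4)):
        super().__init__()
        in_ch = n_joints * n_coords                     # flatten joints into channels
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv1d(in_ch, 64, kernel_size=3, dilation=d, padding=d),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),                # summarise each temporal scale
            ) for d in dilations
        ])
        self.classifier = nn.Linear(64 * len(dilations), n_classes)

    def forward(self, x):
        # x: (batch, n_joints * n_coords, n_frames) skeleton sequence
        feats = [branch(x).squeeze(-1) for branch in self.branches]
        return self.classifier(torch.cat(feats, dim=1))

model = MultiTemporalScaleNet()
skeleton_clip = torch.randn(8, 25 * 3, 120)             # batch of 8 clips, 120 frames
logits = model(skeleton_clip)
print(logits.shape)                                      # torch.Size([8, 10])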