23 research outputs found

    Convolutional neural network-based classification system design with compressed wireless sensor network images.

    No full text
    With the introduction of various advanced deep learning algorithms, image classification systems have transitioned from traditional machine learning algorithms (e.g., SVM) to Convolutional Neural Networks (CNNs) built with deep learning software tools. A prerequisite for applying CNNs to real-world applications is a system that collects meaningful and useful data. For such purposes, Wireless Image Sensor Networks (WISNs), which monitor natural environmental phenomena using tiny, low-power cameras on resource-limited embedded devices, can be an effective means of data collection. However, with limited battery resources, sending high-resolution raw images to the backend server is a burdensome task that directly impacts network lifetime. To address this problem, we propose an energy-efficient pre- and post-processing mechanism using image resizing and color quantization that can significantly reduce the amount of data transferred while maintaining the classification accuracy of the CNN at the backend server. We show that, if well designed, an image in its highly compressed form can be classified accurately by a CNN model trained in advance on adequately compressed data. Our evaluation using a real image dataset shows that an embedded device can reduce the amount of transmitted data by ∼71% while maintaining a classification accuracy of ∼98%. Under the same conditions, this process naturally reduces energy consumption by ∼71% compared to a WISN that sends the original uncompressed images.
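    The pre-processing step described above (spatial down-scaling followed by palette-based color quantization) can be sketched with off-the-shelf tooling. The snippet below is a minimal illustration using Pillow; the file name, target resolution, and palette size are assumptions for demonstration rather than values taken from the paper.

        # Minimal sketch (not the authors' code): resize an image and apply color
        # quantization with Pillow before wireless transmission.
        import io
        from PIL import Image

        def compress_for_transmission(path, size=(100, 100), colors=16):
            """Resize and palette-quantize an image, returning PNG-encoded bytes."""
            img = Image.open(path).convert("RGB")
            img = img.resize(size)                  # spatial down-sampling
            img = img.quantize(colors=colors)       # reduce palette to a few colors
            buf = io.BytesIO()
            img.save(buf, format="PNG")             # encode the palettized image
            return buf.getvalue()

        if __name__ == "__main__":
            payload = compress_for_transmission("nest_sample.jpg")  # hypothetical input file
            print(f"payload size: {len(payload)} bytes")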

    On-Device Filter Design for Self-Identifying Inaccurate Heart Rate Readings on Wrist-Worn PPG Sensors

    No full text
    The ubiquitous deployment of smart wearable devices brings promise for the effective implementation of various healthcare applications in our everyday living environments. However, given that these applications require accurate and reliable sensing of vital signs, there is a need to understand the accuracy of the healthcare sensing components (e.g., heart rate sensors) on commercial off-the-shelf wearable devices. This work presents a thorough investigation of the accuracy of heart rate sensors on three widely used smartwatch platforms. We show that heart rate readings can easily diverge from the ground truth when users are actively moving. Moreover, we show that the accelerometer is not an effective secondary sensing modality for predicting the accuracy of such smartwatch-embedded sensors. Instead, we show that the photoplethysmography (PPG) sensor's light intensity readings are a plausible indicator for determining the accuracy of optical sensor-based heart rate readings. Based on these observations, this work presents a lightweight Viterbi-algorithm-based Hidden Markov Model filter that identifies reliable heart rate measurements using only the limited computational resources available on smartwatches. Our evaluations with data collected from four participants show that the accuracy of the proposed scheme can be as high as 98%. By enabling the smartwatch to self-filter misleading measurements before they become healthcare application inputs, we see this work as an essential module for catalyzing novel ubiquitous healthcare applications.
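    The filtering idea described above can be illustrated with a small Viterbi decoder. The sketch below is a toy under stated assumptions, not the paper's implementation: it decodes a two-state HMM ("reliable" vs. "unreliable" reading) from a sequence of discretized PPG light-intensity observations, with all probabilities chosen as placeholders.

        # Toy Viterbi decoding for a two-state HMM over discretized PPG light-intensity
        # observations (bins: 0 = low, 1 = mid, 2 = high). Probabilities are illustrative.
        import numpy as np

        STATES = ["reliable", "unreliable"]
        start_p = np.log([0.8, 0.2])              # initial state probabilities
        trans_p = np.log([[0.9, 0.1],             # P(next state | current state)
                          [0.3, 0.7]])
        emit_p  = np.log([[0.7, 0.2, 0.1],        # P(intensity bin | state)
                          [0.1, 0.3, 0.6]])

        def viterbi(obs):
            """Return the most likely state label for every observation."""
            T, N = len(obs), len(STATES)
            dp = np.full((T, N), -np.inf)
            back = np.zeros((T, N), dtype=int)
            dp[0] = start_p + emit_p[:, obs[0]]
            for t in range(1, T):
                for j in range(N):
                    scores = dp[t - 1] + trans_p[:, j]
                    back[t, j] = int(np.argmax(scores))
                    dp[t, j] = scores[back[t, j]] + emit_p[j, obs[t]]
            path = [int(np.argmax(dp[-1]))]
            for t in range(T - 1, 0, -1):
                path.append(back[t, path[-1]])
            return [STATES[s] for s in reversed(path)]

        print(viterbi([0, 0, 2, 2, 1, 0]))  # flags the high-intensity stretch as unreliable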

    Poster Abstract: Accurately measuring heart rate using smart watch

    No full text
    Smart watches are increasingly being used in various applications to monitor heart rate for exercise and health care purposes. It is crucial that the readings from these devices are accurate so that users can take proper actions according to the intensity of the heart rate. Taking actions based on inaccurate readings can negatively impact the health of the user. In this work, we run a preliminary study that verifies the accuracy of wearable platforms by comparing their measurements with those of a clinical-grade device. © 2016 Copyright held by the owner/author(s).
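    As a sketch of the kind of comparison such a study involves (an assumption for illustration, not the poster's actual procedure or data), mean absolute error and mean absolute percentage error between time-aligned smartwatch and reference readings can be computed as follows.

        # Hypothetical comparison of smartwatch heart-rate readings against a
        # time-aligned clinical-grade reference (values are illustrative only).
        watch_hr     = [72, 75, 80, 110, 95, 88]
        reference_hr = [70, 74, 82, 101, 97, 90]

        mae  = sum(abs(w - r) for w, r in zip(watch_hr, reference_hr)) / len(reference_hr)
        mape = 100 * sum(abs(w - r) / r for w, r in zip(watch_hr, reference_hr)) / len(reference_hr)

        print(f"MAE: {mae:.1f} bpm, MAPE: {mape:.1f}%")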

    SSIM of sample images after color quantization with 32, 16, and 8 colors for four classes of images: (a) Bird Presence, (b) Empty nest, (c) Egg Laying, (d) Child Bird.

    No full text
    The image sizes for each case are: Original: 40 KB, 32 colors: 25 KB, 16 colors: 20 KB, and 8 colors: 15 KB. Note: Original images are from [7] (http://www.plosone.org/article/info:doi/10.1371/journal.pone.0196251#pone.0196251.ref007).
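    SSIM between an original image and its color-quantized versions, as reported in the figure above, can be reproduced in outline with Pillow and scikit-image. The snippet below is a minimal sketch; the file name is a placeholder, and only the palette sizes mirror those in the caption.

        # Minimal sketch: SSIM between an original image and its color-quantized versions.
        import numpy as np
        from PIL import Image
        from skimage.metrics import structural_similarity as ssim

        original = Image.open("nest_sample.jpg").convert("RGB")   # hypothetical sample image
        for colors in (32, 16, 8):
            quantized = original.quantize(colors=colors).convert("RGB")
            score = ssim(np.asarray(original), np.asarray(quantized), channel_axis=-1)
            print(f"{colors} colors: SSIM = {score:.3f}")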

    Characteristics of well-known convolutional neural network architectures.

    No full text

    Operation time and energy usage measurements for wireless transmission of images with sizes ranging from 200 × 200 to 100 × 100.

    No full text

    Image transmission latency and energy usage for differently compressed images.

    No full text

    Mean SSIM between original images and modified images after color quantization.

    No full text

    Number of image samples in the training, validation, and test sets for each class.

    No full text