
    Breathing feedback system with wearable textile sensors

    Breathing exercises form an essential part of the treatment for respiratory illnesses such as cystic fibrosis. Ideally, these exercises should be performed on a daily basis. This paper presents an interactive system that uses a wearable textile sensor to monitor breathing patterns. A graphical user interface provides real-time visual feedback to patients. The aim of the system is to encourage the correct performance of prescribed breathing exercises by monitoring the rate and depth of breathing. The system is straightforward to use, low-cost, and can be installed easily within a clinical setting or in the home. The wearable sensor provides real-time feedback as the user performs the exercises, allowing them to be performed independently. There is also potential for remote monitoring, whereby a clinician can assess the user's overall performance over time.

    A machine learning approach towards detecting dementia based on its modifiable risk factors

    Dementia is considered one of the greatest global health and social care challenges of the 21st century. Fortunately, dementia can be delayed or possibly prevented by changes in lifestyle, as dictated through known modifiable risk factors. These risk factors include low education, hypertension, obesity, hearing loss, depression, diabetes, physical inactivity, smoking, and social isolation. Other risk factors, such as aging and genetics, are non-modifiable. The main goal of this study is to demonstrate how machine learning methods can help predict dementia based on an individual's modifiable risk factor profile. We use publicly available datasets to train algorithms that predict a participant's cognitive state diagnosis as cognitively normal, mild cognitive impairment, or dementia. Several approaches were implemented using data from the Alzheimer's Disease Neuroimaging Initiative (ADNI) longitudinal study. The best classification results were obtained using both the Lancet and the LIBRA risk factor lists via longitudinal datasets, which outperformed cross-sectional baseline datasets. Moreover, using only data from the most recent visits provided even better results than using the complete longitudinal set. Binary classification (dementia vs non-dementia) yielded approximately 92% accuracy, while the full multi-class prediction yielded 77% accuracy using logistic regression; random forest followed with 92% and 70% respectively. The results demonstrate the utility of machine learning in the prediction of cognitive impairment based on modifiable risk factors and may encourage interventions to reduce the prevalence or severity of the condition in large populations.
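The kind of pipeline this abstract describes can be sketched as follows. This is not the authors' actual code or the ADNI data: the feature columns and the synthetic labels are assumptions, standing in for the modifiable risk factor profile, and the binary dementia/non-dementia setting with logistic regression and random forest mirrors the models named in the abstract.

```python
# Sketch: binary dementia vs non-dementia classification from
# modifiable risk factors (hypothetical synthetic data, not ADNI).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 500
# Nine columns stand in for risk factors: education, hypertension,
# obesity, hearing loss, depression, diabetes, inactivity, smoking,
# social isolation.
X = rng.normal(size=(n, 9))
# Synthetic label: risk rises with a weighted sum of the factors.
y = (X @ rng.normal(size=9) + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
accs = []
for model in (LogisticRegression(max_iter=1000),
              RandomForestClassifier(n_estimators=100, random_state=0)):
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    accs.append(acc)
    print(type(model).__name__, round(acc, 3))
```

On real risk-factor data the features would be a mix of binary and continuous clinical variables rather than Gaussian noise, but the fit/predict/score loop is the same.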

    Web-based sensor streaming wearable for respiratory monitoring applications

    This paper presents a system for remote monitoring of respiration of individuals that can detect respiration rate and mode of breathing, and identify coughing events. It comprises a series of polymer fabric sensors incorporated into a sports vest, a wearable data acquisition platform, and a novel rich internet application (RIA) which together enable remote real-time monitoring of untethered wearable systems for respiratory rehabilitation. This system will, for the first time, allow therapists to monitor and guide the respiratory efforts of patients in real time through a web browser. Changes in abdomen expansion and contraction associated with respiration are detected by the fabric sensors and transmitted wirelessly via a Bluetooth-based solution to a standard computer. The respiratory signals are visualized locally through the RIA and subsequently published to a sensor-streaming cloud-based server. A web-based signal streaming protocol makes the signals available as real-time streams to authorized subscribers over standard browsers. We demonstrate real-time streaming of a six-sensor shirt rendered remotely at 40 samples/s per sensor with perceptually acceptable latency (<0.5 s) over realistic network conditions.
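The figures quoted above (six sensors, 40 samples/s each, <0.5 s latency) imply a simple budget for how much buffering a streaming protocol can afford before sending. The sketch below works that out; the batching scheme is an assumption for illustration, not the paper's actual protocol.

```python
# Sketch: latency budget for streaming a six-sensor shirt at
# 40 samples/s per sensor. The batch-before-send scheme is an
# assumed protocol detail, not taken from the paper.
SENSORS = 6
RATE_HZ = 40           # samples per second per sensor
LATENCY_BUDGET_S = 0.5 # perceptually acceptable end-to-end latency

def batching_delay(batch_samples, rate_hz=RATE_HZ):
    """Worst-case delay added by buffering a batch before sending."""
    return batch_samples / rate_hz

# Aggregate sample rate across all sensors, and the largest
# per-sensor batch whose buffering delay still fits the budget.
total_rate = SENSORS * RATE_HZ
max_batch = int(LATENCY_BUDGET_S * RATE_HZ)
print(total_rate, max_batch, batching_delay(max_batch))
# → 240 20 0.5
```

So at most 20 samples per sensor can be buffered per packet, and even that consumes the whole 0.5 s budget before any network delay; real batches would be smaller.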

    An investigation of triggering approaches for the rapid serial visual presentation paradigm in brain computer interfacing

    The rapid serial visual presentation (RSVP) paradigm is a method that can be used to extend the P300-based brain computer interface (BCI) approach to enable high-throughput target image recognition applications. The method requires high temporal resolution and hence, generating reliable and accurate stimulus triggers is critical for high-performance execution. The traditional RSVP paradigm is normally deployed on two computers, where software triggers generated at runtime by the image presentation software on a presentation computer are acquired along with the raw electroencephalography (EEG) signals by a dedicated data acquisition system connected to a second computer. It is often assumed that the stimulus presentation timing as acquired via events arising in the stimulus presentation code is an accurate reflection of the physical stimulus presentation. This is not necessarily the case due to various and variable latencies that may arise in the overall system. This paper describes a study to investigate, in a representative RSVP implementation, whether or not software-derived stimulus timing can be considered an accurate reflection of the physical stimulus timing. To investigate this, we designed a simple circuit consisting of a light diode resistor comparator circuit (LDRCC) for recording the physical presentation of stimuli and which in turn generates what we refer to as hardware-triggered events. These hardware-triggered events constitute a measure of ground truth and are captured along with the corresponding stimulus presentation command timing events for comparison. Our experimental results show that using software-derived timing only may introduce uncertainty as to the true presentation times of the stimuli, and this uncertainty is itself highly variable, at least in the representative implementation described here. For BCI protocols such as those utilizing RSVP, the uncertainty introduced will impair performance, and we recommend the use of additional circuitry to capture the physical presentation of stimuli; these hardware-derived triggers should instead constitute the event markers used for subsequent analysis of the EEG.
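The comparison the study performs can be illustrated numerically. In the sketch below the hardware (photodiode-derived) and software event timestamps are simulated rather than measured, with a made-up latency distribution; the point is only the analysis step, i.e. characterising both the mean lag and its variability (jitter), since a constant offset could be corrected but variable latency smears the EEG epochs.

```python
# Sketch: quantifying software-trigger latency against hardware
# ground truth. Timestamps are simulated here; in the study they
# came from the LDRCC circuit and the presentation software.
import numpy as np

rng = np.random.default_rng(1)
n_stimuli = 200
# Physical stimulus onsets, ~5-10 Hz RSVP rate (seconds).
hardware_t = np.cumsum(rng.uniform(0.1, 0.2, n_stimuli))
# Software events fire earlier than the physical display by a
# variable (assumed 5-40 ms) rendering/propagation latency.
software_t = hardware_t - rng.uniform(0.005, 0.040, n_stimuli)

latency = hardware_t - software_t   # per-stimulus lag (s)
jitter = latency.std()              # the variability is the real problem
print(round(latency.mean() * 1000, 1), "ms mean,",
      round(jitter * 1000, 1), "ms sd")
```

With real recordings the same two lines of arithmetic would reveal whether software triggers are trustworthy: a small, constant lag is benign, while a jitter comparable to the inter-stimulus interval is not.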

    Overview of NTCIR-13 NAILS task

    In this paper we review the NAILS (Neurally Augmented Image Labelling Strategies) pilot task at NTCIR-13. We describe a first-of-its-kind RSVP (Rapid Serial Visual Presentation) EEG (electroencephalography) dataset released as part of the NTCIR-13 participation conference, and the results of the participating organisations, who benchmarked machine-learning strategies against each other using the provided unlabelled test data.

    Wearable sensors and feedback system to improve breathing technique

    Breathing is an important factor in our well-being, as it oxygenates the body and revitalizes organs, cells and tissues. It is a unique physiological system in that it is both voluntary and involuntary. By breathing in a slow, deep and regular manner, the heartbeat becomes smooth and regular, blood pressure normalizes, stress hormones drop, and muscles relax. Breathing techniques are important for athletes to improve performance and reduce anxiety during competitions. Patients with respiratory illnesses often tend to take shallow, short breaths, causing chest muscle weakness, reduced oxygen circulation, shortness of breath and fatigue. Proper breathing exercises can help to reduce these symptoms as well as strengthen muscles, improve posture and mental ability. This work presents a wearable system which monitors breathing technique and provides straightforward feedback to the user through a graphical interface.

    A machine vision approach to human activity recognition using photoplethysmograph sensor data

    Human activity recognition (HAR) is an active area of research concerned with the classification of human motion. Cameras are the gold standard used in this area, but they have well-known scalability and privacy issues. HAR studies have also been conducted with wearable devices consisting of inertial sensors. Smart watches, perhaps the most common wearables, comprise inertial and optical sensors and allow for scalable, non-obtrusive studies. We seek to simplify this wearable approach further by determining whether wrist-mounted optical sensing, usually used for heart rate determination, can also provide useful data for relevant activity recognition. If successful, this could eliminate the need for the inertial sensor and so simplify the technological requirements in wearable HAR. We adopt a machine vision approach for activity recognition based on plots of the optical signals, so as to produce classifications that are easily explainable and interpretable by non-technical users. Specifically, time-series images of photoplethysmography signals are used to retrain the penultimate layer of a pretrained convolutional neural network, leveraging the concept of transfer learning. Our results demonstrate an average accuracy of 75.8%. This illustrates the feasibility of implementing an optical-sensor-only solution for a coarse activity and heart rate monitoring system. Implementing only an optical sensor in the design of these wearables leads to a trade-off in classification performance but, in turn, grants the potential to simplify the overall design of activity monitoring and classification systems in the future.
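The first step of the approach described above, turning a 1-D photoplethysmography window into an image a pretrained CNN can consume, can be sketched without any deep-learning dependency. The synthetic sine-based "PPG" signal and the 64x128 image size below are assumptions for illustration; the paper plots real sensor windows.

```python
# Sketch: rasterising a PPG window into an image so a pretrained
# CNN can classify it. Signal and image size are illustrative
# assumptions, not the paper's actual preprocessing parameters.
import numpy as np

def signal_to_image(signal, height=64):
    """Render a 1-D signal as a binary image, one column per sample."""
    sig = np.asarray(signal, dtype=float)
    sig = (sig - sig.min()) / (np.ptp(sig) + 1e-9)    # scale to [0, 1]
    rows = ((1.0 - sig) * (height - 1)).astype(int)   # row 0 = top of plot
    img = np.zeros((height, len(sig)), dtype=np.uint8)
    img[rows, np.arange(len(sig))] = 255              # draw the trace
    return img

# Stand-in for a PPG window: fundamental pulse plus a harmonic.
t = np.linspace(0, 4, 128)
ppg = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
img = signal_to_image(ppg)
print(img.shape)  # → (64, 128)
```

The resulting array would then be resized to the CNN's expected input (e.g. 224x224 for ImageNet-trained networks) before feature extraction.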

    Predicting media memorability using ensemble models

    Memorability, defined as the quality of being worth remembering, is a pressing issue in media as we struggle to organize and retrieve digital content and make it more useful in our daily lives. The Predicting Media Memorability task in MediaEval 2019 tackles this problem by creating a challenge to automatically predict memorability scores, building on the work developed in 2018. Our team ensembled transfer learning approaches with video captions using embeddings and our own pre-computed features, which outperformed MediaEval 2018's state-of-the-art architectures.
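At its simplest, the ensembling step mentioned above can be a weighted average of per-model memorability scores. The sketch below illustrates that score-level combination; the two model names, the weights, and the score values are made-up placeholders, not the team's actual MediaEval submission.

```python
# Sketch: a simple score-level ensemble for memorability prediction.
# Model names, weights and scores are hypothetical placeholders.
import numpy as np

def ensemble(scores, weights):
    """Weighted average of per-model memorability predictions."""
    scores = np.asarray(scores, dtype=float)   # shape (n_models, n_videos)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()          # normalise to sum to 1
    return weights @ scores                    # one score per video

cnn_scores     = [0.80, 0.55, 0.90]   # e.g. an image-feature model
caption_scores = [0.70, 0.60, 0.85]   # e.g. a caption-embedding model
combined = ensemble([cnn_scores, caption_scores], weights=[2, 1])
print(np.round(combined, 3))  # → [0.767 0.567 0.883]
```

In practice the weights would be tuned on a validation split against the task's Spearman-correlation metric rather than fixed by hand.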

    An interpretable machine vision approach to human activity recognition using photoplethysmograph sensor data

    The current gold standard for human activity recognition (HAR) is based on the use of cameras. However, the poor scalability of camera systems renders them impractical in pursuit of the goal of wider adoption of HAR in mobile computing contexts. Consequently, researchers instead rely on wearable sensors, and in particular inertial sensors. A particularly prevalent wearable is the smart watch, which due to its integrated inertial and optical sensing capabilities holds great potential for realising better HAR in a non-obtrusive way. This paper seeks to simplify the wearable approach to HAR by determining whether the wrist-mounted optical sensor typically found in a smartwatch or similar device can alone serve as a useful source of data for activity recognition. The approach has the potential to eliminate the need for the inertial sensing element, which would in turn reduce the cost and complexity of smartwatches and fitness trackers. This could potentially commoditise the hardware requirements for HAR while retaining the functionality of both heart rate monitoring and activity capture, all from a single optical sensor. Our approach relies on the adoption of machine vision for activity recognition based on suitably scaled plots of the optical signals. We take this approach so as to produce classifications that are easily explainable and interpretable by non-technical users. More specifically, images of photoplethysmography signal time series are used to retrain the penultimate layer of a convolutional neural network which has initially been trained on the ImageNet database. We then use the 2048-dimensional features from the penultimate layer as input to a support vector machine. Results from the experiment yielded an average classification accuracy of 92.3%. This result outperforms that of an optical and inertial sensor combined (78%) and illustrates the capability of HAR systems using standalone optical sensing elements, which also allows for both HAR and heart rate monitoring. Finally, we demonstrate, through the use of tools from research in explainable AI, how this machine vision approach lends itself to more interpretable machine learning output.
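The final classification stage described above, feeding 2048-dimensional penultimate-layer CNN features into a support vector machine, can be sketched as follows. The features here are random stand-ins for real network activations, and the four activity classes and class structure are assumptions for illustration only.

```python
# Sketch: penultimate-layer CNN features -> SVM classifier, as the
# abstract describes. Features are synthetic stand-ins for the
# 2048-dimensional ImageNet-network activations.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_per_class, dim, n_classes = 60, 2048, 4   # assumed activity classes
# Give each class its own feature centroid plus Gaussian noise,
# mimicking class-separable CNN embeddings.
X = np.vstack([rng.normal(loc=c, scale=2.0, size=(n_per_class, dim))
               for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          random_state=0, stratify=y)
clf = SVC(kernel="linear").fit(X_tr, y_tr)
score = clf.score(X_te, y_te)
print(round(score, 3))
```

A linear kernel is a common default on high-dimensional CNN features, where classes are often already close to linearly separable; with real embeddings one would cross-validate the kernel and regularisation parameter `C`.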