ADMarker: A Multi-Modal Federated Learning System for Monitoring Digital Biomarkers of Alzheimer's Disease
Alzheimer's Disease (AD) and related dementias are a growing global health
challenge due to the aging population. In this paper, we present ADMarker, the
first end-to-end system that integrates multi-modal sensors and new federated
learning algorithms for detecting multidimensional AD digital biomarkers in
natural living environments. ADMarker features a novel three-stage multi-modal
federated learning architecture that can accurately detect digital biomarkers
in a privacy-preserving manner. Our approach collectively addresses several
major real-world challenges, such as limited data labels, data heterogeneity,
and limited computing resources. We built a compact multi-modality hardware
system and deployed it in a four-week clinical trial involving 91 elderly
participants. The results indicate that ADMarker can accurately detect a
comprehensive set of digital biomarkers with up to 93.8% accuracy and identify
early AD with an average of 88.9% accuracy. ADMarker offers a new platform that
can allow AD clinicians to characterize and track the complex correlation
between multidimensional interpretable digital biomarkers, demographic factors
of patients, and AD diagnosis in a longitudinal manner.
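The abstract does not give implementation details of ADMarker's three-stage architecture. As a minimal illustrative sketch, the generic federated-averaging step that privacy-preserving systems of this kind build on can be written as follows (the function name, client weights, and sample counts are hypothetical, not from the paper):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    Generic sketch: the server aggregates locally trained weights,
    so raw sensor data never leaves the participant's device -- the
    core privacy-preserving idea behind federated learning.
    """
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three simulated clients, each holding a local parameter vector.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 20, 70]  # number of local training samples per client
global_w = fedavg(clients, sizes)  # clients with more data weigh more
```

Weighting by local sample count keeps the aggregate unbiased when data is unevenly distributed across participants, one of the heterogeneity challenges the paper mentions.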
An Unsupervised Framework for Online Spatiotemporal Detection of Activities of Daily Living by Hierarchical Activity Models
Automatic detection and analysis of human activities captured by various sensors (e.g., sequences of images captured by an RGB camera) play an essential role in various research fields, helping to understand the semantic content of a captured scene. Earlier studies have focused largely on the supervised classification problem, where a label is assigned to a given short clip. Nevertheless, in real-world scenarios such as Activities of Daily Living (ADL), the challenge is to automatically browse long-term (days and weeks) streams of video to identify segments whose semantics correspond to the model activities, along with their temporal boundaries. This paper proposes an unsupervised solution to this problem by generating hierarchical models that combine global trajectory information with local dynamics of the human body. Global information helps in modeling the spatiotemporal evolution of long-term activities and hence their spatial and temporal localization. Moreover, the local dynamic information incorporates complex local motion patterns of daily activities into the models. Our proposed method is evaluated using realistic datasets captured in observation rooms in hospitals and nursing homes. The experimental data on a variety of monitoring scenarios in hospital settings reveal how this framework can be exploited to provide timely diagnosis and medical interventions for cognitive disorders such as Alzheimer's disease. The obtained results show that our framework is a promising attempt, capable of generating activity models without any supervision.
Self-Attention Temporal Convolutional Network for Long-Term Daily Living Activity Detection
In this paper, we address the detection of daily living activities in long-term untrimmed videos. The detection of daily living activities is challenging due to their long temporal components, low inter-class variation, and high intra-class variation. To tackle these challenges, recent approaches based on Temporal Convolutional Networks (TCNs) have been proposed. Such methods can capture long-term temporal patterns using a hierarchy of temporal convolutional filters, pooling, and upsampling steps. However, as an inherent property of convolutional networks, TCNs process a local neighborhood across time, which leads to inefficiency in modeling the long-range dependencies between the temporal patterns of a video. In this paper, we propose the Self-Attention Temporal Convolutional Network (SA-TCN), which is able to capture both complex activity patterns and their dependencies within long-term untrimmed videos. We evaluate our proposed model on the DAily Home LIfe Activity (DAHLIA) and Breakfast datasets. Our proposed method achieves state-of-the-art performance on both DAHLIA and Breakfast.
The Evolution of First Person Vision Methods: A Survey
The emergence of new wearable technologies such as action cameras and
smart-glasses has increased the interest of computer vision scientists in the
First Person perspective. Nowadays, this field is attracting attention and
investments of companies aiming to develop commercial devices with First Person
Vision recording capabilities. Given this interest, an increasing demand for methods to process these videos, possibly in real time, is expected. Current approaches present particular combinations of different image features and quantitative methods to accomplish specific objectives such as object detection, activity recognition, and user-machine interaction. This paper summarizes the evolution of the state of the art in First Person Vision video analysis between 1997 and 2014, highlighting, among other things, the most commonly used features, methods, challenges, and opportunities within the field.
Perceptually-guided deep neural networks for ego-action prediction: Object grasping
We tackle the problem of predicting a grasping action in egocentric video for the assistance of upper-limb amputees. Our work is based on paradigms of neuroscience stating that human gaze expresses intention and anticipates actions. In our scenario, human gaze fixations are recorded by a glasses-worn eye-tracker and then used to predict grasping actions. We have studied two aspects of the problem: which object from a given taxonomy will be grasped, and when is the moment to trigger the grasping action. To recognize objects, we use gaze to guide Convolutional Neural Networks (CNNs) to focus on an object-to-grasp area. However, the acquired sequence of fixations is noisy due to saccades toward distractors and visual fatigue, and gaze is not always reliably directed toward the object of interest. To deal with this challenge, we use video-level annotations indicating the object to be grasped and a weak loss in deep CNNs. To detect the moment when a person will take an object, we take advantage of the predictive power of Long Short-Term Memory networks to analyze gaze and visual dynamics. Results show that our method achieves better performance than other approaches on a real-life dataset.
Context-awareness for mobile sensing: a survey and future directions
The evolution of smartphones, together with their increasing computational power, has empowered developers to create innovative context-aware applications for recognizing user-related social and cognitive activities in any situation and at any location. Awareness of context provides the capability of being conscious of the physical environment or situation around mobile device users, allowing network services to respond proactively and intelligently. The key idea behind context-aware applications is to encourage users to collect, analyze, and share local sensory knowledge for large-scale community use by creating a smart network. The desired network is capable of making autonomous logical decisions to actuate environmental objects and of assisting individuals. However, many open challenges remain, which arise mostly because the middleware services provided on mobile devices have limited resources in terms of power, memory, and bandwidth. It therefore becomes critically important to study how these drawbacks can be analyzed and resolved, and at the same time to better understand the opportunities for the research community to contribute to context-awareness. To this end, this paper surveys the literature over the period 1991-2014, from emerging concepts to applications of context-awareness on mobile platforms, providing up-to-date research and future research directions. Moreover, it points out the challenges faced in this regard and addresses them by proposing possible solutions.