ActiveSelfHAR: Incorporating Self Training into Active Learning to Improve Cross-Subject Human Activity Recognition
Deep learning-based human activity recognition (HAR) methods have shown great
promise in the applications of smart healthcare systems and wireless body
sensor network (BSN). Despite their demonstrated performance in laboratory
settings, the real-world implementation of such methods is still hindered by
the cross-subject issue when adapting to new users. To solve this issue, we
propose ActiveSelfHAR, a framework that combines active learning's benefit of
sparsely acquiring data with actual labels and self-training's benefit of
effectively utilizing unlabeled data, enabling the deep model to adapt to the
target domain, i.e., the new users. In this framework, the model trained in the
last iteration or the source domain is first utilized to generate pseudo labels
of the target-domain samples and construct a self-training set based on the
confidence score. Second, we propose to use the spatio-temporal relationships
among the samples in the non-self-training set to augment the core set selected
by active learning. Finally, we combine the self-training set and the augmented
core set to fine-tune the model. We demonstrate our method by comparing it with
state-of-the-art methods on two IMU-based datasets and an EMG-based dataset.
Our method achieves HAR accuracy comparable to the upper bound, i.e., fully
supervised fine-tuning, while using less than 1% of the labeled data in the
target dataset, and significantly improves data efficiency and time cost. Our
work highlights the potential of deploying user-independent HAR methods in
smart healthcare systems and BSNs.
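The adaptation loop described in the abstract — pseudo-labeling target-domain samples by confidence, then actively labeling a small core set from the low-confidence remainder — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the confidence threshold, and the lowest-confidence selection rule are assumptions, and the paper's spatio-temporal augmentation of the core set is omitted.

```python
import numpy as np

def build_adaptation_set(probs, label_budget, conf_threshold=0.9):
    """Split target-domain samples into a pseudo-labeled self-training set
    and an actively labeled core set (illustrative sketch).

    probs: (N, C) class probabilities from the source/previous model.
    label_budget: number of true labels the oracle may provide.
    conf_threshold: minimum confidence for a pseudo label (assumed value).
    """
    confidence = probs.max(axis=1)
    pseudo_labels = probs.argmax(axis=1)

    # Self-training set: samples whose prediction confidence clears the threshold;
    # these keep their pseudo labels.
    self_idx = np.where(confidence >= conf_threshold)[0]

    # Core set: among the remaining samples, send the least confident ones
    # (up to the budget) to the user/oracle for true labels.
    rest_idx = np.where(confidence < conf_threshold)[0]
    core_idx = rest_idx[np.argsort(confidence[rest_idx])][:label_budget]

    return self_idx, pseudo_labels[self_idx], core_idx
```

The model would then be fine-tuned on the union of the pseudo-labeled self-training set and the (augmented) core set, and the loop repeated with the updated model.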
A comprehensive survey on recent deep learning-based methods applied to surgical data
Minimally invasive surgery is highly operator-dependent, and its lengthy
procedural time causes surgeon fatigue and patient risks such as organ injury,
infection, bleeding, and anesthesia complications. To mitigate such risks,
real-time systems that provide intra-operative guidance to surgeons are
desirable. For example, an automated system for tool
localization, tool (or tissue) tracking, and depth estimation can enable a
clear understanding of surgical scenes preventing miscalculations during
surgical procedures. In this work, we present a systematic review of recent
machine learning-based approaches including surgical tool localization,
segmentation, tracking, and 3D scene perception. Furthermore, we provide a
detailed overview of publicly available benchmark datasets widely used for
surgical navigation tasks. While recent deep learning architectures have shown
promising results, there are still several open research problems such as a
lack of annotated datasets, the presence of artifacts in surgical scenes, and
non-textured surfaces that hinder 3D reconstruction of the anatomical
structures. Based on our comprehensive review, we present a discussion on
current gaps and needed steps to improve the adaptation of technology in
surgery.
Comment: This paper is to be submitted to the International Journal of Computer
Vision.