Activity Classification Using Unsupervised Domain Transfer from Body Worn Sensors
Activity classification has become a vital feature of wearable health
tracking devices. As innovation in this field grows, wearable devices worn on
different parts of the body are emerging. To perform activity classification on
a new body location, labeled data corresponding to the new locations are
generally required, but this is expensive to acquire. In this work, we present
an innovative method to leverage an existing activity classifier, trained on
Inertial Measurement Unit (IMU) data from a reference body location (the source
domain), in order to perform activity classification on a new body location
(the target domain) in an unsupervised way, i.e. without the need for
classification labels at the new location. Specifically, given an IMU embedding
model trained to perform activity classification at the source domain, we train
an embedding model to perform activity classification at the target domain by
replicating the embeddings at the source domain. This is achieved using
simultaneous IMU measurements at the source and target domains. The replicated
embeddings at the target domain are used by a classification model that has
previously been trained on the source domain to perform activity classification
at the target domain. We evaluated the proposed method on three activity
classification datasets: PAMAP2, MHealth, and Opportunity, yielding high F1
scores of 67.19%, 70.40%, and 68.34%, respectively, when the source domain is
the wrist and the target domain is the torso.
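The embedding-replication step described in the abstract amounts to a distillation objective: a frozen source-domain model produces embeddings from wrist IMU windows, and a target-domain model is trained to reproduce those embeddings from simultaneously recorded torso windows. The models, shapes, and data below are hypothetical stand-ins for illustration, not the paper's architecture:

```python
import numpy as np

def replication_loss(source_emb, target_emb):
    """Mean squared error between source-domain and target-domain embeddings."""
    return float(np.mean((source_emb - target_emb) ** 2))

# Simultaneous IMU windows from both body locations (hypothetical shapes:
# batch x time x channels, e.g. 3-axis accelerometer + 3-axis gyroscope).
rng = np.random.default_rng(0)
src_windows = rng.normal(size=(32, 128, 6))   # wrist IMU (source domain)
tgt_windows = rng.normal(size=(32, 128, 6))   # torso IMU, recorded at the same time

def source_model(x):
    # Stand-in for the frozen embedding model pre-trained on wrist data.
    return x.mean(axis=1)

def target_model(x):
    # Stand-in for the model being trained to replicate source embeddings.
    return x.mean(axis=1)

# The training signal: make target embeddings match source embeddings,
# so the source-domain classifier can be reused at the new body location.
loss = replication_loss(source_model(src_windows), target_model(tgt_windows))
```

In practice the loss would be minimized by gradient descent over the target model's parameters while the source model and the downstream classifier stay fixed.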
Comparing clothing-mounted sensors with wearable sensors for movement analysis and activity classification
Inertial sensors are a useful instrument for long term monitoring in healthcare. In many cases, inertial sensor devices can be worn as an accessory or integrated into smart textiles. In some situations, it may be beneficial to have data from multiple inertial sensors, rather than relying on a single worn sensor, since this may increase the accuracy of the analysis and better tolerate sensor errors. Integrating multiple sensors into clothing improves the feasibility and practicality of wearing multiple devices every day, in approximately the same location, with less likelihood of incorrect sensor orientation. To facilitate this, the current work investigates the consequences of attaching lightweight sensors to loose clothes. The intention of this paper is to discuss how data from these clothing sensors compare with similarly placed body worn sensors, with additional consideration of the resulting effects on activity recognition. This study compares the similarity between the two signals (body worn and clothing), collected from three different clothing types (slacks, pencil skirt and loose frock), across multiple daily activities (walking, running, sitting, and riding a bus) by calculating correlation coefficients for each sensor pair. Even though the two data streams are clearly different from each other, the results indicate that there is good potential for achieving high classification accuracy when using inertial sensors in clothing.
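The signal comparison described above reduces to computing a correlation coefficient per body-worn/clothing sensor pair. A minimal sketch with synthetic data follows; the additive-noise model for loose-fabric motion is an assumption for illustration, not taken from the study:

```python
import numpy as np

def pairwise_correlation(body_signal, clothing_signal):
    """Pearson correlation between one body-worn and one clothing-mounted channel."""
    r = np.corrcoef(body_signal, clothing_signal)[0, 1]
    return float(r)

# Synthetic example: the clothing signal follows the body signal, plus extra
# motion of the loose fabric modeled here as Gaussian noise.
t = np.linspace(0, 10, 500)
body = np.sin(2 * np.pi * 1.5 * t)  # e.g. a thigh accelerometer axis while walking
clothing = body + 0.4 * np.random.default_rng(1).normal(size=t.size)

r = pairwise_correlation(body, clothing)
```

Repeating this over every sensor pair and activity yields the correlation table the study uses to judge how well clothing-mounted sensors track their body-worn counterparts.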
Activity recognition in naturalistic environments using body-worn sensors
PhD Thesis. The research presented in this thesis investigates how deep learning and feature learning
can address challenges that arise for activity recognition systems in naturalistic, ecologically
valid surroundings such as the private home. One of the main aims of ubiquitous
computing is the development of automated recognition systems for human activities
and behaviour that are sufficiently robust to be deployed in realistic, in-the-wild environments.
In most cases, the targeted application scenarios are people’s daily lives,
where systems have to abide by practical usability and privacy constraints. We discuss
how these constraints impact data collection and analysis and demonstrate how common
approaches to the analysis of movement data effectively limit the practical use of
activity recognition systems in every-day surroundings. In light of these issues we develop
a novel approach to the representation and modelling of movement data based on
a data-driven methodology that has applications in activity recognition, behaviour imaging,
and skill assessment in ubiquitous computing. A number of case studies illustrate
the suitability of the proposed methods and outline how study design can be adapted
to maximise the benefit of these techniques, which show promising performance for
clinical applications in particular.
Mobile health systems and emergence
Changes in the age distribution of the population and increased prevalence of chronic illnesses, together with a shortage of health professionals and other resources, will increasingly challenge the ability of national healthcare systems to meet rising demand for services. Large-scale use of eHealth and mHealth services enabled by advances in ICT are frequently cited as providing part of the solution to this crisis in future provision. As part of this picture, self-monitoring and remote monitoring of patients, for example by means of smartphone apps and body-worn sensors, is on the way to becoming mainstream. In future, each individual’s personal health system may be able to access a large number of devices, including sensors embedded in the environment as well as in-body smart medical implants, in order to provide (semi-)autonomous health-related services to the user. This article presents some examples of mHealth systems based on emerging technologies, including body area networks (BANs), wireless and mobile technologies, miniature body-worn sensors and distributed decision support. Applications are described in the areas of management of chronic illnesses and management of (large-scale) emergency situations. In the latter setting BANs form part of an advanced ICT system proposed for future major incident management, including BANs for monitoring casualties and emergency services personnel during first response. Some challenges and possibilities arising from current and future emerging mHealth technologies, and the question of how emergence theory might have a bearing on understanding these challenges, are discussed here.
Acceptability of novel lifelogging technology to determine context of sedentary behaviour in older adults
Objective: Lifelogging, using body-worn sensors (activity monitors and time-lapse photography), has the potential to shed light on the context of sedentary behaviour. The objectives of this study were to examine the acceptability, to older adults, of using lifelogging technology and indicate its usefulness for understanding behaviour. Method: Six older adults (four males, mean age 68 years) wore the equipment (ActivPAL™ and Vicon Revue™/SenseCam™) for 7 consecutive days during free-living activity. The older adults’ perception of the lifelogging technology was assessed through semi-structured interviews, including a brief questionnaire (Likert scale), and reference to the researcher's diary. Results: Older adults in this study found the equipment acceptable to wear and it did not interfere with privacy, safety or create reactivity, but they reported problems with the actual technical functioning of the camera. Conclusion: This combination of sensors has good potential to provide lifelogging information on the context of sedentary behaviour.
Kitchen Task Assessment Dataset for Measuring Errors due to Cognitive Impairments: [research data]
The dataset contains different types of sensor data: acceleration data from different objects and from body-worn sensors, as well as annotation. The dataset consists of 12 normal runs and 12 erroneous runs, where the participants simulated typical errors due to dementia. The annotation consists of both action annotation in the form “action_object_object” as well as annotation about the object being manipulated and the hand that is manipulating it.
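The “action_object_object” label scheme can be split mechanically into its action and object parts. The example label below is hypothetical, chosen only to illustrate the underscore convention, and the helper is not part of the dataset's tooling:

```python
def parse_action_label(label):
    """Split an 'action_object_object' annotation into action and object parts.

    The dataset encodes labels with underscores: the first token is the
    action, the remaining tokens are the objects involved.
    """
    action, *objects = label.split("_")
    return {"action": action, "objects": objects}

# Hypothetical label following the documented scheme:
parsed = parse_action_label("take_cup_cupboard")
```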
Kinect-ed Piano
We describe a gesturally-controlled improvisation system for an experimental pianist, developed over several laboratory sessions and used during a performance [1] at the 2011 Conference on New Interfaces for Musical Expression (NIME). We discuss the architecture and performative advantages and limitations of our gesturally-controlled improvisation system, and reflect on the lessons learned throughout its development. KEYWORDS: piano; improvisation; gesture recognition; machine learning
Going beyond the user — the challenges of universal connectivity in IoT
The Internet of Things (IoT) approach to interconnected devices has become a significant topic in recent years, and is likely to be a major influence on future networking standards, such as ongoing work on 5G. IoT introduces connectivity to a much wider range of devices than seen previously, which raises a number of challenges, both technical and ethical. This paper explores some of the challenges which IoT faces as a result of the personal and confidential information which may be transmitted from body-worn sensors, and the inherent challenges of introducing connectivity to standalone devices, rather than to equipment operated by users.