
    Evaluation of Data Processing and Artifact Removal Approaches Used for Physiological Signals Captured Using Wearable Sensing Devices during Construction Tasks

    Wearable sensing devices (WSDs) hold enormous promise for monitoring construction worker safety. They can track workers and deliver safety-related information in real time, enabling more effective and preventative decision making. WSDs are particularly useful on construction sites since they can track workers’ health, safety, and activity levels, among other metrics that could help optimize their daily tasks. WSDs may also assist workers in recognizing health-related safety risks (such as physical fatigue) and taking appropriate action to mitigate them. The data produced by these WSDs, however, are highly noisy and contaminated with artifacts that may be introduced by the surroundings, the experimental apparatus, or the subject’s physiological state. These artifacts are often strong and frequently occur during field experiments, and signal quality degrades considerably when they are abundant. Recent developments in signal processing have greatly improved artifact removal. Thus, this review aims to provide an in-depth analysis of the approaches currently used to process data and remove artifacts from physiological signals obtained via WSDs during construction-related tasks. First, this study provides an overview of the physiological signals that are likely to be recorded from construction workers to monitor their health and safety. Second, this review identifies the most prevalent artifacts that have the most detrimental effect on the utility of the signals. Third, a comprehensive review of existing artifact-removal approaches is presented. Fourth, each identified artifact detection and removal approach is analyzed for its strengths and weaknesses. Finally, this review offers suggestions for future research on improving the quality of captured physiological signals for monitoring the health and safety of construction workers using artifact removal approaches.
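As an illustrative aside (not code from the review): a sliding median filter is one common baseline for suppressing spike-like motion artifacts in wearable signals. The signal shape, sampling rate, and artifact model below are invented purely for demonstration.

```python
import math
import random
from statistics import median

def median_filter(signal, window=5):
    """Sliding-window median filter: a simple spike-artifact removal baseline."""
    half = window // 2
    padded = [signal[0]] * half + list(signal) + [signal[-1]] * half
    return [median(padded[i:i + window]) for i in range(len(signal))]

def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

# Synthetic "pulse-like" signal: a 1.2 Hz sine sampled at 50 Hz for 10 s
fs, f = 50, 1.2
clean = [math.sin(2 * math.pi * f * i / fs) for i in range(10 * fs)]

# Inject sparse, large motion-artifact spikes at random sample positions
rng = random.Random(0)
noisy = clean[:]
for i in rng.sample(range(len(clean)), 20):
    noisy[i] += rng.gauss(0, 5)

filtered = median_filter(noisy)
print(mse(filtered, clean) < mse(noisy, clean))  # True: spikes suppressed
```

On monotone segments the median of a centred window equals the current sample, so smooth physiological waveforms pass through almost unchanged while isolated outliers are rejected; this is why median filtering is a frequent first step before more sophisticated artifact-removal methods.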

    Running to Your Own Beat: An Embodied Approach to Auditory Display Design

    Personal fitness trackers represent a multi-billion-dollar industry, predicated on devices for assisting users in achieving their health goals. However, most current products only offer activity tracking and measurement of performance metrics, which do not ultimately address the need for technique-related assistive feedback in a cost-effective way. Addressing this gap in the design space for assistive run training interfaces is also crucial in combating the negative effects of Forward Head Position, a condition resulting from mobile device use, with a rapidly growing incidence in the population. As such, Auditory Displays (ADs) offer an innovative set of tools for creating such a device for runners. ADs present the opportunity to design interfaces which allow natural, unencumbered motion, detached from the mobile or smartwatch screen, thus making them ideal for providing real-time assistive feedback for correcting head posture during running. However, issues with AD design have centred around overall usability and user experience; therefore, in this thesis an ecological and embodied approach to AD design is presented as a vehicle for designing an assistive auditory interface for runners, which integrates seamlessly into their everyday environments.

    Human Activity Recognition and Fall Detection Using Unobtrusive Technologies

    As the population ages, health issues like injurious falls demand more attention. Wearable devices can be used to detect falls. However, despite their commercial success, most wearable devices are obtrusive, and patients generally do not like or may forget to wear them. In this thesis, a monitoring system consisting of two 24×32 thermal array sensors and a millimetre-wave (mmWave) radar sensor was developed to unobtrusively detect locations and recognise human activities such as sitting, standing, walking, lying, and falling. Data were collected by observing healthy young volunteers simulating ten different scenarios. The optimal installation position of the sensors was initially unknown. Therefore, the sensors were mounted on a side wall, in a corner, and on the ceiling of the experimental room to allow performance comparison between these sensor placements. Every thermal frame was converted into an image, and either a set of features was manually extracted or convolutional neural networks (CNNs) were used to extract features automatically. Applying a CNN model on the infrared stereo dataset to recognise five activities (falling plus lying on the floor, lying in bed, sitting on a chair, sitting in bed, standing plus walking), the overall average accuracy and F1-score were 97.6% and 0.935, respectively. The scores for distinguishing falling plus lying on the floor from the remaining activities were 97.9% and 0.945, respectively. When using radar technology, the generated point clouds were converted into an occupancy grid, and either a CNN model was used to extract features automatically or a set of features was manually extracted. Applying several classifiers on the manually extracted features to detect falling plus lying on the floor from the remaining activities, the Random Forest (RF) classifier achieved the best results in the overhead position (an accuracy of 92.2%, a recall of 0.881, a precision of 0.805, and an F1-score of 0.841).
Additionally, the CNN model achieved the best results (an accuracy of 92.3%, a recall of 0.891, a precision of 0.801, and an F1-score of 0.844) in the overhead position, slightly outperforming the RF method. Data fusion combining both infrared and radar technologies was performed at the feature level; however, the benefit was not significant. The proposed system was efficient in terms of cost, processing time, and space. With further development, the system could be utilised as a real-time fall detection system in aged care facilities or in the homes of older people.
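As an illustrative aside (not code from the thesis): converting a radar point cloud into an occupancy grid, as described above, amounts to counting reflections per spatial cell. The grid extent, cell size, and point coordinates below are assumed values for demonstration.

```python
def points_to_grid(points, x_range=(-3.0, 3.0), y_range=(0.0, 6.0), cell=0.25):
    """Quantise (x, y) radar point-cloud coordinates into a 2-D occupancy grid,
    counting the number of reflections that fall into each cell."""
    nx = int((x_range[1] - x_range[0]) / cell)
    ny = int((y_range[1] - y_range[0]) / cell)
    grid = [[0] * nx for _ in range(ny)]
    for x, y in points:
        if x_range[0] <= x < x_range[1] and y_range[0] <= y < y_range[1]:
            ix = int((x - x_range[0]) / cell)
            iy = int((y - y_range[0]) / cell)
            grid[iy][ix] += 1
    return grid

# A tight cluster (e.g. a person lying on the floor) plus one stray reflection
pts = [(0.10, 2.00), (0.15, 2.10), (0.20, 2.05), (1.50, 4.00)]
grid = points_to_grid(pts)
occupied = [(iy, ix) for iy, row in enumerate(grid)
            for ix, v in enumerate(row) if v]
print(occupied, max(max(row) for row in grid))  # [(8, 12), (16, 18)] 3
```

Such a grid can be fed to a CNN like an image, which is one way point-cloud data is made compatible with standard convolutional feature extractors.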

    Advances and Applications of DSmT for Information Fusion. Collected Works, Volume 5

    This fifth volume on Advances and Applications of DSmT for Information Fusion collects theoretical and applied contributions of researchers working in different fields of applications and in mathematics, and is available in open-access. The collected contributions of this volume have either been published or presented after disseminating the fourth volume in 2015 in international conferences, seminars, workshops and journals, or they are new. The contributions of each part of this volume are chronologically ordered. First Part of this book presents some theoretical advances on DSmT, dealing mainly with modified Proportional Conflict Redistribution Rules (PCR) of combination with degree of intersection, coarsening techniques, interval calculus for PCR thanks to set inversion via interval analysis (SIVIA), rough set classifiers, canonical decomposition of dichotomous belief functions, fast PCR fusion, fast inter-criteria analysis with PCR, and improved PCR5 and PCR6 rules preserving the (quasi-)neutrality of (quasi-)vacuous belief assignment in the fusion of sources of evidence with their Matlab codes. 
Because more applications of DSmT have emerged in recent years since the appearance of the fourth volume in 2015, the second part of this volume is about selected applications of DSmT, mainly in building change detection, object recognition, quality of data association in tracking, perception in robotics, risk assessment for torrent protection and multi-criteria decision-making, multi-modal image fusion, coarsening techniques, recommender systems, levee characterization and assessment, human heading perception, trust assessment, robotics, biometrics, failure detection, GPS systems, inter-criteria analysis, group decision, human activity recognition, storm prediction, data association for autonomous vehicles, identification of maritime vessels, fusion of support vector machines (SVM), the Silx-Furtif RUST code library for information fusion including PCR rules, and networks for ship classification. Finally, the third part presents interesting contributions related to belief functions in general, published or presented over the years since 2015. These contributions concern decision-making under uncertainty, belief approximations, probability transformations, new distances between belief functions, non-classical multi-criteria decision-making problems with belief functions, generalization of Bayes’ theorem, image processing, data association, entropy and cross-entropy measures, fuzzy evidence numbers, negators of belief mass, human activity recognition, information fusion for breast cancer therapy, imbalanced data classification, and hybrid techniques mixing deep learning with belief functions.
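As an illustrative aside: the PCR5 rule mentioned above redistributes each partial conflict back to the elements that produced it, proportionally to their masses. A minimal sketch for the special case of two sources over two exclusive singleton hypotheses follows; the mass values are made up for demonstration.

```python
def pcr5_two_singletons(m1, m2):
    """PCR5 combination of two mass functions whose only focal elements are
    the two exclusive singletons 'A' and 'B' (illustrative special case)."""
    # Conjunctive consensus on the agreeing pairs
    out = {"A": m1["A"] * m2["A"], "B": m1["B"] * m2["B"]}
    # Redistribute each partial conflict m1(x)*m2(y), x != y, proportionally
    # to the masses of the two elements involved in that conflict
    for x, y in (("A", "B"), ("B", "A")):
        c = m1[x] * m2[y]
        if c > 0:
            out[x] += m1[x] ** 2 * m2[y] / (m1[x] + m2[y])
            out[y] += m2[y] ** 2 * m1[x] / (m1[x] + m2[y])
    return out

m = pcr5_two_singletons({"A": 0.6, "B": 0.4}, {"A": 0.7, "B": 0.3})
print(round(m["A"], 4), round(m["B"], 4))  # 0.7182 0.2818
```

Unlike Dempster's rule, which normalises the conflict away globally, PCR5 keeps the combined masses summing to 1 while assigning each conflicting product only to the hypotheses that generated it.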

    Augmented Human Inspired Phase Variable Using a Canonical Dynamical System

    Accurately parameterizing human gait is highly important in the continued development of assistive robotics, including but not limited to lower limb prostheses and exoskeletons. Previous studies introduced the idea of time-invariant, real-time gait parameterization via human-inspired phase variables. The phase represents the location, or percent, of the gait cycle through which the user has progressed. This thesis proposes an alternative approach for determining the gait phase that leverages previous methods and a canonical dynamical system. Human subject experiments demonstrate the ability to accurately produce a phase variable corresponding to human gait progression for various walking configurations, including changes in incline and speed. Results show an augmented real-time approach capable of adapting to different walking conditions.
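The abstract does not spell out the canonical dynamical system; a common choice in the related motor-primitive literature is a first-order decay whose state is remapped to a monotonically increasing phase. The sketch below uses assumed constants and is only meant to show the mechanics, not the thesis' actual formulation.

```python
def simulate_phase(alpha=4.0, tau=1.0, dt=0.001, duration=1.0):
    """Integrate the first-order canonical system tau * ds/dt = -alpha * s
    (s starts at 1 and decays towards 0) and map it to a phase in [0, 1)."""
    s, phases = 1.0, []
    for _ in range(int(duration / dt)):
        s += dt * (-alpha * s / tau)  # explicit Euler step
        phases.append(1.0 - s)        # phase = fraction of the cycle completed
    return phases

phases = simulate_phase()
monotone = all(a < b for a, b in zip(phases, phases[1:]))
print(monotone, round(phases[-1], 2))  # True 0.98
```

Because the canonical state evolves autonomously, the resulting phase is time-based here; human-inspired phase variables instead drive such a system from measured kinematics, which is what makes them time-invariant.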

    Wearable Sensors and Smart Devices to Monitor Rehabilitation Parameters and Sports Performance: An Overview

    A quantitative evaluation of kinetic parameters, the joint’s range of motion, heart rate, and breathing rate can be employed in sports performance tracking and rehabilitation monitoring following injuries or surgical operations. However, many of the current detection systems are expensive and designed for clinical use, requiring the presence of a physician and medical staff to assist users in the device’s positioning and measurements. The goal of wearable sensors is to overcome the limitations of current devices, enabling the acquisition of a user’s vital signs directly from the body in an accurate and non-invasive way. In sports activities, wearable sensors allow athletes to monitor performance and body movements objectively, going beyond the limits of the coach’s subjective evaluation. The main goal of this review paper is to provide a comprehensive overview of wearable technologies and sensing systems to detect and monitor the physiological parameters of patients during post-operative rehabilitation and athletes’ training, and to present evidence that supports the efficacy of this technology for healthcare applications. First, a classification of the human physiological parameters acquired by sensors attached to sensitive skin locations or worn as part of garments is introduced, as these carry important feedback on the user’s health status. Then, a detailed description of the electromechanical transduction mechanisms allows a comparison of the technologies used in wearable applications to monitor sports and rehabilitation activities. This paves the way for an analysis of wearable technologies, providing a comprehensive comparison of the current state of the art of available sensors and systems. Comparative and statistical analyses are provided to point out useful insights for defining the best technologies and solutions for monitoring body movements. Lastly, the presented review is compared with similar ones reported in the literature to highlight its strengths and novelties.

    TicTacToes: Assessing Toe Movements as an Input Modality

    From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research has explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.
    Comment: To appear in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23-28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 17 pages.

    Blending the Material and Digital World for Hybrid Interfaces

    The development of digital technologies in the 21st century is progressing continuously and new device classes such as tablets, smartphones or smartwatches are finding their way into our everyday lives. However, this development also poses problems, as these prevailing touch and gestural interfaces often lack tangibility, take little account of haptic qualities and therefore require full attention from their users. Compared to traditional tools and analog interfaces, the human skills to experience and manipulate material in its natural environment and context remain unexploited. To combine the best of both, a key question is how the material and digital worlds can be blended to design and realize novel hybrid interfaces in a meaningful way. Research on Tangible User Interfaces (TUIs) investigates the coupling between physical objects and virtual data. In contrast, hybrid interfaces, which specifically aim to digitally enrich analog artifacts of everyday work, have not yet been sufficiently researched and systematically discussed. Therefore, this doctoral thesis rethinks how user interfaces can provide useful digital functionality while maintaining their physical properties and familiar patterns of use in the real world. However, the development of such hybrid interfaces raises overarching research questions about the design: Which kinds of physical interfaces are worth exploring? What type of digital enhancement will improve existing interfaces? How can hybrid interfaces retain their physical properties while enabling new digital functions? What are suitable methods to explore different designs? And how can technology-enthusiast users be supported in prototyping? For a systematic investigation, the thesis builds on a design-oriented, exploratory and iterative development process using digital fabrication methods and novel materials.
As a main contribution, four specific research projects are presented that apply and discuss different visual and interactive augmentation principles in real-world applications. The applications range from digitally enhanced paper and interactive cords to visual watch strap extensions and novel prototyping tools for smart garments. While almost all of them integrate visual feedback and haptic input, none of them are built on rigid, rectangular pixel screens or use standard input modalities, as they all aim to reveal new design approaches. The dissertation shows how valuable it can be to rethink familiar, analog applications while thoughtfully extending them digitally. Finally, this thesis’ extensive work of engineering versatile research platforms is accompanied by overarching conceptual work, user evaluations, technical experiments, and literature reviews.

    Autonomous Radar-based Gait Monitoring System

    Features related to gait are fundamental metrics of human motion [1]. Human gait has been shown to be a valuable and feasible clinical marker to determine the risk of physical and mental functional decline [2], [3]. Technologies that detect changes in people’s gait patterns, especially older adults, could support the detection, evaluation, and monitoring of parameters related to changes in mobility, cognition, and frailty. Gait assessment has the potential to be leveraged as a clinical measurement as it is not limited to a specific health care discipline and is a consistent and sensitive test [4]. A wireless technology that uses electromagnetic waves (i.e., radar) to continually measure gait parameters at home or in a hospital without a clinician’s participation has been proposed as a suitable solution [3], [5]. This approach is based on the interaction between electromagnetic waves with humans and how their bodies impact the surrounding and scattered wireless signals. Since this approach uses wireless waves, people do not need to wear or carry a device on their bodies. Additionally, an electromagnetic wave wireless sensor has no privacy issues because there is no video-based camera. This thesis presents the design and testing of a radar-based contactless system that can monitor people’s gait patterns and recognize their activities in a range of indoor environments frequently and accurately. In this thesis, the use of commercially available radars for gait monitoring is investigated, which offers opportunities to implement unobtrusive and contactless gait monitoring and activity recognition. A novel fast and easy-to-implement gait extraction algorithm that enables an individual’s spatiotemporal gait parameter extraction at each gait cycle using a single FMCW (Frequency Modulated Continuous Wave) radar is proposed. 
The proposed system detects changes in gait that may be the signs of changes in mobility, cognition, and frailty, particularly for older adults in individuals’ homes, retirement homes, and long-term care facilities. One of the most straightforward applications for gait monitoring using radars is in corridors and hallways, which are commonly available in most residential homes, retirement homes, and long-term care homes. However, walls in a hallway have a strong “clutter” impact, creating multipath reflections due to the wide beam of commercially available radar antennas. The multipath reflections could result in inaccurate gait measurements because gait extraction algorithms assume that the maximum reflected signals come from the torso of the walking person (rather than from indirect reflections or multipath) [6]. To address the challenges of hallway gait monitoring, two approaches were used: (1) a novel signal processing method and (2) modifying the radar antenna using a hyperbolic lens. For the first approach, a novel algorithm based on radar signal processing, unsupervised learning, and a subject detection, association, and tracking method is proposed. This proposed algorithm can be paired with any type of multiple-input multiple-output (MIMO) or single-input multiple-output (SIMO) FMCW radar to capture human gait in a highly cluttered environment without needing radar antenna alteration. The algorithm’s functionality was validated by capturing spatiotemporal gait values (e.g., speed, step points, step time, step length, and step count) of people walking in a hallway. The preliminary results demonstrate the promising potential of the algorithm to accurately monitor gait in hallways, which increases opportunities for its applications in institutional and home environments.
For the second approach, an in-package hyperbola-based lens antenna was designed that can be integrated with a radar module package empowered by the fast and easy-to-implement gait extraction method. The system functionality was successfully validated by capturing the spatiotemporal gait values of people walking in a hallway filled with metallic cabinets. The results achieved in this work pave the way to explore the use of stand-alone radar-based sensors in long hallways for day-to-day long-term monitoring of gait parameters of older adults or other populations. The possibility of the coexistence of multiple walking subjects is high, especially in long-term care facilities where other people, including older adults, might need assistance during walking. GaitRite and wearables are not able to assess multiple people’s gait at the same time using only one device [7], [8]. In this thesis, a novel radar-based algorithm is proposed that is capable of tracking multiple people or extracting walking speed of a participant with the coexistence of other people. To address the problem of tracking and monitoring multiple walking people in a cluttered environment, a novel iterative framework based on unsupervised learning and advanced signal processing was developed and tested to analyze the reflected radio signals and extract walking movements and trajectories in a hallway environment. Advanced algorithms were developed to remove multipath effects or ghosts created due to the interaction between walking subjects and stationary objects, to identify and separate reflected signals of two participants walking at a close distance, and to track multiple subjects over time. This method allows the extraction of walking speed in multiple closely-spaced subjects simultaneously, which is distinct from previous approaches where the speed of only one subject was obtained. 
The proposed multiple-people gait monitoring was assessed with 22 participants enrolled in a bedrest (BR) study conducted at the McGill University Health Centre (MUHC). The system functionality was also assessed for in-home applications. In this regard, a cloud-based system is proposed for non-contact, real-time recognition and monitoring of physical activities and walking periods within a domestic environment. The proposed system employs standalone Internet of Things (IoT)-based millimeter wave radar devices and deep learning models to enable autonomous, free-living activity recognition and gait analysis. Range-Doppler maps generated from a dataset of real-life in-home activities are used to train deep learning models. The performance of several deep learning models was evaluated based on accuracy and prediction time, with the gated recurrent unit (GRU) model selected for real-time deployment due to its balance of speed and accuracy compared to 2D Convolutional Neural Network Long Short-Term Memory (2D-CNNLSTM) and Long Short-Term Memory (LSTM) models. In addition to recognizing and differentiating various activities and walking periods, the system also records the subject’s activity level over time, washroom use frequency, sleep/sedentary/active/out-of-home durations, current state, and gait parameters. Importantly, the system maintains privacy by not requiring the subject to wear or carry any additional devices.
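As an illustrative aside (not the thesis' pipeline): the range-Doppler maps mentioned above are obtained from an FMCW beat-signal matrix by a 2-D Fourier transform, with fast time giving range and slow time giving Doppler. The matrix sizes and target bins below are invented, and a naive DFT is used so the sketch needs no external libraries.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform (fine for these tiny sizes)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

# Synthetic FMCW beat matrix: one target at range bin 6 and Doppler bin 3
n_chirps, n_samples = 16, 32
r_bin, d_bin = 6, 3
beat = [[cmath.exp(2j * cmath.pi * (r_bin * n / n_samples + d_bin * m / n_chirps))
         for n in range(n_samples)] for m in range(n_chirps)]

# Range DFT along fast time (rows), then Doppler DFT along slow time (columns)
range_fft = [dft(row) for row in beat]
rd_map = [[abs(v) for v in dft([range_fft[m][k] for m in range(n_chirps)])]
          for k in range(n_samples)]  # rd_map[range_bin][doppler_bin]

peak = max((v, k, d) for k, row in enumerate(rd_map) for d, v in enumerate(row))
print(peak[1], peak[2])  # 6 3  (recovered range bin, Doppler bin)
```

Stacking such maps over time yields the image-like sequences that recurrent models such as the GRU mentioned above can classify.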

    Co-simulation of human digital twins and wearable inertial sensors to analyse gait event estimation

    We propose a co-simulation framework comprising biomechanical human body models and wearable inertial sensor models to analyse gait events dynamically, depending on inertial sensor type, sensor positioning, and processing algorithms. A total of 960 inertial sensors were virtually attached to the lower extremities of a validated biomechanical model and shoe model. Walking of hemiparetic patients was simulated using motion capture data (kinematic simulation). Accelerations and angular velocities were synthesised according to the inertial sensor models. A comprehensive error analysis of detected gait events versus reference gait events was performed for each simulated sensor position across all segments. For gait event detection, we considered 1-, 2-, and 4-phase gait models. Results for hemiparetic patients showed superior gait event estimation performance for a sensor fusion of angular velocity and acceleration data, with lower nMAEs (9%) across all sensor positions compared to estimation with acceleration data only. Depending on algorithm choice and parameterisation, gait event detection performance increased by up to 65%. Our results suggest that user personalisation of IMU placement should be pursued as a first priority for gait phase detection, while sensor position variation may be a secondary adaptation target. When comparing rotatory and translatory error components per body segment, larger interquartile ranges of rotatory errors were observed for all phase models, i.e., repositioning the sensor around the body segment axis was more harmful for gait phase detection than repositioning it along the limb axis. The proposed co-simulation framework is suitable for evaluating different sensor modalities, as well as gait event detection algorithms for different gait phase models. The results of our analysis open a new path for utilising biomechanical human digital twins in wearable system design and performance estimation before physical device prototypes are deployed.
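The specific gait event detection algorithms compared in the study are not reproduced here; as a minimal sketch of the general idea, the snippet below flags gait events as thresholded local maxima of a synthetic shank angular-velocity signal. The stride period, amplitude, and threshold are all assumed values.

```python
import math

def detect_gait_events(gyro, fs, threshold=2.0, min_gap=0.5):
    """Flag candidate gait events: local maxima of angular velocity above a
    threshold, at least `min_gap` seconds apart."""
    events, last = [], -10**9
    for i in range(1, len(gyro) - 1):
        if (gyro[i] > threshold and gyro[i] >= gyro[i - 1]
                and gyro[i] > gyro[i + 1] and (i - last) / fs >= min_gap):
            events.append(i)
            last = i
    return events

# Synthetic shank gyroscope: one dominant swing peak per 1 s stride, 100 Hz
fs = 100
gyro = [3.0 * max(math.sin(2 * math.pi * 1.0 * i / fs), 0.0)
        for i in range(5 * fs)]

events = detect_gait_events(gyro, fs)
print(len(events), events[0])  # 5 strides; first peak at sample 25 (t = 0.25 s)
```

A co-simulation framework like the one described above makes it cheap to sweep such detector parameters and sensor placements against synthesised signals before any physical prototype exists.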