
    Is the timed-up and go test feasible in mobile devices? A systematic review

    The number of older adults is increasing worldwide, and it is expected that by 2050 over 2 billion individuals will be more than 60 years old. Older adults are exposed to numerous pathological problems such as Parkinson’s disease, amyotrophic lateral sclerosis, post-stroke conditions, and orthopedic disturbances. Several physiotherapy methods that involve the measurement of movements, such as the Timed-Up and Go test, can be used to support efficient and effective evaluation of pathological symptoms and the promotion of health and well-being. In this systematic review, the authors aim to determine how the inertial sensors embedded in mobile devices are employed for the measurement of the different parameters involved in the Timed-Up and Go test. The main contribution of this paper is the identification of the different studies that utilize the sensors available in mobile devices to measure the results of the Timed-Up and Go test. The results show that the motion sensors embedded in mobile devices can be used for these types of studies, and that the most commonly used sensors are the magnetometer, accelerometer, and gyroscope available in off-the-shelf smartphones. The features analyzed in this paper are categorized as quantitative, quantitative + statistic, dynamic balance, gait properties, state transitions, and raw statistics. These features rely on the accelerometer and gyroscope sensors and facilitate the recognition of daily activities, accidents such as falls, and some diseases, as well as the measurement of the subject’s performance during test execution.
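
    As an illustration of the kind of processing these studies perform, the sketch below estimates the duration of a Timed-Up and Go trial from smartphone accelerometer samples by thresholding the deviation of the acceleration magnitude from gravity. It is a minimal sketch in Python; the sampling rate, smoothing window, and activity threshold are illustrative assumptions, not values taken from the reviewed studies.

        # Minimal sketch: estimate Timed-Up and Go (TUG) duration from smartphone
        # accelerometer samples. Sampling rate, smoothing window, and threshold
        # are illustrative assumptions, not values from the reviewed studies.
        import numpy as np

        FS = 100.0              # assumed sampling rate in Hz
        ACTIVITY_THRESH = 0.8   # assumed deviation from gravity (m/s^2) marking movement

        def tug_duration(acc_xyz: np.ndarray, fs: float = FS) -> float:
            """acc_xyz: (N, 3) array of accelerometer samples in m/s^2."""
            magnitude = np.linalg.norm(acc_xyz, axis=1)
            # Deviation from gravity highlights sit-to-stand, walking and turning phases.
            deviation = np.abs(magnitude - 9.81)
            # Smooth with a 0.5 s moving average to suppress sensor noise.
            window = max(int(0.5 * fs), 1)
            smoothed = np.convolve(deviation, np.ones(window) / window, mode="same")
            active = np.where(smoothed > ACTIVITY_THRESH)[0]
            if active.size == 0:
                return 0.0
            # Duration between the first and last sample classified as movement.
            return (active[-1] - active[0]) / fs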

    Wearable inertial sensors for human movement analysis

    Introduction: The present review aims to provide an overview of the most common uses of wearable inertial sensors in the field of clinical human movement analysis. Areas covered: Six main areas of application are analysed: gait analysis, stabilometry, instrumented clinical tests, upper-body mobility assessment, daily-life activity monitoring, and tremor assessment. Each area is analysed from both a methodological and an applicative point of view. The focus on the methodological approaches is meant to give an idea of the computational complexity behind a variable/parameter/index of interest, so that the reader is aware of the reliability of the approach. The focus on the applications is meant to provide a practical guide advising clinicians on how inertial sensors can help them in their clinical practice. Expert commentary: Less expensive and easier to use than other systems employed in human movement analysis, wearable sensors have evolved to the point that they can be considered ready to become part of routine clinical practice.

    Non-Intrusive Gait Recognition Employing Ultra Wideband Signal Detection

    A self-regulating, non-contact impulse radio ultra wideband (IR-UWB) based 3D human gait analysis prototype has been modelled and developed, for the first time for this application, with the help of supervised machine learning (SML). The work intends to provide a rewarding assistive biomedical application that would help doctors and clinicians monitor human gait traits and abnormalities with less human intervention, in fields such as physiological examination, physiotherapy, home assistance, determination of rehabilitation success, and health diagnostics. The research comprises IR-UWB data gathered from a number of male and female participants in both anechoic chamber and multi-path environments. In total, twenty-four individuals were recruited: twenty were reported to have normal gait and four complained of knee pain that resulted in compensated spastic walking patterns. A 3D postural model of human movements has been created from the backscattering properties of the radar pulses, employing spherical trigonometry and vector fields. These subject-specific data (the heights of body areas from the ground) have been recorded and used to extract gait traits from the associated biomechanical activity and to differentiate lower-limb movement patterns from those of other body areas. Initially, a 2D postural model of human gait is presented from the IR-UWB sensing phenomena, employing spherical coordinates and trigonometry, where only two dimensions, distance from the radar and height of reflection, are determined. The pivotal gait parameters (step frequency, cadence, step length, walking speed, total covered distance, and body orientation) have all been measured employing radar principles and the short-time Fourier transform (STFT). The proposed gait identification and parameter characterization has then been analysed, tested, and validated against widely used smartphone applications, with resulting variations of less than 5%. Subsequently, the spherical trigonometric model has been elevated to a 3D postural model in which the prototype can determine width of motion, distance from the radar, and height of reflection. Vector algebra has been incorporated into this 3D model to measure knee and hip angles from the extension and flexion of the lower limbs, in order to understand gait behaviour throughout the entire range of bipedal locomotion. Simultaneously, the Microsoft Kinect Xbox One has been employed during the experiments to assist in the validation process: the same vector mathematics has been applied to the skeleton data obtained from the Kinect to determine both the hip and knee angles, and the outcomes have been compared using the Bland and Altman (B&A) graphical statistical approach. Further, the changes in knee angle obtained from the normal gaits have been used to train popular SML classifiers such as the k-nearest neighbour (kNN) and support vector machine (SVM). The trained models have subsequently been tested with new data (knee angles extracted from both normal and abnormal gait) to assess their ability to recognize gait abnormality. The outcomes have been validated through standard, well-known statistical performance metrics, with promising results. These outcomes demonstrate the suitability of the proposed non-contact IR-UWB approach for gait recognition and the detection of gait abnormalities.
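
    The knee-angle step described above lends itself to a short illustration. The sketch below is a minimal, assumption-based Python example that computes the angle at the knee as the angle between the thigh vector (knee to hip) and the shank vector (knee to ankle); the joint coordinates are hypothetical and could equally come from a radar-derived 3D postural model or from Kinect skeleton data.

        # Minimal sketch of the vector-algebra step: the knee angle is taken as the
        # angle between the thigh vector (knee -> hip) and the shank vector
        # (knee -> ankle). The joint coordinates used here are hypothetical.
        import numpy as np

        def knee_angle(hip: np.ndarray, knee: np.ndarray, ankle: np.ndarray) -> float:
            """Return the included knee angle in degrees from three 3D joint positions."""
            thigh = hip - knee
            shank = ankle - knee
            cos_angle = np.dot(thigh, shank) / (np.linalg.norm(thigh) * np.linalg.norm(shank))
            # Clip to guard against floating-point values slightly outside [-1, 1].
            return float(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))

        # Example with hypothetical joint coordinates in metres (nearly straight leg).
        print(knee_angle(np.array([0.0, 0.9, 0.0]),
                         np.array([0.0, 0.5, 0.05]),
                         np.array([0.0, 0.1, 0.0])))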

    Intelligent Biosignal Processing in Wearable and Implantable Sensors

    This reprint provides a collection of papers illustrating the state of the art of smart processing of data coming from wearable, implantable, or portable sensors. Each paper presents the design, the databases used, the methodological background, the obtained results, and their interpretation for biomedical applications. Illustrative examples include brain–machine interfaces for medical rehabilitation, the evaluation of sympathetic nerve activity, a novel automated diagnostic tool based on ECG data to diagnose COVID-19, machine learning-based hypertension risk assessment by means of photoplethysmography and electrocardiography signals, Parkinsonian gait assessment using machine learning tools, a thorough analysis of compressive sensing of ECG signals, the development of a nanotechnology application for decoding vagus-nerve activity, detection of liver dysfunction using a wearable electronic nose system, prosthetic hand control using surface electromyography, epileptic seizure detection using a CNN, and premature ventricular contraction detection using deep metric learning. Thus, this reprint presents significant clinical applications as well as valuable new research issues, providing current illustrations of this new field of research by addressing the promises, challenges, and hurdles associated with the synergy of biosignal processing and AI through 16 different pertinent studies. Covering a wide range of research and application areas, this book is an excellent resource for researchers, physicians, academics, and PhD or master’s students working on (bio)signal and image processing, AI, biomaterials, biomechanics, and biotechnology with applications in medicine.

    Wearables for Movement Analysis in Healthcare

    Quantitative movement analysis is widely used in clinical practice and research to investigate movement disorders objectively and comprehensively. Conventionally, body segment kinematic and kinetic parameters are measured in gait laboratories using marker-based optoelectronic systems, force plates, and electromyographic systems. Although such movement analyses are considered accurate, the need for specific laboratories, high costs, and dependency on trained users sometimes limit their use in clinical practice. A variety of compact wearable sensors are available today and have allowed researchers and clinicians to pursue applications in which individuals are monitored in their homes and in community settings within different fields of study, such as movement analysis. Wearable sensors may thus contribute to the implementation of quantitative movement analyses even during out-patient use, reducing evaluation times and providing objective, quantifiable data on the patients’ capabilities, unobtrusively and continuously, for clinical purposes.

    A spatiotemporal deep learning approach for automatic pathological Gait classification

    Human motion analysis provides useful information for the diagnosis and recovery assessment of people suffering from pathologies, such as those affecting the way of walking, i.e., gait. With recent developments in deep learning, state-of-the-art performance can now be achieved using a single 2D-RGB-camera-based gait analysis system, offering an objective assessment of gait-related pathologies. Such systems provide a valuable complement/alternative to the current standard practice of subjective assessment. Most 2D-RGB-camera-based gait analysis approaches rely on compact gait representations, such as the gait energy image, which summarize the characteristics of a walking sequence into one single image. However, such compact representations do not fully capture the temporal information and dependencies between successive gait movements. This limitation is addressed by proposing a spatiotemporal deep learning approach that uses a selection of key frames to represent a gait cycle. Convolutional and recurrent deep neural networks were combined, processing each gait cycle as a collection of silhouette key frames, allowing the system to learn temporal patterns among the spatial features extracted at individual time instants. Trained with gait sequences from the GAIT-IT dataset, the proposed system is able to improve gait pathology classification accuracy, outperforming state-of-the-art solutions and achieving improved generalization on cross-dataset tests.
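
    A minimal PyTorch sketch of this kind of architecture is given below: a small CNN encodes each silhouette key frame, an LSTM models the temporal dependencies across the gait cycle, and a linear layer predicts the pathology class. The layer sizes, input resolution, and number of classes are assumptions for illustration, not the configuration reported for the GAIT-IT dataset.

        # Sketch of a spatiotemporal gait classifier: per-frame CNN + LSTM + linear head.
        # Layer sizes, input resolution and class count are illustrative assumptions.
        import torch
        import torch.nn as nn

        class GaitCycleClassifier(nn.Module):
            def __init__(self, num_classes: int = 5, hidden_size: int = 128):
                super().__init__()
                # Per-frame spatial feature extractor (input: 1-channel 64x64 silhouettes).
                self.cnn = nn.Sequential(
                    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                    nn.Flatten(),
                    nn.Linear(32 * 16 * 16, 256), nn.ReLU(),
                )
                # Temporal model over the sequence of key-frame features.
                self.lstm = nn.LSTM(256, hidden_size, batch_first=True)
                self.classifier = nn.Linear(hidden_size, num_classes)

            def forward(self, frames: torch.Tensor) -> torch.Tensor:
                # frames: (batch, time, 1, 64, 64) silhouette key frames of one gait cycle.
                b, t = frames.shape[:2]
                features = self.cnn(frames.flatten(0, 1)).view(b, t, -1)
                _, (h_n, _) = self.lstm(features)
                return self.classifier(h_n[-1])

        # Example: a batch of 2 gait cycles, each represented by 8 key frames.
        logits = GaitCycleClassifier()(torch.randn(2, 8, 1, 64, 64))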

    Gait analysis in neurological populations: Progression in the use of wearables

    Gait assessment is an essential tool for clinical applications, not only to diagnose different neurological conditions but also to monitor disease progression, as it contributes to the understanding of underlying deficits. There are established methods and models for data collection and interpretation of gait assessment within different pathologies. This narrative review aims to depict the evolution of gait assessment from observation and rating scales to wearable sensors and laboratory technologies, and to suggest possible future directions. In this context, we first present an extensive review of current clinical outcomes and gait models. Then, we describe commercially available wearable technologies and their technical capabilities, along with their use in gait assessment studies for various neurological conditions. In the following sections, we present descriptive knowledge of existing inertial-sensor-based algorithms and a sign-based guide that summarizes the outcomes of previous neurological gait assessment studies. Finally, we discuss the use of wearables in gait assessment and speculate on possible research directions by highlighting the limitations and knowledge gaps in the literature.
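
    As one concrete example of the inertial-sensor-based algorithms referred to above, the sketch below estimates stride times from a shank-mounted gyroscope by detecting mid-swing peaks in the sagittal-plane angular velocity. Sensor placement, axis convention, peak height, and sampling rate are illustrative assumptions rather than parameters from any specific study covered by the review.

        # Sketch of a common class of inertial gait algorithms: stride timing from a
        # shank-mounted gyroscope. Mid-swing produces a prominent peak in the
        # sagittal-plane angular velocity; the interval between successive peaks
        # approximates stride time. Axis, threshold and sampling rate are assumptions.
        import numpy as np
        from scipy.signal import find_peaks

        FS = 100.0  # assumed sampling rate in Hz

        def stride_times(gyro_sagittal: np.ndarray, fs: float = FS) -> np.ndarray:
            """gyro_sagittal: 1-D angular velocity (deg/s) about the medio-lateral axis."""
            # Mid-swing peaks are assumed to exceed 100 deg/s and be at least 0.6 s apart.
            peaks, _ = find_peaks(gyro_sagittal, height=100.0, distance=int(0.6 * fs))
            return np.diff(peaks) / fs  # stride durations in seconds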

    State of the art of audio- and video based solutions for AAL

    Working Group 3. Audio- and Video-based AAL Applications.
    It is a matter of fact that Europe is facing more and more crucial challenges regarding health and social care due to the demographic change and the current economic context. The recent COVID-19 pandemic has stressed this situation even further, thus highlighting the need for taking action. Active and Assisted Living (AAL) technologies come as a viable approach to help face these challenges, thanks to the high potential they have in enabling remote care and support. Broadly speaking, AAL can be referred to as the use of innovative and advanced Information and Communication Technologies to create supportive, inclusive and empowering applications and environments that enable older, impaired or frail people to live independently and stay active longer in society. AAL capitalizes on the growing pervasiveness and effectiveness of sensing and computing facilities to supply the persons in need with smart assistance, by responding to their necessities of autonomy, independence, comfort, security and safety. The application scenarios addressed by AAL are complex, due to the inherent heterogeneity of the end-user population, their living arrangements, and their physical conditions or impairments.
    Despite aiming at diverse goals, AAL systems should share some common characteristics. They are designed to provide support in daily life in an invisible, unobtrusive and user-friendly manner. Moreover, they are conceived to be intelligent, to be able to learn and adapt to the requirements and requests of the assisted people, and to synchronise with their specific needs. Nevertheless, to ensure the uptake of AAL in society, potential users must be willing to use AAL applications and to integrate them in their daily environments and lives.
    In this respect, video- and audio-based AAL applications have several advantages, in terms of unobtrusiveness and information richness. Indeed, cameras and microphones are far less obtrusive with respect to the hindrance other wearable sensors may cause to one’s activities. In addition, a single camera placed in a room can record most of the activities performed in the room, thus replacing many other non-visual sensors. Currently, video-based applications are effective in recognising and monitoring the activities, the movements, and the overall conditions of the assisted individuals, as well as in assessing their vital parameters (e.g., heart rate, respiratory rate). Similarly, audio sensors have the potential to become one of the most important modalities for interaction with AAL systems, as they can have a large sensing range, do not require physical presence at a particular location and are physically intangible. Moreover, relevant information about individuals’ activities and health status can be derived from processing audio signals (e.g., speech recordings).
    Nevertheless, as the other side of the coin, cameras and microphones are often perceived as the most intrusive technologies from the viewpoint of the privacy of the monitored individuals. This is due to the richness of the information these technologies convey and the intimate settings where they may be deployed. Solutions able to ensure privacy preservation by context and by design, as well as to ensure high legal and ethical standards, are in high demand. After the review of the current state of play and the discussion in GoodBrother, we may claim that the first solutions in this direction are starting to appear in the literature. A multidisciplinary debate among experts and stakeholders is paving the way towards AAL solutions that ensure ergonomics, usability, acceptance and privacy preservation. The DIANA, PAAL, and VisuAAL projects are examples of this fresh approach.
    This report provides the reader with a review of the most recent advances in audio- and video-based monitoring technologies for AAL. It has been drafted as a collective effort of WG3 to supply an introduction to AAL, its evolution over time and its main functional and technological underpinnings. In this respect, the report contributes to the field with the outline of a new generation of ethical-aware AAL technologies and a proposal for a novel comprehensive taxonomy of AAL systems and applications. Moreover, the report allows non-technical readers to gather an overview of the main components of an AAL system and of how these function and interact with the end-users. The report illustrates the state of the art of the most successful AAL applications and functions based on audio and video data, namely (i) lifelogging and self-monitoring, (ii) remote monitoring of vital signs, (iii) emotional state recognition, (iv) food intake monitoring, activity and behaviour recognition, (v) activity and personal assistance, (vi) gesture recognition, (vii) fall detection and prevention, (viii) mobility assessment and frailty recognition, and (ix) cognitive and motor rehabilitation. For these application scenarios, the report illustrates the state of play in terms of scientific advances, available products and research projects. The open challenges are also highlighted.
    The report ends with an overview of the challenges, the hindrances and the opportunities posed by the uptake of AAL technologies in real-world settings. In this respect, the report illustrates the current procedural and technological approaches to cope with acceptability, usability and trust in AAL technology, by surveying strategies and approaches to co-design, to privacy preservation in video and audio data, to transparency and explainability in data processing, and to data transmission and communication. User acceptance and ethical considerations are also debated. Finally, the potential offered by the silver economy is overviewed.