
    Wearable and BAN Sensors for Physical Rehabilitation and eHealth Architectures

    The demographic shift of the population towards an increase in the number of elderly citizens, together with the sedentary lifestyle we are adopting, is reflected in the increasingly debilitated physical health of the population. The resulting physical impairments require rehabilitation therapies, which may be assisted by the use of wearable sensors or body area network (BAN) sensors. The use of novel technology for medical therapies can also contribute to reducing costs in healthcare systems and decreasing patient overflow in medical centers. Sensors are the primary enablers of any wearable medical device, with a central role in eHealth architectures. The accuracy of the acquired data depends on the sensors; hence, when considering wearable and BAN sensing integration, they must be proven to be accurate and reliable solutions. This book is a collection of works focusing on the current state of the art of BANs and wearable sensing devices for the physical rehabilitation of impaired or debilitated citizens. The manuscripts that compose this book report on advances in research related to different sensing technologies (optical or electronic) and BAN sensors, their design and implementation, advanced signal processing techniques, and the application of these technologies in areas such as physical rehabilitation, robotics, medical diagnostics, and therapy.

    Motor Learning in Virtual Reality: From Motion to Augmented Feedback

    Hülsmann F. Motor Learning in Virtual Reality: From Motion to Augmented Feedback. Bielefeld: Universität Bielefeld; 2019.
    Sports and fitness exercises are an important factor in health improvement. The acquisition of new movements - motor learning - and the improvement of techniques for already learned ones are a vital part of sports training. Ideally, this part is supervised and supported by coaches. They know how to correctly perform specific exercises and how to prevent typical movement errors. However, coaches are not always available or do not have enough time to fully supervise training sessions. Virtual reality (VR) is an ideal medium to support motor learning in the absence of coaches. VR systems could supervise performed movements, visualize movement patterns, and identify errors made by a trainee. Further, feedback could be provided that even extends the possibilities of coaching in the real world. Still, core concepts that form the basis of effective coaching applications in VR are not yet fully developed. To narrow this gap, we focus on the processing of kinematic data as one of the core components of motor learning. Based on the processing of kinematic data in real time, a coaching system can supervise a trainee and provide a variety of multi-modal feedback strategies. For motor learning, this thesis explores the development of core concepts based on the use of kinematic data in three areas. First, the movement performed by a trainee must be observed and visualized in real time. The observation can be achieved by state-of-the-art motion capture techniques. Concerning the visualization, in the real world, trainees can observe their own performance in mirrors. We use a virtual mirror as a paradigm to allow trainees to observe their own movement in a natural way.
A well-established feedback strategy from real-world coaching, namely improvement via observation of a target performance, is transferred into the virtual mirror paradigm. Second, a system that focuses on motor learning should be able to assess the performance it observes. For instance, typical errors in a trainee's performance must be detected as soon as possible in order to react in an effective way. Third, the motor learning environment should be able to provide suitable feedback strategies based on detected errors. In this thesis, real-time feedback based on error detection is integrated into a coaching cycle inspired by real-world coaching. In a final evaluation, all the concepts are brought together in a VR coaching system. We demonstrate that this system is able to help trainees improve their motor performance with respect to specific error patterns. Finally, based on the results throughout the thesis, helpful guidelines for developing effective environments for motor learning in VR are proposed.
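The per-frame error detection described above can be sketched as a simple rule check over a stream of kinematic data. The joint, the acceptable angle range, and the error labels below are illustrative assumptions, not the thesis's actual error model:

```python
# Minimal sketch of rule-based error detection on a kinematic stream.
# Processing one frame at a time keeps the check usable in real time.
# Thresholds and labels are hypothetical, chosen for illustration only.

def detect_errors(knee_angles, low=90.0, high=140.0):
    """Flag frames where the knee flexion angle (degrees) leaves
    an assumed acceptable range [low, high]."""
    errors = []
    for i, angle in enumerate(knee_angles):
        if angle < low:
            errors.append((i, "below_range"))
        elif angle > high:
            errors.append((i, "above_range"))
    return errors

stream = [95.0, 100.0, 85.0, 120.0, 150.0]
print(detect_errors(stream))  # [(2, 'below_range'), (4, 'above_range')]
```

A real coaching system would feed such flags into the feedback loop, e.g. highlighting the offending joint in the virtual mirror.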

    Behaviour Profiling using Wearable Sensors for Pervasive Healthcare

    In recent years, sensor technology has advanced in terms of hardware sophistication and miniaturisation. This has led to the incorporation of unobtrusive, low-power sensors into networks centred on human participants, called Body Sensor Networks. Amongst the most important applications of these networks is their use in healthcare and healthy living. The technology has the potential to decrease the burden on healthcare systems by providing care at home, enabling early detection of symptoms, monitoring recovery remotely, and avoiding serious chronic illnesses by promoting healthy living through objective feedback. In this thesis, machine learning and data mining techniques are developed to estimate medically relevant parameters from a participant's activity and behaviour parameters, derived from simple, body-worn sensors. The first abstraction from raw sensor data is the recognition and analysis of activity. Machine learning analysis is applied to a study of activity profiling to detect impaired limb and torso mobility. One of this thesis's contributions to activity recognition research is the application of machine learning to the analysis of 'transitional activities': transient activity that occurs as people change from one activity to another. A framework is proposed for the detection and analysis of transitional activities. To demonstrate the utility of transition analysis, we apply the algorithms to a study of participants undergoing and recovering from surgery. We demonstrate that it is possible to see meaningful changes in the transitional activity as the participants recover. Assuming long-term monitoring, we expect a large historical database of activity to accumulate quickly. We develop algorithms to mine temporal associations to activity patterns. This gives an outline of the user's routine. Methods for visual and quantitative analysis of routine using this summary data structure are proposed and validated.
The activity and routine mining methodologies developed for specialised sensors are adapted to a smartphone application, enabling large-scale use. Validation of the algorithms is performed using datasets collected in laboratory settings and free-living scenarios. Finally, future research directions and potential improvements to the techniques developed in this thesis are outlined.
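The first abstraction from raw sensor data that the thesis describes - turning a raw signal into activity features - can be illustrated with a minimal sliding-window extractor. The window size and the (mean, standard deviation) feature pair are assumptions for illustration; real systems use richer features over multi-axis signals:

```python
import math

# Sketch of non-overlapping sliding-window feature extraction from a
# 1-D accelerometer signal. Each window is summarised by its mean and
# standard deviation; a classifier would then label each feature vector.

def window_features(samples, width=4, step=4):
    """Return a (mean, std) tuple per window of `width` samples."""
    feats = []
    for start in range(0, len(samples) - width + 1, step):
        w = samples[start:start + width]
        mean = sum(w) / width
        var = sum((x - mean) ** 2 for x in w) / width
        feats.append((mean, math.sqrt(var)))
    return feats

signal = [0.0, 0.1, 0.0, -0.1,   # low-variance segment (e.g. standing)
          1.0, -1.0, 1.2, -0.8]  # high-variance segment (e.g. walking)
print(window_features(signal))
```

The two windows separate cleanly on the standard-deviation feature alone, which is why even simple features often suffice for coarse activity recognition.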

    Human activity classification with miniature inertial sensors

    Ankara: The Department of Electrical and Electronics Engineering and the Institute of Engineering and Sciences of Bilkent University, 2009. Thesis (Master's) -- Bilkent University, 2009. Includes bibliographical references (leaves 79-92).
    This thesis provides a comparative study on activity recognition using miniature inertial sensors (gyroscopes and accelerometers) and magnetometers worn on the human body. The classification methods used and compared in this study are: a rule-based algorithm (RBA) or decision tree, the least-squares method (LSM), the k-nearest neighbor algorithm (k-NN), dynamic time warping (DTW-1 and DTW-2), and support vector machines (SVM). In the first part of this study, eight different leg motions are classified using only two single-axis gyroscopes. In the second part, human activities are classified using five sensor units worn on different parts of the body. Each sensor unit comprises a tri-axial gyroscope, a tri-axial accelerometer, and a tri-axial magnetometer. Different feature sets are extracted from the raw sensor data and used in the classification process. A number of feature extraction and reduction techniques (principal component analysis) as well as different cross-validation techniques have been implemented and compared. A performance comparison of these classification methods is provided in terms of their correct differentiation rates, confusion matrices, pre-processing and training times, and classification times. Among the classification techniques we have considered and implemented, SVM, in general, gives the highest correct differentiation rate, followed by k-NN. The classification time for RBA is the shortest, followed by the SVM or LSM, k-NN or DTW-1, and DTW-2 methods. SVM requires the longest training time, whereas DTW-2 takes the longest amount of classification time.
Although there is not a significant difference between the correct differentiation rates obtained by different cross-validation techniques, repeated random sub-sampling uses the shortest amount of classification time, whereas leave-one-out requires the longest.
Tunçel, Orkun. M.S.
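Of the compared methods, dynamic time warping is the easiest to sketch compactly. The toy implementation below (absolute difference as the local cost, no warping-window constraint) computes the distance that a DTW-based nearest-neighbour classifier would use to match an observed gyroscope sequence against stored templates; the signals are made up for illustration:

```python
# Classic dynamic time warping via dynamic programming. The distance is
# small when two sequences have the same shape up to local time shifts,
# which is why DTW suits motion signals of varying speed.

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j],      # insertion
                                 d[i][j - 1],      # deletion
                                 d[i - 1][j - 1])  # match
    return d[n][m]

walk = [0, 1, 2, 1, 0]
walk_slow = [0, 0, 1, 2, 1, 0]  # same shape, stretched in time
jump = [0, 3, 0, 3, 0]          # different shape
print(dtw_distance(walk, walk_slow))  # 0.0
print(dtw_distance(walk, jump))
```

A time-stretched copy of the same motion aligns at zero cost, while a different motion pattern does not, so labelling by the nearest template works.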

    A survey of Alzheimer's disease early diagnosis methods for cognitive assessment

    Dementia is a syndrome characterised by the decline of different cognitive abilities. A high rate of deaths and the high cost of detection, treatment, and patient care count amongst its consequences. Although there is no cure for dementia, a timely diagnosis helps in obtaining necessary support, appropriate medication, and maintenance, as far as possible, of engagement in intellectual, social, and physical activities. The early detection of Alzheimer's Disease (AD) is considered to be of high importance for improving the quality of life of patients and their families. In particular, Virtual Reality (VR) is an emerging tool that can be used to assess cognitive abilities while navigating through a Virtual Environment (VE). The paper summarises common AD screening and diagnosis techniques, focusing on the latest approaches based on Virtual Environments, behaviour analysis, and emotion recognition, aiming to provide more reliable and non-invasive diagnostics at home or in a clinical environment. Furthermore, different AD diagnosis evaluation methods and metrics are presented and discussed, together with an overview of the different datasets.

    Enforcing Realism and Temporal Consistency for Large-Scale Video Inpainting

    Today, people are consuming more videos than ever before. At the same time, video manipulation has rapidly been gaining traction due to the influence of viral videos, as well as the convenience of editing software. Although video manipulation has legitimate entertainment purposes, it can also be incredibly destructive. In order to understand the positive and negative consequences of media manipulation---as well as to maintain the integrity of mass media---it is important to investigate the capabilities of video manipulation techniques. In this dissertation, we focus on the manipulation task of video inpainting, where the goal is to automatically fill in missing parts of a masked video with semantically relevant content. Inpainting results should possess high visual quality with respect to reconstruction performance, realism, and temporal consistency, i.e., they should faithfully recreate missing contents in a way that resembles the real world and exhibits minimal flickering artifacts. Two major challenges have impeded progress toward improving visual quality: semantic ambiguity and diagnostic evaluation. Semantic ambiguity exists for any masked video due to several plausible explanations of the events in the observed scene; however, prior methods have struggled with ambiguity due to their limited temporal contexts. As for diagnostic evaluation, prior work has overemphasized aggregate analysis on large datasets and underemphasized fine-grained analysis of modern inpainting failure modes; as a result, the expected behaviors of models under specific scenarios have remained poorly understood. Our work improves on both models and evaluation techniques for video inpainting, thereby providing deeper insight into how an inpainting model's design impacts the visual quality of its outputs. To advance the state of the art in video inpainting, we propose two novel solutions that improve visual quality by expanding the available temporal context.
Our first approach, bi-TAI, intelligently integrates information from multiple frames before and after the desired sequence. It produces more realistic results than prior work, which could only consume limited contextual information. Our second approach, HyperCon, suppresses flickering artifacts from frame-wise processing by identifying and propagating consistencies found in high frame-rate space; we successfully apply it to tasks as disparate as video inpainting and style transfer. Aside from methodological improvements, we also propose two novel evaluation tools to diagnose failure modes of modern video inpainting methods. Our first such contribution is the Moving Symbols dataset, which we use to characterize the sensitivity of a state-of-the-art video prediction model to controllable appearance and motion parameters. Our second contribution is the DEVIL benchmark, which provides a dataset and a comprehensive evaluation scheme to quantify how several semantic properties of the input video and mask affect video inpainting quality. Through models that exploit temporal context---as well as evaluation paradigms that reveal fine-grained failure modes of modern inpainting methods at scale---our contributions enforce better visual quality for video inpainting on a larger scale than prior work. We enable the production of more convincing manipulated videos for data processing and social media needs; we also establish replicable fine-grained analysis techniques to cultivate future progress in the field.
PhD, Computer Science & Engineering, University of Michigan, Horace H. Rackham School of Graduate Studies
http://deepblue.lib.umich.edu/bitstream/2027.42/169785/1/szetor_1.pd
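The temporal-consistency criterion above can be made concrete with a deliberately naive flicker score (not the DEVIL benchmark's actual metric): the mean absolute intensity difference between consecutive frames, which is zero for a perfectly stable clip and grows with frame-to-frame flicker:

```python
# Hedged sketch of a temporal-flicker score. Frames are 2-D grids of
# intensities; the score averages |frame_t - frame_{t+1}| over all
# pixels and consecutive frame pairs. Real metrics would restrict this
# to the inpainted (masked) region and compensate for true motion.

def flicker_score(frames):
    total, count = 0.0, 0
    for prev, cur in zip(frames, frames[1:]):
        for row_p, row_c in zip(prev, cur):
            for p, c in zip(row_p, row_c):
                total += abs(p - c)
                count += 1
    return total / count

static = [[[0.5, 0.5]] * 2] * 3  # three identical 2x2 frames
flickery = [[[0.0, 0.0]] * 2, [[1.0, 1.0]] * 2, [[0.0, 0.0]] * 2]
print(flicker_score(static))    # 0.0
print(flicker_score(flickery))  # 1.0
```

A limitation worth noting: genuine object motion also raises this score, which is one reason practical temporal-consistency metrics use flow-warped differences rather than raw frame differences.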