
    Using the Microsoft Kinect to assess human bimanual coordination

    Optical marker-based systems are the gold standard for capturing three-dimensional (3D) human kinematics. However, these systems have various drawbacks, including time-consuming marker placement and soft-tissue movement artifact, and they are expensive and non-portable. The Microsoft Kinect is an inexpensive, portable depth camera that can be used to capture 3D human movement kinematics. Numerous investigations have assessed the Kinect's ability to capture postural control and gait, but to date, no study has evaluated its capabilities for measuring spatiotemporal coordination. To investigate human coordination and coordination stability with the Kinect, a well-studied bimanual coordination paradigm (Kelso, 1984; Kelso, Scholz, & Schöner, 1986) was adapted. Nineteen participants performed ten trials of coordinated hand movements in either in-phase or anti-phase patterns of coordination to the beat of a metronome that was incrementally sped up and slowed down. Continuous relative phase (CRP) and the standard deviation of CRP were used to assess coordination and coordination stability, respectively. Data from the Kinect were compared to a Vicon motion capture system using a mixed-model, repeated-measures analysis of variance and intraclass correlation coefficients (ICC(2,1)). The Kinect significantly underestimated CRP for the anti-phase coordination pattern (p < .0001) and overestimated it for the in-phase pattern (p < .0001). However, a high ICC value (r = .97) was found between the systems. For the standard deviation of CRP, the Kinect exhibited significantly higher variability than the Vicon (p < .0001) but was able to distinguish significant differences between patterns of coordination, with anti-phase variability being higher than in-phase (p < .0001). Additionally, the Kinect was unable to accurately capture the structure of coordination stability for the anti-phase pattern.
Finally, agreement between the systems was assessed using the ICC (r = .37). In conclusion, the Kinect was unable to accurately capture mean CRP. However, the high ICC between the two systems is promising, and the Kinect was able to distinguish between the coordination stability of in-phase and anti-phase coordination. The structure of variability as movement speed increased, though, was dissimilar to the Vicon, particularly for the anti-phase pattern. Some aspects of coordination are captured well by the Kinect while others are not. Detecting differences between bimanual coordination patterns and the stability of those patterns can be achieved using the Kinect. However, researchers interested in the structure of coordination stability should exercise caution, since poor agreement was found between the systems.
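As a minimal illustration of the analysis described above, continuous relative phase between two limb signals can be computed from the instantaneous phases of their analytic signals. The sketch below is a generic CRP computation (using a NumPy-only Hilbert transform), not the thesis's actual processing pipeline; the signal names and the phase-wrapping convention are assumptions.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (NumPy-only Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(spectrum * h)

def continuous_relative_phase(x, y):
    """CRP (degrees) between two centered kinematic signals.

    Each signal's instantaneous phase comes from its analytic signal;
    CRP is the phase difference, wrapped to [-180, 180) degrees.
    """
    phase_x = np.unwrap(np.angle(analytic_signal(x - np.mean(x))))
    phase_y = np.unwrap(np.angle(analytic_signal(y - np.mean(y))))
    crp = phase_x - phase_y
    return np.degrees((crp + np.pi) % (2.0 * np.pi) - np.pi)

# Anti-phase hand oscillations should yield |CRP| near 180 degrees.
t = np.linspace(0.0, 10.0, 2000)
left_hand = np.sin(2.0 * np.pi * 1.0 * t)
right_hand = np.sin(2.0 * np.pi * 1.0 * t + np.pi)
crp = continuous_relative_phase(left_hand, right_hand)
```

In this convention, in-phase movement gives CRP near 0 degrees and anti-phase movement near ±180 degrees, and the standard deviation of CRP over a trial serves as the coordination-stability measure.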

    Constructing a reference standard for sports science and clinical movement sets using IMU-based motion capture technology

    Motion analysis has improved greatly over the years through the development of low-cost inertial sensors. Such sensors have shown promising accuracy for both sport and medical applications, opening the possibility of constructing a new reference standard. Current gold standards within motion capture, such as high-speed camera-based systems and image processing, are not suitable for many movement sets within sports science and clinical movement analysis due to restrictions introduced by the movement sets themselves. These restrictions include cost, portability, local environment constraints (such as light level), and poor line-of-sight accessibility. This thesis focusses on developing a magnetometer-less IMU-based motion capture system to detect and classify two challenging movement sets: basic stances during a Shaolin Kung Fu dynamic form, and severity levels from the modified UPDRS (Unified Parkinson's Disease Rating Scale) tapping exercise. This project has contributed three datasets. The Shaolin Kung Fu dataset comprises 5 dynamic movements repeated over 350 times by 8 experienced practitioners; it was labelled by a professional Shaolin Kung Fu master. Two modified UPDRS datasets were constructed, one for each of the two sensor locations measured; each comprises 5 severity levels with 100 self-emulated movement samples per level, labelled by a researcher in neuropsychological assessment. The errors associated with IMU systems have been reduced significantly through a combination of a complementary filter and the constraints imposed by the range of movements available in human joints. Novel features have been extracted from each dataset: a piecewise feature set based on a moving-window approach has been applied to the Shaolin Kung Fu dataset, while a combination of standard statistical features and a Durbin-Watson analysis has been extracted from the modified UPDRS measurements.
The project has also contributed a comparison of 24 models across all 3 datasets, and the optimal model for each dataset has been determined. The resulting models were commensurate with current gold standards. The Shaolin Kung Fu dataset was classified with the computationally costly fine decision tree algorithm using 400 splits, resulting in an accuracy of 98.9%, a precision of 96.9%, a recall of 99.1%, and an F1-score of 98.0%. A novel approach using sequential forward feature analysis was used to determine both the minimum and the optimal number of IMU devices required. The modified UPDRS datasets were then classified using a support vector machine algorithm, requiring different kernels to achieve their highest accuracies. The measurements were repeated with a sensor located on the wrist and on the finger, with the wrist requiring a linear kernel and the finger a quadratic kernel. Both locations achieved an accuracy, precision, recall, and F1-score of 99.2%. Additionally, the project contributed an evaluation of the effect sensor location has on the proposed models. It was concluded that the IMU-based system has the potential to form a reference standard in both sports science and clinical movement analysis. Data protection security and communication speed were limitations of the constructed system, because the measured data were transferred from the devices via Bluetooth Low Energy communication. These limitations were considered and evaluated in the future work of this project.
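The drift-reduction idea mentioned above can be illustrated with a one-axis complementary filter, which blends integrated gyroscope rate with an accelerometer-derived tilt angle. This is a generic textbook sketch, not the thesis's implementation; the gain `alpha`, the sampling step, and the simulated bias value are assumptions.

```python
def complementary_filter(gyro_rates, accel_angles, dt, alpha=0.98):
    """Fuse gyroscope rate (rad/s) with accelerometer tilt (rad).

    The gyro term tracks fast motion; the small accelerometer term
    continually pulls the estimate back toward the gravity-derived
    angle, cancelling the slow drift of integrated gyro bias.
    """
    angle = accel_angles[0]
    estimates = []
    for rate, acc_angle in zip(gyro_rates, accel_angles):
        angle = alpha * (angle + rate * dt) + (1.0 - alpha) * acc_angle
        estimates.append(angle)
    return estimates

# Static joint at 0.5 rad, gyro reporting a constant 0.05 rad/s bias:
dt = 0.01
rates = [0.05] * 1000          # pure bias, no real motion
accels = [0.5] * 1000          # gravity-derived angle (noiseless here)
fused = complementary_filter(rates, accels, dt)
drift_only = 0.5 + sum(r * dt for r in rates)   # naive gyro integration
```

Over the simulated 10 s, naive integration of the biased rate drifts by 0.5 rad, while the fused estimate stays within a small steady-state offset of the true angle, which is the effect the thesis exploits before applying joint-range constraints.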

    Optical Synchronization of Time-of-Flight Cameras

    Time-of-Flight (ToF) cameras produce depth images (three-dimensional images) by measuring the time between the emission of infrared light and the reception of its reflection. A setup of multiple ToF cameras may be used to overcome their comparatively low resolution, increase the field of view, and reduce occlusion.
However, the simultaneous operation of multiple ToF cameras introduces the possibility of interference, resulting in erroneous depth measurements. The problem of interference arises not only in a collaborative multicamera setup but also when multiple ToF cameras operate independently. In this work, a new optical synchronization for ToF cameras is presented, requiring no additional hardware or infrastructure to utilize a time-division multiple access (TDMA) scheme that mitigates interference. It effectively enables a camera to sense the acquisition process of other ToF cameras and rapidly synchronize its acquisition times to operate without interference. Instead of requiring cables for synchronization, only the existing hardware is utilized; to this end, the camera's firmware is extended with the synchronization procedure. The optical synchronization has been conceptualized, implemented, and verified in an experimental setup deploying three ToF cameras. The measurements show the efficacy of the proposed optical synchronization. During the experiments, the frame rate was reduced by only about 1% due to the synchronization procedure.
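The firmware-level optical sensing itself is hardware-specific, but the TDMA scheme it enables can be sketched abstractly: each camera is assigned a non-overlapping acquisition slot within the common frame period. The function below is a hypothetical illustration of that slot arithmetic, not the thesis's synchronization code; camera count, exposure, and frame period are assumed values.

```python
def tdma_schedule(n_cameras, exposure_ms, frame_period_ms):
    """Assign each camera a non-overlapping acquisition slot.

    Returns a list of (start_ms, end_ms) intervals within one frame
    period; raises ValueError if the exposures cannot all fit.
    """
    if n_cameras * exposure_ms > frame_period_ms:
        raise ValueError("exposures do not fit within one frame period")
    slot_width = frame_period_ms / n_cameras
    return [(i * slot_width, i * slot_width + exposure_ms)
            for i in range(n_cameras)]

# Three cameras, 5 ms exposure each, a ~30 fps frame period:
slots = tdma_schedule(3, 5.0, 33.0)
```

Because each camera only illuminates the scene inside its own slot, no camera receives reflections of another camera's emission, which is what eliminates the interference; the optical synchronization's job is to align all cameras' clocks to this common slot structure without cables.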

    Dynamic Speed and Separation Monitoring with On-Robot Ranging Sensor Arrays for Human and Industrial Robot Collaboration

    This research presents a flexible and dynamic implementation of the Speed and Separation Monitoring (SSM) safety measure that optimizes the productivity of a task while ensuring human safety during Human-Robot Collaboration (HRC). Unlike the standard static/fixed demarcated 2D safety zones based on 2D scanning LiDARs, this research presents a dynamic sensor setup that changes the safety zones based on the robot pose and motion. The focus of this research is the implementation of a dynamic SSM safety configuration using Time-of-Flight (ToF) laser-ranging sensor arrays placed around the centers of the links of a robot arm, investigating the viability of on-robot exteroceptive sensors for implementing SSM as a safety measure. Varying dynamic SSM safety configurations are demonstrated, based on different approaches to measuring human-robot separation distance and relative speed using ToF sensor arrays, a motion-capture system, and a 2D LiDAR. This study presents a comparative analysis of the dynamic SSM safety configurations in terms of safety, performance, and productivity. A system-of-systems (cyber-physical system) architecture for conducting and analyzing the HRC experiments was proposed and implemented. The robots, objects, and human operators sharing the workspace are represented virtually as part of the system using a digital-twin setup. This system was capable of controlling the robot motion, monitoring human physiological response, and tracking the progress of the collaborative task. Experiments were conducted with human subjects performing a task while sharing the robot workspace under the proposed dynamic SSM safety configurations. The results showed a preference for the ToF sensors and motion capture over the 2D LiDAR currently used in industry. The human subjects felt safe and comfortable using the proposed dynamic SSM safety configuration with ToF sensor arrays.
The results for a standard pick-and-place task showed up to a 40% increase in productivity in comparison to the 2D LiDAR.
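The SSM criterion underlying such configurations can be illustrated with a protective separation distance in the spirit of ISO/TS 15066: the robot must stay farther from the operator than the distance both can close before the robot comes to rest. The sketch below is a simplified illustration with assumed parameter values (uniform robot deceleration, fixed intrusion and uncertainty margins), not the thesis's safety implementation.

```python
def protective_separation_distance(v_human, v_robot, t_reaction, t_stop,
                                   c_intrusion=0.1, z_uncertainty=0.05):
    """Minimum human-robot separation (m) before a stop must trigger.

    s_human: distance the human closes during reaction + stopping time
    s_react: distance the robot travels before the stop takes effect
    s_stop:  robot stopping distance (uniform-deceleration assumption)
    """
    s_human = v_human * (t_reaction + t_stop)
    s_react = v_robot * t_reaction
    s_stop = 0.5 * v_robot * t_stop
    return s_human + s_react + s_stop + c_intrusion + z_uncertainty

def separation_is_safe(measured_distance, v_human, v_robot,
                       t_reaction, t_stop):
    """True if the measured distance exceeds the protective distance."""
    return measured_distance >= protective_separation_distance(
        v_human, v_robot, t_reaction, t_stop)

# Walking human (1.6 m/s), slow robot (0.5 m/s), 100 ms reaction,
# 300 ms stopping time:
d_min = protective_separation_distance(1.6, 0.5, 0.1, 0.3)
```

The "dynamic" aspect of the thesis corresponds to feeding this check with live separation and speed estimates from the on-robot ToF arrays rather than with a fixed worst-case zone, which is what allows the safety margin, and hence the cycle time, to shrink when the robot moves slowly or away from the operator.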

    Hand Motion Tracking System using Inertial Measurement Units and Infrared Cameras

    This dissertation presents a novel approach to developing a system for real-time tracking of the position and orientation of the human hand in three-dimensional space, using MEMS inertial measurement units (IMUs) and infrared cameras. This research focuses on the study and implementation of an algorithm to correct gyroscope drift, a major problem in orientation tracking with commercial-grade IMUs. An algorithm to improve the orientation estimation is proposed. It consists of: (1) prediction of the bias offset error while the sensor is static; (2) estimation of a quaternion orientation from the unbiased angular velocity; (3) correction of the orientation quaternion using the gravity vector and the magnetic North vector; and (4) adaptive quaternion interpolation, which determines the final quaternion estimate based on the current conditions of the sensor. The results verified that the orientation correction algorithm using the gravity vector and the magnetic North vector reduces the amount of drift in orientation tracking and is compatible with position tracking using infrared cameras for real-time human hand motion tracking. Thirty human subjects participated in an experiment to validate the performance of the hand motion tracking system. The statistical analysis shows that the position tracking error is, on average, 1.7 cm in the x-axis, 1.0 cm in the y-axis, and 3.5 cm in the z-axis. Kruskal-Wallis tests show that the orientation correction algorithm using the gravity vector and magnetic North vector significantly reduces orientation tracking errors in comparison to fixed offset compensation.
Statistical analyses show that the orientation correction algorithm using the gravity vector and magnetic North vector and the on-board Kalman-based orientation filtering produced orientation errors that were not significantly different in the Euler angles Phi, Theta, and Psi, with p-values of 0.632, 0.262, and 0.728, respectively. The proposed orientation correction algorithm contributes to the emerging approaches for obtaining reliable orientation estimates from MEMS IMUs. The development of a hand motion tracking system using IMUs and infrared cameras in this dissertation enables future improvements in natural human-computer interaction within a 3D virtual environment.
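Steps 1 and 2 of the algorithm above (static bias estimation and quaternion integration of the unbiased angular velocity) can be sketched as follows. This is a generic first-order implementation under the scalar-first quaternion convention, not the dissertation's code; the gravity/magnetic correction and the adaptive interpolation steps are omitted.

```python
import numpy as np

def estimate_gyro_bias(static_samples):
    """Step 1: the average rate over a known-static interval is the
    bias offset (the true angular velocity is zero while static)."""
    return np.mean(static_samples, axis=0)

def integrate_quaternion(q, omega, dt):
    """Step 2: first-order update of q = [w, x, y, z] from the
    unbiased angular velocity omega = (wx, wy, wz) in rad/s."""
    wx, wy, wz = omega
    omega_matrix = np.array([
        [0.0, -wx, -wy, -wz],
        [wx,  0.0,  wz, -wy],
        [wy, -wz,  0.0,  wx],
        [wz,  wy, -wx,  0.0],
    ])
    q = q + 0.5 * dt * omega_matrix @ q
    return q / np.linalg.norm(q)   # renormalize to keep a unit quaternion

# Rotate 90 degrees about z over 1 s of 1 kHz unbiased gyro samples:
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(1000):
    q = integrate_quaternion(q, (0.0, 0.0, np.pi / 2.0), 0.001)
```

Without step 1, the residual bias would accumulate through this integration without bound, which is exactly the drift that steps 3 and 4 of the proposed algorithm then correct using the gravity and magnetic North references.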

    Elbow exoskeleton mechanism for multistage poststroke rehabilitation.

    More than three million people in England suffer from stroke. The process of post-stroke rehabilitation consists of a series of biomechanical exercises: controlled joint movement in the acute phase, external assistance in the mid phase, and variable levels of resistance in the last phase. Post-stroke rehabilitation performed by physiotherapists has many limitations, including cost, time, repeatability, and intensity of exercises. Although a large variety of arm exoskeletons have been developed in the last two decades to substitute for the conventional exercises provided by physiotherapists, most of these systems have limitations in structural configuration, sensory data acquisition, and control architecture. It is still difficult to provide multistage post-stroke rehabilitation to patients seated around a hospital bed without expert intervention. To support this, a framework for an elbow exoskeleton has been developed that is portable and has the potential to offer all three types of exercise (external force, assistive, and resistive) in a single structure. The design improves the torque-to-weight ratio compared to joint-based actuation systems. The structural lengths of the exoskeleton are determined from the mean anthropometric parameters of healthy users, and the upper arm and forearm lengths are chosen to fit a wide range of users. The operation of the exoskeleton is divided into three regions, in which each type of exercise can be served in a specific way depending on the requirements of users. An electric motor provides power in the first region of operation, whereas a spring-based assistive force is used in the second region and a spring-based resistive force is applied in the third region. This design concept provides an engineering solution that integrates the three phases of post-stroke exercise in a single device.
With this strategy, the energy source is used only in the first region to power the motor, whereas the other two modes of exercise work on the stored energy of the springs. All these operations are controlled by a single motor, and the maximum motor torque required is only 5 Nm. Due to the mechanical advantage, however, the exoskeleton can provide joint torque of up to 10 Nm. To remove the dependency on biosensors, the exoskeleton has been designed with a hardware-based mechanism that provides the assistive and resistive forces. All exoskeleton components are integrated into a microcontroller-based circuit for measuring three joint parameters (angle, velocity, and torque) and for controlling the exercises. A user-friendly, multi-purpose graphical interface has been developed for participants to control the mode of exercise, which can be managed manually or in automatic mode. To validate the conceptual design, a prototype of the exoskeleton has been developed and tested with healthy subjects. The generated assistive torque can be varied up to 0.037 Nm, whereas the resistive torque can be varied up to 0.057 Nm. The mass of the exoskeleton is approximately 1.8 kg. Two comparative studies have been performed to assess the measurement accuracy of the exoskeleton. In the first study, data were collected from two healthy participants using both the exoskeleton and a Kinect sensor, with the Kinect sensor as reference. The mean measurement errors in joint angle are within 5.18% for participant 1 and 1.66% for participant 2; the errors in torque measurement are within 8.48% and 7.93%, respectively. In the second study, the repeatability of joint measurement by the exoskeleton was analysed. The exoskeleton was used by three healthy users over two rotation cycles.
It shows a strong correlation (correlation coefficient: 0.99) between two consecutive joint angle measurements, and the standard deviation was calculated to determine the error margin, which falls within an acceptable range (maximum: 8.897). The research embodied in this thesis presents a design framework for a portable exoskeleton that provides three modes of exercise, offering a potential solution for all stages of post-stroke rehabilitation.
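The three-region operating concept can be made concrete with a simple mode selector over the elbow joint angle. The region boundaries below are hypothetical placeholders (the thesis derives the actual regions from the mechanism geometry), so this is only an illustrative sketch of how a single angle measurement dispatches among the three exercise modes.

```python
def exercise_mode(joint_angle_deg, region_bounds=(60.0, 100.0)):
    """Map the elbow joint angle to one of three exercise modes.

    region_bounds are hypothetical boundary angles between the
    three regions of operation described in the design.
    """
    region1_end, region2_end = region_bounds
    if joint_angle_deg < region1_end:
        return "motor-powered"      # region 1: motor supplies external force
    if joint_angle_deg < region2_end:
        return "spring-assistive"   # region 2: stored spring energy assists
    return "spring-resistive"       # region 3: spring opposes the motion
```

Because the mode follows directly from the measured joint angle, the device needs no biosensor input to decide which kind of force to apply, which matches the hardware-based mechanism described above.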

    Shear-promoted drug encapsulation into red blood cells: a CFD model and μ-PIV analysis

    The present work focuses on the main parameters that influence shear-promoted encapsulation of drugs into erythrocytes. A CFD model was built to investigate the fluid dynamics of a suspension of particles flowing in a commercial microchannel. Micro Particle Image Velocimetry (μ-PIV) allowed the real properties of the red blood cell (RBC) to be taken into account, providing a deeper understanding of the process. Coupling these results with an analytical diffusion model, suitable working conditions were defined for different values of haematocrit.

    Wearables for Movement Analysis in Healthcare

    Quantitative movement analysis is widely used in clinical practice and research to investigate movement disorders objectively and comprehensively. Conventionally, body segment kinematic and kinetic parameters are measured in gait laboratories using marker-based optoelectronic systems, force plates, and electromyographic systems. Although such movement analyses are considered accurate, the need for specialized laboratories, high costs, and dependency on trained users sometimes limit their use in clinical practice. A variety of compact wearable sensors are available today and have allowed researchers and clinicians to pursue applications in which individuals are monitored in their homes and in community settings within different fields of study, such as movement analysis. Wearable sensors may thus contribute to the implementation of quantitative movement analysis even in out-patient settings, reducing evaluation times and providing objective, quantifiable data on patients' capabilities, unobtrusively and continuously, for clinical purposes.

    Videos in Context for Telecommunication and Spatial Browsing

    The research presented in this thesis explores the use of videos embedded in panoramic imagery to transmit spatial and temporal information describing remote environments and their dynamics. Virtual environments (VEs) through which users can explore remote locations are rapidly emerging as a popular medium for presence and remote collaboration. However, capturing the visual representation of locations to be used in VEs is usually a tedious process that requires either manual modelling of environments or the employment of specific hardware. Capturing environment dynamics is not straightforward either, and it is usually performed with specific tracking hardware. Similarly, browsing large unstructured video collections with available tools is difficult, as the abundance of spatial and temporal information makes them hard to comprehend. On a spectrum between 3D VEs and 2D images, panoramas lie in between: they offer the accessibility of 2D images while preserving the surrounding representation of 3D virtual environments. For this reason, panoramas are an attractive basis for videoconferencing and browsing tools, as they can relate several videos temporally and spatially. This research explores methods to acquire, fuse, render, and stream data coming from heterogeneous cameras with the help of panoramic imagery. Three distinct but interrelated questions are addressed. First, the thesis considers how spatially localised video can be used to increase the spatial information transmitted during video-mediated communication, and whether this improves the quality of communication. Second, the research asks whether videos in panoramic context can be used to convey spatial and temporal information about a remote place and the dynamics within it, and whether this improves users' performance in tasks that require spatio-temporal thinking. Finally, the thesis considers whether the display type has an impact on reasoning about events within videos in panoramic context.
These research questions were investigated over three experiments, covering scenarios common to computer-supported cooperative work and video browsing. To support the investigation, two distinct video+context systems were developed. The first, telecommunication experiment compared our videos-in-context interface with fully-panoramic video and conventional webcam video conferencing in an object placement scenario. The second experiment investigated the impact of videos in panoramic context on the quality of spatio-temporal thinking during localization tasks; to support it, a novel interface to video collections in panoramic context was developed and compared with common video-browsing tools. The final experimental study investigated the impact of display type on reasoning about events, exploring three adaptations of our video-collection interface to three display types. The overall conclusion is that videos in panoramic context offer a valid solution for spatio-temporal exploration of remote locations. Our approach presents a richer visual representation in terms of space and time than standard tools, showing that providing panoramic context to video collections makes spatio-temporal tasks easier. To this end, videos in context are a suitable alternative to more difficult, and often expensive, solutions. These findings benefit many applications, including teleconferencing, virtual tourism, and remote assistance.

    The Complete Reference (Volume 4)

    This is the fourth volume of the successful series Robot Operating System (ROS): The Complete Reference, providing a comprehensive overview of ROS, which is currently the main development framework for robotics applications, as well as the latest trends and contributed systems. The book is divided into four parts. Part 1 features two papers on navigation, discussing SLAM and path planning. Part 2 focuses on the integration of ROS into quadcopters and their control. Part 3 discusses two emerging applications for robotics: cloud robotics and video stabilization. Part 4 presents tools developed for ROS; the first is a practical alternative to the roslaunch system, and the second is related to penetration testing. This book is a valuable resource for ROS users wanting to learn more about ROS capabilities and features.