
    Real-time model-based video stabilization for microaerial vehicles

    The emerging field of micro aerial vehicles (MAVs) has attracted great interest for its indoor navigation capabilities, but these vehicles require high-quality video for tele-operated or autonomous tasks. A common problem affecting on-board video quality is undesired movement, which different approaches address with either mechanical stabilizers or video stabilization software. Very few video stabilization algorithms in the literature can be applied in real time, and those that can do not discriminate between intentional movements of the tele-operator and undesired ones. In this paper, a novel technique is introduced for real-time video stabilization with low computational cost that neither generates false movements nor degrades the stabilized video sequence. Our proposal combines geometric transformations and outlier rejection to obtain a robust inter-frame motion estimation, and a Kalman filter based on an ANN-learned model of the MAV, which includes the control action, for motion intention estimation.
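
    The pipeline described above (robust inter-frame motion estimation followed by Kalman filtering to separate intentional from undesired motion) can be illustrated with a minimal sketch, assuming OpenCV and NumPy. It uses a simple random-walk Kalman smoother in place of the paper's ANN-learned MAV model with control action, and all function names are illustrative rather than taken from the paper.

        # Minimal sketch, assuming OpenCV and NumPy; a random-walk Kalman
        # smoother stands in for the paper's ANN-learned MAV model.
        import cv2
        import numpy as np

        def interframe_motion(prev_gray, curr_gray):
            """Estimate a robust affine transform between consecutive frames."""
            pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                               qualityLevel=0.01, minDistance=20)
            pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                           pts_prev, None)
            good_prev = pts_prev[status.flatten() == 1]
            good_curr = pts_curr[status.flatten() == 1]
            # RANSAC rejects outlier correspondences (e.g. moving objects).
            m, _ = cv2.estimateAffinePartial2D(good_prev, good_curr,
                                               method=cv2.RANSAC)
            dx, dy = m[0, 2], m[1, 2]
            da = np.arctan2(m[1, 0], m[0, 0])
            return np.array([dx, dy, da])

        class TrajectorySmoother:
            """Scalar Kalman filters on the x, y and rotation of the camera path."""
            def __init__(self, process_var=1e-3, meas_var=1e-1):
                self.x = np.zeros(3)          # smoothed path estimate
                self.p = np.ones(3)           # estimate variance
                self.q, self.r = process_var, meas_var

            def update(self, measured_path):
                self.p += self.q
                k = self.p / (self.p + self.r)        # Kalman gain
                self.x += k * (measured_path - self.x)
                self.p *= (1.0 - k)
                return self.x

    The stabilizing warp for each frame would then be the inverse of the difference between the measured (cumulative) and smoothed camera paths.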

    Development of a Wearable Sensor-Based Framework for the Classification and Quantification of High Knee Flexion Exposures in Childcare

    Repetitive cyclic and prolonged joint loading in high knee flexion postures has been associated with the progression of degenerative knee joint diseases and knee osteoarthritis (OA). Despite this association, high flexion postures, in which the knee angle exceeds 120°, are commonly performed within occupational settings. While work-related musculoskeletal disorders have been studied across many occupations, the risk of OA development associated with the adoption of high knee flexion postures by childcare workers has until recently been unexplored; consequently, occupational childcare has not appeared in any systematic review seeking to establish a causal relationship between occupational exposures and the risk of knee OA development. The overarching goal of this thesis was therefore to explore the adoption of high flexion postures in childcare settings and to develop a means by which these could be measured using non-laboratory-based technologies. The global objectives of this thesis were to (i) identify the postural demands of occupational childcare as they relate to high flexion exposures at the knee, (ii) apply, extend, and validate sensor-to-segment alignment algorithms through which lower limb flexion-extension kinematics could be measured in multiple high knee flexion postures using inertial measurement units (IMUs), and (iii) develop a machine learning based classification model capable of identifying each childcare-inspired high knee flexion posture. In line with these global objectives, four independent studies were conducted.

    Study I – Characterization of Postures of High Knee Flexion and Lifting Tasks Associated with Occupational Childcare. Background: High knee flexion postures, despite their association with increased incidences of osteoarthritis, are frequently adopted in occupational childcare. High flexion exposure thresholds (based on exposure frequency or cumulative daily exposure) that relate to increased incidences of OA have previously been proposed, yet our understanding of how the specific postural requirements of childcare compare to these thresholds remains limited. Objectives: This study sought to define and quantify the high flexion postures typically adopted in childcare in order to evaluate any increased likelihood of knee osteoarthritis development. Methods: Video data of eighteen childcare workers caring for infant, toddler, and preschool-aged children over a period of approximately 3.25 hours were obtained for this investigation from a larger cohort study conducted across five daycares in Kingston, Ontario, Canada. Each video was segmented to identify the start and end of potential high knee flexion exposures, and each identified posture was quantified by duration and frequency. An analysis of postural adoption by occupational task was subsequently performed to determine which task(s) might pose the greatest risk for cumulative joint trauma. Results: A total of ten postures involving varying degrees of knee flexion were identified, of which eight involved high knee flexion. Childcare workers caring for children of all ages adopted high knee flexion postures for durations of 1.45±0.15 hours and at frequencies of 128.67±21.45 exposures over the 3.25 hour observation period, exceeding proposed thresholds for incidences of knee osteoarthritis development. Structured activities, playing, and feeding tasks were found to demand the greatest adoption of high flexion postures.
    Conclusions: Based on the findings of this study, it is likely that childcare workers caring for children of all ages exceed, within a typical working day, the cumulative exposure- and frequency-based thresholds associated with increased incidences of knee OA development.

    Study II – Evaluating the Robustness of Automatic IMU Calibration for Lower Extremity Motion Analysis in High Knee Flexion Postures. Background: While inertial measurement units promise an out-of-the-box, minimally intrusive means of objectively measuring body segment kinematics in any setting, in practice their implementation requires complex calculations to align each sensor with the coordinate system of the segment to which it is attached. Objectives: This study sought to apply and extend previously proposed alignment algorithms to align inertial sensors with the segments to which they are attached, in order to calculate flexion-extension angles for the ankle, knee, and hip during multiple childcare-inspired postures. Methods: The Seel joint axis algorithm and the Constrained Seel Knee Axis (CSKA) algorithm were implemented for the sensor-to-segment calibration of acceleration and angular velocity data from IMUs mounted on the lower limbs and pelvis, based on a series of calibration movements about each joint. Further, the Iterative Seel Spherical Axis (ISSA) extension to this implementation was proposed for the calibration of sensors about the ankle and hip. The performance of these algorithms was validated across fifty participants during ten childcare-inspired movements by comparing IMU-based and gold-standard optical-based flexion-extension angle estimates. Results: Strong correlations between the IMU- and optical-based angle estimates were found for all joints during each high flexion motion, with the exception of a moderate correlation for the ankle angle estimate during child-chair sitting. Mean RMSE between protocols was 6.61° ± 2.96° for the ankle, 7.55° ± 5.82° for the knee, and 14.64° ± 6.73° for the hip. Conclusions: The estimation of joint kinematics through the IMU-based CSKA and ISSA algorithms presents an effective solution for the sensor-to-segment calibration of inertial sensors, allowing the calculation of lower limb flexion-extension kinematics in multiple childcare-inspired high knee flexion postures.

    Study III – A Multi-Dimensional Dynamic Time Warping Distance-Based Framework for the Recognition of High Knee Flexion Postures in Inertial Sensor Data. Background: The interpretation of inertial measures as they relate to occupational exposures is non-trivial. In order to relate the continuously collected data to the activities or postures performed by the sensor wearer, pattern recognition and machine learning based algorithms can be applied. One difficulty in applying these techniques to real-world data lies in the temporal and scale variability of human movements, which must be overcome when seeking to classify data in the time domain. Objectives: The objective of this study was to develop a sensor-based framework for the detection and measurement of isolated childcare-specific postures (identified in Study I). As a secondary objective, the classification accuracies of movements performed under loaded and unloaded conditions were compared in order to assess the sensitivity of the developed model to potential postural variability accompanying the presence of a load.
    Methods: IMU-based joint angle estimates for the ankle, knee, and hip were time- and scale-normalized before being input to a multi-dimensional Dynamic Time Warping (DTW) distance-based nearest neighbour algorithm for the identification of twelve childcare-inspired postures. Fifty participants performed each posture, when possible, under unloaded and loaded conditions. Angle estimates from thirty-five participants were divided into development and testing data, such that 80% of the trials were segmented into movement templates and the remaining 20% were left as continuous movement sequences. These data were then included in the model building and testing phases, while the accuracy of the model was validated on novel data from fifteen participants. Results: Overall accuracies of 82.3% and 55.6% were reached when classifying postures in the testing and validation data, respectively. When adjusting for the imbalances between classification groups, mean balanced accuracies increased to 86% and 74.6% for the testing and validation data, respectively. Sensitivity and specificity values revealed that the highest rates of misclassification occurred between flatfoot squatting, heels-up squatting, and stooping. It was also found that the model was not capable of identifying sequences of walking data based on a single-step motion template. No significant differences were found between the classification of loaded and unloaded motion trials. Conclusions: A combination of DTW distances calculated between motion templates and continuous movement sequences of lower limb flexion-extension angles was found to be effective in the identification of isolated postures frequently performed in childcare. The developed model was successful at classifying data from participants both included in and excluded from the algorithm building dataset, and was insensitive to postural variability that might be caused by the presence of a load.

    Study IV – Evaluating the Feasibility of Applying the Developed Multi-Dimensional Dynamic Time Warping Distance-Based Framework to the Measurement and Recognition of High Knee Flexion Postures in a Simulated Childcare Environment. Background: While the simulation of high knee flexion postures in isolation (in Study III) provided a basis for the development of a multi-dimensional Dynamic Time Warping based nearest neighbour algorithm for the identification of childcare-inspired postures, it is unlikely that the postures adopted in childcare settings would be performed in isolation. Objectives: This study sought to explore the feasibility of extending the developed classification algorithm to identify and measure postures frequently adopted when performing childcare-specific tasks within a simulated childcare environment. Methods: Lower limb inertial motion data were recorded from twelve participants as they interacted with their child during a series of tasks inspired by those identified in Study I as frequently occurring in childcare settings. In order to reduce the error associated with gyroscopic drift over time, joint angles for each trial were calculated over 60 second increments and concatenated across the duration of each trial. Angle estimates from ten participants were time-windowed to create the inputs for the development and testing of two model designs, wherein (A) the model development data included all templates generated from Study III as well as the continuous motion windows collected here, or (B) the model development data included only windows of continuous motion data.
    The division of data into the development and testing datasets for each 5-fold cross-validated classification model was performed in one of two ways, wherein the data were divided (a) through stratified randomized partitioning of windows, such that 80% were assigned to model development and the remaining 20% were reserved for testing, or (b) by partitioning all windows from a single trial of a single participant for testing while all remaining windows were assigned to the model development dataset. When the classification of continuously collected windows was tested (using division strategy b), a logic-based correction module was introduced to eliminate erroneous predictions. Each model design (A and B) was developed and tested using both data division strategies (a and b), and their performance was subsequently evaluated based on the classification of all data windows from the two subjects reserved for validation. Results: Classification accuracies of 42.2% and 42.5% were achieved when classifying the testing data separated through stratified random partitioning (division strategy a) using models that included (model A, 159 classes) or excluded (model B, 149 classes) the templates generated from Study III, respectively. This classification accuracy decreased to 35.4% when classifying a test partition that included all windows of a single trial (division strategy b) using model A (whose development dataset included the templates from Study III); however, the same trial was classified with an accuracy of 80.8% when using model B (whose development dataset included only windows of continuous motion data). This accuracy was, however, found to be highly dependent on the motions performed in a given trial, and the logic-based corrections were not found to improve classification accuracies. When validating each model by identifying postures performed by novel subjects, classification accuracies of 24.0% and 26.6% were obtained using development data that included (model A) and excluded (model B) the templates from Study III, respectively. Across all novel data, the highest classification accuracies were observed when identifying static postures, which is unsurprising given that windows of these postures were the most prevalent in the model development datasets. Conclusions: While classification accuracies above chance were achieved, the classification models evaluated in this study were incapable of identifying the postures adopted during simulated childcare tasks to a level that could be considered satisfactory for accurately reporting on the postures assumed in a childcare environment. The success of the classifier was highly dependent on the number of transitions occurring between postures while in high flexion; therefore, more classifier development data are needed to create templates for these novel transition movements. Given the high variability in postural adoption when caring for and interacting with children, additional movement templates based on continuously collected data would be required for the successful identification of postures in occupational settings.

    Global Conclusions: Childcare workers exceed, within a typical working day, previously reported cumulative exposure and frequency-of-adoption thresholds for high knee flexion postures associated with increased incidences of knee OA development.
    Inertial measurement units provide a unique means of objectively measuring postures frequently adopted when caring for children, which may ultimately permit the quantification of high knee flexion exposures in childcare settings and further study of the relationship between these postures and the risk of OA development in occupational childcare. While the results of this thesis demonstrate that IMU-based measures of lower limb kinematics can be used to identify these postures in isolation, further work is required to expand the classification model and enable the identification of such postures from continuously collected data.
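
    Studies III and IV rest on a multi-dimensional Dynamic Time Warping distance combined with a nearest neighbour rule. The sketch below, assuming only NumPy, illustrates that combination on toy data; the dependent (vector-valued) DTW formulation, the template set, and all names are illustrative assumptions rather than the thesis implementation.

        # Minimal sketch of multi-dimensional DTW + 1-nearest-neighbour posture
        # classification. Sequences are arrays of shape (n_samples, n_joints),
        # e.g. ankle/knee/hip flexion-extension angles.
        import numpy as np

        def dtw_distance(a, b):
            """Dependent multi-dimensional DTW with Euclidean local cost."""
            n, m = len(a), len(b)
            cost = np.full((n + 1, m + 1), np.inf)
            cost[0, 0] = 0.0
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    d = np.linalg.norm(a[i - 1] - b[j - 1])
                    cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                         cost[i, j - 1],      # deletion
                                         cost[i - 1, j - 1])  # match
            return cost[n, m]

        def classify(sequence, templates):
            """Return the label of the nearest template by DTW distance."""
            distances = [(dtw_distance(sequence, tpl), label)
                         for tpl, label in templates]
            return min(distances)[1]

        # Toy example: two random templates and one random query sequence.
        templates = [
            (np.random.rand(50, 3), "flatfoot_squat"),
            (np.random.rand(60, 3), "kneel"),
        ]
        print(classify(np.random.rand(55, 3), templates))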

    Low-Cost Sensors and Biological Signals

    Many sensors are currently available at prices lower than USD 100 and cover a wide range of biological signals: motion, muscle activity, heart rate, etc. Such low-cost sensors have metrological features that allow them to be used in everyday life and clinical applications, where gold-standard equipment is both too expensive and too time-consuming to use. The selected papers present current applications of low-cost sensors in domains such as physiotherapy, rehabilitation, and affective technologies. The results cover various aspects of low-cost sensor technology, from hardware design to software optimization.

    Subject-Independent Frameworks for Robotic Devices: Applying Robot Learning to EMG Signals

    The possibility of humans and robots cooperating has increased interest in controlling robotic devices by means of physiological human signals. To achieve this goal, it is crucial to capture the human intention of movement and to translate it into a coherent robot action. Until now, the classical approach when considering physiological signals, and in particular EMG signals, has been to focus on the specific subject performing the task, owing to the great complexity of these signals. This thesis aims to expand the state of the art by proposing a general subject-independent framework, able to extract the common constraints of human movement by looking at several demonstrations from many different subjects. The variability introduced into the system by multiple demonstrations from many different subjects allows the construction of a robust model of human movement, able to cope with small variations and signal deterioration. Furthermore, the obtained framework can be used by any subject with no need for long training sessions. The signals undergo an accurate preprocessing phase in order to remove noise and artefacts. Following this procedure, we are able to extract significant information to be used in online processes. The human movement can be estimated using well-established statistical methods from Robot Programming by Demonstration; in particular, the input can be modelled using a Gaussian Mixture Model (GMM). The performed movement can be continuously estimated with a Gaussian Mixture Regression (GMR) technique, or it can be identified among a set of possible movements with a Gaussian Mixture Classification (GMC) approach. We improved the results by incorporating prior information into the model in order to enrich the knowledge of the system. In particular, we considered the hierarchical information provided by a quantitative taxonomy of hand grasps; to this end, we developed the first quantitative taxonomy of hand grasps that considers both muscular and kinematic information from 40 subjects. The results proved the feasibility of a subject-independent framework, even when considering physiological signals, such as EMG, from a large number of participants. The proposed solution has been used in two different kinds of applications: (I) the control of prosthetic devices, and (II) an Industry 4.0 facility, allowing humans and robots to work alongside each other or to cooperate. Indeed, a crucial aspect of making humans and robots work together is their mutual knowledge and anticipation of each other's task, and physiological signals can provide information even before the movement has started. In this thesis we also propose an application of Robot Programming by Demonstration in a real industrial facility, in order to optimize the production of electric motor coils. The task was part of the European Robotic Challenge (EuRoC), and the goal was divided into phases of increasing complexity. This solution exploits machine learning algorithms, such as GMM, and its robustness was ensured by considering demonstrations of the task from many subjects. We were able to apply an advanced research topic to a real factory, achieving promising results.
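
    The GMM/GMR step described above can be sketched as follows, assuming NumPy, SciPy, and scikit-learn. The split of dimensions (EMG-derived features as inputs, movement variables as outputs), the number of components, and all names are illustrative assumptions rather than the framework developed in the thesis.

        # Minimal sketch of Gaussian Mixture Regression (GMR) on top of a GMM
        # fitted to joint (input, output) data. All shapes and names are
        # illustrative assumptions.
        import numpy as np
        from scipy.stats import multivariate_normal
        from sklearn.mixture import GaussianMixture

        def fit_gmm(emg_features, movement, n_components=5):
            """Fit a GMM on the joint (input, output) space."""
            data = np.hstack([emg_features, movement])
            gmm = GaussianMixture(n_components=n_components, covariance_type="full")
            gmm.fit(data)
            return gmm, emg_features.shape[1]

        def gmr_predict(gmm, d_in, x):
            """Conditional expectation E[y | x] under the fitted GMM."""
            means, covs, weights = gmm.means_, gmm.covariances_, gmm.weights_
            # Responsibility of each component for the input x.
            resp = np.array([
                w * multivariate_normal.pdf(x, m[:d_in], c[:d_in, :d_in])
                for w, m, c in zip(weights, means, covs)])
            resp /= resp.sum()
            # Component-wise conditional means, combined by responsibility.
            y = np.zeros(means.shape[1] - d_in)
            for r, m, c in zip(resp, means, covs):
                reg = c[d_in:, :d_in] @ np.linalg.inv(c[:d_in, :d_in])
                y += r * (m[d_in:] + reg @ (x - m[:d_in]))
            return y

        # Usage (illustrative shapes): emg (n, 8) features, movement (n, 2) targets.
        # gmm, d_in = fit_gmm(emg, movement); y_hat = gmr_predict(gmm, d_in, emg[0])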

    Investigation of Low-Cost Wearable Internet of Things Enabled Technology for Physical Activity Recognition in the Elderly

    Technological advances in mobile sensing technologies have produced new opportunities for researchers to monitor the elderly in uncontrolled environments. Sensors have become smaller and cheaper and can be worn on the body, potentially creating a network of sensors. Smartphones are also more common in the average household and can provide some behavioural analysis thanks to their built-in sensors. As a result, researchers are able to monitor behaviours in a more naturalistic setting, which can lead to more contextually meaningful data. For those suffering from a mental illness, non-invasive and continuous monitoring can be achieved. Applying sensors in real-world environments can help improve the quality of life of an elderly person with a mental illness and monitor their condition through behavioural analysis. To achieve this, the selected classifiers must be able to accurately detect when an activity has taken place. In this thesis we aim to provide a framework for the investigation of activity recognition in the elderly using low-cost wearable sensors, which has resulted in the following contributions: 1. Classification of eighteen activities typical in a home setting, broken down into three disparate categories: dynamic, sedentary, and transitional. These were detected using two Shimmer3 IMU devices located on the participants' wrist and waist, creating a low-cost, contextually deployable solution for elderly care monitoring. 2. Time-domain and frequency-domain features extracted from the Shimmer devices' accelerometer and gyroscope were used as inputs to a Convolutional Neural Network (CNN) model, which achieved high classification accuracy on the data set obtained from participants recruited to the study through Join Dementia Research. The model was evaluated by varying its parameters and tracking the changes in its performance, and performance statistics were generated for comparison and evaluation. Our results indicate that, with a low epoch count of 200 and the ReLU activation function, the model achieves an accuracy of 86% on the wrist data set and 85% on the waist data set, using only two low-cost wearable devices.
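
    A minimal sketch of a CNN classifier of the kind described in contribution 2 is given below, assuming PyTorch. The framework choice, the window shape (six channels of accelerometer and gyroscope data, 128 samples), and the training settings are illustrative assumptions, not the configuration used in the thesis.

        # Minimal sketch of a 1D CNN over windows of wrist/waist IMU data.
        import torch
        import torch.nn as nn

        class ActivityCNN(nn.Module):
            def __init__(self, n_channels=6, n_classes=18):
                super().__init__()
                self.features = nn.Sequential(
                    nn.Conv1d(n_channels, 32, kernel_size=5, padding=2), nn.ReLU(),
                    nn.MaxPool1d(2),
                    nn.Conv1d(32, 64, kernel_size=5, padding=2), nn.ReLU(),
                    nn.AdaptiveAvgPool1d(1))
                self.classifier = nn.Linear(64, n_classes)

            def forward(self, x):                  # x: (batch, channels, samples)
                return self.classifier(self.features(x).squeeze(-1))

        model = ActivityCNN()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.CrossEntropyLoss()

        # One illustrative training step on random data standing in for real windows.
        x = torch.randn(16, 6, 128)                # 16 windows of sensor data
        y = torch.randint(0, 18, (16,))            # activity labels
        loss = loss_fn(model(x), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()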

    Objective and automated assessment of surgical technical skills with IoT systems: A systematic literature review

    The assessment of surgical technical skills to be acquired by novice surgeons has traditionally been done by an expert surgeon and is therefore of a subjective nature. Nevertheless, the recent advances in IoT, the possibility of incorporating sensors into objects and environments in order to collect large amounts of data, and the progress in machine learning are facilitating a more objective and automated assessment of surgical technical skills. This paper presents a systematic literature review of papers published after 2013 discussing the objective and automated assessment of surgical technical skills. 101 out of an initial list of 537 papers were analyzed to identify: 1) the sensors used; 2) the data collected by these sensors and the relationship between these data, surgical technical skills and surgeons' levels of expertise; 3) the statistical methods and algorithms used to process these data; and 4) the feedback provided based on the outputs of these statistical methods and algorithms. Particularly, 1) mechanical and electromagnetic sensors are widely used for tool tracking, while inertial measurement units are widely used for body tracking; 2) path length, number of sub-movements, smoothness, fixation, saccade and total time are the main indicators obtained from raw data and serve to assess surgical technical skills such as economy, efficiency, hand tremor, or mind control, and distinguish between two or three levels of expertise (novice/intermediate/advanced surgeons); 3) SVM (Support Vector Machines) and Neural Networks are the preferred statistical methods and algorithms for processing the data collected, while new opportunities are opened up to combine various algorithms and use deep learning; and 4) feedback is provided by matching performance indicators and a lexicon of words and visualizations, although there is considerable room for research in the context of feedback and visualizations, taking, for example, ideas from learning analytics. This work was supported in part by the FEDER/Ministerio de Ciencia, Innovación y Universidades; Agencia Estatal de Investigación, through the Smartlet Project under Grant TIN2017-85179-C3-1-R, and in part by the Madrid Regional Government through the e-Madrid-CM Project under Grant S2018/TCS-4307, a project which is co-funded by the European Structural Funds (FSE and FEDER). Partial support has also been received from the European Commission through Erasmus+ Capacity Building in the Field of Higher Education projects, more specifically through projects LALA (586120-EPP-1-2017-1-ES-EPPKA2-CBHE-JP), InnovaT (598758-EPP-1-2018-1-AT-EPPKA2-CBHE-JP), and PROF-XXI (609767-EPP-1-2019-1-ES-EPPKA2-CBHE-JP).
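
    The pipeline surveyed above (motion-derived performance indicators fed to an SVM to distinguish levels of expertise) can be sketched as follows, assuming scikit-learn. The feature set, the labels, and the data values are illustrative placeholders and are not taken from the reviewed papers.

        # Minimal sketch of SVM-based skill-level classification from
        # performance indicators such as path length, number of sub-movements,
        # smoothness, and total time. All data are fabricated placeholders.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Rows: one trial each; columns: path length (m), sub-movements,
        # smoothness score, total time (s).
        X = np.array([[2.1, 45, 0.62, 130.0],
                      [1.4, 28, 0.81, 85.0],
                      [0.9, 17, 0.90, 60.0],
                      [2.4, 50, 0.55, 150.0],
                      [1.1, 20, 0.88, 70.0],
                      [1.6, 33, 0.75, 95.0]])
        y = np.array(["novice", "intermediate", "expert",
                      "novice", "expert", "intermediate"])

        # Standardize indicators, then fit an RBF-kernel SVM.
        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X, y)
        print(clf.predict([[1.8, 40, 0.65, 120.0]]))   # predicted expertise level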

    Deep Learning-Based Action Recognition

    The classification of human action or behavior patterns is very important for analyzing situations in the field and maintaining social safety. This book focuses on recent research findings on recognizing human action patterns. Technologies for the recognition of human action patterns include the processing of human behavior data for learning, the expression of image feature values, the extraction of spatiotemporal information from images, the recognition of human posture, and gesture recognition. Research on these technologies has recently been conducted using general deep learning network modeling from artificial intelligence, and excellent research results have been included in this edition.

    Dynamic motion coupling of body movement for input control

    Touchless gestures are used for input when touch is unsuitable or unavailable, such as when interacting with displays that are remote, large, or public, or when touch is prohibited for hygienic reasons. Traditionally, user input is spatially or semantically mapped to system output; however, in the context of touchless gestures these interaction principles suffer from several disadvantages, including memorability, fatigue, and ill-defined mappings. This thesis investigates motion correlation as a third interaction principle for touchless gestures, which maps user input to system output based on spatiotemporal matching of reproducible motion. We demonstrate the versatility of motion correlation by using movement as the primary sensing principle, relaxing the restrictions on how a user provides input. Using TraceMatch, a novel computer vision-based system, we show how users can provide effective input through investigation of input performance with different parts of the body, and how users can switch modes of input spontaneously in realistic application scenarios. Secondly, spontaneous spatial coupling shows how motion correlation can bootstrap spatial input, allowing any body movement, or movement of tangible objects, to be appropriated for ad hoc touchless pointing on a per-interaction basis. We operationalise the concept in MatchPoint, and demonstrate its unique capabilities through an exploration of the design space with application examples. We then explore how users synchronise with moving targets in the context of motion correlation, revealing how simple harmonic motion leads to better synchronisation. Using the insights gained, we explore the robustness of algorithms used for motion correlation, showing how it is possible to successfully detect a user's intent to interact whilst suppressing accidental activations from common spatial and semantic gestures. Finally, we look across our work to distil guidelines for interface design, and further considerations of how motion correlation can be used, both in general and for touchless gestures.
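
    The motion correlation principle investigated in this thesis can be sketched as a sliding-window correlation between the user's tracked motion and each target's trajectory, with input assigned to the best-matching target above a threshold. The sketch below assumes NumPy; the window length, threshold, and names are illustrative assumptions and do not reproduce the TraceMatch or MatchPoint implementations.

        # Minimal sketch of motion correlation: match user motion to the
        # on-screen target whose trajectory it follows most closely.
        import numpy as np

        def correlation_score(user_xy, target_xy):
            """Mean Pearson correlation of the x and y components of two trajectories."""
            scores = []
            for axis in range(2):
                u, t = user_xy[:, axis], target_xy[:, axis]
                if u.std() < 1e-6 or t.std() < 1e-6:   # ignore degenerate (static) axes
                    return 0.0
                scores.append(np.corrcoef(u, t)[0, 1])
            return float(np.mean(scores))

        def select_target(user_window, target_windows, threshold=0.8):
            """Return the index of the best-matching target, or None if none passes."""
            scores = [correlation_score(user_window, t) for t in target_windows]
            best = int(np.argmax(scores))
            return best if scores[best] >= threshold else None

        # Example: the user follows target 1; both windows cover the recent motion.
        t = np.linspace(0, 2 * np.pi, 120)
        targets = [np.c_[np.cos(t), np.sin(t)], np.c_[np.cos(2 * t), np.sin(2 * t)]]
        user = targets[1] + np.random.normal(0, 0.05, targets[1].shape)
        print(select_target(user, targets))   # -> 1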

    Tracking for Mobile 3D Augmented Reality Applications

    Doctor of Philosophy (Ph.D.) thesis.

    AI and IoT Meet Mobile Machines: Towards a Smart Working Site

    Infrastructure construction is a cornerstone of society and a catalyst for the economy. Improving the efficiency of mobile machinery and reducing its cost of use therefore offer enormous economic benefits in the vast and growing construction market. In this thesis, I envision a novel concept, the smart working site, which increases productivity through fleet management from multiple aspects, using Artificial Intelligence (AI) and the Internet of Things (IoT).