566 research outputs found

    Fusion of wearable and visual sensors for human motion analysis

    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics. Human motion analysis has applications within areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, which is the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and assess the surgical skill of medical staff. As health services strive to improve the quality of patient care and meet the growing demands required to care for expanding populations around the world, solutions that can improve patient care, diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. 
Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study. Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indication of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor. To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a reduced subset of wearable inertial sensors.
This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation accuracies as low as 1.6cm, 1.1cm, and 1.4cm for the hand, elbow, and shoulders, respectively, when compared with a gold-standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5cm. Unlike human motion capture systems that rely on vision and reflective reference markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of measurement errors, which becomes increasingly pronounced over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and motion tracking method is introduced that improves the mean accuracy of tracking to within 5.8cm in a laboratory of 3m × 5m. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also utilised to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the occurrence of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration, is shown to be reduced.
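The single-sensor activity classification described above can be illustrated with a minimal sketch. The time-domain features and nearest-centroid rule below are illustrative assumptions, not the actual feature set or classifier developed in the thesis:

```python
import math

def window_features(samples):
    """Simple time-domain features from one accelerometer window:
    mean, standard deviation, and mean absolute successive difference."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    mad = sum(abs(b - a) for a, b in zip(samples, samples[1:])) / (n - 1)
    return (mean, math.sqrt(var), mad)

def nearest_centroid(feature, centroids):
    """Assign the window to the motion class with the closest feature centroid."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist(feature, centroids[label]))
```

In practice a window would cover one or two gait cycles, and the class centroids (or a trained classifier) would be learned from labelled recordings of each walking style.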

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology. This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied for the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks.
Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
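The fusion of IMU-propagated state with vision-based landmark measurements in a navigation filter can be illustrated, in its simplest scalar form, by a Kalman measurement update. This is a generic sketch, not the specific filters designed in the thesis:

```python
def kalman_update(x, P, z, R):
    """Scalar Kalman measurement update: fuse the IMU-predicted state x
    (variance P) with a landmark-derived position measurement z (variance R)."""
    K = P / (P + R)            # Kalman gain: how much to trust the measurement
    x_new = x + K * (z - x)    # corrected state estimate
    P_new = (1 - K) * P        # uncertainty shrinks after the update
    return x_new, P_new
```

Each detected landmark centroid would yield one such surrogate-GPS update per frame; between frames, the IMU propagates the state and grows P.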

    Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
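Many of the reviewed optical reconstruction techniques, stereoscopy in particular, rest on the pinhole stereo model, where depth follows from disparity as Z = f·B/d. A minimal sketch (the focal length, baseline, and disparity values below are hypothetical):

```python
def triangulate_depth(focal_px, baseline_m, disparity_px):
    """Stereo depth from disparity under the rectified pinhole model:
    Z = f * B / d, with f in pixels, baseline B in metres, disparity d in pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For a laparoscopic stereo rig, B is only a few millimetres, so small disparity errors translate into large depth errors, which is one reason robust surface reconstruction remains challenging.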

    Robust Modular Feature-Based Terrain-Aided Visual Navigation and Mapping

The visual feature-based Terrain-Aided Navigation (TAN) system presented in this thesis addresses the problem of constraining the inertial drift introduced into the location estimate of Unmanned Aerial Vehicles (UAVs) in GPS-denied environments. The presented TAN system utilises salient visual features representing semantic or human-interpretable objects (roads, forest and water boundaries) from onboard aerial imagery and associates them with a database of reference features created a priori by applying the same feature detection algorithms to satellite imagery. Correlating the detected features with the reference features via a series of robust data association steps allows a localisation solution to be achieved whose absolute error is bounded by the certainty of the reference dataset. The feature-based Visual Navigation System (VNS) presented in this thesis was originally developed for a navigation application using simulated multi-year satellite image datasets. The extension of the system into the mapping domain, in turn, has been based on real (not simulated) flight data and imagery. The mapping study demonstrates the full potential of the system as a versatile tool for enhancing the accuracy of information derived from aerial imagery. Not only have visual features such as road networks, shorelines and water bodies been used to obtain a position 'fix'; they have also been used in reverse for accurate mapping of vehicles detected on the roads into an inertial space with improved precision. Combined correction of geo-coding errors and improved aircraft localisation formed a robust solution to the defense mapping application. A system of the proposed design will provide a complete independent navigation solution to an autonomous UAV and additionally give it object tracking capability.
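The data association step that matches detected features against the a priori reference database can be sketched as greedy nearest-neighbour matching with a gating distance. This is a simplified stand-in for the thesis's actual multi-step robust association pipeline:

```python
import math

def associate(detected, reference, gate):
    """Greedily match each detected feature (x, y) to the nearest unused
    reference feature, rejecting candidates beyond the gating distance."""
    matches = []
    used = set()
    for d in detected:
        best, best_dist = None, gate
        for i, r in enumerate(reference):
            if i in used:
                continue
            dist = math.hypot(d[0] - r[0], d[1] - r[1])
            if dist < best_dist:
                best, best_dist = i, dist
        if best is not None:
            used.add(best)
            matches.append((d, reference[best]))
    return matches
```

The gate rejects spurious matches; a robust system would add outlier rejection across the whole match set (e.g. a consensus check on the implied position fix) rather than trusting individual pairs.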

    Laser-Based Detection and Tracking of Moving Obstacles to Improve Perception of Unmanned Ground Vehicles

The objective of this thesis is to develop a system that improves the perception stage of heterogeneous unmanned ground vehicles (UGVs), thereby achieving navigation that is robust in terms of safety and energy efficiency in different real environments, both indoor and outdoor. Perception must deal with static and dynamic obstacles using heterogeneous sensors, such as odometry, a laser range sensor (LIDAR), an inertial measurement unit (IMU), and a global positioning system (GPS), to obtain environment information with the highest possible accuracy, thereby improving the planning and obstacle-avoidance stages. To achieve this objective, a dynamic obstacle mapping stage (DOMap) is proposed that contains the information on static and dynamic obstacles. The proposal is based on an extension of the Bayesian Occupancy Filter (BOF) that includes non-discretised velocities. Velocity detection is obtained by applying optical flow over a grid of discretised LIDAR measurements. In addition, occlusions between obstacles are handled and a multi-hypothesis tracking stage is added, improving the robustness of the proposal (iDOMap). The proposal has been tested in simulated and real environments with different robotic platforms, including commercial platforms and the platform (PROPINA) developed in this thesis to improve collaboration between teams of humans and robots within the ABSYNTHE project. Finally, methods have been proposed to calibrate the LIDAR position and to improve odometry with an IMU.
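The Bayesian Occupancy Filter that DOMap extends rests on a per-cell binary Bayes update, conveniently computed in log-odds form. A minimal sketch (the measurement probability is a placeholder, and the velocity estimation that distinguishes DOMap is omitted):

```python
import math

def logodds_update(cell_logodds, p_occupied_given_meas):
    """Binary Bayes update of one occupancy-grid cell in log-odds form:
    add the log-odds of the inverse sensor model for the new measurement."""
    p = p_occupied_given_meas
    return cell_logodds + math.log(p / (1.0 - p))

def occupancy(cell_logodds):
    """Recover the occupancy probability from the accumulated log-odds."""
    return 1.0 - 1.0 / (1.0 + math.exp(cell_logodds))
```

Repeated "occupied" readings (p > 0.5) drive a cell's probability towards 1, while "free" readings (p < 0.5) drive it towards 0; log-odds accumulation keeps the update numerically stable.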

    Posture Risk Assessment in an Automotive Assembly Line using Inertial Sensors

Musculoskeletal disorders (MSD) are a highly prevalent work-related health problem. Biomechanical exposure to hazardous postures during work is a risk factor for the development of MSD. This study focused on developing an inertial sensor-based approach to evaluate posture in industrial contexts, particularly in automotive assembly lines. The analysis was divided into two stages: 1) a comparative study of joint angles calculated during movements of the upper body segments using the proposed motion tracking framework against those provided by a state-of-the-art inertial motion capture system, and 2) a work-related posture risk evaluation of operators working in an automotive assembly line. For the comparative study, we selected data collected in laboratory (N = 8 participants) and assembly line settings (N = 9 participants), while for the work-related posture risk evaluation, we only considered data acquired within the automotive assembly line. The results revealed that the proposed framework can be applied to track industrial task movements performed in the sagittal plane, and the posture evaluation uncovered posture risk differences among operators that are not captured by traditional posture risk assessment instruments.
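The joint angles underlying such a posture evaluation can be sketched as the angle between two body-segment direction vectors derived from the inertial orientations. The risk bands below are hypothetical, loosely inspired by RULA-style scoring tables, and are not the instrument used in the study:

```python
import math

def flexion_angle(segment_a, segment_b):
    """Angle in degrees between two body-segment direction vectors
    (e.g. upper arm vs. trunk), each given as a 3D vector."""
    dot = sum(a * b for a, b in zip(segment_a, segment_b))
    na = math.sqrt(sum(a * a for a in segment_a))
    nb = math.sqrt(sum(b * b for b in segment_b))
    cos = max(-1.0, min(1.0, dot / (na * nb)))  # clamp for numerical safety
    return math.degrees(math.acos(cos))

def risk_band(angle_deg):
    """Hypothetical three-band posture score for a flexion angle."""
    if angle_deg < 20.0:
        return "low"
    if angle_deg < 45.0:
        return "medium"
    return "high"
```

A continuous stream of such per-frame angles, rather than a single snapshot, is what allows the per-operator risk differences mentioned above to surface.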

    Hand-finger pose tracking using inertial and magnetic sensors


    Processing and tracking human motions using optical, inertial, and depth sensors

The processing of human motion data constitutes an important strand of research with many applications in computer animation, sport science and medicine. Currently, there exist various systems for recording human motion data that employ sensors of different modalities, such as optical, inertial and depth sensors. Each of these sensor modalities has intrinsic advantages and disadvantages that make it suitable for capturing specific aspects of human motion, such as the overall course of a motion, the shape of the human body, or the kinematic properties of motions. In this thesis, we contribute algorithms that exploit the respective strengths of these different modalities for comparing, classifying, and tracking human motion in various scenarios. First, we show how our proposed techniques can be employed, e.g., for real-time motion reconstruction using efficient cross-modal retrieval techniques. Then, we discuss a practical application of inertial sensor-based features to the classification of trampoline motions. As a further contribution, we elaborate on estimating the human body shape from depth data, with applications to personalized motion tracking. Finally, we introduce methods to stabilize a depth tracker in challenging situations, such as in the presence of occlusions, by exploiting the availability of complementary inertial sensor information.
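Stabilizing a depth tracker with complementary inertial information can be sketched, in its simplest form, as a complementary filter that blends the two position estimates; the blending weight below is an illustrative assumption, not the method developed in the thesis:

```python
def complementary_fuse(depth_pos, inertial_pos, alpha=0.98):
    """Blend a drifting but high-rate inertial position estimate with an
    absolute but noisy (and occlusion-prone) depth-tracker position.
    alpha weights the inertial channel; (1 - alpha) the depth channel."""
    return tuple(alpha * i + (1.0 - alpha) * d
                 for i, d in zip(inertial_pos, depth_pos))
```

During an occlusion, alpha would be driven towards 1 so the inertial estimate carries the tracker, and relaxed again once the depth measurement becomes trustworthy.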

    Development and Testing of a Self-Contained, Portable Instrumentation System for a Fighter Pilot Helmet

A self-contained, portable, inertial and positional measurement system was developed and tested for an HGU-55 model fighter pilot helmet. The system, designated the Portable Helmet Instrumentation System (PHIS), demonstrated the recording of accelerations and rotational rates experienced by the human head in a flight environment. A compact, self-contained, "knee-board" sized computer recorded these accelerations and rotational rates during flight. The present research presents the results of a limited evaluation of this helmet-mounted instrumentation system flown in an Extra 300 fully aerobatic aircraft. The accuracy of the helmet-mounted inertial head tracker was compared with the aircraft-mounted reference system, and the ability of the PHIS to record position, orientation, and inertial information with sufficient fidelity in ground and flight conditions was evaluated. The concepts demonstrated in this system are: 1) calibration of the inertial sensing element without external equipment; 2) the use of differential inertial sensing to remove the accelerations and rotational rates of a moving vehicle from the pilot's head-tracking measurements; and 3) the determination of three-dimensional position and orientation from three corresponding points using a range sensor. The range sensor did not operate as planned: the helmet remained within the range sensor's field of view for only 37% of flight time. Vertical accelerations showed the greatest correlation when comparing helmet measurements to aircraft measurements. The PHIS operated well during level flight.
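Concept 2), differential inertial sensing, amounts to subtracting the aircraft-mounted sensor output from the helmet-mounted output so that only head motion relative to the cockpit remains. A minimal sketch, assuming the two gyro triads are time-aligned and expressed in a common frame:

```python
def head_relative_rates(helmet_rates, aircraft_rates):
    """Differential inertial sensing: remove the vehicle's angular rates
    (roll, pitch, yaw) from the helmet-mounted gyro output, leaving the
    head's motion relative to the cockpit."""
    return tuple(h - a for h, a in zip(helmet_rates, aircraft_rates))
```

In a real system the two sensor frames must first be rotated into alignment and synchronised; a raw subtraction as above only holds once that calibration is done.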

    Cloud point labelling in optical motion capture systems

This thesis deals with the task of point labeling involved in the overall workflow of optical motion capture systems. Human motion capture by optical sensors produces at each frame a snapshot of the motion as a cloud of points that must be labeled in order to carry out ensuing motion analysis. The problem of labeling is tackled as a classification problem, using machine learning techniques such as AdaBoost or genetic search to train a set of weak classifiers, gathered in turn into an ensemble of partial solvers. The result is used to feed an online algorithm able to provide marker labeling at a target detection accuracy and at a reduced computational cost. In contrast to other approaches, the use of potentially misleading temporal correlations has been discarded, strengthening the process against failure due to occasional labeling errors. The effectiveness of the approach is demonstrated on a real dataset obtained from measurements of the gait motion of persons, for which the ground-truth labeling has been verified manually. In addition, a broad view of the field of motion capture and its optical branch is provided to the reader: description, composition, state of the art, and related work. It serves as a framework to highlight the importance, and ease the understanding, of the point-labeling task.
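The ensemble-of-weak-classifiers idea can be sketched as an AdaBoost-style weighted vote of decision stumps. The stump representation below is an illustrative assumption, not the thesis's actual classifier structure:

```python
def adaboost_predict(x, stumps):
    """AdaBoost-style prediction: each weak classifier is a decision stump
    (feature_index, threshold, alpha); the ensemble takes a weighted vote
    and returns a +1 / -1 label."""
    score = sum(alpha * (1 if x[i] > t else -1) for i, t, alpha in stumps)
    return 1 if score >= 0 else -1
```

Training assigns larger alpha to stumps with lower weighted error, so reliable weak labellers dominate the vote; a per-marker ensemble of such votes yields the point labels.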