353 research outputs found

    Fusion of wearable and visual sensors for human motion analysis

    No full text
    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics, and has applications in entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion, can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, the study of the structure and function of the human body, can be used to better understand athlete performance, investigate pathologies in certain patients, and assess the surgical skill of medical staff.

    As health services strive to improve the quality of patient care and meet the growing demands of caring for expanding populations around the world, solutions that can improve patient care, the diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, addressing some of the main challenges that have limited the use of motion capture technologies in certain areas of study.

    Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture the tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indication of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor.
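    As a rough illustration of this kind of pipeline, the sketch below extracts simple sliding-window features from a single three-axis accelerometer stream and trains an off-the-shelf classifier. The window length, feature set, and choice of classifier are illustrative assumptions, not the design used in the thesis.

```python
# Minimal sketch of single-sensor activity classification (assumed design,
# not the thesis's actual feature set or classifier).
import numpy as np
from sklearn.svm import SVC

def window_features(acc, win=128, step=64):
    """Extract simple time-domain features from an (N, 3) accelerometer stream."""
    feats = []
    for start in range(0, len(acc) - win + 1, step):
        w = acc[start:start + win]
        mag = np.linalg.norm(w, axis=1)          # per-sample acceleration magnitude
        feats.append(np.concatenate([
            w.mean(axis=0), w.std(axis=0),       # per-axis mean and spread
            [mag.mean(), mag.std(),              # overall motion intensity
             np.abs(np.diff(mag)).mean()],       # crude jerkiness measure
        ]))
    return np.asarray(feats)

# Hypothetical usage: `recordings` is a list of (N, 3) arrays, one per labelled
# walking-motion clip, and `labels` holds one class label per clip.
# X = np.vstack([window_features(r) for r in recordings])
# y = np.concatenate([[l] * len(window_features(r)) for l, r in zip(labels, recordings)])
# clf = SVC(kernel="rbf").fit(X, y)
```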
    To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a reduced subset of wearable inertial sensors. The method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation errors as low as 1.6 cm, 1.1 cm, and 1.4 cm for the hand, elbow, and shoulders, respectively, when compared with a gold-standard optical motion capture system; with the subset of three sensors, mean errors for the hand position reach 15.5 cm.

    Unlike human motion capture systems that rely on vision and reflective reference markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of inaccurate measurements, which becomes increasingly prevalent over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and motion tracking method is introduced that improves the mean tracking accuracy to within 5.8 cm in a 3 m × 5 m laboratory. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also used to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the impact of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration, is shown to be reduced.
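    The visual rectification idea can be illustrated with a deliberately simplified sketch: an inertial position estimate is dead-reckoned forward and, whenever a camera measurement is available, pulled back toward it to cancel accumulated drift. The constant blending gain is an assumption made for illustration; the thesis's actual rectification methods are not reproduced here.

```python
# Simplified sketch of visual rectification of a drifting inertial estimate
# (constant-gain blend is an assumption, not the thesis's method).
import numpy as np

def fuse_step(p_inertial, v, dt, p_visual=None, gain=0.2):
    """One fusion step for a 3D position estimate.

    Dead-reckon with the inertial velocity, then, when a camera measurement
    is available, pull the estimate toward it to correct accumulated drift.
    """
    p_pred = p_inertial + v * dt          # inertial prediction (drifts over time)
    if p_visual is not None:              # visual fix available this step
        return p_pred + gain * (p_visual - p_pred)
    return p_pred
```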

    Control of mobile robots through omnidirectional vision using three-view geometry

    Get PDF
    This work concerns the visual control of mobile robots. Within this broad field of research, we pay particular attention to two elements: omnidirectional vision and multi-view geometric models. Omnidirectional cameras provide very precise angular information, although they exhibit a significant degree of distortion in the radial direction. Their wide field of view makes these cameras well suited to robot navigation tasks. In addition, the use of geometric models relating different views of a scene makes it possible to reject erroneous matches of visual features between images, and thereby make the vision-based control process more robust. Our work presents two visual control techniques for a robot moving on the ground plane.

    First, we propose a new method for visual homing, which uses the information given by a set of reference images previously acquired in the environment together with the images taken by the robot along its motion. In order to take advantage of the qualities of omnidirectional vision, our homing method is purely angular and uses no distance information whatsoever. This characteristic, together with the fact that the motion takes place in a plane, motivates the use of the geometric model given by the 1D trifocal tensor. In particular, the geometric constraints imposed by this tensor, which can be computed from point correspondences across three images, improve the robustness of the control in the presence of matching errors. The interest of our proposal lies in the fact that the control method computes the robot's velocities from angular information only, which is very precise in omnidirectional cameras. In addition, we present a procedure that computes the angular relations between the available views indirectly, without requiring visual information shared among all of them. The described technique can be classified as image-based, since it requires no localisation estimate and uses no 3D information. The robot converges to the target position without knowing metric information about the trajectory followed.

    For some applications, such as obstacle avoidance, more information about the 3D motion performed may be needed. With this idea in mind, we present a new visual control method based on sinusoidal inputs. Sinusoids are smoothly varying functions with well-known mathematical properties, which makes them suitable for vehicle parking manoeuvres. From the sinusoidally varying velocities defined in our design, we obtain the analytical expressions for the evolution of the robot's state variables. Moreover, based on these expressions, we propose a state-feedback control method. The estimate of the robot's state is obtained from the 1D trifocal tensor computed between the target view, the initial view, and the current view of the robot. Through this sinusoidal control, the robot becomes aligned with the target position. In a second step, depth correction is performed by means of a control law defined directly in terms of the 1D trifocal tensor.
    The performance of the two controllers proposed in this work is illustrated through simulations and, in order to support their viability, stability analyses and results from simulations and from experiments with real images are presented.
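    To make the geometric tool concrete, the sketch below shows a standard direct linear (least-squares) estimation of the 1D trifocal tensor from bearing-only correspondences across three views: each bearing angle θ becomes a homogeneous 1D image point u = (sin θ, cos θ), and each matched triple contributes one linear constraint on the tensor entries. The function names are hypothetical, and this textbook method is an illustration rather than the specific procedure used in this work.

```python
# Sketch: linear estimation of the 1D trifocal tensor (2x2x2, up to scale)
# from matched bearing angles across three views. Textbook approach, assumed
# here for illustration; at least 7 correspondences are needed.
import numpy as np

def bearing_to_point(theta):
    """Homogeneous 1D image point for a bearing angle (radians)."""
    return np.array([np.sin(theta), np.cos(theta)])

def estimate_trifocal_1d(b1, b2, b3):
    """Estimate T such that sum_ijk T[i,j,k]*u1[i]*u2[j]*u3[k] = 0
    for every matched triple of bearings (b1[n], b2[n], b3[n])."""
    rows = []
    for t1, t2, t3 in zip(b1, b2, b3):
        u1, u2, u3 = map(bearing_to_point, (t1, t2, t3))
        # Outer product flattened: one linear equation per correspondence.
        rows.append(np.einsum('i,j,k->ijk', u1, u2, u3).ravel())
    _, _, vt = np.linalg.svd(np.asarray(rows))
    return vt[-1].reshape(2, 2, 2)   # null-space vector = tensor entries
```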

    Texture and Colour in Image Analysis

    Get PDF
    Research in colour and texture has experienced major changes in the last few years. This book presents some recent advances in the field, specifically in the theory and applications of colour texture analysis. This volume also features benchmarks, comparative evaluations, and reviews.

    Spectral skyline separation: Extended landmark databases and panoramic imaging

    Get PDF
    Differt D, Möller R. Spectral skyline separation: Extended landmark databases and panoramic imaging. Sensors. 2016;16(10):1614.

    Real-Time Multi-Fisheye Camera Self-Localization and Egomotion Estimation in Complex Indoor Environments

    Get PDF
    In this work, a real-time-capable multi-fisheye-camera self-localization and egomotion estimation framework is developed. The thesis covers all aspects, ranging from omnidirectional camera calibration to the development of a complete multi-fisheye-camera SLAM system based on a generic multi-camera bundle adjustment method.
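    As a small illustration of the omnidirectional calibration building block, the sketch below implements a generic equidistant fisheye projection with a polynomial distortion term, one common way to model fisheye lenses. The parameterisation and coefficient values are illustrative assumptions, not the model used in this framework.

```python
# Sketch: equidistant fisheye projection with polynomial distortion
# (a common generic model, assumed here for illustration).
import numpy as np

def project_fisheye(X, f, cx, cy, k1=0.0, k2=0.0):
    """Project a 3D point X = (x, y, z) in the camera frame to pixels."""
    x, y, z = X
    theta = np.arctan2(np.hypot(x, y), z)                 # angle from optical axis
    r = f * theta * (1 + k1 * theta**2 + k2 * theta**4)   # distorted image radius
    phi = np.arctan2(y, x)                                # azimuth in the image plane
    return np.array([cx + r * np.cos(phi), cy + r * np.sin(phi)])
```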