
    Fusion of wearable and visual sensors for human motion analysis

    No full text
    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics. Human motion analysis has applications within areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion, can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, which is the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and assess the surgical skill of medical staff. As health services strive to improve the quality of patient care and meet the growing demands required to care for expanding populations around the world, solutions that can improve patient care, diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study. Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indications of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor.
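
    As a hedged illustration of the kind of pipeline described above (the thesis does not publish code), the sketch below extracts simple time-domain features from sliding windows of a single tri-axial accelerometer stream and trains a support vector machine. The feature set, window length, and the use of scikit-learn are assumptions, and random data stands in for real recordings.

```python
# Hypothetical sketch: window-based feature extraction and classification of
# walking styles from a single tri-axial inertial (accelerometer) stream.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def extract_features(window):
    """Simple time-domain features per axis: mean, std, signal energy."""
    feats = []
    for axis in range(window.shape[1]):
        x = window[:, axis]
        feats += [x.mean(), x.std(), np.sum(x ** 2) / len(x)]
    return np.array(feats)

def windows(signal, size=128, step=64):
    """Yield overlapping fixed-size windows over the sensor stream."""
    for start in range(0, len(signal) - size + 1, step):
        yield signal[start:start + size]

# Placeholder random data stands in for the real ear-worn sensor recordings.
rng = np.random.default_rng(0)
acc = rng.standard_normal((10_000, 3))            # (n_samples, 3) stream
X = np.array([extract_features(w) for w in windows(acc)])
y = rng.integers(0, 5, size=len(X))               # five walking classes

clf = SVC(kernel="rbf")
print(cross_val_score(clf, X, y, cv=5).mean())    # chance-level on random data
```
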
To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a reduced subset of wearable inertial sensors. This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation accuracies as low as 1.6cm, 1.1cm, and 1.4cm for the hand, elbow, and shoulders, respectively, when compared with a gold standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5cm. Unlike human motion capture systems that rely on vision and reflective reference point markers, commonly known as marker-based optical motion capture, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of inaccurate measurements, which becomes increasingly prevalent over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and motion tracking method is introduced that improves the average mean accuracy of tracking to within 5.8cm in a laboratory of 3m × 5m. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also utilised to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the impact of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor calibration, is shown to be reduced.
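
    The reduced-sensor posture estimation can likewise be sketched as a generic regression problem. The ridge regressor, the 27-dimensional input encoding, and the synthetic data below are illustrative assumptions, not the thesis' actual model.

```python
# Hypothetical sketch of the reduced-sensor idea: learn a regression from a
# small set of inertial measurements to joint positions recorded by an
# optical system. Synthetic data stands in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# 3 sensors x 9 values (e.g., rows of an orientation matrix) -> 27 features
imu = rng.standard_normal((5000, 27))
# Target: 3D positions of hand, elbow, and shoulder (9 values), generated
# here as a noisy linear map for demonstration only.
joints = imu @ rng.standard_normal((27, 9)) + 0.05 * rng.standard_normal((5000, 9))

X_tr, X_te, y_tr, y_te = train_test_split(imu, joints, random_state=0)
model = Ridge(alpha=1.0).fit(X_tr, y_tr)
err = np.abs(model.predict(X_te) - y_te).mean(axis=0)
print("mean absolute error per coordinate:", err)
```
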

    Precise measurement of position and attitude based on convolutional neural network and visual correspondence relationship

    Get PDF
    Accurate measurement of position and attitude information is particularly important in many applications. Traditional measurement methods generally require high-precision measurement equipment for analysis, leading to high costs and limited applicability, while vision-based measurement schemes need to solve complex visual correspondence relationships. With the extensive development of neural networks in related fields, it has become possible to apply them to the measurement of object position and attitude. In this paper, we propose an object pose measurement scheme based on a convolutional neural network, and we have successfully implemented end-to-end position and attitude detection. Furthermore, to effectively expand the measurement range and reduce the number of training samples, we demonstrate the independence of objects in each dimension and propose sub-added training programs. At the same time, we designed a generative image encoder to guarantee the detection performance of the trained model in practical applications.
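
    A minimal sketch of an end-to-end pose regression network of this kind is given below; the architecture, input size, and loss are assumptions for illustration and do not reproduce the authors' network.

```python
# Minimal sketch (not the authors' model): a small CNN that regresses a
# 6-DoF pose (x, y, z translation + three attitude angles) from an image.
import torch
import torch.nn as nn

class PoseNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # global pooling to (N, 32, 1, 1)
        )
        self.head = nn.Linear(32, 6)          # 3 translation + 3 attitude values

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

net = PoseNet()
dummy = torch.randn(4, 3, 128, 128)           # batch of 4 RGB images
pose = net(dummy)                             # (4, 6) predicted poses
loss = nn.functional.mse_loss(pose, torch.zeros_like(pose))
loss.backward()                               # end-to-end training step
```
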

    Development of a tool for rider posture tracking on a motorcycle

    Get PDF
    Safety has become one of the most important fields of development for the vehicle industry. Both in the automotive industry and others, new features for traditional vehicles are being developed to increase the safety and comfort of passengers, such as autonomous driving. In the case of the motorcycle industry, autonomous driving is further away from implementation, so this sector is developing other ways to increase the safety of its vehicles. At the Technische Universität Darmstadt, a test motorcycle has been set up to better understand how motorcycle riders behave in different situations. The system is based on different sensors and cameras that capture the status of both the motorcycle and the rider in each of these situations. One of the objectives of this project is to develop a tool that fits the TU Darmstadt project to estimate the position of the rider through fisheye cameras located on the back and front of the motorcycle. For this purpose, different methods of object detection through image processing have been evaluated, taking into account the potential of each and the final objective of this tool. The results of these tests have led to a system composed of small markers that, incorporated on the jacket of the rider, allow the position of each of the marker locations to be extracted. For this, a tool based on three stages has been developed. The first stage extracts the parameters of the camera so that the images can be analysed afterwards. The second compensates for the distortion caused by the fisheye cameras, using the parameters extracted in the first stage. In the last stage, the tool analyses the position and orientation of the markers in the image and converts them into positions in motorcycle coordinates. The tool has been evaluated with different tests to analyse its accuracy. Analysing the results, it is determined that this tool is a valid approach for the purposes of the Technische Universität Darmstadt project. The precision, although it varies across tests, is sufficiently high to meet its objective within the ranges for which the tool has been configured. Finally, the tool is tested under the conditions for which it was designed, and possible aspects to be improved are detailed for its subsequent implementation.
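
    The three-stage pipeline lends itself to a short OpenCV sketch: camera intrinsics and fisheye distortion coefficients from a prior calibration (stage one, placeholder values here), undistortion of a frame (stage two), and marker pose estimation (stage three). ArUco markers stand in for the thesis' custom jacket markers, the file name is hypothetical, and the classic opencv-contrib ArUco interface is assumed (names differ in OpenCV 4.7+).

```python
# Hedged sketch of a fisheye-camera marker pipeline with OpenCV.
import cv2
import numpy as np

# Stage 1: intrinsics K and fisheye distortion D from a prior calibration
# (placeholder values; a real run would use cv2.fisheye.calibrate).
K = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
D = np.zeros((4, 1))

# Stage 2: compensate the fisheye distortion on a captured frame.
frame = cv2.imread("rider_frame.png")               # hypothetical file
undistorted = cv2.fisheye.undistortImage(frame, K, D, Knew=K)

# Stage 3: detect markers and estimate their pose in camera coordinates.
aruco = cv2.aruco                                   # requires opencv-contrib
dictionary = aruco.getPredefinedDictionary(aruco.DICT_4X4_50)
corners, ids, _ = aruco.detectMarkers(undistorted, dictionary)
if ids is not None:
    # 0.05 m marker side length is an assumed value.
    rvecs, tvecs, _ = aruco.estimatePoseSingleMarkers(corners, 0.05, K, D)
    # tvecs are marker positions (metres) in the camera frame; a fixed
    # camera-to-motorcycle transform would map them to motorcycle coordinates.
    print(tvecs)
```
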

    Climbing and Walking Robots

    Get PDF
    Nowadays robotics is one of the most dynamic fields of scientific research. The shift of robotics research from manufacturing to service applications is clear. During the last decades, interest in studying climbing and walking robots has increased. This growing interest spans many areas, the most important of which are mechanics, electronics, medical engineering, cybernetics, controls, and computers. Today's climbing and walking robots combine manipulative, perceptive, communicative, and cognitive abilities, and they are capable of performing many tasks in industrial and non-industrial environments. Surveillance, planetary exploration, emergency rescue operations, reconnaissance, petrochemical applications, construction, entertainment, personal services, intervention in severe environments, transportation, and medical applications are some examples from the very diverse application fields of climbing and walking robots. Given the great progress in this area of robotics, it is anticipated that the next generation of climbing and walking robots will enhance lives and will change the way humans work, think, and make decisions. This book presents the state-of-the-art achievements, recent developments, applications, and future challenges of climbing and walking robots, presented in 24 chapters by authors throughout the world. The book serves as a reference especially for researchers who are interested in mobile robots. It is also useful for industrial engineers and graduate students in advanced study.

    Non-Intrusive Gait Recognition Employing Ultra Wideband Signal Detection

    Get PDF
    A self-regulating and non-contact impulse radio ultra wideband (IR-UWB) based 3D human gait analysis prototype has been modelled and developed with the help of supervised machine learning (SML), for this application for the first time. The work intends to provide a rewarding assistive biomedical application which would help doctors and clinicians monitor human gait traits and abnormalities with less human intervention in the fields of physiological examination, physiotherapy, home assistance, rehabilitation success determination, and health diagnostics. The research comprises IR-UWB data gathered from a number of male and female participants in both anechoic chamber and multi-path environments. In total, twenty-four individuals were recruited, of whom twenty had normal gait and four complained of knee pain that resulted in compensated spastic walking patterns. A 3D postural model of human movements has been created from the backscattering property of the radar pulses, employing spherical trigonometry and vector fields. These subject data (heights of body areas from the ground) have been recorded and used to extract the gait trait from the associated biomechanical activity and to differentiate lower limb movement patterns from other body areas. Initially, a 2D postural model of human gait is presented from the IR-UWB sensing phenomena employing spherical coordinates and trigonometry, where only two dimensions, distance from the radar and height of reflection, are determined. Six pivotal gait parameters, namely step frequency, cadence, step length, walking speed, total covered distance, and body orientation, have all been measured employing radar principles and short-term Fourier transformation (STFT). Subsequently, the proposed gait identification and parameter characterisation has been analysed, tested, and validated against popularly accepted smartphone applications, with resulting variations of less than 5%. The spherical trigonometric model has then been elevated to a 3D postural model, where the prototype can determine width of motion, distance from the radar, and height of reflection. Vector algebra has been incorporated with this 3D model to measure knee angles and hip angles from the extension and flexion of the lower limbs, to understand gait behaviour throughout the entire range of bipedal locomotion. Simultaneously, the Microsoft Kinect Xbox One has been employed during the experiments to assist in the validation process: the same vector mathematics has been applied to the skeleton data obtained from the Kinect to determine both the hip and knee angles, and the outcomes have been compared using the Bland and Altman (B&A) graphical statistical approach. Further, the changes in knee angles obtained from the normal gaits have been used to train popular SMLs such as k-nearest neighbour (kNN) and support vector machines (SVM). The trained model has subsequently been tested with new data (knee angles extracted from both normal and abnormal gait) to assess its ability to recognise gait abnormality. The outcomes have been validated through standard and well-known statistical performance metrics, with promising results. The outcomes prove the acceptability of the proposed non-contact IR-UWB gait recognition approach for detecting gait abnormalities.
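
    The vector-algebra step for joint angles can be made concrete with a short sketch: the knee angle is the angle at the knee between the thigh segment (hip to knee) and the shank segment (knee to ankle), computable from any 3D postural model, whether radar- or Kinect-derived. The point coordinates below are illustrative.

```python
# Joint angle from 3D points via vector algebra (generic technique, not
# code from the thesis). Works for knee and hip angles alike.
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by segments b->a and b->c."""
    u, v = a - b, c - b
    cosang = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Illustrative hip, knee, and ankle positions (metres) from a postural model.
hip = np.array([0.0, 0.9, 0.0])
knee = np.array([0.0, 0.5, 0.05])
ankle = np.array([0.0, 0.1, 0.0])
print(f"knee angle: {joint_angle(hip, knee, ankle):.1f} deg")
```
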

    Mobile Robots Navigation

    Get PDF
    Mobile robot navigation includes different interrelated activities: (i) perception, as obtaining and interpreting sensory information; (ii) exploration, as the strategy that guides the robot to select the next direction to go; (iii) mapping, involving the construction of a spatial representation using the sensory information perceived; (iv) localization, as the strategy to estimate the robot's position within the spatial map; (v) path planning, as the strategy to find a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors all over the world. Research cases are documented in 32 chapters organized into 7 categories, described next.
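
    As a self-contained illustration of activity (v), path planning, the sketch below runs a breadth-first search over an occupancy grid; this is a generic textbook technique, not an algorithm taken from the book.

```python
# Path planning on an occupancy grid (0 = free, 1 = obstacle) with
# breadth-first search, returning a shortest 4-connected path.
from collections import deque

def plan_path(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    prev, frontier = {start: None}, deque([start])
    while frontier:
        cell = frontier.popleft()
        if cell == goal:                      # reconstruct path back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                frontier.append((nr, nc))
    return None                               # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
print(plan_path(grid, (0, 0), (2, 0)))        # routes around the wall
```
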

    Climbing and Walking Robots

    Get PDF
    With the advancement of technology, new exciting approaches enable us to render mobile robotic systems more versatile, robust, and cost-efficient. Some researchers combine climbing and walking techniques with a modular approach, a reconfigurable approach, or a swarm approach to realize novel prototypes as flexible mobile robotic platforms featuring all necessary locomotion capabilities. The purpose of this book is to provide an overview of the latest wide-ranging achievements in climbing and walking robotic technology to researchers, scientists, and engineers throughout the world. Different aspects including control simulation, locomotion realization, methodology, and system integration are presented from the scientific and from the technical point of view. This book consists of two main parts, one dealing with walking robots, the second with climbing robots. The content is also grouped by theoretical research and applicative realization. Every chapter offers a considerable amount of interesting and useful information.

    Application of a mobile robot to spatial mapping of radioactive substances in indoor environment

    Get PDF
    Nuclear medicine requires the use of radioactive substances that can contaminate critical (dangerous or hazardous) areas where the presence of a human must be reduced or avoided. The present work uses a mobile robot, in a real environment and in 3D simulation, to develop a method for the spatial mapping of radioactive substances. The robot must visit all the waypoints arranged in a connectivity grid that represents the environment. The work presents the methodology to perform path planning, control, and estimation of the robot's location. For path planning, two methods are considered: a heuristic method based on observation of the problem, and an adaptation of the operations of a genetic algorithm. The control of the actuators is based on two methodologies, the first following points and the second following trajectories. To localise the real mobile robot, an extended Kalman filter was used to fuse an ultra-wide band sensor with odometry, thus estimating the position and orientation of the mobile agent. The results obtained were validated using a low-cost system with a laser range finder.
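
    The localisation step can be sketched as a textbook extended Kalman filter: odometry (linear and angular velocity on a unicycle model) drives the prediction, and a UWB position fix drives the update. The noise covariances and the direct-position measurement model are assumptions for illustration, not values from the work.

```python
# Hedged EKF sketch fusing odometry with a UWB position fix.
# State: (x, y, theta) of a unicycle-model robot.
import numpy as np

def ekf_step(x, P, v, w, z_uwb, dt=0.1):
    # --- predict with odometry (v = linear, w = angular velocity) ---
    th = x[2]
    x_pred = x + np.array([v * np.cos(th) * dt, v * np.sin(th) * dt, w * dt])
    F = np.array([[1.0, 0.0, -v * np.sin(th) * dt],
                  [0.0, 1.0,  v * np.cos(th) * dt],
                  [0.0, 0.0,  1.0]])            # motion-model Jacobian
    Q = np.diag([0.01, 0.01, 0.005])            # assumed process noise
    P_pred = F @ P @ F.T + Q
    # --- update with UWB (x, y) position measurement ---
    H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    R = np.diag([0.05, 0.05])                   # assumed UWB noise
    y = z_uwb - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)         # Kalman gain
    return x_pred + K @ y, (np.eye(3) - K @ H) @ P_pred

x, P = np.zeros(3), np.eye(3) * 0.1
x, P = ekf_step(x, P, v=0.5, w=0.1, z_uwb=np.array([0.06, 0.01]))
print(x)                                        # fused pose estimate
```
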

    Towards gestural understanding for intelligent robots

    Get PDF
    Fritsch JN. Towards gestural understanding for intelligent robots. Bielefeld: Universität Bielefeld; 2012. A strong driving force of scientific progress in the technical sciences is the quest for systems that assist humans in their daily life and make their life easier and more enjoyable. Nowadays, smartphones are probably the most typical instances of such systems. Another class of systems that is getting increasing attention are intelligent robots. Instead of offering a smartphone touch screen to select actions, these systems are intended to offer a more natural human-machine interface to their users. Out of the large range of actions performed by humans, gestures performed with the hands play a very important role, especially when humans interact with their direct surroundings, e.g., pointing to an object or manipulating it. Consequently, a robot has to understand such gestures to offer an intuitive interface. Gestural understanding is, therefore, a key capability on the way to intelligent robots. This book deals with vision-based approaches for gestural understanding. Over the past two decades, this has been an intensive field of research which has resulted in a variety of algorithms to analyze human hand motions. Following a categorization of different gesture types and a review of other sensing techniques, the design of vision systems that achieve hand gesture understanding for intelligent robots is analyzed. For each of the individual algorithmic steps – hand detection, hand tracking, and trajectory-based gesture recognition – a separate chapter introduces common techniques and algorithms and provides example methods. The resulting recognition algorithms consider gestures in isolation and are often not sufficient for interacting with a robot, which can only understand such gestures by incorporating context, e.g., what object was pointed at or manipulated. Going beyond purely trajectory-based gesture recognition by incorporating context is an important prerequisite for achieving gesture understanding and is addressed explicitly in a separate chapter of this book. Two types of context, user-provided context and situational context, are distinguished, and existing approaches that incorporate context for gestural understanding are reviewed. Example approaches for both context types provide a deeper algorithmic insight into this field of research. An overview of recent robots capable of gesture recognition and understanding summarizes the currently realized human-robot interaction quality. The approaches for gesture understanding covered in this book are manually designed, while humans learn to recognize gestures automatically while growing up. Promising research aimed at analyzing developmental learning in children in order to mimic this capability in technical systems is highlighted in the last chapter, completing this book, as this research direction may be highly influential for creating future gesture understanding systems.
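
    One common realisation of the trajectory-based recognition step surveyed in the book is template matching with dynamic time warping; the sketch below compares a tracked 2D hand trajectory against stored gesture templates. The templates and the observed trajectory are synthetic, and DTW is only one of several techniques the book covers.

```python
# Trajectory-based gesture recognition with dynamic time warping (DTW):
# classify an observed hand trajectory by its nearest stored template.
import numpy as np

def dtw(a, b):
    """DTW distance between two trajectories of shape (n, 2) and (m, 2)."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

t = np.linspace(0, 2 * np.pi, 50)
templates = {
    "circle": np.c_[np.cos(t), np.sin(t)],
    "line": np.c_[t, np.zeros_like(t)],
}
observed = np.c_[np.cos(t) + 0.05, np.sin(t)]   # a slightly shifted circle
best = min(templates, key=lambda k: dtw(observed, templates[k]))
print("recognized gesture:", best)
```
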