63 research outputs found

    Quaternionic Attitude Estimation with Inertial Measuring Unit for Robotic and Human Body Motion Tracking using Sequential Monte Carlo Methods with Hyper-Dimensional Spherical Distributions

    This dissertation examines inertial tracking technology for robotics and human-tracking applications. It is multi-disciplinary research that builds on embedded systems engineering, Bayesian estimation theory, software engineering, directional statistics, and biomedical engineering. Orientation-tracking representations and the fundamentals of attitude estimation are presented briefly to outline some of the issues in each approach. In addition, a discussion of inertial tracking sensors gives insight into the basic science and limitations of each sensing component. An initial experiment was conducted with an existing inertial tracker to study the feasibility of using this technology for human motion tracking, and several areas of improvement were identified from its results and analyses. Because the performance of the system relies on multiple factors from different disciplines, the only viable solution is to optimize the performance in each area; hence, a top-down approach was used in developing the system. The implementation of the new generation of hardware design and firmware structure is presented in this dissertation. Calibration of the system, one of the most important factors in minimizing estimation error, is also discussed in detail. A practical approach using sequential Monte Carlo methods with hyper-dimensional spherical distributions is taken to develop an algorithm for recursive estimation with quaternions. A simulation study provides insight into the capability of the new algorithms, and extensive testing with a robotic manipulator and free-hand human motion demonstrates the improvements of the new generation of inertial tracker and the accuracy and stability of the algorithm.
In addition, the tracking unit is used to demonstrate its potential in multiple biomedical applications, including kinematics tracking and diagnostic instrumentation. The inertial tracking technologies presented in this dissertation are aimed specifically at human motion tracking, with the goal of integrating this technology into the next generation of medical diagnostic systems.
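The dissertation's core estimator, a sequential Monte Carlo (particle) filter over unit quaternions, can be illustrated with a minimal sketch. The hyper-dimensional spherical distributions it actually uses (e.g., Bingham or von Mises-Fisher) are replaced here with simple Gaussian process noise and an accelerometer gravity-direction likelihood; all function names and noise parameters are illustrative assumptions, not the dissertation's implementation.

```python
import numpy as np

def quat_mul(q, r):
    # Hamilton product of two quaternions stored as [w, x, y, z].
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def small_rotation(dtheta):
    # First-order unit quaternion for a small rotation vector (rad).
    half = 0.5 * np.asarray(dtheta)
    q = np.concatenate(([1.0], half))
    return q / np.linalg.norm(q)

def body_gravity(q):
    # World z-axis expressed in the body frame (third row of R(q)).
    w, x, y, z = q
    return np.array([2*(x*z - w*y), 2*(y*z + w*x), 1 - 2*(x*x + y*y)])

def pf_step(particles, weights, gyro, accel, dt, rng,
            gyro_noise=0.02, accel_sigma=0.3):
    n = len(particles)
    # Propagate: integrate the gyro with per-particle process noise.
    for i in range(n):
        omega = gyro + gyro_noise * rng.standard_normal(3)
        particles[i] = quat_mul(particles[i], small_rotation(omega * dt))
        particles[i] /= np.linalg.norm(particles[i])
    # Weight: when quasi-static, the accelerometer direction should match
    # the gravity direction predicted by each particle's attitude.
    a_hat = accel / np.linalg.norm(accel)
    for i in range(n):
        err = a_hat - body_gravity(particles[i])
        weights[i] *= np.exp(-0.5 * (err @ err) / accel_sigma**2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / (weights @ weights) < n / 2:
        particles[:] = particles[rng.choice(n, size=n, p=weights)]
        weights[:] = 1.0 / n
    return particles, weights
```

In the dissertation the process and measurement models are built on hyperspherical distributions rather than the Gaussian noise used in this sketch.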

    Multimodal Noncontact Tracking of Surgical Instruments

    For many procedures, open surgery is being replaced with minimally invasive surgical (MIS) techniques. The advantages of MIS include reduced operative trauma and fewer complications, leading to faster patient recovery, better cosmetic results, and shorter hospital stays. As the demand for MIS procedures increases, effective surgical training tools must be developed to improve procedure efficiency and patient safety. Motion tracking of laparoscopic instruments can provide objective skills assessment for novices and experienced users. The most common approaches to noncontact motion capture are optical and electromagnetic (EM) tracking systems, though each has operational limitations: optical trackers are prone to occlusion, and the performance of EM trackers degrades in the presence of magnetic and ferromagnetic material. The cost of these systems also limits their availability for surgical training and clinical environments. This thesis describes the development and validation of a novel, noncontact laparoscopic tracking system as an inexpensive alternative to current technology. The system is based on the fusion of inertial, magnetic, and distance sensing to generate real-time, 6-DOF pose data. Orientation is estimated using a Kalman-filtered attitude-heading reference system (AHRS), and the restricted motion at the trocar provides a datum from which position information can be recovered. The Inertial and Range-Enhanced Surgical (IRES) Tracker was prototyped, then validated using a MIS training box and by comparison to an EM tracking system. Testing showed performance similar to an EM tracker, with position error as low as 1.25 mm RMS and orientation error below 0.58 degrees RMS along each axis. The IRES tracker also displayed greater precision and superior magnetic interference rejection.
At a fraction of the cost of current laparoscopic tracking methods, the IRES tracking system would provide an excellent alternative for use in surgical training and skills assessment.
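The orientation core of such a tracker is an AHRS that fuses fast-but-drifting gyroscope rates with noisy-but-drift-free gravity references. The thesis uses a Kalman filter; the simpler complementary filter below illustrates the same fusion principle for roll and pitch only. The blending constant `alpha` and the function name are assumptions for illustration, not the IRES implementation.

```python
import math

def complementary_update(roll, pitch, gyro_x, gyro_y, ax, ay, az,
                         dt, alpha=0.98):
    # Integrate gyro rates: accurate over short intervals, but drifts.
    roll_g = roll + gyro_x * dt
    pitch_g = pitch + gyro_y * dt
    # Tilt from the accelerometer: noisy, but drift-free when quasi-static.
    roll_a = math.atan2(ay, az)
    pitch_a = math.atan2(-ax, math.sqrt(ay * ay + az * az))
    # Blend: high-pass the gyro path, low-pass the accelerometer path.
    return (alpha * roll_g + (1 - alpha) * roll_a,
            alpha * pitch_g + (1 - alpha) * pitch_a)
```

A Kalman-filtered AHRS replaces the fixed gain `alpha` with a gain computed from modeled process and measurement noise, and typically adds a magnetometer for heading.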

    Directional Estimation for Robotic Beating Heart Surgery

    In robotic beating heart surgery, a remote-controlled robot can be used to carry out the operation while automatically canceling out the heart motion. The surgeon controlling the robot is shown a stabilized view of the heart. First, we consider the use of directional statistics for estimation of the phase of the heartbeat. Second, we deal with reconstruction of a moving and deformable surface. Third, we address the question of obtaining a stabilized image of the heart.
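Why directional statistics for heartbeat phase: phase lives on a circle, so a naive arithmetic mean fails at the 0/2π wrap-around. A minimal sketch of the standard circular mean, which averages unit phasors instead of raw angles (a textbook estimator, not necessarily the authors' exact method):

```python
import cmath
import math

def circular_mean(phases):
    # Sum the unit phasors of the angles (rad) and take the resulting angle.
    # Unlike the arithmetic mean, this handles the 0/2*pi wrap correctly.
    s = sum(cmath.exp(1j * p) for p in phases)
    return math.atan2(s.imag, s.real)
```

For phases 0.1 and 2π − 0.1, the arithmetic mean is near π (exactly wrong), while the circular mean is near the true value 0.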

    Estimating the orientation of a game controller from inertial and magnetic measurements

    Estimating the orientation of a rigid body moving in space is an indispensable component of navigation technology, e.g., military missile systems, civil aircraft, surgical navigation systems, robot mapping, autonomous vehicles, and game controllers. It has now come directly into some aspects of our lives, notably in game controllers such as the Wiimote. In this vein, this research focuses on the development of new algorithms to estimate rigid-body orientation from common, inexpensive inertial and magnetic sensors. As inertial sensors measure the time derivatives of the orientation, it is natural to start with the estimation of the angular velocity. More precisely, we present a novel way of determining the angular velocity of a rigid body from accelerometer measurements. This method finds application in crashworthiness and motion analysis in sports, for example, where impacts forbid the use of mechanical gyroscopes. Secondly, in an attempt to estimate the orientation in a simplified setting, we propose a novel method of estimating the orientation of a rigid body in the vertical plane from point-acceleration measurements, by discerning their gravitational and inertial components. Thirdly, estimating the orientation in the vertical plane is surely not enough, because most applications take place in three dimensions. For estimating rotations in space, we first present the game controller design, in which all the necessary sensors are installed. Then, these sensors are calibrated to determine their scale factors and offsets so as to improve their performance.
Next, we develop a novel method of estimating the orientation of a rigid body moving in space from inertial sensors, again by discerning the gravitational and inertial components of the acceleration. Finally, in order to imitate the Wii game controller, we create a simple user interface in which a virtual representation of the game controller follows every orientation of the real game controller (virtual reality). The user interface shows that the proposed algorithm is sufficiently accurate to give the user transparent control of the orientation of the virtual game controller.
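The recurring idea of "discerning the gravitational and inertial components" can be sketched as follows: given an attitude estimate, the known gravity vector is removed from the specific force the accelerometer measures, leaving the body's own acceleration. The frame convention (world z-axis up) and function names below are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

G = 9.81  # assumed local gravity magnitude, m/s^2

def inertial_component(accel_body, R_world_from_body):
    # An accelerometer measures specific force f = a - g in body axes,
    # so a stationary, level sensor reads +G on its z-axis. Rotating f
    # into the world frame and adding g back recovers the body's own
    # (inertial) acceleration.
    g_world = np.array([0.0, 0.0, -G])
    return R_world_from_body @ accel_body + g_world
```

A stationary, level sensor reading `[0, 0, 9.81]` therefore yields zero inertial acceleration, as expected.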

    Integration of Local Positioning System & Strapdown Inertial Navigation System for Hand-Held Tool Tracking

    This research concerns the development of a smart sensory system for tracking a hand-held moving device to millimeter accuracy, for slow or nearly static applications over extended periods of time. Since different operators in different applications may use the system, the proposed design should provide accurate position, orientation, and velocity of the object without relying on knowledge of its operation and environment, based purely on the motion that the object experiences. This thesis proposes the integration of a low-cost Local Positioning System (LPS) and a low-cost StrapDown Inertial Navigation System (SDINS), combined with a modified extended Kalman filter (EKF), to determine the 3D position and 3D orientation of a hand-held tool within the required accuracy. A hybrid LPS/SDINS combines and complements the best features of two different navigation systems, providing a unique solution to track and localize a moving object more precisely. The SDINS provides continuous estimates of all components of a motion, but loses accuracy over time because of inertial sensor drift and inherent noise. The LPS can obtain absolute position and velocity independent of operation time; however, it is less robust, computationally expensive, and has a low measurement rate. This research consists of three major parts: developing a multi-camera vision system as a reliable and cost-effective LPS, developing an SDINS for a hand-held tool, and developing a Kalman filter for sensor fusion. Developing the multi-camera vision system includes mounting the cameras around the workspace, calibrating the cameras, capturing images, applying image-processing algorithms and feature extraction to every frame from each camera, and estimating 3D position from 2D images. In this research, a specific configuration for setting up the multi-camera vision system is proposed to reduce the loss of line of sight as much as possible.
The number of cameras, the position of the cameras with respect to each other, and the position and orientation of the cameras with respect to the center of the world coordinate system are the crucial characteristics of this configuration. The proposed multi-camera vision system is implemented with four CCD cameras fixed in the navigation frame, their lenses placed on a semicircle. All cameras are connected to a PC through a frame grabber, which includes four parallel video channels and can capture images from the four cameras simultaneously. As a result of this arrangement, a wide circular field of view is achieved with less loss of line of sight. However, calibration is more difficult than for a monocular or stereo vision system: it includes precise camera modeling, single-camera calibration for each camera, stereo calibration for each pair of neighboring cameras, defining a unique world coordinate system, and finding the transformation from each camera frame to the world coordinate system. Aside from the calibration procedure, digital image processing must be applied to the images captured by all four cameras in order to localize the tool tip. In this research, the digital image processing includes image enhancement, edge detection, boundary detection, and morphological operations. After detecting the tool tip in each image captured by each camera, a triangulation procedure and an optimization algorithm are applied to find its 3D position with respect to the known navigation frame. In the SDINS, inertial sensors are mounted rigidly and directly on the body of the tracked object, and the inertial measurements are transformed computationally to the known navigation frame. Usually, three gyros and three accelerometers, or a three-axis gyro and a three-axis accelerometer, are used to implement the SDINS.
The inertial sensors are typically integrated in an inertial measurement unit (IMU). IMUs commonly suffer from bias drift, scale-factor error owing to non-linearity and temperature changes, and misalignment as a result of minor manufacturing defects. Since all of these errors lead to SDINS drift in position and orientation, a precise calibration procedure is required to compensate for them. The precision of the SDINS depends not only on the accuracy of the calibration parameters but also on common motion-dependent errors, that is, errors caused by vibration, coning motion, sculling, and rotational motion. Since inertial sensors provide the full range of heading changes, turn rates, and applied forces that the object experiences along its movement, accurate 3D kinematics equations are developed to compensate for the common motion-dependent errors. Therefore, obtaining complete knowledge of the motion and orientation of the tool tip involves significant computational complexity and challenges relating to the resolution of specific forces, attitude computation, gravity compensation, and corrections for common motion-dependent errors. The Kalman filter is a powerful method for improving output estimation and reducing the effect of sensor drift. In this research, a modified EKF is proposed to reduce the position-estimation error. The multi-camera vision system data, in cooperation with the modified EKF, assist the SDINS in dealing with the drift problem. This configuration guarantees real-time position and orientation tracking of the instrument. As a result of the proposed Kalman filter, the effect of the gravitational force in the state-space model is removed, and the error resulting from an inaccurate gravitational force is eliminated. In addition, the resulting position is smooth and ripple-free.
The experimental results of the hybrid vision/SDINS design show that the position error of the tool tip in all directions is about one millimeter RMS. Even if the sampling rate of the vision system decreases from 20 fps to 5 fps, the errors remain acceptable for many applications.
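The LPS/SDINS fusion can be illustrated along a single axis: the filter dead-reckons position and velocity from the accelerometer at a high rate (the SDINS role) and corrects with occasional absolute camera fixes (the LPS role). This constant-velocity Kalman filter is a deliberately simplified stand-in for the thesis's modified EKF, and the noise parameters are illustrative assumptions.

```python
import numpy as np

def kf_predict(x, P, accel, dt, q=0.05):
    # SDINS role: dead-reckon state x = [position, velocity] from the
    # accelerometer. Errors grow without bound between corrections.
    F = np.array([[1.0, dt], [0.0, 1.0]])
    B = np.array([0.5 * dt * dt, dt])
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],
                      [dt**2 / 2, dt]])
    return F @ x + B * accel, F @ P @ F.T + Q

def kf_update(x, P, z_pos, r=1e-4):
    # LPS role: an absolute (but low-rate) camera position fix bounds
    # the accumulated inertial drift.
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + r
    K = P @ H.T / S
    x = x + (K * (z_pos - x[0])).ravel()
    P = (np.eye(2) - K @ H) @ P
    return x, P
```

In the thesis the state additionally carries 3D orientation, and the modified EKF removes the gravitational force from the state-space model; the scalar model above only shows the predict/correct division of labor.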

    Fusion of wearable and visual sensors for human motion analysis

    Human motion analysis is concerned with the study of human activity recognition, human motion tracking, and the analysis of human biomechanics. Human motion analysis has applications within areas of entertainment, sports, and healthcare. For example, activity recognition, which aims to understand and identify different tasks from motion can be applied to create records of staff activity in the operating theatre at a hospital; motion tracking is already employed in some games to provide an improved user interaction experience and can be used to study how medical staff interact in the operating theatre; and human biomechanics, which is the study of the structure and function of the human body, can be used to better understand athlete performance, pathologies in certain patients, and assess the surgical skill of medical staff. As health services strive to improve the quality of patient care and meet the growing demands required to care for expanding populations around the world, solutions that can improve patient care, diagnosis of pathology, and the monitoring and training of medical staff are necessary. Surgical workflow analysis, for example, aims to assess and optimise surgical protocols in the operating theatre by evaluating the tasks that staff perform and measurable outcomes. Human motion analysis methods can be used to quantify the activities and performance of staff for surgical workflow analysis; however, a number of challenges must be overcome before routine motion capture of staff in an operating theatre becomes feasible. Current commercial human motion capture technologies have demonstrated that they are capable of acquiring human movement with sub-centimetre accuracy; however, the complicated setup procedures, size, and embodiment of current systems make them cumbersome and unsuited for routine deployment within an operating theatre. 
Recent advances in pervasive sensing have resulted in camera systems that can detect and analyse human motion, and small wearable sensors that can measure a variety of parameters from the human body, such as heart rate, fatigue, balance, and motion. The work in this thesis investigates different methods that enable human motion to be more easily, reliably, and accurately captured through ambient and wearable sensor technologies, to address some of the main challenges that have limited the use of motion capture technologies in certain areas of study. Sensor embodiment and the accuracy of activity recognition are among the challenges that affect the adoption of wearable devices for monitoring human activity. Using a single inertial sensor, which captures the movement of the subject, a variety of motion characteristics can be measured. For patients, wearable inertial sensors can be used in long-term activity monitoring to better understand the condition of the patient and potentially identify deviations from normal activity. For medical staff, inertial sensors can be used to capture tasks being performed for automated workflow analysis, which is useful for staff training, optimisation of existing processes, and early indications of complications within clinical procedures. Feature extraction and classification methods are introduced in this thesis that demonstrate motion classification accuracies of over 90% for five different classes of walking motion using a single ear-worn sensor. To capture human body posture, current capture systems generally require a large number of sensors or reflective reference markers to be worn on the body, which presents a challenge for many applications, such as monitoring human motion in the operating theatre, as they may restrict natural movements and make setup complex and time consuming. To address this, a method is proposed that uses regression to estimate motion from a reduced subset of wearable inertial sensors.
This method is demonstrated using three sensors on the upper body and is shown to achieve mean estimation accuracies as low as 1.6 cm, 1.1 cm, and 1.4 cm for the hand, elbow, and shoulders, respectively, when compared with a gold-standard optical motion capture system. Using a subset of three sensors, mean errors for hand position reach 15.5 cm. Unlike marker-based optical motion capture systems, which rely on vision and reflective reference markers, wearable inertial sensors are prone to inaccuracies resulting from an accumulation of inaccurate measurements, which becomes increasingly prevalent over time. Two methods are introduced in this thesis that aim to solve this challenge using visual rectification of the assumed state of the subject. Using a ceiling-mounted camera, a human detection and motion tracking method is introduced that improves the average mean tracking accuracy to within 5.8 cm in a 3 m × 5 m laboratory. To improve the accuracy of capturing the position of body parts and posture for human biomechanics, a camera is also utilised to track body part movements and provide visual rectification of human pose estimates from inertial sensing. For most subjects, deviations of less than 10% from the ground truth are achieved for hand positions, which exhibit the greatest error, and the occurrence of other common sources of visual and inertial estimation error, such as measurement noise, visual occlusion, and sensor miscalibration, is shown to be reduced.
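The sensor-reduction idea, mapping features from a small set of IMUs to body-part positions, can be sketched with the simplest possible regressor: an ordinary least-squares linear map with a bias term. The thesis's actual regression method is not specified in this abstract; the functions below are an assumed baseline for illustration only.

```python
import numpy as np

def fit_pose_regressor(X, Y):
    # Least-squares linear map from IMU features (n x d) to target
    # marker positions (n x 3), with a bias column appended to X.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)
    return W

def predict_pose(W, x):
    # Apply the fitted map to a single feature vector.
    return np.append(x, 1.0) @ W
```

Training pairs (X, Y) would come from wearing the reduced sensor set alongside a full optical capture system; at run time only the few IMUs are needed.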

    Wearable Sensing for Solid Biomechanics: A Review

    Understanding the solid biomechanics of the human body is important to the study of the structure and function of the body, with a range of applications in health care, sport, well-being, and workflow analysis. Conventional laboratory-based biomechanical analysis systems and observation-based tests are designed only to capture brief snapshots of the mechanics of movement. With recent developments in wearable sensing technologies, biomechanical analysis can be conducted in less-constrained environments, allowing continuous monitoring and analysis beyond laboratory settings. In this paper, we review current research in wearable sensing technologies for biomechanical analysis, focusing on sensing and analytics that enable continuous, long-term monitoring of kinematics and kinetics in a free-living environment. The main technical challenges that can affect the accuracy and robustness of existing methods, including measurement drift, external interference, nonlinear sensor properties, sensor placement, and muscle variation, are described, along with different methods for reducing the impact of these error sources. Recent developments in motion estimation for kinematics, mobile force sensing for kinetics, sensor reduction for electromyography, and the future direction of sensing for biomechanics are also discussed.

    A Low Complexity 6DoF Magnetic Tracking System For Biomedical Applications

    The abstract is provided in the attachment.
