
    Fusion of IMU and Vision for Absolute Scale Estimation in Monocular SLAM

    The fusion of inertial and visual data is widely used to improve an object's pose estimation. However, this type of fusion is rarely used to estimate further unknowns in the visual framework. In this paper we present and compare two different approaches to estimating the unknown scale parameter in a monocular SLAM framework. Directly linked to the scale is the estimation of the object's absolute velocity and position in 3D. The first approach is a spline-fitting task adapted from Jung and Taylor; the second is an extended Kalman filter. Both methods were simulated offline on arbitrary camera paths to analyze their behavior and the quality of the resulting scale estimate. We then embedded an online multi-rate extended Kalman filter, together with an inertial sensor, in the Parallel Tracking and Mapping (PTAM) algorithm of Klein and Murray. In this inertial/monocular SLAM framework, we demonstrate real-time, robust, and fast-converging scale estimation. Our approach depends neither on known patterns in the vision part nor on a complex temporal synchronization between the visual and inertial sensors.
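The key idea behind both approaches is that metric acceleration from the IMU makes the monocular scale observable. A minimal 1-D sketch of such a scale-estimating EKF (the state layout, noise values, and simulated trajectory are illustrative assumptions, not the paper's actual filter):

```python
import numpy as np

def ekf_scale_estimation(dt=0.01, steps=2000, true_scale=2.0):
    """1-D sketch: fuse metric IMU acceleration with scaled visual position
    to recover the unknown monocular-SLAM scale factor (illustrative only)."""
    x = np.array([0.0, 1.0, 1.0])     # state: [position p, velocity v, scale lam]
    P = np.diag([0.01, 0.01, 1.0])    # initial covariance; the scale is very uncertain
    Q = np.diag([1e-6, 1e-6, 1e-8])   # process noise (scale is nearly constant)
    R = 1e-4                          # visual measurement noise variance
    F = np.array([[1.0, dt, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
    for k in range(steps):
        t = k * dt
        a = -np.sin(t)                # known metric acceleration from the IMU
        # prediction: constant-acceleration motion model, scale stays constant
        x[0] += x[1] * dt + 0.5 * a * dt**2
        x[1] += a * dt
        P = F @ P @ F.T + Q
        # monocular SLAM reports position only up to scale: z = p / lam
        z = np.sin(t + dt) / true_scale
        p, v, lam = x
        h = p / lam
        H = np.array([1.0 / lam, 0.0, -p / lam**2])  # Jacobian of z w.r.t. state
        S = H @ P @ H + R
        K = P @ H / S
        x = x + K * (z - h)
        P = (np.eye(3) - np.outer(K, H)) @ P
    return x[2]
```

With the true scale set to 2.0 and a unit-scale initial guess, the scale estimate converges toward the true value as the simulated camera accelerates, which is the observability argument behind the filter.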

    Control of Redundant Joint Structures Using Image Information During the Tracking of Non-Smooth Trajectories

    Visual information is increasingly being used in a great number of applications to guide joint structures. This paper proposes an image-based controller that allows the guidance of a joint structure when its number of degrees of freedom is greater than that required for the task at hand. In this case, the controller resolves the redundancy by combining two different tasks: the primary task performs the guidance using image information, and the secondary task determines the most suitable posture of the joint structure, resolving any joint redundancy with respect to the task performed in image space. The proposed guidance method also employs a smoothing Kalman filter, not only to detect the moment when abrupt changes occur in the tracked trajectory, but also to estimate and compensate for these changes. Furthermore, a direct visual control approach is proposed that integrates the visual information provided by this smoothing Kalman filter, which permits correct tracking when the measurements are noisy. All the contributions are integrated in an application that requires tracking the faces of children with Asperger syndrome.
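A primary/secondary task combination of this kind is commonly realized with a null-space projection of the secondary joint velocity; a minimal sketch under that assumption (the function name, gain, and dimensions are illustrative, not the paper's controller):

```python
import numpy as np

def redundant_ibvs_step(J, image_error, q_dot_secondary, gain=1.0):
    """One step of image-based control with null-space redundancy resolution.
    Primary task: drive the image-feature error to zero via the pseudoinverse.
    Secondary task: applied only in the null space of the image Jacobian,
    so it reshapes the posture without disturbing image-space convergence."""
    J_pinv = np.linalg.pinv(J)              # Moore-Penrose pseudoinverse
    N = np.eye(J.shape[1]) - J_pinv @ J     # projector onto the null space of J
    return gain * J_pinv @ image_error + N @ q_dot_secondary

# toy example: 3-DOF structure, 2-D image feature; the third joint
# does not affect the image, so the secondary task can use it freely
J = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
q_dot = redundant_ibvs_step(J, np.array([1.0, 2.0]), np.array([0.0, 0.0, 5.0]))
```

Because the secondary velocity is projected into the null space of `J`, the image-space error dynamics `J q_dot` are exactly those commanded by the primary task.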

    Design and modeling of a stair climber smart mobile robot (MSRox)


    A cognitive ego-vision system for interactive assistance

    With increasing computational power and decreasing size, computers are now wearable and mobile, and they are becoming companions in people's everyday lives. Personal digital assistants and mobile phones equipped with adequate software attract a lot of public interest, although the assistance they provide is little more than a mobile database of appointments, addresses, to-do lists, and photos. Compared to the assistance a human can provide, such systems can hardly be called real assistants. The motivation to construct more human-like assistance systems that develop a certain level of cognitive capability leads to the exploration of two central paradigms in this work. The first paradigm is termed cognitive vision systems. Such systems take human cognition as a design principle for their underlying concepts and develop learning and adaptation capabilities to be more flexible in their application. They are embodied, active, and situated. Second, the ego-vision paradigm is introduced as a very tight interaction scheme between a user and a computer system that especially eases close collaboration and assistance between the two. Ego-vision systems (EVS) take the user's (visual) perspective and integrate the human into the system's processing loop by means of shared perception and augmented reality. EVSs adopt techniques of cognitive vision to identify objects, interpret actions, and understand the user's visual perception, and they articulate their knowledge and interpretations by augmenting the user's own view. These two paradigms are studied as rather general concepts, but always with the goal of realizing more flexible assistance systems that closely collaborate with their users. This work provides three major contributions. First, a definition and explanation of ego-vision as a novel paradigm is given, and the benefits and challenges of this paradigm are discussed.
Second, a configuration of different approaches that permit an ego-vision system to perceive its environment and its user is presented, in terms of object and action recognition, head-gesture recognition, and mosaicing. These account for the specific challenges identified for ego-vision systems, whose perception capabilities are based on wearable sensors only. Finally, a visual active memory (VAM) is introduced as a flexible conceptual architecture for cognitive vision systems in general, and for assistance systems in particular. It adopts principles of human cognition to develop a representation for the information stored in this memory. So-called memory processes continuously analyze, modify, and extend the content of the VAM, and the functionality of the integrated system emerges from the coordinated interplay of these memory processes. An integrated assistance system applying the approaches and concepts outlined above is implemented on the basis of the visual active memory. The system architecture is discussed, and some exemplary processing paths in the system are presented. The system assists users in object-manipulation tasks and has reached a maturity level that allows user studies to be conducted. Quantitative results for the individual integrated memory processes are presented, as well as an assessment of the interactive system by means of these user studies.
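The VAM pattern of memory processes coordinating over a shared store can be illustrated with a toy sketch (every name and data layout here is hypothetical, not the thesis' actual architecture):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VisualActiveMemory:
    """Toy sketch of the VAM idea: a shared store whose content is
    continuously analyzed, modified, and extended by memory processes."""
    items: List[Dict] = field(default_factory=list)
    processes: List[Callable[[List[Dict]], List[Dict]]] = field(default_factory=list)

    def register(self, process):
        self.processes.append(process)

    def cycle(self):
        # one coordination cycle: each process reads the current content
        # and returns new or derived items that are merged back in
        for process in self.processes:
            self.items.extend(process(self.items))

# example: a 'recognition' process turns raw percepts into hypotheses
vam = VisualActiveMemory()
vam.items.append({"type": "percept", "label": "cup"})
vam.register(lambda items: [{"type": "hypothesis", "object": i["label"]}
                            for i in items if i["type"] == "percept"])
vam.cycle()
```

The emergent behavior comes from the interplay of such processes over the shared memory rather than from a fixed processing pipeline, which is the design point the thesis makes.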

    ROBOT PROGRAMMING AND TRAJECTORY PLANNING USING AUGMENTED REALITY

    Doctor of Philosophy (Ph.D.) thesis

    Intelligent Fastening Tool Tracking Systems Using Hybrid Remote Sensing Technologies

    This research focuses on the development of intelligent fastening-tool tracking systems for the automotive industry to identify fastened bolts. To accomplish this task, the position of the tool tip must be identified, because the tool tip coincides with the head of the bolt while the tool fastens it. The proposed systems use an inertial measurement unit (IMU) and one additional sensor to track the position and orientation of the tool tip. To minimize the position and orientation errors, the IMU must be calibrated as accurately as possible; this research presents a novel triaxial accelerometer calibration technique that offers high accuracy, supported by simulation and experimental results. To identify a fastening action, an expert system is developed based on the sensor measurements. When a fastening action is identified, the system identifies the fastened bolt using an expert system based on the position and orientation of the tool tip and of the bolt. Since each fastening procedure has different accuracy requirements, three different systems are proposed. The first system uses a triaxial magnetometer and an IMU to identify the fastened bolt. It calculates position and orientation from the IMU, and an expert system identifies the initial position, the stationary state, and the fastened bolt. When the tool fastens a bolt, the expert system detects the fastening action from triaxial accelerometer and magnetometer measurements. Once the fastening action is detected, the system corrects the velocity and position errors using a zero-velocity update (ZUPT). Using the corrected tool-tip position and orientation, the system identifies the fastened bolt, and the fastened bolt's position is then used to correct the position of the IMU.
When the tool is stationary, the system corrects the linear-velocity error and thereby reduces the position error. The experimental results demonstrate that the proposed system can identify fastened bolts as long as the bolts have different angles or are not closely spaced. This low-cost system does not require a line of sight, but has limited position accuracy. The second system uses an intelligent scheme that combines Kalman filters (KFs) with a fuzzy expert system to track the tip of the fastening tool and identify the fastened bolt. It employs one IMU and one encoder-based position sensor to determine the orientation and the center-of-mass location of the tool. With the KF alone, the orientation error grows over time due to the integration step, so a fuzzy expert system is developed to correct the tilt-angle and orientation errors. When the tool fastens a bolt, the system identifies it by applying the fuzzy expert system; the 3D orientation error of the tool is then corrected using the location and orientation of the identified bolt together with the position-sensor outputs. This orientation-correction method improves the reliability of the tool-tip location estimate. The fastening-tool tracking system was tested experimentally in a lab environment, and the results indicate that it can successfully identify fastened bolts. This system has a low computational cost and provides good position and orientation accuracy, making it suitable for most applications. The third system presents a novel position/orientation tracking methodology that hybridizes one position sensor and one factory-calibrated IMU through a combination of a particle filter (PF) and a KF; in addition, an expert system corrects the angular-velocity measurement errors.
The experimental results indicate that the orientation errors of this method are significantly smaller than those obtained with an EKF approach, and the improved orientation estimation leads to better position accuracy. The results also show that the estimated orientation converges to the correct orientation even when the initial orientation is completely unknown. This new method was applied to the fastening-tool tracking system. It provides good orientation accuracy even when the gyroscopes (hereafter gyros) exhibit a small error, and since the orientation error does not grow over time, the tool-tip position drift is bounded, so the system can be applied where bolts are closely spaced. A comparison of the position errors of the second and third systems is presented in this thesis; it indicates that the third system is more accurate because its orientation error does not increase over time. The advantages and limitations of all three systems are compared, and possible future work on fastening-tool tracking is described, along with further applications of the KF/PF combination method.
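The ZUPT step used by the first system can be sketched as a standard Kalman pseudo-measurement of zero velocity (the [p, v] state layout and noise values are illustrative assumptions, not the thesis' actual filter):

```python
import numpy as np

def zupt_correct(x, P, R_zupt=1e-4):
    """Zero-velocity update: when the tool is detected stationary, the
    pseudo-measurement v = 0 corrects the velocity of a [p, v] strapdown
    state; position is pulled back through any p-v cross-covariance."""
    H = np.array([0.0, 1.0])          # we observe velocity only
    S = H @ P @ H + R_zupt            # innovation variance
    K = P @ H / S                     # Kalman gain for the scalar measurement
    x = x + K * (0.0 - H @ x)         # innovation against the zero measurement
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

# example: a state that has drifted to 0.5 m/s while actually stationary
x_corr, P_corr = zupt_correct(np.array([1.0, 0.5]), np.diag([0.1, 0.1]))
```

In this diagonal-covariance example only the velocity is corrected; in a real strapdown filter the accumulated position-velocity cross-covariance lets the same update also rein in the position drift, which is why ZUPT bounds the tool-tip error between fastenings.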

    A Modular Optical Tracking System for Medical Applications: An Integrated Hardware/Software Dataflow Architecture and Application Framework

    This thesis describes the development of a modular optical tracking system tailored to the specific requirements of the medical-engineering domain. The presented applications of the system range from capturing user interaction in various medical simulators (e.g., for ophthalmic surgery, ophthalmoscopy, and neurosurgery) to position tracking of a hand-held surgical robot. In contrast to available commercial tracking systems with their narrowly defined application areas, a universally designed modular construction kit is presented that can be adapted to the specific requirements of each application with little development effort (including very small geometries, deformable objects, the use of original instruments, and limited resource availability on the simulator PC). To this end, a modular system concept is developed that abstracts from the specialized data processing of common tracking systems and builds on a generalized, modular system architecture supporting all kinds of markers with three degrees of freedom. In addition to the widespread infrared-based signaling techniques, passive color markers for object signaling are also supported. Implementing image-processing tasks in specialized hardware (FPGAs) directly on the camera data stream enables early data reduction and thus low latencies. The development process for novel tracking solutions is simplified by the tight integration of the hardware and software modules in a unified, end-to-end dataflow architecture that can be flexibly adapted to the task at hand. Finally, an extensible graphical front end supports operation and configuration and also allows entire systems to be simulated during development.