
    Estimating Epipolar Geometry With The Use of a Camera Mounted Orientation Sensor

    Get PDF
    Context: Image processing and computer vision are rapidly becoming commonplace, and the amount of information about a scene, such as its 3D geometry, that can be obtained from one or more images is steadily increasing thanks to rising sensor resolutions, the wide availability of imaging devices, and an active research community. In parallel, advances in hardware design and manufacturing allow devices such as gyroscopes, accelerometers, magnetometers, and GPS receivers to be included alongside imaging devices at the consumer level. Aims: This work investigates the use of orientation sensors in computer vision as sources of data to aid image processing and the determination of a scene's geometry, in particular the epipolar geometry of a pair of images, and devises a hybrid methodology from two sets of previous works in order to exploit the information available from orientation sensors alongside data gathered from image processing techniques. Method: A readily available consumer-level orientation sensor was used alongside a digital camera to capture images of a set of scenes and record the orientation of the camera. The fundamental matrix of each image pair was calculated using a variety of techniques, both incorporating data from the orientation sensor and excluding it. Results: Some methodologies could not produce an acceptable fundamental matrix for certain image pairs. A method from the literature that used an orientation sensor always produced a result; however, in cases where the hybrid or purely computer vision methods also succeeded, that result was the least accurate. Conclusion: The results show that capturing orientation sensor data alongside an imaging device can improve both the accuracy and reliability of calculations of the scene's geometry; however, noise from the orientation sensor can limit this accuracy, and further research is needed to determine the magnitude of this problem and methods of mitigating it.
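
    As a concrete illustration of the image-only baseline that the hybrid method is compared against, the sketch below estimates the fundamental matrix from feature matches with RANSAC. It is a minimal sketch in Python assuming OpenCV and NumPy; the orientation-sensor-aided and hybrid variants described above are not shown, and the feature and matcher choices (ORB, brute-force Hamming) are illustrative.

    import cv2
    import numpy as np

    def estimate_fundamental(img1_gray, img2_gray):
        """Match ORB features between two views and fit F with RANSAC."""
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1_gray, None)
        kp2, des2 = orb.detectAndCompute(img2_gray, None)

        # Brute-force Hamming matching suits binary ORB descriptors.
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)

        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # RANSAC rejects mismatches; mask flags the inlier correspondences.
        F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC,
                                         ransacReprojThreshold=1.0,
                                         confidence=0.99)
        return F, mask

    Each inlier pair should then satisfy the epipolar constraint x2^T F x1 = 0 up to noise, which is the quantity the accuracy comparisons above are built on.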

    A Survey of Positioning Systems Using Visible LED Lights

    Get PDF
    Since the Global Positioning System (GPS) cannot provide satisfactory performance in indoor environments, indoor positioning technology, which uses indoor wireless signals instead of GPS signals, has grown rapidly in recent years. Meanwhile, visible light communication (VLC) using light devices such as light-emitting diodes (LEDs) is considered a promising candidate in heterogeneous wireless networks that may collaborate with radio frequency (RF) wireless networks. In particular, light fidelity (LiFi) has great potential for deployment in future indoor environments because of its high throughput and security advantages. This paper provides a comprehensive study of a novel positioning technology based on visible white LED lights, which has attracted much attention from both academia and industry. The essential characteristics and principles of such systems are discussed in depth, and relevant positioning algorithms and designs are classified and elaborated. The paper undertakes a thorough investigation into current LED-based indoor positioning systems and compares their performance across many aspects, such as test environment, accuracy, and cost. It presents indoor hybrid positioning systems combining VLC with other systems (e.g., inertial sensors and RF systems). We also review and classify outdoor VLC positioning applications for the first time. Finally, the paper surveys major advances as well as open issues, challenges, and future research directions in VLC positioning systems. Peer reviewed.
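
    As a hedged illustration of one algorithm class covered by the survey, the sketch below implements received-signal-strength (RSS) positioning under the standard line-of-sight Lambertian LED channel model: the receiver inverts the channel gain to obtain a distance to each LED and then trilaterates. The transmit power, detector area, and Lambertian order are illustrative assumptions, not values from the survey.

    import numpy as np

    def rss_to_distance(p_rx, p_tx, h, m=1.0, area=1e-4):
        # Line-of-sight Lambertian channel for an upward-facing receiver at
        # vertical offset h below the LED: H = (m+1)*A*h^(m+1) / (2*pi*d^(m+3)),
        # so the distance follows from the measured power p_rx = p_tx * H.
        return ((m + 1) * area * p_tx * h ** (m + 1)
                / (2 * np.pi * p_rx)) ** (1.0 / (m + 3))

    def trilaterate(led_xy, dists, h):
        # Least-squares 2D receiver position from >= 3 ceiling LEDs, after
        # reducing slant ranges to squared horizontal ranges.
        r2 = np.asarray(dists) ** 2 - h ** 2
        (x0, y0), r0 = led_xy[0], r2[0]
        A, b = [], []
        for (xi, yi), ri in zip(led_xy[1:], r2[1:]):
            A.append([2 * (xi - x0), 2 * (yi - y0)])
            b.append(r0 - ri + xi ** 2 + yi ** 2 - x0 ** 2 - y0 ** 2)
        pos, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
        return pos  # estimated (x, y) on the receiver plane

    Linearizing the range equations against the first anchor turns trilateration into an ordinary least-squares problem, which degrades gracefully when more than three LEDs are visible.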

    CoBe -- Coded Beacons for Localization, Object Tracking, and SLAM Augmentation

    Full text link
    This paper presents a novel beacon light coding protocol that enables fast and accurate identification of the beacons in an image. The protocol is provably robust to a predefined set of detection and decoding errors and does not require any synchronization between the beacons themselves and the optical sensor. A detailed guide is then given for developing an optical tracking and localization system based on the suggested protocol and readily available hardware. Such a system operates either as a standalone system for recovering the six degrees of freedom of fast-moving objects or integrated with existing SLAM pipelines, providing them with error-free and easily identifiable landmarks. Based on this guide, we implemented a low-cost positional tracking system that runs in real time on an IoT board. We evaluate our system's accuracy and compare it to other popular methods that use the same optical hardware, in experiments where the ground truth is known. A companion video containing multiple real-world experiments demonstrates the accuracy, speed, and applicability of the proposed system in a wide range of environments and real-world tasks. Open-source code is provided to encourage further development of low-cost localization systems integrating the suggested technology at their navigation core.
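
    The CoBe protocol itself is specified in the paper; purely to illustrate the general idea of synchronization-free optical identification, the toy sketch below assigns each beacon a cyclic binary blink codeword whose cyclic shifts stay far apart in Hamming distance, so an unsynchronized receiver can decode an ID from any window of frames while tolerating one flipped sample. The code length and distance here are illustrative assumptions, not CoBe's actual design.

    from itertools import product

    CODE_LEN = 8   # one ID bit sampled per camera frame
    MIN_DIST = 3   # Hamming separation between shifts of different codewords

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    def cyclic_shifts(word):
        return [word[i:] + word[:i] for i in range(len(word))]

    def build_codebook(n_beacons):
        """Greedily pick codewords whose cyclic shifts stay MIN_DIST apart,
        so IDs are decodable from any window offset without a shared clock."""
        chosen, pool = [], []
        for bits in product((0, 1), repeat=CODE_LEN):
            word = list(bits)
            if not any(word) or all(word):
                continue  # skip always-off / always-on blink patterns
            if all(hamming(s, t) >= MIN_DIST
                   for s in cyclic_shifts(word) for t in pool):
                chosen.append(word)
                pool.extend(cyclic_shifts(word))
                if len(chosen) == n_beacons:
                    break
        return chosen

    def decode(window, codebook):
        """Return the beacon ID whose nearest cyclic shift best explains the
        on/off samples, or None if nothing is close enough."""
        best_id, best_d = None, (MIN_DIST - 1) // 2 + 1  # tolerate 1 flip
        for beacon_id, word in enumerate(codebook):
            for shift in cyclic_shifts(word):
                d = hamming(window, shift)
                if d < best_d:
                    best_id, best_d = beacon_id, d
        return best_id

    The decoder samples each tracked blob's on/off state once per frame over CODE_LEN frames and matches the window against every cyclic shift, which is what removes the need for beacon-to-sensor synchronization.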

    Observability-aware Self-Calibration of Visual and Inertial Sensors for Ego-Motion Estimation

    Full text link
    External effects such as shocks and temperature variations affect the calibration of visual-inertial sensor systems, so they cannot fully rely on factory calibrations. Re-calibration performed on short user-collected datasets may yield poor performance, since the observability of certain parameters is highly dependent on the motion. Additionally, on resource-constrained systems (e.g., mobile phones), full-batch approaches over longer sessions quickly become prohibitively expensive. In this paper, we approach the self-calibration problem by introducing information-theoretic metrics to assess the information content of trajectory segments, thus allowing the most informative parts of a dataset to be selected for calibration purposes. With this approach, we are able to build compact calibration datasets either (a) by selecting segments from a long session with limited exciting motion or (b) from multiple short sessions where a single session does not necessarily excite all modes sufficiently. Real-world experiments in four different environments show that the proposed method achieves performance comparable to a batch calibration approach, yet at a constant computational complexity that is independent of the duration of the session.
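
    A minimal sketch of the segment-selection idea described above: score candidate trajectory segments by how much they increase the log-determinant of the accumulated (approximate Fisher) information matrix J^T J, and greedily keep the most informative ones. The per-segment Jacobians are placeholders here; in the real system they would come from the calibration estimator.

    import numpy as np

    def info_gain(acc_info, seg_info):
        """Increase in log det(I) when a segment's information is added."""
        _, logdet0 = np.linalg.slogdet(acc_info)
        _, logdet1 = np.linalg.slogdet(acc_info + seg_info)
        return logdet1 - logdet0

    def select_segments(segment_jacobians, budget, n_params, prior=1e-6):
        """Greedily select `budget` segments maximizing calibration info."""
        acc = prior * np.eye(n_params)   # weak prior keeps I invertible
        remaining = {i: J.T @ J for i, J in enumerate(segment_jacobians)}
        chosen = []
        while remaining and len(chosen) < budget:
            best = max(remaining, key=lambda i: info_gain(acc, remaining[i]))
            acc += remaining.pop(best)
            chosen.append(best)
        return chosen

    Because the information matrix is additive across segments, the greedy gain computation stays cheap, which is what keeps the overall cost constant regardless of session length.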

    Visual-Inertial first responder localisation in large-scale indoor training environments.

    Get PDF
    Accurately and reliably determining the position and heading of first responders undertaking training exercises can provide valuable insight into their situational awareness and give a larger context to the decisions made. Measuring first responder movement, however, requires an accurate and portable localisation system. Training exercises often take place in large-scale indoor environments with limited power infrastructure to support localisation. Indoor positioning technologies that use radio or sound waves for localisation require an extensive network of transmitters or receivers to be installed within the environment to ensure reliable coverage. These technologies also need power sources to operate, making their use impractical for this application. Inertial sensors are infrastructure-independent, low-cost, and low-power positioning devices attached to the person or object being tracked, but their localisation accuracy deteriorates over long-term tracking due to intrinsic biases and sensor noise. This thesis investigates how inertial sensor tracking can be improved by providing corrections from a visual sensor that uses passive infrastructure (fiducial markers) to calculate accurate position and heading values. Even though a visual sensor increases the accuracy of the localisation system, combining the two sensor types is not trivial, especially when they are mounted on different parts of the human body and go through different motion dynamics. Additionally, visual sensors have higher energy consumption, requiring more batteries to be carried by the first responder. This thesis presents a novel sensor fusion approach that loosely couples visual and inertial sensors to create a positioning system that accurately localises walking humans in large-scale indoor environments. Experimental evaluation of the devised localisation system indicates sub-metre accuracy for a 250 m long indoor trajectory. The thesis also proposes two methods to improve the energy efficiency of the localisation system. The first is a distance-based error correction approach, which uses distance estimates from the foot-mounted inertial sensor to reduce the number of corrections required from the visual sensor (a sketch of this idea follows below); results indicate a 70% decrease in energy consumption while maintaining sub-metre localisation accuracy. The second is a motion-type-adaptive error correction approach, which uses the human walking motion type (forward, backward, or sideways) to further optimise energy efficiency by modulating the operation of the visual sensor; results indicate a 25% reduction in the number of corrections required to keep sub-metre localisation accuracy. Overall, this thesis advances the state of the art by providing a sensor fusion solution for long-term sub-metre accurate localisation together with methods to reduce its energy consumption, making it practical for use in first responder training exercises.
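
    A minimal sketch of the distance-based correction idea, assuming the loosely coupled design described above, where the sensors exchange only pose estimates: the foot-mounted inertial unit dead-reckons every step, and the power-hungry visual sensor is woken for a fiducial-marker fix only after a set distance has been walked. The class name and threshold are illustrative.

    import math

    class DistanceTriggeredFusion:
        def __init__(self, correction_interval_m=10.0):
            self.x = self.y = self.heading = 0.0
            self.dist_since_fix = 0.0
            self.interval = correction_interval_m

        def on_inertial_step(self, step_length, heading):
            """Integrate one detected step from the foot-mounted INS."""
            self.heading = heading
            self.x += step_length * math.cos(heading)
            self.y += step_length * math.sin(heading)
            self.dist_since_fix += step_length

        def visual_fix_due(self):
            """True once enough distance has accrued to justify waking
            the visual sensor (this is where the energy saving comes from)."""
            return self.dist_since_fix >= self.interval

        def on_visual_fix(self, x, y, heading):
            """Overwrite the drifting INS state with the marker-based pose
            (loose coupling: only pose estimates cross the sensor boundary)."""
            self.x, self.y, self.heading = x, y, heading
            self.dist_since_fix = 0.0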

    Star Imager For Nanosatellite Applications

    Get PDF
    This research examines the feasibility of commercial off-the-shelf (COTS) complementary metal-oxide-semiconductor (CMOS) image sensors for use on nanosatellites as a star imager. An emphasis is placed on method selection and implementation for the stages of the star imager algorithm: centroiding, star identification, and attitude determination. The star imager algorithm makes use of the lost-in-space condition to provide attitude knowledge for each image independently. Flat-field, checkerboard, and point spread function calibration methods were employed to characterize the star imager. Finally, feasibility of the star imager is assessed through simulations and night sky images.
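
    A minimal sketch of the centroiding stage named above: the sub-pixel star position is the intensity-weighted mean of background-subtracted pixels, x_c = sum(x*I) / sum(I). The background handling is an assumption; the point spread function calibration from the abstract is not modelled here.

    import numpy as np

    def centroid(window, background):
        """Sub-pixel centroid of a small image window around a star."""
        signal = np.clip(window.astype(float) - background, 0.0, None)
        total = signal.sum()
        if total == 0.0:
            return None  # no star signal above background in this window
        ys, xs = np.indices(window.shape)
        return (xs * signal).sum() / total, (ys * signal).sum() / total

    Sub-pixel centroids are what make downstream star identification and attitude determination accurate: the angular error of the final attitude scales directly with centroiding error.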

    IMPLEMENTATION OF KALMAN FILTER TO TRACKING CUSTOM FOUR-WHEEL DRIVE FOUR-WHEEL-STEERING ROBOTIC PLATFORM

    Get PDF
    Vehicle tracking is an important component of autonomy in the robotics field, requiring the integration of hardware and software and the application of advanced algorithms. Sensor measurements are often plagued with noise and require filtering, and no single sensor is sufficient for effective tracking: data from multiple sensors must be fused. The Kalman filter provides a convenient and efficient solution for filtering and fusing sensor data as well as estimating noise error covariances; consequently, it has been central to tracking algorithms since its introduction in 1960. This thesis presents an application of the Kalman filter to the tracking of a custom four-wheel-drive, four-wheel-steering vehicle using a limited sensor suite. Sensor selection is discussed, along with the characteristics of the sensor noise as related to meeting the requirements of the Kalman filter for guaranteeing optimality. The filter requires a dynamical model, which is derived using empirical data methods and evaluated. Tracking results are presented and compared to unfiltered data.
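
    For reference, a textbook sketch of the Kalman filter predict/update cycle the thesis applies. The state-transition, measurement, and noise matrices below are placeholders supplied by the caller; the thesis derives its vehicle model and noise covariances empirically, which is not reproduced here.

    import numpy as np

    def kf_predict(x, P, F, Q, B=None, u=None):
        """Propagate state x and covariance P through the dynamical model."""
        x = F @ x if B is None else F @ x + B @ u
        P = F @ P @ F.T + Q
        return x, P

    def kf_update(x, P, z, H, R):
        """Fuse a noisy sensor measurement z with the prediction."""
        y = z - H @ x                    # innovation (measurement residual)
        S = H @ P @ H.T + R              # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
        x = x + K @ y
        P = (np.eye(P.shape[0]) - K @ H) @ P
        return x, P

    Fusing multiple sensors then amounts to calling kf_update once per sensor with that sensor's own H and R, which is the mechanism the abstract alludes to when it says no single sensor suffices.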

    Extended sensor data fusion for embedded video applications (Fusion de données capteurs étendue pour applications vidéo embarquées)

    Get PDF
    This thesis deals with the fusion of camera and inertial sensor measurements in order to provide a robust motion estimation algorithm for embedded video applications. The targeted platforms are mainly smartphones and tablets. We present a real-time, 2D online camera motion estimation algorithm combining inertial and visual measurements. The proposed algorithm extends the preemptive RANSAC motion estimation procedure with inertial sensor data, introducing a dynamic Lagrangian hybrid scoring of the motion models to make the approach adaptive to various image and motion contents. All these improvements come at little computational cost, keeping the complexity of the algorithm low enough for embedded platforms. The approach is compared with purely inertial and purely visual procedures. A novel approach to real-time hybrid monocular visual-inertial odometry for embedded platforms is also introduced. The interaction between vision and inertial sensors is maximized by performing fusion at multiple levels of the algorithm. Through tests conducted on specifically acquired sequences with ground-truth data, we show that our method outperforms classical hybrid techniques in ego-motion estimation.
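
    A hedged sketch of the hybrid scoring idea described above: each candidate 2D motion model is scored by its visual inlier support minus a dynamically weighted (Lagrangian-style) penalty for disagreeing with the gyroscope-predicted rotation, and a preemptive-style halving picks the winner. The weighting scheme and model representation here are illustrative, not the thesis's exact formulation.

    import numpy as np

    def hybrid_score(model_rotation, residuals, gyro_rotation, lam, thresh=2.0):
        # Visual support (inlier count) minus a weighted penalty for
        # deviating from the gyroscope-predicted in-plane rotation.
        inliers = np.count_nonzero(np.abs(residuals) < thresh)
        return inliers - lam * abs(model_rotation - gyro_rotation)

    def preemptive_select(model_rotations, residuals_per_model,
                          gyro_rotation, lam):
        # Preemptive-style selection: repeatedly halve the candidate set,
        # keeping the best-scoring motion models, until one survives.
        # (True preemptive RANSAC also batches observations; omitted here.)
        candidates = list(range(len(model_rotations)))
        while len(candidates) > 1:
            candidates.sort(key=lambda i: hybrid_score(
                model_rotations[i], residuals_per_model[i],
                gyro_rotation, lam), reverse=True)
            candidates = candidates[:max(1, len(candidates) // 2)]
        return candidates[0]  # index of the winning motion model

    Making lam large when the gyroscope is trustworthy (slow motion, low vibration) and small otherwise is one plausible reading of the "dynamic" weighting: the same scoring function then adapts to different image and motion contents.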