
    PUGTIFs: Passively user-generated thermal invariant features

    Feature detection is a vital aspect of computer vision applications, but adverse environments, distance and illumination can affect the quality and repeatability of features or even prevent their identification. Invariance to these constraints would make an ideal feature attribute. Here we propose the first exploitation of consistently occurring thermal signatures generated by a moving platform, a paradigm we define as passively user-generated thermal invariant features (PUGTIFs). In this particular instance, the PUGTIF concept is applied through the use of thermal footprints that are passively and continuously user-generated by heat differences, so that features are no longer dependent on the changing scene structure (as in classical approaches) but now maintain spatial coherency and remain invariant to changes in illumination. A framework suitable for any PUGTIF has been designed, consisting of three methods: first, the known footprint size is used to solve for monocular localisation and thus scale ambiguity; second, the consistent spatial pattern allows us to determine heading orientation; and third, these principles are combined in our automated thermal footprint detector (ATFD) method to achieve segmentation/feature detection. We evaluated the detection of PUGTIFs in four laboratory environments (sand, grass, grass with foliage, and carpet) and compared ATFD to typical image segmentation methods. We found that ATFD is superior to other methods while also solving for scaled monocular camera localisation and providing user heading in multiple environments.
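
    The scale-recovery idea behind the first method can be illustrated with the standard pinhole relation: once the physical footprint size is known, its apparent pixel extent fixes the camera-to-footprint distance. A minimal sketch with illustrative numbers (not the authors' ATFD implementation):

    ```python
    # Minimal sketch of scale recovery from a known-size feature (pinhole model).
    # All numeric values are illustrative assumptions, not values from the paper.

    def depth_from_known_size(focal_px: float, real_size_m: float, pixel_size_px: float) -> float:
        """Distance to an object of known physical size from its apparent pixel size.

        Pinhole model: pixel_size = focal_px * real_size / depth, solved for depth.
        """
        return focal_px * real_size_m / pixel_size_px

    # Example: a 0.28 m footprint imaged 90 px long by a camera with a 600 px focal length
    depth = depth_from_known_size(focal_px=600.0, real_size_m=0.28, pixel_size_px=90.0)
    print(f"camera-to-footprint distance: {depth:.2f} m")  # ~1.87 m
    ```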

    Thermal stereo odometry for UAVs

    In the last decade, visual odometry (VO) has attracted significant research attention within the computer vision community. Most of the work has been carried out using standard visible-band cameras. These sensors offer numerous advantages but also suffer from some drawbacks, such as illumination variations and limited operational time (i.e., daytime only). In this paper, we explore techniques that allow us to extend the concepts beyond the visible spectrum. We introduce a localization solution based on a pair of thermal cameras. We focus on VO and demonstrate the accuracy of the proposed solution in daytime as well as night-time. The first challenge with thermal cameras is their geometric calibration. Here, we propose a solution to overcome this issue and enable stereopsis. VO requires a good set of feature correspondences. We use a combination of the Fast-Hessian detector with the Fast Retina Keypoint (FREAK) descriptor for that purpose. A range of optimization techniques can be used to compute the incremental motion. Here, we propose the double dogleg algorithm and show that it presents an interesting alternative to the commonly used Levenberg-Marquardt approach. In addition, we explore thermal 3-D reconstruction and show that performance similar to the visible band can be achieved. In order to validate the proposed solution, we build an innovative experimental setup to capture various data sets, where different weather and time conditions are considered.
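
    As a rough illustration of the feature-correspondence stage, OpenCV exposes both building blocks: the Fast-Hessian detector (via SURF, in the non-free contrib module) and the FREAK descriptor. A minimal sketch, not the paper's pipeline, assuming an opencv-contrib-python build with non-free modules enabled and hypothetical input files:

    ```python
    # Sketch: Fast-Hessian (SURF) detection + FREAK description + brute-force matching.
    # Requires opencv-contrib-python built with OPENCV_ENABLE_NONFREE for SURF.
    import cv2

    left = cv2.imread("thermal_left.png", cv2.IMREAD_GRAYSCALE)    # hypothetical files
    right = cv2.imread("thermal_right.png", cv2.IMREAD_GRAYSCALE)

    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)   # Fast-Hessian detector
    freak = cv2.xfeatures2d.FREAK_create()                         # FREAK binary descriptor

    kp_l = detector.detect(left, None)
    kp_r = detector.detect(right, None)
    kp_l, desc_l = freak.compute(left, kp_l)
    kp_r, desc_r = freak.compute(right, kp_r)

    # Hamming distance for binary descriptors; cross-check keeps symmetric matches only
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(desc_l, desc_r), key=lambda m: m.distance)
    print(f"{len(matches)} stereo correspondences")
    ```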

    Countermeasure Leveraging Optical Attractor Kits (CLOAK): interpretational disruption of a visual-based workflow

    Due to their negligible cost, small energy footprint, compact size and passive nature, cameras are emerging as one of the most appealing sensing approaches for the realization of fully autonomous intelligent mobile platforms. In defence contexts, passive sensors such as cameras represent an important asset due to the absence of a detectable external operational signature, with at most some radiation generated by their components. This characteristic, however, makes targeting them a quite daunting task, as their active neutralization requires pinning down a target of small angular diameter moving at high speed. In this paper we introduce an interpretational countermeasure acting against autonomous platforms that rely on feature-based optical workflows. We classify our approach as an interpretational disruption because it exploits the heuristics of the model used by the on-board artificial intelligence to interpret the available data. To sidestep the difficulty of accurately pinpointing such an elusive target, our approach consists of passively corrupting, from a perception point of view, the whole environment with a small, sparse set of physical observables. The concrete design of these systems is developed from the response of a feature detector of interest. We define an optical attractor as the collection of pixels inducing an exceptionally strong response for a target feature detector. We also define a physical object inducing these pixel structures for defence purposes as a CLOAK: Countermeasure Leveraging Optical Attractor Kits. Using optical attractors, any optical algorithm relying on feature extraction can potentially be disrupted, in a completely passive and non-destructive fashion.
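
    To make the optical-attractor definition concrete, one can rank a detector's keypoints by their response values and flag locations whose response is exceptionally strong. A minimal sketch using FAST in OpenCV, with a hypothetical input image and an illustrative cut-off (the paper's concrete attractor designs are not reproduced here):

    ```python
    # Sketch: locate pixels that induce an exceptionally strong response in a
    # target feature detector -- the defining property of an optical attractor.
    import cv2
    import numpy as np

    img = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

    fast = cv2.FastFeatureDetector_create(threshold=10)
    keypoints = fast.detect(img, None)

    responses = np.array([kp.response for kp in keypoints])
    # "Exceptionally strong": e.g. more than 3 standard deviations above the mean
    # (illustrative criterion, not the paper's).
    cutoff = responses.mean() + 3.0 * responses.std()
    attractors = [kp.pt for kp in keypoints if kp.response > cutoff]
    print(f"{len(attractors)} candidate attractor locations out of {len(keypoints)} keypoints")
    ```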

    Perception fields: analysing distributions of optical features as a proximity navigation tool for autonomous probes around asteroids

    This paper suggests a new way of interpreting visual information perceived by visible cameras in the proximity of small celestial bodies. At close ranges, camera-based perception processes generally rely on computational constructs known as features. Our hypothesis is that trends in the quantity of available optical features can be correlated to variations in the angular distance from the source of illumination. Indeed, the discussed approach is based on treating properties related to these detected optical features as readings of a field (the perception fields of the title), assumed to be induced by the coupling of the environmental conditions and the state of the sensing device. The extreme spectrum of shapes, surface properties and gravity fields of small celestial bodies heavily affects visual proximity operational procedures. Therefore, self-contained ancillary tools providing context and an evaluation of estimators' performance while using the fewest priors are extremely significant in these conditions. This preliminary study presents an analysis of the occurrences of optical features observed around two asteroids, 101955 Bennu and (8567) 1996 HW1, in visual data simulated within Blender, a computer graphics engine. The comparison of three different feature detectors showed distinctive trends in the distribution of the detected optical features, directly correlated to the spacecraft-target-Sun angle, confirming our hypothesis.
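
    The raw measurement behind a perception field is simply the count of detected features as a function of the spacecraft-target-Sun (phase) angle. A minimal sketch comparing three common OpenCV detectors on a set of rendered frames; the file naming and detector choice are assumptions, not the study's exact setup:

    ```python
    # Sketch: tally feature counts per detector across frames rendered at known
    # phase angles -- the raw signal behind a "perception field".
    import cv2

    detectors = {
        "ORB": cv2.ORB_create(),
        "SIFT": cv2.SIFT_create(),
        "FAST": cv2.FastFeatureDetector_create(),
    }

    # Hypothetical frames rendered at 10-degree phase-angle steps
    frames = {angle: f"bennu_phase_{angle:03d}.png" for angle in range(0, 91, 10)}

    field = {name: {} for name in detectors}
    for angle, path in frames.items():
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        for name, det in detectors.items():
            field[name][angle] = len(det.detect(img, None))

    for name, counts in field.items():
        print(name, counts)  # feature count vs. phase angle
    ```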

    NAV-Landmarks: deployable 3D infrastructures to enable CubeSats navigation near asteroids

    Autonomous operations in the proximity of Near Earth Objects (NEOs) are perhaps the most challenging and demanding type of mission operation currently being considered. The exceptional variability of geometric and illumination conditions, the scarcity of large-scale surface features and the strong perturbations in their proximity demand incredibly robust systems. Robustness is usually introduced either by increasing the number and/or complexity of on-board sensors, or by employing algorithms capable of handling uncertainties, which are often computationally heavy. While for a large satellite this would be predominantly an economic issue, for small satellites these constraints might push the ability to accomplish challenging missions beyond the realm of technical possibility. The scope of this paper is to present an active approach that allows small satellites deployed by a mothership to perform robust navigation using only a monocular visible camera. In particular, the introduction of Non-cooperative Artificial Visual Landmarks (NAV-Landmarks) on the surface of the target object is proposed to augment the capabilities of small satellites. These external elements can be effectively regarded as an infrastructure forming an extension of the landing system. The quantitative efficiency estimation of this approach will be performed by comparing the outputs of a visual odometry algorithm, which operates on sequences of images representing ballistic descents around a small non-rotating asteroid. These sequences of virtual images will be obtained through the integration of two simulated models, both based on the Apollo asteroid 101955 Bennu. The first is a dynamical model describing the landing trajectory, realized by integrating over time the gravitational potential around a three-axis ellipsoid. The second model is visual, generated by introducing into Unreal Engine 4 a CAD model of the asteroid (with a resolution of 75 cm) and scattering on its surface a number N of cubes with side length L. The effect of both N and L on the navigation accuracy will be reported. While defining an optimal shape for the NAV-Landmarks is outside the scope of this paper, prescriptions about the beacons' geometry will be provided. In particular, in this work the objects will be represented as high-visibility cubes. This shape satisfies, albeit in a non-optimal way, most of the design goals.
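
    The quantitative evaluation step amounts to comparing an estimated descent trajectory against the simulated ground truth. A minimal sketch of an absolute trajectory error computation in NumPy, as a generic metric for this kind of comparison (not necessarily the paper's exact figure of merit):

    ```python
    # Sketch: absolute trajectory error (RMSE of per-frame position differences)
    # between a visual-odometry estimate and a simulated ground-truth descent.
    import numpy as np

    def trajectory_rmse(estimated: np.ndarray, ground_truth: np.ndarray) -> float:
        """Both inputs are (N, 3) arrays of positions, already aligned and time-synced."""
        errors = np.linalg.norm(estimated - ground_truth, axis=1)
        return float(np.sqrt(np.mean(errors ** 2)))

    # Illustrative use with synthetic data
    gt = np.cumsum(np.random.randn(100, 3) * 0.1, axis=0)
    est = gt + np.random.randn(100, 3) * 0.02          # estimate with small noise
    print(f"ATE RMSE: {trajectory_rmse(est, gt):.3f} (scene units)")
    ```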

    Towards in-orbit hyperspectral imaging of space debris

    Satellites are vulnerable to space debris larger than ~1 cm, but much of this debris cannot be tracked from the ground. In-orbit detection and tracking of debris is one solution to this problem. We present some steps towards achieving this, in particular the use of hyperspectral imaging to maximise the information obtained. We present current work related to hyperspectral in-orbit imaging of space debris in three areas: scenario evaluation, a reflectance database, and an image simulator. Example results are presented. Hyperspectral imaging has the potential to provide valuable additional information, such as assessments of spacecraft or debris condition and even spectral "finger-printing" of material types or use (e.g. propellant contamination). These project components are being merged to assess mission opportunities and to develop enhanced data processing methods to improve knowledge and understanding of the orbital environment.
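
    One standard way to "finger-print" a material against a reflectance database is the spectral angle mapper (SAM), which scores the angle between an observed spectrum and each library spectrum. A minimal sketch with an illustrative four-band library; the project's actual database and matching method are not specified in the abstract:

    ```python
    # Sketch: spectral angle mapper (SAM) matching of an observed spectrum
    # against a small reflectance library.
    import numpy as np

    def spectral_angle(a: np.ndarray, b: np.ndarray) -> float:
        """Angle in radians between two reflectance spectra; smaller = more similar."""
        cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
        return float(np.arccos(np.clip(cos, -1.0, 1.0)))

    # Hypothetical 4-band reflectance library (made-up values)
    library = {
        "aluminium": np.array([0.80, 0.82, 0.85, 0.86]),
        "solar_cell": np.array([0.10, 0.12, 0.30, 0.55]),
        "mli_foil": np.array([0.60, 0.55, 0.50, 0.45]),
    }

    observed = np.array([0.11, 0.13, 0.28, 0.52])
    best = min(library, key=lambda name: spectral_angle(observed, library[name]))
    print(f"best match: {best}")  # "solar_cell" for these illustrative numbers
    ```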

    HySim: a tool for space-to-space hyperspectral resolved imagery

    This paper introduces HySim, a novel tool addressing the need for hyperspectral space-to-space imaging simulations, vital for in-orbit spacecraft inspection missions. This tool fills the gap by enabling the generation of hyperspectral space-to-space images across various scenarios, including fly-bys, inspections, rendezvous, and proximity operations. HySim combines open-source tools to handle complex scenarios, providing versatile configuration options for imaging scenarios, camera specifications, and material properties. It accurately simulates hyperspectral images of the target scene. This paper outlines HySim's features and its validation against real space-borne images, and discusses its potential applications in space missions, emphasising its role in advancing space-to-space inspection and in-orbit servicing planning.
    UK Defence and Security Accelerator (DASA)

    Standalone and embedded stereo visual odometry based navigation solution

    © Cranfield University, 2014. This thesis investigates techniques for, and designs, an autonomous stereo-vision-based navigation sensor to improve stereo visual odometry for the purpose of navigation in unknown environments. In particular, it targets autonomous navigation in a space mission context, which imposes challenging constraints on algorithm development and hardware requirements. For instance, the Global Positioning System (GPS) is not available in this context, so a navigation solution cannot rely on such external sources of information. This problem is addressed through the conception of an intelligent perception-sensing device that provides precise outputs for absolute and relative 6-degrees-of-freedom (DOF) positioning. This is achieved using only images from stereo-calibrated cameras, possibly coupled with an inertial measurement unit (IMU), while fulfilling real-time processing requirements. Moreover, no prior knowledge about the environment is assumed. Robotic navigation has been the motivating research to investigate different and complementary areas such as stereovision, visual motion estimation, optimisation and data fusion. Several contributions have been made in these areas. Firstly, an efficient feature detection, stereo matching and feature tracking strategy based on the Kanade-Lucas-Tomasi (KLT) feature tracker is proposed to form the base of the visual motion estimation. Secondly, in order to cope with extreme illumination changes, a high dynamic range (HDR) imaging solution is investigated and a comparative assessment of feature tracking performance is conducted. Thirdly, a two-view local bundle adjustment scheme based on trust-region minimisation is proposed for precise visual motion estimation. Fourthly, a novel KLT feature tracker using IMU information is integrated into the visual odometry pipeline. Finally, a smart standalone stereo visual/IMU navigation sensor has been designed, integrating an innovative combination of hardware and the novel software solutions proposed above. As a result of a balanced combination of hardware and software implementation, we achieved a 5 fps frame rate while processing up to 750 initial features at a resolution of 1280×960. To our knowledge, this is the highest resolution reached in real time for visual odometry applications. In addition, the visual odometry accuracy of our algorithm matches the state of the art, with less than 1% relative error in the estimated trajectories.
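
    The KLT front end described above maps directly onto OpenCV's pyramidal Lucas-Kanade tracker. A minimal sketch of the detect-then-track pattern between two frames; the frame filenames are hypothetical and the parameters are illustrative, not the thesis's tuned values:

    ```python
    # Sketch: Shi-Tomasi corner detection followed by pyramidal KLT tracking,
    # the detect-and-track pattern at the base of the visual motion estimation.
    import cv2

    prev = cv2.imread("frame_0000.png", cv2.IMREAD_GRAYSCALE)  # hypothetical frames
    curr = cv2.imread("frame_0001.png", cv2.IMREAD_GRAYSCALE)

    pts0 = cv2.goodFeaturesToTrack(prev, maxCorners=750, qualityLevel=0.01, minDistance=7)
    pts1, status, _err = cv2.calcOpticalFlowPyrLK(
        prev, curr, pts0, None, winSize=(21, 21), maxLevel=3
    )

    tracked0 = pts0[status.ravel() == 1]   # surviving features in the previous frame
    tracked1 = pts1[status.ravel() == 1]   # their tracked positions in the current frame
    print(f"tracked {len(tracked1)} of {len(pts0)} features")
    ```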
