10 research outputs found

    Heliport Detection Using Artificial Neural Networks

    No full text
    Automatic image exploitation is a critical technology for quick content analysis of high-resolution remote sensing images. The presence of a heliport in an image usually implies an important facility, such as a military installation; therefore, detecting heliports can reveal critical information about the content of an image. In this article, two learning-based algorithms that use artificial neural networks to detect H-shaped, light-colored heliports are presented. The first algorithm is based on shape analysis of heliport candidate segments using classical artificial neural networks. The second algorithm uses deep-learning techniques. While deep learning can solve difficult problems successfully, classical learning approaches can be tuned easily to obtain fast and reasonable results. Therefore, although the main objective of this article is heliport detection, it also compares a deep-learning-based approach with a classical learning-based approach and discusses the advantages and disadvantages of both techniques.
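As a rough illustration of the classical branch described above, the sketch below thresholds a light-colored candidate segment and computes two simple shape descriptors (bounding-box aspect ratio and fill ratio) of the kind that could feed a small neural network. The specific features and the synthetic "H" image are assumptions for illustration, not the article's actual feature set.

```python
import numpy as np

def shape_features(mask):
    """Simple shape descriptors of a binary candidate segment:
    bounding-box aspect ratio and fill ratio (area / bounding-box area)."""
    ys, xs = np.nonzero(mask)
    h = ys.max() - ys.min() + 1
    w = xs.max() - xs.min() + 1
    aspect = min(h, w) / max(h, w)   # close to 1.0 for square-ish segments
    fill = mask.sum() / (h * w)      # low for hollow shapes such as an "H"
    return np.array([aspect, fill])

# A tiny synthetic "H" on a dark background (a light-colored candidate).
img = np.zeros((9, 9))
img[1:8, 2] = 1.0   # left stroke
img[1:8, 6] = 1.0   # right stroke
img[4, 2:7] = 1.0   # cross bar
candidate = img > 0.5        # threshold keeps the light-colored pixels
feats = shape_features(candidate)
print(feats)
```

A classifier trained on such descriptors is cheap to tune, which matches the abstract's point about classical approaches giving fast, reasonable results.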

    Circular target detection algorithm on satellite images based on radial transformation

    No full text
    In recent years, remote sensing has been used increasingly by governmental and industrial institutions worldwide. Target detection has an important place among the applications developed using satellite imagery. In this paper, an original circular target detection algorithm based on a radial transformation is proposed. The algorithm consists of three stages: pre-processing, target detection, and post-processing. In the pre-processing stage, bilateral noise-reduction filtering and vegetation detection are performed, both of which are required by the target detection stage. The target detection stage finds circular targets using a radial transformation algorithm and variables obtained from training, and the post-processing stage eliminates falsely detected targets by utilizing the vegetation information. Petroleum, Oil, and Lubricants (POL) depots in industrial areas and harbors were chosen as the application area for the proposed algorithm. The algorithm was trained and tested on a data set of 4-band images that include a near-infrared band. The proposed algorithm is able to detect circular targets of different types and sizes as a consequence of using a full radial transformation search, and it gives rewarding results on industrial areas and harbors in the experiments conducted.
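A heavily reduced sketch of the two key ingredients named in the abstract: an NDVI-style vegetation index from the near-infrared band (used for rejecting false targets) and a radial score that measures how strongly edge pixels lie on a circle of a given radius around a candidate center. Both functions, and the synthetic edge map, are illustrative assumptions rather than the paper's exact radial transformation.

```python
import numpy as np

def ndvi(nir, red):
    """Vegetation index from the NIR band; high values flag vegetation,
    which the post-processing stage can use to discard false targets."""
    return (nir - red) / (nir + red + 1e-9)

def radial_score(edge, cx, cy, r, n=64):
    """Mean edge response sampled along a circle of radius r at (cx, cy),
    a stand-in for one radius of a full radial-transformation search."""
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(t)).astype(int), 0, edge.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(t)).astype(int), 0, edge.shape[0] - 1)
    return edge[ys, xs].mean()

# Synthetic edge map containing one circle of radius 5 centred at (10, 10).
edge = np.zeros((21, 21))
t = np.linspace(0, 2 * np.pi, 200)
edge[np.round(10 + 5 * np.sin(t)).astype(int),
     np.round(10 + 5 * np.cos(t)).astype(int)] = 1.0

# Searching over radii recovers the true radius as the best-scoring one.
best = max(((r, radial_score(edge, 10, 10, r)) for r in range(3, 9)),
           key=lambda p: p[1])
print(best[0])
```

Scanning all centers and radii this way is what makes a full radial search able to handle circular targets of different sizes, as the abstract notes.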

    Relations Between Reconstructed 3D Entities

    No full text
    In this paper, we first propose an analytic formulation for the positional and orientational uncertainty of local 3D line descriptors reconstructed by stereo. We evaluate these predicted uncertainties with Monte Carlo simulations and study their dependency on different parameters (position and orientation). In the second part, we use this definition to derive new formulations for inter-feature distance and coplanarity. These new formulations take the predicted uncertainty into account, allowing for better robustness. We demonstrate the positive effect of the modified definitions on some simple scenarios.
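The Monte Carlo style of evaluation mentioned above can be illustrated on the simplest stereo quantity: depth from disparity, z = f·b/d, is non-linear in d, so pixel noise on the disparity produces depth noise that grows roughly quadratically with distance. The focal length, baseline, and noise level below are assumed values for illustration, not the paper's setup.

```python
import numpy as np

rng = np.random.default_rng(0)
f, b = 800.0, 0.12          # assumed focal length [px] and baseline [m]
sigma_d = 0.5               # assumed disparity noise [px]

def depth_std(d, n=200_000):
    """Monte Carlo estimate of the depth standard deviation at disparity d."""
    noisy = d + rng.normal(0.0, sigma_d, n)
    return np.std(f * b / noisy)

near, far = depth_std(48.0), depth_std(12.0)   # z = 2 m vs z = 8 m
print(near, far)
```

The first-order prediction sigma_z ≈ f·b·sigma_d / d² matches the simulated spread, which is the kind of analytic-versus-Monte-Carlo agreement the paper checks for its 3D line descriptors.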

    Extraction of Multi-Modal Object Representations in a Robot Vision System

    No full text
    We introduce one module in a cognitive system that learns the shape of objects by active exploration. More specifically, we propose a feature tracking scheme that makes use of the knowledge of a robotic arm motion to: 1) segment the object currently grasped by the robotic arm from the rest of the visible scene, and 2) learn a representation of its 3D shape without any prior knowledge of the object. The 3D representation is generated by stereo reconstruction of local multi-modal edge features. The segmentation between features belonging to the object and those describing the rest of the scene is achieved using Bayesian inference. We then show the shape models extracted by this system from various objects.
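A minimal sketch of the Bayesian segmentation idea: a tracked feature whose observed motion stays close to the motion predicted from the known arm trajectory accumulates evidence of belonging to the grasped object. The Gaussian likelihood widths and the prior are assumed values for illustration, not the paper's calibrated model.

```python
import numpy as np

def posterior_object(residuals, prior=0.5, sigma_obj=1.0, sigma_bg=5.0):
    """Sequential Bayes update of P(feature on object).
    residuals: per-frame distance [px] between the feature's observed
    motion and the motion predicted from the arm trajectory."""
    p = prior
    for r in residuals:
        l_obj = np.exp(-0.5 * (r / sigma_obj) ** 2) / sigma_obj
        l_bg = np.exp(-0.5 * (r / sigma_bg) ** 2) / sigma_bg
        p = p * l_obj / (p * l_obj + (1 - p) * l_bg)
    return p

on_object = posterior_object([0.4, 0.8, 0.3, 0.6])   # moves with the arm
background = posterior_object([6.0, 7.5, 5.2, 8.0])  # inconsistent motion
print(on_object, background)
```

After a few frames the two hypotheses separate sharply, which is what lets the system segment the grasped object without any object prior.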

    Multi-spectral False Color Shadow Detection

    No full text
    With the availability of high-resolution commercial satellite images, automated analysis and object extraction have become even more important topics in remote sensing. As shadows cover a significant portion of an image, they play an important role in automated analysis. While they degrade the performance of applications such as image registration, shadow is an important cue for information such as man-made structures. In this article, a shadow detection algorithm that uses near-infrared information in combination with the RGB bands is introduced. The algorithm is applied in an application for automated building detection.
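One reason the near-infrared band helps, sketched below: shadows are dark in every band including NIR, whereas vegetation is dark in red but bright in NIR, so a joint intensity/NIR rule separates the two. The thresholds and sample pixels are assumptions for illustration, not the article's algorithm.

```python
import numpy as np

def shadow_mask(r, g, b, nir, t_dark=0.25, t_nir=0.3):
    """Flag pixels that are dark in RGB *and* dark in NIR as shadow;
    dark-in-RGB but bright-in-NIR pixels are likely vegetation instead."""
    intensity = (r + g + b) / 3.0
    return (intensity < t_dark) & (nir < t_nir)

# Three pixels: sunlit concrete, vegetation, cast shadow.
r   = np.array([0.6, 0.10, 0.08])
g   = np.array([0.6, 0.30, 0.09])
b   = np.array([0.6, 0.10, 0.10])
nir = np.array([0.5, 0.80, 0.05])
print(shadow_mask(r, g, b, nir))   # only the last pixel is flagged
```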

    Semantic Reasoning for Scene Interpretation

    No full text
    In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable framework for this representation and demonstrate its potential with two applications. In the first application, we localize lane structures using the semantic descriptors and their relations in a Bayesian framework. In the second application, which is in the context of vision-based grasping, we show how the semantic relations can be associated with actions that allow for grasping without using any object knowledge.
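The labeled-graph representation argued for above can be pictured as nodes carrying semantic labels for 2D and 3D scene entities, with edges carrying relation labels. The node and relation names below are illustrative assumptions, not the paper's ontology.

```python
# Minimal labeled-graph scene sketch: 2D edge segments are linked to the
# 3D lines they reconstruct, and 3D lines are linked by semantic relations.
scene = {
    "nodes": {
        "e1": {"label": "edge-segment", "aspect": "2D"},
        "e2": {"label": "edge-segment", "aspect": "2D"},
        "L1": {"label": "3D-line", "aspect": "3D"},
        "L2": {"label": "3D-line", "aspect": "3D"},
    },
    "edges": [
        ("e1", "L1", "reconstructs"),
        ("e2", "L2", "reconstructs"),
        ("L1", "L2", "coplanar"),
    ],
}

def related(graph, relation):
    """All node pairs connected by a given relation label."""
    return [(a, b) for a, b, rel in graph["edges"] if rel == relation]

print(related(scene, "coplanar"))
```

Queries over relation labels like this are what let an application (lane finding, grasping) reason on the graph without touching pixel data directly.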

    A Strategy for Grasping Unknown Objects Based on Co-Planarity and Colour Information

    No full text
    In this work, we describe and evaluate a grasping mechanism that does not make use of any specific prior object knowledge. The mechanism makes use of second-order relations between visually extracted multi-modal 3D features provided by an early cognitive vision system. More specifically, the algorithm is based on two relations covering geometric information in terms of a co-planarity constraint as well as appearance-based information in terms of co-occurrence of colour properties. We show that our algorithm, although making use of such rather simple constraints, is able to grasp objects with a reasonable success rate in rather complex environments (i.e., cluttered scenes with multiple objects). Moreover, we have embedded the algorithm within a cognitive system that allows for autonomous exploration and learning in different contexts. First, the system is able to perform long action sequences in which, although grasping attempts are not always successful, it can recover from mistakes and, more importantly, evaluate the success of the grasps autonomously by haptic feedback (i.e., by a force-torque sensor at the wrist and proprioceptive information about the distance of the gripper after a grasping attempt). Such labelled data is then used to improve the initially hard-wired algorithm by learning. Moreover, the grasping behaviour has been used in a cognitive system to trigger higher-level processes such as object learning and learning of object-specific grasping.
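The co-planarity constraint named above can be checked for two 3D line features (each a point plus a direction) with a scalar triple product: the two directions and the vector joining the points must be coplanar. This is a simplified stand-in for the paper's relation, with a tolerance chosen arbitrarily for illustration.

```python
import numpy as np

def coplanar(p1, d1, p2, d2, tol=1e-6):
    """Two 3D lines (point p, direction d) lie in a common plane iff the
    scalar triple product of d1, d2 and the joining vector is ~zero."""
    triple = np.dot(np.cross(d1, d2), p2 - p1)
    return abs(triple) < tol

p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, d2 = np.array([0.0, 1.0, 0.0]), np.array([0.0, 1.0, 0.0])  # same plane z=0
p3, d3 = np.array([0.0, 0.0, 1.0]), np.array([0.0, 1.0, 0.0])  # skew line
print(coplanar(p1, d1, p2, d2), coplanar(p1, d1, p3, d3))
```

Pairs of coplanar features suggest a surface patch, which is what makes the relation a useful trigger for grasp hypotheses on unknown objects.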

    Road interpretation for driver assistance based on an early cognitive vision system

    No full text
    Keywords: large scale maps, lane detection, independently moving objects
    In this work, we address the problem of road interpretation for driver assistance based on an early cognitive vision system. The structure of a road and the relevant traffic are interpreted in terms of ego-motion estimation of the car, independently moving objects on the road, lane markers, and large-scale maps of the road. We make use of temporal and spatial disambiguation mechanisms to increase the reliability of visually extracted 2D and 3D information. This information is then used to interpret the layout of the road by using lane markers that are detected via Bayesian reasoning. We also estimate the ego-motion of the car, which is used to create large-scale maps of the road and to detect independently moving objects. Sample results for the presented algorithms are shown on a stereo image sequence that has been collected from a structured road.
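The map-building step described above rests on chaining per-frame ego-motion estimates into a global trajectory. The sketch below integrates assumed planar motions (yaw increment and forward distance per frame); the motion values are made up for illustration and stand in for the system's visual ego-motion estimates.

```python
import numpy as np

def integrate(motions):
    """Chain per-frame planar ego-motion estimates (yaw increment [rad],
    forward distance [m]) into a global 2D trajectory for map building."""
    x = y = yaw = 0.0
    path = [(x, y)]
    for dyaw, dist in motions:
        yaw += dyaw
        x += dist * np.cos(yaw)
        y += dist * np.sin(yaw)
        path.append((x, y))
    return path

# Four frames of straight driving, then a 90-degree left turn.
path = integrate([(0.0, 1.0)] * 4 + [(np.pi / 2, 1.0)])
print(path[-1])
```

Subtracting the predicted ego-motion from observed feature motion is also what exposes independently moving objects, the other use of ego-motion named in the abstract.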