16 research outputs found

    Edge Potential Functions (EPF) and Genetic Algorithms (GA) for Edge-Based Matching of Visual Objects

    Edges are known to be a semantically rich representation of the contents of a digital image. Nevertheless, their use in practical applications is sometimes limited by computational and complexity constraints. In this paper, a new approach is presented that addresses the problem of matching visual objects in digital images by combining the concept of Edge Potential Functions (EPF) with a powerful matching tool based on Genetic Algorithms (GA). EPFs can be easily calculated from an edge map and provide a kind of attractive pattern for a matching contour, which is conveniently exploited by the GA. Several tests were performed in the framework of different image matching applications. The results clearly outline the potential of the proposed method compared to state-of-the-art methodologies. (c) 2007 IEEE.
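The EPF idea can be sketched as an attraction field derived from the edge map. The following is a minimal illustration only: the paper's exact EPF formulation is not reproduced here (a standard distance-transform-based potential stands in for it), the function names are hypothetical, and the GA search itself is reduced to a plain fitness function that a GA would maximise over candidate contours.

```python
# Illustrative sketch, not the paper's implementation: a distance-transform
# attraction field approximates the EPF, and the GA is replaced by a simple
# per-contour fitness function it would optimise.
import numpy as np
from scipy.ndimage import distance_transform_edt

def edge_potential(edge_map, alpha=1.0):
    """Attractive potential: 1 on edge pixels, decaying with distance."""
    d = distance_transform_edt(edge_map == 0)  # distance to nearest edge pixel
    return np.exp(-alpha * d)

def contour_fitness(epf, contour_pts):
    """Candidate GA fitness: mean potential sampled along a contour."""
    return epf[contour_pts[:, 0], contour_pts[:, 1]].mean()

edges = np.zeros((20, 20), dtype=np.uint8)
edges[10, :] = 1                                   # one horizontal edge
epf = edge_potential(edges)
on_edge = np.array([[10, c] for c in range(20)])   # contour lying on the edge
off_edge = np.array([[3, c] for c in range(20)])   # contour far from it
```

A GA would evolve the parameters of a template contour (position, scale, rotation) so as to maximise this fitness, which is highest when the contour lies on strong edges.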

    Robust Image Matching under a Large Disparity

    We present a new method for detecting point matches between two images without using any combinatorial search. Our strategy is to impose various local and non-local constraints as "soft" constraints by introducing their "confidence" measures via mean-field approximations. The computation is a cascade of evaluating the confidence values and sorting according to them. In the end, we impose the "hard" epipolar constraint by RANSAC. We also introduce a model selection procedure to test whether the image mapping can be regarded as a homography. We demonstrate the effectiveness of our method using real image examples.
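The final "hard" geometric verification step can be illustrated with a standard RANSAC fit; the sketch below shows it for a homography (which the paper's model selection step also tests for), using a plain DLT solve. This is not the paper's code: the function names and thresholds are illustrative, and the soft-constraint confidence cascade is assumed to have already produced the candidate matches.

```python
# Generic RANSAC + DLT homography sketch, standing in for the paper's final
# geometric check. Names and parameters are illustrative.
import numpy as np

def fit_homography(src, dst):
    """Direct Linear Transform from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pts):
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    """Keep the model with the most correspondences under `thresh` pixels."""
    rng = np.random.default_rng(seed)
    best_inl = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = fit_homography(src[idx], dst[idx])
        err = np.linalg.norm(project(H, src) - dst, axis=1)
        inl = err < thresh
        if inl.sum() > best_inl.sum():
            best_inl = inl
    return fit_homography(src[best_inl], dst[best_inl]), best_inl
```

For the epipolar (fundamental matrix) case the same loop applies, with a 7- or 8-point solver replacing the 4-point homography fit.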

    An optimised system for generating multi-resolution DTMs using NASA MRO datasets

    Abstract. Within the EU FP-7 iMars project, a fully automated multi-resolution DTM processing chain, called Co-registration ASP-Gotcha Optimised (CASP-GO), has been developed, based on the open source NASA Ames Stereo Pipeline (ASP). CASP-GO includes tiepoint-based multi-resolution image co-registration and an adaptive least squares correlation-based sub-pixel refinement method called Gotcha. The implemented system guarantees global geo-referencing compliance with respect to HRSC (and thence to MOLA), and provides refined stereo matching completeness and accuracy based on the ASP normalised cross-correlation. We summarise issues discovered from experimenting with the open-source ASP DTM processing chain and introduce our new working solutions. These issues include global co-registration accuracy, de-noising, dealing with matching failures, matching confidence estimation, the outlier definition and rejection scheme, various DTM artefacts, uncertainty estimation, and quality-efficiency trade-offs.

    Object recognition and localisation from 3D point clouds by maximum likelihood estimation

    We present an algorithm based on maximum likelihood analysis for the automated recognition of objects, and estimation of their pose, from 3D point clouds. Surfaces segmented from depth images are used as the features, unlike ‘interest point’ based algorithms, which normally discard such data. Compared to the 6D Hough transform it has negligible memory requirements, and it is computationally efficient compared to iterative closest point (ICP) algorithms. The same method is applicable to both the initial recognition/pose estimation problem and subsequent pose refinement, through an appropriate choice of the dispersion of the probability density functions; this single unified approach therefore avoids the usual requirement for different algorithms for these two tasks. In addition to the theoretical description, a simple 2-degree-of-freedom (DOF) example is given, followed by a full 6 DOF analysis of 3D point cloud data from a cluttered scene acquired by a projected fringe-based scanner, which demonstrated an RMS alignment error as low as 0.3 mm.
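The flavour of the 2-DOF example can be sketched as follows: pose (here a pure x, y translation) is estimated by maximising a Gaussian-mixture log-likelihood of the observed points given the transformed model. This is an illustration only, not the paper's formulation: the dispersion value, the grid search (standing in for a proper optimiser), and the function names are all assumptions.

```python
# Hedged 2-DOF sketch: maximum-likelihood translation estimation with a
# Gaussian-mixture likelihood; grid search stands in for a real optimiser.
import numpy as np

def log_likelihood(scene, model, t, sigma=0.5):
    """Gaussian-mixture log-likelihood of scene points given model + t."""
    d2 = ((scene[:, None, :] - (model + t)[None, :, :]) ** 2).sum(axis=2)
    mix = np.exp(-d2 / (2.0 * sigma ** 2)).mean(axis=1)
    return np.log(mix + 1e-300).sum()

def estimate_translation(scene, model, span=5.0, step=0.25):
    """Coarse grid search over the 2-DOF pose (x, y translation)."""
    best_t, best_ll = np.zeros(2), -np.inf
    for tx in np.arange(-span, span, step):
        for ty in np.arange(-span, span, step):
            ll = log_likelihood(scene, model, np.array([tx, ty]))
            if ll > best_ll:
                best_t, best_ll = np.array([tx, ty]), ll
    return best_t
```

Widening `sigma` corresponds to the coarse recognition stage described in the abstract; shrinking it sharpens the likelihood peak for pose refinement, which is how one algorithm serves both tasks.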

    Massive stereo-based DTM production for Mars on cloud computers

    Digital Terrain Model (DTM) creation is essential to improving our understanding of the formation processes of the Martian surface. Although there have been previous demonstrations of open-source and commercial planetary 3D reconstruction software, planetary scientists still struggle to create good quality DTMs that meet their science needs, especially when a large number of high quality DTMs must be produced using "free" software. In this paper, we describe a new open source system that overcomes many of these obstacles, demonstrating results in the context of issues found from experience with several planetary DTM pipelines. We introduce a new fully automated multi-resolution DTM processing chain for NASA Mars Reconnaissance Orbiter (MRO) Context Camera (CTX) and High Resolution Imaging Science Experiment (HiRISE) stereo processing, called the Co-registration Ames Stereo Pipeline (ASP) Gotcha Optimised (CASP-GO), based on the open source NASA ASP. CASP-GO employs tie-point based multi-resolution image co-registration, and Gotcha sub-pixel refinement and densification. The CASP-GO pipeline is used to produce planet-wide CTX and HiRISE DTMs that guarantee global geo-referencing compliance with respect to the High Resolution Stereo Camera (HRSC), and thence to the Mars Orbiter Laser Altimeter (MOLA), providing refined stereo matching completeness and accuracy. All software and good quality products introduced in this paper are being made open source to the planetary science community through collaboration with NASA Ames, the United States Geological Survey (USGS) and the Jet Propulsion Laboratory (JPL) Advanced Multi-Mission Operations System (AMMOS) Planetary Data System (PDS) Pipeline Service (APPS-PDS4), and are browseable and visualisable through the iMars web-based Geographic Information System (webGIS).
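The normalised cross-correlation (NCC) matching at the core of ASP-style stereo can be illustrated in one dimension: for each pixel on a scanline, the disparity is the shift that maximises the NCC score between local windows. This toy sketch is an assumption-laden reduction (single scanline, integer disparity, no Gotcha sub-pixel refinement or densification, hypothetical function names), not CASP-GO itself.

```python
# Toy 1-D NCC matcher illustrating the stereo correlation step; CASP-GO's
# actual 2-D matching and Gotcha refinement are not reproduced here.
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equal-length windows."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def scanline_disparity(left, right, x, half=3, max_disp=10):
    """Best integer disparity d such that right[x - d] matches left[x]."""
    ref = left[x - half : x + half + 1]
    scores = []
    for d in range(max_disp + 1):
        if x - d - half < 0:
            break
        scores.append(ncc(ref, right[x - d - half : x - d + half + 1]))
    return int(np.argmax(scores))
```

A sub-pixel refinement stage (Gotcha's role in the pipeline) would then fit around the integer NCC peak instead of accepting it directly.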

    Anomaly activity classification in grocery stores

    Nowadays, because of the growing number of robberies in shopping malls and grocery stores, automatic camera applications are a vital necessity for detecting anomalous actions. These events usually happen quickly and unexpectedly, so a robust system that can classify anomalies in real time with minimum false alarms is required. The main objective of this project is therefore to classify anomalies that may happen in grocery stores. This objective is pursued under certain assumptions, such as a single fixed camera in the store and the presence of at least one person in the camera view. The actions of the human upper body are used to determine the anomalies, with an articulated motion model as the basis of the anomaly classification design. The process starts with feature extraction, followed by target model establishment, tracking and action classification. Features such as colour and image gradient build the template used as the target model. The models of the different upper-body parts are then tracked over consecutive frames using sum of squared differences (SSD) matching combined with a Kalman filter as the predictor. The spatio-temporal information, i.e. the limb trajectories obtained from tracking, is passed to the proposed classification stage. For classification, three different scenarios are studied: attacking the cash machine, attacking the cashier, and making the store messy. In implementing these scenarios, several events are introduced: basic (static) events, which correspond to static objects in the scene; spatial events, which are actions that depend on the coordinates of body parts; and spatio-temporal events, in which these actions are tracked over consecutive frames. Finally, if one of the scenarios occurs, an anomalous action is detected.
The results show the robustness of the proposed methods, which achieve a minimum false positive rate of 7% for the cash machine attack scenario and a minimum false negative rate of 19% for the cashier attack scenario.
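The SSD-plus-Kalman tracking stage described above can be sketched as follows: an SSD search locates the template near a predicted position, and a constant-velocity Kalman filter supplies that prediction for the next frame. The matrices, noise levels and function names here are illustrative assumptions, not taken from the thesis.

```python
# Hedged sketch of the tracking stage: SSD template matching with a
# constant-velocity Kalman filter as the predictor. All parameters are
# illustrative.
import numpy as np

def ssd_match(frame, template, center, radius=5):
    """Find the SSD minimum in a window around the predicted center."""
    th, tw = template.shape
    best, best_pos = np.inf, center
    for r in range(center[0] - radius, center[0] + radius + 1):
        for c in range(center[1] - radius, center[1] + radius + 1):
            patch = frame[r : r + th, c : c + tw]
            if patch.shape != template.shape:
                continue
            s = ((patch - template) ** 2).sum()
            if s < best:
                best, best_pos = s, (r, c)
    return best_pos

# Constant-velocity Kalman filter over state [row, col, v_row, v_col].
F = np.eye(4); F[0, 2] = F[1, 3] = 1.0          # state transition
H = np.eye(2, 4)                                 # observe position only
Q, R = np.eye(4) * 0.01, np.eye(2) * 1.0         # process / measurement noise

def kf_step(x, P, z):
    x, P = F @ x, F @ P @ F.T + Q                # predict next position
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x = x + K @ (z - H @ x)                      # update with SSD measurement
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

In a full tracker the predicted position `x[:2]` seeds `ssd_match` in each new frame, and the matched position feeds back into `kf_step` as the measurement `z`.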

    Satellite-Based Fusion of Image/Inertial Sensors for Precise Geolocation

    The ability to produce high-resolution images of the Earth’s surface from space has flourished in recent years with the continuous development and improvement of satellite-based imaging sensors. Earth-imaging satellites often rely on complex onboard navigation systems, with dependence on Global Positioning System (GPS) tracking and/or continuous post-capture georegistration, to accurately geolocate ground targets of interest to commercial and military customers. Consequently, these satellite systems are often massive, expensive, and susceptible to poor or unavailable target tracking capabilities in GPS-denied environments. Previous research has demonstrated that a tightly-coupled image-aided inertial navigation system (INS), using existing onboard imaging sensors, can provide significant target tracking improvement over conventional navigation and tracking systems. Satellite-based image-aided navigation is explored as a means of autonomously tracking stationary ground targets by implementing feature detection and recognition algorithms to accurately predict a ground target’s pixel location within subsequent satellite images. The development of a robust satellite-based image-aided INS model offers a convenient, low-cost, low-weight and highly accurate solution to the geolocation precision problem, without the need for human interaction or GPS dependency, while simultaneously providing redundant and sustainable satellite navigation capabilities.

    Integrity Determination for Image Rendering Vision Navigation

    This research addresses the lack of quantitative integrity approaches for vision navigation relying on image or image rendering techniques. The ability to provide quantifiable integrity is a critical aspect of using vision systems as a viable means of precision navigation. This research describes the development of two unique approaches for determining uncertainty and integrity for a vision-based, precision, relative navigation system, based on the concept of using a single-camera vision system, such as an electro-optical (EO) or infrared imaging (IR) sensor, to monitor for unacceptably large and potentially unsafe relative navigation errors. The first approach formulates the integrity solution by means of discrete detection methods, in which the system monitors for conditions when the platform is outside a defined operational area, thus preventing hazardously misleading information (HMI). The second approach utilizes a generalized Bayesian inference approach, in which a full pdf determination of the estimated navigation state is realized. These integrity approaches are demonstrated, in the context of an aerial refueling application, to provide extremely high levels (10⁻⁶) of navigation integrity. Additionally, various sensitivity analyses show the robustness of these integrity approaches to various vision sensor effects and sensor trade-offs.
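The Bayesian route to integrity can be illustrated concretely: once a posterior pdf over the navigation error is available, integrity risk is the probability mass outside the alert limit, to be compared against a requirement such as 10⁻⁶. The sketch below assumes a scalar Gaussian posterior and illustrative limits; the research's full-pdf approach is more general.

```python
# Hedged sketch: integrity risk as posterior tail mass outside the alert
# limit, assuming a scalar Gaussian posterior. Limits are illustrative.
import math

def integrity_risk(sigma, alert_limit, mu=0.0):
    """P(|error| > alert_limit) under a Gaussian posterior N(mu, sigma^2)."""
    z_hi = (alert_limit - mu) / (sigma * math.sqrt(2.0))
    z_lo = (-alert_limit - mu) / (sigma * math.sqrt(2.0))
    prob_inside = 0.5 * (math.erf(z_hi) - math.erf(z_lo))  # mass within limits
    return 1.0 - prob_inside
```

For example, a 0.1 m error sigma against a 0.5 m alert limit gives a risk of roughly 5.7e-7, meeting a 10⁻⁶ requirement, whereas doubling the sigma violates it.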