
    Mesh-based video coding for low bit-rate communications

    In this paper, a new method for low bit-rate content-adaptive mesh-based video coding is proposed. Intra-frame coding in this method employs feature-map extraction for node distribution at specific threshold levels, placing initial nodes densely in regions that contain high-frequency features and sparsely in smooth regions. Insignificant nodes are then largely removed by a subsequent node elimination scheme. A Hilbert scan is applied before quantization and entropy coding to reduce the amount of transmitted information. For moving images, only the node position and color parameters of a subset of nodes change from frame to frame, so it is sufficient to transmit only these changed parameters. The proposed method is well suited to video coding at very low bit rates: processing results demonstrate that it provides good subjective and objective image quality with a lower number of required bits.
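    The abstract does not give implementation details for the Hilbert scan step; as a minimal sketch (function names and the grid order are assumptions), the following orders mesh-node coordinates along a Hilbert curve so that spatially neighboring nodes become adjacent in the stream handed to quantization and entropy coding.

```python
import numpy as np

def hilbert_index(order, x, y):
    """Index of integer grid point (x, y) along a Hilbert curve covering a
    2**order x 2**order grid (standard bitwise xy-to-d conversion)."""
    n = 1 << order
    d = 0
    s = n >> 1
    while s > 0:
        rx = 1 if (x & s) else 0
        ry = 1 if (y & s) else 0
        d += s * s * ((3 * rx) ^ ry)
        # Rotate/reflect the quadrant so lower-order bits follow the base curve.
        if ry == 0:
            if rx == 1:
                x = n - 1 - x
                y = n - 1 - y
            x, y = y, x
        s >>= 1
    return d

def hilbert_order_nodes(nodes, order=9):
    """Sort mesh nodes (an N x 2 array of pixel coordinates) along a Hilbert
    scan; 2**order must cover the frame dimensions."""
    nodes = np.asarray(nodes)
    keys = np.array([hilbert_index(order, int(x), int(y)) for x, y in nodes])
    return nodes[np.argsort(keys)]
```

    Because the Hilbert curve preserves locality, consecutive nodes in the scan usually have similar positions and colors, so their differences quantize to small values and entropy-code cheaply.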

    Intensity based image registration of satellite images using evolutionary techniques

    Image registration is a fundamental image-processing technique for determining the geometrical transformation that gives the most accurate match between a reference and a floating image; its main aim is to align the two images. Satellite images to be fused for numerous applications must be registered before use, and the main challenge in satellite image registration is finding the optimum transformation parameters. In this work, the misalignment is considered to consist of rigid and affine transformations. An intensity-based satellite image registration technique is used to register the floating image to the native coordinate system, with normalized mutual information (NMI) as the similarity metric for optimizing and updating the transform parameters. Because no assumptions are made regarding the nature of the relationship between the image intensities in the two modalities, NMI is very general and powerful: it can be applied automatically, without prior segmentation, to a large variety of data, and it also works better than mutual information (MI) for overlapped images. To maximize registration accuracy, the NMI is optimized using a genetic algorithm (GA), particle swarm optimization (PSO), and a hybrid GA-PSO. Random initialization and computational complexity make GA burdensome, whereas weak local search ability and premature convergence are the main drawbacks of PSO. The hybrid GA-PSO trades off local and global search to achieve a better balance between convergence speed and computational complexity. The registration algorithm is validated on several satellite data sets, and the hybrid GA-PSO performs best in terms of the optimized NMI value and the percentage of mis-registration error.
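    As an illustration of the similarity metric that the GA, PSO, and hybrid GA-PSO optimize, the sketch below (the bin count, the use of natural logarithms, and the function name are assumptions, not details from the paper) computes normalized mutual information from a joint grey-level histogram of the reference and floating images.

```python
import numpy as np

def normalized_mutual_information(ref, flt, bins=64):
    """NMI = (H(ref) + H(flt)) / H(ref, flt), estimated from a joint
    grey-level histogram of the two (already overlapping) images."""
    joint, _, _ = np.histogram2d(ref.ravel(), flt.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)            # marginal of the reference image
    py = pxy.sum(axis=0)            # marginal of the floating image
    # Entropies; zero-probability bins are masked out to avoid log(0).
    hx = -np.sum(px[px > 0] * np.log(px[px > 0]))
    hy = -np.sum(py[py > 0] * np.log(py[py > 0]))
    hxy = -np.sum(pxy[pxy > 0] * np.log(pxy[pxy > 0]))
    return (hx + hy) / hxy
```

    The optimizers then search the space of rigid/affine parameters, resampling the floating image at each candidate and taking this NMI value as the fitness to be maximized.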

    Methods of evaluating the effects of coding on SAR data

    It is recognized that mean square error (MSE) is not a sufficient criterion for determining the acceptability of an image reconstructed from data that has been compressed and decompressed using an encoding algorithm. In the case of Synthetic Aperture Radar (SAR) data, it is also deemed insufficient to display the reconstructed image (and perhaps the error image) alongside the original and make a subjective judgment as to the quality of the reconstructed data. In this paper, we suggest a number of additional evaluation criteria which we feel should be included as evaluation metrics in SAR data encoding experiments. These criteria have been specifically chosen to ensure that the important information in the SAR data is preserved. The paper also presents the results of an investigation into the effects of coding on SAR data fidelity when the coding is applied in (1) the signal data domain, and (2) the image domain. An analysis of the results highlights the shortcomings of the MSE criterion and shows which of the suggested additional criteria have been found to be most important.
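    The abstract does not enumerate the additional criteria; purely as an illustration of why a single MSE number can be insufficient for SAR, the sketch below (function names are assumptions) computes MSE together with a mean phase-error measure for complex-valued SAR signal data, one example of information that MSE computed on magnitude imagery does not capture.

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between original and reconstructed arrays."""
    diff = np.asarray(original, dtype=float) - np.asarray(reconstructed, dtype=float)
    return float(np.mean(diff ** 2))

def mean_phase_error(original, reconstructed):
    """Mean absolute phase difference (radians) between complex-valued SAR
    signal data before and after coding; angle of a*conj(b) wraps the
    difference into [-pi, pi]."""
    phase_diff = np.angle(np.asarray(original) * np.conj(np.asarray(reconstructed)))
    return float(np.mean(np.abs(phase_diff)))
```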

    Near ground level sensing for spatial analysis of vegetation

    Measured changes in vegetation indicate the dynamics of ecological processes and can identify the impacts of disturbances. Traditional methods of vegetation analysis tend to be slow because they are labor intensive; as a result, these methods are often confined to measurements over small local areas. Scientists need new algorithms and instruments that will allow them to efficiently study environmental dynamics across a range of spatial scales. A new methodology that addresses this problem is presented. It covers the acquisition, processing, and presentation of near-ground-level image data and its corresponding spatial characteristics. The systematic approach encompasses a feature extraction process, supervised and unsupervised classification processes, and a region labeling process that yields spatial information.
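    The abstract does not name the specific features or classifiers used; as a generic illustration of the feature-extraction and unsupervised-classification steps (the excess-green feature, k-means, and all names below are assumptions), the sketch segments a near-ground RGB image into vegetation and background.

```python
import numpy as np
from sklearn.cluster import KMeans

def vegetation_mask(rgb, n_clusters=2):
    """Rough unsupervised vegetation segmentation of a near-ground RGB image.
    Uses the excess-green index (2G - R - B) as a per-pixel feature and
    k-means as the unsupervised classifier; the cluster with the higher mean
    index is labeled vegetation."""
    rgb = rgb.astype(float)
    exg = 2 * rgb[..., 1] - rgb[..., 0] - rgb[..., 2]   # excess-green feature
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(exg.reshape(-1, 1))
    labels = labels.reshape(exg.shape)
    means = [exg[labels == k].mean() for k in range(n_clusters)]
    return labels == int(np.argmax(means))              # boolean vegetation mask
```

    A supervised classifier trained on labeled plots, plus connected-component labeling of the resulting mask, would supply the region-level spatial statistics the methodology reports.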

    Towards Automatic SAR-Optical Stereogrammetry over Urban Areas using Very High Resolution Imagery

    In this paper, we discuss the potential and challenges of SAR-optical stereogrammetry over urban areas using very-high-resolution (VHR) remote sensing imagery. Since we approach this mainly from a geometrical point of view, we first analyze the height reconstruction accuracy to be expected for different stereogrammetric configurations. We then propose a strategy for simultaneous tie-point matching and 3D reconstruction that exploits an epipolar-like search window constraint. To drive the matching and ensure some robustness, we combine several established handcrafted similarity measures. For the experiments, we use real test data acquired by the Worldview-2, TerraSAR-X and MEMPHIS sensors. Our results show that SAR-optical stereogrammetry using VHR imagery is generally feasible, with 3D positioning accuracies in the meter domain, although the matching of these strongly heterogeneous multi-sensor data remains very challenging. Keywords: Synthetic Aperture Radar (SAR), optical images, remote sensing, data fusion, stereogrammetry
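    The paper combines several established similarity measures within an epipolar-like search window; as a heavily simplified sketch of that idea (using plain zero-normalized cross-correlation only, with assumed function names and window handling), the following slides an optical template along a narrow search band in the SAR image and keeps the best-scoring column.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_along_band(opt_patch, sar_image, row, col_range):
    """Slide an odd-sized optical template along a search band of the SAR
    image at the given row -- a crude stand-in for an epipolar-like search
    window constraint -- and return the best-scoring column and its score."""
    half = opt_patch.shape[0] // 2
    best_col, best_score = None, -np.inf
    for c in range(col_range[0], col_range[1]):
        if row - half < 0 or c - half < 0:
            continue
        cand = sar_image[row - half:row + half + 1, c - half:c + half + 1]
        if cand.shape != opt_patch.shape:
            continue
        score = zncc(opt_patch.astype(float), cand.astype(float))
        if score > best_score:
            best_col, best_score = c, score
    return best_col, best_score
```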

    Improving the Geotagging Accuracy of Street-level Images

    Integrating images taken at street level with satellite imagery is becoming increasingly valuable in decision-making processes, not only for individuals but also in the business and governmental sectors. To perform this integration, images taken at street level need to be accurately georeferenced. This georeference information can be derived from a global positioning system (GPS); however, GPS data is prone to errors of up to 15 meters and needs to be corrected for the purpose of georeferencing. In this thesis, an automatic method based on image registration techniques is proposed for correcting the georeference information obtained from the GPS data. The proposed method uses an optimization technique to find local optimal solutions by matching high-level features and their relative locations; a global optimization method is then applied over all of the local solutions using a geometric constraint. The main contribution of this thesis is introducing a new direction for correcting the GPS data which is more economical and more consistent than the existing manual method. Besides its high cost (labor and management), the main concern with manual correction is the low degree of consistency between different human operators; our proposed automatic software-based method addresses these drawbacks. Other contributions are (1) a modified Chamfer matching (CM) cost function that improves the accuracy of standard CM for images with various misleading/disturbing edges; (2) a Monte-Carlo-inspired statistical analysis that makes it possible to quantify the overall performance of the proposed algorithm; (3) a novel similarity measure for applying the normalized cross correlation (NCC) technique on multi-level thresholded images, which compares multi-modal images more accurately than the standard application of NCC on raw images; and (4) casting the problem of selecting an optimal global solution among a set of local minima as finding an optimal path in a graph using Dijkstra's algorithm. We used our algorithm to correct the georeference information of 20 chains containing more than 7000 fisheye images, and our experimental results show that the proposed algorithm achieves an average error of 2 meters, which is acceptable for most applications.
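    Contribution (4) casts candidate selection as a shortest-path problem; as a minimal sketch under assumptions not stated in the abstract (each image in a chain contributes a list of candidate local solutions, and a non-negative pairwise consistency cost is defined between candidates of consecutive images), the following builds a layered graph and runs Dijkstra's algorithm to pick one candidate per image.

```python
import networkx as nx

def select_global_solution(candidates, pairwise_cost):
    """Choose one candidate local solution per image in a chain so that the
    total pairwise inconsistency is minimal.  Layer i of the graph holds the
    candidate solutions of image i; edges connect consecutive layers with a
    weight given by pairwise_cost, and Dijkstra finds the cheapest path from
    a virtual source to a virtual sink.

    candidates    -- list of lists; candidates[i] are the local minima of image i
    pairwise_cost -- callable(cand_a, cand_b) -> non-negative cost
    Returns the chosen candidate index for each image."""
    g = nx.DiGraph()
    n = len(candidates)
    for j in range(len(candidates[0])):
        g.add_edge("src", (0, j), weight=0.0)
    for i in range(n - 1):
        for j, a in enumerate(candidates[i]):
            for k, b in enumerate(candidates[i + 1]):
                g.add_edge((i, j), (i + 1, k), weight=pairwise_cost(a, b))
    for j in range(len(candidates[n - 1])):
        g.add_edge((n - 1, j), "dst", weight=0.0)
    path = nx.dijkstra_path(g, "src", "dst", weight="weight")
    return [node[1] for node in path[1:-1]]   # candidate index per image
```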