
    Image feature analysis using the Multiresolution Fourier Transform

    The problem of identifying boundary contours or line structures is widely recognised as an important component of many applications of image analysis and computer vision. Typical solutions employ some form of edge detection followed by line following or, more commonly in recent years, Hough transforms. Because of the processing requirements of such methods, and to improve the robustness of the algorithms, a number of authors have explored multiresolution approaches to the problem. Non-parametric, iterative approaches such as relaxation labelling and "snakes" have also been used. This thesis presents a boundary detection algorithm based on a multiresolution image representation, the Multiresolution Fourier Transform (MFT), which represents an image over a range of spatial/spatial-frequency resolutions. A quadtree-based image model is described in which each leaf is a region that can be modelled using one of a set of feature classes. Linear and circular-arc features are considered for this modelling, and frequency-domain models are developed for them. A general model-based decision process is presented and shown to be applicable to detecting local image features, selecting the most appropriate scale for modelling each region of the image, and linking the local features into the region boundary structures of the image. The use of a consistent inference process for all of the subtasks in the boundary detection represents a significant improvement over the ad hoc assemblies of estimation and detection that have been common in previous work. Although the process is applied using a restricted set of local features, the framework presented allows for an expanded set of boundary feature models and the possible inclusion of models of region properties. Results are presented demonstrating the effective application of these procedures to a number of synthetic and natural images.
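    To make the quadtree idea concrete, the following sketch recursively splits an image into leaf regions until each block is simple enough to be modelled by a single local feature. It is an illustrative stand-in only: it uses a plain variance test in NumPy rather than the thesis's frequency-domain (MFT) decision process, and the function name and thresholds are hypothetical.

```python
import numpy as np

def quadtree_split(img, x, y, size, var_thresh=50.0, min_size=8, leaves=None):
    """Recursively split an image block until each leaf is 'simple' enough
    to be modelled by a single local feature (e.g. a line or arc segment)."""
    if leaves is None:
        leaves = []
    block = img[y:y + size, x:x + size]
    # Variance stands in for a model-fit test; the thesis instead uses a
    # frequency-domain (MFT) model-based decision process.
    if size <= min_size or block.var() < var_thresh:
        leaves.append((x, y, size))
        return leaves
    half = size // 2
    for dx, dy in ((0, 0), (half, 0), (0, half), (half, half)):
        quadtree_split(img, x + dx, y + dy, half, var_thresh, min_size, leaves)
    return leaves

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.zeros((128, 128))
    img[:, 70:] = 255                      # a vertical step edge
    img += rng.normal(0, 2, img.shape)     # light noise
    leaves = quadtree_split(img, 0, 0, 128)
    print(f"{len(leaves)} leaf regions; smallest size {min(s for _, _, s in leaves)}")
```

    Regions far from the edge stay as large leaves, while blocks straddling the edge are subdivided until the local structure can be captured by a single boundary feature.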

    Resolution enhancement of thermal infrared images via high resolution class-map and statistical methods

    Remote sensing from long stand-off distances offers numerous advantages. As our ability to extract information from data has increased, so has the need for high spatial resolution. Such resolution is often not available because of technological or financial limitations on the detectors which scan the scene and produce the imagery. Therefore, for many years to come, spatial resolution enhancement using additional data from a variety of sources will remain popular and cost-effective. Work has been ongoing at the Rochester Institute of Technology's Center for Imaging Science on the spatial resolution enhancement of thermal infrared imagery. Background thermal imaging theory is presented and the most recent work by the Digital Imaging and Remote Sensing group is reviewed. A literature search of material published on the topic since 1985 is included, and numerous methods and techniques are presented. Based upon these concepts, several areas of study were carried out. All investigations were confined to cases that ensure radiometric fidelity across image processing operations, since the derivation of accurate temperature or emissivity maps requires this. Given a low spatial resolution thermal band, these methods produced a high resolution estimate of it based on enhancement using: (1) a single panchromatic band, (2) a high resolution class-map derived from multi-spectral bands, and (3) a statistically based combination of multi-spectral bands.
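    As a rough illustration of enhancement approach (1), the sketch below redistributes each low-resolution thermal pixel's value over its footprint according to the spatial detail of a co-registered panchromatic band, while forcing block means to reproduce the original values so that radiometric fidelity is preserved. The function name, the simple ratio weighting, and the synthetic data are assumptions for illustration, not the methods developed in the work described above.

```python
import numpy as np

def sharpen_with_pan(thermal_lr, pan_hr, scale):
    """Estimate a high-resolution thermal band by redistributing each
    low-resolution pixel's radiance according to the panchromatic detail,
    while forcing block means to match the original (radiometric fidelity)."""
    h_lr, w_lr = thermal_lr.shape
    hr = np.zeros((h_lr * scale, w_lr * scale))
    for i in range(h_lr):
        for j in range(w_lr):
            block = pan_hr[i*scale:(i+1)*scale, j*scale:(j+1)*scale].astype(float)
            weights = block / max(block.mean(), 1e-6)   # spatial pattern from pan band
            hr[i*scale:(i+1)*scale, j*scale:(j+1)*scale] = thermal_lr[i, j] * weights
    return hr

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    pan = rng.uniform(50, 200, (64, 64))
    thermal = rng.uniform(280, 300, (16, 16))           # stand-in radiance values
    est = sharpen_with_pan(thermal, pan, scale=4)
    # Block means of the estimate reproduce the original low-res values.
    check = est.reshape(16, 4, 16, 4).mean(axis=(1, 3))
    print(np.allclose(check, thermal))
```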

    Online Multi-Stage Deep Architectures for Feature Extraction and Object Recognition

    Multi-stage visual architectures have recently found success in achieving high classification accuracies over image datasets with large variations in pose, lighting, and scale. Inspired by techniques currently at the forefront of deep learning, such architectures are typically composed of one or more layers of preprocessing, feature encoding, and pooling to extract features from raw images. Training these components traditionally relies on large sets of patches extracted from a potentially large image dataset. In this context, high-dimensional feature space representations are often helpful for obtaining the best classification performance and providing a higher degree of invariance to object transformations. Large datasets with high-dimensional features complicate the implementation of visual architectures in memory-constrained environments. This dissertation constructs online learning replacements for the components within a multi-stage architecture and demonstrates that the proposed replacements (namely fuzzy competitive clustering, an incremental covariance estimator, and a multi-layer neural network) can offer performance competitive with their offline batch counterparts while providing a reduced memory footprint. The online nature of this solution allows for a method for adjusting parameters within the architecture via stochastic gradient descent. Testing over multiple datasets shows the potential benefits of this methodology when appropriate priors on the initial parameters are unknown. Alternatives to batch-based decompositions for the whitening preprocessing stage, which take advantage of natural image statistics and allow simple dictionary learners to work well in the problem domain, are also explored. Expansions of the architecture using additional pooling statistics and multiple layers are presented and indicate that larger codebook sizes are not the only path to higher classification accuracies. Experimental results from these expansions further indicate the important role of sparsity and appropriate encodings within multi-stage visual feature extraction architectures.
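    One of the named components, the incremental covariance estimator, can be pictured with a short Welford-style sketch: patch statistics are accumulated one sample at a time, and a ZCA-style whitening matrix is derived from the running estimate, so no batch of patches needs to be held in memory. The class name, the regularisation constant, and the eigendecomposition route are illustrative assumptions rather than the dissertation's exact formulation.

```python
import numpy as np

class IncrementalCovariance:
    """Streaming mean/covariance estimate (Welford-style) so whitening can be
    fitted without holding every patch in memory."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.cov_sum = np.zeros((dim, dim))   # sum of outer products of deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.cov_sum += np.outer(delta, x - self.mean)

    def whitening_matrix(self, eps=1e-2):
        cov = self.cov_sum / max(self.n - 1, 1)
        vals, vecs = np.linalg.eigh(cov)
        return vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T   # ZCA-style

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    est = IncrementalCovariance(dim=16)
    for _ in range(5000):
        est.update(rng.multivariate_normal(np.zeros(16), np.eye(16) * 3.0))
    W = est.whitening_matrix()
    sample = rng.multivariate_normal(np.zeros(16), np.eye(16) * 3.0, size=2000)
    whitened = (sample - est.mean) @ W.T
    print(np.round(np.cov(whitened, rowvar=False).diagonal().mean(), 2))  # ~1.0
```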

    Compaction of C-band synthetic aperture radar based sea ice information for navigation in the Baltic Sea

    In this work, operational sea ice synthetic aperture radar (SAR) data products were improved and developed. A SAR instrument transmits electromagnetic radiation at certain wavelengths and measures the radiation which is scattered back towards the instrument from the target, in our case sea and sea ice. The measured backscattering is converted into an image of the target area through complex signal processing. The images, however, differ from optical images, i.e. photographs, and their visual interpretation is not straightforward. The main idea in this work has been to deliver the essential SAR-based sea ice information to end-users (typically on ships) in a compact and user-friendly format. The operational systems at the Finnish Institute of Marine Research (FIMR) are currently based on data received from the Canadian SAR satellite Radarsat-1. The operational sea ice classification, developed by the author with colleagues, has been further developed. One typical problem with SAR data is that the backscattering varies depending on the incidence angle, the angle at which the transmitted electromagnetic wave meets the target surface; it varies within each SAR image and between different SAR images depending on the measuring geometry. To address this, an incidence angle correction algorithm that normalizes the backscattering over the SAR incidence angle range for Baltic Sea ice has been developed as part of this work. The algorithm is based on SAR backscattering statistics over the Baltic Sea. To locate different sea ice areas in SAR images, a SAR segmentation algorithm based on pulse-coupled neural networks has been developed and tested, with parameters tuned to the operational data in use at FIMR. The sea ice classification is based on this segmentation and is segment-wise rather than pixel-wise. To improve SAR-based discrimination between sea ice and open water, an open water detection algorithm based on segmentation and local autocorrelation has been developed. Ice type classification based on higher-order statistics and independent component analysis has also been studied to obtain an improved SAR-based ice type classification. A compression algorithm for compressing sea ice SAR data for visual use has been developed. This algorithm is based on wavelet decomposition, a zero-tree structure and arithmetic coding, and also exploits some properties of the human visual system. It was developed to produce smaller compressed SAR images with reasonable visual quality, so that transmission of the compressed images to ships with low-speed data connections is possible in reasonable time. One of the navigationally most important sea ice parameters is the ice thickness. SAR-based ice thickness estimation has been developed and evaluated as part of this work. This estimation method uses the ice thickness history derived from digitized ice charts, made daily at the Finnish Ice Service, as its input, and updates this chart based on the new SAR data. The result is an ice thickness chart representing the ice situation at the SAR acquisition time at higher resolution than the manually made ice thickness charts. For evaluation of the results, a helicopter-borne ice thickness measuring instrument based on electromagnetic induction and a laser altimeter was used.
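    The incidence angle correction can be pictured with a minimal sketch: backscatter in dB is projected to a common reference angle using a linear angle dependence whose slope would, as described above, be derived from Baltic Sea backscattering statistics. The slope value, reference angle, and function name below are purely illustrative assumptions, not the algorithm developed at FIMR.

```python
import numpy as np

def normalize_incidence_angle(sigma0_db, incidence_deg, ref_angle_deg=30.0,
                              slope_db_per_deg=0.2):
    """Project backscatter (in dB) to a common reference incidence angle using
    a linear angle dependence; the slope would come from backscattering
    statistics over the region of interest."""
    return sigma0_db + slope_db_per_deg * (incidence_deg - ref_angle_deg)

if __name__ == "__main__":
    # Simulate a swath where measured backscatter falls off with incidence angle.
    rng = np.random.default_rng(3)
    angles = np.linspace(20.0, 49.0, 1000)
    true_sigma0 = -12.0                                  # flat target response, dB
    measured = true_sigma0 - 0.2 * (angles - 30.0) + rng.normal(0, 0.3, angles.size)
    corrected = normalize_incidence_angle(measured, angles)
    # The corrected values cluster around the true level with reduced spread.
    print(round(corrected.mean(), 2), round(measured.std(), 2), round(corrected.std(), 2))
```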

    Scale free information retrieval: visually searching and navigating the web

    Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998. Includes bibliographical references (p. [91]-92). Author: Daniel Ethan Dreilinger.

    Automatic road network extraction from high resolution satellite imagery using spectral classification methods

    Road networks play an important role in a number of geospatial applications, such as cartographic, infrastructure planning and traffic routing software. Automatic and semi-automatic road network extraction techniques have significantly increased the extraction rate of road networks, but automated processes still yield some erroneous and incomplete results, and costly human intervention is still required to evaluate results and correct errors. With the aim of improving the accuracy of road extraction systems, three objectives are defined in this thesis. Firstly, the study seeks to develop a flexible semi-automated road extraction system capable of extracting roads from QuickBird satellite imagery. The second objective is to integrate a variety of algorithms within the road network extraction system, and the benefits of using each of these algorithms within the proposed system are illustrated. Finally, a fully automated system is proposed by incorporating a number of the algorithms investigated throughout the thesis. Dissertation (MSc)--University of Pretoria, 2010. Computer Science.
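    As a toy illustration of spectral classification on multi-band imagery, the sketch below assigns each pixel's spectral vector to the nearest class mean. The band values, class means, and function name are hypothetical and far simpler than the classification methods investigated in the thesis.

```python
import numpy as np

def nearest_mean_classify(pixels, class_means):
    """Assign each pixel's spectral vector to the nearest class mean
    (a minimal stand-in for a spectral classifier)."""
    dists = np.linalg.norm(pixels[:, None, :] - class_means[None, :, :], axis=2)
    return dists.argmin(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    # Hypothetical 4-band (QuickBird-like) training means: road vs. vegetation.
    means = np.array([[420.0, 400.0, 380.0, 350.0],    # asphalt-like spectrum
                      [200.0, 300.0, 250.0, 700.0]])   # vegetation-like spectrum
    scene = np.vstack([rng.normal(means[0], 20, (500, 4)),
                       rng.normal(means[1], 20, (500, 4))])
    labels = nearest_mean_classify(scene, means)
    print("classified as road:", int((labels == 0).sum()), "of", scene.shape[0])
```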

    Object Tracking in Distributed Video Networks Using Multi-Dimensional Signatures

    From being an expensive toy in the hands of governmental agencies, computers have come a long way from the huge vacuum tube-based machines to today's small but more than a thousand times more powerful personal computers. Computers have long been investigated as the foundation for an artificial vision system. The computer vision discipline has seen rapid development over the past few decades, from rudimentary motion detection systems to complex model-based object motion analysis algorithms. Our work is one such improvement over previous algorithms developed for the purpose of object motion analysis in video feeds. It is based on the principle of multi-dimensional object signatures. Object signatures are constructed from individual attributes extracted through video processing. While past work has proceeded along similar lines, the lack of a comprehensive object definition model severely restricts the application of such algorithms to controlled situations. In conditions with varying external factors, such algorithms perform less efficiently because of inherent assumptions that attribute values remain constant. Our approach assumes a variable environment in which the attribute values recorded for an object are prone to variability. The variation in the accuracy of object attribute values is addressed by incorporating weights for each attribute that vary according to local conditions at a sensor location. This ensures that attribute values with higher accuracy can be accorded more credibility in the object matching process. Variations in attribute values (such as the surface color of the object) were also addressed by applying error corrections such as shadow elimination to the detected object profile. Experiments were conducted to verify our hypothesis. The results established the validity of our approach, as higher matching accuracy was obtained with our multi-dimensional approach than with a single-attribute comparison.
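    A minimal sketch of weighted signature matching is given below: each attribute's contribution to the distance between two object signatures is scaled by a per-sensor reliability weight, so less trustworthy measurements (e.g. color under poor lighting) count for less. The attribute set, weights, threshold, and function names are illustrative assumptions, not the system's actual signature definition.

```python
import numpy as np

def signature_distance(sig_a, sig_b, weights):
    """Weighted distance between two object signatures; attributes measured
    less reliably at a given camera get lower weight."""
    w = np.asarray(weights, dtype=float)
    diff = np.asarray(sig_a, dtype=float) - np.asarray(sig_b, dtype=float)
    return float(np.sqrt(np.sum(w * diff ** 2) / w.sum()))

def match_object(query_sig, candidate_sigs, weights, threshold=0.5):
    """Return the index of the best-matching candidate, or None if no
    candidate is close enough."""
    dists = [signature_distance(query_sig, c, weights) for c in candidate_sigs]
    best = int(np.argmin(dists))
    return best if dists[best] < threshold else None

if __name__ == "__main__":
    # Hypothetical attributes: mean hue, aspect ratio, speed (all normalised).
    query = [0.62, 0.40, 0.75]
    candidates = [[0.60, 0.42, 0.70],     # same object seen at another camera
                  [0.15, 0.55, 0.30]]
    weights = [0.5, 1.0, 1.0]             # hue down-weighted (poor local lighting)
    print(match_object(query, candidates, weights))   # -> 0
```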