
    Extraction and interaction analysis of foreground objects in panning video

    In this work, moving foreground objects are extracted from video. The video camera can be either stationary or panning. Motion information extracted from consecutive frames is used for object segmentation, and an appearance model of the moving object is created. No assumptions are made about the object model, so segmented objects may vary in shape and size. Finally, relationships between objects in the image are evaluated toward the goal of higher-level understanding of videos.
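    As a minimal illustration of the motion cue described above, consecutive frames can be differenced and thresholded to obtain a coarse foreground mask. This is only a sketch of the general idea, not the paper's segmentation method; the threshold value here is an assumption.

```python
import numpy as np

def motion_mask(prev_frame, frame, threshold=25):
    """Binary foreground mask from absolute frame differencing.

    A minimal stand-in for the motion information extracted from
    consecutive frames; the threshold is an assumed, illustrative value.
    """
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    return diff > threshold
```

    A real pipeline would follow this with morphological cleanup and connected-component extraction before building an appearance model for each object.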

    High Resolution Mapping of Soils and Landforms for the Desert Renewable Energy Conservation Plan (DRECP)

    The Desert Renewable Energy Conservation Plan (DRECP), a major component of California's renewable energy planning efforts, is intended to provide effective protection and conservation of desert ecosystems, while allowing for the sensible development of renewable energy projects. This NASA mapping report was developed to support the DRECP and the Bureau of Land Management (BLM). In this document we outline remote sensing image processing methods to deliver new maps of biological soil crusts, sand dune movements, desert pavements, and sub-surface water sources across the DRECP area. We focused data processing first on the largely unmapped areas most likely to be used for energy developments, such as those within Renewable Energy Study Areas (RESA) and Solar Energy Zones (SEZs). We used imagery (multispectral and radar) mainly from the years 2009-2011.

    Single image super resolution technique: An extension to true color images

    The super-resolution (SR) technique reconstructs a high-resolution image from one or more low-resolution images. SR has gained much attention over the past decade, as it has significant applications in daily life. This paper presents a new single-image super-resolution technique for true-color images. The key idea is to obtain the super-resolved image from observed low-resolution images. The proposed technique combines wavelet-domain and spatial-domain algorithms, exploiting the advantages of both. Back projection with an iterative method is implemented to minimize the reconstruction error, and a wavelet-based de-noising method is used for noise removal. Previously, this technique had been applied to grayscale images; the proposed algorithm extends it to colored images. The results of the proposed method were examined both subjectively, by visual observation, and objectively, using the peak signal-to-noise ratio (PSNR) and mean squared error (MSE); the results are significant and visually better in quality than those of the bi-cubic interpolation technique.
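    The two objective metrics named above, MSE and PSNR, are standard and can be computed as follows (a sketch assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def mse(reference, estimate):
    """Mean squared error between two images of the same shape."""
    diff = reference.astype(np.float64) - estimate.astype(np.float64)
    return float(np.mean(diff ** 2))

def psnr(reference, estimate, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    err = mse(reference, estimate)
    if err == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_value ** 2 / err)
```

    PSNR is a monotone transform of MSE, so the two metrics always rank reconstructions identically; papers typically report both because PSNR's dB scale is easier to compare across studies.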

    Person re-Identification over distributed spaces and time

    Replicating the human visual system and the cognitive abilities the brain uses to process the information it receives is an area of substantial scientific interest. With the prevalence of video surveillance cameras, a portion of this scientific drive has gone into providing useful automated counterparts to human operators. A prominent task in visual surveillance is matching people between disjoint camera views, or re-identification. This allows operators to locate people of interest and to track people across cameras, and can be used as a precursory step to multi-camera activity analysis. However, due to the contrasting conditions between camera views and their effects on the appearance of people, re-identification is a non-trivial task. This thesis proposes solutions for reducing the visual ambiguity in observations of people between camera views. The thesis first looks at a method for mitigating the effects of differing lighting conditions between camera views on the appearance of people. It builds on work modelling inter-camera illumination based on known pairs of images. A Cumulative Brightness Transfer Function (CBTF) is proposed to estimate the mapping of colour brightness values from limited training samples. Unlike previous methods that use a mean-based representation of a set of training samples, the cumulative nature of the CBTF retains colour information from underrepresented samples in the training set. Additionally, the bi-directionality of the mapping function is explored to maximise re-identification accuracy by ensuring samples are accurately mapped between cameras. Secondly, an extension to the CBTF framework is proposed that addresses the issue of changing lighting conditions within a single camera. As the CBTF requires manually labelled training samples, it is limited to static lighting conditions and is less effective if the lighting changes.
    This Adaptive CBTF (A-CBTF) differs from previous approaches that either do not consider lighting change over time or rely on camera transition time information to update. By utilising contextual information drawn from the background in each camera view, an estimate of the lighting change within a single camera can be made. This background lighting model allows colour information to be mapped back to the original training conditions, removing the need for retraining. Thirdly, a novel reformulation of re-identification as a ranking problem is proposed. Previous methods use a score based on a direct distance measure of set features to form a correct/incorrect match result. Rather than offering an operator a single outcome, the ranking paradigm gives the operator a ranked list of possible matches and allows them to make the final decision. By utilising a Support Vector Machine (SVM) ranking method, a weighting on the appearance features can be learned that capitalises on the fact that not all image features are equally important to re-identification. Additionally, an Ensemble-RankSVM is proposed to address scalability issues by separating the training samples into smaller subsets and boosting the trained models. Finally, the thesis looks at a practical application of the ranking paradigm in a real-world setting. The system encompasses both the re-identification stage and the precursory extraction and tracking stages to form an aid for CCTV operators. Segmentation and detection are combined to extract relevant information from the video, while several matching techniques are combined with temporal priors to form a more comprehensive overall matching criterion. The effectiveness of the proposed approaches is tested on datasets obtained from a variety of challenging environments including offices, apartment buildings, airports and outdoor public spaces.
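    The cumulative idea behind the CBTF can be sketched as follows: brightness histograms are accumulated over all training samples before the cumulative distributions are matched, so underrepresented samples still contribute to the mapping. This is a simplified single-channel sketch under that reading, not the thesis's exact formulation.

```python
import numpy as np

def cbtf(samples_a, samples_b, levels=256):
    """Estimate a brightness transfer function from camera A to camera B.

    samples_a, samples_b: lists of 1-D integer arrays of brightness
    values in 0..levels-1, from corresponding observations in each
    camera. Histograms are summed over ALL samples first, then the
    cumulative distributions are matched, so no per-sample averaging
    discards minority colour information.
    """
    hist_a = np.zeros(levels)
    hist_b = np.zeros(levels)
    for s in samples_a:
        hist_a += np.bincount(np.asarray(s, dtype=int), minlength=levels)
    for s in samples_b:
        hist_b += np.bincount(np.asarray(s, dtype=int), minlength=levels)
    cum_a = np.cumsum(hist_a) / hist_a.sum()
    cum_b = np.cumsum(hist_b) / hist_b.sum()
    # For each level in A, find the first level in B whose cumulative
    # frequency reaches the same fraction (histogram matching).
    return np.searchsorted(cum_b, cum_a).clip(0, levels - 1)
```

    Applying the returned lookup table to an image from camera A maps its brightness distribution toward camera B's, which is the normalisation step before appearance matching.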

    Evaluation of the impacts of Hurricane Hugo on the land cover of Francis Marion National Forest, South Carolina using remote sensing

    Hurricane Hugo struck the South Carolina coast on the night of September 21, 1989 at Sullivan’s Island, making landfall as a Category 4 hurricane on the Saffir-Simpson scale (Hook et al. 1991). It is probably among the most studied and documented hurricanes in the United States (USDA Southern Research Station Publication 1996). A Landsat TM based Hugo damage assessment study was conducted by Cablk et al. (1994) in the Hobcaw Barony forest. This study assessed damage for a different, smaller study area near the Wambaw and Coffee Creek swamps. The main objective was to compare the results of the traditional post-classification method and the triangular prism fractal method (TPSA hereafter, a spatial method) for change detection using Landsat TM data of the Francis Marion National Forest (FMNF hereafter) before and after Hurricane Hugo’s landfall (in 1987 and 1989). Additional methods considered for comparison were principal component analysis (PCA hereafter) and the tasseled cap transform (TCT hereafter). Classification accuracy was estimated at 81.44% and 85.71% for the hurricane images with four classes: water, woody wetland, forest, and a combined cultivated row crops/transitional barren class. Post-classification was successful in identifying the Wambaw swamp, Coffee Creek swamp, and the Little Wambaw wilderness as having a gain in homogeneity. It and the local fractal method were the only methods that gave the percentage of changed land cover area. Visual comparison of the PCA and TCT images shows the dominant land cover changes in the study area, with the TCT in general better able to identify the features in all three of its transformed bands. The post-classification method, PCA, and the TCT brightness and greenness bands did not report an increase in heterogeneity, but were successful in reporting gains in homogeneity.
    The local fractal TPSA method with a 17x17 moving window and five arithmetic steps was found to give the best visual representation of the textural patterns in the study area. The local fractal TPSA method was successful in identifying the land cover areas with the largest increase in heterogeneity (a positive change in fractal dimension difference values) and the largest increase in homogeneity (a negative change in fractal dimension difference values). The woody wetland class was found to have the biggest increase in homogeneity and the forest class the biggest increase in heterogeneity, in addition to the three swamp areas being identified as having an overall increased homogeneity.
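    The triangular prism method estimates a local fractal dimension by measuring the "surface area" of the image intensity surface at several step sizes and fitting the slope of log(area) against log(step): rougher (more heterogeneous) windows lose less area at coarse steps. The sketch below illustrates that idea; the window size and steps are illustrative, not the study's exact 17x17/five-step parameters.

```python
import numpy as np

def triangle_area(p, q, r):
    """Area of the 3-D triangle with vertices p, q, r."""
    return 0.5 * np.linalg.norm(np.cross(q - p, r - p))

def tpsa_dimension(window, steps=(1, 2, 4, 8)):
    """Local fractal dimension of a square intensity window via the
    triangular prism surface area (TPSA) method: at each step size,
    each cell's four corner heights and their mean (the prism apex)
    define four triangles whose areas are summed; the dimension is
    D = 2 - slope of log(area) vs log(step)."""
    n = window.shape[0]
    areas, used = [], []
    for s in steps:
        if s >= n:
            continue
        total = 0.0
        for i in range(0, n - s, s):
            for j in range(0, n - s, s):
                z1, z2 = window[i, j], window[i, j + s]
                z3, z4 = window[i + s, j], window[i + s, j + s]
                zc = (z1 + z2 + z3 + z4) / 4.0
                a = np.array([i, j, z1], dtype=float)
                b = np.array([i, j + s, z2], dtype=float)
                c = np.array([i + s, j, z3], dtype=float)
                d = np.array([i + s, j + s, z4], dtype=float)
                e = np.array([i + s / 2.0, j + s / 2.0, zc])
                total += (triangle_area(a, b, e) + triangle_area(b, d, e)
                          + triangle_area(d, c, e) + triangle_area(c, a, e))
        areas.append(total)
        used.append(s)
    slope = np.polyfit(np.log(used), np.log(areas), 1)[0]
    return 2.0 - slope
```

    For a perfectly flat window the measured area is the same at every step, the slope is zero, and D is exactly 2; increasing roughness pushes D toward 3, which is what makes the fractal dimension difference a usable change-detection signal.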

    Procedures for Correcting Digital Camera Imagery Acquired by the AggieAir Remote Sensing Platform

    Developments in sensor technologies have made consumer-grade digital cameras one of the more recent tools in remote sensing applications. Consumer-grade digital cameras have been an imaging sensor of choice for researchers due to their small size, light weight, limited power requirements, and their capacity to store hundreds of images (Hardin 2011). Several studies have focused on the use of digital cameras and their efficacy in remote sensing applications. For satellite and airborne multispectral imaging systems there is a well-established radiometric processing approach; however, radiometric processing procedures for digital cameras are still being researched. The goal of this report is to describe an absolute method of radiometric normalization that converts the digital numbers output by the camera to reflectance values that can be used for remote sensing applications. This process is used at the AggieAir Flying Circus (AAFC), a service center at the Utah Water Research Laboratory at Utah State University. The AAFC is a research unit that specializes in the acquisition, processing, and interpretation of aerial imagery obtained with the AggieAir™ platform. AggieAir is an autonomous, unmanned aerial vehicle system that captures multi-temporal and multispectral high-resolution imagery for the production of orthorectified mosaics. The procedure used by the AAFC is based on methods adapted from Miura and Huete (2009), Crowther (1992), and Neale and Crowther (1994) for imagery acquired with Canon PowerShot SX100 cameras. Absolute normalization requires ground measurements at the time the imagery is acquired; in this study, a barium sulfate reflectance panel with known absolute reflectance is used. The procedure was demonstrated using imagery captured over a wetland near Pleasant Grove, Utah, that is managed by the Utah Department of Transportation.
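    The absolute normalization step can be sketched as a one-point calibration against the reference panel: a gain relating digital numbers to reflectance is estimated from the panel reading and applied to the whole image. This is a simplified sketch assuming a zero dark offset; the actual AAFC procedure adapted from Miura and Huete (2009) and Neale and Crowther (1994) includes additional corrections.

```python
import numpy as np

def dn_to_reflectance(dn, panel_dn, panel_reflectance):
    """Convert raw digital numbers to surface reflectance using a single
    reference panel measured at flight time.

    One-point calibration assuming a zero dark offset (an assumption of
    this sketch): gain = panel reflectance / panel digital number.
    Output is clipped to the physical range [0, 1].
    """
    gain = panel_reflectance / float(panel_dn)
    return np.clip(np.asarray(dn, dtype=float) * gain, 0.0, 1.0)
```

    In practice the panel is imaged in the same frame or flight as the target, per band, so the gain absorbs both illumination and camera response at acquisition time.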

    Earth Observation: Data, Processing and Applications. Volume 2C: Processing - Image Transformations

    [Edited by] Harrison, B.A., Jupp, D.L.B., Lewis, M.M., Sparks, T., Mueller, N., Byrne,