
    NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation

    EXECUTIVE SUMMARY: The Coastal Change Analysis Program (C-CAP) is developing a nationally standardized database on land-cover and habitat change in the coastal regions of the United States. C-CAP is part of the Estuarine Habitat Program (EHP) of NOAA's Coastal Ocean Program (COP). C-CAP inventories coastal submersed habitats, wetland habitats, and adjacent uplands and monitors changes in these habitats on a one- to five-year cycle. This type of information and frequency of detection are required to improve scientific understanding of the linkages of coastal and submersed wetland habitats with adjacent uplands and with the distribution, abundance, and health of living marine resources. The monitoring cycle will vary according to the rate and magnitude of change in each geographic region. Satellite imagery (primarily Landsat Thematic Mapper), aerial photography, and field data are interpreted, classified, analyzed, and integrated with other digital data in a geographic information system (GIS). The resulting land-cover change databases are disseminated in digital form for use by anyone wishing to conduct geographic analysis in the completed regions. C-CAP spatial information on coastal change will be input to EHP conceptual and predictive models to support coastal resource policy planning and analysis. C-CAP products will include 1) spatially registered digital databases and images, 2) tabular summaries by state, county, and hydrologic unit, and 3) documentation. Aggregations to larger areas (representing habitats, wildlife refuges, or management districts) will be provided on a case-by-case basis. Ongoing C-CAP research will continue to explore techniques for remote determination of biomass, productivity, and functional status of wetlands and will evaluate new technologies (e.g., remote sensor systems, global positioning systems, image processing algorithms) as they become available.
Selected hardcopy land-cover change maps will be produced at local (1:24,000) to regional scales (1:500,000) for distribution. Digital land-cover change data will be provided to users for the cost of reproduction. Much of the guidance contained in this document was developed through a series of professional workshops and interagency meetings that focused on a) coastal wetlands and uplands; b) coastal submersed habitat including aquatic beds; c) user needs; d) regional issues; e) classification schemes; f) change detection techniques; and g) data quality. Invited participants included technical and regional experts and representatives of key State and Federal organizations. Coastal habitat managers and researchers were given an opportunity for review and comment. This document summarizes C-CAP protocols and procedures that are to be used by scientists throughout the United States to develop consistent and reliable coastal change information for input to the C-CAP nationwide database. It also provides useful guidelines for contributors working on related projects. It is considered a working document subject to periodic review and revision. (PDF file contains 104 pages.)
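The tabular change summaries described above amount to cross-tabulating per-pixel land-cover transitions between two dates. C-CAP's actual database schema is not given in this summary; the following NumPy sketch merely illustrates the idea, with hypothetical class codes.

```python
import numpy as np

def change_matrix(before, after, n_classes):
    """Cross-tabulate per-pixel land-cover transitions between two dates.

    before, after: integer class rasters of equal shape (codes 0..n_classes-1).
    Returns an (n_classes x n_classes) matrix where entry [i, j] counts
    pixels that changed from class i to class j; the diagonal is unchanged.
    """
    before = np.asarray(before).ravel()
    after = np.asarray(after).ravel()
    # Encode each (from, to) pair as a single index, then count occurrences.
    pairs = before * n_classes + after
    counts = np.bincount(pairs, minlength=n_classes * n_classes)
    return counts.reshape(n_classes, n_classes)

# Hypothetical class codes: 0 = upland, 1 = wetland, 2 = submersed habitat
t1 = np.array([[0, 0, 1], [1, 2, 2]])
t2 = np.array([[0, 1, 1], [1, 2, 0]])
m = change_matrix(t1, t2, 3)
```

Summing rows or columns of such a matrix gives the per-class area totals that a state- or county-level tabular summary would report.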

    Earth observations division Earth resources data analysis capabilities

    There are no author-identified significant results in this report

    Experimental study of digital image processing techniques for LANDSAT data

    The author has identified the following significant results. Results are reported for: (1) subscene registration, (2) full scene rectification and registration, (3) resampling techniques, and (4) ground control point (GCP) extraction. Subscenes (354 pixels x 234 lines) were registered to approximately 1/4 pixel accuracy and evaluated by change detection imagery for three cases: (1) bulk data registration, (2) precision correction of a reference subscene using GCP data, and (3) independently precision processed subscenes. Full scene rectification and registration results were evaluated by using a correlation technique to measure registration errors of 0.3 pixel rms throughout the full scene. Resampling evaluations of nearest neighbor and TRW cubic convolution processed data included change detection imagery and feature classification. Resampled data were also evaluated for an MSS scene containing specular solar reflections.
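The report compares nearest-neighbor and TRW cubic convolution resampling; the TRW algorithm itself is not given in this abstract, but the simpler nearest-neighbor approach can be sketched as follows. The image and the half-pixel shift below are hypothetical, chosen only to illustrate the mechanism.

```python
import numpy as np

def nearest_neighbor_resample(image, transform):
    """Resample an image under an inverse coordinate transform.

    For each output pixel, `transform` maps (row, col) in the output grid
    back to fractional coordinates in the input grid; the nearest input
    pixel value is copied. No new pixel values are invented, which is one
    reason nearest neighbor is often preferred before classification.
    """
    rows, cols = image.shape
    out = np.zeros_like(image)
    for r in range(rows):
        for c in range(cols):
            src_r, src_c = transform(r, c)
            i = min(max(int(round(src_r)), 0), rows - 1)  # clamp to image bounds
            j = min(max(int(round(src_c)), 0), cols - 1)
            out[r, c] = image[i, j]
    return out

# Hypothetical example: a 0.6-pixel shift along both axes.
img = np.arange(16, dtype=float).reshape(4, 4)
shifted = nearest_neighbor_resample(img, lambda r, c: (r - 0.6, c - 0.6))
```

Cubic convolution interpolates over a 4x4 neighborhood instead, trading the blocky appearance of nearest neighbor for smoother (but radiometrically altered) output.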

    Special effects and digital photography

    "If it walks like a duck, swims like a duck, and quacks like a duck, it may well be a chicken." (Paul Fuqua) The human eye can be described as a camera that takes about ten pictures every second, telegraphing to the brain the information that each picture contains. It cannot work faster, for the retina needs appreciable time to receive and transmit each impression as well as get ready for the next one. Since the invention of photography, people have used it as a tool to do what the human eye cannot: speeding up time or slowing it down to learn how things actually behave, and making visible things that are too distant, too small or too faint for the human eye. As photography developed it became invaluable to science and technology. The camera brings into being the most striking and useful views of the world, even when it deliberately lies. It can alter what the eye would normally see into what the eye would like to see, making subtle shifts of perspective and radical distortions of form. In the early history of photography, photographs were taken only of familiar objects, things the human eye can see: faces, landscapes and buildings were the most familiar images. Photographers started experimenting, and with the development of better equipment (such as faster emulsions, bigger lenses and flash equipment) they soon realised that they possessed a powerful instrument that could perceive and record things that the eye cannot see. For as long as people have contemplated the world, they have been fascinated by the seemingly impossible and, thereby, unexplainable... (Sage 1996: 4)

    Image-Processing Techniques for the Creation of Presentation-Quality Astronomical Images

    The quality of modern astronomical data, the power of modern computers and the agility of current image-processing software enable the creation of high-quality images in a purely digital form. The combination of these technological advancements has created a new ability to make color astronomical images and, in many ways, has led to a new philosophy towards how to create them. A practical guide is presented on how to generate astronomical images from research data with powerful image-processing programs. These programs use a layering metaphor that allows for an unlimited number of astronomical datasets to be combined in any desired color scheme, creating an immense parameter space to be explored using an iterative approach. Several examples of image creation are presented. A philosophy is also presented on how to use color and composition to create images that simultaneously highlight scientific detail and are aesthetically appealing. This philosophy is necessary because most datasets do not correspond to the wavelength range of sensitivity of the human eye. The use of visual grammar, defined as the elements which affect the interpretation of an image, can maximize the richness and detail in an image while maintaining scientific accuracy. By properly using visual grammar, one can imply qualities that a two-dimensional image intrinsically cannot show, such as depth, motion and energy. In addition, composition can be used to engage viewers and keep them interested for a longer period of time. The use of these techniques can result in a striking image that will effectively convey the science within the image, to scientists and to the public. Comment: 104 pages, 38 figures, submitted to A
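The layering metaphor described above can be illustrated with a minimal additive sketch: each grayscale dataset is assigned a hue and the colored layers are summed. The datasets and color assignments below are hypothetical; real workflows use interactive image-processing programs with far richer blending and stretching controls.

```python
import numpy as np

def compose_color(layers, colors):
    """Combine grayscale datasets into one color image, layer-style.

    layers: list of 2-D arrays, each normalized to 0..1.
    colors: matching list of (r, g, b) tuples assigning each dataset a hue.
    Each layer contributes its intensity in its assigned color; the sum is
    clipped to the displayable range, mimicking additive layering.
    """
    h, w = layers[0].shape
    out = np.zeros((h, w, 3))
    for layer, (r, g, b) in zip(layers, colors):
        out += layer[..., None] * np.array([r, g, b], dtype=float)
    return np.clip(out, 0.0, 1.0)

# Hypothetical: two narrowband datasets mapped to red and cyan.
a = np.array([[0.0, 1.0], [0.5, 0.2]])
b = np.array([[1.0, 0.0], [0.5, 0.9]])
rgb = compose_color([a, b], [(1, 0, 0), (0, 1, 1)])
```

Because the number of layers is unlimited, datasets outside the eye's sensitivity range can each be given a distinct hue, which is exactly the parameter space the iterative approach explores.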

    A database system to support image algorithm evaluation

    The design is given of an interactive image database system IMDB, which allows the user to create, retrieve, store, display, and manipulate images through the facility of a high-level, interactive image query (IQ) language. The query language IQ permits the user to define false color functions, pixel value transformations, overlay functions, zoom functions, and windows. The user manipulates the images through generic functions. The user can direct images to display devices for visual and qualitative analysis. Image histograms and pixel value distributions can also be computed to obtain a quantitative analysis of images
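The IQ language itself is not reproduced in this abstract, so the sketch below illustrates just two of the generic operations it describes, a pixel-value transformation and a pixel-value distribution, in plain NumPy; all names and values are hypothetical.

```python
import numpy as np

def transform_pixels(image, fn):
    """Apply a pixel-value transformation (an IQ-style generic function)
    independently to every pixel."""
    return np.vectorize(fn)(image)

def pixel_distribution(image, n_bins):
    """Pixel-value histogram for quantitative analysis of an image."""
    counts, edges = np.histogram(image, bins=n_bins)
    return counts, edges

# Hypothetical 2x2 image; invert its gray levels, then histogram it.
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)
inverted = transform_pixels(img, lambda v: 255 - v)
counts, _ = pixel_distribution(img, n_bins=4)
```

A false-color function fits the same pattern: it is simply a transformation that maps each gray level to an (r, g, b) triple before the image is sent to a display device.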

    A reproducible notebook to acquire, process and analyse satellite imagery

    Satellite imagery is often used to study and monitor Earth surface changes. The open availability and extensive temporal coverage of Landsat imagery have enabled changes in temperature, wind, vegetation and ice-melting speed to be tracked over a period of up to 46 years. Yet satellite imagery remains underutilised for the study of cities, partly due to the lack of a methodological approach to capture features and changes in the urban environment. This notebook offers a framework based on Python tools to demonstrate how to batch-download high-resolution satellite imagery, and to enable the extraction, analysis and visualisation of features of the built environment to capture long-term urban changes.

    Utility of Spatial Filtering Techniques in the Remote Sensing of Soil Erosion in the Sefid-Rud Reservoir Catchment in Iran

    The objective of this study is to investigate the applicability of Landsat Thematic Mapper digital images assisted by computer analysis to the study of soil erosion. The study aims to identify the sources of sediment and areas of dissected land in the catchment basin of the Sefid Rud reservoir in northern Iran. First, histogram equalization is deliberately applied to the original band 3 to reduce the noise and unwanted edges and lines in the dark tail of the histogram, mainly vegetation, and the light tail, the non-eroded areas, and also to improve the visual appearance of edges and lines on the processed image. The next step is high pass filtering, unlike the conventional edge detection technique in which the first step is low pass filtering. In this instance, the result of low pass filtering was that faint edges, evidence of the gullies, were removed and highly eroded areas appeared as non-eroded areas. Therefore low pass filtering was replaced with high pass filtering, which highlighted faint edges and lines. The next step is detecting the edges and lines. When using the edge and line detecting technique for detecting dissected lands one needs to take into account that a gully might appear as two or three edges if its width is more than one pixel, or as one line if it is one pixel or less in width on the Thematic Mapper image. Therefore an algorithm should be chosen which has the ability to detect both edges and lines. The existing edge and line detecting filters, such as the Sobel, the Roberts, the compass and the Laplacian convolution masks and the directional line detecting technique, were evaluated. The Sobel and the Roberts operators were found to be powerful edge detecting techniques, but the Laplacian convolution mask was found to be the best for detecting the badland and gullied areas because it has the ability to detect faint edges as well as coarse edges.
Not only does it detect both edges and lines, but it also gives stronger weight to the lines than the edges. Only edges and lines in gullied areas were of interest for detecting the dissected lands, but all other artificial and natural lines and edges were also detected. The result of applying the Laplacian function appears on the screen as black, white and gray pixels. The black pixels are non-eroded land, white pixels are eroded and gray pixels are transitional between eroded and non-eroded. To change the transitional pixels to either eroded or non-eroded, and also for printing the image as hardcopy, the thresholding function of IAX was applied to the edge-detected image. In order to mask out the noise within the vegetated areas caused by edges of plots of different crops, the vegetation index (VI) was added to the detected image. In the derived image black pixels are evidence of gullies and white pixels are non-dissected lands. In this image it is possible to find the relative proportion of dissected and non-dissected land globally and/or within regions of interest. Although it is possible to measure the proportions of dissected and non-dissected land, and they are also visually distinguishable, they have not been categorised so far. To provide a map with categories of dissection, the first step is to smooth the image. To obtain the smooth image a low pass filter was used. Two ways were tested for producing the map of dissected lands from the smoothed image. In the first method one of the strongest edge detecting techniques, the Sobel operator, was used on the smoothed image of dissected lands. In the result, boundaries were detected and eroded and non-eroded areas outlined. In the second method, for categorising the smoothed image, the density slicing function of IAX was used to split the dissected land into different levels of severity. We concluded that the second method gives a better result.
It was found in previous work that among erosion features gullies are recognizable on Thematic Mapper data. Detection of gullies and gullied areas by means of classification, whether supervised or unsupervised, was not successful in this study area. We came to the conclusion that the application of a Laplacian mask on the enhanced band 3 image could detect dissected lands. When aerial photographs and Thematic Mapper data are compared, the advantage of aerial photographs was that gullies actively cutting headwards were detectable, but on the Thematic Mapper data distinguishing between active and non-active gullies was impossible. Aerial photographs are a very good tool to detect all kinds of erosion features (sheet, rill, and gully), but in my study area applying this new method (DLDT) on Thematic Mapper data can provide as much detail of soil erosion as is included in previous soil erosion maps made from aerial photographs. The Sobel and the Roberts operators were found to be very strong edge detectors, but the ability of the Laplacian convolution mask for detecting gullies was greater. (Abstract shortened by ProQuest.)
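The core of the detection method described above, Laplacian (high pass) filtering followed by thresholding, can be sketched in plain NumPy. This is an illustration of the technique, not the study's IAX implementation; the band values and threshold are hypothetical.

```python
import numpy as np

def convolve2d(image, kernel):
    """Minimal 2-D convolution with zero padding (no SciPy dependency).

    The Laplacian kernel used below is symmetric, so convolution and
    correlation coincide.
    """
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image.astype(float), ((ph, ph), (pw, pw)))
    out = np.zeros(image.shape, dtype=float)
    for r in range(image.shape[0]):
        for c in range(image.shape[1]):
            out[r, c] = np.sum(padded[r:r + kh, c:c + kw] * kernel)
    return out

# A common 3x3 Laplacian mask: it responds to edges and, more strongly,
# to one-pixel-wide lines, which is why it suits gullies narrower than
# a Thematic Mapper pixel.
LAPLACIAN = np.array([[0, -1, 0],
                      [-1, 4, -1],
                      [0, -1, 0]])

def detect_dissection(band, threshold):
    """High-pass (Laplacian) filtering followed by thresholding."""
    response = np.abs(convolve2d(band, LAPLACIAN))
    return response > threshold

# Hypothetical band with a one-pixel-wide bright line standing in for a gully.
band = np.zeros((5, 5))
band[:, 2] = 100.0
mask = detect_dissection(band, threshold=50.0)
```

The line produces a strong central response (4x its contrast against the background), so it survives thresholding while the flat background does not, mirroring the black/white/gray pattern the study describes.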

    Current landscape in the neighbourhood of open-cast mines in northern Bohemia

    The classification of the landscape by type of use is the basis of this study of the work area. The study explains in detail the method used to classify land units and the resulting land-use composition. Using a GIS (a geographic information system that integrates hardware, software and data for capturing, managing, analysing and displaying geographically referenced information) together with orthophotos, the different types of land use in the work area are identified. Land use refers to the activity existing in the area at the time. GIS has proved very effective not only for determining the different types of land use in an area, but also for collecting valuable information for interpreting the classification; by comparing maps from different years it can reveal trends in land use, which is very useful for seeing how the landscape evolves and how it will evolve in the future, allowing decisions to be made in advance. Several methods exist for land classification: some also use GIS but classify the land by its landscape value rather than by its use, and others use the land use of the area not only in the present but also in past years. This project includes different classification methods, with a brief explanation of their methodology. Some of these methods, in addition to producing the corresponding classification, are also methods for analysing changes in the work area over time; in this case only a layer from a single year was used, so that kind of study was not carried out. Therefore, after the vectorisation and correction of the original layer, an assessment of the data was made and the land-use values were grouped into stable and unstable.
Commenting on the corrected layer, the original data and the proportions of the different land-use types then leads to an ecological assessment, including the impacts of open-pit mining and possible corrective measures both during activity and after abandonment. Hidalgo Escrihuela, A. (2011). Current lanscape in the neighbourhood of open cast mines in northern Bohemia. Universitat Politècnica de València. http://hdl.handle.net/10251/14452
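The grouping of land-use values into stable and unstable classes lends itself to a simple area tabulation. The sketch below is illustrative only: the class names, the stable/unstable split and the areas are hypothetical, since the study's actual categories are not listed in this abstract.

```python
from collections import Counter

# Hypothetical class names and stability grouping for illustration.
STABLE = {"forest", "water", "urban"}
UNSTABLE = {"open_cast_mine", "spoil_heap", "reclaimed"}

def land_use_summary(parcels):
    """Proportion of total area per land-use class, plus the share of
    area in unstable classes, from a corrected GIS layer.

    parcels: list of (land_use, area) tuples, one per vectorized unit.
    """
    total = sum(area for _, area in parcels)
    by_class = Counter()
    for use, area in parcels:
        by_class[use] += area
    proportions = {use: area / total for use, area in by_class.items()}
    unstable_share = sum(p for use, p in proportions.items() if use in UNSTABLE)
    return proportions, unstable_share

parcels = [("forest", 40.0), ("open_cast_mine", 30.0),
           ("urban", 20.0), ("spoil_heap", 10.0)]
props, unstable = land_use_summary(parcels)
```

Running the same summary on layers from different years is the comparison step that reveals the land-use trends the study describes.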