95 research outputs found

    Standards and Best Practices - Two NASA Examples

    Formal international standards, as well as promotion of community or recommended practices, have their place in ensuring "FAIRness" of data. Data management in NASA's Earth Observation System Data and Information System (EOSDIS) has benefited significantly from both of these avenues. The purpose of this paper is to present one example of each, both of which promote (re)usability. The first is an ISO standard for specifying preservation content from Earth observation missions. Work on this started in 2011, informally within the Earth Science Information Partners (ESIP) in the US, while the European Space Agency (ESA) was leading an effort on Long-Term Data Preservation (LTDP). The ESIP discussions resulted in NASA's Preservation Content Specification (PCS), which was applied in 2012 as a requirement for NASA's new missions. ESA's Preserved Data Set Content (PDSC) specification was codified into a document adopted by the Committee on Earth Observation Satellites (CEOS). It was recognized that it would be useful to combine the PCS and PDSC into an ISO standard to ensure consistency in data preservation on a broader international scale. This standard, numbered ISO 19165-2, has been under development since mid-2017. The second is an example of developing recommendations for "best practices" within more limited (though still fairly broad) communities. A Data Product Developers' Guide (DPDG) is currently being developed by one of NASA's Earth Science Data System Working Groups (ESDSWGs), for use by developers of products derived from Earth observation data, to improve product (re)usability. One of the challenges in developing the guide is that many applicable standards and guides already exist; the relevant information needs to be selected and expressed succinctly, with appropriate pointers to references.
The DPDG aims to compile the most applicable parts of earlier guides into a single document outlining the typical development process for Earth Science data products. Standards and best practices formally endorsed by the Earth Science Data and Information System (ESDIS) Standards Office (ESO), outputs from ESDSWGs (e.g., the Dataset Interoperability Working Group and the Data Quality Working Group), and recommendations from Distributed Active Archive Centers and data producers are emphasized.

    Preservation of Data for Earth System Science- Towards a Content Standard

    Various remote sensing agencies of the world have created a data rich environment for research and applications over the last three decades. Especially over the last decade, the volume and variety of data useful for Earth system science have increased quite rapidly. One of the key purposes of collecting these data and generating useful digital products containing derived geophysical parameters is to study the long-term trends in the Earth's behavior. Long-term observational data and derived products are essential for validating results from models that predict the future behavior of the Earth system. Given the significant resources expended in gathering the observational data and developing the derived products, it is important to preserve them for the benefit of future generations of users. Preservation involves maintaining the bits with no loss (or loss within scientifically acceptable bounds) as they move across systems as well as over time, ensuring readability over time, and providing for long-term understandability and repeatability of previously obtained results. In order to ensure long-term understandability and repeatability, it is necessary to identify all items of content that must be preserved and plan for such preservation. This paper discusses the need for a standard enumerating and describing such content items and reports on the progress made by NASA and the Federation of Earth Science Information Partners (ESIP Federation) in the U.S. towards such a standard.

    Synthetic aperture radar signal processing on the MPP

    Satellite-borne Synthetic Aperture Radars (SAR) sense areas of several thousand square kilometers in seconds and transmit phase history signal data at several tens of megabits per second. The Shuttle Imaging Radar-B (SIR-B) had a variable swath of 20 to 50 km and acquired data over 100 km along track in about 13 seconds. Even with the simplification of separability of the reference function, the processing still requires considerable resources: high-speed I/O, large memory, and fast computation. Processing systems with conventional hardware take hours to process one Seasat image and about one hour for a SIR-B image. Bringing this processing time closer to acquisition times requires an end-to-end system solution. For the purpose of demonstration, software was implemented on the present Massively Parallel Processor (MPP) configuration for processing Seasat and SIR-B data. The software takes advantage of the high processing speed offered by the MPP, the large Staging Buffer, and the high-speed I/O between the MPP array unit and the Staging Buffer. It was found that with unoptimized Parallel Pascal code, the processing time on the MPP for a 4096 x 4096 sample subset of signal data ranges between 18 and 30.2 seconds, depending on options.
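The two-pass structure implied by a separable reference function can be sketched as follows. This is an illustrative NumPy sketch, not the MPP Parallel Pascal implementation: the reference functions, dimensions, and FFT-based circular correlation are placeholder assumptions standing in for the actual SAR correlator.

```python
import numpy as np

def matched_filter_1d(data, ref, axis):
    """FFT-based correlation of every line of a 2-D array with a
    (zero-padded) reference function along the given axis."""
    n = data.shape[axis]
    D = np.fft.fft(data, n=n, axis=axis)
    R = np.fft.fft(ref, n=n)
    shape = [1, 1]
    shape[axis] = n
    return np.fft.ifft(D * np.conj(R).reshape(shape), axis=axis)

def sar_compress(signal, range_ref, azimuth_ref):
    """Two-pass compression exploiting separability of the reference
    function: range compression along rows, then azimuth compression
    along columns, yielding a detected (magnitude) image."""
    rc = matched_filter_1d(signal, range_ref, axis=1)   # range (fast time)
    ac = matched_filter_1d(rc, azimuth_ref, axis=0)     # azimuth (slow time)
    return np.abs(ac)
```

A point target placed in the raw signal as the outer product of the two (hypothetical chirp) references focuses to a single bright pixel at its location after the two passes.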

    Data compression experiments with LANDSAT thematic mapper and Nimbus-7 coastal zone color scanner data

    A case study is presented where an image segmentation based compression technique is applied to LANDSAT Thematic Mapper (TM) and Nimbus-7 Coastal Zone Color Scanner (CZCS) data. The compression technique, called Spatially Constrained Clustering (SCC), can be regarded as an adaptive vector quantization approach. The SCC can be applied to either single or multiple spectral bands of image data. The segmented image resulting from SCC is encoded in small rectangular blocks, with the codebook varying from block to block. The lossless compression potential (LCP) of sample TM and CZCS images is evaluated. For the TM test image, the LCP is 2.79. For the CZCS test image the LCP is 1.89, although when only a cloud-free section of the image is considered, the LCP increases to 3.48. Examples of compressed images are shown at several compression ratios ranging from 4 to 15. In the case of TM data, the compressed data are classified using a Bayes classifier. The results show an improvement in the similarity between the classification results and ground truth when compressed data are used, thus showing that compression is, in fact, a useful first step in the analysis.
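The block-adaptive encoding described above, with a codebook that varies from block to block, can be illustrated with a toy sketch. The per-block clustering here is a generic k-means stand-in, not the actual SCC segmentation, and the block size and codebook size are arbitrary placeholders.

```python
import numpy as np

def kmeans_1d(values, k, iters=10, seed=0):
    """Tiny k-means used here only to design a per-block codebook."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            sel = values[labels == j]
            if sel.size:
                centers[j] = sel.mean()
    return centers, labels

def block_vq(image, block=8, k=4):
    """Quantize an image in small rectangular blocks, each with its own
    k-entry codebook, mimicking the block-adaptive structure of SCC."""
    out = np.empty_like(image, dtype=float)
    h, w = image.shape
    for r in range(0, h, block):
        for c in range(0, w, block):
            tile = image[r:r + block, c:c + block]
            centers, labels = kmeans_1d(tile.ravel(), k)
            out[r:r + block, c:c + block] = centers[labels].reshape(tile.shape)
    return out
```

Each reconstructed block contains at most k distinct values, so storing the labels plus the small codebook per block is what yields the compression.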

    Registration workshop report

    The state-of-the-art in registration and rectification of image data for terrestrial applications is examined, and recommendations for further research in these areas are made.

    Motion detection in astronomical and ice floe images

    Two approaches are presented for establishing correspondence between small areas in pairs of successive images for motion detection. The first, based on local correlation, is used on a pair of successive Voyager images of Jupiter which differ mainly in locally variable translations. This algorithm is implemented on a sequential machine (VAX 780) as well as the Massively Parallel Processor (MPP). In the case of the sequential algorithm, the pixel correspondence or match is computed on a sparse grid of points using nonoverlapping windows (typically 11 x 11) by local correlations over a predetermined search area. The displacement of the corresponding pixels in the two images is called the disparity; the disparities computed on the grid are fitted to cubic surfaces. The disparities at points where the error between the computed values and the surface values exceeds a particular threshold are replaced by the surface values. A bilinear interpolation is then used to estimate disparities at all other pixels between the grid points. When this algorithm was applied to the red spot in the Jupiter image, the rotating velocity field of the storm was determined. The second method of motion detection is applicable to pairs of images in which corresponding areas can experience considerable translation as well as rotation.
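The local-correlation matching step can be sketched as an exhaustive search over a small displacement range. This is an illustrative sketch only: the window and search sizes are placeholders (the paper uses 11 x 11 windows on a sparse grid), and the normalized cross-correlation formulation is an assumption.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equal-size windows."""
    a = a - a.mean()
    b = b - b.mean()
    d = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / d if d else 0.0

def match_window(img1, img2, r, c, win=5, search=4):
    """Find the displacement of the (2*win+1) x (2*win+1) window centered
    at (r, c) in img1 by exhaustive correlation over a search area in img2."""
    ref = img1[r - win:r + win + 1, c - win:c + win + 1]
    best, best_d = -2.0, (0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = img2[r + dr - win:r + dr + win + 1,
                        c + dc - win:c + dc + win + 1]
            score = ncc(ref, cand)
            if score > best:
                best, best_d = score, (dr, dc)
    return best_d
```

Running this at every grid point yields the sparse disparity field that the surface fitting and bilinear interpolation steps then densify.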

    Evolution of Archival Storage (from Tape to Memory)

    Over the last three decades, there has been a significant evolution in storage technologies supporting archival of remote sensing data. This section provides a brief survey of how these technologies have evolved. Three main technologies are considered: tape, hard disk, and solid state disk. Their historical evolution is traced, summarizing how reductions in cost have made it possible to store larger volumes of data on faster media. The cost per GB of media is only one of the considerations in determining the best approach to archival storage. Active archives generally require faster response to user requests for data than permanent archives, and archive costs must also account for facilities and other capital costs, operations costs, software licenses, utilities, etc. For meeting requirements in any organization, typically a mix of technologies is needed.
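The point that media cost is only one line item can be illustrated with a toy total-cost model. Every figure and the model structure below are placeholders for illustration, not numbers or a methodology from the survey.

```python
def archive_cost(tb, media_per_tb, ops_per_year, facility_per_year,
                 years, replacement_interval):
    """Toy total-cost-of-ownership sketch: an initial media purchase,
    periodic media replacement, and recurring operations/facility costs.
    All parameters are hypothetical placeholders."""
    replacements = years // replacement_interval
    media = tb * media_per_tb * (1 + replacements)
    recurring = (ops_per_year + facility_per_year) * years
    return media + recurring

# Example with made-up numbers: a 1 PB archive over 10 years.
total = archive_cost(tb=1000, media_per_tb=10, ops_per_year=5000,
                     facility_per_year=2000, years=10, replacement_interval=5)
```

Even in this crude model the recurring costs dominate the media cost, which is why comparing technologies on media price per GB alone can be misleading.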

    Parallel algorithm for determining motion vectors in ice floe images by matching edge features

    A parallel algorithm is described to determine motion vectors of ice floes using time sequences of images of the Arctic Ocean obtained from the Synthetic Aperture Radar (SAR) instrument flown on board the SEASAT spacecraft. The algorithm, implemented on the MPP, locates corresponding objects based on their translationally and rotationally invariant features. It first approximates the edges in the images by polygons, or sets of connected straight-line segments. Each such edge structure is then reduced to a seed point. Associated with each seed point are the descriptions (lengths, orientations, and sequence numbers) of the lines constituting the corresponding edge structure. A parallel matching algorithm is used to match packed arrays of such descriptions to identify corresponding seed points in the two images. The matching algorithm is designed such that fragmentation and merging of ice floes are taken into account by accepting partial matches. The technique has been demonstrated to work on synthetic test patterns and real image pairs from SEASAT in times ranging from 0.5 to 0.7 seconds for 128 x 128 images.
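The rotation-invariant segment descriptions and partial matching can be sketched as follows. This is a simplified sequential stand-in for the paper's packed-array MPP matcher: using turn angles (differences of successive segment orientations) instead of raw orientations is one way to make the comparison rotation-invariant, and the scoring and tolerances are illustrative assumptions.

```python
import numpy as np

def segment_features(polygon):
    """Lengths and turn angles of a closed polygonal boundary. Turn
    angles, unlike raw orientations, are invariant to translation and
    rotation of the whole floe."""
    pts = np.asarray(polygon, float)
    vec = np.roll(pts, -1, axis=0) - pts
    lengths = np.hypot(vec[:, 0], vec[:, 1])
    ang = np.arctan2(vec[:, 1], vec[:, 0])
    turns = np.angle(np.exp(1j * (np.roll(ang, -1) - ang)))  # wrap to (-pi, pi]
    return lengths, turns

def match_score(f1, f2, tol_len=0.1, tol_turn=0.1):
    """Fraction of segments of f1 matched in f2 under the best cyclic
    shift; fractional scores allow partial matches, as needed when floes
    fragment or merge. Simplification: equal segment counts only."""
    l1, t1 = f1
    l2, t2 = f2
    if len(l1) != len(l2):
        return 0.0
    best = 0.0
    for s in range(len(l2)):
        ok = (np.abs(l1 - np.roll(l2, s)) < tol_len * np.maximum(l1, 1e-9)) \
             & (np.abs(np.angle(np.exp(1j * (t1 - np.roll(t2, s))))) < tol_turn)
        best = max(best, ok.mean())
    return best
```

A floe boundary and a rotated, translated copy of it score a full match, while a deformed or fragmented boundary scores only the fraction of segments that still agree.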

    Proceedings of the Scientific Data Compression Workshop

    Continuing advances in space and Earth science require increasing amounts of data to be gathered from spaceborne sensors. NASA expects to launch sensors during the next two decades which will be capable of producing an aggregate of 1500 megabits per second if operated simultaneously. Such high data rates stress all aspects of end-to-end data systems, and technologies and techniques are needed to relieve those stresses. Potential solutions to the massive data rate problems are: data editing, greater transmission bandwidths, higher-density and faster media, and data compression. Through four subpanels on Science Payload Operations, Multispectral Imaging, Microwave Remote Sensing, and Science Data Management, recommendations were made for research in data compression and scientific data applications to space platforms.
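For a sense of scale, the 1500 megabits per second aggregate rate quoted above converts to a daily data volume as follows (a back-of-envelope calculation, assuming continuous operation):

```python
# Convert the quoted 1500 Mbit/s aggregate rate to terabytes per day,
# assuming the sensors run continuously (an illustrative upper bound).
rate_mbit_s = 1500
bytes_per_day = rate_mbit_s * 1e6 / 8 * 86400   # bits/s -> bytes/day
tb_per_day = bytes_per_day / 1e12               # ~16.2 TB per day
```

Sustaining on the order of 16 TB per day is what motivates the panel's interest in data editing, compression, and faster media.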

    Digital computer processing of LANDSAT data for North Alabama

    Computer processing procedures and programs applied to Multispectral Scanner data from LANDSAT are described. The output product is a level 1 land use map in conformance with a Universal Transverse Mercator projection. The region studied was a five-county area in north Alabama.