GAIA: Composition, Formation and Evolution of the Galaxy
The GAIA astrometric mission has recently been approved as one of the next
two `cornerstones' of ESA's science programme, with a launch date target of not
later than mid-2012. GAIA will provide positional and radial velocity
measurements with the accuracies needed to produce a stereoscopic and kinematic
census of about one billion stars throughout our Galaxy (and into the Local
Group), amounting to about 1 per cent of the Galactic stellar population.
GAIA's main scientific goal is to clarify the origin and history of our Galaxy,
from a quantitative census of the stellar populations. It will address
questions such as when the stars in our Galaxy formed, when and how the
Galaxy was assembled, and how its dark matter is distributed. The survey aims for
completeness to V=20 mag, with accuracies of about 10 microarcsec at 15 mag.
Combined with astrophysical information for each star, provided by on-board
multi-colour photometry and (limited) spectroscopy, these data will have the
precision necessary to quantify the early formation, and subsequent dynamical,
chemical and star formation evolution of our Galaxy. Additional products
include detection and orbital classification of tens of thousands of
extra-Solar planetary systems, and a comprehensive survey of some 10^5-10^6
minor bodies in our Solar System, through galaxies in the nearby Universe, to
some 500,000 distant quasars. It will provide a number of stringent new tests
of general relativity and cosmology. The complete satellite system was
evaluated as part of a detailed technology study, including a detailed payload
design, corresponding accuracy assessments, and results from a prototype data
reduction development.
Comment: Accepted by A&A; 25 pages, 8 figures
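To make the headline accuracy concrete: with the standard parallax-distance relation d = 1/p (d in parsec when p is in arcsec), a 10 microarcsec parallax error translates directly into a relative distance error. A minimal sketch of that arithmetic in Python (values are illustrative, not mission specifications):

    # Illustrative only: relative distance error implied by a given
    # parallax precision, via first-order error propagation on d = 1/p.
    def distance_error_percent(distance_pc: float, sigma_uas: float = 10.0) -> float:
        """Relative distance error (%) for a star at `distance_pc` parsecs,
        given a parallax uncertainty `sigma_uas` in microarcseconds."""
        parallax_uas = 1e6 / distance_pc          # true parallax in microarcsec
        return 100.0 * sigma_uas / parallax_uas

    for d in (100, 1_000, 10_000):                # 0.1, 1 and 10 kpc
        print(f"{d:>6} pc: ~{distance_error_percent(d):.1f}% distance error")
    # -> ~0.1% at 100 pc, ~1% at 1 kpc, ~10% at 10 kpc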
Guidance for benthic habitat mapping: an aerial photographic approach
This document, Guidance for Benthic Habitat Mapping: An Aerial Photographic Approach, describes proven technology that can be applied in an operational manner by state-level scientists and resource managers. This information is based on the experience gained by NOAA Coastal Services Center staff and state-level cooperators in the production of a series of benthic habitat data sets in Delaware, Florida, Maine, Massachusetts, New York, Rhode Island, the Virgin Islands, and Washington, as well as during Center-sponsored workshops on coral remote sensing and seagrass and aquatic habitat assessment. (PDF contains 39 pages)
The original benthic habitat document, NOAA Coastal Change Analysis Program (C-CAP): Guidance for Regional Implementation (Dobson et al.), was published by the Department of Commerce in 1995. That document summarized procedures that were to be used by scientists throughout the United States to develop consistent and reliable coastal land cover and benthic habitat information. Advances in technology and new methodologies for generating these data created the need for this updated report, which builds upon the foundation of its predecessor.
A study of image quality for radar image processing
Methods developed for image quality metrics are reviewed with focus on basic interpretation or recognition elements including: tone or color; shape; pattern; size; shadow; texture; site; association or context; and resolution. Seven metrics are believed to show promise as a way of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized-mean-square-error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selective levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
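As an illustration of two of the simpler metrics above, a minimal sketch in Python/NumPy; the definitions are common textbook forms, assumed here rather than taken from the study:

    import numpy as np

    def dynamic_range_db(img):
        """Metric (1): displayed dynamic range, in dB, of a non-negative image."""
        lo = img[img > 0].min()
        return 20.0 * np.log10(img.max() / lo)

    def nmse(reference, distorted):
        """Metric (5): normalized mean-square error as a geometric-fidelity proxy."""
        return np.mean((reference - distorted) ** 2) / np.mean(reference ** 2)

    rng = np.random.default_rng(0)
    ref = rng.uniform(1.0, 255.0, size=(128, 128))
    noisy = ref + rng.normal(0.0, 5.0, size=ref.shape)
    print(dynamic_range_db(ref), nmse(ref, noisy))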
Geometric reconstruction methods for electron tomography
Electron tomography is becoming an increasingly important tool in materials
science for studying the three-dimensional morphologies and chemical
compositions of nanostructures. The image quality obtained by many current
algorithms is seriously affected by the problems of missing wedge artefacts and
nonlinear projection intensities due to diffraction effects. The former refers
to the fact that data cannot be acquired over the full tilt range;
the latter implies that for some orientations, crystalline structures can show
strong contrast changes. To overcome these problems we introduce and discuss
several algorithms from the mathematical fields of geometric and discrete
tomography. The algorithms incorporate geometric prior knowledge (mainly
convexity and homogeneity), which in principle also considerably reduces the
number of tilt angles required. Results are discussed for the reconstruction of
an InAs nanowire.
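The homogeneity prior at the heart of discrete tomography can be sketched as an alternation between an algebraic update and a snap to a small set of admissible grey levels. The following is a simplified, DART-flavoured illustration in Python, not the authors' algorithm:

    import numpy as np

    def reconstruct_discrete(A, p, levels=(0.0, 1.0), n_iter=50, step=1.0):
        """Toy discrete reconstruction. A is the (n_rays x n_pixels)
        projection matrix, p the measured sinogram. Each sweep does a
        SIRT-style algebraic update, then snaps every pixel to the nearest
        admissible grey level (the homogeneity prior)."""
        x = np.zeros(A.shape[1])
        lv = np.asarray(levels)
        col = A.sum(axis=0); col[col == 0] = 1.0   # column sums for normalisation
        row = A.sum(axis=1); row[row == 0] = 1.0   # row sums
        for _ in range(n_iter):
            x = x + step * (A.T @ ((p - A @ x) / row)) / col
            x = lv[np.abs(x[:, None] - lv).argmin(axis=1)]
        return x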
MISR stereoscopic image matchers: techniques and results
The Multi-angle Imaging SpectroRadiometer (MISR) instrument, launched in December 1999 on the NASA EOS Terra satellite, produces images in the red band at 275-m resolution, over a swath width of 360 km, for the nine camera angles 70.5°, 60°, 45.6°, and 26.1° forward, nadir, and 26.1°, 45.6°, 60°, and 70.5° aft. A set of accurate and fast algorithms was developed for automated stereo matching of cloud features to obtain cloud-top height and motion over the nominal six-year lifetime of the mission. Accuracy and speed requirements necessitated the use of a combination of area-based and feature-based stereo-matchers with only pixel-level acuity. Feature-based techniques are used for cloud motion retrieval with the off-nadir MISR camera views, and the motion is then used to provide a correction to the disparities used to measure cloud-top heights, which are derived from the innermost three cameras. Intercomparison with a previously developed "superstereo" matcher shows that the results are very comparable in accuracy, with much greater coverage and at ten times the speed. Intercomparison of feature-based and area-based techniques shows that the feature-based techniques are comparable in accuracy at a factor of eight times the speed. An assessment of the accuracy of the area-based matcher for cloud-free scenes demonstrates the accuracy and completeness of the stereo-matcher. This trade-off has resulted in the loss of a reliable quality metric to predict accuracy and a slightly higher blunder rate. Examples are shown of the application of the MISR stereo-matchers on several difficult scenes, which demonstrate the efficacy of the matching approach.
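The basic along-track stereo geometry behind the height retrieval can be sketched as follows. This toy version ignores cloud motion and Earth curvature, both of which the real matchers correct for:

    import math

    def cloud_top_height(disparity_m, view_angle_deg):
        """A feature at height h appears displaced by roughly h * tan(theta)
        in a camera viewing at theta from nadir, relative to its nadir
        position, so h ~ disparity / tan(theta)."""
        return disparity_m / math.tan(math.radians(view_angle_deg))

    # e.g. a 2750 m along-track disparity (10 pixels at 275 m) seen by the
    # 26.1-degree camera implies a cloud top near 5.6 km.
    print(cloud_top_height(10 * 275.0, 26.1))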
Semi-automated geomorphological mapping applied to landslide hazard analysis
Computer-assisted three-dimensional (3D) mapping using stereo and multi-image (“softcopy”) photogrammetry is shown to enhance the visual interpretation of geomorphology in steep terrain with the direct benefit of greater locational accuracy than traditional manual mapping. This would benefit multi-parameter correlations between terrain attributes and landslide distribution in both direct and indirect forms of landslide hazard assessment. Case studies involve synthetic models of a landslide, and field studies of a rock slope and steep undeveloped hillsides with both recently formed and partly degraded, old landslide scars. Diagnostic 3D morphology was generated semi-automatically both using a terrain-following cursor under stereo-viewing and from high resolution digital elevation models created using area-based image correlation, further processed with curvature algorithms. Laboratory-based studies quantify limitations of area-based image correlation for measurement of 3D points on planar surfaces with varying camera orientations. The accuracy of point measurement is shown to be non-linear with limiting conditions created by both narrow and wide camera angles and moderate obliquity of the target plane. Analysis of the results with the planar surface highlighted problems with the controlling parameters of the area-based image correlation process when used for generating DEMs from images obtained with a low-cost digital camera. Although the specific cause of the phase-wrapped image artefacts identified was not found, the procedure would form a suitable method for testing image correlation software, as these artefacts may not be obvious in DEMs of non-planar surfaces. Modelling of synthetic landslides shows that Fast Fourier Transforms are an efficient method for removing noise, as produced by errors in measurement of individual DEM points, enabling diagnostic morphological terrain elements to be extracted. Component landforms within landslides are complex entities and conversion of the automatically-defined morphology into geomorphology was only achieved with manual interpretation; however, this interpretation was facilitated by softcopy-driven stereo viewing of the morphological entities across the hillsides. In the final case study of a large landslide within a man-made slope, landslide displacements were measured using a photogrammetric model consisting of 79 images captured with a helicopter-borne, hand-held, small format digital camera. Displacement vectors and a thematic geomorphological map were superimposed over an animated, 3D photo-textured model to aid non-stereo visualisation and communication of results.
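A minimal sketch of the FFT noise-removal step described above, applied to a gridded DEM before curvature extraction; the cutoff frequency and the Laplacian-based curvature proxy are illustrative choices, not the study's exact parameters:

    import numpy as np

    def fft_lowpass_dem(dem, cutoff=0.1):
        """Suppress high-frequency measurement noise in a gridded DEM by
        zeroing Fourier components above `cutoff` (cycles per grid cell)."""
        F = np.fft.fft2(dem)
        fy = np.fft.fftfreq(dem.shape[0])[:, None]
        fx = np.fft.fftfreq(dem.shape[1])[None, :]
        F[np.sqrt(fx**2 + fy**2) > cutoff] = 0.0
        return np.fft.ifft2(F).real

    def laplacian(z, cell=1.0):
        """Crude curvature proxy on the smoothed grid: discrete Laplacian."""
        return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
                np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4 * z) / cell**2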
Bridging the Gap Between Imaging Performance and Image Quality Measures
Imaging system performance measures and Image Quality Metrics (IQM) are reviewed from a systems engineering perspective, focusing on spatial quality of still image capture systems. We classify IQMs broadly as: Computational IQMs (CP-IQM), Multivariate Formalism IQMs (MF-IQM), Image Fidelity Metrics (IF-IQM), and Signal Transfer Visual IQMs (STV-IQM). Comparison of each genre finds STV-IQMs well suited for capture system quality evaluation: they incorporate performance measures relevant to optical systems design, such as the Modulation Transfer Function (MTF) and Noise-Power Spectrum (NPS); their bottom-up, modular approach enables system components to be optimised separately. We suggest that correlation between STV-IQMs and observer quality scores is limited by three factors: current MTF and NPS measures do not characterize scene-dependent performance introduced by imaging system non-linearities; contrast sensitivity models employed do not account for contextual masking effects; and cognitive factors are not considered. We hypothesise that implementation of scene- and process-dependent MTF (SPD-MTF) and NPS (SPD-NPS) measures should mitigate errors originating from scene-dependent system performance. Further, we propose implementation of contextual contrast detection and discrimination models to better represent low-level visual performance in image quality analysis. Finally, we discuss image quality optimization functions that may potentially close the gap between contrast detection/discrimination and quality.
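For context, a conventional scene-independent NPS estimate, the kind of measure the proposed SPD-NPS would generalise, might be computed from uniform patches as follows. This is a standard textbook form, assumed here rather than taken from the paper:

    import numpy as np

    def noise_power_spectrum(patches, pixel_pitch=1.0):
        """Average 2D noise-power spectrum over uniform patches:
        detrend each patch, take |FFT|^2, normalise by patch area."""
        nps = np.zeros_like(patches[0], dtype=float)
        for p in patches:
            res = p - p.mean()                  # remove DC / flat-field level
            nps += np.abs(np.fft.fft2(res)) ** 2
        ny, nx = patches[0].shape
        return nps * (pixel_pitch ** 2) / (len(patches) * nx * ny)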
Active sampling, scaling and dataset merging for large-scale image quality assessment
The field of subjective assessment is concerned with eliciting human judgements about a set of stimuli. Collecting such data is costly and time-consuming, especially when the subjective study is to be conducted in a controlled environment using specialized equipment. Thus, data from these studies are usually scarce. One of the areas for which obtaining subjective measurements is difficult is image quality assessment. The results from these studies are used to develop and train automated or objective image quality metrics, which, with the advent of deep learning, require large amounts of versatile and heterogeneous data.
I present three main contributions in this dissertation. First, I propose a new active sampling method for efficient collection of pairwise comparisons in subjective assessment experiments. In these experiments, observers are asked to express a preference between two conditions. However, many pairwise comparison protocols require a large number of comparisons to infer accurate scores, which may be infeasible when each comparison is time-consuming (e.g. videos) or expensive (e.g. medical imaging). This motivates the use of an active sampling algorithm that chooses only the most informative pairs for comparison. I demonstrate, with real and synthetic data, that my algorithm offers the highest accuracy of inferred scores given a fixed number of measurements compared to the existing methods. Second, I propose a probabilistic framework to fuse the outcomes of different psychophysical experimental protocols, namely rating and pairwise comparison experiments. Such a method can be used for merging existing datasets of subjective nature and for experiments in which both types of measurements are collected. Third, with a new dataset merging technique and by collecting additional cross-dataset quality comparisons, I create a Unified Photometric Image Quality (UPIQ) dataset with over 4,000 images by realigning and merging existing high-dynamic-range (HDR) and standard-dynamic-range (SDR) datasets. The realigned quality scores share the same unified quality scale across all datasets. I then use the new dataset to retrain existing HDR metrics and show that the dataset is sufficiently large for training deep architectures. I show the utility of the dataset and metrics in an application to image compression that accounts for viewing conditions, including screen brightness and the viewing distance.
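The scaling model underlying such pairwise experiments is worth making concrete. A minimal sketch of inferring latent quality scores from a comparison count matrix under the standard Bradley-Terry model; the dissertation's actual inference and active-sampling machinery is more sophisticated:

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def bradley_terry_scores(wins, lr=0.05, n_iter=500):
        """Latent quality scores from a pairwise count matrix
        (wins[i, j] = how often condition i was preferred over j),
        by gradient ascent on the Bradley-Terry log-likelihood."""
        n = wins.shape[0]
        s = np.zeros(n)
        for _ in range(n_iter):
            P = sigmoid(s[:, None] - s[None, :])   # P[i, j] = Pr(i beats j)
            grad = ((wins * (1 - P)) - (wins.T * (1 - P.T))).sum(axis=1)
            s += lr * grad
            s -= s.mean()                          # fix the scale's free offset
        return s

    wins = np.array([[0, 8, 9],
                     [2, 0, 7],
                     [1, 3, 0]])                   # 10 trials per pair
    print(bradley_terry_scores(wins))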