
    Knowledge-based vision for space station object motion detection, recognition, and tracking

    Computer vision, especially color image analysis and understanding, has much to offer in automating Space Station tasks such as construction, satellite servicing, rendezvous and proximity operations, inspection, experiment monitoring, data management, and training. Knowledge-based techniques improve the performance of vision algorithms in unstructured environments because they can cope with imprecise a priori information or inaccurately estimated feature data and still produce useful results. Conventional statistical and purely model-based approaches lack the flexibility to deal with the variability anticipated in the unstructured viewing environment of space. Algorithms developed under NASA sponsorship for Space Station applications, intended to demonstrate the value of a hypothesized architecture for a Video Image Processor (VIP), are presented. Approaches to enhancing the performance of these algorithms with knowledge-based techniques, and the potential for deploying them on highly parallel multi-processor systems, are discussed.

    Region-enhanced passive radar imaging

    The authors adapt and apply a recently developed region-enhanced synthetic aperture radar (SAR) image reconstruction technique to the problem of passive radar imaging. One goal in passive radar imaging is to form images of aircraft using signals transmitted by commercial radio and television stations and reflected from the objects of interest. This involves reconstructing an image from sparse samples of its Fourier transform. Owing to the sparse nature of the aperture, a conventional image formation approach based on direct Fourier transformation results in quite dramatic artefacts in the image, as compared with the case of active SAR imaging. The region-enhanced image formation method considered is based on an explicit mathematical model of the observation process; hence, information about the nature of the aperture is explicitly taken into account in image formation. Furthermore, this framework allows the incorporation of prior information or constraints about the scene being imaged, which makes it possible to compensate for the limitations of the sparse apertures involved in passive radar imaging. As a result, conventional imaging artefacts, such as sidelobes, can be alleviated. Experimental results using data based on electromagnetic simulations demonstrate that this is a promising strategy for passive radar imaging, exhibiting significant suppression of artefacts, preservation of imaged object features, and robustness to measurement noise.
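The sidelobe artefacts of direct Fourier inversion over a sparse aperture are easy to illustrate in one dimension. The following is a minimal NumPy sketch, not the authors' region-enhanced method: a few point scatterers are reconstructed from a random ~30% subset of their Fourier samples, and the zero-filled inverse transform leaves visible artefact energy away from the true scatterer locations.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128
scene = np.zeros(n)
scene[[20, 45, 90]] = [1.0, 0.7, 0.5]    # a few point scatterers

F = np.fft.fft(scene)                    # full Fourier data
mask = rng.random(n) < 0.3               # sparse aperture: ~30% of samples kept
F_sparse = np.where(mask, F, 0.0)

# Conventional image formation: direct inverse Fourier transform of the
# zero-filled sparse data -> pronounced sidelobe artefacts
img_direct = np.abs(np.fft.ifft(F_sparse))

# Ratio of the strongest artefact to the strongest true scatterer response
support = scene > 0
artefact_level = img_direct[~support].max() / img_direct[support].max()
```

A model-based reconstruction would instead fit the scene to the measured samples only, with a regularizer encoding prior knowledge, rather than implicitly assuming the missing samples are zero.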

    Study of on-board compression of earth resources data

    The current literature on image bandwidth compression was surveyed, and those methods relevant to compression of multispectral imagery were selected. Typical satellite multispectral data were then analyzed statistically, and the results used to select a smaller set of candidate bandwidth compression techniques particularly relevant to earth resources data. These were compared using both theoretical analysis and simulation, under various criteria of optimality such as mean square error (MSE), signal-to-noise ratio, classification accuracy, and computational complexity. By concatenating some of the most promising techniques, three multispectral data compression systems were synthesized which appear well suited to current and future NASA earth resources applications. The performance of these three recommended systems was then examined in detail against all of the above criteria. Finally, merits and deficiencies were summarized, and a number of recommendations for future NASA activities in data compression were proposed.
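The MSE and signal-to-noise-ratio criteria named above are straightforward to compute. A minimal sketch (toy uniform quantisers standing in for the surveyed compression techniques; all names are hypothetical, assuming NumPy):

```python
import numpy as np

def mse(original, reconstructed):
    """Mean square error between the original and reconstructed imagery."""
    return np.mean((original.astype(float) - reconstructed.astype(float)) ** 2)

def snr_db(original, reconstructed):
    """Signal-to-noise ratio in dB, treating the compression error as noise."""
    return 10.0 * np.log10(np.mean(original.astype(float) ** 2)
                           / mse(original, reconstructed))

def quantise(img, bits):
    """Toy codec: uniform quantisation of 8-bit data to the given bit depth."""
    step = 256 // (1 << bits)
    return (img // step) * step + step // 2

rng = np.random.default_rng(1)
band = rng.integers(0, 256, size=(64, 64))   # one simulated multispectral band
for bits in (3, 5):
    rec = quantise(band, bits)
    print(bits, "bits:", "MSE", round(mse(band, rec), 1),
          "SNR", round(snr_db(band, rec), 1), "dB")
```

As expected, the coarser quantiser trades rate for a higher MSE and lower SNR, which is exactly the kind of trade-off the surveyed systems were ranked on.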

    A study of image quality for radar image processing

    Methods developed for image quality metrics are reviewed, with focus on basic interpretation or recognition elements including tone or color, shape, pattern, size, shadow, texture, site, association or context, and resolution. Seven metrics are believed to show promise as ways of characterizing the quality of an image: (1) the dynamic range of intensities in the displayed image; (2) the system signal-to-noise ratio; (3) the system spatial bandwidth or bandpass; (4) the system resolution or acutance; (5) the normalized mean square error as a measure of geometric fidelity; (6) the perceptual mean square error; and (7) the radar threshold quality factor. Selected levels of degradation are being applied to simulated synthetic radar images to test the validity of these metrics.
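Metrics (1) and (5) in the list above are simple enough to state directly. A hedged sketch (hypothetical helper names, assuming NumPy; degradation here is a crude blur, not the study's calibrated degradations):

```python
import numpy as np

def dynamic_range_db(img):
    """Metric (1): span of nonzero displayed intensities, in dB."""
    nz = img[img > 0]
    return 10.0 * np.log10(nz.max() / nz.min())

def nmse(reference, degraded):
    """Metric (5): normalized mean square error as a geometric-fidelity measure."""
    ref = reference.astype(float)
    return np.mean((ref - degraded.astype(float)) ** 2) / np.mean(ref ** 2)

# Toy check: blurring a simulated image raises the NMSE, i.e. lowers fidelity
rng = np.random.default_rng(4)
img = rng.random((32, 32))
blur = (img + np.roll(img, 1, axis=0) + np.roll(img, 1, axis=1)) / 3.0
```

An identical image scores an NMSE of zero, and progressively stronger degradations score progressively higher, which is what makes the metric usable for the validation experiment described.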

    Trying to break new ground in aerial archaeology

    Aerial reconnaissance continues to be a vital tool for landscape-oriented archaeological research. Although a variety of remote sensing platforms operate within the earth’s atmosphere, the majority of aerial archaeological information is still derived from oblique photographs collected during observer-directed reconnaissance flights, a prospection approach which has dominated archaeological aerial survey for the past century. The resulting highly biased imagery is generally catalogued in sub-optimal (spatial) databases, if at all, after which a small selection of images is orthorectified and interpreted. For decades, this has been the standard approach. Although many innovations, including digital cameras, inertial units, photogrammetry and computer vision algorithms, geographic(al) information systems and computing power have emerged, their potential has not yet been fully exploited in order to re-invent and highly optimise this crucial branch of landscape archaeology. The authors argue that a fundamental change is needed to transform the way aerial archaeologists approach data acquisition and image processing. By addressing the very core concepts of geographically biased aerial archaeological photographs and proposing new imaging technologies, data handling methods and processing procedures, this paper gives a personal opinion on how the methodological components of aerial archaeology, and specifically aerial archaeological photography, should evolve during the next decade if developing a more reliable record of our past is to be our central aim. In this paper, a possible practical solution is illustrated by outlining a turnkey aerial prospection system for total coverage survey together with a semi-automated back-end pipeline that takes care of photograph correction and image enhancement as well as the management and interpretative mapping of the resulting data products. 
In this way, the proposed system addresses one of many bias issues in archaeological research: the bias we impart to the visual record as a result of selective coverage. While the total coverage approach outlined here may not altogether eliminate survey bias, it can vastly increase the amount of useful information captured during a single reconnaissance flight while mitigating the discriminating effects of observer-based, on-the-fly target selection. Furthermore, the information contained in this paper should make it clear that with current technology it is feasible to do so. This can radically alter the basis for aerial prospection and move landscape archaeology forward, beyond the inherently biased patterns that are currently created by airborne archaeological prospection.

    Joint space aspect reconstruction of wide-angle SAR exploiting sparsity

    In this paper we present an algorithm for wide-angle synthetic aperture radar (SAR) image formation. Reconstruction of wide-angle SAR holds the promise of higher resolution and better information about a scene, but it also poses a number of challenges compared to traditional narrow-angle SAR. Most prominently, the isotropic point scattering model is no longer valid. We present an algorithm capable of producing high-resolution reflectivity maps in both space and aspect, thus accounting for the anisotropic scattering behavior of targets. We pose the problem as a non-parametric three-dimensional inversion problem with two constraints: magnitudes of the backscattered power are highly correlated across closely spaced look angles, and the backscattered power originates from a small set of point scatterers. This approach considers jointly all scatterers in the scene across all azimuths, and exploits the sparsity of the underlying scattering field. We implement the algorithm and present reconstruction results on realistic data obtained from the XPatch Backhoe dataset.
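The sparsity constraint can be illustrated with the standard iterative soft-thresholding algorithm (ISTA) on a small synthetic problem. This is generic l1-regularized inversion under a random forward operator, not the authors' joint space-aspect formulation, but it shows how a small set of point scatterers can be recovered from fewer measurements than unknowns:

```python
import numpy as np

def ista(A, y, lam=0.05, iters=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))       # gradient step on the data fit
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-threshold
    return x

rng = np.random.default_rng(2)
n, m = 100, 40                                   # 100 unknowns, 40 measurements
A = rng.standard_normal((m, n)) / np.sqrt(m)     # stand-in observation model
x_true = np.zeros(n)
x_true[[5, 40, 77]] = [1.0, -0.8, 0.6]           # sparse scattering field
x_hat = ista(A, A @ x_true)
```

The l1 penalty plays the role of the paper's second constraint (few point scatterers); the correlation-across-aspect constraint would add a coupling term across look angles on top of this.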

    Fast and accurate object detection in high resolution 4K and 8K video using GPUs

    Machine learning has achieved a great deal on computer vision tasks such as object detection, but the traditionally used models work with relatively low-resolution images. The resolution of recording devices is gradually increasing, and there is a rising need for new methods of processing high-resolution data. We propose an attention pipeline method which uses two-staged evaluation of each image or video frame, at rough and at refined resolution, to limit the total number of necessary evaluations. For both stages, we make use of the fast object detection model YOLO v2. We have implemented our model in code that distributes the work across GPUs. We maintain high accuracy while reaching an average performance of 3-6 fps on 4K video and 2 fps on 8K video.
    Comment: 6 pages, 12 figures, Best Paper Finalist at IEEE High Performance Extreme Computing Conference (HPEC) 2018; copyright 2018 IEEE; (DOI will be filled in when known)
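The two-staged evaluation can be sketched in a few lines. The following is an illustrative mock, not the authors' implementation: a pixel-variance proxy stands in for the low-resolution YOLO v2 pass, all names are hypothetical, and the point is how coarse tile selection limits the number of expensive refined evaluations and how tile-local boxes map back to full-frame coordinates.

```python
import numpy as np

def coarse_pass(frame, tile=256, thresh=30.0):
    """Stage 1 ("rough resolution"): flag tiles that merit a refined look.
    A cheap activity proxy stands in for a detector run on a downsampled frame."""
    h, w = frame.shape[:2]
    return [(y, x)
            for y in range(0, h, tile)
            for x in range(0, w, tile)
            if frame[y:y+tile, x:x+tile].std() > thresh]

def attention_pipeline(frame, detect_fn, tile=256):
    """Stage 2 ("refined resolution"): run the expensive detector only on
    flagged tiles and map its boxes back to full-frame coordinates."""
    detections = []
    for y, x in coarse_pass(frame, tile):
        for bx, by, bw, bh in detect_fn(frame[y:y+tile, x:x+tile]):
            detections.append((x + bx, y + by, bw, bh))
    return detections

# Mostly empty frame with activity in one of four tiles: only one refined call
rng = np.random.default_rng(5)
frame = np.zeros((512, 512))
frame[256:, :256] = rng.random((256, 256)) * 255.0
boxes = attention_pipeline(frame, lambda patch: [(10, 10, 50, 50)])
```

In the real pipeline both stages are YOLO v2 evaluations at different scales, and the per-tile work is what gets distributed across GPUs.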

    Method and Means for an Improved Electron Beam Scanning System-Patent

    Electron beam scanning system for improved image definition and reduced power requirements for video signal transmission.

    An investigative study of a spectrum-matching imaging system Final report

    Evaluation system for classification of remote objects and materials identified by solar and thermal radiation emission.

    Video-rate computational super-resolution and integral imaging at longwave-infrared wavelengths

    We report the first computational super-resolved, multi-camera integral imaging at long-wave infrared (LWIR) wavelengths. A synchronized array of FLIR Lepton cameras was assembled, and computational super-resolution and integral-imaging reconstruction were employed to generate video with light-field imaging capabilities, such as 3D imaging and recognition of partially obscured objects, while also providing a four-fold increase in effective pixel count. This approach to high-resolution imaging enables a fundamental reduction in the track length and volume of an imaging system, while also enabling the use of low-cost lens materials.
    Comment: Supplementary multimedia material in http://dx.doi.org/10.6084/m9.figshare.530302
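The four-fold pixel-count gain from a synchronized camera array can be illustrated with the classical shift-and-add scheme. This is a textbook sketch assuming exactly known half-pixel offsets and no blur or noise, not the paper's FLIR Lepton processing chain: four sub-pixel-shifted low-resolution frames interleave onto a grid twice as fine in each dimension.

```python
import numpy as np

def shift_and_add(frames, shifts, factor=2):
    """Minimal shift-and-add super-resolution: place each low-resolution
    frame's samples at its sub-pixel offset on a finer grid."""
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor))
    for frame, (dy, dx) in zip(frames, shifts):
        ys = int(round(dy * factor)) % factor    # offset on the fine grid
        xs = int(round(dx * factor)) % factor
        hi[ys::factor, xs::factor] = frame
    return hi

# Four "cameras" with half-pixel offsets -> 2x per axis (four-fold pixel count)
shifts = [(0.0, 0.0), (0.0, 0.5), (0.5, 0.0), (0.5, 0.5)]
rng = np.random.default_rng(6)
truth = rng.random((8, 8))
frames = [truth[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in shifts]
recon = shift_and_add(frames, shifts)
```

With ideal sampling the fine grid is filled exactly; in practice the offsets must be estimated and the result deconvolved, which is where the computational machinery of the paper comes in.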