    Quantifying Interpretability Loss due to Image Compression

    Image Quality Modeling and Characterization of Nyquist Sampled Framing Systems with Operational Considerations for Remote Sensing

    The trade between detector and optics performance is often conveyed through the Q metric, defined as the ratio of detector sampling frequency to optical cutoff frequency. Historically, sensors have operated at Q~1, which introduces aliasing but increases the system modulation transfer function (MTF) and signal-to-noise ratio (SNR). Though mathematically suboptimal, such designs have been operationally ideal when considering system parameters such as pointing stability and detector performance. Substantial advances in the read noise and quantum efficiency of modern detectors may compensate for the drawbacks of rebalancing detector/optics performance, presenting an opportunity to revisit the potential for implementing Nyquist-sampled (Q~2) sensors. A digital image chain simulation is developed and validated against a laboratory testbed using objective and subjective assessments. Objective assessments are accomplished by comparing the modeled MTF with measurements from slant-edge photographs. Subjective assessments are carried out through a psychophysical study in which subjects rate simulation and testbed imagery against a Delta-NIIRS scale with the aid of a marker set. Using the validated model, additional test cases are simulated to study the effects of increased detector sampling on image quality with operational considerations. First, a factorial experiment using Q-sampling, pointing stability, integration time, and detector performance is conducted to measure the main effects and interactions of each on the response variable, Delta-NIIRS. To assess the fidelity of current models, variants of the General Image Quality Equation (GIQE) are evaluated against subject-provided ratings, and two modified GIQE versions are proposed. Finally, using the validated simulation and modified IQEs, trades are conducted to ascertain the feasibility of implementing Q~2 designs in future systems.
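
    The Q metric above is a simple ratio; a minimal sketch (not from the thesis, with assumed example parameters) of how it might be computed for a diffraction-limited visible-band system:

    # Illustrative sketch (not from the thesis): computing the Q metric for a
    # notional imaging system. Parameter values are assumptions for illustration.
    def q_metric(wavelength_m, f_number, pixel_pitch_m):
        """Q = detector sampling frequency / optical cutoff frequency.

        Optical cutoff frequency of a diffraction-limited system: 1 / (wavelength * F#).
        Detector sampling frequency: 1 / pixel pitch.
        """
        optical_cutoff = 1.0 / (wavelength_m * f_number)   # cycles per meter at the focal plane
        sampling_freq = 1.0 / pixel_pitch_m                # samples per meter at the focal plane
        return sampling_freq / optical_cutoff              # equivalently wavelength * F# / pitch

    # Example: visible-band system with assumed values
    print(q_metric(wavelength_m=550e-9, f_number=10.0, pixel_pitch_m=5.5e-6))  # ~1.0, aliased design
    print(q_metric(wavelength_m=550e-9, f_number=20.0, pixel_pitch_m=5.5e-6))  # ~2.0, Nyquist sampled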

    A Time-Variant Value-Focused Methodology for Supporting Pre-Acquisition

    Military operations are dynamic in nature, as time-dependent requirements or adversary actions can contribute to differing levels of mission performance among systems. Analyses of future military operations commonly use multi-criteria decision analysis techniques that rely on value-focused thinking (VFT) to analyze and ultimately rank alternatives during the Analysis of Alternatives phase of the acquisition process. Traditional VFT approaches are not typically employed with the intention of analyzing the time-variant performance of alternatives. In this research, a holistic approach that integrates fundamental practices such as VFT, systems architecture, and modeling and simulation is used to analyze time-dependent data on an alternative's performance within an operational environment. Incorporating this approach prior to Milestone A of the acquisition process allows for the identification of time-based capability gaps and additional dynamic analysis of possible alternatives, providing a flexible means of assessment. As part of this research, the pre-acquisition methodology is applied to a hypothetical multi-domain Intelligence, Surveillance, and Reconnaissance mission in order to exemplify multiple time-dependent analysis possibilities.
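
    As a rough illustration of the kind of time-variant, value-focused scoring described above, the following sketch evaluates a hypothetical additive value model at discrete time steps; the measures, weights, and value functions are invented for illustration and are not taken from this research:

    # Minimal sketch (assumed, not this research's model): a time-variant additive
    # value function V(a, t) = sum_i w_i * v_i(x_i(a, t)) scored at discrete times.
    from typing import Callable, Dict, List

    def time_variant_value(weights: Dict[str, float],
                           value_fns: Dict[str, Callable[[float], float]],
                           measures_over_time: List[Dict[str, float]]) -> List[float]:
        """Return the weighted value of one alternative at each time step."""
        scores = []
        for measures in measures_over_time:
            v = sum(weights[m] * value_fns[m](x) for m, x in measures.items())
            scores.append(v)
        return scores

    # Hypothetical ISR measures: detection probability and revisit time (minutes)
    weights = {"p_detect": 0.6, "revisit_min": 0.4}
    value_fns = {
        "p_detect": lambda x: x,                       # higher is better, already 0..1
        "revisit_min": lambda x: max(0.0, 1 - x / 60)  # shorter revisit is better
    }
    timeline = [{"p_detect": 0.7, "revisit_min": 30},
                {"p_detect": 0.5, "revisit_min": 45}]
    print(time_variant_value(weights, value_fns, timeline))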

    Force and Moment Measurements Applicable to a Flexible Weapons System

    Continuing the development of the 6-DOF Motion Test Apparatus (MTA) for performing dynamic wind tunnel tests revealed the importance of a capable data acquisition (DAQ) system for collecting and analyzing data. Assembling and testing the DAQ system in conjunction with the MTA, the AFIT Low-Speed Wind Tunnel, and a selection of high-performance sensors was the primary focus of this research. Specifically, acquiring time-accurate aerodynamic force and moment measurements on a variety of test models was of high importance. With the real-time DAQ system hardware established, a National Instruments (NI) LabVIEW program was created to acquire data from an ATI Nano25-E force transducer, and a second LabVIEW program was modified to communicate with a MicroStrain® 3DM-GX4-15™ Inertial Measurement Unit (IMU). The complete DAQ hardware and software system was employed in both static and dynamic wind tunnel tests to collect the aerodynamic forces and moments acting on a NACA 0012 wing model. Dynamic testing involved pitch oscillation motions, which were tracked with the IMU, as well as pitch-plunge oscillation motions. Static tests yielded results that matched traditional sting-mounted wind tunnel tests, despite minor angle-of-attack differences. Dynamic measurements of the lift, drag, and pitch moment coefficients were within expectations for the pitch oscillation tests and the highest flow velocity case of the pitch-plunge oscillation test.
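
    As an illustration of the data reduction implied above, the following sketch (with assumed flow conditions and an assumed balance-axis convention, not the thesis's actual reduction code) resolves body-axis transducer forces into lift and drag coefficients:

    # Illustrative sketch (assumed values and conventions): converting body-axis
    # normal/axial forces from a transducer into lift and drag coefficients.
    import math

    def lift_drag_coefficients(normal_force, axial_force, alpha_deg,
                               rho=1.225, velocity=20.0, ref_area=0.05):
        """Resolve body-axis forces (N) into wind axes and nondimensionalize.

        rho: air density (kg/m^3), velocity: freestream speed (m/s),
        ref_area: wing reference area (m^2) -- all assumed example values.
        """
        a = math.radians(alpha_deg)
        lift = normal_force * math.cos(a) - axial_force * math.sin(a)
        drag = normal_force * math.sin(a) + axial_force * math.cos(a)
        q_inf = 0.5 * rho * velocity ** 2          # dynamic pressure (Pa)
        return lift / (q_inf * ref_area), drag / (q_inf * ref_area)

    print(lift_drag_coefficients(normal_force=6.0, axial_force=0.4, alpha_deg=5.0))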

    X-Ray Image Processing and Visualization for Remote Assistance of Airport Luggage Screeners

    X-ray technology is widely used today for airport luggage inspection. However, the ever-increasing sophistication of threats and threat-concealment measures, together with the natural complexity inherent to the contents of each individual piece of luggage, makes raw x-ray images obtained directly from inspection systems unsuitable for clearly showing various luggage and threat items, particularly low-density objects, which poses a great challenge for airport screeners. This thesis presents efforts to improve the rate of threat detection using image processing and visualization technologies. The principles of x-ray imaging for airport luggage inspection and the characteristics of single-energy and dual-energy x-ray data are first introduced. The image processing and visualization algorithms, selected and proposed for improving single-energy and dual-energy x-ray images, are then presented in four categories: (1) gray-level enhancement, (2) image segmentation, (3) pseudo-coloring, and (4) image fusion. The major contributions of this research include the identification of optimum combinations of common segmentation and enhancement methods, HSI-based color-coding approaches, and dual-energy image fusion algorithms (spatial information-based and wavelet-based). Experimental results generated with these image processing and visualization algorithms are shown and compared. Objective image quality measures are also explored in an effort to reduce the overhead of human subjective assessments and to provide more reliable evaluation results. Two software applications are developed: an x-ray image processing application (XIP) and a wireless tablet PC-based remote supervision system (RSS). In XIP, the preceding image processing and visualization algorithms are implemented in a user-friendly GUI. In RSS, the available image processing and visualization methods are ported to a wireless mobile supervisory station for screener assistance and supervision. Quantitative and on-site qualitative evaluations of various processed and fused x-ray luggage images demonstrate that the proposed image processing and visualization algorithms constitute an effective and feasible means of improving airport luggage inspection.
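
    A minimal sketch of the HSI-style pseudo-coloring idea described above, assuming synthetic low- and high-energy attenuation arrays; it is illustrative only and does not reproduce the thesis's specific algorithms:

    # Minimal sketch (not the thesis's exact algorithm): HSV/HSI-style pseudo-coloring
    # of dual-energy x-ray data, with the high/low-energy ratio driving hue (a rough
    # material cue) and overall attenuation driving value. Inputs are synthetic.
    import numpy as np
    from matplotlib.colors import hsv_to_rgb

    def dual_energy_pseudocolor(low_e, high_e, eps=1e-6):
        """low_e, high_e: 2-D float arrays in [0, 1] (higher = more attenuation)."""
        ratio = high_e / (low_e + eps)
        hue = np.clip(ratio / (ratio.max() + eps), 0.0, 1.0) * 0.66  # map ratio into the red-to-blue hue range
        sat = np.ones_like(hue)
        val = np.clip((low_e + high_e) / 2.0, 0.0, 1.0)
        return hsv_to_rgb(np.dstack([hue, sat, val]))                # H x W x 3 RGB image

    rng = np.random.default_rng(0)
    low = rng.random((64, 64))
    high = rng.random((64, 64))
    rgb = dual_energy_pseudocolor(low, high)
    print(rgb.shape)  # (64, 64, 3)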

    Toward Image-Based Three-Dimensional Reconstruction from Cubesats: Impacts of Spatial Resolution and SNR on Point Cloud Quality

    The adoption of cube-satellites (cubesats) by the space community has drastically lowered the cost of access to space and shortened development lifecycles relative to traditional decade-long programs costing hundreds of millions of dollars. Rapid deployment and low cost are attractive features of cubesat-based imaging that are conducive to applications such as disaster response and monitoring. One proposed application is 3D surface modeling through a high-revisit-rate constellation of cubesat imagers. This work begins with the characterization of an existing design for a cubesat imager in terms of ground sample distance (GSD), signal-to-noise ratio (SNR), and smear. From this characterization, an existing 3D reconstruction workflow is applied to datasets that have been degraded within the range of spatial resolutions and signal-to-noise ratios anticipated for the cubesat imager. The fidelity of the resulting point clouds is assessed locally for both an urban and a natural scene. The height of a building and the normals to its surfaces are calculated from the urban scene, while quarry depth estimates and rough volume estimates of a pile of rocks are produced from the natural scene. Though the reconstructed scene geometry and the completeness of the scene suffer noticeably from the degraded imagery, results indicate that useful information can still be extracted using some of these techniques up to a simulated GSD of 2 meters.
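
    A minimal sketch of the GSD portion of such a characterization, using simple nadir-viewing geometry and assumed example values rather than the actual cubesat design parameters:

    # Illustrative sketch (assumed values, not the characterized cubesat design):
    # ground sample distance (GSD) of a nadir-viewing imager from simple geometry.
    def ground_sample_distance(altitude_m, focal_length_m, pixel_pitch_m):
        """GSD = altitude * pixel_pitch / focal_length (nadir, flat-earth approximation)."""
        return altitude_m * pixel_pitch_m / focal_length_m

    # Example: 500 km orbit, 0.3 m focal length, 5.5 micron pixels
    print(ground_sample_distance(500e3, 0.3, 5.5e-6))  # ~9.2 m GSD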

    Integration, Testing, And Analysis Of Multispectral Imager On Small Unmanned Aerial System For Skin Detection

    Small Unmanned Aerial Systems (SUAS) have been utilized by the military, geological researchers, and first responders to provide information about the environment in real time. Hyperspectral Imagery (HSI) provides high-resolution data in the spatial and spectral dimensions; all objects, including skin, have unique spectral signatures. However, little research has been done to integrate HSI into SUAS due to cost and form factor. Multispectral Imagery (MSI) has proven capable of dismount detection with several distinct wavelengths. This research proposes a spectral imaging system that can detect dismounts from an SUAS. Factors that pertain to accurate dismount detection with an SUAS are also explored. Dismount skin detection from an aerial platform has an inherent difficulty compared to ground-based platforms. Computer vision registration, stereo camera calibration, and geolocation from autopilot telemetry are utilized to design a dismount detection platform using the Systems Engineering methodology. An average difference of 5.112% in ROC AUC values was recorded when comparing a line-scan spectral imager to the prototype area-scan imager. Results indicated that SUAS-based spectral imagers are capable tools in dismount detection protocols. Deficiencies associated with the test-expedient prototype are discussed and recommendations for further improvements are provided.
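
    A small sketch of how the ROC AUC comparison above might be computed with scikit-learn; the detection scores here are synthetic stand-ins, not data from the prototype imagers:

    # Minimal sketch (synthetic data, not this research's pipeline): comparing two
    # detectors' skin-detection scores via ROC AUC, as in the evaluation above.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    labels = rng.integers(0, 2, size=1000)                 # 1 = skin pixel, 0 = background
    scores_a = labels + rng.normal(0.0, 0.8, size=1000)    # stand-in for line-scan imager scores
    scores_b = labels + rng.normal(0.0, 1.0, size=1000)    # stand-in for area-scan prototype scores

    auc_a = roc_auc_score(labels, scores_a)
    auc_b = roc_auc_score(labels, scores_b)
    print(f"AUC A: {auc_a:.3f}  AUC B: {auc_b:.3f}  "
          f"difference: {100 * abs(auc_a - auc_b) / auc_a:.2f}%")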

    Spectral image utility for target detection applications

    In a wide range of applications, images convey useful information about scenes. The “utility” of an image is defined with reference to the specific task that an observer seeks to accomplish, and differs from the “fidelity” of the image, which seeks to capture the ability of the image to represent the true nature of the scene. In remote sensing of the earth, various means of characterizing the utility of satellite and airborne imagery have evolved over the years. Recent advances in the imaging modality of spectral imaging have enabled synoptic views of the earth at many finely sampled wavelengths over a broad spectral band. These advances challenge the ability of traditional earth observation image utility metrics to describe the rich information content of spectral images. Traditional approaches to image utility that are based on overhead panchromatic image interpretability by a human observer are not applicable to spectral imagery, which requires automated processing. This research establishes the context for spectral image utility by reviewing traditional approaches and current methods for describing spectral image utility. It proposes a new approach to assessing and predicting spectral image utility for the specific application of target detection. We develop a novel approach to assessing the utility of any spectral image using the target-implant method. This method is not limited by the requirements of traditional target detection performance assessment, which include ground truth and an adequate number of target pixels in the scene. The flexibility of this approach is demonstrated by assessing the utility of a wide range of real and simulated spectral imagery over a variety of target detection scenarios. The assessed image utility may be summarized to any desired level of specificity based on the image analysis requirements. We also present an approach to predicting spectral image utility that derives statistical parameters directly from an image and uses them to model target detection algorithm output. The image-derived predicted utility is directly comparable to the assessed utility, and the accuracy of prediction is shown to improve with statistical models that capture the non-Gaussian behavior of real spectral image target detection algorithm outputs. The sensitivity of the proposed spectral image utility metric to various image chain parameters is examined in detail, revealing characteristics, requirements, and limitations that provide insight into the relative importance of parameters in the image utility. The results of these investigations lead to a better understanding of spectral image information vis-à-vis target detection performance that will hopefully prove useful to the spectral imagery analysis community and represent a step towards quantifying the ability of a spectral image to satisfy information exploitation requirements.
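
    A minimal sketch of the target-implant idea on synthetic data: a target spectrum is linearly mixed into selected background pixels at a fractional fill, and every pixel is then scored with a simple matched filter. This is illustrative only and does not reproduce the dissertation's workflow:

    # Illustrative sketch (synthetic data): implant a target spectrum into background
    # pixels via linear mixing, then score all pixels with a simple matched filter.
    import numpy as np

    def implant_targets(cube, target_spectrum, pixel_idx, fill=0.3):
        """cube: (num_pixels, num_bands) reflectance array; implant at pixel_idx."""
        implanted = cube.copy()
        implanted[pixel_idx] = (1 - fill) * cube[pixel_idx] + fill * target_spectrum
        return implanted

    def matched_filter_scores(cube, target_spectrum):
        mu = cube.mean(axis=0)
        cov = np.cov(cube, rowvar=False) + 1e-6 * np.eye(cube.shape[1])
        w = np.linalg.solve(cov, target_spectrum - mu)
        return (cube - mu) @ w

    rng = np.random.default_rng(2)
    background = rng.normal(0.3, 0.05, size=(5000, 50))   # synthetic background cube
    target = np.full(50, 0.6)                             # synthetic target spectrum
    implant_at = rng.choice(5000, size=25, replace=False)
    cube = implant_targets(background, target, implant_at, fill=0.3)
    scores = matched_filter_scores(cube, target)
    print(scores[implant_at].mean(), scores.mean())       # implanted pixels score higher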

    Perceptual Image Quality Of Launch Vehicle Imaging Telescopes

    A large fleet (in the hundreds) of high-quality telescopes is used for tracking and imaging of launch vehicles during ascent from Cape Canaveral Air Force Station and Kennedy Space Center. A maintenance tool has been developed for use with these telescopes. The tool requires rankings of telescope condition in terms of the ability to generate useful imagery; it is thus a case of ranking telescope conditions on the basis of the perceptual image quality of their imagery. Perceptual image quality metrics that are well correlated to observer opinions of image quality have been available for several decades. However, these are quite limited in their applications, not being designed to compare various optical systems. The perceptual correlation of the metrics implies that a constant image quality curve (such as the boundary between two qualitative categories labeled as excellent and good) would have a constant value of the metric. This is not the case if the optical system parameters (such as object distance or aperture diameter) are varied. No published data on such direct variation are available, and this dissertation presents an investigation into the perceptual metric responses as system parameters are varied. This investigation leads to some non-intuitive conclusions. The perceptual metrics are reviewed, as are more common metrics and their inability to perform in the manner needed for this research. Perceptual test methods are also reviewed, as is the human visual system. Image formation theory is presented in a non-traditional form, yielding the surprising result that perceptual image quality is invariant under changes in focal length if the final displayed image remains constant. Experimental results are presented of changes in perceived image quality as aperture diameter is varied. Results are analyzed, and shortcomings in the process and metrics are discussed. Using the test results, predictions are made about the form of the metric response to object distance variations, and subsequent testing is conducted to validate the predictions. The utility of the results, limitations of applicability, and the immediate ability to further generalize the results are presented.
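
    As a small illustration of a perceptually motivated metric of the kind discussed above, the following sketch scores a blurred image against a reference using SSIM; the imagery is synthetic and SSIM is used here only as a stand-in example, not as the dissertation's metric:

    # Minimal sketch (illustrative only): scoring a degraded image against a reference
    # with SSIM, one common perceptually motivated metric. The dissertation's specific
    # metrics and imagery are not reproduced here.
    import numpy as np
    from skimage.metrics import structural_similarity
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(3)
    reference = rng.random((256, 256))                 # stand-in for a tracking-telescope frame
    blurred = gaussian_filter(reference, sigma=2.0)    # simulate a degraded optical system

    score = structural_similarity(reference, blurred, data_range=1.0)
    print(f"SSIM of blurred vs. reference: {score:.3f}")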