Scene Context Dependency of Pattern Constancy of Time Series Imagery
A fundamental element of future generic pattern recognition technology is the ability to extract similar patterns for the same scene despite wide-ranging extraneous variables, including lighting, turbidity, sensor exposure variations, and signal noise. In the process of demonstrating pattern constancy of this kind for retinex/visual servo (RVS) image enhancement processing, we found that pattern constancy performance depended somewhat on scene content. Most notably, scene topography, and in particular the scale and extent of the topography in an image, affects pattern constancy the most. This paper explores these effects in more depth and presents experimental data from several time series tests. These results further quantify the impact of topography on pattern constancy. Despite this residual inconstancy, the results of overall pattern constancy testing support the idea that RVS image processing can be a universal front-end for generic visual pattern recognition. While the effects on pattern constancy were significant, the RVS processing still achieves a high degree of pattern constancy over a wide spectrum of scene content diversity and wide-ranging extraneous variations in lighting, turbidity, and sensor exposure.
Smart Image Enhancement Process
Contrast and lightness measures are used to first classify the image as either non-turbid or turbid. If turbid, the original image is enhanced to generate a first enhanced image. If non-turbid, the original image is classified in terms of a merged contrast/lightness score based on the contrast and lightness measures. The non-turbid image is enhanced to generate a second enhanced image when a poor contrast/lightness score is associated therewith. When the second enhanced image has a poor contrast/lightness score associated therewith, this image is enhanced to generate a third enhanced image. A sharpness measure is computed for one image that is selected from (i) the non-turbid image, (ii) the first enhanced image, (iii) the second enhanced image when a good contrast/lightness score is associated therewith, and (iv) the third enhanced image. If the selected image is not sharp, it is sharpened to generate a sharpened image. The final image is selected from the selected image and the sharpened image.
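The branching logic above reads naturally as a short decision cascade. The sketch below is a minimal Python/OpenCV illustration of that cascade; the particular contrast, lightness, and sharpness measures, the thresholds, and the CLAHE and unsharp-mask operators are placeholder assumptions, not the enhancement steps claimed in the patent.

```python
import cv2

# Minimal sketch of the turbid/non-turbid decision cascade described above.
# All measures, thresholds, and enhancement operators are illustrative stand-ins.

def contrast_lightness(gray):
    """Simple global contrast (std. dev.) and lightness (mean) measures."""
    return float(gray.std()), float(gray.mean())

def is_turbid(gray, contrast_thresh=25.0):
    """Treat very low global contrast as 'turbid' (haze/murk)."""
    contrast, _ = contrast_lightness(gray)
    return contrast < contrast_thresh

def merged_score_is_poor(gray, contrast_thresh=40.0, lightness_band=(60, 190)):
    """Merged contrast/lightness score: poor if flat or badly exposed."""
    contrast, lightness = contrast_lightness(gray)
    return contrast < contrast_thresh or not (lightness_band[0] <= lightness <= lightness_band[1])

def enhance(gray):
    """Placeholder enhancement step (CLAHE stands in for the patented step)."""
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return clahe.apply(gray)

def is_sharp(gray, thresh=100.0):
    """Variance of the Laplacian as a crude sharpness measure."""
    return cv2.Laplacian(gray, cv2.CV_64F).var() > thresh

def sharpen(gray):
    """Unsharp masking."""
    blur = cv2.GaussianBlur(gray, (0, 0), sigmaX=2.0)
    return cv2.addWeighted(gray, 1.5, blur, -0.5, 0)

def smart_enhance(gray):
    if is_turbid(gray):
        selected = enhance(gray)                      # first enhanced image
    elif merged_score_is_poor(gray):
        second = enhance(gray)                        # second enhanced image
        selected = enhance(second) if merged_score_is_poor(second) else second
    else:
        selected = gray                               # non-turbid, good score
    return selected if is_sharp(selected) else sharpen(selected)
```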
Machine Vision Identification of Airport Runways With Visible and Infrared Videos
A widely used machine vision pipeline based on the Speeded-Up Robust Features (SURF) detector was applied to the problem of identifying a runway from a universe of known runways, which was constructed using video records of 19 straight-in glidepath approaches to nine runways. The recordings studied included visible, short-wave infrared, and long-wave infrared videos in clear conditions, rain, and fog. Both daytime and nighttime runway approaches were used. High detection specificity (identification of the runway approached and rejection of the other runways in the universe) was observed in all conditions (greater than 90% Bayesian posterior probability). In the visible band, repeatability (identification of a given runway across multiple videos of it) was observed only if illumination (day versus night) was the same and approach visibility was good. Some repeatability was found across visible and short-wave sensor bands. Camera-based geolocation during aircraft landing was compared to the standard Charted Visual Approach Procedure.
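As a rough illustration of the identification step, the sketch below matches SURF descriptors from a query frame against stored reference descriptors for each runway in the universe and votes by match count. It assumes an OpenCV build with the non-free xfeatures2d module; the runway identifiers and file names are hypothetical, and the study's geometric verification and Bayesian posterior scoring are not reproduced.

```python
import cv2

# Match-and-vote runway identification sketch using SURF descriptors.
# The "universe" is simply a dict of reference descriptors per runway.
surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
matcher = cv2.BFMatcher(cv2.NORM_L2)

def descriptors(gray):
    _, desc = surf.detectAndCompute(gray, None)
    return desc

def identify_runway(query_gray, universe, ratio=0.7):
    """Return the runway whose reference descriptors attract the most
    ratio-test-passing SURF matches from the query frame."""
    query_desc = descriptors(query_gray)
    scores = {}
    for runway_id, ref_desc in universe.items():
        pairs = matcher.knnMatch(query_desc, ref_desc, k=2)
        good = [p[0] for p in pairs
                if len(p) == 2 and p[0].distance < ratio * p[1].distance]
        scores[runway_id] = len(good)
    return max(scores, key=scores.get), scores

# Hypothetical usage with illustrative identifiers and file names:
# universe = {"runway_A": descriptors(cv2.imread("runway_A_ref.png", 0)),
#             "runway_B": descriptors(cv2.imread("runway_B_ref.png", 0))}
# best, votes = identify_runway(cv2.imread("approach_frame.png", 0), universe)
```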
Method of improving a digital image
A method of improving a digital image is provided. The image is initially represented by digital data indexed to represent positions on a display. The digital data is indicative of an intensity value I_i(x,y) for each position (x,y) in each i-th spectral band. The intensity value for each position in each i-th spectral band is adjusted to generate an adjusted intensity value for each position in each i-th spectral band in accordance with

R_i(x,y) = \sum_{n=1}^{N} W_n \left\{ \log I_i(x,y) - \log\left[ F_n(x,y) * I_i(x,y) \right] \right\}, \quad i = 1, \ldots, S,

where S is the number of unique spectral bands included in said digital data, N is the number of surround functions, W_n is a weighting factor, and * denotes the convolution operator. Each surround function F_n(x,y) is uniquely scaled to improve an aspect of the digital image, e.g., dynamic range compression, color constancy, and lightness rendition. The adjusted intensity value for each position in each i-th spectral band is filtered with a common function and then presented to a display device. For color images, a novel color restoration step is added to give the image true-to-life color that closely matches human observation.
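A minimal sketch of the per-band adjustment above, assuming Gaussian surround functions F_n. The sigma values and equal weights W_n are common multi-scale retinex defaults rather than the patent's specific scalings, and the color restoration and final common filtering steps are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multi_scale_retinex(image, sigmas=(15, 80, 250), weights=None, eps=1.0):
    """image: float array of shape (H, W, S) with S spectral bands.
    Gaussian surrounds and equal weights are assumed defaults."""
    sigmas = list(sigmas)
    weights = weights or [1.0 / len(sigmas)] * len(sigmas)    # W_n
    out = np.zeros(image.shape, dtype=np.float64)
    for i in range(image.shape[2]):                            # each band i
        band = image[..., i].astype(np.float64) + eps
        for w_n, sigma in zip(weights, sigmas):
            surround = gaussian_filter(band, sigma)            # F_n * I_i
            out[..., i] += w_n * (np.log(band) - np.log(surround + eps))
    return out
```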
A quantitative approach to the sexual reproductive biology and population structure in some arctic flowering plants: Dryas integrifolia, Silene acaulis and Ranunculus nivalis
The population ecology of three species in northwest Greenland (Dryas integrifolia, Silene acaulis and Ranunculus nivalis) was studied in two consecutive seasons. Flowering phenology, population structure, flowering biology (including numbers of pollen grains per anther, germinated pollen grains per stigma, ovules per gynoecium, seeds per fruit), pollination and insect activity were the main features investigated.
They were related to the micro- and macroclimatic conditions.
The results can be summarized as follows:
The unpredictability of the quality and length of the growing season makes the success of the reproductive cycle (i.e. production of mature seeds) very uncertain.
Seedlings of all three species are present at the studied sites.
Most of these seedlings disappear.
Population structure results indicate that seedlings become established in at least some years, but input of new individuals is episodic.
Of the three species studied, R. nivalis allocates most resources to reproduction.
The percentage of normal pollen varies mostly between plants and less between days and sites (D. integrifolia).
All three species are self-compatible.
Full seed set is obtained only after insect visits.
In S. acaulis seed set may be limited by the number of pollen grains reaching the stigmas.
The flowers provide food (pollen and nectar) for the insects. They also provide shelter, warmth, and a mating place for them.
There are sufficient insect visits per flower to ensure seed set, except possibly in S. acaulis.
About 1 % of total pollen grain production is found as germinated pollen grains on stigmas.
The utilization of pollen and stigmas varies between the three species. R. nivalis is the most efficient, whereas D. integrifolia is the most extravagant.
A cold and rainy summer in 1976 resulted in conspicuously lower seed set in 1977 in spite of the latter summer being comparatively dry and warm.
A 'reproductive budget' quantifying the various steps in the reproductive cycle is presented.
The Spatial Vision Tree: A Generic Pattern Recognition Engine: Scientific Foundations, Design Principles, and Preliminary Tree Design
New foundational ideas are used to define a novel approach to generic visual pattern recognition. These ideas proceed from the starting point of the intrinsic equivalence of noise reduction and pattern recognition when noise reduction is taken to its theoretical limit of explicit matched filtering. This led us to extend the logic of sparse coding with basis function transforms, used for both de-noising and pattern recognition, to the full pattern specificity of a lexicon of matched-filter pattern templates. A key hypothesis is that such a lexicon can be constructed and is, in fact, a generic visual alphabet of spatial vision. Hence, it provides a tractable solution for the design of a generic pattern recognition engine. Here we present the key scientific ideas, the basic design principles that emerge from these ideas, and a preliminary design of the Spatial Vision Tree (SVT). The latter is based upon a cryptographic approach whereby we measure a large aggregate estimate of the frequency of occurrence (FOO) for each pattern. These distributions are employed together with Hamming distance criteria to design a two-tier tree. Then, using information theory, these same FOO distributions are used to define a precise method for pattern representation. Finally, the experimental performance of the preliminary SVT on computer-generated test images and complex natural images is assessed.
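As one heavily hedged reading of how FOO statistics and a Hamming-distance criterion could organize pattern templates into a two-tier tree, the sketch below takes the most frequent binary templates as tier-one roots and attaches the remaining templates to the nearest root within a Hamming radius. The prototype-selection rule, root count, and radius are assumptions for illustration only, not the SVT's actual design criteria.

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary template arrays."""
    return int(np.count_nonzero(a != b))

def build_two_tier_tree(patterns, foo_counts, n_roots=16, max_dist=4):
    """patterns: (P, B) array of binary templates; foo_counts: (P,) frequencies.
    Tier 1: the n_roots most frequent templates. Tier 2: every remaining
    template attached to the nearest root within max_dist Hamming bits."""
    order = np.argsort(foo_counts)[::-1]            # most frequent first
    roots = [patterns[i] for i in order[:n_roots]]
    tree = {r: [] for r in range(len(roots))}
    unassigned = []
    for idx in order[n_roots:]:
        dists = [hamming(patterns[idx], root) for root in roots]
        best = int(np.argmin(dists))
        (tree[best] if dists[best] <= max_dist else unassigned).append(int(idx))
    return roots, tree, unassigned
```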
Processing Digital Imagery to Enhance Perceptions of Realism
Multi-scale retinex with color restoration (MSRCR) is a method of processing digital image data based on Edwin Land's retinex (retina + cortex) theory of human color vision. An outgrowth of basic scientific research and its application to NASA's remote-sensing mission, MSRCR is embodied in a general-purpose algorithm that greatly improves the perception of visual realism and the quantity and quality of perceived information in a digitized image. In addition, the MSRCR algorithm includes provisions for automatic corrections to accelerate and facilitate what could otherwise be a tedious image-editing process. The MSRCR algorithm has been, and is expected to continue to be, the basis for development of commercial image-enhancement software designed to extend and refine its capabilities for diverse applications.
Temperature profiles in high gradient furnaces
Accurate temperature measurement of the furnace environment is very important in both the science and technology of crystal growth, as well as in many other materials processing operations. A high degree of both accuracy and precision is acutely needed in the directional solidification of compound semiconductors, in which the temperature profiles control the freezing isotherm, which, in turn, affects the composition of the growth with a concomitant feedback perturbation on the temperature profile. Directional solidification requires a furnace configuration that will transport heat through the sample being grown. A common growth procedure is the Bridgman-Stockbarger technique, which basically consists of a hot zone and a cold zone separated by an insulator. In a normal growth procedure, the material, contained in an ampoule, is melted in the hot zone and is then moved relative to the furnace toward the cold zone, and solidification occurs in the insulated region. Since the primary path of heat between the hot and cold zones is through the sample, both axial and radial temperature gradients exist in the region of the growth interface. There is a need to know the temperature profile of the growth furnace with the crystal that is to be grown as the thermal load. However, it is usually not feasible to insert thermocouples inside an ampoule, and thermocouples attached to the outside wall of the ampoule have both a thermal and a mechanical contact problem as well as a view angle problem. The objective is to present a technique of calibrating a furnace with a thermal load that closely matches the sample to be grown and to describe procedures that circumvent both the thermal and mechanical contact problems.
Anomalous Cases of Astronaut Helmet Detection
An astronaut's helmet is an invariant, rigid image element that is well suited for identification and tracking using current machine vision technology. Future space exploration will benefit from the development of astronaut detection software for search and rescue missions based on EVA helmet identification. However, helmets are solid white, except for metal brackets to attach accessories such as supplementary lights. We compared the performance of a widely used machine vision pipeline on a standard-issue NASA helmet with and without affixed experimental feature-rich patterns. Performance on the patterned helmet was far more robust. We found that four different feature-rich patterns are sufficient to identify a helmet and determine orientation as it is rotated about the yaw, pitch, and roll axes. During helmet rotation, the field of view changes to frames containing parts of two or more feature-rich patterns. We took reference images in these locations to fill in detection gaps. These multiple-pattern reference images added substantial benefit to detection; however, they generated the majority of the anomalous cases. In these few instances, our algorithm keys in on one feature-rich pattern of a multiple-pattern reference and makes an incorrect prediction of the location of the other feature-rich patterns. We describe and make recommendations on ways to mitigate anomalous cases in which detection of one or more feature-rich patterns fails. While the number of cases is only a small percentage of the tested helmet orientations, they illustrate important design considerations for future spacesuits. In addition to our four successful feature-rich patterns, we present unsuccessful patterns and discuss the cause of their poor performance from a machine vision perspective. Future helmets designed with these considerations will enable automated astronaut detection and thereby enhance mission operations and extraterrestrial search and rescue.
Real-time Enhancement, Registration, and Fusion for a Multi-Sensor Enhanced Vision System
Over the last few years, NASA Langley Research Center (LaRC) has been developing an Enhanced Vision System (EVS) to aid pilots while flying in poor visibility conditions. The EVS captures imagery using two infrared video cameras. The cameras are placed in an enclosure that is mounted and flown forward-looking underneath the NASA LaRC ARIES 757 aircraft. The data streams from the cameras are processed in real time and displayed on monitors on-board the aircraft. With proper processing, the camera system can provide better-than-human-observed imagery, particularly during poor visibility conditions. However, achieving this goal requires several different stages of processing, including enhancement, registration, and fusion, as well as specialized processing hardware for real-time performance. We are using a real-time implementation of the Retinex algorithm for image enhancement, affine transformations for registration, and weighted sums to perform fusion. All of the algorithms are executed on a single TI DM642 digital signal processor (DSP) clocked at 720 MHz. The image processing components were added to the EVS system, tested, and demonstrated during flight tests in August and September of 2005. In this paper, we briefly discuss the EVS image processing hardware and algorithms. We then discuss implementation issues and show examples of the results obtained during flight tests. Keywords: enhanced vision system, image enhancement, retinex, digital signal processing, sensor fusion
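The registration and fusion stages lend themselves to a compact illustration. The sketch below uses OpenCV on a desktop rather than the DM642 DSP implementation; the 2x3 affine matrix stands in for whatever sensor-alignment calibration the real system uses, and the band names and fusion weights are placeholders.

```python
import cv2
import numpy as np

def register_and_fuse(cam_a, cam_b, affine_2x3, w_a=0.6, w_b=0.4):
    """Warp camera B's frame into camera A's coordinates (affine
    registration), then combine the two with a weighted sum (fusion)."""
    h, w = cam_a.shape[:2]
    cam_b_registered = cv2.warpAffine(cam_b, affine_2x3, (w, h))
    return cv2.addWeighted(cam_a, w_a, cam_b_registered, w_b, 0)

# Placeholder alignment: a small scale/shift between the two sensors.
# affine = np.array([[1.01, 0.0, 3.5], [0.0, 1.01, -2.0]], dtype=np.float32)
# fused = register_and_fuse(enhanced_frame_a, enhanced_frame_b, affine)
```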