
    Spectral image utility for target detection applications

    In a wide range of applications, images convey useful information about scenes. The “utility” of an image is defined with reference to the specific task that an observer seeks to accomplish, and differs from the “fidelity” of the image, which seeks to capture the ability of the image to represent the true nature of the scene. In remote sensing of the earth, various means of characterizing the utility of satellite and airborne imagery have evolved over the years. Recent advances in the imaging modality of spectral imaging have enabled synoptic views of the earth at many finely sampled wavelengths over a broad spectral band. These advances challenge the ability of traditional earth observation image utility metrics to describe the rich information content of spectral images. Traditional approaches to image utility that are based on overhead panchromatic image interpretability by a human observer are not applicable to spectral imagery, which requires automated processing. This research establishes the context for spectral image utility by reviewing traditional approaches and current methods for describing spectral image utility. It proposes a new approach to assessing and predicting spectral image utility for the specific application of target detection. We develop a novel approach to assessing the utility of any spectral image using the target-implant method. This method is not limited by the requirements of traditional target detection performance assessment, which need ground truth and an adequate number of target pixels in the scene. The flexibility of this approach is demonstrated by assessing the utility of a wide range of real and simulated spectral imagery over a variety of target detection scenarios. The assessed image utility may be summarized to any desired level of specificity based on the image analysis requirements. We also present an approach to predicting spectral image utility that derives statistical parameters directly from an image and uses them to model target detection algorithm output. The image-derived predicted utility is directly comparable to the assessed utility, and the accuracy of prediction is shown to improve with statistical models that capture the non-Gaussian behavior of real spectral image target detection algorithm outputs. The sensitivity of the proposed spectral image utility metric to various image chain parameters is examined in detail, revealing characteristics, requirements, and limitations that provide insight into the relative importance of parameters in the image utility. The results of these investigations lead to a better understanding of spectral image information vis-à-vis target detection performance that will hopefully prove useful to the spectral imagery analysis community and represent a step towards quantifying the ability of a spectral image to satisfy information exploitation requirements.
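
    As a concrete illustration of the target-implant idea, the minimal Python sketch below mixes a known target spectrum into randomly chosen background pixels and scores every pixel with a classical spectral matched filter built from image-derived statistics. All names, the implant fraction, and the choice of detector are assumptions for illustration; the abstract does not prescribe this exact algorithm.

        import numpy as np

        def implant_target(cube, target_spectrum, pixel_idx, fraction=0.5):
            # Linearly mix the target spectrum into selected background pixels;
            # `fraction` is the implant fill fraction (a free parameter here).
            implanted = cube.copy()
            implanted[pixel_idx] = ((1 - fraction) * cube[pixel_idx]
                                    + fraction * target_spectrum)
            return implanted

        def matched_filter_scores(cube, target_spectrum):
            # Classical spectral matched filter using the image mean and
            # covariance: one plausible stand-in for the target detection
            # algorithms the abstract refers to.
            mu = cube.mean(axis=0)
            cov_inv = np.linalg.pinv(np.cov(cube, rowvar=False))
            d = target_spectrum - mu
            w = cov_inv @ d / (d @ cov_inv @ d)
            return (cube - mu) @ w

        # Toy data: a 10000-pixel, 50-band cube with 20 implanted pixels.
        rng = np.random.default_rng(0)
        cube = rng.normal(size=(10000, 50))
        target = rng.normal(loc=2.0, size=50)
        idx = rng.choice(10000, size=20, replace=False)
        scores = matched_filter_scores(implant_target(cube, target, idx), target)

        # One possible utility summary: separation of implant vs. background scores.
        mask = np.zeros(10000, dtype=bool)
        mask[idx] = True
        print(scores[mask].mean(), scores[~mask].mean())

    Because the implanted pixel locations are known by construction, detection performance can be scored without the ground truth that traditional assessment requires.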

    Visual Tactile Integration in Rats and Underlying Neuronal Mechanisms

    Our experience of the world depends on integration of cues from multiple senses to form unified percepts. How the brain merges information across sensory modalities has been the object of debate. To measure how rats bring together information across sensory modalities, we devised an orientation categorization task that combines vision and touch. Rats encounter an object, comprised of alternating black and white raised bars, that looks and feels like a grating and can be explored by vision (V), touch (T), or both (VT). The grating is rotated to assume one orientation on each trial, spanning a range of 180 degrees. Rats learn to lick one spout for orientations of 0° ± 45° (“horizontal”) and the opposite spout for orientations of 90° ± 45° (“vertical”). Though training was in the VT condition, rats could recognize the object and apply the rules of the task on first exposure to the V and T conditions. This suggests that the multimodal percept corresponds to that of the single modalities. Quantifying their performance, we found that rats have good orientation acuity using their whiskers and snout (T condition); however, under our default conditions, performance is typically superior by vision (V condition). Illumination could be adjusted to render V and T performance equivalent. Independently of whether V and T performance is made equivalent, performance is always highest in the VT condition, indicating multisensory enhancement. Is the enhancement optimal with respect to the best linear combination? To answer this, we computed the performance expected by optimal integration in the framework of Bayesian decision theory and found that most rats combine visual and tactile information better than predicted by the standard ideal-observer model. To confirm these results, we interpreted the data in two additional frameworks: summation of mutual information for each sensory channel and probabilities of independent events. All three analyses agree that rats combine vision and touch better than could be accounted for by a linear interaction. Electrophysiological recordings in the posterior parietal cortex (PPC) of behaving rats revealed that neuronal activity is modulated by the rats' decisions as well as by categorical or graded modality-shared representations of the stimulus orientation. Because the population of PPC neurons expresses activity ranging from strongly stimulus-related (e.g., graded in relation to stimulus orientation) to strongly choice-related (e.g., modulated by stimulus category but not by orientation within a category), we suggest that this region is involved in the percept-to-choice transformation.
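
    The standard ideal-observer benchmark mentioned above can be made concrete. Assuming independent Gaussian noise in each channel and an unbiased observer, optimal combination predicts d'_VT = sqrt(d'_V^2 + d'_T^2). The short Python sketch below (illustrative numbers only, not the study's data; function names are hypothetical) converts unimodal percent correct into the predicted bimodal performance:

        import numpy as np
        from scipy.stats import norm

        def pc_to_dprime(pc):
            # Percent correct -> sensitivity d', assuming an unbiased observer
            # in a two-category task (pc = Phi(d'/2)).
            return 2 * norm.ppf(pc)

        def predicted_bimodal_pc(pc_v, pc_t):
            # Optimal combination of two independent Gaussian channels:
            # d'_VT = sqrt(d'_V**2 + d'_T**2).
            d_vt = np.hypot(pc_to_dprime(pc_v), pc_to_dprime(pc_t))
            return norm.cdf(d_vt / 2)

        # Example: 80% correct by vision and 75% by touch predict ~86% correct
        # in the bimodal condition; observed VT performance above this benchmark
        # is the "better than linear combination" result described above.
        print(predicted_bimodal_pc(0.80, 0.75))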

    Supralinear and Supramodal Integration of Visual and Tactile Signals in Rats: Psychophysics and Neuronal Mechanisms

    To better understand how object recognition can be triggered independently of the sensory channel through which information is acquired, we devised a task in which rats judged the orientation of a raised, black and white grating. They learned to recognize two categories of orientation: 0° ± 45° ("horizontal") and 90° ± 45° ("vertical"). Each trial required a visual (V), a tactile (T), or a visual-tactile (VT) discrimination; VT performance was better than that predicted by optimal linear combination of V and T signals, indicating synergy between sensory channels. We examined posterior parietal cortex (PPC) and uncovered key neuronal correlates of the behavioral findings: PPC carried both graded information about object orientation and categorical information about the rat's upcoming choice; single neurons exhibited identical responses under the three modality conditions. Finally, a linear classifier of neuronal population firing replicated the behavioral findings. Taken together, these findings suggest that PPC is involved in the supramodal processing of shape. Knowledge about objects can be accessed through multiple sensory pathways. Nikbakht et al. find that rats judge object orientation by synergistically combining signals from vision and touch; posterior parietal cortex seems to be involved in the supramodal knowledge of orientation.
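
    A minimal sketch of such a linear population readout is shown below, using a logistic-regression decoder on a synthetic trials-by-neurons firing-rate matrix. The paper's exact classifier and cross-validation scheme are not specified in this abstract, so these choices, and all names and numbers, are assumptions for illustration:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        # Synthetic stand-in for a trials-by-neurons firing-rate matrix.
        rng = np.random.default_rng(1)
        n_trials, n_neurons = 400, 60
        labels = rng.integers(0, 2, n_trials)        # 0 = horizontal, 1 = vertical
        tuning = rng.normal(size=n_neurons)          # per-neuron category preference
        rates = (rng.normal(size=(n_trials, n_neurons))
                 + np.outer(labels - 0.5, tuning))

        decoder = LogisticRegression(max_iter=1000)  # linear readout of the population
        accuracy = cross_val_score(decoder, rates, labels, cv=5).mean()
        print("decoded category accuracy:", accuracy)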