Neural Dynamics of Motion Perception: Direction Fields, Apertures, and Resonant Grouping
A neural network model of global motion segmentation by visual cortex is described. Called the Motion Boundary Contour System (BCS), the model clarifies how ambiguous local movements on a complex moving shape are actively reorganized into a coherent global motion signal. Unlike many previous researchers, we analyse how a coherent motion signal is imparted to all regions of a moving figure, not only to regions at which unambiguous motion signals exist. The model hereby suggests a solution to the global aperture problem. The Motion BCS describes how preprocessing of motion signals by a Motion Oriented Contrast Filter (MOC Filter) is joined to long-range cooperative grouping mechanisms in a Motion Cooperative-Competitive Loop (MOCC Loop) to control phenomena such as motion capture. The Motion BCS is computed in parallel with the Static BCS of Grossberg and Mingolla (1985a, 1985b, 1987). Homologous properties of the Motion BCS and the Static BCS, specialized to process movement directions and static orientations, respectively, support a unified explanation of many data about static form perception and motion form perception that have heretofore been unexplained or treated separately. Predictions about microscopic computational differences of the parallel cortical streams V1 --> MT and V1 --> V2 --> MT are made, notably the magnocellular thick stripe and parvocellular interstripe streams. It is shown how the Motion BCS can compute motion directions that may be synthesized from multiple orientations with opposite directions-of-contrast. 
Interactions of model simple cells, complex cells, hypercomplex cells, and bipole cells are described, with special emphasis given to new functional roles in direction disambiguation for endstopping at multiple processing stages and to the dynamic interplay of spatially short-range and long-range interactions.

Air Force Office of Scientific Research (90-0175); Defense Advanced Research Projects Agency (90-0083); Office of Naval Research (N00014-91-J-4100)
Digital Color Imaging
This paper surveys current technology and research in the area of digital color imaging. In order to establish the background and lay down terminology, fundamental concepts of color perception and measurement are first presented using vector-space notation and terminology. Present-day color recording and reproduction systems are reviewed along with the common mathematical models used for representing these devices. Algorithms for processing color images for display and communication are surveyed, and a forecast of research trends is attempted. An extensive bibliography is provided.
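The vector-space view of color that the survey uses can be illustrated with the standard sRGB-to-XYZ mapping: a per-channel linearization followed by a 3x3 matrix. A minimal sketch (the matrix and transfer function are the standard IEC 61966-2-1 / D65 ones, not anything specific to this survey):

```python
# Converting a nonlinear sRGB triple to CIE XYZ tristimulus values:
# undo the sRGB transfer function, then apply a 3x3 linear map.

SRGB_TO_XYZ = [
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
]

def linearize(c):
    """Undo the sRGB transfer function (c in [0, 1])."""
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

def srgb_to_xyz(rgb):
    """Map a nonlinear sRGB triple to XYZ (D65 white point)."""
    lin = [linearize(c) for c in rgb]
    return tuple(sum(m * v for m, v in zip(row, lin)) for row in SRGB_TO_XYZ)

# sRGB white lands on the D65 white point: X ~ 0.9505, Y = 1.0, Z ~ 1.089
print(srgb_to_xyz((1.0, 1.0, 1.0)))
```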
Neural Models of Motion Integration, Segmentation, and Probabilistic Decision-Making
How do brain mechanisms carry out the motion integration and segmentation processes that compute unambiguous global motion percepts from ambiguous local motion signals? Consider, for example, a deer running at variable speeds behind forest cover. The forest cover is an occluder that creates apertures through which fragments of the deer's motion signals are intermittently experienced. The brain coherently groups these fragments into a trackable percept of the deer along its trajectory. Form and motion processes are needed to accomplish this using feedforward and feedback interactions both within and across cortical processing streams. All the cortical areas V1, V2, MT, and MST are involved in these interactions. Figure-ground processes in the form stream through V2, such as the separation of occluding boundaries of the forest cover from the boundaries of the deer, select the motion signals which determine global object motion percepts in the motion stream through MT. Sparse, but unambiguous, feature tracking signals are amplified before they propagate across position and are integrated with far more numerous ambiguous motion signals. Figure-ground and integration processes together determine the global percept. A neural model predicts the processing stages that embody these form and motion interactions. Model concepts and data are summarized about motion grouping across apertures in response to a wide variety of displays, and probabilistic decision making in parietal cortex in response to random dot displays.

National Science Foundation (SBE-0354378); Office of Naval Research (N00014-01-1-0624)
Automatic Color Inspection for Colored Wires in Electric Cables
In this paper, an automatic optical inspection system for checking the sequence of colored wires in electric cables is presented. The system is able to inspect cables with flat connectors differing in the type and number of wires. This variability is managed automatically by means of a self-learning subsystem and requires neither manual input from the operator nor loading new data into the machine. The system is coupled to a connector crimping machine and, once the model of a correct cable is learned, it can automatically inspect each cable assembled by the machine. The main contributions of this paper are: (i) the self-learning system; (ii) a robust segmentation algorithm for extracting wires from images even if they are strongly bent and partially overlapped; (iii) a color recognition algorithm able to cope with highlights and different finishes of the wire insulation. We report the system evaluation over a period of several months during the actual production of large batches of different cables; tests demonstrated a high level of accuracy and the absence of false negatives, which is a key point in guaranteeing defect-free production.
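As an illustration of the third contribution, a hue-based color classifier can be made robust to specular highlights by discarding bright, unsaturated pixels before voting. This is a minimal sketch, not the authors' algorithm; the reference hues and thresholds below are hypothetical:

```python
# Hedged sketch of highlight-tolerant wire color recognition: pixels that
# are very bright but unsaturated carry little hue information (likely
# specular reflections off the insulation) and are excluded from the vote.
import colorsys

# Hypothetical reference hues (degrees) for a few wire colors.
REFERENCE_HUES = {"red": 0.0, "yellow": 60.0, "green": 120.0, "blue": 240.0}

def classify_wire(pixels, sat_min=0.25, val_max=0.95):
    """Vote for the nearest reference hue over non-highlight pixels."""
    votes = {name: 0 for name in REFERENCE_HUES}
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r, g, b)
        if s < sat_min and v > val_max:      # likely a specular highlight
            continue
        hue_deg = h * 360.0
        # circular distance on the hue wheel to each reference color
        best = min(REFERENCE_HUES,
                   key=lambda n: min(abs(hue_deg - REFERENCE_HUES[n]),
                                     360.0 - abs(hue_deg - REFERENCE_HUES[n])))
        votes[best] += 1
    return max(votes, key=votes.get)

# A mostly-red wire with one blown-out highlight pixel still reads as red.
print(classify_wire([(0.9, 0.1, 0.1), (0.85, 0.15, 0.1), (1.0, 1.0, 1.0)]))
```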
Aerospace medicine and biology: A continuing bibliography with indexes (supplement 341)
This bibliography lists 133 reports, articles and other documents introduced into the NASA Scientific and Technical Information System during September 1990. Subject coverage includes: aerospace medicine and psychology, life support systems and controlled environments, safety equipment, exobiology and extraterrestrial life, and flight crew behavior and performance.
Redefining A in RGBA: Towards a Standard for Graphical 3D Printing
Advances in multimaterial 3D printing have the potential to reproduce various visual appearance attributes of an object in addition to its shape. Since many existing 3D file formats encode color and translucency by RGBA textures mapped to 3D shapes, RGBA information is particularly important for practical applications. In contrast to color (encoded by RGB), which is specified by the object's reflectance, selected viewing conditions, and a standard observer, translucency (encoded by A) is linked neither to a measurable physical quantity nor to a perceptual one. Thus, reproducing translucency encoded by A is open to interpretation.

In this paper, we propose a rigorous definition for A suitable for use in graphical 3D printing, which is independent of the 3D printing hardware and software, and which links both optical material properties and perceptual uniformity for human observers. By deriving our definition from the absorption and scattering coefficients of virtual homogeneous reference materials with an isotropic phase function, we achieve two important properties. First, a simple adjustment of A is possible, which preserves the translucency appearance if an object is re-scaled for printing. Second, determining the value of A for a real (potentially non-homogeneous) material can be achieved by minimizing a distance function between light transport measurements of this material and simulated measurements of the reference materials. Such measurements can be conducted by commercial spectrophotometers used in graphic arts.

Finally, we conduct visual experiments employing the method of constant stimuli, and derive from them an embedding of A into a nearly perceptually uniform scale of translucency for the reference materials.

Comment: 20 pages (incl. appendices), 20 figures. Version with higher quality images: https://cloud-ext.igd.fraunhofer.de/s/pAMH67XjstaNcrF (main article) and https://cloud-ext.igd.fraunhofer.de/s/4rR5bH3FMfNsS5q (appendix). Supplemental material including code: https://cloud-ext.igd.fraunhofer.de/s/9BrZaj5Uh5d0cOU/downloa
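The scale-adjustment property can be illustrated with the simplest possible light-transport model. In a purely absorbing Beer-Lambert slab (a large simplification of the paper's absorption-plus-scattering reference materials), dividing the extinction coefficient by the geometric scale factor leaves the optical thickness, and hence the transmitted light, unchanged:

```python
# Hedged illustration of scale invariance: appearance depends on the
# product (coefficient x path length), so a rescaled print can compensate
# by rescaling the coefficient. Not the paper's actual A mapping.
import math

def transmittance(sigma_t, thickness):
    """Beer-Lambert transmission through a purely absorbing slab."""
    return math.exp(-sigma_t * thickness)

def rescale_coefficient(sigma_t, scale):
    """Compensate a geometric rescale so optical thickness is unchanged."""
    return sigma_t / scale

t_original = transmittance(2.0, 1.0)                          # original size
t_scaled = transmittance(rescale_coefficient(2.0, 3.0), 3.0)  # 3x larger print
print(t_original, t_scaled)  # identical: appearance is preserved
```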
A Self-Organizing Neural System for Learning to Recognize Textured Scenes
A self-organizing ARTEX model is developed to categorize and classify textured image regions. ARTEX specializes the FACADE model of how the visual cortex sees, and the ART model of how temporal and prefrontal cortices interact with the hippocampal system to learn visual recognition categories and their names. FACADE processing generates a vector of boundary and surface properties, notably texture and brightness properties, by utilizing multi-scale filtering, competition, and diffusive filling-in. Its context-sensitive local measures of textured scenes can be used to recognize scenic properties that gradually change across space, as well as abrupt texture boundaries. ART incrementally learns recognition categories that classify FACADE output vectors, class names of these categories, and their probabilities. Top-down expectations within ART encode learned prototypes that pay attention to expected visual features. When novel visual information creates a poor match with the best existing category prototype, a memory search selects a new category with which to classify the novel data. ARTEX is compared with psychophysical data, and is benchmarked on classification of natural textures and synthetic aperture radar images. It outperforms state-of-the-art systems that use rule-based, backpropagation, and K-nearest neighbor classifiers.

Defense Advanced Research Projects Agency; Office of Naval Research (N00014-95-1-0409, N00014-95-1-0657)
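The prototype matching and memory search described in the abstract can be sketched with a simplified fuzzy-ART category choice and vigilance test. This is a generic fuzzy-ART sketch, not the exact ARTEX equations; the vigilance rho and learning rate beta are illustrative values:

```python
# Hedged sketch of ART-style category search: the best-matching prototype
# is accepted only if it passes the vigilance test; otherwise the search
# continues, and a new category is created if every prototype fails.

def fuzzy_and(x, w):
    """Component-wise minimum (fuzzy intersection)."""
    return [min(a, b) for a, b in zip(x, w)]

def art_classify(x, prototypes, rho=0.75, alpha=0.001, beta=0.5):
    """Return the chosen category index, updating prototypes in place."""
    norm_x = sum(x)
    # rank categories by the choice function T_j = |x ^ w_j| / (alpha + |w_j|)
    order = sorted(range(len(prototypes)),
                   key=lambda j: -sum(fuzzy_and(x, prototypes[j]))
                                 / (alpha + sum(prototypes[j])))
    for j in order:
        match = sum(fuzzy_and(x, prototypes[j])) / norm_x
        if match >= rho:                      # vigilance test passed
            w = prototypes[j]
            prototypes[j] = [(1 - beta) * wi + beta * mi
                             for wi, mi in zip(w, fuzzy_and(x, w))]
            return j
    prototypes.append(list(x))                # memory search failed: new node
    return len(prototypes) - 1

protos = []
print(art_classify([1.0, 0.0], protos),   # first input founds category 0
      art_classify([0.9, 0.1], protos),   # close match reuses category 0
      art_classify([0.0, 1.0], protos))   # poor match founds category 1
```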
Binocular Eye Movements Are Adapted to the Natural Environment.
Humans and many animals make frequent saccades requiring coordinated movements of the eyes. When landing on the new fixation point, the eyes must converge accurately or double images will be perceived. We asked whether the visual system uses statistical regularities in the natural environment to aid eye alignment at the end of saccades. We measured the distribution of naturally occurring disparities in different parts of the visual field. The central tendency of the distributions was crossed (nearer than fixation) in the lower field and uncrossed (farther) in the upper field in male and female participants. It was uncrossed in the left and right fields. We also measured horizontal vergence after completion of vertical, horizontal, and oblique saccades. When the eyes first landed near the eccentric target, vergence was quite consistent with the natural-disparity distribution. For example, when making an upward saccade, the eyes diverged to be aligned with the most probable uncrossed disparity in that part of the visual field. Likewise, when making a downward saccade, the eyes converged to enable alignment with crossed disparity in that part of the field. Our results show that rapid binocular eye movements are adapted to the statistics of the 3D environment, minimizing the need for large corrective vergence movements at the end of saccades. The results are relevant to the debate about whether eye movements are derived from separate saccadic and vergence neural commands that control both eyes or from separate monocular commands that control the eyes independently.

SIGNIFICANCE STATEMENT: We show that the human visual system incorporates statistical regularities in the visual environment to enable efficient binocular eye movements. We define the oculomotor horopter: the surface of 3D positions to which the eyes initially move when stimulated by eccentric targets. The observed movements maximize the probability of accurate fixation as the eyes move from one position to another. This is the first study to show quantitatively that binocular eye movements conform to 3D scene statistics, thereby enabling efficient processing. The results provide greater insight into the neural mechanisms underlying the planning and execution of saccadic eye movements.
Characteristics of flight simulator visual systems
The physical parameters of the flight simulator visual system that characterize the system and determine its fidelity are identified and defined. The characteristics of visual simulation systems are discussed in terms of the basic categories of spatial, energy, and temporal properties corresponding to the three fundamental quantities of length, mass, and time. Each of these parameters is further addressed in relation to its effect, its appropriate units or descriptors, methods of measurement, and its use or importance to image quality.