460 research outputs found

    Curvature-based transfer functions for direct volume rendering: methods and applications

    Get PDF
    Journal Article
    Direct volume rendering of scalar fields uses a transfer function to map locally measured data properties to opacities and colors. The domain of the transfer function is typically the one-dimensional space of scalar data values. This paper advances the use of curvature information in multi-dimensional transfer functions, with a methodology for computing high-quality curvature measurements. The proposed methodology combines an implicit formulation of curvature with convolution-based reconstruction of the field. We give concrete guidelines for implementing the methodology, and illustrate the importance of choosing accurate filters for computing derivatives with convolution. Curvature-based transfer functions are shown to extend the expressivity and utility of volume rendering through contributions in three different application areas: nonphotorealistic volume rendering, surface smoothing via anisotropic diffusion, and visualization of isosurface uncertainty.
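
    The implicit curvature formulation referred to above can be sketched in a few lines. The code below is an illustrative sketch, not the paper's implementation: it measures first and second derivatives by convolution with Gaussian derivative filters (an assumed filter choice; the abstract stresses that the quality of such filters matters) and forms the tangent-plane geometry tensor G = -P H P / |∇f|, whose two nonzero eigenvalues are the principal curvatures κ1 and κ2 of the isosurface through each voxel.

```python
# Sketch: principal curvatures of the isosurfaces of a 3-D scalar volume f,
# from an implicit formulation using the gradient g and Hessian H of f,
# with derivatives measured by convolution (Gaussian derivative filters are
# an assumed choice here).
import numpy as np
from scipy.ndimage import gaussian_filter

def principal_curvatures(f, sigma=1.5, eps=1e-12):
    """Return (kappa1, kappa2) volumes for the isosurfaces of a 3-D field f."""
    # First derivatives via Gaussian derivative convolution.
    g = np.stack([gaussian_filter(f, sigma, order=tuple(int(i == a) for i in range(3)))
                  for a in range(3)], axis=-1)                 # shape (X, Y, Z, 3)
    # Second derivatives (Hessian), also by convolution.
    H = np.empty(f.shape + (3, 3))
    for a in range(3):
        for b in range(3):
            order = [0, 0, 0]
            order[a] += 1
            order[b] += 1
            H[..., a, b] = gaussian_filter(f, sigma, order=tuple(order))
    gmag = np.linalg.norm(g, axis=-1) + eps
    n = g / gmag[..., None]                                    # isosurface normal
    P = np.eye(3) - n[..., :, None] * n[..., None, :]          # tangent-plane projector
    # Geometry tensor restricted to the tangent plane.
    G = -np.einsum('...ij,...jk,...kl->...il', P, H, P) / gmag[..., None, None]
    T = np.einsum('...ii->...', G)                             # trace = kappa1 + kappa2
    F = np.sqrt(np.einsum('...ij,...ij->...', G, G))           # Frobenius norm
    disc = np.sqrt(np.clip(2.0 * F**2 - T**2, 0.0, None))
    return (T + disc) / 2.0, (T - disc) / 2.0
```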

    Current theories on the structure of the visual system

    Get PDF

    FPGA-Based Multimodal Embedded Sensor System Integrating Low- and Mid-Level Vision

    Get PDF
    Motion estimation is a low-level vision task that is especially relevant due to its wide range of real-world applications. Many of the best motion estimation algorithms include features found in mammals, which demand huge computational resources and are therefore not usually available in real time. In this paper we present a novel bioinspired sensor based on the synergy between optical flow and orthogonal variant moments. The bioinspired sensor has been designed for Very Large Scale Integration (VLSI) using properties of the mammalian cortical motion pathway. This sensor combines low-level primitives (optical flow and image moments) to produce a mid-level vision abstraction layer. The results are described through experiments showing the validity of the proposed system, together with an analysis of the computational resources and performance of the applied algorithms.
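
    As a rough software illustration of combining such low-level primitives into a mid-level descriptor (a sketch only: the paper describes an FPGA/VLSI sensor, and standard OpenCV Farneback flow and Hu moments are substituted here for its bioinspired optical flow and orthogonal variant moments):

```python
# Sketch: fuse two low-level primitives (dense optical flow and image moments)
# into a simple per-block mid-level descriptor.  Farneback flow and Hu moments
# are stand-ins for the paper's bioinspired primitives.
import cv2
import numpy as np

def midlevel_descriptors(prev_gray, next_gray, grid=8):
    """Per-block descriptor = mean optical flow (2 values) + Hu moments (7)."""
    # Low-level primitive 1: dense optical flow.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_gray.shape
    bh, bw = h // grid, w // grid
    out = np.zeros((grid, grid, 9), dtype=np.float64)
    for i in range(grid):
        for j in range(grid):
            block = next_gray[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            fblock = flow[i * bh:(i + 1) * bh, j * bw:(j + 1) * bw]
            # Low-level primitive 2: image moments of the block.
            hu = cv2.HuMoments(cv2.moments(block)).ravel()
            out[i, j, :2] = fblock.reshape(-1, 2).mean(axis=0)
            out[i, j, 2:] = hu
    return out
```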

    Visualization and analysis of diffusion tensor fields

    Get PDF
    Technical report
    The power of medical imaging modalities to measure and characterize biological tissue is amplified by visualization and analysis methods that help researchers to see and understand the structures within their data. Diffusion tensor magnetic resonance imaging can measure microstructural properties of biological tissue, such as the coherent linear organization of white matter of the central nervous system, or the fibrous texture of muscle tissue. This dissertation describes new methods for visualizing and analyzing the salient structure of diffusion tensor datasets. Glyphs from superquadric surfaces and textures from reaction-diffusion systems facilitate inspection of data properties and trends. Fiber tractography based on vector-tensor multiplication allows major white matter pathways to be visualized. The generalization of direct volume rendering to tensor data allows large-scale structures to be shaded and rendered. Finally, a mathematical framework for analyzing the derivatives of tensor values, in terms of shape and orientation change, enables analytical shading in volume renderings, and a method of feature detection important for feature-preserving filtering of tensor fields. Together, the combination of methods enhances the ability of diffusion tensor imaging to provide insight into the local and global structure of biological tissue.
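
    The fiber tractography by vector-tensor multiplication mentioned above can be sketched as follows. This is an illustration under simplifying assumptions (nearest-neighbour tensor lookup, a fixed step size, and a crude stopping rule), not the dissertation's implementation: at each step the current direction is multiplied by the local diffusion tensor, renormalized, and used to advance the track.

```python
# Sketch of tractography by repeated vector-tensor multiplication: the current
# direction v is deflected by the local diffusion tensor D, v <- D v / |D v|,
# and the track is advanced by a small step.  D is a (X, Y, Z, 3, 3) array.
import numpy as np

def track_fiber(D, seed, v0, step=0.5, max_steps=2000, min_norm=1e-6):
    x = np.asarray(seed, dtype=float)
    v = np.asarray(v0, dtype=float)
    v /= np.linalg.norm(v)
    path = [x.copy()]
    for _ in range(max_steps):
        idx = tuple(np.round(x).astype(int))           # nearest-neighbour lookup
        if any(i < 0 or i >= n for i, n in zip(idx, D.shape[:3])):
            break                                       # left the volume
        d = D[idx] @ v                                  # vector-tensor multiplication
        if np.linalg.norm(d) < min_norm:
            break                                       # isotropic / background voxel
        v = d / np.linalg.norm(d)
        x = x + step * v
        path.append(x.copy())
    return np.array(path)
```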

    Two-Sphere Partition Functions and Gromov-Witten Invariants

    Full text link
    Many N=(2,2) two-dimensional nonlinear sigma models with Calabi-Yau target spaces admit ultraviolet descriptions as N=(2,2) gauge theories (gauged linear sigma models). We conjecture that the two-sphere partition function of such ultraviolet gauge theories -- recently computed via localization by Benini et al. and Doroud et al. -- yields the exact Kähler potential on the quantum Kähler moduli space for Calabi-Yau threefold target spaces. In particular, this allows one to compute the genus zero Gromov-Witten invariants for any such Calabi-Yau threefold without the use of mirror symmetry. More generally, when the infrared superconformal fixed point is used to compactify string theory, this provides a direct method to compute the spacetime Kähler potential of certain moduli (e.g., vector multiplet moduli in type IIA), exactly in α'. We compute these quantities for the quintic and for Rødland's Pfaffian Calabi-Yau threefold and find agreement with existing results in the literature. We then apply our methods to a codimension four determinantal Calabi-Yau threefold in P^7, recently given a nonabelian gauge theory description by the present authors, for which no mirror Calabi-Yau is currently known. We derive predictions for its Gromov-Witten invariants and verify that our predictions satisfy nontrivial geometric checks.
    Comment: 25 pages + 2 appendices; v2 corrects a divisor in Kähler moduli space and includes a new calculation that confirms a geometric prediction; v3 contains a minor update of the Gromov-Witten invariant extraction procedure.
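
    Schematically, and with convention-dependent normalizations suppressed, the conjecture can be written as below. This is a sketch using the standard special-geometry expression for the Kähler potential, not a formula quoted from the paper; N_d denotes the genus-zero Gromov-Witten invariants.

```latex
% Schematic form of the conjecture (normalizations and conventions suppressed):
% the exact two-sphere partition function computes the Kahler potential K on
% the quantum Kahler moduli space of the Calabi-Yau threefold target.
\begin{align}
  Z_{S^2}(t,\bar t) &= e^{-K(t,\bar t)},
  &
  e^{-K} &= i\left(\bar X^I \partial_I F - X^I \bar\partial_{\bar I} \bar F\right).
\end{align}
% Here F is the prepotential on the Kahler (vector multiplet) moduli space;
% its worldsheet-instanton part carries the genus-zero Gromov-Witten
% invariants N_d, schematically
\begin{equation}
  F_{\mathrm{inst}}(t) \;\sim\; \sum_{d>0} N_d \,
  \operatorname{Li}_3\!\left(e^{2\pi i\, d \cdot t}\right).
\end{equation}
```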

    Angular variation as a monocular cue for spatial perception

    Get PDF
    Monocular cues are spatial sensory inputs picked up exclusively from one eye. They are mostly static features that provide depth information and are extensively used in graphic art to create realistic representations of a scene. Since the spatial information contained in these cues is picked up from the retinal image, a link between them and the theory of direct perception can reasonably be assumed. According to this theory, the spatial information of an environment is directly contained in the optic array; this assumption makes it possible to model visual perception processes through computational approaches. In this thesis, angular variation is considered as a monocular cue, and the concept of direct perception is adopted by a computer vision approach that treats it as a suitable principle from which innovative techniques for computing spatial information can be developed. The spatial information expected from this monocular cue is the position and orientation of an object with respect to the observer, which in computer vision is a well-known field of research called 2D-3D pose estimation. In this thesis, the attempt to establish angular variation as a monocular cue, and thus to achieve a computational approach to direct perception, is carried out by developing a set of pose estimation methods. Starting from conventional strategies for solving the pose estimation problem, a first approach imposes constraint equations that relate object and image features. In this sense, two algorithms based on a simple line rotation motion analysis were developed. These algorithms successfully provide pose information; however, they depend strongly on scene data conditions. To overcome this limitation, a second approach inspired by the biological processes performed by the human visual system was developed. It is based on the content of the image itself and defines a computational approach to direct perception. The set of developed algorithms analyzes the visual properties provided by angular variations. The aim is to gather valuable data from which spatial information can be obtained and used to emulate a visual perception process by establishing a 2D-3D metric relation. Since this relation is considered fundamental to visual-motor coordination, and consequently essential for interacting with the environment, a significant cognitive effect is produced by applying the developed computational approach in technology-mediated environments. In this work, this cognitive effect is demonstrated by an experimental study in which a number of participants were asked to complete an action-perception task. The main purpose of the study was to analyze visually guided behavior in teleoperation and the cognitive effect caused by the addition of 3D information. The results showed a significant influence of the 3D aid on skill improvement, reflecting an enhanced sense of presence.
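
    For context, the generic 2D-3D pose estimation problem addressed here (recovering an object's position and orientation from image features) can be sketched with a standard perspective-n-point solver. This shows only the conventional formulation that the first approach starts from, not the angular-variation or line-rotation algorithms developed in the thesis; the camera intrinsics and point correspondences below are illustrative placeholders.

```python
# Sketch of the generic 2D-3D pose estimation problem: given 3D object points,
# their 2D image projections, and camera intrinsics, recover rotation and
# translation.  Uses OpenCV's standard PnP solver as a conventional baseline;
# all numbers are illustrative placeholders, not data from the thesis.
import cv2
import numpy as np

object_points = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0],
                          [0, 0, 1], [1, 0, 1]], dtype=np.float64)   # known 3D model
image_points = np.array([[320, 240], [400, 238], [405, 310], [322, 315],
                         [318, 180], [398, 178]], dtype=np.float64)  # detected 2D features
K = np.array([[800.0, 0.0, 320.0],       # placeholder camera intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                        # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist)
R, _ = cv2.Rodrigues(rvec)                # object rotation w.r.t. the camera
print("rotation:\n", R, "\ntranslation:", tvec.ravel())
```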

    On the Computational Modeling of Human Vision

    Full text link

    Statistics of gradient directions in natural images.

    Get PDF
    Interest in finding statistical regularities in natural images has been growing since the advent of information theory and the advancement of the efficient coding hypothesis that the human visual system is optimised to encode natural visual stimuli. In this thesis, a statistical analysis of gradient directions in an ensemble of natural images is reported. Information-theoretic measures have been used to compute the amount of dependency which exists between triples of gradient directions at separate image locations. Control experiments are performed on other image classes: phase-randomized natural images, whitened natural images, and Gaussian noise images. The main results show that for an ensemble of natural images the average amount of dependency between two and three gradient directions is the same as for an ensemble of phase-randomized natural images. This result does not extend to i) the amount of dependency between gradient magnitudes, ii) gradient directions at high gradient magnitude locations, or iii) individual natural images. Furthermore, no significant synergetic dependencies are found between triples of gradient directions in an ensemble of natural images; a synergetic dependency is an increase in dependency between a pair of gradient directions given the interaction of a third gradient direction. Additional experiments are performed to establish both the generality and specificity of the main results by studying the gradient direction dependencies of ensembles of noise (random phase) images with varying power-law power spectra. The results of the additional experiments indicate that, for ensembles of images with varying power-law power spectra, the amount of dependency between two and three gradient directions is determined by the ensemble's mean power spectrum rather than the phase spectrum. A framework is also presented for future work, and preliminary results are provided for the dependency between second-order derivative measurements (shape index) for up to 9-point configurations.
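
    A minimal sketch of the kind of measurement described above (an illustration, not the thesis code): compute gradient directions of an image, then estimate the pairwise dependency between directions at pixel pairs separated by a fixed offset, using a plug-in histogram estimator of mutual information.

```python
# Sketch: pairwise dependency between gradient directions at a fixed spatial
# offset, measured as mutual information with a plug-in histogram estimator.
import numpy as np

def gradient_directions(img):
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx)               # directions in (-pi, pi]

def mutual_information(a, b, bins=16):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(),
                                 bins=bins, range=[[-np.pi, np.pi]] * 2)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def direction_dependency(img, offset=(0, 4)):
    """MI between gradient directions at pixel pairs separated by `offset`."""
    theta = gradient_directions(img)
    dy, dx = offset
    a = theta[max(0, -dy):theta.shape[0] - max(0, dy),
              max(0, -dx):theta.shape[1] - max(0, dx)]
    b = theta[max(0, dy):theta.shape[0] - max(0, -dy),
              max(0, dx):theta.shape[1] - max(0, -dx)]
    return mutual_information(a, b)
```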