
    Hidden Cues in Random Line Stereograms

    Successful fusion of random-line stereograms with breaks in the vernier acuity range has been interpreted to suggest that the interpolation process underlying hyperacuity is parallel and preliminary to stereomatching. In this paper (a) we demonstrate with computer experiments that vernier cues are not needed to solve the stereomatching problem posed by these stereograms, and (b) we provide psychophysical evidence that human stereopsis probably does not use vernier cues alone to achieve fusion of these random-line stereograms. (MIT Artificial Intelligence Laboratory)

    Direct calculation of the hard-sphere crystal/melt interfacial free energy

    We present a direct calculation by molecular-dynamics computer simulation of the crystal/melt interfacial free energy, γ, for a system of hard spheres of diameter σ. The calculation is performed by thermodynamic integration along a reversible path defined by cleaving separate bulk crystal and fluid systems, using specially constructed movable hard-sphere walls; the cleaved systems are then merged to form an interface. We find the interfacial free energy to be slightly anisotropic, with γ = 0.62 ± 0.01, 0.64 ± 0.01 and 0.58 ± 0.01 k_BT/σ² for the (100), (110) and (111) fcc crystal/fluid interfaces, respectively. These values are consistent with earlier density-functional calculations and with recent experiments measuring crystal nucleation rates from colloidal fluids of polystyrene spheres, which have been interpreted [Marr and Gast, Langmuir 10, 1348 (1994)] to give an estimate of γ for the hard-sphere system of 0.55 ± 0.02 k_BT/σ², slightly lower than the directly determined value reported here. Comment: 4 pages, 4 figures, submitted to Physical Review Letters
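The thermodynamic-integration step described above — accumulating a free-energy difference along a reversible path — can be sketched in miniature. This is an illustrative toy (the function name and tabulated-averages interface are assumptions, not the paper's actual hard-sphere cleaving calculation): given ensemble averages ⟨dU/dλ⟩ tabulated at a few points along the path, the free-energy change is the integral over λ, here approximated with the trapezoidal rule.

```python
def thermodynamic_integration(lambdas, dU_dlambda):
    """Free-energy difference along a reversible path:
    Delta F = integral over lambda of <dU/dlambda>,
    approximated by the trapezoidal rule from tabulated
    ensemble averages at the path points `lambdas`."""
    dF = 0.0
    for i in range(len(lambdas) - 1):
        h = lambdas[i + 1] - lambdas[i]          # spacing between path points
        dF += 0.5 * h * (dU_dlambda[i] + dU_dlambda[i + 1])
    return dF
```

In the actual calculation each ⟨dU/dλ⟩ would come from a separate equilibrium simulation at fixed λ; the quadrature itself is the easy part.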

    Model-based Cognitive Neuroscience: Multifield Mechanistic Integration in Practice

    Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN, and I use it to clarify the intralevel and interlevel components of integration.

    Towards Contextual Action Recognition and Target Localization with Active Allocation of Attention

    Exploratory gaze movements are fundamental for gathering the most relevant information about a partner during social interactions. We have designed and implemented a system for dynamic attention allocation that actively controls gaze movements during a visual action recognition task. While observing a partner's reaching movement, the robot contextually estimates the goal position of the partner's hand and the locations in space of the candidate targets, while moving its gaze so as to optimize the gathering of task-relevant information. Experimental results in a simulated environment show that active gaze control provides a significant advantage over typical passive observation, both in terms of estimation precision and of the time required for action recognition. © 2012 Springer-Verlag

    Feature Lines for Illustrating Medical Surface Models: Mathematical Background and Survey

    This paper provides a tutorial and survey for a specific kind of illustrative visualization technique: feature lines. We examine different feature line methods. To this end, we present the differential geometry behind these concepts and adapt this mathematical machinery to discrete differential geometry. All discrete differential geometry terms are explained for triangulated surface meshes. These tools serve as the basis for the feature line methods, and we equip the reader with everything needed to re-implement each of them. Furthermore, we summarize the methods and suggest a guideline for which feature line algorithm is best suited to which kind of surface. Our work is motivated by, but not restricted to, medical and biological surface models. Comment: 33 pages

    Factorization of natural 4 × 4 patch distributions

    The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-540-30212-4_15. Revised and Selected Papers of ECCV 2004 Workshop SMVP 2004, Prague, Czech Republic, May 16, 2004.
    The lack of sufficiently many machine-readable images makes the direct computation of natural-image 4 × 4 block statistics impossible, and one has to resort to indirect, approximate methods that reduce the domain space. A natural approach is to collect statistics over compressed images; if the reconstruction quality is good enough, these statistics will be sufficiently representative. However, easier statistics collection requires that the compression method provide a uniform representation of the compression information across all patches, something for which codebook techniques are well suited. We follow this approach here, using a fractal compression–inspired quantization scheme to approximate a given patch B by a triplet (D_B, μ_B, σ_B), with σ_B the patch's contrast, μ_B its brightness, and D_B a codebook approximation to the mean–variance normalization (B − μ_B)/σ_B of B. The resulting reduction of the domain space makes feasible the computation of entropy and mutual information estimates that, in turn, suggest a factorization of the approximation p(B) ≃ p(D_B, μ_B, σ_B) as p(D_B, μ_B, σ_B) ≃ p(D_B) p(μ) p(σ) Φ(||∇||), with Φ a high-contrast correction. With partial support of Spain's CICyT, TIC 01–57
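The triplet approximation above is concrete enough to sketch. The following is a minimal illustration of the quantization step (the function name and flat-list patch representation are assumptions, and the codebook here is just whatever the caller supplies, not the paper's fractal-derived one): a patch is reduced to its brightness μ_B, contrast σ_B, and the index of the codebook entry nearest to its mean–variance normalization.

```python
from math import sqrt

def quantize_patch(B, codebook):
    """Approximate a flattened 4x4 patch B (a list of 16 values) by a
    triplet (D_B, mu_B, sigma_B): mu_B is the brightness (mean),
    sigma_B the contrast (standard deviation), and D_B the index of
    the codebook entry nearest to the normalization (B - mu)/sigma."""
    n = len(B)
    mu = sum(B) / n
    sigma = sqrt(sum((x - mu) ** 2 for x in B) / n)
    if sigma == 0:                        # flat patch: normalization undefined
        norm = [0.0] * n
    else:
        norm = [(x - mu) / sigma for x in B]
    # nearest codebook entry by Euclidean distance
    def dist(D):
        return sqrt(sum((a - b) ** 2 for a, b in zip(norm, D)))
    D_B = min(range(len(codebook)), key=lambda i: dist(codebook[i]))
    return D_B, mu, sigma
```

Collecting the triplets over many patches is then what makes the entropy and mutual-information estimates, and hence the factorization test, tractable.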

    The evolution of representation in simple cognitive networks

    Representations are internal models of the environment that can provide guidance to a behaving agent, even in the absence of sensory information. It is not clear how representations are developed and whether or not they are necessary or even essential for intelligent behavior. We argue here that the ability to represent relevant features of the environment is the expected consequence of an adaptive process, give a formal definition of representation based on information theory, and quantify it with a measure R. To measure how R changes over time, we evolve two types of networks---an artificial neural network and a network of hidden Markov gates---to solve a categorization task using a genetic algorithm. We find that the capacity to represent increases during evolutionary adaptation, and that agents form representations of their environment during their lifetime. This ability allows the agents to act on sensory inputs in the context of their acquired representations and enables complex and context-dependent behavior. We examine which concepts (features of the environment) our networks are representing, how the representations are logically encoded in the networks, and how they form as an agent behaves to solve a task. We conclude that R should be able to quantify the representations within any cognitive system, and should be predictive of an agent's long-term adaptive success. Comment: 36 pages, 10 figures, one Table
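The evolutionary loop used to train the networks can be sketched generically. This is a minimal genetic algorithm over binary genomes (tournament selection, one-point crossover, bit-flip mutation); the function name, parameter values, and toy fitness are assumptions for illustration — the paper's actual genomes encode neural networks and hidden Markov gates, not raw bitstrings, and its fitness is performance on the categorization task.

```python
import random

def evolve(fitness, genome_len=16, pop_size=30, generations=60, seed=1):
    """Minimal genetic algorithm: tournament selection, one-point
    crossover, and bit-flip mutation on binary genomes. Returns the
    fittest genome in the final population."""
    rng = random.Random(seed)  # seeded for reproducibility
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            # binary tournament: the fitter of two random genomes wins
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = rng.randrange(1, genome_len)   # one-point crossover
            child = p1[:cut] + p2[cut:]
            if rng.random() < 0.1:               # occasional bit flip
                i = rng.randrange(genome_len)
                child[i] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

# toy fitness: count of 1-bits (stand-in for task performance)
best = evolve(lambda g: sum(g))
```

Tracking the measure R on the evolving population, generation by generation, is what would turn this loop into the paper's experiment.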