
    Tracking Vector Magnetograms with the Magnetic Induction Equation

    The differential affine velocity estimator (DAVE) developed in Schuck (2006) for estimating velocities from line-of-sight magnetograms is modified to directly incorporate horizontal magnetic fields, producing a differential affine velocity estimator for vector magnetograms (DAVE4VM). The DAVE4VM's performance is demonstrated on the synthetic data from the anelastic pseudospectral ANMHD simulations that were used in the recent comparison of velocity-inversion techniques by Welsch (2007). The DAVE4VM predicts roughly 95% of the helicity rate and 75% of the power transmitted through the simulation slice. Inter-comparison between DAVE4VM and DAVE, and further analysis of the DAVE method, demonstrate that line-of-sight tracking methods capture the shearing motion of magnetic footpoints but are insensitive to flux emergence -- the velocities determined from line-of-sight methods are more consistent with horizontal plasma velocities than with flux transport velocities. These results suggest that previous studies relying on velocities determined from line-of-sight methods such as the DAVE or local correlation tracking may substantially misrepresent the total helicity rates and power through the photosphere. Comment: 30 pages, 13 figures
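    For orientation, trackers of this kind are built around the vertical component of the ideal induction equation; in standard notation (a generic statement of the constraint, not an equation reproduced from the paper) it reads

        \frac{\partial B_z}{\partial t} + \nabla_h \cdot \left( B_z \mathbf{v}_h - v_z \mathbf{B}_h \right) = 0

    Here B_z and \mathbf{B}_h are the vertical and horizontal field components and \mathbf{v} the plasma velocity; the combination \mathbf{u} = \mathbf{v}_h - (v_z/B_z)\,\mathbf{B}_h is the flux transport velocity, the quantity the abstract contrasts with the horizontal plasma velocity that line-of-sight methods tend to recover.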

    Troubles with Bayesianism: An introduction to the psychological immune system

    A Bayesian mind is, at its core, a rational mind. Bayesianism is thus well-suited to predict and explain mental processes that best exemplify our ability to be rational. However, evidence from belief acquisition and change appears to show that we do not acquire and update information in a Bayesian way. Instead, the principles of belief acquisition and updating seem grounded in maintaining a psychological immune system rather than in approximating a Bayesian processor.
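    For reference, the benchmark against which the abstract measures human updating is Bayes' rule, shown here in its standard form (a generic statement, not a formula quoted from the paper):

        P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)}

    The claim is that belief acquisition and revision systematically depart from this update rule in ways that serve to maintain a psychological immune system rather than to approximate the Bayesian posterior.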

    Direct calculation of the hard-sphere crystal/melt interfacial free energy

    We present a direct calculation by molecular-dynamics computer simulation of the crystal/melt interfacial free energy, γ, for a system of hard spheres of diameter σ. The calculation is performed by thermodynamic integration along a reversible path in which separate bulk crystal and fluid systems are cleaved, using specially constructed movable hard-sphere walls, and then merged to form an interface. We find the interfacial free energy to be slightly anisotropic, with γ = 0.62 ± 0.01, 0.64 ± 0.01 and 0.58 ± 0.01 k_B T/σ² for the (100), (110) and (111) fcc crystal/fluid interfaces, respectively. These values are consistent with earlier density functional calculations and with recent experiments measuring crystal nucleation rates from colloidal fluids of polystyrene spheres, which have been interpreted [Marr and Gast, Langmuir 10, 1348 (1994)] to give an estimate of γ for the hard-sphere system of 0.55 ± 0.02 k_B T/σ², slightly lower than the directly determined value reported here. Comment: 4 pages, 4 figures, submitted to Physical Review Letters
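    As a schematic of the method (standard thermodynamic-integration bookkeeping, not equations taken from the paper): each cleaving or merging stage is parameterized by a coupling λ, and the reversible work of a stage is

        W = \int_{\lambda_0}^{\lambda_1} \left\langle \frac{\partial U(\lambda)}{\partial \lambda} \right\rangle_{\lambda} d\lambda

    The interfacial free energy then follows as the total reversible work accumulated along the full cleave-and-merge path divided by the interfacial area created, γ = W_total / A.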

    Model-based Cognitive Neuroscience: Multifield Mechanistic Integration in Practice

    Autonomist accounts of cognitive science suggest that cognitive model building and theory construction (can or should) proceed independently of findings in neuroscience. Common functionalist justifications of autonomy rely on there being relatively few constraints between neural structure and cognitive function (e.g., Weiskopf, 2011). In contrast, an integrative mechanistic perspective stresses the mutual constraining of structure and function (e.g., Piccinini & Craver, 2011; Povich, 2015). In this paper, I show how model-based cognitive neuroscience (MBCN) epitomizes the integrative mechanistic perspective and concentrates the most revolutionary elements of the cognitive neuroscience revolution (Boone & Piccinini, 2016). I also show how the prominent subset account of functional realization supports the integrative mechanistic perspective I take on MBCN and use it to clarify the intralevel and interlevel components of integration.

    The Interstellar Rubidium Isotope Ratio toward Rho Ophiuchi A

    The isotope ratio, 85Rb/87Rb, places constraints on models of the nucleosynthesis of heavy elements, but there is no precise determination of the ratio for material beyond the Solar System. We report the first measurement of the interstellar Rb isotope ratio. Our measurement of the Rb I line at 7800 Å for the diffuse gas toward rho Oph A yields a value of 1.21 +/- 0.30 (1-sigma) that differs significantly from the meteoritic value of 2.59. The Rb/K elemental abundance ratio for the cloud is also lower than that seen in meteorites. Comparison of the 85Rb/K and 87Rb/K ratios with meteoritic values indicates that the interstellar 85Rb abundance in this direction is lower than the Solar System abundance. We attribute the lower abundance to a reduced contribution from the r-process. Interstellar abundances for Kr, Cd, and Sn are consistent with much less r-process synthesis for the solar neighborhood compared to the amount inferred for the Solar System. Comment: 12 pages with 2 figures and 1 table; will appear in ApJ Letters

    Learning, Social Intelligence and the Turing Test - why an "out-of-the-box" Turing Machine will not pass the Turing Test

    The Turing Test (TT) checks for human intelligence, rather than any putative general intelligence. It involves repeated interaction requiring learning in the form of adaption to the human conversation partner. It is a macro-level post-hoc test in contrast to the definition of a Turing Machine (TM), which is a prior micro-level definition. This raises the question of whether learning is just another computational process, i.e. can be implemented as a TM. Here we argue that learning or adaption is fundamentally different from computation, though it does involve processes that can be seen as computations. To illustrate this difference we compare (a) designing a TM and (b) learning a TM, defining them for the purpose of the argument. We show that there is a well-defined sequence of problems which are not effectively designable but are learnable, in the form of the bounded halting problem. Some characteristics of human intelligence are reviewed, including its interactive nature, learning abilities, imitative tendencies, linguistic ability and context-dependency. A story that explains some of these is the Social Intelligence Hypothesis. If this is broadly correct, it points to the necessity of a considerable period of acculturation (social learning in context) if an artificial intelligence is to pass the TT. Whilst it is always possible to 'compile' the results of learning into a TM, this would not be a designed TM and would not be able to continually adapt (pass future TTs). We conclude three things, namely that: a purely "designed" TM will never pass the TT; that there is no such thing as a general intelligence since it necessarily involves learning; and that learning/adaption and computation should be clearly distinguished. Comment: 10 pages, invited talk at Turing Centenary Conference CiE 2012, special session on "The Turing Test and Thinking Machines"
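    To make the bounded halting problem referred to above concrete, here is a minimal Python sketch (the machine encoding, names and example are illustrative assumptions, not taken from the paper): for a fixed step bound n, the question "does machine M halt on this input within n steps?" is decidable by direct simulation, unlike the unrestricted halting problem. The abstract's claim concerns the sequence of such problems being learnable but not effectively designable, not the decidability of any single instance.

        from collections import defaultdict

        def bounded_halts(transitions, start_state, halt_states, tape, n):
            """Return True iff the machine reaches a halting state within n steps.

            transitions: dict mapping (state, symbol) -> (new_state, write_symbol, move),
                         with move in {-1, 0, +1}. A missing rule is treated as halting.
            """
            cells = defaultdict(lambda: "_", enumerate(tape))  # blank symbol "_"
            state, head = start_state, 0
            for _ in range(n):
                if state in halt_states:
                    return True
                key = (state, cells[head])
                if key not in transitions:  # no applicable rule: the machine stops
                    return True
                state, symbol, move = transitions[key]
                cells[head] = symbol
                head += move
            return state in halt_states

        # Illustrative machine: scan right over 1s, halt at the first blank.
        rules = {
            ("scan", "1"): ("scan", "1", +1),
            ("scan", "_"): ("done", "_", 0),
        }
        print(bounded_halts(rules, "scan", {"done"}, "111", n=10))  # True
        print(bounded_halts(rules, "scan", {"done"}, "111", n=2))   # False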

    Origin Gaps and the Eternal Sunshine of the Second-Order Pendulum

    The rich experiences of an intentional, goal-oriented life emerge, in an unpredictable fashion, from the basic laws of physics. Here I argue that this unpredictability is no mirage: there are true gaps between life and non-life, mind and mindlessness, and even between functional societies and groups of Hobbesian individuals. These gaps, I suggest, emerge from the mathematics of self-reference, and the logical barriers to prediction that self-referring systems present. Still, a mathematical truth does not imply a physical one: the universe need not have made self-reference possible. It did, and the question then is how. In the second half of this essay, I show how a basic move in physics, known as renormalization, transforms the "forgetful" second-order equations of fundamental physics into a rich, self-referential world that makes possible the major transitions we care so much about. While the universe runs in assembly code, the coarse-grained version runs in LISP, and it is from the latter that the world of aim and intention grows. Comment: FQXI Prize Essay 2017. 18 pages, including afterword on Ostrogradsky's Theorem and an exchange with John Bova, Dresden Craig, and Paul Livingston
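    To make the "forgetful" second-order dynamics of the title concrete (an illustrative textbook equation, not one quoted from the essay), the pendulum obeys

        \ddot{\theta} + \frac{g}{\ell}\,\sin\theta = 0

    Its future is fixed entirely by the instantaneous state (θ, θ̇); nothing in the equation refers to the system's history, which is the sense in which fundamental second-order laws are memoryless before coarse-graining.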

    Multiscale Discriminant Saliency for Visual Attention

    Bottom-up saliency, an early stage of human visual attention, can be considered a binary classification problem between center and surround classes. The discriminant power of features for this classification is measured as the mutual information between the features and the two class labels. Because the estimated discrepancy between the two feature classes depends strongly on the scale levels considered, multi-scale structure and discriminant power are integrated by employing discrete wavelet features and a hidden Markov tree (HMT). From the wavelet coefficients and HMT parameters, quad-tree-like label structures are constructed and used to compute maximum a posteriori (MAP) estimates of the hidden class variables at the corresponding dyadic sub-squares. A saliency value for each dyadic square at each scale level is then computed from the discriminant-power principle and the MAP estimates. Finally, the saliency maps across multiple scales are integrated into a final saliency map by an information-maximization rule. Both standard quantitative tools such as NSS, LCC and AUC and qualitative assessments are used to evaluate the proposed multiscale discriminant saliency method (MDIS) against the well-known information-based saliency method AIM on the Bruce database with eye-tracking data. Simulation results are presented and analyzed to verify the validity of MDIS and to point out its disadvantages for further research. Comment: 16 pages, ICCSA 2013 - BIOCA session
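    For reference, the discriminant power referred to above is the standard mutual information between a feature response X and the binary center/surround label C (the generic definition, with symbols chosen here for illustration):

        I(X; C) = \sum_{c \in \{0,1\}} \int p(x, c)\, \log \frac{p(x, c)}{p(x)\, p(c)}\, dx

    Locations whose feature statistics differ most between the center and surround classes carry high mutual information and are therefore scored as more salient.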