    Evidence accumulation in a Laplace domain decision space

    Evidence accumulation models of simple decision-making have long assumed that the brain estimates a scalar decision variable corresponding to the log-likelihood ratio of the two alternatives. Typical neural implementations of this algorithmic cognitive model assume that large numbers of neurons are each noisy exemplars of the scalar decision variable. Here, we propose a neural implementation of the diffusion model in which many neurons construct and maintain the Laplace transform of the distance to each of the decision bounds. As in classic findings from brain regions including LIP, the firing rate of neurons coding for the Laplace transform of net accumulated evidence grows to a bound during random dot motion tasks. However, rather than noisy exemplars of a single mean value, this approach makes the novel prediction that firing rates grow to the bound exponentially; across neurons, there should be a distribution of different rates. A second set of neurons records an approximate inversion of the Laplace transform; these neurons directly estimate net accumulated evidence. In analogy to time cells and place cells observed in the hippocampus and other brain regions, the neurons in this second set have receptive fields along a "decision axis." This organization is consistent with recent findings from rodent recordings. This theoretical approach places simple evidence accumulation models in the same mathematical language as recent proposals for representing time and space in cognitive models for memory.
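
    To make the proposal concrete, here is a minimal sketch in Python/NumPy (all parameters illustrative, not the paper's code): a drift-diffusion trial is simulated; a first population of "Laplace" neurons carries F(s) = exp(-s*d), where d is the distance to the bound and each neuron has its own rate constant s, so firing grows to the bound exponentially at neuron-specific rates; a second population approximates the inverse transform with Post's formula, giving receptive fields peaked at preferred distances along the decision axis.

        import math
        import numpy as np

        rng = np.random.default_rng(0)
        dt, drift, noise, bound = 1e-3, 1.0, 1.0, 1.0

        # One trial of the diffusion process, stopped at a bound.
        x = [0.0]
        while abs(x[-1]) < bound:
            x.append(x[-1] + drift*dt + noise*math.sqrt(dt)*rng.standard_normal())
        x = np.array(x)

        # Laplace population: firing rate exp(-s*d) grows exponentially as the
        # bound nears, with a distribution of rate constants s across neurons.
        s = np.linspace(0.5, 8.0, 20)
        dist = bound - x                       # distance to the upper bound
        F = np.exp(-np.outer(s, dist))         # shape: (neurons, time)

        # Inverse population via Post's approximation: for F(s) = exp(-s*d) the
        # k-th order inverse is peaked near x_star = d, so each neuron has a
        # preferred distance x_star, i.e. a receptive field on the decision axis.
        k = 8
        x_star = np.linspace(0.1, 1.5, 20)
        d = dist[len(dist) // 2]               # distance at a mid-trial time step
        f = (k/x_star)**(k+1) * d**k * np.exp(-k*d/x_star) / math.factorial(k)
        print("preferred distance of max-firing neuron:", x_star[np.argmax(f)])
        print("true distance to bound:", d)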

    The Inferior Temporal Numeral Area distinguishes numerals from other character categories during passive viewing: A representational similarity analysis

    A region in the posterior inferior temporal gyrus (pITG) is thought to be specialized for processing Arabic numerals, but fMRI studies that compared passive viewing of numerals to other character types (e.g., letters and novel characters) have not found evidence of numeral preference in the pITG. However, recent studies showed that the engagement of the pITG is modulated by attention and task contexts, suggesting that passive viewing paradigms may be ill-suited for examining numeral specialization in the pITG. It is possible, however, that even if the strengths of responses to different category types are similar, the distributed response patterns (i.e., neural representations) in a candidate numeral-preferring pITG region (pITG-numerals) may reveal categorical distinctions, even during passive viewing. Using representational similarity analyses with three datasets that share the same task paradigm and stimulus sets (total N = 88), we tested whether the neural representations of digits, letters, and novel characters in pITG-numerals were organized according to visual form and/or conceptual categories (e.g., familiar versus novel, numbers versus others). Small-scale frequentist and Bayesian meta-analyses of our dataset-specific findings revealed that the organization of neural representations in pITG-numerals is unlikely to be described by differences in abstract shape, but can be described by a categorical digits versus letters distinction, or even a digits versus others distinction (suggesting greater numeral sensitivity). Evidence of greater numeral sensitivity during passive viewing suggests that pITG-numerals is likely part of a neural pathway developed for automatic processing of objects with potential numerical relevance. Given that numerals and letters do not differ categorically in terms of shape, the categorical distinction in pITG-numerals during passive viewing must reflect ontogenetic differentiation of symbol set representations based on repeated usage of numbers and letters in differing task contexts.
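
    The logic of the representational similarity analysis can be sketched as follows (hypothetical data and condition counts, not the study's pipeline): pairwise dissimilarities between activation patterns form a neural RDM, which is then compared against a categorical model RDM such as the digits-versus-others distinction.

        import numpy as np
        from scipy.spatial.distance import pdist
        from scipy.stats import spearmanr

        rng = np.random.default_rng(1)
        n_stim, n_vox = 12, 200               # e.g., 4 digits, 4 letters, 4 novel characters
        patterns = rng.standard_normal((n_stim, n_vox))  # stand-in for pITG-numerals patterns

        # Neural RDM: correlation distance between every pair of conditions.
        neural_rdm = pdist(patterns, metric="correlation")

        # Model RDM for a categorical "digits vs. others" distinction:
        # 0 within the same side of the boundary, 1 across it.
        category = np.array([0]*4 + [1]*8)    # digits vs. letters + novel characters
        model_rdm = pdist(category[:, None], metric=lambda u, v: float(u[0] != v[0]))

        rho, p = spearmanr(neural_rdm, model_rdm)
        print(f"model fit: Spearman rho = {rho:.3f}, p = {p:.3f}")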

    Rigorous numerical approaches in electronic structure theory

    Electronic structure theory concerns the description of molecular properties according to the postulates of quantum mechanics. For practical purposes, this is realized entirely through numerical computation, the scope of which is constrained by computational costs that increase rapidly with the size of the system. The significant progress made in this field over the past decades has been facilitated in part by the willingness of chemists to forego some mathematical rigour in exchange for greater efficiency. While such compromises allow large systems to be computed feasibly, there are lingering concerns over the impact they have on the quality of the results. This research is motivated by two key issues that contribute to this loss of quality, namely i) the numerical errors accumulated due to the use of finite precision arithmetic and the application of numerical approximations, and ii) the reliance on iterative methods that are not guaranteed to converge to the correct solution. Taking these issues into consideration, the aim of this thesis is to explore ways to perform electronic structure calculations with greater mathematical rigour through the application of rigorous numerical methods, focusing in particular on methods based on interval analysis and deterministic global optimization. The Hartree-Fock electronic structure method is used as the subject of this study due to its ubiquity within this domain. We outline an approach for placing rigorous bounds on numerical error in Hartree-Fock computations. This is achieved through the application of interval analysis techniques, which rigorously bound and propagate quantities affected by numerical error. Using this approach, we implement a program called Interval Hartree-Fock. Given a closed-shell system and the current electronic state, this program computes rigorous error bounds on quantities including i) the total energy, ii) molecular orbital energies, iii) molecular orbital coefficients, and iv) derived electronic properties. Interval Hartree-Fock is then adapted as an error analysis tool for studying the impact of numerical error in Hartree-Fock computations. It is used to investigate the effect of input-related factors such as system size and basis set type on the numerical accuracy of the Hartree-Fock total energy. Consideration is also given to the impact of various algorithm design decisions, including the application of different integral screening thresholds, the choice between single and double precision arithmetic in two-electron integral evaluation, and the adjustment of interpolation table granularity. These factors are relevant both to the usage of conventional Hartree-Fock code and to the development of Hartree-Fock code optimized for novel computing devices such as graphics processing units. We then present an approach for solving the Hartree-Fock equations to within a guaranteed margin of error. This is achieved by treating the Hartree-Fock equations as a non-convex global optimization problem, which is then solved using deterministic global optimization. The main contribution of this work is the development of algorithms for handling quantum chemistry specific expressions, such as the one- and two-electron integrals, within the deterministic global optimization framework. This approach was implemented as an extension to an existing open-source solver. Proof-of-concept calculations are performed for a variety of problems within Hartree-Fock theory, including i) point energy calculation, ii) geometry optimization, iii) basis set optimization, and iv) excited state calculation. Performance analyses of these calculations are also presented and discussed.
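
    The central mechanism, interval arithmetic, can be illustrated with a toy example (a schematic Python sketch, not the thesis' Interval Hartree-Fock code; a rigorous implementation would additionally round lower bounds down and upper bounds up at every operation, and the numbers below are made up for illustration).

        class Interval:
            """Toy interval type: every quantity is carried as [lo, hi] bounds."""
            def __init__(self, lo, hi=None):
                self.lo, self.hi = lo, hi if hi is not None else lo
            def __add__(self, other):
                return Interval(self.lo + other.lo, self.hi + other.hi)
            def __mul__(self, other):
                c = [self.lo*other.lo, self.lo*other.hi,
                     self.hi*other.lo, self.hi*other.hi]
                return Interval(min(c), max(c))
            def __repr__(self):
                return f"[{self.lo:.12g}, {self.hi:.12g}]"

        # Bounding an energy-like expression sum_i c_i^2 * h_i when the orbital
        # coefficients c_i are only known to within +/- 1e-8 (e.g., leftover
        # iteration error); the bounds propagate through every operation.
        c = [Interval(0.6 - 1e-8, 0.6 + 1e-8), Interval(0.8 - 1e-8, 0.8 + 1e-8)]
        h = [Interval(-1.25), Interval(-0.47)]
        E = c[0]*c[0]*h[0] + c[1]*c[1]*h[1]
        print("rigorous enclosure of E:", E)   # true value guaranteed to lie inside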

    Variational determination of ground and excited-state two-electron reduced density matrices in the doubly occupied configuration space : A dispersion operator approach

    This work implements a variational determination of the elements of two-electron reduced density matrices corresponding to the ground and excited states of N-electron interacting systems based on the dispersion operator technique. The procedure extends the previously reported proposal [Nakata et al., J. Chem. Phys. 125, 244109 (2006)] to two-particle interaction Hamiltonians and N-representability conditions for the two-, three-, and four-particle reduced density matrices in the doubly occupied configuration interaction space. The treatment has been applied to describe electronic spectra using two benchmark exactly solvable pairing models: reduced Bardeen–Cooper–Schrieffer and Richardson–Gaudin–Kitaev Hamiltonians. The dispersion operator combined with N-representability conditions up to the four-particle reduced density matrices provides excellent results.
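
    For orientation, the following sketch sets up one of the benchmark models by brute force (illustrative Python, not the paper's variational dispersion-operator approach): a small reduced BCS pairing Hamiltonian is diagonalized exactly in the doubly occupied (seniority-zero) space, and the pair block of the resulting 2-RDM is checked against the most basic N-representability requirement, positive semidefiniteness.

        import itertools
        import numpy as np

        n_levels, n_pairs, g = 6, 3, 0.5
        eps = np.arange(n_levels, dtype=float)   # single-particle level energies

        # Seniority-zero basis: which levels hold a doubly occupied pair.
        basis = list(itertools.combinations(range(n_levels), n_pairs))
        index = {occ: i for i, occ in enumerate(basis)}
        H = np.zeros((len(basis), len(basis)))

        for i, occ in enumerate(basis):
            # Diagonal: level energies plus the p = q pairing terms.
            H[i, i] = 2*eps[list(occ)].sum() - g*n_pairs
            # Off-diagonal: -g moves a pair from occupied q to empty p.
            for q in occ:
                for p in range(n_levels):
                    if p not in occ:
                        new = tuple(sorted(set(occ) - {q} | {p}))
                        H[index[new], i] -= g

        E, V = np.linalg.eigh(H)
        ground = V[:, 0]

        # Pair block of the 2-RDM: D[p, q] = <P_p^dag P_q>.
        D = np.zeros((n_levels, n_levels))
        for i, occ in enumerate(basis):
            for q in occ:
                D[q, q] += ground[i]**2
                for p in range(n_levels):
                    if p not in occ:
                        new = tuple(sorted(set(occ) - {q} | {p}))
                        D[p, q] += ground[index[new]] * ground[i]

        print("ground-state energy:", E[0])
        print("min eigenvalue of pair RDM (>= 0 required):", np.linalg.eigvalsh(D).min())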

    Particle and energy transport in strongly driven one-dimensional quantum systems

    This Dissertation concerns the transport properties of a strongly-correlated one-dimensional system of spinless fermions, driven by an external electric field which induces the flow of charges and energy through the system. Since the system does not exchange information with the environment, the evolution can be accurately followed to arbitrarily long times by solving numerically the time-dependent Schrödinger equation, going beyond Kubo's linear response theory. The thermoelectric response of the system is characterized using the ratio of the induced energy and particle currents in the nonequilibrium state under a steady applied electric field. Even though the equilibrium response can be recovered for vanishingly small driving, strong fields produce quantum-mechanical Bloch oscillations in the currents, which disrupt their proportionality. The effects of the driving on the local state of the ring are analyzed via the reduced density matrix of small subsystems. A local entropy density can be defined and shown to be consistent with the laws of thermodynamics for quasistationary evolution. Even integrable systems are shown to thermalize under driving, with heat produced via the Joule effect by the flow of currents. The spectrum of the reduced density matrix is shown to be distributed according to the Gaussian unitary ensemble predicted by random-matrix theory, both during driving and during a subsequent relaxation. The first fully quantum model of a thermoelectric couple is realized by connecting two correlated quantum wires. The field is shown to produce heating and cooling at the junctions according to the Peltier effect, as mapped by the changes in the local entropy density. In the quasiequilibrium regime, a local temperature can be defined, at the same time verifying that the subsystems are in a Gibbs thermal state. The gradient of temperatures established by the external field is shown to counterbalance the flow of energy in the system, terminating the operation of the thermocouple. Strong applied fields lead to new nonequilibrium phenomena: observable Bloch oscillations of the density of charge and energy develop at the junctions. Moreover, in a thermocouple built out of Mott insulators, a sufficiently strong field leads to a dynamical transition reversing the sign of the charge carriers and of the Peltier effect.
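
    The mechanics of the numerical approach can be conveyed with a small exact-diagonalization toy (illustrative Python with made-up parameters; the Dissertation's systems and propagation schemes are larger and more efficient): spinless fermions on a ring, a constant field entering the hopping as a time-linear Peierls phase, and step-by-step integration of the time-dependent Schrödinger equation while the charge current is tracked.

        import numpy as np
        from scipy.linalg import expm

        L, N, th, Vint, F = 8, 4, 1.0, 1.0, 0.5   # sites, fermions, hopping, interaction, field
        states = [s for s in range(1 << L) if bin(s).count("1") == N]
        idx = {s: i for i, s in enumerate(states)}
        dim = len(states)

        def sign(s, a, b):
            """Fermionic sign for a hop between sites a and b of bit-state s."""
            lo, hi = min(a, b), max(a, b)
            between = ((1 << hi) - 1) ^ ((1 << (lo + 1)) - 1)   # bits strictly between
            return -1.0 if bin(s & between).count("1") % 2 else 1.0

        def hop(phi, amp):
            """Matrix of sum_a amp * e^{i phi} c^+_{a+1} c_a + h.c. on the ring."""
            T = np.zeros((dim, dim), dtype=complex)
            for i, s in enumerate(states):
                for a in range(L):
                    b = (a + 1) % L
                    if (s >> a) & 1 and not (s >> b) & 1:
                        s2 = s ^ (1 << a) ^ (1 << b)
                        z = amp * np.exp(1j * phi) * sign(s, a, b)
                        T[idx[s2], i] += z
                        T[i, idx[s2]] += np.conj(z)
            return T

        # Nearest-neighbour interaction is diagonal in this basis.
        Vdiag = np.array([Vint * sum(((s >> a) & 1) * ((s >> ((a + 1) % L)) & 1)
                                     for a in range(L)) for s in states])

        H_of = lambda phi: hop(phi, -th) + np.diag(Vdiag)
        J_of = lambda phi: hop(phi, 1j * th)        # charge current J = -dH/dphi

        # Start from the zero-field ground state, then switch the field on.
        w, V = np.linalg.eigh(H_of(0.0))
        psi = V[:, 0].astype(complex)
        dt, nsteps = 0.05, 240
        for k in range(nsteps):
            psi = expm(-1j * dt * H_of(F * (k + 0.5) * dt)) @ psi   # midpoint flux
            if (k + 1) % 48 == 0:
                j = np.vdot(psi, J_of(F * (k + 1) * dt) @ psi).real
                print(f"t = {(k+1)*dt:5.2f}   current = {j:+.4f}")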

    The Complexity of the Consistency and N-representability Problems for Quantum States

    QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most notably the "Local Hamiltonian" problem introduced by Kitaev. In this dissertation we show some new QMA-complete problems. The first one is "Consistency of Local Density Matrices": given several density matrices describing different (constant-size) subsets of an n-qubit system, decide whether these are consistent with a single global state. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from Local Hamiltonian. This uses algorithms for convex optimization with a membership oracle, due to Yudin and Nemirovskii. Next we show that two problems from quantum chemistry, "Fermionic Local Hamiltonian" and "N-representability," are QMA-complete. These problems arise in calculating the ground state energies of molecular systems. N-representability is a key component in recently developed numerical methods using the contracted Schrödinger equation. Although these problems have been studied since the 1960s, it is only recently that the theory of quantum computation has allowed us to properly characterize their complexity. Finally, we study some special cases of the Consistency problem, pertaining to 1-dimensional and "stoquastic" systems. We also give an alternative proof of a result due to Jaynes: whenever local density matrices are consistent, they are consistent with a Gibbs state.
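
    The easy direction of the Consistency problem can be demonstrated directly (a small NumPy sketch, not from the dissertation): given a global n-qubit state, its local density matrices follow from partial traces; the QMA-hard task is the converse, deciding from the marginals alone whether any compatible global state exists.

        import numpy as np

        def partial_trace(rho, keep, n):
            """Reduce an n-qubit density matrix to the qubits listed in `keep`."""
            rho = rho.reshape([2] * (2 * n))
            m = n
            for q in sorted(set(range(n)) - set(keep), reverse=True):
                rho = np.trace(rho, axis1=q, axis2=q + m)   # trace out qubit q
                m -= 1
            return rho.reshape(2**m, 2**m)

        # Global state: 3-qubit GHZ.
        psi = np.zeros(8); psi[0] = psi[7] = 1/np.sqrt(2)
        rho = np.outer(psi, psi)

        # Constant-size marginals of the global state, consistent by construction.
        rho01 = partial_trace(rho, keep=[0, 1], n=3)
        rho12 = partial_trace(rho, keep=[1, 2], n=3)
        print("unit trace:", np.isclose(np.trace(rho01), 1.0))
        print("identical 2-qubit marginals (GHZ symmetry):", np.allclose(rho01, rho12))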

    Development of a long life design procedure for Australian asphalt pavements

    This project examined the incorporation of the Fatigue Endurance Limit concept into a pavement design procedure. The result was the development of a validated method, based on fundamental laboratory testing, which can be used to determine the maximum thickness of an asphalt pavement beyond which any increase in design thickness will yield little to no increase in the structural capacity of the pavement.

    CHARACTERIZING UNCERTAINTY OF A HYDROLOGIC MODELING SYSTEM FOR OPERATIONAL FLOOD FORECASTING OVER THE CONTERMINOUS UNITED STATES

    The purpose of this work was to study the macro-scale patterns of simulated streamflow errors in order to characterize uncertainty in a hydrologic modeling system and establish the basis for a probabilistic forecasting framework. The particular application of this endeavor is flood and flash flood forecasting in an operational context. The hydrologic modeling system has been implemented at 1-km/5-min resolution to generate estimates of streamflow over the Conterminous United States (CONUS). The parameterization of the hydrologic model was prepared using spatially distributed information on soil characteristics, land cover/land use, and topography alone. An innovative method to estimate parameter values for the physics-based flow routing model was developed for the purpose of this research. Unlike the standard practice in hydrologic modeling exercises, no calibration of the hydrologic model was performed following its initial configuration. This calibration-free approach guarantees the spatiotemporal consistency of uncertainty and model biases, which is key for the methodology explored herein. Data from the CONUS-wide stream gauge network of the United States Geological Survey (USGS) were used as a reference to evaluate discrepancies with the hydrological model predictions. Only stream gauges with drainage areas of 1,000 km² or less were employed. Streamflow errors were studied at the event scale, with particular focus on peak flow magnitude and timing. A total of 2,680 catchments and 75,496 events were used for the error analysis. A methodology based on automatic processing algorithms was developed to handle this large sample for model diagnostics. Associations between streamflow errors and geophysical factors were explored and modeled. It was found that hydro-climatic factors and radar coverage could explain significant underestimation of peak flow in regions of complex terrain. Furthermore, the statistical modeling of peak flow errors showed that other geophysical factors, such as basin geomorphometry and pedology, could also provide explanatory information. Results from this research demonstrate the potential of uncertainty characterization to provide feedback for model improvement and its utility in enabling probabilistic flood forecasting that can be extended to ungauged locations.
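
    As an illustration of the event-scale error metrics (hypothetical variable names and synthetic hydrographs, not the study's code), peak-magnitude and peak-timing errors for one event might be computed as follows:

        import numpy as np

        def peak_errors(q_obs, q_sim, t):
            """Percent peak-magnitude error and peak-timing error for one event."""
            i_obs, i_sim = np.argmax(q_obs), np.argmax(q_sim)
            mag_err = 100.0 * (q_sim[i_sim] - q_obs[i_obs]) / q_obs[i_obs]
            time_err = t[i_sim] - t[i_obs]      # positive = simulated peak is late
            return mag_err, time_err

        # Synthetic 5-minute series for a single event (purely for illustration).
        t = np.arange(0, 24, 5/60)              # hours at 5-min resolution
        q_obs = 50 * np.exp(-0.5 * ((t - 10.0) / 1.5)**2) + 5
        q_sim = 42 * np.exp(-0.5 * ((t - 10.8) / 1.8)**2) + 5
        mag, lag = peak_errors(q_obs, q_sim, t)
        print(f"peak magnitude error: {mag:+.1f}%   timing error: {lag:+.2f} h")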

    Representational dynamics across multiple timescales in human cortical networks

    Human cognition occurs at multiple timescales, from immediate processing of ongoing experiences to slowly drifting higher-level thoughts. To understand how the brain selects and represents these various types of information to guide behavior, this thesis examined representational content within sensory regions, the multiple demand (MD) network, and the default mode network (DMN). Chapter 1 provides a background review of the current literature. It begins by reviewing experimental investigations of component visual processes that unfold over time. Next, the MD network is introduced as a collection of frontal and parietal regions involved in implementing cognitive control by assembling the operations required for task-relevant behavior. Finally, the DMN is introduced in the context of temporal processing hierarchies, with a focus on its representation of situation models summarizing interactions among entities and the environment. The first experiment, presented in Chapter 2, used EEG/MEG to track multiple component processes of selective attention. Five distinct processing operations with different time-courses were quantified, including representation of visual display properties, target location, target identity, behavioral significance, and finally, possible reactivation of the attentional template. Chapter 3 used fMRI to examine neural representations of task episodes, which are temporally organized sequences of steps that occur within a given context. It was found that MD and visual regions showed sensitivity to the fine structure of the contents within a task, while DMN regions showed gradual change throughout the entire task, with increased activation at the offset of the entire episode. Chapter 4 analyzed activation profiles of DMN regions using six diverse tasks to examine their functional convergence during social, episodic, and self-referential thought. Results supported proposals of separate subsystems, yet also suggest integration within the DMN. The final chapter, Chapter 5, provides an extended discussion of theoretical concepts related to the three experiments and proposes possible avenues for further research.