
    Fast Ensemble Smoothing

    Smoothing is essential to many oceanographic, meteorological, and hydrological applications. The fixed-interval smoothing problem updates all desired states within a time interval using all available observations, whereas the fixed-lag smoothing problem updates only a fixed number of states prior to the current observation time. Fixed-lag smoothing is generally thought to be computationally faster than fixed-interval smoothing and can be an appropriate approximation for long-interval problems. In this paper, we take an ensemble-based approach to fixed-interval and fixed-lag smoothing and synthesize two algorithms. The first produces a solution to the fixed-interval problem in time linear in the interval length, with a fixed constant factor; the second produces a fixed-lag solution whose cost is independent of the lag length. Identical-twin experiments conducted with the Lorenz-95 model show that, for lag lengths approximately equal to the error doubling time or for long intervals, the proposed methods can provide significant computational savings. These results suggest that ensemble methods yield both fixed-interval and fixed-lag smoothing solutions at little additional cost over filtering and model propagation: in practical ensemble applications the additional increment is a small fraction of either filtering or model propagation costs. We also show that fixed-interval smoothing can run as fast as fixed-lag smoothing and may be advantageous when memory is not an issue.
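
    Both algorithms build on ensemble smoother increments in which a past state is updated from a later observation via ensemble-estimated covariances. The following is a minimal sketch of one such increment under a stochastic (perturbed-observation) formulation; it is not the authors' linear-time or lag-independent algorithm, and all names and conventions here are illustrative.

    ```python
    import numpy as np

    def ensemble_smoother_update(E_past, E_now, H, y, R, rng):
        """One smoother increment: update the ensemble of a past state
        using the observation at the current time, with all covariances
        estimated from the ensemble itself.

        E_past : (n, N) ensemble of the past state to be smoothed
        E_now  : (n, N) ensemble of the current state
        H      : (m, n) observation operator, y : (m,) observation
        R      : (m, m) observation-error covariance
        """
        N = E_past.shape[1]
        A_past = E_past - E_past.mean(axis=1, keepdims=True)  # past-state anomalies
        HE = H @ E_now                                        # predicted observations
        HA = HE - HE.mean(axis=1, keepdims=True)              # their anomalies
        P_xy = A_past @ HA.T / (N - 1)                        # cross-covariance
        P_yy = HA @ HA.T / (N - 1) + R                        # innovation covariance
        K = np.linalg.solve(P_yy, P_xy.T).T                   # smoother gain
        # Perturbed observations, one per ensemble member.
        Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, size=N).T
        return E_past + K @ (Y - HE)
    ```

    Applying such an increment to every stored state in the lag window (or, for fixed-interval smoothing, across the whole interval) is the operation whose aggregate cost the paper argues can be kept to a small fraction of filtering and model propagation.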

    Judgments of effort exerted by others are influenced by received rewards

    Estimating invested effort is a core dimension for evaluating one's own and others' actions, and views on the relationship between effort and rewards are deeply ingrained in various societal attitudes. Internal representations of effort, however, are inherently noisy, for example due to the variability of sensorimotor and visceral responses to physical exertion. The uncertainty in effort judgments is further aggravated when there is no direct access to the internal representations of exertion, such as when estimating the effort of another person. Bayesian cue integration suggests that this uncertainty can be resolved by incorporating additional cues that are predictive of effort, such as received rewards. We hypothesized that judgments about the effort spent on a task will be influenced by the magnitude of received rewards. Additionally, we surmised that such influence might further depend on individual beliefs regarding the relationship between hard work and prosperity, as exemplified by a conservative work ethic. To test these predictions, participants performed an effortful task interleaved with a partner and were informed about the obtained reward before rating either their own or the partner's effort. We show that higher rewards led to higher estimations of exerted effort in self-judgments, and this effect was even more pronounced for other-judgments. In both types of judgment, computational modelling revealed that reward information and sensorimotor markers of exertion were combined in a Bayes-optimal manner in order to reduce uncertainty. Remarkably, the extent to which rewards influenced effort judgments was associated with conservative world-views, indicating links between this phenomenon and general beliefs about the relationship between effort and earnings in society.
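
    For Gaussian cues, the Bayes-optimal combination invoked here has a simple closed form: each cue is weighted by its precision (inverse variance), so the less reliable cue pulls the estimate less. A minimal illustration with hypothetical numbers:

    ```python
    def integrate_cues(mu_sensory, var_sensory, mu_reward, var_reward):
        """Bayes-optimal fusion of two Gaussian effort cues: a noisy
        sensorimotor estimate and a reward-based prediction. Each cue is
        weighted by its precision (inverse variance)."""
        w_s = 1.0 / var_sensory
        w_r = 1.0 / var_reward
        mu = (w_s * mu_sensory + w_r * mu_reward) / (w_s + w_r)
        var = 1.0 / (w_s + w_r)
        return mu, var

    # Judging another person's effort: the sensorimotor cue is much less
    # reliable (larger variance), so the reward cue dominates the estimate.
    print(integrate_cues(mu_sensory=50, var_sensory=100, mu_reward=70, var_reward=25))
    # -> (66.0, 20.0): the fused estimate sits closer to the reward cue.
    ```

    This weighting also captures the paper's key asymmetry: for other-judgments the sensorimotor cue carries higher variance, so reward information receives more weight than in self-judgments.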

    Towards a Multi-Subject Analysis of Neural Connectivity

    Directed acyclic graphs (DAGs) and associated probability models are widely used to model neural connectivity and communication channels. In many experiments, data are collected from multiple subjects whose connectivities may differ but are likely to share many features. In such circumstances it is natural to leverage similarity between subjects to improve statistical efficiency. The first exact algorithm for estimation of multiple related DAGs was recently proposed by Oates et al. (2014); in this letter we present examples and discuss implications of the methodology as applied to the analysis of fMRI data from a multi-subject experiment. Elicitation of tuning parameters requires care, and we illustrate how this may proceed retrospectively based on technical replicate data. In addition to joint learning of subject-specific connectivity, we allow for heterogeneous collections of subjects and simultaneously estimate relationships between the subjects themselves. This letter aims to highlight the potential for exact estimation in the multi-subject setting. (To appear in Neural Computation 27:1-2.)
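
    The exact algorithm of Oates et al. (2014) operates over DAG structures directly; as a toy illustration of the underlying idea only (not their method), a joint objective can reward per-subject fit while penalizing structural disagreement between subjects. All names below are placeholders.

    ```python
    import numpy as np

    def joint_dag_score(adjacencies, subject_scores, lam):
        """Toy joint objective for multiple related DAGs: sum of per-subject
        fit scores minus a penalty on edges that differ between subjects.

        adjacencies    : list of (p, p) binary arrays, one DAG per subject
        subject_scores : list of callables, subject_scores[k](A) -> fit score
        lam            : tuning parameter encouraging shared structure
        """
        fit = sum(score(A) for score, A in zip(subject_scores, adjacencies))
        diff = sum(np.abs(Ai - Aj).sum()
                   for i, Ai in enumerate(adjacencies)
                   for Aj in adjacencies[i + 1:])
        return fit - lam * diff
    ```

    Allowing a separate penalty weight per pair of subjects, rather than the single `lam` used here, is one way to accommodate heterogeneous collections and to estimate between-subject relationships alongside the networks, in the spirit of the letter.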

    Unifying prospective and retrospective interval-time estimation: a fading-Gaussian activation-based model of interval-timing

    First, Hass and Hermann (2012) have shown that only variance-based processes will lead to the scalar growth of error that is characteristic of human time judgments. Second, a major meta-review of over one hundred studies (Block et al., 2010) reveals a striking interaction between the way in which temporal judgments are queried and the effect of cognitive load on participants' judgments of interval duration. For retrospective time judgments, estimates under high cognitive load are longer than under low cognitive load. For prospective judgments, the reverse pattern holds, with increased cognitive load leading to shorter estimates. We describe GAMIT, a Gaussian spreading-activation model in which the sampling rate of an activation trace is differentially affected by cognitive load. The model unifies prospective and retrospective time estimation, normally considered separately, by relating them to the same underlying process. The scalar property of time estimation arises naturally from the model dynamics, and the model shows the appropriate interaction between mode of query and cognitive load.
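
    A sketch of the kind of mechanism the abstract describes, under strong simplifying assumptions of mine: a Gaussian activation trace that fades with elapsed time is read out at a load-dependent rate, and a duration estimate is obtained by inverting the trace. Parameter values and functional details are illustrative, not GAMIT's.

    ```python
    import numpy as np

    def activation(t, sigma):
        # Fading Gaussian trace: maximal at onset, decaying with elapsed time t.
        return np.exp(-t**2 / (2 * sigma**2))

    def estimate_duration(true_t, sigma, sample_rate, rng):
        """Read the trace at a load-dependent sampling rate: higher cognitive
        load -> fewer samples -> noisier readout of the trace level."""
        n = max(1, rng.poisson(sample_rate * true_t))        # samples taken
        a = activation(true_t, sigma) + rng.normal(0.0, 0.05 / np.sqrt(n))
        a = np.clip(a, 1e-6, 1.0 - 1e-6)
        return sigma * np.sqrt(-2.0 * np.log(a))             # invert the trace

    rng = np.random.default_rng(1)
    # A lower sample_rate stands in for higher concurrent cognitive load.
    print(estimate_duration(5.0, sigma=10.0, sample_rate=2.0, rng=rng))
    ```

    In this toy version, readout noise on the trace translates into estimation error that grows with elapsed time, which is the qualitative route by which a fading-trace account can produce scalar-like variability.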

    Aggregated motion estimation for real-time MRI reconstruction

    Real-time magnetic resonance imaging (MRI) methods generally shorten the measuring time by acquiring less data than needed according to the sampling theorem. In order to obtain a proper image from such undersampled data, the reconstruction is commonly defined as the solution of an inverse problem, which is regularized by a priori assumptions about the object. While practical realizations have hitherto been surprisingly successful, strong assumptions about the continuity of image features may affect the temporal fidelity of the estimated images. Here we propose a novel approach for the reconstruction of serial real-time MRI data which integrates the deformations between nearby frames into the data consistency term. The deformations are not required to be affine or rigid, and the method does not need additional measurements. Moreover, it handles multi-channel MRI data by simultaneously determining the image and its coil sensitivity profiles in a nonlinear formulation which also adapts to non-Cartesian (e.g., radial) sampling schemes. Experimental results of a motion phantom with controlled speed and in vivo measurements of rapid tongue movements demonstrate image improvements in preserving temporal fidelity and removing residual artifacts. (This is a preliminary technical report; a polished version is published in Magnetic Resonance in Medicine.)
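
    A hedged sketch of the aggregated data-consistency idea, as a linearized stand-in that ignores the joint coil-sensitivity estimation of the actual nonlinear formulation: the image is fit against several nearby frames through per-frame warps, with encoding and warping operators supplied as abstract callables. All names are mine.

    ```python
    import numpy as np

    def reconstruct(frames_y, forward_ops, warps, x0, lam, n_iter=50, step=0.5):
        """Gradient descent on an aggregated data-consistency objective:

            sum_j || A_j(W_j x) - y_j ||^2 + lam * ||x||^2

        where W_j warps the image estimate x to neighbouring frame j and
        A_j is that frame's undersampled MRI encoding operator. Operators
        are supplied as (op, adjoint) pairs of callables.
        """
        x = x0.copy()
        for _ in range(n_iter):
            grad = 2.0 * lam * x
            for y, (A, A_adj), (W, W_adj) in zip(frames_y, forward_ops, warps):
                r = A(W(x)) - y                # residual against frame j's data
                grad += 2.0 * W_adj(A_adj(r))  # chain rule through warp, encoding
            x -= step * grad
        return x
    ```

    Because the warps enter the data-consistency term itself rather than a temporal smoothness penalty, sharp frame-to-frame motion contributes signal instead of being smoothed away, which is the temporal-fidelity argument the abstract makes.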

    Variable Selection and Model Choice in Structured Survival Models

    In many situations, medical applications call for flexible survival models that extend the classical Cox model via the inclusion of time-varying and nonparametric effects. These structured survival models are very flexible, but additional difficulties arise when model choice and variable selection are desired. In particular, it has to be decided which covariates should be assigned time-varying effects, or whether parametric modeling is sufficient for a given covariate. Component-wise boosting provides a means of likelihood-based model fitting that enables simultaneous variable selection and model choice. We introduce a component-wise likelihood-based boosting algorithm for survival data that permits the inclusion of both parametric and nonparametric time-varying effects as well as nonparametric effects of continuous covariates, utilizing penalized splines as the main modeling technique. Its properties and performance are investigated in simulation studies. The new modeling approach is used to build a flexible survival model for intensive care patients suffering from severe sepsis. A software implementation is available to the interested reader.
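
    A schematic component-wise boosting loop, showing how variable selection falls out of the fitting procedure: at each step every candidate component is fit to the current working residual, but only the single best one is updated. This is a generic functional-gradient version under my own simplifications, not the authors' likelihood-based algorithm, and all names are placeholders.

    ```python
    import numpy as np

    def componentwise_boost(X, base_fit, loss, n_steps=100, nu=0.1):
        """Component-wise boosting sketch.

        X        : (n, p) covariate matrix
        base_fit : base_fit(x_j, u) -> fitted values of component j to u
        loss     : loss(eta) -> (value, negative_gradient) for predictor eta
        nu       : small step length (implicit shrinkage)
        """
        n, p = X.shape
        eta = np.zeros(n)                          # additive predictor
        for _ in range(n_steps):
            _, u = loss(eta)                       # working "residual"
            fits = [base_fit(X[:, j], u) for j in range(p)]
            best = int(np.argmin([np.sum((u - f) ** 2) for f in fits]))
            eta += nu * fits[best]                 # update only the winner
        return eta
    ```

    In the survival setting of the paper, `loss` would correspond to the (partial) likelihood and the candidate base-learners would include both parametric terms and penalized-spline learners for time-varying or nonlinear effects, so the competition among components simultaneously selects variables and chooses between parametric and flexible modeling.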