
    Generalized Network Psychometrics: Combining Network and Latent Variable Models

    We introduce the network model as a formal psychometric model, conceptualizing the covariance between psychometric indicators as resulting from pairwise interactions between observable variables in a network structure. This contrasts with standard psychometric models, in which the covariance between test items arises from the influence of one or more common latent variables. Here, we present two generalizations of the network model that encompass latent variable structures, establishing network modeling as part of the more general framework of Structural Equation Modeling (SEM). In the first generalization, we model the covariance structure of latent variables as a network. We term this framework Latent Network Modeling (LNM) and show that, with LNM, a unique structure of conditional independence relationships between latent variables can be obtained in an explorative manner. In the second generalization, the residual variance-covariance structure of indicators is modeled as a network. We term this generalization Residual Network Modeling (RNM) and show that, within this framework, identifiable models can be obtained in which local independence is structurally violated. These generalizations allow for a general modeling framework that can be used to fit, and compare, SEM models, network models, and the RNM and LNM generalizations. This methodology has been implemented in the free-to-use software package lvnet, which contains confirmatory model testing as well as two exploratory search algorithms: stepwise search algorithms for low-dimensional datasets and penalized maximum likelihood estimation for larger datasets. We show in simulation studies that these search algorithms perform adequately in identifying the structure of the relevant residual or latent networks. We further demonstrate the utility of these generalizations in an empirical example on a personality inventory dataset. Comment: Published in Psychometrika.
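
    The key object in the network model above is the pattern of pairwise conditional associations; in the Gaussian case these are the partial correlations encoded in the inverse covariance (precision) matrix. The authors' implementation is the R package lvnet; as a language-neutral illustration only, the minimal numpy sketch below shows how a partial-correlation network, and the conditional independencies it implies, can be read off a covariance matrix. The variable names and toy data are illustrative, not taken from the paper.

```python
import numpy as np

def partial_correlation_network(cov):
    """Partial correlations (network edge weights) from a covariance matrix.

    In a Gaussian graphical model, a zero partial correlation between two
    variables means they are conditionally independent given all others.
    """
    prec = np.linalg.inv(cov)             # precision matrix
    d = np.sqrt(np.diag(prec))
    pcor = -prec / np.outer(d, d)         # standardize the off-diagonal entries
    np.fill_diagonal(pcor, 0.0)           # no self-loops
    return pcor

# Toy example: three indicators with a chain-like dependence structure
rng = np.random.default_rng(0)
x1 = rng.normal(size=5000)
x2 = 0.7 * x1 + rng.normal(size=5000)
x3 = 0.7 * x2 + rng.normal(size=5000)
data = np.column_stack([x1, x2, x3])
print(np.round(partial_correlation_network(np.cov(data, rowvar=False)), 2))
# The x1-x3 entry is approximately zero: x1 and x3 are independent given x2.
```

    A zero off-diagonal entry corresponds to a missing edge, i.e. conditional independence given all remaining variables, which is the kind of structure the LNM and RNM searches described above aim to recover.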

    Evaluating Data Assimilation Algorithms

    Data assimilation leads naturally to a Bayesian formulation in which the posterior probability distribution of the system state, given the observations, plays a central conceptual role. The aim of this paper is to use this Bayesian posterior probability distribution as a gold standard against which to evaluate various commonly used data assimilation algorithms. A key aspect of geophysical data assimilation is the high dimensionality and low predictability of the computational model. With this in mind, yet with the goal of allowing an explicit and accurate computation of the posterior distribution, we study the 2D Navier-Stokes equations in a periodic geometry. We compute the posterior probability distribution by state-of-the-art statistical sampling techniques. The commonly used algorithms that we evaluate against this accurate gold standard, as quantified by comparing the relative error in reproducing its moments, are 4DVAR and a variety of sequential filtering approximations based on 3DVAR and on extended and ensemble Kalman filters. The primary conclusions are that (i) with appropriate parameter choices, approximate filters can perform well in reproducing the mean of the desired probability distribution; (ii) however, they typically perform poorly when attempting to reproduce the covariance; (iii) this poor performance is compounded by the need to modify the covariance in order to induce stability. Thus, whilst filters can be a useful tool in predicting mean behavior, they should be viewed with caution as predictors of uncertainty. These conclusions are intrinsic to the algorithms and will not change if the model complexity is increased, for example by employing a smaller viscosity or by using a detailed NWP model.
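
    For readers unfamiliar with the sequential filters being evaluated, the sketch below shows the analysis (update) step of a perturbed-observation ensemble Kalman filter in plain numpy. It is a generic, low-dimensional illustration under stated assumptions (linear observation operator, Gaussian errors), not the paper's 2D Navier-Stokes configuration or its sampling machinery; all names and numbers are illustrative.

```python
import numpy as np

def enkf_analysis(ensemble, y_obs, H, R, rng):
    """Perturbed-observation EnKF analysis step.

    ensemble : (n_members, n_state) forecast ensemble
    y_obs    : (n_obs,) observation vector
    H        : (n_obs, n_state) linear observation operator
    R        : (n_obs, n_obs) observation-error covariance
    """
    n_members = ensemble.shape[0]
    X = ensemble - ensemble.mean(axis=0)             # ensemble anomalies
    P = X.T @ X / (n_members - 1)                    # sample forecast covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)     # Kalman gain
    # Each member assimilates its own perturbed copy of the observation
    y_pert = y_obs + rng.multivariate_normal(np.zeros(len(y_obs)), R, size=n_members)
    return ensemble + (y_pert - ensemble @ H.T) @ K.T

# Tiny demo: 2-dimensional state, only the first component is observed
rng = np.random.default_rng(1)
ens = rng.normal([1.0, -1.0], 1.0, size=(50, 2))
H = np.array([[1.0, 0.0]])
R = np.array([[0.1]])
updated = enkf_analysis(ens, np.array([2.0]), H, R, rng)
print(updated.mean(axis=0))   # ensemble mean pulled toward the observation
```

    The sample covariance P here is the quantity the approximate filters are found to reproduce poorly; in practice it is often further modified (e.g. inflated or localized) to keep the filter stable, which is the kind of covariance modification conclusion (iii) refers to.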

    Reliable inference of exoplanet light curve parameters using deterministic and stochastic systematics models

    Time-series photometry and spectroscopy of transiting exoplanets allow us to study their atmospheres. Unfortunately, the precision required to extract atmospheric information surpasses the design specifications of most general-purpose instrumentation, resulting in instrumental systematics in the light curves that are typically larger than the target precision. Systematics must therefore be modelled, leaving the inference of light curve parameters conditioned on the subjective choice of models and model selection criteria. This paper aims to test the reliability of the most commonly used systematics models and model selection criteria. As we are primarily interested in recovering light curve parameters rather than the favoured systematics model, marginalisation over systematics models is introduced as a more robust alternative to simple model selection. This can incorporate uncertainties in the choice of systematics model into the error budget as well as into the model parameters. Its use is demonstrated using a series of simulated transit light curves. Stochastic models, specifically Gaussian processes, are also discussed in the context of marginalisation over systematics models, and are found to reliably recover the transit parameters for a wide range of systematics functions. None of the tested model selection criteria - including the BIC - routinely recovered the correct model. This means that commonly used methods based on simple model selection may underestimate the uncertainties when extracting transmission and eclipse spectra from real data, and low-significance claims using such techniques should be treated with caution. In general, no systematics modelling technique is perfect; however, marginalisation over many systematics models helps to mitigate poor model selection, and stochastic processes provide an even more flexible approach to modelling instrumental systematics. Comment: 15 pages, 2 figures, published in MNRAS; typo in footnote equation corrected.
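
    To make the idea of marginalising over systematics models concrete, here is a deliberately simplified numpy sketch: a transit depth is fitted jointly with several candidate polynomial baselines, and the depth estimates are then combined using BIC-derived weights. This illustrates only the weighting mechanics; the simulated light curve, the choice of BIC weights (which, as the abstract notes, are themselves not a reliable selector) and all parameter values are assumptions made for the example, not the paper's actual procedure.

```python
import numpy as np

# Simulated light curve: box transit plus a smooth instrumental trend plus noise
rng = np.random.default_rng(2)
t = np.linspace(-0.1, 0.1, 200)
in_transit = (np.abs(t) < 0.03).astype(float)
true_depth = 0.01
sigma = 5e-4
flux = 1.0 - true_depth * in_transit + 0.002 * t + 0.02 * t**2 \
       + rng.normal(0, sigma, t.size)

depths, logL, nparams = [], [], []
for order in range(4):            # candidate systematics models: polynomial order 0..3
    # Design matrix: transit template plus polynomial baseline terms
    A = np.column_stack([-in_transit] + [t**k for k in range(order + 1)])
    coef, *_ = np.linalg.lstsq(A, flux, rcond=None)
    resid = flux - A @ coef
    depths.append(coef[0])
    logL.append(-0.5 * np.sum((resid / sigma) ** 2))
    nparams.append(A.shape[1])

# BIC-based model weights (a common, if imperfect, evidence approximation)
bic = -2 * np.array(logL) + np.array(nparams) * np.log(t.size)
w = np.exp(-0.5 * (bic - bic.min()))
w /= w.sum()

depths = np.array(depths)
depth_marg = np.sum(w * depths)
spread = np.sqrt(np.sum(w * (depths - depth_marg) ** 2))   # model-to-model scatter
print(depth_marg, spread)
```

    The weighted spread of the depth estimates is the extra error-budget term that marginalisation contributes whenever the candidate systematics models disagree.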

    The MVGC multivariate Granger causality toolbox: a new approach to Granger-causal inference

    Background: Wiener-Granger causality (“G-causality”) is a statistical notion of causality applicable to time series data, whereby cause precedes, and helps predict, effect. It is defined in both time and frequency domains, and allows for the conditioning out of common causal influences. Originally developed in the context of econometric theory, it has since achieved broad application in the neurosciences and beyond. Prediction in the G-causality formalism is based on VAR (Vector AutoRegressive) modelling. New Method: The MVGC Matlab Toolbox approach to G-causal inference is based on multiple equivalent representations of a VAR model by (i) regression parameters, (ii) the autocovariance sequence and (iii) the cross-power spectral density of the underlying process. It features a variety of algorithms for moving between these representations, enabling selection of the most suitable algorithms with regard to computational efficiency and numerical accuracy. Results: In this paper we explain the theoretical basis, computational strategy and application to empirical G-causal inference of the MVGC Toolbox. We also show via numerical simulations the advantages of our Toolbox over previous methods in terms of computational accuracy and statistical inference. Comparison with Existing Method(s): The standard method of computing G-causality involves estimation of parameters for both a full and a nested (reduced) VAR model. The MVGC approach, by contrast, avoids explicit estimation of the reduced model, thus eliminating a source of estimation error and improving statistical power, and in addition facilitates fast and accurate estimation of the computationally awkward case of conditional G-causality in the frequency domain. Conclusions: The MVGC Toolbox implements a flexible, powerful and efficient approach to G-causal inference. Keywords: Granger causality, vector autoregressive modelling, time series analysis
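
    As a point of reference for the comparison above, the following numpy sketch implements the standard two-regression (full versus reduced VAR) estimator of time-domain G-causality, which is exactly the approach the MVGC Toolbox is designed to avoid. It is a textbook baseline with illustrative data, not the Toolbox's autocovariance- or spectral-based algorithms.

```python
import numpy as np

def granger_causality(x, y, p):
    """Time-domain Granger causality from y to x at lag order p.

    Standard two-regression estimator: compare the residual variance of x
    predicted from its own past ("reduced") against x predicted from the
    past of both x and y ("full").
    """
    n = len(x)
    X_own  = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    X_full = np.column_stack([X_own] +
                             [y[p - k - 1:n - k - 1][:, None] for k in range(p)])
    target = x[p:]

    def resid_var(A):
        A = np.column_stack([np.ones(len(target)), A])   # add an intercept
        beta, *_ = np.linalg.lstsq(A, target, rcond=None)
        return np.var(target - A @ beta)

    return np.log(resid_var(X_own) / resid_var(X_full))

# Demo: y drives x with a one-step delay, but not the other way round
rng = np.random.default_rng(3)
y = rng.normal(size=2001)
x = np.zeros(2001)
for t in range(1, 2001):
    x[t] = 0.5 * x[t - 1] + 0.8 * y[t - 1] + rng.normal()
print(granger_causality(x, y, p=2))   # clearly positive
print(granger_causality(y, x, p=2))   # near zero
```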

    PICACS: self-consistent modelling of galaxy cluster scaling relations

    In this paper, we introduce PICACS, a physically-motivated, internally consistent model of scaling relations between galaxy cluster masses and their observable properties. This model can be used to constrain simultaneously the form, scatter (including its covariance) and evolution of the scaling relations, as well as the masses of the individual clusters. In this framework, scaling relations between observables (such as that between X-ray luminosity and temperature) are modelled explicitly in terms of the fundamental mass-observable scaling relations, and so are fully constrained without being fit directly. We apply the PICACS model to two observational datasets, and show that it performs as well as traditional regression methods for simply measuring individual scaling relation parameters, but reveals additional information on the processes that shape the relations while providing self-consistent mass constraints. Our analysis suggests that the observed combination of slopes of the scaling relations can be described by a deficit of gas in low-mass clusters that is compensated for by elevated gas temperatures, such that the total thermal energy of the gas in a cluster of given mass remains close to self-similar expectations. This is interpreted as the result of AGN feedback removing low-entropy gas from low-mass systems while heating the remaining gas. We deconstruct the luminosity-temperature (LT) relation and show that its steepening compared to self-similar expectations can be explained solely by this combination of gas depletion and heating in low-mass systems, without any additional contribution from a mass dependence of the gas structure. Finally, we demonstrate that a self-consistent analysis of the scaling relations leads to an expectation of self-similar evolution of the LT relation that is significantly weaker than is commonly assumed. Comment: Updated to match published version. Improvements to presentation of results and treatment of scatter and covariance. Main conclusions unchanged.
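
    The central modelling idea, that observable-observable relations follow from the mass-observable relations, can be made concrete with a short calculation. The sketch below uses the standard self-similar power-law slopes and one illustrative "observed" luminosity-mass slope; the numbers are textbook values chosen for illustration, not the PICACS fits.

```python
# Self-similar expectations for the mass-observable slopes (power-law indices)
b_LM = 4.0 / 3.0     # bolometric L_X proportional to M^(4/3)
b_TM = 2.0 / 3.0     # T proportional to M^(2/3)

# The L-T relation is fixed by the two mass-observable relations:
# L ~ M^b_LM and T ~ M^b_TM imply L ~ T^(b_LM / b_TM)
print("self-similar L-T slope:", b_LM / b_TM)          # 2.0

# A steeper L-M relation (e.g. from gas depletion in low-mass systems)
# steepens the implied L-T relation without ever fitting L-T directly.
b_LM_obs = 1.9                                         # illustrative value only
print("implied observed L-T slope:", b_LM_obs / b_TM)  # ~2.85
```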

    Supernova Constraints and Systematic Uncertainties from the First Three Years of the Supernova Legacy Survey

    We combine high-redshift Type Ia supernovae from the first three years of the Supernova Legacy Survey (SNLS) with other supernova (SN) samples, primarily at lower redshifts, to form a high-quality joint sample of 472 SNe (123 low-z, 93 SDSS, 242 SNLS, and 14 Hubble Space Telescope). SN data alone require cosmic acceleration at >99.999% confidence, including systematic effects. For the dark energy equation of state parameter (assumed constant out to at least z = 1.4) in a flat universe, we find w = –0.91^(+0.16)_(–0.20)(stat)^(+0.07)_(–0.14)(sys) from SNe only, consistent with a cosmological constant. Our fits include a correction for the recently discovered relationship between host-galaxy mass and SN absolute brightness. We pay particular attention to systematic uncertainties, characterizing them using a systematic covariance matrix that incorporates the redshift dependence of these effects, as well as the shape-luminosity and color-luminosity relationships. Unlike previous work, we include the effects of systematic terms on the empirical light-curve models. The total systematic uncertainty is dominated by calibration terms. We describe how the systematic uncertainties can be reduced with soon-to-be-available improved nearby and intermediate-redshift samples, particularly those calibrated onto USNO/SDSS-like systems.
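
    The systematic covariance matrix enters the analysis through the fit's chi-square, where it is added to the statistical uncertainties so that correlated effects such as calibration are propagated into the cosmological parameters. The sketch below shows that generic construction in numpy; it is a schematic of the standard technique with made-up numbers, not the SNLS pipeline or its actual covariance terms.

```python
import numpy as np

def chi2_with_systematics(residuals, stat_err, C_sys):
    """Chi-square of Hubble-diagram residuals with a systematic covariance.

    residuals : data minus model distance moduli (mag)
    stat_err  : per-SN statistical uncertainties (mag)
    C_sys     : systematic covariance matrix (mag^2), e.g. calibration terms
    """
    C_tot = np.diag(stat_err ** 2) + C_sys
    return residuals @ np.linalg.solve(C_tot, residuals)

# Toy example: 5 supernovae sharing a fully correlated calibration offset
rng = np.random.default_rng(4)
resid = rng.normal(0.0, 0.15, size=5)
stat = np.full(5, 0.15)
calib = 0.02                        # 0.02 mag coherent calibration uncertainty
C_sys = np.full((5, 5), calib ** 2)
print(chi2_with_systematics(resid, stat, C_sys))
```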

    Optimization viewpoint on Kalman smoothing, with applications to robust and sparse estimation

    In this paper, we present the optimization formulation of the Kalman filtering and smoothing problems, and use this perspective to develop a variety of extensions and applications. We first formulate classic Kalman smoothing as a least squares problem, highlight special structure, and show that the classic filtering and smoothing algorithms are equivalent to a particular algorithm for solving this problem. Once this equivalence is established, we present extensions of Kalman smoothing to systems with nonlinear process and measurement models, systems with linear and nonlinear inequality constraints, systems with outliers in the measurements or sudden changes in the state, and systems where the sparsity of the state sequence must be accounted for. All extensions preserve the computational efficiency of the classic algorithms, and most of the extensions are illustrated with numerical examples, which are part of an open source Kalman smoothing Matlab/Octave package. Comment: 46 pages, 11 figures.
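
    The least-squares formulation mentioned above is easy to see in the simplest case. The sketch below writes the smoothing problem for a scalar random-walk state as one stacked, whitened least-squares system and solves it directly; for this model the result coincides with the classic Kalman/RTS smoother estimate. It is a minimal numpy illustration of the formulation, not the authors' Matlab/Octave package, and the model and noise levels are made up.

```python
import numpy as np

def smooth_random_walk(y, q, r, x0, p0):
    """Kalman smoothing posed as a least-squares problem (scalar random walk).

    State model  x_t = x_{t-1} + w_t,  w_t ~ N(0, q)
    Measurements y_t = x_t + v_t,      v_t ~ N(0, r)
    The MAP smoother minimizes a quadratic in the whole state sequence, so it
    can be obtained by solving one stacked (block-)tridiagonal system.
    """
    T = len(y)
    G = np.zeros((2 * T, T))          # stacked, whitened "design" matrix
    z = np.zeros(2 * T)               # stacked, whitened targets
    # Prior and process rows: (x_0 - x0)/sqrt(p0), (x_t - x_{t-1})/sqrt(q)
    G[0, 0] = 1.0 / np.sqrt(p0); z[0] = x0 / np.sqrt(p0)
    for t in range(1, T):
        G[t, t] = 1.0 / np.sqrt(q)
        G[t, t - 1] = -1.0 / np.sqrt(q)
    # Measurement rows: (x_t - y_t)/sqrt(r)
    for t in range(T):
        G[T + t, t] = 1.0 / np.sqrt(r)
        z[T + t] = y[t] / np.sqrt(r)
    x_smooth, *_ = np.linalg.lstsq(G, z, rcond=None)   # normal equations are tridiagonal
    return x_smooth

rng = np.random.default_rng(5)
truth = np.cumsum(rng.normal(0, 0.1, size=50))
obs = truth + rng.normal(0, 0.5, size=50)
print(np.round(smooth_random_walk(obs, q=0.01, r=0.25, x0=0.0, p0=1.0)[:5], 2))
```

    The extensions described in the abstract amount to changing the penalties and constraints in this objective while preserving the structure that keeps it cheap to solve.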

    Structural Constant Conditional Correlation

    A small strand of recent literature is occupied with identifying simultaneity in multiple-equation systems through autoregressive conditional heteroscedasticity. Since this approach assumes that the structural innovations are uncorrelated, any contemporaneous connection of the endogenous variables needs to be explained exclusively by mutual spillover effects. In contrast, this paper allows for instantaneous covariances, which become identifiable by imposing the constraint of structural constant conditional correlation (SCCC). In this way, common driving forces can be modelled in addition to simultaneous transmission effects. The new methodology is applied to the Dow Jones and Nasdaq Composite indexes in a small empirical example, illuminating the scope and functioning of the SCCC model. Keywords: Simultaneity, Identification, EGARCH, CCC
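
    The constant-conditional-correlation building block used here decomposes the conditional covariance as H_t = D_t R D_t, with D_t the diagonal matrix of conditional volatilities and R a time-invariant correlation matrix. The numpy sketch below shows only that decomposition, with the conditional volatilities treated as given (in this setting they would come from EGARCH-type dynamics); it does not implement the SCCC identification of simultaneous effects, and all inputs are illustrative.

```python
import numpy as np

def ccc_correlation(resid, cond_vol):
    """Constant conditional correlation R from the decomposition H_t = D_t R D_t.

    resid    : (T, k) residuals of the k return series
    cond_vol : (T, k) conditional standard deviations, assumed to be given
               (e.g. from univariate EGARCH fits)
    """
    std_resid = resid / cond_vol                 # standardize each series
    return np.corrcoef(std_resid, rowvar=False)  # R is held constant over time

# Toy example with two correlated series and made-up volatilities
rng = np.random.default_rng(6)
eps = rng.multivariate_normal([0, 0], [[1.0, 0.6], [0.6, 1.0]], size=2000)
vol = np.column_stack([np.full(2000, 1.2), np.full(2000, 0.8)])
print(np.round(ccc_correlation(eps * vol, vol), 2))
```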