
    On the identifiability of ternary forms

    We describe a new method to determine the minimality and identifiability of a Waring decomposition A of a specific form (symmetric tensor) T in three variables. Our method, which is based on the Hilbert function of A, can distinguish between forms in the span of the Veronese image of A, which in general contains both identifiable and non-identifiable points, depending on the choice of coefficients in the decomposition. This makes our method applicable for all values of the length r of the decomposition, from 2 up to the generic rank, a range which was not achievable before. Though the method can in principle handle all cases of specific ternary forms, we introduce and describe it in detail for forms of degree 8.
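
    For reference, a Waring decomposition of length r of a degree-d form T in the variables x_0, x_1, x_2 writes it as a sum of d-th powers of linear forms; this is a standard definition, recalled here only to fix notation (the symbols are not taken from the paper):

        T = \sum_{i=1}^{r} c_i \, \ell_i^{d}, \qquad \ell_i = a_{i0} x_0 + a_{i1} x_1 + a_{i2} x_2 .

    The Waring rank of T is the smallest such r, and T is identifiable when the decomposition of minimal length is unique up to reordering and rescaling of the summands.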

    Identifiability for a class of symmetric tensors

    We use methods of algebraic geometry to find new, effective criteria for detecting the identifiability of symmetric tensors. In particular, for ternary symmetric tensors T of degree 7, we use the analysis of the Hilbert function of a finite projective set, together with the Cayley-Bacharach property, to prove that, when the Kruskal ranks of a decomposition of T are maximal (a condition which holds outside a Zariski closed set of measure 0), the tensor T is identifiable, i.e. the decomposition is unique, even if the rank lies beyond the range of application of both Kruskal's criterion and the reshaped Kruskal criterion.
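
    As background, Kruskal's criterion in its standard third-order form (recalled here only for orientation; the reshaped version applies the same bound to a flattening of the symmetric tensor) reads: if

        T = \sum_{i=1}^{r} a_i \otimes b_i \otimes c_i

    and k_A, k_B, k_C denote the Kruskal ranks of the factor matrices A = [a_1, \dots, a_r], B, C, then k_A + k_B + k_C \ge 2r + 2 guarantees that the decomposition is unique up to permutation and rescaling of the summands.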

    On the description of identifiable quartics

    In this paper we study the identifiability of specific forms (symmetric tensors), with the target of extending recent methods for the case of 3 variables to more general cases. In particular, we focus on forms of degree 4 in 5 variables. By means of tools coming from classical algebraic geometry, such as the Hilbert function, the liaison procedure and Serre's construction, we give a complete geometric description and criteria of identifiability for ranks ≥ 9, filling the gap between rank ≤ 8, covered by Kruskal's criterion, and 15, the rank of a general quartic in 5 variables. For the case r = 12, we construct an effective algorithm that guarantees that a given decomposition is unique.
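
    For orientation, two standard facts not specific to the paper: the space of quartic forms in 5 variables has dimension \binom{8}{4} = 70, so a naive parameter count suggests an expected generic rank of

        \left\lceil \tfrac{1}{5} \binom{8}{4} \right\rceil = \lceil 70/5 \rceil = 14;

    quartics in five variables are, however, one of the exceptional cases of the Alexander-Hirschowitz theorem, and the actual generic rank is 15, the upper end of the range discussed above.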

    PARX model for football matches predictions

    We propose an innovative approach to model and predict the outcome of football matches, based on the Poisson Autoregression with eXogenous covariates (PARX) model recently proposed by Agosto, Cavaliere, Kristensen and Rahbek (2016). We show that this methodology is particularly suited to modelling the goal distribution of a football team and provides good forecast performance that can be exploited to develop a profitable betting strategy. The betting strategy is based on the idea that the odds proposed by the market do not reflect the true probability of the match, because they may also incorporate betting volumes or strategic price settings designed to exploit bettors’ biases. The out-of-sample performance of the PARX model is better than that of the reference approach by Dixon and Coles (1997). We also evaluate our approach through a simple betting strategy applied to English football Premier League data for the 2013/2014 and 2014/2015 seasons. The results show that the return from the betting strategy is larger than 35% in all the cases considered, and may even exceed 100% under an alternative strategy based on a predetermined threshold, which allows us to exploit the inefficiency of the betting market.
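
    To make the model family concrete, here is a minimal Python sketch of a PARX(1,1)-type specification with a single exogenous covariate: y_t given the past is Poisson with intensity lam_t = omega + alpha*y_{t-1} + beta*lam_{t-1} + gamma*x_{t-1}, first simulated and then fitted by maximum likelihood. The lag orders, the single covariate and the parameter values are illustrative assumptions, not the specification estimated in the paper.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(0)

        def simulate_parx(T, omega, alpha, beta, gamma, x):
            """Simulate y_t ~ Poisson(lam_t), lam_t = omega + alpha*y[t-1] + beta*lam[t-1] + gamma*x[t-1]."""
            y, lam = np.zeros(T, dtype=int), np.zeros(T)
            lam[0] = omega / (1.0 - alpha - beta)      # rough unconditional mean as starting value
            y[0] = rng.poisson(lam[0])
            for t in range(1, T):
                lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1] + gamma * x[t - 1]
                y[t] = rng.poisson(lam[t])
            return y

        def neg_loglik(params, y, x):
            """Negative Poisson log-likelihood (up to a constant) of the PARX(1,1) recursion."""
            omega, alpha, beta, gamma = params
            if omega <= 0 or alpha < 0 or beta < 0 or alpha + beta >= 1:
                return np.inf                          # keep the intensity positive and the recursion stable
            lam = np.empty(len(y))
            lam[0] = omega / (1.0 - alpha - beta)
            for t in range(1, len(y)):
                lam[t] = omega + alpha * y[t - 1] + beta * lam[t - 1] + gamma * x[t - 1]
            if np.any(lam <= 0):
                return np.inf
            return -np.sum(y * np.log(lam) - lam)

        T = 500
        x = rng.gamma(2.0, 1.0, size=T)                # hypothetical non-negative covariate (e.g. recent form)
        y = simulate_parx(T, omega=0.4, alpha=0.25, beta=0.45, gamma=0.1, x=x)
        fit = minimize(neg_loglik, x0=np.array([0.5, 0.2, 0.2, 0.05]), args=(y, x), method="Nelder-Mead")
        print("estimated (omega, alpha, beta, gamma):", np.round(fit.x, 3))

    A betting layer would then map the fitted intensities into match-outcome probabilities and compare them with bookmaker odds; that step is not sketched here.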

    Synergetic and redundant information flow detected by unnormalized Granger causality: application to resting state fMRI

    Objectives: We develop a framework for the analysis of synergy and redundancy in the pattern of information flow between subsystems of a complex network. Methods: The presence of redundancy and/or synergy in multivariate time series data makes it difficult to estimate the net information flow from each driver variable to a given target. We show that, by adopting an unnormalized definition of Granger causality, one may reveal redundant multiplets of variables influencing the target by maximizing the total Granger causality to that target over all possible partitions of the set of driving variables. Consequently, we introduce a pairwise index of synergy which, unlike previous definitions of synergy, is zero when two independent sources additively influence the future state of the system. Results: We report the application of the proposed approach to resting state fMRI data from the Human Connectome Project, showing that redundant pairs of regions arise mainly due to spatial contiguity and interhemispheric symmetry, whilst synergy occurs mainly between non-homologous pairs of regions in opposite hemispheres. Conclusions: Redundancy and synergy in healthy resting brains display characteristic patterns, revealed by the proposed approach. Significance: The pairwise synergy index introduced here maps the informational character of the system at hand into a weighted complex network; the same approach can be applied to other complex systems whose normal state corresponds to a balance between redundant and synergetic circuits.
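
    As a concrete illustration of the basic quantity involved, the Python sketch below computes an unnormalized Granger causality for a single driver and target: the reduction in prediction-error variance of the target when the driver's past is added to the target's own past (the usual normalized form would take a log-ratio instead). The multiplet/partition optimization and the pairwise synergy index described in the abstract build on quantities of this kind but are not reproduced here; the lag order and the toy data are illustrative assumptions.

        import numpy as np

        def lagged(series, p):
            """Stack p lags of a 1-D series; rows are aligned with times p..T-1."""
            return np.column_stack([series[p - k - 1:len(series) - k - 1] for k in range(p)])

        def residual_variance(target, regressors):
            """Variance of the residual from an OLS regression of target on a constant and regressors."""
            X = np.column_stack([np.ones(len(target)), regressors])
            beta, *_ = np.linalg.lstsq(X, target, rcond=None)
            return (target - X @ beta).var()

        def unnormalized_gc(x, y, p=2):
            """Variance reduction of y's prediction error when x's past is added to y's own past."""
            yt = y[p:]
            restricted = residual_variance(yt, lagged(y, p))
            full = residual_variance(yt, np.hstack([lagged(y, p), lagged(x, p)]))
            return restricted - full

        rng = np.random.default_rng(1)
        x = rng.standard_normal(2000)
        y = np.zeros_like(x)
        for t in range(1, len(x)):                     # x drives y with a one-step lag
            y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 1] + rng.standard_normal()
        print("unnormalized GC x -> y:", round(unnormalized_gc(x, y), 3))
        print("unnormalized GC y -> x:", round(unnormalized_gc(y, x), 3))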

    Vi(E)va LLM! A Conceptual Stack for Evaluating and Interpreting Generative AI-based Visualizations

    The automatic generation of visualizations is a long-standing task that, through the years, has attracted more and more interest from the research and practitioner communities. Recently, large language models (LLMs) have become an interesting option for supporting generative tasks related to visualization, demonstrating initial promising results. At the same time, several pitfalls, like the multiple ways of instructing an LLM to generate the desired result, the different perspectives guiding the generation (code-based, image-based, grammar-based), and the presence of hallucinations even in the visualization generation task, make their usage less straightforward than expected. Following similar initiatives for benchmarking LLMs, this paper addresses the problem of modeling the evaluation of a generated visualization through an LLM. We propose a theoretical evaluation stack, EvaLLM, that decomposes the evaluation effort into its atomic components, characterizes their nature, and provides an overview of how to implement and interpret them. We also designed and implemented an evaluation platform that provides a benchmarking resource for the visualization generation task. The platform supports automatic and manual scoring conducted by multiple assessors to enable a fine-grained and semantic evaluation based on the EvaLLM stack. Two case studies on the GPT-3.5-turbo with Code Interpreter and Llama-2-70b models show the benefits of EvaLLM and illustrate interesting results on the current state of the art in LLM-generated visualizations.
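
    As a purely hypothetical illustration of what one low-level automatic check in such a pipeline might look like (this is not the EvaLLM platform or its scoring interface), the Python snippet below verifies that an LLM-generated matplotlib fragment parses and mentions the expected data columns and chart call:

        import ast

        def automatic_checks(generated_code, expected_columns, expected_call="bar"):
            """Toy per-criterion checks on a generated visualization snippet; names are hypothetical."""
            results = {f"uses column '{c}'": c in generated_code for c in expected_columns}
            results[f"calls .{expected_call}()"] = f".{expected_call}(" in generated_code
            try:
                ast.parse(generated_code)              # at least syntactically valid Python
                results["parses"] = True
            except SyntaxError:
                results["parses"] = False
            return results

        snippet = "import matplotlib.pyplot as plt\nplt.bar(df['country'], df['gdp'])\nplt.show()"
        for check, passed in automatic_checks(snippet, ["country", "gdp"]).items():
            print(f"{check}: {'pass' if passed else 'fail'}")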

    Bootstrapping DSGE models

    This paper explores the potential of bootstrap methods in the empirical evaluation of dynamic stochastic general equilibrium (DSGE) models and, more generally, of linear rational expectations models featuring unobservable (latent) components. We consider two dimensions. First, we provide mild regularity conditions that suffice for the bootstrap Quasi-Maximum Likelihood (QML) estimator of the structural parameters to mimic the asymptotic distribution of the QML estimator. Consistency of the bootstrap makes it possible to keep the probability of false rejections of the cross-equation restrictions under control. Second, we show that the realizations of the bootstrap estimator of the structural parameters can be constructively used to build novel, computationally straightforward tests for model misspecification, including the case of weak identification. In particular, we show that under strong identification and bootstrap consistency, a test statistic based on a set of realizations of the bootstrap QML estimator approximates the Gaussian distribution. Instead, when the regularity conditions for inference do not hold, as happens e.g. when (part of) the structural parameters are weakly identified, the above result is no longer valid. We can therefore evaluate how close or distant the estimated model is from the case of strong identification. Our Monte Carlo experiments suggest that the bootstrap plays an important role along both dimensions and represents a promising tool for evaluating the cross-equation restrictions and, under certain conditions, the strength of identification. An empirical illustration based on a small-scale DSGE model estimated on U.S. quarterly observations shows the practical usefulness of our approach.
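
    The core bootstrap idea, stripped of the DSGE state-space machinery, can be sketched in Python on a toy AR(1) model: estimate by (Gaussian quasi-) maximum likelihood, resample recentred residuals, re-estimate on each bootstrap sample, and compare the bootstrap spread of the estimator with its asymptotic counterpart. The toy model, sample size and number of replications are illustrative assumptions; the paper's regularity conditions and misspecification tests for the DSGE/QML setting are not reproduced here.

        import numpy as np

        rng = np.random.default_rng(42)

        def estimate_ar1(y):
            """OLS = Gaussian QML estimate of rho in y_t = rho * y_{t-1} + eps_t."""
            return float(np.dot(y[:-1], y[1:]) / np.dot(y[:-1], y[:-1]))

        def bootstrap_ar1(y, B=999):
            """Residual-based bootstrap: resample recentred residuals and re-estimate rho B times."""
            rho_hat = estimate_ar1(y)
            resid = y[1:] - rho_hat * y[:-1]
            resid = resid - resid.mean()
            draws = np.empty(B)
            for b in range(B):
                eps = rng.choice(resid, size=len(y) - 1, replace=True)
                y_star = np.empty_like(y)
                y_star[0] = y[0]
                for t in range(1, len(y)):
                    y_star[t] = rho_hat * y_star[t - 1] + eps[t - 1]
                draws[b] = estimate_ar1(y_star)
            return rho_hat, draws

        y = np.zeros(400)
        for t in range(1, len(y)):                     # simulate an AR(1) with rho = 0.7
            y[t] = 0.7 * y[t - 1] + rng.standard_normal()
        rho_hat, draws = bootstrap_ar1(y)
        print(f"QML estimate: {rho_hat:.3f}, bootstrap s.e.: {draws.std():.3f}, "
              f"asymptotic s.e.: {np.sqrt((1 - rho_hat ** 2) / len(y)):.3f}")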

    An identification and testing strategy for proxy-SVARs with weak proxies

    When the proxies (external instruments) used to identify target structural shocks are weak, inference in proxy-SVARs (SVAR-IVs) is nonstandard, and the construction of asymptotically valid confidence sets for the impulse responses of interest requires weak-instrument robust methods. In the presence of multiple target shocks, test inversion techniques require extra restrictions on the proxy-SVAR parameters, other than those implied by the proxies, that may be difficult to interpret and test. We show that frequentist asymptotic inference in these situations can be conducted through Minimum Distance estimation and standard asymptotic methods if the proxy-SVAR can be identified by using 'strong' instruments for the non-target shocks, i.e. the shocks which are not of primary interest in the analysis. The suggested identification strategy hinges on a novel pre-test for the null of instrument relevance based on bootstrap resampling, which is not subject to pre-testing issues in the sense that the validity of post-test asymptotic inferences is not affected by the outcome of the test. The test is robust to conditional heteroskedasticity and/or zero-censored proxies, is computationally straightforward, and is applicable regardless of the number of shocks being instrumented. Some illustrative examples show the empirical usefulness of the suggested identification and testing strategy.
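
    To fix ideas, a stylized relevance check for a single proxy and a single reduced-form residual can be sketched as follows: test whether their sample covariance is distinguishable from zero, using a bootstrap approximation of the null distribution. The statistic, the iid resampling scheme and the toy data-generating process below are illustrative assumptions and are much simpler than the pre-test proposed in the paper, which also handles conditional heteroskedasticity, zero-censored proxies and multiple instrumented shocks.

        import numpy as np

        rng = np.random.default_rng(7)

        def bootstrap_relevance_pvalue(z, u, B=1999):
            """Bootstrap p-value for H0: Cov(z, u) = 0, with z and u already demeaned."""
            stat = np.abs(np.mean(z * u))              # |sample covariance|
            products = z * u - np.mean(z * u)          # recentre the products to impose the null
            null_stats = np.empty(B)
            for b in range(B):
                resampled = rng.choice(products, size=len(products), replace=True)
                null_stats[b] = np.abs(np.mean(resampled))
            return float(np.mean(null_stats >= stat))

        # Toy data: a structural shock e drives the reduced-form residual and, with varying strength, the proxy.
        T = 500
        e = rng.standard_normal(T)
        u = e + 0.5 * rng.standard_normal(T)
        proxies = {"strong proxy": 0.8 * e + rng.standard_normal(T),
                   "weak proxy": 0.05 * e + rng.standard_normal(T)}
        for name, z in proxies.items():
            pval = bootstrap_relevance_pvalue(z - z.mean(), u - u.mean())
            print(f"{name}: p-value = {pval:.3f}")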