71 research outputs found

    Invariant and metric free proximities for data matching: an R package

    Data matching is a typical statistical problem in non-experimental and/or observational studies or, more generally, in cross-sectional studies in which one or more data sets are to be compared. Several methods are available in the literature, most of which are based on a particular metric or on statistical models, either parametric or nonparametric. In this paper we present two methods to calculate a proximity that is invariant under monotonic transformations. These methods require at most the notion of ordering. Open-source software in the form of an R package is also presented.
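
    The invariance is easy to see in a short sketch. The following base R example (a hypothetical rank-based proximity, not necessarily one of the paper's two methods) uses only the ordering of the data, so any monotonic transformation of the variables leaves the result unchanged.

    # A hypothetical rank-based proximity (illustration only): ranks depend
    # only on the ordering of the data, so the result is unchanged by any
    # monotonic transformation of the variables.
    rank_proximity <- function(X) {
      X <- as.matrix(X)
      n <- nrow(X)
      # rescale the ranks of each column to [0, 1]
      R <- apply(X, 2, function(x) (rank(x) - 1) / (n - 1))
      # proximity = 1 - average absolute rank distance across variables
      1 - as.matrix(dist(R, method = "manhattan")) / ncol(X)
    }

    set.seed(1)
    X  <- data.frame(income = rlnorm(20), age = rnorm(20, 40, 10))
    P1 <- rank_proximity(X)
    P2 <- rank_proximity(data.frame(log(X$income), X$age^3))  # monotonic transforms
    all.equal(P1, P2)  # TRUE: the proximity uses only the ordering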

    Does European Monetary Union make inflation dynamics more uniform?

    Using a nonparametric method to characterize Markovian operators, we describe the evolution of the short-run inflation processes among the EMU countries between 1996 and 2012. While a progressive clustering pattern emerges in the first half of the period, showing that the monetary union makes price dynamics more homogeneous, from 2004 onward an increase in price volatility makes the clustering pattern unstable, as the analysis of the change points of the inflation processes confirms.
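
    As a rough sketch of what a nonparametric characterization of a Markovian operator can look like in practice (the paper's estimator is more elaborate; the synthetic series and tercile-based states below are purely illustrative), one can discretize an inflation series and estimate its transition matrix from observed transition counts.

    # Simplified sketch (assumptions: a single inflation series `infl`, states
    # defined by empirical terciles; the paper's nonparametric estimator of the
    # Markovian operator is more elaborate than this).
    estimate_transition <- function(infl, n_states = 3) {
      breaks <- quantile(infl, probs = seq(0, 1, length.out = n_states + 1))
      state  <- cut(infl, breaks = breaks, include.lowest = TRUE, labels = FALSE)
      # count observed transitions state[t] -> state[t + 1], then normalize rows
      trans  <- table(from = state[-length(state)], to = state[-1])
      prop.table(trans, margin = 1)
    }

    set.seed(2)
    infl <- 2 + 0.3 * arima.sim(model = list(ar = 0.6), n = 200)  # synthetic series
    round(estimate_transition(infl), 2)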

    cem: Software for Coarsened Exact Matching

    This program is designed to improve causal inference via a method of matching that is widely applicable in observational data and easy to understand and use (if you understand how to draw a histogram, you will understand this method). The program implements the coarsened exact matching (CEM) algorithm, described below. CEM may be used alone or in combination with any existing matching method. This algorithm, and its statistical properties, are described in Iacus, King, and Porro (2008).
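
    The histogram analogy can be made concrete with a minimal base R sketch of the coarsen-then-exact-match idea; the cem package provides the actual implementation, and the function and data names below are illustrative only.

    # Minimal sketch of the coarsened-exact-matching idea in base R. The cem
    # package provides the real implementation; `coarsened_match` and the
    # example data below are purely illustrative.
    coarsened_match <- function(data, treat, covariates, breaks = 5) {
      # coarsen each covariate into bins, exactly as one would for a histogram
      bins    <- lapply(data[covariates], function(x) cut(x, breaks = breaks))
      stratum <- interaction(bins, drop = TRUE)
      # retain only strata that contain both treated and control units
      counts  <- table(stratum, data[[treat]])
      keep    <- stratum %in% rownames(counts)[rowSums(counts > 0) == 2]
      data.frame(data, stratum = stratum, matched = keep)
    }

    set.seed(3)
    d <- data.frame(treated = rbinom(200, 1, 0.4),
                    age     = rnorm(200, 40, 12),
                    income  = rlnorm(200, 10, 0.5))
    m <- coarsened_match(d, treat = "treated", covariates = c("age", "income"))
    table(matched = m$matched, treated = m$treated)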

    Controlling for Selection Bias in Social Media Indicators through Official Statistics: a Proposal

    With the increase of social media usage, a huge new source of data has become available. Despite the enthusiasm linked to this revolution, one of the main outstanding criticisms in using these data is selection bias. Indeed, the reference population is unknown. Nevertheless, many studies show evidence that these data constitute a valuable source because they are more timely and possess higher spatial granularity. We propose to adjust statistics based on Twitter data by anchoring them to reliable official statistics through a weighted, space-time, small area estimation model. As a by-product, the proposed method also stabilizes the social media indicators, which is a welcome property for official statistics. The method can be applied whenever official statistics exist at the proper level of granularity and social media usage within the population is known. As an example, we adjust a subjective wellbeing indicator of “working conditions” in Italy and combine it with relevant official statistics. The weights depend on broadband coverage and the Twitter rate at the province level, while the analysis is performed at the regional level. The resulting statistics are then compared with survey statistics on the “quality of job” at the macro-economic regional level, showing evidence of similar paths.
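
    A heavily simplified, hypothetical sketch of the anchoring idea follows; the paper's model is a weighted, space-time small area estimation model, whereas this version only illustrates how a weight built from broadband coverage and the Twitter rate can shrink a social media indicator toward an official benchmark.

    # Hypothetical sketch of the anchoring idea only. The paper uses a weighted,
    # space-time small area estimation model; this simplified version merely
    # shows a coverage-based weight shrinking a social-media indicator toward
    # an official benchmark. All names and figures below are illustrative.
    adjust_indicator <- function(twitter_index, official_stat,
                                 broadband_coverage, twitter_rate) {
      # weight: how well the Twitter users cover the reference population
      w <- broadband_coverage * twitter_rate
      w <- w / max(w)
      w * twitter_index + (1 - w) * official_stat
    }

    provinces <- data.frame(
      twitter_index      = c(0.62, 0.55, 0.70),  # e.g. share of positive tweets
      official_stat      = c(0.58, 0.60, 0.64),  # survey-based benchmark
      broadband_coverage = c(0.90, 0.70, 0.95),
      twitter_rate       = c(0.40, 0.25, 0.50)
    )
    with(provinces, adjust_indicator(twitter_index, official_stat,
                                     broadband_coverage, twitter_rate))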

    Random recursive partitioning: a matching method for the estimation of the average treatment effect

    In this paper we introduce the Random Recursive Partitioning (RRP) matching method. RRP generates a proximity matrix which might be useful in econometric applications like average treatment effect estimation. RRP is a Monte Carlo method that randomly generates non-empty recursive partitions of the data and evaluates the proximity between two observations as the empirical frequency with which they fall in the same cell of these random partitions over all Monte Carlo replications. From the proximity matrix it is possible to derive both graphical and analytical tools to evaluate the extent of the common support between data sets. The RRP method is “honest” in that it does not match observations “at any cost”: if data sets are separated, the method clearly states it.
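
    The proximity just described can be sketched compactly in base R. This is an illustration of the co-occurrence idea only, not the rrp package, and it omits details of the paper's partitioning scheme (e.g. categorical covariates).

    # Compact sketch of the RRP idea (illustration only).
    random_partition <- function(X, min_cell = 5) {
      n <- nrow(X)
      cell   <- rep(1L, n)
      active <- 1L                                        # cells eligible for splitting
      while (length(active) > 0) {
        b <- active[1]; active <- active[-1]
        idx <- which(cell == b)
        if (length(idx) <= min_cell) next
        v      <- sample(ncol(X), 1)                      # random splitting variable
        cut_at <- runif(1, min(X[idx, v]), max(X[idx, v]))  # random split point
        right  <- idx[X[idx, v] > cut_at]
        if (length(right) == 0 || length(right) == length(idx)) next
        new_id <- max(cell) + 1L
        cell[right] <- new_id                             # split the cell in two
        active <- c(active, b, new_id)
      }
      cell
    }

    rrp_proximity <- function(X, R = 200) {
      X <- as.matrix(X); n <- nrow(X)
      P <- matrix(0, n, n)
      for (r in seq_len(R)) {
        cell <- random_partition(X)
        P <- P + outer(cell, cell, "==")                  # same-cell indicator
      }
      P / R                                               # empirical co-occurrence frequency
    }

    set.seed(4)
    X <- data.frame(x1 = rnorm(30), x2 = runif(30))
    round(rrp_proximity(X)[1:4, 1:4], 2)                  # proximities in [0, 1]; diagonal is 1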

    Numerical Analysis of Volatility Change Point Estimators for Discretely Sampled Stochastic Differential Equations

    In this paper, we review recent advances on change point estimation for the volatility component of stochastic differential equations under different discrete sampling schemes. We consider both ergodic and non-ergodic cases, and present a Monte Carlo study on the change point estimator to compare the three methods under different setups. © 2010 The Authors. Economic Notes © 2010 Banca Monte dei Paschi di Siena SpA.
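
    A minimal sketch of one of the simplest estimators of this kind, assuming a single change in the diffusion coefficient and a path observed on a fine grid, is a least-squares estimator applied to the squared increments; the estimators reviewed in the paper are more refined.

    # Illustrative sketch under simple assumptions: a path of dX_t = sigma(t) dW_t
    # observed on a fine grid, with a single change in sigma. A least-squares
    # estimator on the squared increments is one of the simplest variants of the
    # estimators reviewed in the paper.
    volatility_change_point <- function(x) {
      d2 <- diff(x)^2                       # squared increments, proxies for sigma^2 * dt
      n  <- length(d2)
      rss <- sapply(2:(n - 1), function(k) {
        sum((d2[1:k] - mean(d2[1:k]))^2) +
          sum((d2[(k + 1):n] - mean(d2[(k + 1):n]))^2)
      })
      which.min(rss) + 1                    # candidate index minimizing the residual sum
    }

    set.seed(5)
    n  <- 1000; dt <- 1 / n
    sigma <- c(rep(1, 600), rep(2, 400))    # volatility doubles after observation 600
    x <- cumsum(sigma * sqrt(dt) * rnorm(n))
    volatility_change_point(x)              # should be close to 600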

    Estimation for the discretely observed telegraph process

    • …