Estimating reliability coefficients with heterogeneous item weightings using Stata: A factor based approach
We show how to estimate a Cronbach's alpha reliability coefficient in Stata after running a principal component or factor analysis. Alpha evaluates the extent to which items measure the same underlying content when the items are combined into a scale or used as a latent variable. Stata allows testing the reliability coefficient (alpha) of a scale only when all items receive homogeneous weights. We present a user-written program that computes reliability coefficients when a principal component or factor analysis yields heterogeneous item loadings. We use data on management practices from Bloom and Van Reenen (2010) to explain how to implement and interpret the adjusted internal consistency measure using afa. Keywords: afa, factor analysis, Cronbach's alpha, reliability, heterogeneous scale construction, latent variable
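The idea of an alpha-type coefficient with heterogeneous item weights can be sketched in Python. This is only an illustrative generalization of the classical Cronbach's alpha formula, in which each item is scaled by its loading before computing variances; it is not the afa program itself, and the function name and simulated data are assumptions.

```python
import numpy as np

def weighted_alpha(X, w):
    """Cronbach's alpha generalized to heterogeneous item weights.

    X : (n_obs, k) matrix of item scores
    w : (k,) item weights (e.g. factor loadings); equal weights
        reduce this to the classical Cronbach's alpha.
    """
    X = np.asarray(X, dtype=float)
    w = np.asarray(w, dtype=float)
    k = X.shape[1]
    item_vars = (w ** 2) * X.var(axis=0, ddof=1)   # variances of the weighted items
    scale_var = (X * w).sum(axis=1).var(ddof=1)    # variance of the weighted sum score
    return k / (k - 1) * (1 - item_vars.sum() / scale_var)

# Simulated items sharing one latent factor (illustrative data only)
rng = np.random.default_rng(0)
common = rng.normal(size=(500, 1))
items = common + rng.normal(scale=0.8, size=(500, 4))
print(round(weighted_alpha(items, np.ones(4)), 2))  # equal weights: classical alpha
```

With equal weights the function reproduces the textbook alpha; with loadings from a principal component or factor analysis it weights each item's contribution accordingly.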
The stimulative effect of an unconditional block grant on the decentralized provision of care
Understanding the impact of central government grants on decentralized healthcare provision is of crucial importance for the design of grant systems, yet empirical evidence on the prevalence of flypaper effects in this domain is rare. We study the decentralization of home care in the Netherlands and exploit the gradual introduction of formula-based equalization to identify the effect of exogenous changes in an unconditional block grant on local expenditure and utilization. A one-euro increase in central government grants raises local expenditure by twenty to fifty cents. Adjustments occur both through the number of hours and through substitution between basic and more advanced types of assistance. These findings suggest that conditioning of grants is not required for the central government to retain a moderate degree of control over the decentralized provision of care.
Gravity Models of Trade-based Money Laundering
Several attempts have been made in the economics literature to measure money laundering. However, the adequacy of these models is difficult to assess, as money laundering takes place secretly and, hence, goes unobserved. An exception is trade-based money laundering (TBML), a special form of trade abuse that has been discovered only recently. TBML refers to criminal proceeds that are transferred around the world using fake invoices that under- or overvalue imports and exports. This article is a first attempt to test the well-known prototype models proposed by Walker and Unger to predict illicit money laundering flows and to apply traditional gravity models borrowed from international trade theory. To do so, we use a dataset from Zdanowicz of TBML flows from the US to 199 countries. Our test rejects the specifications of the Walker and Unger prototype models, at least for TBML. The traditional gravity model that we present here can indeed explain TBML flows worldwide in a plausible manner. An important determinant is licit trade, the mass in which TBML is hidden. Furthermore, our results suggest that criminals use TBML in order to escape the stricter anti-money laundering regulations of financial markets. Keywords: money laundering, international trade, gravity model, Walker model.
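Gravity models of this kind are typically estimated in log-linear form, with flows rising in the economic mass of origin and destination and falling in distance. As a hedged sketch, not the authors' actual specification, the following Python example fits such an equation by ordinary least squares on synthetic data with known elasticities:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # hypothetical country pairs
log_gdp_o = rng.normal(10, 1, n)          # log GDP, origin country
log_gdp_d = rng.normal(10, 1, n)          # log GDP, destination country
log_dist = rng.normal(8, 0.5, n)          # log bilateral distance

# Simulate flows from a known gravity process (elasticities 1, 1, -1)
log_flow = log_gdp_o + log_gdp_d - log_dist + rng.normal(0, 0.1, n)

# OLS on the log-linear gravity equation:
# log(flow) = b0 + b1*log(GDP_o) + b2*log(GDP_d) + b3*log(dist)
X = np.column_stack([np.ones(n), log_gdp_o, log_gdp_d, log_dist])
beta, *_ = np.linalg.lstsq(X, log_flow, rcond=None)
print(beta.round(2))  # slope estimates should be close to 1, 1, -1
```

In the TBML application, the dependent variable would be the estimated illicit flow for each country pair, and licit bilateral trade would enter as an additional mass variable.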
Consequences of abolishing the ozb (property tax) for users of homes on the other municipal levies
Improved imputation quality of low-frequency and rare variants in European samples using the ‘Genome of The Netherlands'
Although genome-wide association studies (GWAS) have identified many common variants associated with complex traits, low-frequency and rare variants have not been interrogated in a comprehensive manner. Imputation from dense reference panels, such as the 1000 Genomes Project (1000G), enables testing of ungenotyped variants for association. Here we present the results of imputation using a large, new population-specific panel: the Genome of The Netherlands (GoNL). We benchmarked the performance of the 1000G and GoNL reference sets by comparing imputed genotypes with ‘true' genotypes typed on ImmunoChip in three European populations (Dutch, British, and Italian). GoNL showed significant improvement in the imputation quality for rare variants (MAF 0.05–0.5%) compared with 1000G. In Dutch samples, the mean observed Pearson correlation, r2, increased from 0.61 to 0.71. We also saw improved imputation accuracy for other European populations (in the British samples, r2 improved from 0.58 to 0.65, and in the Italians from 0.43 to 0.47). A combined reference set comprising 1000G and GoNL improved the imputation of rare variants even further. The Italian samples benefitted the most from this combined reference (the mean r2 increased from 0.47 to 0.50). We conclude that the creation of a large population-specific reference is advantageous for imputing rare variants and that a combined reference panel across multiple populations yields the best imputation results.
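The imputation-quality metric reported above is the squared Pearson correlation between imputed allele dosages and chip-typed genotypes at a variant. A minimal Python sketch, using simulated data rather than the study's genotypes, shows how such an r2 is computed:

```python
import numpy as np

def imputation_r2(true_genotypes, imputed_dosages):
    """Squared Pearson correlation between true genotype counts (0/1/2)
    and imputed allele dosages for one variant across samples."""
    r = np.corrcoef(true_genotypes, imputed_dosages)[0, 1]
    return r ** 2

# Simulated variant: chip-typed genotypes and imperfect imputed dosages
rng = np.random.default_rng(2)
true_g = rng.integers(0, 3, size=1000)            # genotypes from the chip
noisy = true_g + rng.normal(0, 0.6, size=1000)    # imputation adds noise
dosage = np.clip(noisy, 0, 2)                     # dosages lie in [0, 2]
print(round(imputation_r2(true_g, dosage), 2))
```

Perfect imputation gives r2 = 1; a larger or better-matched reference panel shows up as a higher mean r2 across rare variants, exactly the comparison the abstract reports.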