Monetary Policy Transparency: Lessons from Germany and the Eurozone
The conduct of monetary policy emphasises institutional arrangements that make monetary policy decision-making more "transparent". Judged by these institutional features, neither the Bundesbank nor the ECB scores very highly. We test for (i) agents' average ability to anticipate policy rate changes under the Bundesbank and the ECB and (ii) agents' forecasting unanimity of money market rates. Rising forecasting uncertainty may be due either to a lack of ECB transparency or to larger inflation and growth forecasting errors. Our results indicate that inflation forecast spreads widened amongst private agents and that inflation forecasting uncertainty increased the forecasting spread of money market rates.
Keywords: transparency, yield curve, forecasting uncertainty, Bundesbank, ECB
Testing the New Keynesian Phillips Curve Without Assuming Identification
We re-examine the evidence on the new Phillips curve model of Gali and Gertler (Journal of Monetary Economics 1999) using the conditional score test of Kleibergen (Econometrica 2005), which is robust to weak identification. In contrast to earlier studies, we find that US postwar data are consistent both with the view that inflation dynamics are forward-looking, and with the opposite view that they are predominantly backward-looking. Moreover, the labor share does not appear to be a relevant determinant of inflation. We show that this is an important factor contributing to the weak identification of the Phillips curve.
The effect of rare variants on inflation of the test statistics in case-control analyses.
BACKGROUND: The detection of bias due to cryptic population structure is an important step in the evaluation of findings of genetic association studies. The standard method of measuring this bias in a genetic association study is to compare the observed median association test statistic to the expected median test statistic. This ratio is inflated in the presence of cryptic population structure. However, inflation may also be caused by the properties of the association test itself, particularly in the analysis of rare variants. We compared the properties of the three most commonly used association tests, the likelihood ratio test, the Wald test, and the score test, when testing rare variants for association using simulated data. RESULTS: We found evidence of inflation in the median test statistics of the likelihood ratio and score tests for tests of variants with fewer than 20 heterozygotes across the sample, regardless of the total sample size. The test statistics for the Wald test were under-inflated at the median for variants below the same minor allele frequency. CONCLUSIONS: In a genetic association study, if a substantial proportion of the genetic variants tested have rare minor allele frequencies, the properties of the association test may mask the presence or absence of bias due to population structure. The use of either the likelihood ratio test or the score test is likely to lead to inflation in the median test statistic in the absence of population structure. In contrast, the use of the Wald test is likely to result in under-inflation of the median test statistic, which may mask the presence of population structure. This work was supported by a grant from Cancer Research UK (C490/A16561). AP is funded by a Medical Research Council studentship. This is the final published version; it first appeared at http://dx.doi.org/10.1186%2Fs12859-015-0496-1
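The ratio described in this abstract, the observed median association test statistic over its expected value under the null, is commonly reported as the genomic inflation factor. A minimal sketch of that computation in Python (the simulation, function name, and sample size are illustrative assumptions, not code from the paper):

```python
import numpy as np
from scipy.stats import chi2

def genomic_inflation_factor(test_stats):
    """Ratio of the observed median 1-df chi-square statistic to the
    median expected under the null, chi2.ppf(0.5, 1) ~ 0.4549."""
    return np.median(test_stats) / chi2.ppf(0.5, df=1)

# Under the null with no population structure, the factor should be ~1;
# values well above 1 suggest structure (or, per the paper, test artifacts).
rng = np.random.default_rng(0)
null_stats = rng.chisquare(df=1, size=100_000)
print(genomic_inflation_factor(null_stats))
```

The paper's point is that for rare variants this diagnostic can read above or below 1 even without structure, depending on which of the three tests produced the statistics.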
Self-Monitoring Assessments for Educational Accountability Systems
Test-based accountability is now the cornerstone of U.S. education policy, and it is becoming more important in many other nations as well. Educators sometimes respond to test-based accountability in ways that produce score inflation. In the past, score inflation has usually been evaluated by comparing trends in scores on a high-stakes test to trends on a lower-stakes audit test. However, separate audit tests are often unavailable, and their use has several important drawbacks, such as potential bias from motivational differences. As an alternative, we propose self-monitoring assessments (SMAs) that incorporate audit components into operational high-stakes assessments. This paper provides a framework for designing SMAs. It describes five specific SMA designs that could be incorporated into the non-equivalent groups anchor test linking approaches used by most large-scale assessments and discusses analytical issues that would arise in their use.
THE EFFECT OF CREDIT RISK, RISK MINIMISATION, GROSS DOMESTIC PRODUCT GROWTH, AND INFLATION ON THE NET INTEREST INCOME OF BANKING COMPANIES LISTED ON THE INDONESIA STOCK EXCHANGE, 2013-2016
The aim of this research is to examine empirically the impact of credit risk, risk aversion, gross domestic product (GDP) growth, and inflation on the Net Interest Margin (NIM) of banking companies listed on the Indonesia Stock Exchange (BEI) over 2013-2016. Credit risk is proxied by the NPL (Non-Performing Loan) ratio and risk aversion by the CAR (Capital Adequacy Ratio). This is causal research, seeking the causal relationship between the independent and dependent variables. The population comprises 42 banking companies; purposive sampling based on the stated criteria yielded 41 companies. Hypotheses were tested with panel data regression, preceded by classical assumption tests. Partial t-tests show that NPL has a t-statistic of 1.4136 against a t-table value of 1.290 at alpha = 10%, so NPL has a positive impact on NIM. CAR has a t-statistic of -0.2698 (t-table 1.290, alpha 10%), so CAR has no impact on NIM. GDP growth has a t-statistic of 2.9349 (t-table 1.290, alpha 10%), so GDP growth has a positive impact on NIM. Inflation has a t-statistic of -0.5184 (t-table 1.290, alpha 10%), so inflation has no impact on NIM.
The Woodcock Reading Mastery Test: Impact of Normative Changes
This study examined the magnitude of differences in standard scores, convergent validity, and concurrent validity when an individual's performance was gauged using the revised and the normative update (Woodcock, 1998) editions of the Woodcock Reading Mastery Test, in which the actual test items remained identical but norms were updated. From three metropolitan areas, 899 first to third grade students referred by their teachers for a reading intervention program participated. Results showed the inverse Flynn effect, indicating systematic inflation averaging 5 to 9 standard score points, regardless of gender, IQ, city site, or ethnicity, when calculated using the updated norms. Inflation was greater at lower raw score levels. Implications for using the updated norms for identifying children with reading disabilities and changing norms during an ongoing study are discussed.
Are boys discriminated in Swedish high schools?
Girls typically have higher grades than boys in school, and recent research suggests that part of this gender difference may be due to discrimination against boys. We rigorously test this in a field experiment where a random sample of the same tests in the Swedish language is subject to blind and non-blind grading. The non-blind test score is on average 15 % lower for boys than for girls. Blind grading lowers the average grades by 13 %, indicating that personal ties and/or grade inflation are important in non-blind grading. But we find no evidence of discrimination against boys. The point estimate of the discrimination effect is close to zero, with a 95 % confidence interval of ±4.5 % of the average non-blind grade.
Keywords: discrimination, field experiments, grading, education, gender
Propensity Score Adjustment in Measurement Invariance
Measurement invariance testing is a prerequisite when meaningful comparisons of latent constructs across groups are important to a study in social science. If measurement invariance is rejected, the non-invariance may stem from unbalanced covariates across groups. Propensity score adjustment is one approach to correcting unbalanced covariates in the data when they are the source of measurement non-invariance.
The main purpose of this dissertation is to evaluate propensity score adjustment for testing measurement invariance in both an empirical study and a Monte Carlo simulation study. Traditional logistic regression and a machine learning estimation method (random forest) were applied to obtain accurate propensity scores.
In the empirical study, when the propensity score was applied as a new covariate to adjust for unbalanced covariates across groups, measurement invariance improved from metric invariance to scalar invariance. The weighting-by-odds method with random forest estimation improved metric invariance to scalar invariance, but weighting with logistic regression did not.
The results of the simulation study indicated substantial Type I error rate inflation when the unbalanced covariates among groups were ignored and multiple-group CFA was used to conduct the measurement invariance test. Type I error rate inflation was also observed when logistic regression was applied for the adjustment. On the other hand, using the random forest estimation method to balance covariates across groups yielded accurate measurement invariance test conclusions.
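The weighting-by-odds adjustment mentioned above can be sketched generically: group-1 (focal) cases receive weight 1, while reference-group cases receive p/(1-p), where p is the estimated propensity of belonging to the focal group. The simulated data, variable names, and logistic-regression estimator below are illustrative assumptions, not the dissertation's code:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 2000
x = rng.normal(size=(n, 1))  # covariate unbalanced across groups
# Group membership depends on x, so raw group means of x differ.
group = (rng.random(n) < 1.0 / (1.0 + np.exp(-1.5 * x[:, 0]))).astype(int)

# Estimate propensity scores P(group = 1 | x) via logistic regression.
ps = LogisticRegression().fit(x, group).predict_proba(x)[:, 1]

# Weighting by odds: focal group keeps weight 1, reference gets p/(1-p).
weights = np.where(group == 1, 1.0, ps / (1.0 - ps))

def weighted_mean(values, w):
    return np.sum(values * w) / np.sum(w)

gap_before = abs(x[group == 1, 0].mean() - x[group == 0, 0].mean())
gap_after = abs(x[group == 1, 0].mean()
                - weighted_mean(x[group == 0, 0], weights[group == 0]))
# Weighting should shrink the covariate gap between groups.
print(gap_before, gap_after)
```

Checking covariate balance before and after weighting, as in the last lines, is the standard diagnostic before re-running the multiple-group CFA on the weighted sample.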
The "Teaching to the Test" Family of Fallacies
This article explains the various meanings and ambiguities of the phrase "teaching to the test" (TttT), describes its history and use as a pejorative, and outlines the policy implications of the popular, but fallacious, belief that "high stakes" testing induces TttT which, in turn, produces "test score inflation" or artificial test score gains. The history starts with the infamous "Lake Wobegon Effect" test score scandal in the US in the 1980s. John J. Cannell, a medical doctor, discovered that all US states administering national norm-referenced tests claimed their students' average scores exceeded the national average, a mathematical impossibility. Cannell blamed educator cheating and lax security for the test score inflation, but education insiders managed to convince many that high stakes were the cause, despite the fact that Cannell's tests had no stakes. Elevating the "high stakes cause TttT, which causes test score inflation" fallacy to dogma has served to divert attention from the endemic lax security of "internally administered" tests, which should have encouraged policy makers to require more external controls in test administrations. The fallacy is partly responsible for promoting the ruinous practice of test preparation drilling on test format and administering practice tests as a substitute for genuine subject matter preparation.
Finally, promoters of the fallacy have encouraged the practice of "auditing" allegedly untrustworthy high-stakes test score trends with score trends from allegedly trustworthy low-stakes tests, despite an abundance of evidence that low-stakes test scores are far less reliable, largely due to student disinterest.