On asymptotically optimal tests under loss of identifiability in semiparametric models
We consider tests of hypotheses when the parameters are not identifiable
under the null in semiparametric models, where regularity conditions for
profile likelihood theory fail. Exponential average tests based on integrated
profile likelihood are constructed and shown to be asymptotically optimal under
a weighted average power criterion with respect to a prior on the
nonidentifiable aspect of the model. These results extend existing results for
parametric models, which involve more restrictive assumptions on the form of
the alternative than do our results. Moreover, the proposed tests accommodate
models with infinite dimensional nuisance parameters which either may not be
identifiable or may not be estimable at the usual parametric rate. Examples
include tests of the presence of a change-point in the Cox model with current
status data and tests of regression parameters in odds-rate models with right
censored data. Optimal tests have not previously been studied for these
scenarios. We study the asymptotic distribution of the proposed tests under the
null, fixed contiguous alternatives and random contiguous alternatives. We also
propose a weighted bootstrap procedure for computing the critical values of the
test statistics. The optimal tests perform well in simulation studies, where
they may exhibit improved power over alternative tests.

Comment: Published at http://dx.doi.org/10.1214/08-AOS643 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org)
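The exponential average test described in this abstract can be sketched numerically: the statistic is the log of a prior-weighted average of the exponentiated profile log-likelihood ratio, taken over a grid of the nonidentifiable parameter (e.g. a change-point location). The grid, the uniform prior, and the `profile_llr` callable below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

def exponential_average_test(profile_llr, grid, weights=None):
    """Exponential average test statistic: log of the prior-weighted
    average of exp(profile log-likelihood ratio) over a grid of the
    parameter that is not identifiable under the null."""
    if weights is None:
        weights = np.full(len(grid), 1.0 / len(grid))  # uniform prior
    llr = np.array([profile_llr(g) for g in grid])
    m = llr.max()  # log-sum-exp trick for numerical stability
    return m + np.log(np.sum(weights * np.exp(llr - m)))
```

For instance, `profile_llr` could return the profile likelihood-ratio statistic at each candidate change-point; a constant profile of zero yields a statistic of zero.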
Group Analysis of Self-organizing Maps based on Functional MRI using Restricted Frechet Means
Studies of functional MRI data are increasingly concerned with the estimation
of differences in spatio-temporal networks across groups of subjects or
experimental conditions. Unsupervised clustering and independent component
analysis (ICA) have been used to identify such spatio-temporal networks. While
these approaches have been useful for estimating these networks at the
subject-level, comparisons over groups or experimental conditions require
further methodological development. In this paper, we tackle this problem by
showing how self-organizing maps (SOMs) can be compared within a Frechean
inferential framework. Here, we summarize the mean SOM in each group as a
Frechet mean with respect to a metric on the space of SOMs. We consider the use
of different metrics, and introduce two extensions of the classical sum of
minimum distance (SMD) between two SOMs, which take into account the
spatio-temporal pattern of the fMRI data. The validity of these methods is
illustrated on synthetic data. Through these simulations, we show that the
three metrics of interest behave as expected, in the sense that the ones
capturing temporal, spatial and spatio-temporal aspects of the SOMs are more
likely to reach significance under simulated scenarios characterized by
temporal, spatial and spatio-temporal differences, respectively. In addition, a
re-analysis of a classical experiment on visually-triggered emotions
demonstrates the usefulness of this methodology. In this study, the
multivariate functional patterns typical of the subjects exposed to pleasant
and unpleasant stimuli are found to be more similar than the ones of the
subjects exposed to emotionally neutral stimuli. Taken together, these results
indicate that our proposed methods can cast new light on existing data by
adopting a global analytical perspective on functional MRI paradigms.

Comment: 23 pages, 5 figures, 4 tables. Submitted to Neuroimag
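The two ingredients named in this abstract, a sum-of-minimum-distances (SMD) metric between SOMs and a Fréchet mean restricted to the sample, can be sketched as follows. Representing each SOM as an array of prototype vectors is a simplifying assumption; the paper's spatio-temporal extensions of SMD are not reproduced here:

```python
import numpy as np

def smd(A, B):
    """Symmetrized sum of minimum distances between two SOMs, each given
    as an (n_units, n_features) array of prototype vectors."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)
    return d.min(axis=1).sum() + d.min(axis=0).sum()

def restricted_frechet_mean(soms):
    """Index of the sample-restricted Frechet mean: the SOM minimizing
    the sum of squared SMDs to all SOMs in the sample."""
    costs = [sum(smd(s, t) ** 2 for t in soms) for s in soms]
    return int(np.argmin(costs))
```

Restricting the Fréchet mean to the sample sidesteps optimization over the full space of SOMs: only pairwise distances are needed.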
Testing for vector autoregressive dynamics under heteroskedasticity
In this paper we introduce a bootstrap procedure to test parameter restrictions in vector autoregressive models which is robust in cases of conditionally heteroskedastic error terms. The adopted wild bootstrap method does not require any parametric specification of the volatility process and takes contemporaneous error correlation implicitly into account. Via a Monte Carlo investigation, empirical size and power properties of the new method are illustrated. We compare the bootstrap approach with standard procedures that either ignore heteroskedasticity or adopt a heteroskedasticity-consistent estimation of the relevant covariance matrices in the spirit of the White correction. In terms of empirical size, the proposed method clearly outperforms competing approaches without paying any price in terms of size-adjusted power. We apply the alternative tests to investigate the potential of causal relationships linking daily prices of natural gas and crude oil. Unlike standard inference ignoring time-varying error variances, heteroskedasticity-consistent test procedures do not deliver any evidence in favor of short-run causality between the two series.

Keywords: energy markets; causality; bootstrap; heteroskedasticity; hypothesis testing; vector autoregression
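The wild bootstrap idea in this abstract can be sketched in a deliberately simplified setting: a bivariate VAR(1) without intercept, a single zero restriction as the parameter restriction, and the absolute coefficient as the test statistic. All of these simplifications are assumptions for illustration; the paper's models and statistics are richer. Rademacher weights flip the sign of each residual vector, preserving its second moments and contemporaneous correlation:

```python
import numpy as np

def var1_ols(y):
    """OLS for a VAR(1) y_t = A y_{t-1} + u_t (no intercept, for brevity)."""
    X, Y = y[:-1], y[1:]
    A = np.linalg.lstsq(X, Y, rcond=None)[0].T  # (k, k) coefficient matrix
    return A, Y - X @ A.T                        # coefficients, residuals

def wild_bootstrap_pvalue(y, B=499, seed=0):
    """Wild-bootstrap p-value for H0: A[0, 1] = 0 (the second variable
    does not enter the first equation), robust to conditional
    heteroskedasticity because residual signs are resampled in place."""
    rng = np.random.default_rng(seed)
    A, U = var1_ols(y)
    stat = abs(A[0, 1])
    A0 = A.copy()
    A0[0, 1] = 0.0                               # impose the null
    count = 0
    for _ in range(B):
        v = rng.choice([-1.0, 1.0], size=len(U))  # Rademacher weights
        Ub = U * v[:, None]                       # wild residuals
        yb = np.empty_like(y)
        yb[0] = y[0]
        for t in range(1, len(y)):
            yb[t] = A0 @ yb[t - 1] + Ub[t - 1]    # regenerate under H0
        Ab, _ = var1_ols(yb)
        count += abs(Ab[0, 1]) >= stat
    return (1 + count) / (1 + B)
```

Because each residual vector is multiplied by a single scalar weight, the cross-equation correlation of the errors is carried over into every bootstrap sample without modelling it.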
Bounding quantile demand functions using revealed preference inequalities
This paper develops a new technique for the estimation of consumer demand models with unobserved heterogeneity subject to revealed preference inequality restrictions. Particular attention is given to nonseparable heterogeneity. The inequality restrictions are used to identify bounds on quantile demand functions. A nonparametric estimator for these bounds is developed and asymptotic properties are derived. An empirical application using data from the U.K. Family Expenditure Survey illustrates the usefulness of the methods by deriving bounds and confidence sets for estimated quantile demand functions.
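The revealed preference inequality restrictions at the heart of this abstract can be illustrated with a consistency check: data on prices and chosen bundles satisfy the Generalized Axiom of Revealed Preference (GARP) when no revealed-preference cycle contains a strict step. The sketch below (a standard GARP check via a transitive closure, not the paper's estimator) shows the kind of inequality being imposed:

```python
import numpy as np

def satisfies_garp(P, Q):
    """True if price/quantity observations (rows of P and Q) satisfy GARP:
    no observation i revealed preferred to j while bundle q_i is strictly
    cheaper than q_j at prices p_j."""
    P, Q = np.asarray(P, float), np.asarray(Q, float)
    n = len(P)
    cost = P @ Q.T                       # cost[i, j] = p_i . q_j
    own = np.diag(cost)                  # expenditure at each observation
    R = own[:, None] >= cost - 1e-12     # direct revealed preference
    for k in range(n):                   # Warshall transitive closure
        R = R | (R[:, [k]] & R[[k], :])
    return not np.any(R & (own[None, :] > cost.T + 1e-12))
```

In the paper these inequalities are combined with quantile restrictions on unobserved heterogeneity to bound demand functions; the check above is only the deterministic core.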
Economies of Scale in the Tunisian Industries
To date, empirical investigations of trade liberalization under the conditions of increasing returns to scale (IRS) and imperfect competition (IC) have either assumed or imposed the market and productive structures necessary for such a model. However, none of the recent IRS/IC models used to simulate the effects of trade liberalization have empirically tested for the presence of increasing returns to scale prior to the analysis. With Tunisian data (1971-2004) and rigorous test procedures, we investigate evidence of IRS at the industry level. Using an econometric approach based on the estimation of the translog cost function and its associated cost share equations, we identify the sectors characterized by increasing returns to scale. Analysis of the results shows that specification of the model is sensitive to inclusion of a time trend representing technology. The model accounting for technology did not fit the data well for most sectors. The estimation results without time trend interactions are different: here most of the sectors show signs of increasing returns to scale.

Keywords: economies of scale, trade liberalization, new trade theory, Tunisian industries, cost functions
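The test of scale economies in this abstract rests on the cost-output elasticity of a translog cost function: returns to scale are increasing when the elasticity of cost with respect to output is below one. A minimal sketch, assuming a simplified translog with a single input price and no time trend (the paper's specification is richer and includes cost share equations):

```python
import numpy as np

def scale_elasticity(ln_c, ln_y, ln_p):
    """OLS fit of a simplified translog cost function
        ln C = a + b*ln y + 0.5*g*(ln y)^2 + d*ln p
    and the implied cost-output elasticity at the sample mean of ln y.
    Returns to scale are increasing when the elasticity is below one."""
    X = np.column_stack([np.ones_like(ln_y), ln_y, 0.5 * ln_y**2, ln_p])
    coef = np.linalg.lstsq(X, ln_c, rcond=None)[0]
    b, g = coef[1], coef[2]
    elas = b + g * ln_y.mean()   # d ln C / d ln y at mean output
    return elas, 1.0 / elas      # cost elasticity and RTS measure
```

An elasticity of 0.7, say, implies a returns-to-scale measure of about 1.43, i.e. increasing returns.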
Poolability and Aggregation Problems of Regional Innovation Data: An Application to Nanomaterial Patenting
Research and development (R&D) in the field of nanomaterials is expected to be a major driver of innovation and economic growth. In this respect, many countries, as national systems of innovation, have established support programs offering subsidies for industry- and government-funded R&D. Consequently, it is of great interest to understand which factors facilitate the creation of new technological knowledge. The existing literature has typically addressed this question by employing a knowledge production function based on firm-, regional- or even country-level data. Estimating the effects for the entire national system of innovation, however, implicitly assumes poolability of regional data. We apply our reasoning to Germany, which has well-known – and wide – regional disparities, for example between the former East and West. Based on analyses at the level of NUTS-3 regions, we find different knowledge production functions for the East and the West. Moreover, we investigate how our results are affected by the adoption of alternative aggregation levels. Our findings have implications for further research in the field, that is, a careful evaluation of poolability and aggregation is required before estimating knowledge production functions at the regional level. Policy considerations are offered as well.

Keywords: nanotechnology, patents, poolability, aggregation, Germany, spatial autocorrelation, spatial filtering
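The poolability question in this abstract is, in its simplest form, a test of whether one knowledge production function fits two groups of regions or whether the groups need separate coefficients. A Chow-type F test comparing a pooled OLS fit against separate East/West fits is one standard way to frame it (a sketch of that generic test, not the paper's spatial-filtering analysis):

```python
import numpy as np

def chow_test(X1, y1, X2, y2):
    """Chow F statistic for poolability: compares a pooled OLS fit with
    separate fits for two groups (e.g. East and West regions).  A large
    value is evidence against pooling the data."""
    def ssr(X, y):
        beta = np.linalg.lstsq(X, y, rcond=None)[0]
        r = y - X @ beta
        return r @ r
    k = X1.shape[1]
    Xp, yp = np.vstack([X1, X2]), np.concatenate([y1, y2])
    ssr_pooled = ssr(Xp, yp)
    ssr_unres = ssr(X1, y1) + ssr(X2, y2)
    df2 = len(yp) - 2 * k
    return ((ssr_pooled - ssr_unres) / k) / (ssr_unres / df2)
```

With log patent counts as `y` and log R&D inputs in `X`, rejecting pooling corresponds to the paper's finding of different knowledge production functions for East and West.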
Further results on weak-exogeneity in vector error correction models
This paper provides a necessary and sufficient condition for weak exogeneity in vector error correction models. An interesting property is that the statistics involved in the sequential procedure for testing this condition are distributed as χ² variables and can therefore easily be calculated with usual statistical computer packages, which makes our approach fully operational empirically. Finally, the power and size distortions of this sequential test procedure are analysed with Monte Carlo simulations.

Keywords: cointegration, canonical representation, weak exogeneity, power, size distortions
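A common way such χ²-distributed statistics arise is as Wald tests that the adjustment (loading) coefficients of one VECM equation are zero, so that the corresponding variable is weakly exogenous for the cointegrating parameters. The sketch below assumes the estimated coefficients and their covariance matrix are already available (e.g. from a VECM fit); it is an illustration of the generic Wald form, not the paper's sequential procedure:

```python
import numpy as np

CHI2_95 = {1: 3.841, 2: 5.991, 3: 7.815}  # 5% chi-square critical values

def weak_exogeneity_wald(alpha_row, cov_alpha):
    """Wald statistic for H0: the adjustment coefficients of one VECM
    equation are all zero (weak exogeneity of that variable).  Under H0
    the statistic is chi-square with df = number of cointegrating
    relations; the test rejects when it exceeds the 5% critical value."""
    a = np.atleast_1d(np.asarray(alpha_row, float))
    V = np.atleast_2d(np.asarray(cov_alpha, float))
    W = float(a @ np.linalg.solve(V, a))
    return W, W > CHI2_95[len(a)]
```

With one cointegrating relation, an estimated loading of 0.5 with variance 0.01 gives W = 25, a clear rejection of weak exogeneity.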