Invariant Tests Based on M-Estimators, Estimating Functions and the Generalized Method of Moments
In this paper, we study the invariance properties of various test criteria that have been proposed for hypothesis testing in the context of incompletely specified models, such as models formulated in terms of estimating functions (Godambe, 1960) or moment conditions and estimated by generalized method of moments (GMM) procedures (Hansen, 1982), and models estimated by pseudo-likelihood (Gourieroux, Monfort and Trognon, 1984) and M-estimation methods. The invariance properties considered include invariance to (possibly nonlinear) hypothesis reformulations and reparameterizations. The test statistics examined include Wald-type, LR-type, LM-type, score-type, and C(alpha)-type criteria. Extending the approach used in Dagenais and Dufour (1991), we first show that all these test statistics except the Wald-type ones are invariant to equivalent hypothesis reformulations (under usual regularity conditions), but that none of the five is generally invariant to model reparameterizations, including measurement unit changes in nonlinear models. In other words, testing two equivalent hypotheses in the context of equivalent models may lead to completely different inferences. For example, this may occur after an apparently innocuous rescaling of some model variables. Then, to avoid such undesirable properties, we study restrictions that can be imposed on the objective functions used for pseudo-likelihood (or M-) estimation, as well as on the structure of the test criteria used with estimating functions and GMM procedures, to obtain invariant tests. In particular, we show that using linear exponential pseudo-likelihood functions allows one to obtain invariant score-type and C(alpha)-type test criteria, while in the context of estimating function (or GMM) procedures it is possible to modify an LR-type statistic proposed by Newey and West (1987) to obtain a test statistic that is invariant to general reparameterizations. The invariance associated with linear exponential pseudo-likelihood functions is interpreted as a strong argument for using such pseudo-likelihood functions in empirical work.
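As a worked illustration of the Wald non-invariance discussed above (a standard textbook example, not drawn from the paper): for a scalar parameter with estimate $\hat\theta$ and estimated variance $\hat V$, the delta-method Wald statistic for $H_0: g(\theta)=0$ is
$$ W_g = \frac{g(\hat\theta)^2}{g'(\hat\theta)^2\,\hat V}. $$
Writing the same hypothesis as $\theta - 1 = 0$ or, equivalently, as $\theta^3 - 1 = 0$ gives
$$ W_1 = \frac{(\hat\theta-1)^2}{\hat V}, \qquad W_2 = \frac{(\hat\theta^3-1)^2}{9\hat\theta^4\,\hat V} = \left(\frac{\hat\theta^2+\hat\theta+1}{3\hat\theta^2}\right)^{\!2} W_1, $$
so $W_1 \neq W_2$ whenever $\hat\theta \neq 1$, even though the two null hypotheses are identical.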
Low-bid Auction Versus High-bid Auction For Siting Noxious Facilities in a Two-city Region: an Exact Approach
Two auctions have been proposed in the literature for siting noxious facilities: the high-bid and the low-bid auctions. In this paper, we pursue the analysis of these auctions by O'Sullivan [1993], who concludes that the high-bid auction has the edge over the low-bid auction. We point out that O'Sullivan used an approximation for the expected value of the compensation obtained with the high-bid auction, and we show how to obtain the exact value. We discuss a paradox linked with O'Sullivan's result, which qualifies his conclusions, and we show that with exact compensation the high-bid auction mechanism is indeed far superior to the low-bid auction. Keywords: noxious facility siting, NIMBY syndrome, auction scheme, Nash equilibrium, low-bid auction, high-bid auction.
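As a minimal numerical illustration of the exact-versus-approximate distinction (a hypothetical setup for illustration only, not O'Sullivan's model): if the compensation paid by a two-city high-bid auction were the larger of two independent uniform bids, its exact expected value follows from order statistics and can be checked by simulation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical setup (not O'Sullivan's model): two cities submit
    # independent U(0,1) compensation bids; the high-bid auction pays
    # the larger of the two bids.
    n_draws = 1_000_000
    bids = rng.uniform(size=(n_draws, 2))
    mc_estimate = bids.max(axis=1).mean()

    exact = 2.0 / 3.0  # E[max of n iid U(0,1)] = n / (n + 1), here n = 2
    print(f"Monte Carlo: {mc_estimate:.4f}, exact: {exact:.4f}")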
A framework to reconcile frequency scaling measurements, from intracellular recordings, local-field potentials, up to EEG and MEG signals
In this viewpoint article, we discuss the electric properties of the medium
around neurons, which are important to correctly interpret extracellular
potentials or electric field effects in neural tissue. We focus on how these
electric properties shape the frequency scaling of brain signals at different
scales, such as intracellular recordings, the local field potential (LFP), the
electroencephalogram (EEG) or the magnetoencephalogram (MEG). These signals
display frequency-scaling properties which are not consistent with resistive
media. The medium appears to exert a frequency filtering scaling as
$1/\sqrt{f}$, which is the typical frequency scaling of ionic diffusion. Such a
scaling was also found recently by impedance measurements in physiological
conditions. Ionic diffusion appears to be the only possible explanation to
reconcile these measurements and the frequency-scaling properties found in
different brain signals. However, other measurements suggest that the
extracellular medium is essentially resistive. To resolve this discrepancy, we
show new evidence that metal-electrode measurements can be perturbed by shunt
currents going through the surface of the brain. Such a shunt may explain the
contradictory measurements, and together with ionic diffusion, provides a
framework where all observations can be reconciled. Finally, we propose a
method to perform measurements that avoids shunting effects, thus enabling tests
of the predictions of this framework.
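As a minimal numerical sketch of the diffusive (Warburg-type) scaling discussed above — an illustration of the $1/\sqrt{f}$ signature, not the authors' analysis — the log-log slope of such an impedance magnitude is $-1/2$, whereas a purely resistive medium would give a flat spectrum:

    import numpy as np

    # Warburg-type (diffusive) impedance magnitude: |Z(f)| = k / sqrt(f).
    # A resistive medium would instead give a flat (slope 0) spectrum.
    freqs = np.logspace(0, 3, 200)   # 1 Hz to 1 kHz
    k = 1.0                          # arbitrary scale factor
    z_mag = k / np.sqrt(freqs)

    # Fit the slope in log-log coordinates; expect -0.5 for ionic diffusion.
    slope = np.polyfit(np.log10(freqs), np.log10(z_mag), 1)[0]
    print(f"log-log slope: {slope:.3f}")   # -> -0.500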
The Renewable Resource Management Nexus: Impulse versus Continuous Harvesting Policies
We explore the link between cyclical and smooth resource exploitation. We define an impulse control framework which can generate both cyclical solutions and steady state solutions. For the cyclical solutions [...].
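As a minimal sketch of the contrast between the two policy classes (hypothetical logistic stock dynamics, not the paper's model): continuous proportional harvesting drives the stock to a steady state, while impulse harvesting at fixed intervals produces a cyclical sawtooth trajectory.

    import numpy as np

    # Hypothetical illustration (not the paper's model): logistic stock
    # dynamics under (a) continuous proportional harvesting, which settles
    # to a steady state, and (b) impulse harvesting every `period` time
    # units, which produces a cyclical trajectory.
    r, K = 0.5, 1.0            # intrinsic growth rate, carrying capacity
    dt, horizon = 0.01, 60.0
    steps = int(horizon / dt)

    def simulate(effort=0.0, impulse_frac=0.0, period=np.inf):
        x, t_since = 0.5, 0.0
        path = np.empty(steps)
        for i in range(steps):
            # logistic growth minus continuous harvest
            x += dt * (r * x * (1 - x / K) - effort * x)
            t_since += dt
            if t_since >= period:      # impulse harvest: remove a fraction
                x *= (1 - impulse_frac)
                t_since = 0.0
            path[i] = x
        return path

    steady = simulate(effort=0.2)                    # converges to K*(1 - e/r) = 0.6
    cyclic = simulate(impulse_frac=0.3, period=5.0)  # cycles between harvests
    print(f"continuous harvest, final stock: {steady[-1]:.3f}")
    print(f"impulse harvest, stock range: "
          f"{cyclic[-1000:].min():.3f} to {cyclic[-1000:].max():.3f}")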
A newly conceived cylinder measuring machine and methods that eliminate the spindle errors
Advanced manufacturing processes require improving dimensional metrology applications to reach a nanometric accuracy level. Such measurements may be carried out using conventional highly accurate roundness measuring machines. On these machines, the metrology loop goes through the probing and the mechanical guiding elements. Hence, external forces, strain and thermal expansion are transmitted to the metrological structure through the supporting structure, thereby reducing measurement quality. The obtained measurement also combines both the motion error of the guiding system and the form error of the artifact. Detailed uncertainty budgeting might be improved using error separation methods (multi-step, reversal, multi-probe, etc.), enabling identification of the systematic (synchronous or repeatable) motion errors of the guiding system as well as the form error of the artifact. Nevertheless, the performance of this kind of machine is limited by the repeatability level of the mechanical guiding elements, which usually exceeds 25 nm (in the case of an air bearing spindle and a linear bearing). In order to guarantee a 5 nm measurement uncertainty level, LNE is currently developing an original machine dedicated to form measurement on cylindrical and spherical artifacts with an ultra-high level of accuracy. The architecture of this machine is based on the ‘dissociated metrological technique’ principle and contains reference probes and a reference cylinder. The form errors of both the cylindrical artifact and the reference cylinder are obtained by mathematically combining the information given by the probe sensing the artifact with the information given by the probe sensing the reference cylinder, using the modified multi-step separation method.
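As a minimal sketch of the classical reversal error-separation technique mentioned above (a generic textbook construction with synthetic data, not LNE's procedure): a first trace measures artifact form plus spindle motion error; after reversing the artifact and probe, the spindle error enters with opposite sign, so the two contributions can be separated exactly.

    import numpy as np

    # Donaldson reversal (classic error separation), with synthetic data.
    # Measurement 1: m1(theta) = form(theta) + spindle(theta)
    # Measurement 2 (part and probe reversed): m2(theta) = form(theta) - spindle(theta)
    theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
    form = 5e-9 * np.cos(3 * theta)      # artifact form error (3 lobes, 5 nm)
    spindle = 2e-9 * np.sin(2 * theta)   # spindle motion error (2 nm)

    m1 = form + spindle
    m2 = form - spindle

    form_est = (m1 + m2) / 2             # recovers the artifact form error
    spindle_est = (m1 - m2) / 2          # recovers the spindle error

    print(np.allclose(form_est, form), np.allclose(spindle_est, spindle))  # True True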
Concept and architecture of a new apparatus for cylindrical form measurement with a nanometric level of accuracy
In relation to industrial needs and to the progress of technology, the Laboratoire National de Métrologie et d’Essais (LNE) would like to improve the measurement of its primary pressure standards and of spherical and flick standards. The spherical and flick standards are, respectively, used to calibrate the spindle motion error and the probe which equip commercial conventional cylindricity-measuring machines. The primary pressure standards are obtained using pressure balances equipped with rotary pistons. To reach a relative uncertainty of 10⁻⁶ in the pressure measurement, it is necessary to know the diameters of both the piston and the cylinder with an uncertainty of 5 nm for a piston diameter of 10 mm. Conventional machines are not able to reach such an uncertainty level, which is why the development of a new machine is necessary. The purpose of this paper is to present the concepts and the architecture adopted in the development of the new equipment dedicated to cylindricity measurement at a nanometric level of accuracy. The choice of these concepts is based on the analysis of the uncertainty sources encountered in conventional architectures. The architecture of the new ultra-high accuracy equipment, as well as the associated calibration procedures, will be described and detailed.
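The 5 nm requirement is consistent with simple uncertainty propagation, assuming the effective area of the piston-cylinder assembly scales with the square of the diameter (a standard pressure-balance relation; the numbers are the abstract's own):
$$ p = \frac{F}{A_{\mathrm{eff}}}, \quad A_{\mathrm{eff}} \propto d^2 \;\Rightarrow\; \frac{u(p)}{p} \approx 2\,\frac{u(d)}{d} = 2 \times \frac{5\ \mathrm{nm}}{10\ \mathrm{mm}} = 10^{-6}. $$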
Numeration systems and fractal functions related to substitutions
Let $A$ be a finite alphabet, $\sigma$ a substitution over $A$, $(u_n)_{n \in \mathbb{N}}$ a fixed point of $\sigma$, and, for each $a \in A$, $f(a)$ a real number. We establish, under some assumptions, an asymptotic formula for the sum $S_f(N) = \sum_{i \leq N} f(u_i)$, $N \in \mathbb{N}$. This result generalizes previous results of Coquet and of Brillhart, Erdős, and Morton. Moreover, relations with self-affine functions (in a sense which generalizes a definition of Kamae) are proved. The calculations rely on systems of representation of integers and of real numbers.
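As a concrete illustration of the objects in this abstract (a standard example, not the paper's construction): the Thue–Morse word is the fixed point of the substitution $\sigma(a) = ab$, $\sigma(b) = ba$, and the partial sums $S_f(N)$ can be computed directly.

    # Illustration of the abstract's objects with the Thue-Morse substitution:
    # sigma(a) = ab, sigma(b) = ba; its fixed point starting from 'a' is the
    # Thue-Morse word.  With f(a) = 1, f(b) = -1 we compute S_f(N).
    sigma = {"a": "ab", "b": "ba"}
    f = {"a": 1.0, "b": -1.0}

    word = "a"
    while len(word) < 1024:                  # iterate the substitution
        word = "".join(sigma[c] for c in word)

    def S_f(N):
        """Partial sum S_f(N) = sum_{i <= N} f(u_i) over the fixed point."""
        return sum(f[c] for c in word[:N])

    print([S_f(N) for N in (1, 2, 4, 8, 16, 32)])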