
    "Real Exchange Rates and the International Mobility of Capital"

    This paper demonstrates that the terms of trade are determined by the equalization of profit rates across international regulating capitals, for socially determined national real wages. This provides a classical/Marxian basis for the explanation of real exchange rates, grounded in the same principle of absolute cost advantage that governs national prices. Large international flows of direct investment are not necessary for this result, since the international mobility of financial capital is sufficient. Such a determination of the terms of trade implies that international trade will generally give rise to persistent structural trade imbalances, covered by endogenously generated capital flows that fill any gaps in the overall balance of payments. It also implies that devaluations will not have a lasting effect on trade balances unless they are attended by fundamental changes in national real wages or productivities. Finally, it implies that neither the absolute nor the relative version of the Purchasing Power Parity (PPP) hypothesis will generally hold, except that the relative version will appear to hold when a country experiences a relatively high inflation rate. Such patterns are well documented, and in contrast to comparative-advantage or PPP theory, the present approach implies that the existing historical record is perfectly coherent. Empirical tests of the propositions advanced in this paper have been conducted elsewhere, with good results.
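
    The pricing claim can be made concrete with a small illustrative sketch (the notation is hypothetical, not the paper's): if internationally mobile finance equalizes the profit rate r earned by the regulating capitals of each country, tradable prices are cost-determined for given real wages, and the terms of trade reduce to relative real unit costs.

```latex
% Illustrative notation only: w, l, m are the wage rate, labor and
% material input coefficients of the home regulating capital; starred
% symbols are the foreign counterparts; e is the nominal exchange rate;
% r is the common profit rate enforced by mobile financial capital.
\begin{align*}
  p   &= (1 + r)\,(w\,l + m)          && \text{home regulating price}\\
  p^* &= (1 + r)\,(w^* l^* + m^*)     && \text{foreign regulating price}\\
  \frac{p}{e\,p^*} &= \frac{w\,l + m}{e\,(w^* l^* + m^*)}
      && \text{terms of trade = relative real unit costs}
\end{align*}
```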

    On stepdown control of the false discovery proportion

    Consider the problem of testing multiple null hypotheses. A classical approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate ($\mathit{FWER}$), the probability of even one false rejection. However, if the number of hypotheses $s$ is large, control of the $\mathit{FWER}$ is so stringent that the ability of a procedure which controls the $\mathit{FWER}$ to detect false null hypotheses is limited. Consequently, it is desirable to consider other measures of error control. We will consider methods based on control of the false discovery proportion ($\mathit{FDP}$), defined as the number of false rejections divided by the total number of rejections (defined to be 0 if there are no rejections). The false discovery rate proposed by Benjamini and Hochberg (1995) controls $E(\mathit{FDP})$. Here, we construct methods such that, for any $\gamma$ and $\alpha$, $P\{\mathit{FDP} > \gamma\} \le \alpha$. Based on $p$-values of individual tests, we consider stepdown procedures that control the $\mathit{FDP}$ without imposing dependence assumptions on the joint distribution of the $p$-values. A greatly improved version of a method given in Lehmann and Romano [Ann. Statist. 33 (2005) 1138--1154] is derived and generalized to provide a means by which any sequence of nondecreasing constants can be rescaled to ensure control of the $\mathit{FDP}$. We also provide a stepdown procedure that controls the $\mathit{FDR}$ under a dependence assumption. Published at http://dx.doi.org/10.1214/074921706000000383 in the IMS Lecture Notes--Monograph Series (http://www.imstat.org/publications/lecnotes.htm) by the Institute of Mathematical Statistics (http://www.imstat.org).
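
    As a concrete reference point, here is a minimal Python sketch of a stepdown procedure of this type, using the Lehmann-Romano critical constants $\alpha_i = (\lfloor\gamma i\rfloor + 1)\,\alpha/(s + \lfloor\gamma i\rfloor + 1 - i)$. It illustrates the stepdown mechanics the abstract refers to, not the paper's improved and generalized version.

```python
import numpy as np

def lr_stepdown_fdp(pvals, gamma, alpha):
    """Stepdown control of P{FDP > gamma} <= alpha: a hedged sketch.

    Uses the Lehmann-Romano constants
        alpha_i = (floor(gamma*i)+1) * alpha / (s + floor(gamma*i) + 1 - i),
    walking up from the smallest p-value and stopping at the first
    exceedance. Returns a boolean rejection indicator per hypothesis.
    """
    pvals = np.asarray(pvals, dtype=float)
    s = len(pvals)
    order = np.argsort(pvals)                  # indices sorting the p-values
    p_sorted = pvals[order]
    i = np.arange(1, s + 1)
    k = np.floor(gamma * i) + 1                # tolerated false rejections + 1
    crit = k * alpha / (s + k - i)             # stepdown critical constants
    below = p_sorted <= crit
    n_reject = s if below.all() else int(np.argmin(below))  # first failure stops
    rejected = np.zeros(s, dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected
```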

    Stepup procedures for control of generalizations of the familywise error rate

    Consider the multiple testing problem of testing null hypotheses $H_1, \ldots, H_s$. A classical approach to dealing with the multiplicity problem is to restrict attention to procedures that control the familywise error rate ($\mathit{FWER}$), the probability of even one false rejection. But if $s$ is large, control of the $\mathit{FWER}$ is so stringent that the ability of a procedure that controls the $\mathit{FWER}$ to detect false null hypotheses is limited. It is therefore desirable to consider other measures of error control. This article considers two generalizations of the $\mathit{FWER}$. The first is the $k$-$\mathit{FWER}$, in which one is willing to tolerate up to $k - 1$ false rejections, controlling the probability of $k$ or more false rejections, for some fixed $k \ge 1$. The second is based on the false discovery proportion ($\mathit{FDP}$), defined to be the number of false rejections divided by the total number of rejections (and defined to be 0 if there are no rejections). Benjamini and Hochberg [J. Roy. Statist. Soc. Ser. B 57 (1995) 289--300] proposed control of the false discovery rate ($\mathit{FDR}$), by which they meant that, for fixed $\alpha$, $E(\mathit{FDP}) \le \alpha$. Here, we consider control of the $\mathit{FDP}$ in the sense that, for fixed $\gamma$ and $\alpha$, $P\{\mathit{FDP} > \gamma\} \le \alpha$. Beginning with any nondecreasing sequence of constants and $p$-values for the individual tests, we derive stepup procedures that control each of these two measures of error control without imposing any assumptions on the dependence structure of the $p$-values. We use our results to point out a few interesting connections with some closely related stepdown procedures. We then compare and contrast two $\mathit{FDP}$-controlling procedures obtained using our results with the stepup procedure for control of the $\mathit{FDR}$ of Benjamini and Yekutieli [Ann. Statist. 29 (2001) 1165--1188]. Published at http://dx.doi.org/10.1214/009053606000000461 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
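
    For contrast with the stepdown mechanics above, the following hedged Python sketch shows the generic stepup template into which specific $k$-$\mathit{FWER}$ or $\mathit{FDP}$ constants are plugged: find the largest index $j$ with $p_{(j)} \le \alpha_j$ and reject the $j$ most significant hypotheses. The Benjamini-Hochberg constants appear below purely as a familiar illustration, not as the constants derived in the paper.

```python
import numpy as np

def stepup(pvals, crit):
    """Generic stepup procedure: a hedged template.

    Given nondecreasing critical constants crit[0] <= ... <= crit[s-1],
    reject the j hypotheses with the smallest p-values, where j is the
    largest index with p_(j) <= crit[j-1] (j = 0 if there is none).
    """
    pvals = np.asarray(pvals, dtype=float)
    order = np.argsort(pvals)
    p_sorted = pvals[order]
    hits = np.nonzero(p_sorted <= np.asarray(crit))[0]
    n_reject = 0 if hits.size == 0 else int(hits[-1]) + 1
    rejected = np.zeros(len(pvals), dtype=bool)
    rejected[order[:n_reject]] = True
    return rejected

# Familiar illustration only: the Benjamini-Hochberg FDR constants
# alpha_j = j * alpha / s.
s, alpha = 10, 0.05
bh_crit = np.arange(1, s + 1) * alpha / s
```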

    On the uniform asymptotic validity of subsampling and the bootstrap

    This paper provides conditions under which subsampling and the bootstrap can be used to construct estimators of the quantiles of the distribution of a root that behave well uniformly over a large class of distributions $\mathbf{P}$. These results are then applied (i) to construct confidence regions that behave well uniformly over $\mathbf{P}$, in the sense that the coverage probability tends to at least the nominal level uniformly over $\mathbf{P}$, and (ii) to construct tests that behave well uniformly over $\mathbf{P}$, in the sense that the size tends to no greater than the nominal level uniformly over $\mathbf{P}$. Without these stronger notions of convergence, the asymptotic approximations to the coverage probability or size may be poor, even in very large samples. Specific applications include the multivariate mean, testing moment inequalities, multiple testing, the empirical process and U-statistics. Published at http://dx.doi.org/10.1214/12-AOS1051 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
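
    To fix ideas, here is a minimal, hedged Python sketch of the generic subsampling recipe for a univariate mean (function name and defaults are illustrative). The paper's contribution concerns when such intervals behave well uniformly over $\mathbf{P}$, which the sketch itself does not establish.

```python
import numpy as np

def subsample_ci(x, b, alpha=0.05, n_sub=1000, seed=None):
    """Subsampling confidence interval for a univariate mean: a sketch.

    Approximates the law of the root sqrt(n) * (mean_n - mu) by
    sqrt(b) * (mean_b - mean_n) over random subsamples of size b << n,
    then inverts the estimated quantiles into an interval for mu.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    theta_n = x.mean()
    roots = np.empty(n_sub)
    for j in range(n_sub):
        sub = rng.choice(x, size=b, replace=False)   # subsample, no replacement
        roots[j] = np.sqrt(b) * (sub.mean() - theta_n)
    lo, hi = np.quantile(roots, [alpha / 2, 1 - alpha / 2])
    # Invert the root: mu lies in [theta_n - hi/sqrt(n), theta_n - lo/sqrt(n)]
    return theta_n - hi / np.sqrt(n), theta_n - lo / np.sqrt(n)
```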

    "Explaining Long-Term Exchange Rate Behavior in the United States and Japan"

    Conventional exchange rate models rest on the fundamental hypothesis that, in the long run, real exchange rates will move in such a way as to make countries equally competitive, so that trade between countries will be roughly balanced in the long run. The difficulty in assessing expectations about the consequences of trade arrangements (such as NAFTA or the EEC) is that these models perform quite poorly at the empirical level, making them an unreliable guide to economic policy. A sound foundation for economic policy requires a theoretically grounded explanation of exchange rates that works well across a spectrum of developed and developing countries. This paper applies the theoretical and empirical framework developed in Shaikh (1980, 1991, 1995), and previously applied to Spain, Mexico, and Greece (Roman 1997; Ruiz-Napoles 1996; Antonopoulos 1997), to the explanation of the exchange rates of the United States and Japan. This framework implies that it is a country's competitive position, as measured by the real unit costs of its tradables, that determines its real exchange rate. The determination of real exchange rates by real unit costs provides a possible explanation for why trade imbalances remain persistent, as well as a policy rule of thumb for sustainable exchange rates. The aim is to show that a theoretically grounded, empirically robust explanation of real exchange rate movements can be constructed that is also of practical use to researchers and policymakers.
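
    The hypothesis lends itself to a simple empirical check. The sketch below uses hypothetical series names and is not the paper's code: it compares the log real exchange rate with log relative real unit costs of tradables, which under this framework should move together over the long run.

```python
import numpy as np

def competitiveness_check(e, p, p_star, ulc, ulc_star):
    """Compare the real exchange rate with relative real unit costs.

    All arguments are hypothetical aligned time series: e is the nominal
    exchange rate (home currency per unit of foreign), p and p_star are
    price levels, ulc and ulc_star are unit labor costs of tradables.
    Both log series are defined so that a rise means home goods become
    relatively cheaper; the hypothesis is long-run co-movement.
    """
    e, p, p_star = map(np.asarray, (e, p, p_star))
    ulc, ulc_star = map(np.asarray, (ulc, ulc_star))
    log_rer = np.log(e * p_star / p)          # log real exchange rate
    log_ruc = np.log(e * ulc_star / ulc)      # log relative real unit costs
    corr = np.corrcoef(log_rer, log_ruc)[0, 1]  # crude long-run check
    return log_rer, log_ruc, corr
```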

    AN INVERSE DEMAND APPROACH TO RECREATION FISHING SITE CHOICE AND IMPLIED MARGINAL VALUES

    An alternative methodology for determining marginal willingness-to-pay values for recreational fishing trips is developed, based on inverse demand systems and the distance function. Our empirical application uses joint estimation of several species-specific site equations from a recreation fishing data set. Results are compared to a random utility model.
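
    The link between the distance function and inverse demands that this approach relies on can be sketched as follows (standard duality notation, assumed rather than taken from the paper): expenditure-normalized prices equal the gradient of the distance function in quantities, so marginal values follow from estimated distance-function derivatives.

```latex
% Shephard-Hanoch duality sketch: D(u, q) is the distance function,
% q the vector of trip quantities, p the corresponding implicit prices.
\[
  \frac{p_i}{\sum_{j} p_j q_j} \;=\; \frac{\partial D(u, q)}{\partial q_i},
  \qquad i = 1, \dots, n .
\]
```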

    "Measuring Capacity Utilization in OECD Countries: A Cointegration Method"

    This paper derives measures of potential output and capacity utilization for a number of OECD countries, using a method based on the cointegration relation between output and the capital stock. The intuitive idea is that economic capacity (potential output) is the aspect of output that co-varies with the capital stock over the long run. We show that this notion can be derived from a simple model that allows the capital-capacity ratio to change in response to partially exogenous, partially embodied technical change. Our method provides a simple and general procedure for estimating capacity utilization, and it closely replicates a previously developed census-based measure of U.S. manufacturing capacity utilization. Of particular interest is that our measures of capacity utilization are very different from those based on aggregate production functions, such as the ones provided by the IMF.
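
    A minimal sketch of the estimation idea, assuming an Engle-Granger-style first stage on hypothetical series (the paper's model is richer, allowing a time-varying capital-capacity ratio):

```python
import numpy as np
import statsmodels.api as sm

def capacity_utilization(log_y, log_k):
    """Cointegration-based capacity utilization: a hedged sketch.

    Regress log output on log capital plus a linear trend (the trend
    stands in for technical change); the fitted values proxy log
    potential output, and utilization is actual output over potential,
    normalized so the sample peak corresponds to full capacity.
    """
    log_y = np.asarray(log_y, dtype=float)
    log_k = np.asarray(log_k, dtype=float)
    t = np.arange(len(log_y))                        # linear trend
    X = sm.add_constant(np.column_stack([log_k, t]))
    fit = sm.OLS(log_y, X).fit()                     # cointegrating regression
    log_potential = fit.fittedvalues                 # long-run capacity proxy
    u = np.exp(log_y - log_potential)                # raw utilization
    return u / u.max()                               # peak = full capacity
```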