
    Nagelian Reduction and Coherence

    It can be argued (cf. Dizadji‑Bahmani et al. 2010) that an increase in coherence is one goal that drives reductionist enterprises. Consequently, the question of whether, or how well, this goal is achieved can serve as an epistemic criterion for evaluating both a concrete case of a purported reduction and our model of reduction: what conditions on the model allow for an increase in coherence? To answer this question, I provide an analysis of the relation between the reduction and the coherence of two theories. The underlying model of reduction is a (generalised) Nagelian model (cf. Nagel 1970, Schaffner 1974, Dizadji‑Bahmani et al. 2010). For coherence, different measures have been put forward (e.g. Shogenji 1999, Olsson 2002, Fitelson 2003, Bovens & Hartmann 2003). However, since there are counterexamples to each proposed coherence measure, we should take care that the analysis be sufficiently stable (in a sense to be specified). It will turn out that this can be done
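
    As a concrete instance of the measures cited above, and relying only on the standard formulation from the literature rather than anything specific to this paper's analysis, Shogenji's (1999) ratio measure takes a set of propositions to be coherent to the degree that their joint probability exceeds what probabilistic independence would predict; competing measures (e.g. Olsson 2002, Fitelson 2003) are built differently and need not agree in their verdicts, which is the source of the stability worry mentioned above. A minimal statement of the Shogenji measure:

    % Shogenji's (1999) ratio measure: coherence as the factor by which the joint
    % probability of A_1, ..., A_n exceeds the product of their marginals.
    % C_S = 1 marks probabilistic independence; C_S > 1 positive coherence.
    \[
      C_S(A_1, \ldots, A_n) \;=\; \frac{P(A_1 \wedge \cdots \wedge A_n)}{\prod_{i=1}^{n} P(A_i)}
    \]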

    Black market and official exchange rates: Long-run equilibrium and short-run dynamics

    This paper provides further empirical results on the relationship between black market and official exchange rates in six emerging economies (Iran, India, Indonesia, Korea, Pakistan, and Thailand). First, it applies both time series techniques and heterogeneous panel methods to test for the existence of a long-run relation between these two types of exchange rates. Second, it formally tests the validity of the proportionality restriction implying a constant black-market premium. Third, in addition to the long-run equilibrium, it also analyses the short-run dynamic responses of both markets to shocks. Evidence of market inefficiency and incomplete (or long-lived) reversion to long-run equilibrium is found. This implies that financial managers can only partially reduce exchange rate risk, whilst monetary authorities can effectively pursue their policy objectives by imposing foreign exchange or direct controls.
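
    By way of illustration, a minimal time-series version of the first two steps might look as follows; the data file and column names are hypothetical, and the paper's heterogeneous panel methods are not reproduced here.

    # Sketch: Engle-Granger cointegration test between (log) black-market and
    # official exchange rates, plus a check of the proportionality restriction.
    # The CSV file and series names are hypothetical placeholders.
    import numpy as np
    import pandas as pd
    from statsmodels.tsa.stattools import coint, adfuller

    df = pd.read_csv("exchange_rates.csv")          # hypothetical input file
    log_black = np.log(df["black_market_rate"])     # hypothetical column names
    log_official = np.log(df["official_rate"])

    # 1) Long-run relation: Engle-Granger test for cointegration.
    t_stat, p_value, _ = coint(log_black, log_official)
    print(f"Engle-Granger t-stat: {t_stat:.3f}, p-value: {p_value:.3f}")

    # 2) Proportionality restriction: a (1, -1) cointegrating vector implies the
    #    log premium is stationary, so an ADF test on the premium is one check.
    premium = log_black - log_official
    adf_stat, adf_p, *_ = adfuller(premium)
    print(f"ADF on log premium: stat {adf_stat:.3f}, p-value {adf_p:.3f}")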

    Production of tau tau jj final states at the LHC and the TauSpinner algorithm: the spin-2 case

    The TauSpinner algorithm is a tool that allows the physics model of Monte Carlo generated samples to be modified for changed assumptions about the event production dynamics, without the need to re-generate events. With the help of per-event weights, τ-lepton production or decay processes can be modified according to a new physics model. In a recent paper, a new version, TauSpinner ver.2.0.0, was presented which includes a provision for introducing non-standard states and couplings and studying their effects in vector-boson-fusion processes, by exploiting the spin correlations of the τ-lepton pair decay products in processes whose final states also include two hard jets. In the present paper we document how this can be achieved, taking as an example a non-standard spin-2 state that couples to Standard Model particles, with tree-level matrix elements, including complete helicity information, for the parton-parton scattering amplitudes into a τ-lepton pair and two outgoing partons. This implementation is prepared as an external (user-provided) routine for the TauSpinner algorithm. It exploits amplitudes generated by MadGraph5 and adapted to the TauSpinner algorithm format. Consistency tests of the implemented matrix elements and of the reweighting algorithm, as well as numerical results for observables sensitive to τ polarization, are presented.
    Comment: 17 pages, 6 figures; version published in EPJ
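
    To illustrate the general reweighting idea (not the actual TauSpinner interface), a minimal sketch follows; the matrix-element functions are hypothetical stand-ins for user-provided amplitudes such as those exported from MadGraph5.

    # Sketch of the per-event reweighting idea behind TauSpinner-style tools:
    # each generated event gets a weight equal to the ratio of squared matrix
    # elements under the new and the original physics model, so the sample can
    # be reinterpreted without re-generating events. me2_spin2 and
    # me2_standard_model below are hypothetical stand-ins, not the TauSpinner API.

    def reweight_events(events, me2_new, me2_old):
        """Return one weight per event: |M_new|^2 / |M_old|^2."""
        weights = []
        for ev in events:
            num = me2_new(ev)   # squared amplitude under the new model (e.g. spin-2)
            den = me2_old(ev)   # squared amplitude under the original model
            weights.append(num / den if den > 0.0 else 0.0)
        return weights

    # Usage (hypothetical): histograms filled with these weights approximate the
    # distributions that direct generation under the spin-2 model would give.
    # weights = reweight_events(parton_level_events, me2_spin2, me2_standard_model)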

    Asleep at the wheel: the real interest rate experience in Australia

    A re-thinking and a clear understanding of the factors underlying a country's balance of trade are needed as the global trade regime becomes more liberalized. The relationship between the overall trade balance and its determinants, as propounded in the standard models, may not necessarily hold for bilateral trade balances. This study develops a model of the bilateral trade balance that captures the effects of all the factors suggested by the elasticity, absorption, and monetary approaches and by the popular Gravity Model, with some extensions. Specifically, the present paper postulates that relative factors determine the trading pattern, and hence the trade balance, of a country in bilateral trade with its partners, whereas in the earlier models absolute factors determine the trade balance. Using standard panel data techniques, the model is tested empirically, and the results show significant effects of all the relative factors on the bilateral trade balance of Bangladesh in trading with her partners. A robustness check supports the validity of the specification.
    Keywords: trade balance, panel data
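
    As a rough illustration of the kind of specification described, not the paper's exact model, a fixed-effects panel regression with relative (home versus partner) factors might be sketched as follows; the data file and variable names are hypothetical.

    # Sketch of a gravity-style panel regression for bilateral trade balances.
    # The file, column names, and regressor set are hypothetical; relative
    # factors are represented here as home/partner ratios in logs.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    panel = pd.read_csv("bilateral_trade_panel.csv")  # hypothetical: one row per partner-year

    # Relative factors: home-country variable divided by partner-country variable.
    panel["rel_income"] = np.log(panel["gdp_home"] / panel["gdp_partner"])
    panel["rel_exrate"] = np.log(panel["exrate_home"] / panel["exrate_partner"])

    # Partner and year fixed effects absorb unobserved heterogeneity.
    model = smf.ols(
        "trade_balance ~ rel_income + rel_exrate + np.log(distance) "
        "+ C(partner) + C(year)",
        data=panel,
    )
    print(model.fit(cov_type="HC1").summary())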

    The Demand for Money in a Simultaneous-Equation Framework

    This paper estimates the demand for money in the U.S. within a model where the money supply function is also considered simultaneously. The explanatory variables for the money demand function include a measure of the interest rate, real income, and the exchange rate. The explanatory variables for the money supply function include the output gap and the inflation gap in addition to an interest rate. Estimating the two equations as a simultaneous system avoids biased or inconsistent parameter estimates. The results should be useful to both macroeconomic researchers and policy makers.
    Keywords: money demand, money supply, simultaneous-equation model, output gap, inflation gap, three-stage least squares
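
    To make the simultaneity point concrete, the sketch below implements plain two-stage least squares for a single equation, with hypothetical variable roles; the paper itself uses three-stage least squares, which additionally exploits the correlation between the two equations' errors.

    # Sketch of the instrumental-variable fix for simultaneity in the money
    # demand equation: instrument the endogenous regressor(s), then run OLS.
    # Variable names and the choice of instruments are illustrative only.
    import numpy as np

    def two_stage_least_squares(y, X_endog, X_exog, Z):
        """2SLS estimates.

        y       : (n,)  dependent variable (e.g. real money balances)
        X_endog : (n,k) endogenous regressors (e.g. the interest rate)
        X_exog  : (n,m) included exogenous regressors (e.g. real income, exchange rate)
        Z       : (n,p) excluded instruments (e.g. output gap, inflation gap)
        """
        n = len(y)
        const = np.ones((n, 1))
        instruments = np.hstack([const, X_exog, Z])

        # Stage 1: project endogenous regressors onto all exogenous information.
        first_stage_coef, *_ = np.linalg.lstsq(instruments, X_endog, rcond=None)
        X_endog_hat = instruments @ first_stage_coef

        # Stage 2: OLS of y on the fitted endogenous regressors plus exogenous ones.
        X_second = np.hstack([const, X_endog_hat, X_exog])
        beta, *_ = np.linalg.lstsq(X_second, y, rcond=None)
        return beta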

    Densest Subgraph in Dynamic Graph Streams

    In this paper, we consider the problem of approximating the densest subgraph in the dynamic graph stream model. In this model of computation, the input graph is defined by an arbitrary sequence of edge insertions and deletions, and the goal is to analyze properties of the resulting graph given memory that is sub-linear in the size of the stream. We present a single-pass algorithm that returns a (1+ε) approximation of the maximum density with high probability; the algorithm uses O(ε⁻² n polylog n) space, processes each stream update in polylog(n) time, and uses poly(n) post-processing time, where n is the number of nodes. The space used by our algorithm matches the lower bound of Bahmani et al. (PVLDB 2012) up to a poly-logarithmic factor for constant ε. The best existing results for this problem were established recently by Bhattacharya et al. (STOC 2015). They presented a (2+ε) approximation algorithm using similar space and another algorithm that both processed each update and maintained a (4+ε) approximation of the current maximum density in polylog(n) time per update.
    Comment: To appear in MFCS 201
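
    For context on the objective, the sketch below is the classical offline greedy peeling routine (a 2-approximation due to Charikar), not the paper's single-pass streaming algorithm; it only illustrates the density measure being approximated.

    # Offline baseline: repeatedly remove a minimum-degree vertex and remember
    # the best density |E(S)| / |S| seen. This is NOT the streaming algorithm,
    # which must handle insertions and deletions in sub-linear space.
    import heapq
    from collections import defaultdict

    def densest_subgraph_peeling(edges):
        adj = defaultdict(set)
        for u, v in edges:
            if u != v:
                adj[u].add(v)
                adj[v].add(u)
        nodes = set(adj)
        num_edges = sum(len(nb) for nb in adj.values()) // 2
        heap = [(len(adj[v]), v) for v in nodes]
        heapq.heapify(heap)
        removed = set()
        best_density = 0.0
        while nodes:
            best_density = max(best_density, num_edges / len(nodes))
            deg, v = heapq.heappop(heap)
            if v in removed or deg != len(adj[v]):
                continue                      # stale heap entry, skip it
            removed.add(v)
            nodes.discard(v)
            for w in adj[v]:
                adj[w].discard(v)
                heapq.heappush(heap, (len(adj[w]), w))  # refresh neighbor degree
            num_edges -= deg
            adj[v].clear()
        return best_density

    # Example: a triangle plus a pendant edge has maximum density 1 (the triangle).
    # densest_subgraph_peeling([(1, 2), (2, 3), (1, 3), (3, 4)]) -> 1.0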
