Bailing in the private sector: on the adequate design of international bond contracts
During the last decade, there has been a significant bias towards bond financing in emerging markets, with private investors relying on a bail-out of bonds by the international community. This bias has been a main cause of the recent excessive fragility of international capital markets. The paper shows how collective action clauses in bond contracts help to involve the private sector in risk sharing. It argues that such clauses, as a market-based instrument, will raise spreads for emerging-market debt and so help to correct a market failure towards excessive bond finance. Recent pressure by the IMF to involve the private sector faces a conflict between the principle of honouring existing contracts and the principle of equal treatment of bondholders.
Gradualism vs Cold Turkey
The paper analyzes the incentive for the ECB to establish reputation by pursuing a restrictive policy right at the start of its operation. The bank is modelled as risk averse with respect to deviations of both inflation and output from its targets. The public, being imperfectly informed about the bank's preferences, uses observed inflation as an (imperfect) signal of the unknown preferences. Under linear learning rules, which are commonly used in the literature, a gradual build-up of reputation, partially accommodating high inflation expectations, is the optimal response. The paper shows that such a linear learning rule is not consistent with efficient signaling. In a game with efficient signaling, a cold turkey approach, allowing for deflation, is optimal for a strong bank: it accepts high current output losses at the beginning in order to demonstrate its toughness.
Gradualism vs Cold Turkey: how to establish credibility for the ECB
The paper analyzes the incentive for the ECB to establish reputation by pursuing a restrictive policy right at the start of its operation. The bank is modelled as risk averse with respect to deviations of both inflation and output from its targets. The public, being imperfectly informed about the bank's preferences, uses observed inflation as an (imperfect) signal of the unknown preferences. Under linear learning rules, which are commonly used in the literature, a gradual build-up of reputation, partially accommodating high inflation expectations, is the optimal response. The paper shows that such a linear learning rule is not consistent with efficient signaling. In a game with efficient signaling, a cold turkey approach, allowing for deflation, is optimal for a strong bank: it accepts high current output losses at the beginning in order to demonstrate its toughness. JEL classification: D 82, E 58
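A minimal sketch of the standard ingredients such a signalling model typically builds on (the abstract does not spell out the functional forms, so the notation below is only indicative): a central bank of unknown type \(\lambda\) minimises a quadratic loss in inflation and output deviations, while under a linear learning rule the public simply updates expected inflation in proportion to its forecast error,
\[
  L_t = (\pi_t - \pi^*)^2 + \lambda\,(y_t - y^*)^2, \qquad
  \pi^e_{t+1} = \pi^e_t + \kappa\,(\pi_t - \pi^e_t), \quad 0 < \kappa < 1 .
\]
Under efficient signalling the public instead draws the full Bayesian inference about \(\lambda\) from observed inflation, which is what makes an initially deflationary cold turkey policy informative for a strong bank.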
Financial Stability and Monetary Policy – A Framework
The paper presents a stylised framework to analyse conditions under which monetary policy contributes to amplified movements in the housing market. Extending work by Hyun Shin (2005), the paper analyses self-enforcing feedback mechanisms resulting in amplifier effects in a credit-constrained economy. The paper characterizes conditions for asymmetric effects that cause systemic crises. By injecting liquidity, monetary policy can prevent a meltdown. Anticipating such a response, private agents are encouraged to take higher risks. Provision of liquidity works as a public good, but it may create potential conflicts with other policy objectives and may give incentives to build up leverage with a high systemic exposure to small-probability events.
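A one-line illustration of the amplification logic (a stylised reading, not the paper's exact mechanism): if an initial asset-price shock \(\Delta\) relaxes collateral constraints and the extra credit it enables feeds back into prices with intensity \(\lambda\), the feedback loop converges to
\[
  \Delta_{\text{total}} = \Delta\,(1 + \lambda + \lambda^2 + \dots) = \frac{\Delta}{1-\lambda}, \qquad 0 \le \lambda < 1,
\]
so the amplifier grows without bound as \(\lambda\) approaches 1, which is one way to read the region in which small shocks can turn systemic.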
Speculative attacks: unique sunspot equilibrium and transparency
Models with multiple equilibria are a popular way to explain currency attacks. Morris and Shin (1998) have shown that, in the context of those models, unique equilibria may prevail once noisy private information is introduced. In this paper, we generalize the results of Morris and Shin to a broader class of probability distributions and show, using the technique of iterated elimination of dominated strategies, that uniqueness will hold even if we allow for sunspots and individual uncertainty about the strategic behavior of other agents. We provide a clear exposition of the logic of this model and analyse the impact of transparency on the probability of a speculative attack. For the case of a uniform distribution of noisy signals, we show that increased transparency of government policy reduces the likelihood of attacks. JEL classification: F 31, D 82
Speculative Attacks
Models with multiple equilibria are a popular way to explain currency attacks. Morris and Shin (1998) have shown that, in the context of those models, unique equilibria may prevail once noisy private information is introduced. In this paper, we generalize the results of Morris and Shin to a broader class of probability distributions and show, using the technique of iterated elimination of dominated strategies, that uniqueness will hold even if we allow for sunspots and individual uncertainty about the strategic behavior of other agents. We provide a clear exposition of the logic of this model and analyse the impact of transparency on the probability of a speculative attack. For the case of a uniform distribution of noisy signals, we show that increased transparency of government policy reduces the likelihood of attacks.
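The uniqueness argument can be illustrated numerically. The sketch below is my own stylised version of a Morris-Shin-type attack game with uniform signal noise, not the paper's exact model, and all parameter values are hypothetical: speculators observe x = theta + noise, the attack succeeds if the attacking mass is at least theta, and iterating the best-response map from both dominance regions (mirroring iterated elimination of dominated strategies) drives both iterations to the same threshold.

```python
# Stylised Morris-Shin-type currency-attack game with uniform noise (illustrative).
import numpy as np

sigma = 0.1   # signal noise dispersion (hypothetical)
c     = 0.3   # relative cost of attacking, in (0, 1) (hypothetical)

def regime_threshold(x_star):
    """theta* below which the attack succeeds when everyone with a signal
    below x_star attacks: solves attack_mass(theta) = theta by bisection."""
    lo, hi = 0.0, 1.0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        attack_mass = np.clip((x_star - mid) / sigma + 0.5, 0.0, 1.0)
        if attack_mass >= mid:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def best_response(x_star):
    """Signal threshold of the just-indifferent speculator when the others
    use threshold x_star (posterior over theta is uniform around the signal)."""
    return regime_threshold(x_star) + sigma * (0.5 - c)

# Iterate from both dominance regions: attacking is dominant for signals far
# below 0, and never profitable for signals far above 1.
x_low, x_high = -1.0, 2.0
for _ in range(200):
    x_low, x_high = best_response(x_low), best_response(x_high)

print(x_low, x_high)               # both iterations converge to the same threshold
print(regime_threshold(x_low))     # unique regime threshold, here equal to 1 - c
```

With a uniform posterior the interior fixed point satisfies theta* = 1 - c, which the iteration recovers; convergence of the upper and lower iterations to the same point is the numerical counterpart of the uniqueness result.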
The New Basel Capital Accord and the Cyclical Behaviour of Bank Capital
The authors conduct a counterfactual simulation of the proposed rules under the new Basel Capital Accord (Basel II), including the revised treatment of expected and unexpected credit losses proposed by the Basel Committee in October 2003. When the authors apply the simulation to Canadian banking system data over the period 1984–2003, they find that capital requirements for banks will likely fall in absolute terms even after allowing for the new operational risk charge (bearing in mind that the induced behavioural response of banks to the changed incentives under Basel II is not captured). The impact on the volatility of required bank capital is less clear. It will depend importantly on the credit quality distribution of banks' loan portfolios and on the precise way in which they calculate expected and unexpected losses. Sensitivity analysis, including that based on a range of hypothetical distributions for banks' loan portfolios, shows the potential for a substantial increase in implied volatility. Moreover, if historical relationships are a good indicator of the future, changes in required capital and provisions for commercial and industrial, interbank, and sovereign exposures will likely be countercyclical under Basel II (i.e., capital requirements will increase during recessions). This raises questions about the new accord's potentially procyclical impact on banks' lending behaviour, and the resultant macroeconomic implications.
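For reference, the sketch below implements the Basel II internal-ratings-based (IRB) risk-weight function for corporate exposures in its final, unexpected-loss-only form, i.e. with expected losses carved out as the revised treatment described above intended. It is a generic illustration of the Basel II formula, not the authors' simulation code, and the PD/LGD inputs are purely hypothetical.

```python
# Basel II corporate IRB capital requirement per unit of exposure (illustrative).
from math import exp, log, sqrt
from scipy.stats import norm

def irb_capital_requirement(pd, lgd, maturity=2.5):
    """Capital requirement K per unit of exposure under the corporate IRB formula."""
    pd = max(pd, 0.0003)                       # regulatory PD floor of 0.03%
    # Asset correlation, decreasing in PD
    w = (1 - exp(-50 * pd)) / (1 - exp(-50))
    r = 0.12 * w + 0.24 * (1 - w)
    # Conditional default probability at the 99.9% systematic stress level
    stressed_pd = norm.cdf((norm.ppf(pd) + sqrt(r) * norm.ppf(0.999)) / sqrt(1 - r))
    # Unexpected loss only: expected loss PD * LGD is subtracted
    k = lgd * stressed_pd - pd * lgd
    # Maturity adjustment
    b = (0.11852 - 0.05478 * log(pd)) ** 2
    k *= (1 + (maturity - 2.5) * b) / (1 - 1.5 * b)
    return k

# Hypothetical loan: PD = 1%, LGD = 45%, 2.5-year maturity
k = irb_capital_requirement(0.01, 0.45)
print(f"K = {k:.4f}, risk weight = {12.5 * k * 100:.1f}%")   # RWA = 12.5 * K * EAD
```

Because the stressed default probability rises steeply as PD deteriorates, required capital computed this way moves with the credit-quality distribution of the portfolio, which is the channel behind the countercyclicality discussed above.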
Liquidity Shortages and Monetary Policy
The paper models the interaction between risk taking in the financial sector and central bank policy for the case of pure illiquidity risk. It is shown that, when bad states are highly unlikely, public provision of liquidity may improve the allocation, even though it encourages more risk taking (less liquid investment) by private banks. In general, however, financial intermediaries have an incentive to free ride on liquidity in good states, resulting in excessively low liquidity in bad states. In the prevailing mixed-strategy equilibrium, depositors are worse off than if banks coordinated on more liquid investment. In that case, liquidity injection makes the free-riding problem even worse. The results show that even in the case of pure illiquidity risk, there is a serious commitment problem for central banks. We show that unconditional free lending against good collateral, as suggested by the Bagehot rule, fails to address the moral hazard problem: even in a model with pure illiquidity risk, such a policy encourages banks to behave imprudently, providing an insufficient level of liquidity. Keywords: monetary policy, liquidity risk, financial stability
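A deliberately stylised two-bank example (my own construction, not the model in the paper; all parameter values are hypothetical) may help to see why free riding on liquidity produces a mixed-strategy equilibrium: holding liquidity is a best response only when the other bank does not hold it, so there is no symmetric pure-strategy equilibrium and each bank randomises.

```python
# Two banks each choose LIQUID (safe return 1) or ILLIQUID (return R). With
# probability p a liquidity shock hits; an illiquid bank then survives only by
# borrowing from a liquid peer at fee f, and fails (payoff 0) if the peer is
# illiquid too. Hypothetical parameters chosen so the free-riding logic bites.
R, p, f = 1.10, 0.20, 0.05

def payoff(own_liquid, other_liquid):
    """Expected payoff of one bank given both banks' pure choices."""
    if own_liquid:
        fee_income = p * f if not other_liquid else 0.0
        return 1.0 + fee_income
    if other_liquid:                      # free ride on the peer's liquidity
        return (1 - p) * R + p * (R - f)
    return (1 - p) * R                    # both illiquid: fail in the shock

# Free-riding structure: illiquid is a best response to a liquid peer, and
# liquid is a best response to an illiquid peer.
assert payoff(False, True) > payoff(True, True)
assert payoff(True, False) > payoff(False, False)

# Symmetric mixed-strategy equilibrium: probability q of choosing LIQUID that
# makes each bank indifferent between the two actions.
def expected(own_liquid, q):
    return q * payoff(own_liquid, True) + (1 - q) * payoff(own_liquid, False)

q_grid = [i / 10_000 for i in range(10_001)]
q_star = min(q_grid, key=lambda q: abs(expected(True, q) - expected(False, q)))

print(f"equilibrium prob. of holding liquidity: q* = {q_star:.3f}")
print(f"prob. a bank fails in equilibrium:      {p * (1 - q_star)**2:.3f}")
print("prob. a bank fails if both stay liquid: 0.000")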
Biologically plausible deep learning -- but how far can we go with shallow networks?
Training deep neural networks with the error backpropagation algorithm is considered implausible from a biological perspective. Numerous recent publications suggest elaborate models for biologically plausible variants of deep learning, typically defining success as reaching around 98% test accuracy on the MNIST data set. Here, we investigate how far we can go on digit (MNIST) and object (CIFAR10) classification with biologically plausible, local learning rules in a network with one hidden layer and a single readout layer. The hidden layer weights are either fixed (random or random Gabor filters) or trained with unsupervised methods (PCA, ICA or Sparse Coding) that can be implemented by local learning rules. The readout layer is trained with a supervised, local learning rule. We first implement these models with rate neurons. This comparison reveals, first, that unsupervised learning does not lead to better performance than fixed random projections or Gabor filters for large hidden layers. Second, networks with localized receptive fields perform significantly better than networks with all-to-all connectivity and can reach backpropagation performance on MNIST. We then implement two of the networks (fixed localized random filters and random Gabor filters in the hidden layer) with spiking leaky integrate-and-fire neurons and spike-timing-dependent plasticity to train the readout layer. These spiking models achieve > 98.2% test accuracy on MNIST, which is close to the performance of rate networks with one hidden layer trained with backpropagation. The performance of our shallow network models is comparable to most current biologically plausible models of deep learning. Furthermore, our results with a shallow spiking network provide an important reference and suggest the use of datasets other than MNIST for testing the performance of future models of biologically plausible deep learning. Comment: 14 pages, 4 figures
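As a rough point of reference, the following sketch mimics the simplest of the rate-based models described above: a single hidden layer with fixed random projections and a readout trained with a local, supervised delta rule. It is not the authors' code, and scikit-learn's small 8x8 digits set stands in for MNIST so that the script stays self-contained.

```python
# Fixed random hidden layer + locally trained readout (illustrative sketch).
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X = X / 16.0                                   # scale pixel values to [0, 1]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

n_hidden = 1000
W = rng.normal(0.0, 1.0 / np.sqrt(X.shape[1]), size=(X.shape[1], n_hidden))

def hidden(x):
    return np.maximum(x @ W, 0.0)              # fixed random features + ReLU

T_tr = np.eye(10)[y_tr]                        # one-hot targets for 10 classes

# Readout trained with a local delta rule: each synaptic update uses only its
# presynaptic activity and the postsynaptic error.
V = np.zeros((n_hidden, 10))
eta = 0.001
for epoch in range(30):
    for i in rng.permutation(len(X_tr)):
        h = hidden(X_tr[i])
        err = T_tr[i] - h @ V
        V += eta * np.outer(h, err)

acc = np.mean(np.argmax(hidden(X_te) @ V, axis=1) == y_te)
print(f"test accuracy with a fixed random hidden layer: {acc:.3f}")
```

Swapping the random projection matrix for localized patches or Gabor filters, or replacing the rate units with spiking neurons and an STDP-based readout, would move this toy script closer to the models compared in the abstract.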
