Essays on monetary policy and financial stability
PhD in Economics (Doutoramento em Economia)

By focusing on the relationship between financial stability and monetary policy for the cases of Chile, Colombia, Japan, Portugal and the UK, this thesis aims to add to the existing literature on this fundamental issue, a traditional topic that gained importance in the aftermath of the GFC as Central Banks lowered policy rates in an effort to rescue their economies. As the zero lower bound loomed and the reach of traditional monetary policy narrowed, policy makers realised that alternative frameworks were needed; hence, macroprudential policy measures aimed at targeting the financial system as a whole were introduced.
The second chapter looks at the relationship between monetary policy and financial stability, which has gained importance in recent years as Central Bank policy rates neared the zero lower bound. We use an SVAR model to study the impact of monetary policy shocks on three proxies for financial stability as well as a proxy for economic growth. In our analysis, monetary policy is represented by policy rates for the emerging market economies (EMEs) and shadow rates for the advanced economies (AEs). Our main results show that monetary policy may be used to correct asset mispricing, to control fluctuations in the real business cycle and, in the majority of cases, to tame credit cycles. Our results also show that for the majority of cases, in line with theory, local currencies appreciate following a positive monetary policy shock. Monetary policy intervention may indeed be successful in contributing to or achieving financial stability. However, the results show that monetary policy may not have the ability to maintain or re-establish financial stability in all cases. Alternative policy choices, such as macroprudential policy tool frameworks aimed at targeting the financial system as a whole, may be implemented as a means of fortifying the economy.
The third chapter looks at the institutional setting of the countries in question, the independence of the Central Bank, the political environment and the impact of these factors on financial stability. I substantiate the literature review discussion with a brief empirical analysis of the effect of Central Bank Independence (CBI) on credit growth, using an existing database created by Romelli (2018). The empirical results show a positive relationship between credit growth and the level of CBI, reflected in the positive and statistically significant coefficient on the interaction term between growth in domestic credit to the private sector and the level of CBI. When considering domestic credit by deposit money banks and other financial institutions, the interaction term is positive and statistically significant for the UK in the third regression equation. A number of robustness checks show that the coefficient remains positive and statistically significant across a variety of estimation methods. Fluctuations in credit growth are larger at higher levels of CBI; hence, in periods of financial instability or, ultimately, financial crises, CBI would be reined in in an effort to re-establish financial stability. Based on the empirical results, and in an effort to slow surging credit supply and maintain financial stability, policy makers and governmental authorities should attempt to decrease the level of CBI when the economy shows signs of overheating and credit supply continues to increase.
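The interaction-term specification described can be sketched schematically as below: a financial-stability outcome is regressed on credit growth, a CBI index, and their product. The data-generating process and variable names are assumptions for illustration, not Romelli's (2018) database.

```python
# Minimal sketch of an interaction-term regression: credit fluctuations are
# simulated to rise with CBI, and the regression recovers a positive
# interaction coefficient. All data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "credit_growth": rng.normal(0.05, 0.02, n),
    "cbi": rng.uniform(0.2, 0.9, n),   # CBI index, assumed scaled to [0, 1]
})
# Simulate larger credit fluctuations at higher CBI (the chapter's finding)
df["credit_fluctuation"] = (
    0.5 * df["credit_growth"] + 0.8 * df["credit_growth"] * df["cbi"]
    + rng.normal(0, 0.01, n)
)

model = smf.ols("credit_fluctuation ~ credit_growth * cbi", data=df).fit()
print(model.params["credit_growth:cbi"])  # positive interaction coefficient
```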
The fourth chapter looks at the interaction between macroprudential policy and financial stability. The unexpected interconnectedness of the global economy, and the economic blight that resulted from it, underscored the need for an alternative policy framework aimed at targeting the financial system as a whole and hence the maintenance of financial stability. In this chapter, an index of domestic macroprudential policy tools is constructed, and the effectiveness of these tools in controlling credit growth, managing GDP growth and stabilising inflation is studied using a dynamic panel data model for the period between 2000 and 2017. The empirical analysis includes two panels, namely an EU panel of 27 countries and a Latin American panel of 7 countries; the chapter also looks at a case study of Japan, Portugal and the UK. Our main results find that a tighter macroprudential policy tool stance leads to a decrease in both credit growth and GDP growth, while resulting in higher inflation in the majority of cases. Further, we find that capital openness plays a more important role in the case of Latin America; this may be due to the region's dependence on foreign capital flows and exchange rate movements. Lastly, we find that, in times of higher perceived market volatility, GDP growth tends to be higher and inflation growth tends to be lower in the EU. In the other cases, higher levels of perceived market volatility result in higher inflation, higher credit growth and lower GDP growth. This is in line with expectations, as an increase in perceived market volatility is met with an increased flow of assets into safer markets such as the EU.
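A much-simplified sketch of a dynamic panel specification in this spirit is below: credit growth on its own lag and a macroprudential tightness index, with country fixed effects. The data are simulated, and OLS with country dummies stands in for the GMM estimators usually preferred for dynamic panels; every name is an assumption.

```python
# Hedged sketch of a dynamic panel regression: credit growth responds
# negatively to a tighter macroprudential policy (MPP) stance. Synthetic
# data; real work would use e.g. Arellano-Bond GMM rather than OLS.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
countries, years = [f"c{i}" for i in range(10)], range(2000, 2018)
rows = []
for c in countries:
    credit = 0.05
    for y in years:
        mpp = rng.integers(0, 5)  # cumulative tightening actions (index)
        credit = 0.5 * credit - 0.004 * mpp + rng.normal(0, 0.005) + 0.02
        rows.append({"country": c, "year": y, "mpp": mpp,
                     "credit_growth": credit})
df = pd.DataFrame(rows)
df["credit_growth_lag"] = df.groupby("country")["credit_growth"].shift(1)

fe = smf.ols("credit_growth ~ credit_growth_lag + mpp + C(country)",
             data=df.dropna()).fit()
print(fe.params["mpp"])  # tighter MPP stance -> lower credit growth (negative)
```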
This thesis establishes a relationship between financial stability and monetary policy by studying the response of Chile, Colombia, Japan, Portugal and the UK in the aftermath of the GFC, as Central Banks lowered policy rates in an effort to rescue their economies. In short, the results of the work conducted in this thesis may be summarised as follows. First, monetary policy contributes to the achievement of financial stability; still, monetary policy alone is not sufficient and should be reinforced by less traditional policy choices such as macroprudential policy tools. Second, the level of CBI should be reined in in times of surging credit supply in an effort to maintain financial stability. Finally, macroprudential policy tools play an important role in the achievement of financial stability; these tools should complement traditional monetary policy frameworks and should be adapted for each region.
A hybrid model for day-ahead electricity price forecasting: Combining fundamental and stochastic modelling
The accurate prediction of short-term electricity prices is vital for
effective trading strategies, power plant scheduling, profit maximisation and
efficient system operation. However, uncertainties in supply and demand make
such predictions challenging. We propose a hybrid model that combines a
techno-economic energy system model with stochastic models to address this
challenge. The techno-economic model in our hybrid approach provides a deep
understanding of the market. It captures the underlying factors and their
impacts on electricity prices, which is impossible with statistical models
alone. The statistical models incorporate non-techno-economic aspects, such as
the expectations and speculative behaviour of market participants, through the
interpretation of prices. The hybrid model generates both conventional point
predictions and probabilistic forecasts, providing a comprehensive
understanding of the market landscape. Probabilistic forecasts are particularly
valuable because they account for market uncertainty, facilitating informed
decision-making and risk management. Our model delivers state-of-the-art
results, helping market participants to make informed decisions and operate
their systems more efficiently.
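The hybrid idea of combining a fundamental point forecast with a stochastic treatment of its residuals can be sketched as follows; the price process and error magnitudes are synthetic assumptions, not the paper's model.

```python
# Illustrative sketch: a techno-economic point forecast is widened into a
# probabilistic forecast using empirical quantiles of its own residuals.
import numpy as np

rng = np.random.default_rng(3)
hours = 168
true_price = 50 + 20 * np.sin(np.arange(hours) * 2 * np.pi / 24)
fundamental_forecast = true_price + rng.normal(0, 5, hours)  # model error

# Residuals capture market behaviour the fundamental model misses
residuals = true_price - fundamental_forecast

# Probabilistic forecast: point forecast plus empirical residual quantiles
q10, q90 = np.quantile(residuals, [0.1, 0.9])
lower = fundamental_forecast + q10
upper = fundamental_forecast + q90

coverage = np.mean((true_price >= lower) & (true_price <= upper))
print(f"80% interval empirical coverage: {coverage:.2f}")
```

In practice the residual model would condition on time of day and market state rather than pooling all residuals, but the interval-construction logic is the same.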
A Spatio-temporal Decomposition Method for the Coordinated Economic Dispatch of Integrated Transmission and Distribution Grids
With numerous distributed energy resources (DERs) integrated into the
distribution networks (DNs), the coordinated economic dispatch (C-ED) is
essential for the integrated transmission and distribution grids. For large
scale power grids, centralized C-ED faces a high computational burden and
raises information privacy concerns. To tackle these issues, this paper proposes a
spatio-temporal decomposition algorithm to solve the C-ED in a distributed and
parallel manner. In the temporal dimension, the multi-period economic dispatch
(ED) of transmission grid (TG) is decomposed to several subproblems by
introducing auxiliary variables and overlapping time intervals to deal with the
temporal coupling constraints. In addition, an accelerated alternating direction
method of multipliers (A-ADMM) based temporal decomposition algorithm with a
warm-start strategy is developed to solve the ED subproblems of TG in
parallel. In the spatial dimension, a multi-parametric programming projection
based spatial decomposition algorithm is developed to coordinate the ED
problems of TG and DNs in a distributed manner. To further improve the
convergence performance of the spatial decomposition algorithm, the aggregate
equivalence approach is used for determining the feasible range of boundary
variables of TG and DNs. Moreover, we prove that the proposed spatio-temporal
decomposition method can obtain the optimal solution for bilevel convex
optimization problems with continuously differentiable objectives and
constraints. Numerical tests are conducted on three systems with different
scales, demonstrating the high computational efficiency and scalability of the
proposed spatio-temporal decomposition method.
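The distributed coordination at the heart of such schemes can be illustrated with a minimal consensus ADMM loop on a toy quadratic problem. The local costs, penalty parameter and iteration count below are assumptions; the paper's A-ADMM adds acceleration and warm starts on top of this basic pattern.

```python
# Minimal consensus ADMM sketch: each "region" minimises a local quadratic
# cost f_i(x) = 0.5 * a_i * (x - b_i)^2 while agreeing on a shared boundary
# variable z. The consensus optimum is the a-weighted average of the b_i.
import numpy as np

a = np.array([1.0, 2.0, 4.0])
b = np.array([10.0, 20.0, 30.0])
rho = 1.0                       # ADMM penalty parameter

x = np.zeros(3)                 # local copies of the boundary variable
z = 0.0                         # consensus (boundary) variable
u = np.zeros(3)                 # scaled dual variables

for _ in range(200):
    x = (a * b + rho * (z - u)) / (a + rho)  # local minimisation, closed form
    z = np.mean(x + u)                       # coordination step
    u = u + x - z                            # dual update

expected = np.sum(a * b) / np.sum(a)         # analytic consensus solution
print(z, expected)
```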
Dual dynamic programming for stochastic programs over an infinite horizon
We consider a dual dynamic programming algorithm for solving stochastic
programs over an infinite horizon. We show non-asymptotic convergence results
when using an explorative strategy, and we then enhance this result by reducing
the dependence on the effective planning horizon from quadratic to linear. This
improvement is achieved by combining the forward and backward phases from dual
dynamic programming into a single iteration. We then apply our algorithms to a
class of problems called hierarchical stationary stochastic programs, where the
cost function is a stochastic multi-stage program. The hierarchical program can
model problems with a hierarchy of decision-making, e.g., how long-term
decisions influence day-to-day operations. We show that when the subproblems
are solved inexactly via a dynamic stochastic approximation-type method, the
resulting hierarchical dual dynamic programming can find approximately optimal
solutions in finite time. Preliminary numerical results show the practical
benefits of using the explorative strategy for solving the Brazilian
hydro-thermal planning problem and economic dispatch, as well as the potential
to exploit parallel computing.Comment: 45 pages. New experiments for hierarchical problem and writing
update
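The cut-based machinery that dual dynamic programming generalises can be seen on a toy two-stage problem: a cutting-plane loop builds a piecewise-linear lower model of the expected recourse cost until the lower and upper bounds meet. The problem data below are illustrative assumptions.

```python
# Toy two-stage stochastic program solved by Benders-style cutting planes:
# minimise c*x + E[ p * max(d - x, 0) ] over x in [0, 10], demand d uniform
# on {2, 6}. The exact optimum is x = 6 with cost 6.
import numpy as np
from scipy.optimize import linprog

demands, p, c = np.array([2.0, 6.0]), 3.0, 1.0

def Q(x):                       # expected recourse (second-stage) cost
    return p * np.mean(np.maximum(demands - x, 0.0))

def subgrad(x):                 # a subgradient of Q at x
    return -p * np.mean(demands > x)

cuts = []                       # each cut: theta >= h + g * x
x, lb, ub = 0.0, -np.inf, np.inf
for _ in range(20):
    ub = min(ub, c * x + Q(x))              # forward pass: evaluate candidate
    g = subgrad(x)
    cuts.append((g, Q(x) - g * x))          # backward pass: add a cut
    # Master problem over (x, theta): min c*x + theta s.t. all cuts
    A = [[g_k, -1.0] for g_k, _ in cuts]    # g*x - theta <= -h
    b = [-h_k for _, h_k in cuts]
    res = linprog([c, 1.0], A_ub=A, b_ub=b, bounds=[(0, 10), (0, None)])
    x, lb = res.x[0], res.fun
    if ub - lb < 1e-9:
        break

print(x, lb)  # converges to x = 6, optimal cost 6
```

Dual dynamic programming extends this single-stage cut exchange to many stages; the paper's contribution is combining the forward and backward passes into one iteration over an infinite horizon.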
Trainable Variational Quantum-Multiblock ADMM Algorithm for Generation Scheduling
The advent of quantum computing can potentially revolutionize how complex
problems are solved. This paper proposes a two-loop quantum-classical solution
algorithm for generation scheduling by infusing quantum computing, machine
learning, and distributed optimization. The aim is to facilitate employing
noisy near-term quantum machines with a limited number of qubits to solve
practical power system optimization problems such as generation scheduling. The
outer loop is a 3-block quantum alternating direction method of multipliers
(QADMM) algorithm that decomposes the generation scheduling problem into three
subproblems, including one quadratically unconstrained binary optimization
(QUBO) and two non-QUBOs. The inner loop is a trainable quantum approximate
optimization algorithm (T-QAOA) for solving QUBO on a quantum computer. The
proposed T-QAOA translates interactions of quantum-classical machines as
sequential information and uses a recurrent neural network to estimate
variational parameters of the quantum circuit with a proper sampling technique.
T-QAOA determines the QUBO solution in a few quantum-learner iterations instead
of hundreds of iterations needed for a quantum-classical solver. The outer
3-block ADMM coordinates QUBO and non-QUBO solutions to obtain the solution to
the original problem. The conditions under which the proposed QADMM is
guaranteed to converge are discussed. Two mathematical and three generation
scheduling cases are studied. Analyses performed on quantum simulators and
classical computers show the effectiveness of the proposed algorithm. The
advantages of T-QAOA are discussed and numerically compared with QAOA, which
uses a stochastic gradient descent-based optimizer.
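The QUBO subproblem format that QAOA-style solvers target can be shown on a tiny instance: minimise x^T Q x over binary x. For a few variables the exact optimum is found by enumeration, which is how small quantum results are typically checked against classical baselines. The Q matrix here is an arbitrary assumption.

```python
# Brute-force solution of a 3-variable QUBO: minimise x^T Q x, x in {0,1}^3.
import itertools
import numpy as np

Q = np.array([[-1.0, 2.0, 0.0],
              [ 0.0, -1.0, 2.0],
              [ 0.0,  0.0, -1.0]])

best_x, best_val = None, float("inf")
for bits in itertools.product([0, 1], repeat=3):
    x = np.array(bits)
    val = x @ Q @ x          # QUBO objective for this bitstring
    if val < best_val:
        best_x, best_val = bits, val

print(best_x, best_val)  # (1, 0, 1) with value -2.0
```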
Preferentialism and the conditionality of trade agreements. An application of the gravity model
Modern economic growth is driven by international trade, and the preferential trade agreement is the primary fit-for-purpose mechanism for establishing, facilitating, and governing its flows. However, too little attention has been afforded to the differences in content and conditionality across trade agreements, which has led to an under-examined mischaracterisation of the design-flow relationship. Similarly, while the relationship between trade facilitation and trade is clear, the ways in which trade facilitation affects other areas of economic activity, with respect to preferential trade agreements, have received considerably less attention. In particular, in light of an increasingly globalised and interdependent trading system, the interplay between trade facilitation and foreign direct investment is of special importance.
Accordingly, this thesis explores the bilateral trade and investment effects of specific conditionality sets, as established within Preferential Trade Agreements (PTAs).
Chapter one utilises recent content condition-indexes for depth, flexibility, and constraints on flexibility, established by Dür et al. (2014) and Baccini et al. (2015), within a gravity framework to estimate the average treatment effect of trade agreement characteristics across bilateral trade relationships in the Association of Southeast Asian Nations (ASEAN) from 1948 to 2015. This chapter finds that the composition of a given ASEAN trade agreement's characteristic set has significantly determined the concomitant bilateral trade flows. Conditions determining the classification of a trade agreement's depth are positively associated with an increase in bilateral trade, representing the furthered removal of trade barriers and frictions facilitated by deeper trade agreements. Flexibility conditions, and constraints on flexibility conditions, are also identified as significant determinants of a given trade agreement's treatment effect on subsequent bilateral trade flows. Given the political nature of their inclusion (i.e., the need to address short-term domestic discontent), this influence is negative as regards trade flows. These results highlight the longer implementation and time-frame requirements for trade impediments to be removed in a market with higher domestic uncertainty.
Chapter two explores the incorporation of non-trade issue (NTI) conditions in PTAs. Such conditions are increasing both at the intensive and extensive margins. There is a concern from developing nations that this growth of NTI inclusions serves as a way for high-income (HI) nations to dictate the trade agenda, such that developing nations are subject to ‘principled protectionism’. There is evidence that NTI provisions are partly driven by protectionist motives but the effect on trade flows remains largely undiscussed. Utilising the Gravity Model for trade, I test Lechner’s (2016) comprehensive NTI dataset for 202 bilateral country pairs across a 32-year timeframe and find that, on average, NTIs are associated with an increase to bilateral trade. Primarily this boost can be associated with the market access that a PTA utilising NTIs facilitates. In addition, these results are aligned theoretically with the discussions on market harmonisation, shared values, and the erosion of artificial production advantages. Instead of inhibiting trade through burdensome cost, NTIs are acting to support a more stable production and trading environment, motivated by enhanced market access. Employing a novel classification to capture the power supremacy associated with shaping NTIs, this chapter highlights that the positive impact of NTIs is largely driven by the relationship between HI nations and middle-to-low-income (MTLI) counterparts.
Chapter three employs the gravity model, theoretically augmented for foreign direct investment (FDI), to estimate the effects of trade facilitation conditions, utilising indexes established by Neufeld (2014) and the bilateral FDI data curated by UNCTAD (2014). The resultant dataset covers 104 countries over a period of 12 years (2001–2012), containing 23,640 observations. The results highlight the bilateral-FDI-enhancing effects of trade facilitation conditions in the ASEAN context, aligning with the theoretical branch of the FDI-PTA literature that outlines how the ratification of a trade agreement results in improved economic prospects between partners (Medvedev, 2012), stemming from the interrelation between trade and investment within an improving regulatory environment. The results align with the expectation that an enhanced trade facilitation landscape (one in which formalities, procedures, information, and expectations around trade facilitation are conditioned for) will incentivise and attract FDI.
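The gravity framework used across these chapters can be sketched as below: bilateral flows regressed on economic mass, distance, and a dummy for a PTA provision. The data are synthetic, and real applications add exporter/importer fixed effects and typically estimate by Poisson pseudo-maximum likelihood (PPML); everything here is an illustrative assumption.

```python
# Hedged gravity-model sketch: log bilateral trade on log GDPs, log
# distance, and a PTA dummy, with a known synthetic treatment effect.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n = 500
df = pd.DataFrame({
    "log_gdp_i": rng.normal(26, 1, n),   # exporter economic mass
    "log_gdp_j": rng.normal(26, 1, n),   # importer economic mass
    "log_dist": rng.normal(8, 0.5, n),   # bilateral distance
    "pta": rng.integers(0, 2, n),        # PTA-provision dummy
})
df["log_trade"] = (
    1.0 * df["log_gdp_i"] + 1.0 * df["log_gdp_j"]
    - 1.0 * df["log_dist"] + 0.3 * df["pta"] + rng.normal(0, 0.5, n)
)

gravity = smf.ols("log_trade ~ log_gdp_i + log_gdp_j + log_dist + pta",
                  data=df).fit()
print(gravity.params["pta"])  # average treatment effect of the PTA dummy
```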
Hunting Wildlife in the Tropics and Subtropics
The hunting of wild animals for their meat has been a crucial activity in the evolution of humans. It continues to be an essential source of food and a generator of income for millions of Indigenous and rural communities worldwide. Conservationists rightly fear that excessive hunting of many animal species will cause their demise, as has already happened throughout the Anthropocene. Many species of large mammals and birds have been decimated or annihilated due to overhunting by humans. If such pressures continue, many other species will meet the same fate. Equally, if the use of wildlife resources by those who depend on them is to continue, sustainable practices must be implemented. These communities need to remain or become custodians of the wildlife resources within their lands, for their own well-being as well as for biodiversity in general. This title is also available via Open Access on Cambridge Core.
Scalable software and models for large-scale extracellular recordings
The brain represents information about the world through the electrical activity of
populations of neurons. By placing an electrode near a neuron that is firing (spiking), it
is possible to detect the resulting extracellular action potential (EAP) that is transmitted
down an axon to other neurons. In this way, it is possible to monitor the communication
of a group of neurons to uncover how they encode and transmit information. As the
number of recorded neurons continues to increase, however, so do the data processing
and analysis challenges. It is crucial that scalable software and analysis tools are developed
and made available to the neuroscience community to keep up with the large
amounts of data that are already being gathered.
This thesis is composed of three pieces of work which I develop in order to better
process and analyze large-scale extracellular recordings. My work spans all stages of extracellular
analysis from the processing of raw electrical recordings to the development
of statistical models to reveal underlying structure in neural population activity.
In the first work, I focus on developing software to improve the comparison and adoption
of different computational approaches for spike sorting. When analyzing neural
recordings, most researchers are interested in the spiking activity of individual neurons,
which must be extracted from the raw electrical traces through a process called
spike sorting. Much development has been directed towards improving the performance
and automation of spike sorting. This continuous development, while essential,
has contributed to an over-saturation of new, incompatible tools that hinders rigorous
benchmarking and complicates reproducible analysis. To address these limitations, I
develop SpikeInterface, an open-source, Python framework designed to unify preexisting
spike sorting technologies into a single toolkit and to facilitate straightforward
benchmarking of different approaches. With this framework, I demonstrate that modern,
automated spike sorters have low agreement when analyzing the same dataset, i.e.
they find different numbers of neurons with different activity profiles; this result holds
true for a variety of simulated and real datasets. Also, I demonstrate that utilizing a
consensus-based approach to spike sorting, where the outputs of multiple spike sorters
are combined, can dramatically reduce the number of falsely detected neurons.
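The agreement computation behind consensus spike sorting can be sketched minimally: spike trains from two sorters are matched within a small time tolerance and an agreement score is computed. Spike times below are synthetic assumptions, and SpikeInterface's actual comparison machinery is considerably more complete.

```python
# Minimal spike-train agreement sketch: greedy matching within a tolerance,
# scored as matched / (n_a + n_b - matched), i.e. matches over the union.
import numpy as np

def agreement_score(times_a, times_b, tol=0.001):
    """Greedily match spikes in times_a to spikes in times_b within tol."""
    unmatched_b = list(times_b)
    matched = 0
    for t in times_a:
        hits = [s for s in unmatched_b if abs(s - t) <= tol]
        if hits:
            unmatched_b.remove(hits[0])   # each b spike matched at most once
            matched += 1
    return matched / (len(times_a) + len(times_b) - matched)

rng = np.random.default_rng(5)
true_spikes = np.sort(rng.uniform(0, 10, 100))
sorter_a = true_spikes + rng.normal(0, 0.0002, 100)  # small timing jitter
sorter_b = np.sort(np.concatenate(
    [true_spikes, rng.uniform(0, 10, 20)]))          # plus 20 false positives

score = agreement_score(sorter_a, sorter_b)
print(round(score, 2))
```

A consensus sorter keeps only units whose spike trains achieve high agreement across sorters, which is what suppresses falsely detected neurons.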
In the second work, I focus on developing an unsupervised machine learning approach
for determining the source location of individually detected spikes that are
recorded by high-density, microelectrode arrays. By localizing the source of individual
spikes, my method is able to determine the approximate position of the recorded neurons
in relation to the microelectrode array. To allow my model to work with large-scale
datasets, I utilize deep neural networks, a family of machine learning algorithms that
can be trained to approximate complicated functions in a scalable fashion. I evaluate
my method on both simulated and real extracellular datasets, demonstrating that it is
more accurate than other commonly used methods. Also, I show that location estimates
for individual spikes can be utilized to improve the efficiency and accuracy of spike
sorting. After training, my method allows for localization of one million spikes in approximately
37 seconds on a TITAN X GPU, enabling real-time analysis of massive
extracellular datasets.
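One simple localisation baseline that the learned method improves upon is the centre of mass of peak amplitudes across electrodes, sketched below. The electrode layout and amplitudes are synthetic assumptions.

```python
# Centre-of-mass spike localisation: weight each electrode's 2D position by
# the peak spike amplitude it recorded (larger amplitude = closer neuron).
import numpy as np

electrode_xy = np.array([[0.0, 0.0], [20.0, 0.0],
                         [0.0, 20.0], [20.0, 20.0]])  # positions, micrometres
amplitudes = np.array([80.0, 40.0, 40.0, 20.0])       # absolute peak, uV

weights = amplitudes / amplitudes.sum()
estimated_xy = weights @ electrode_xy
print(estimated_xy)  # pulled towards the highest-amplitude electrode at (0, 0)
```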
In my third and final presented work, I focus on developing an unsupervised machine
learning model that can uncover patterns of activity from neural populations
associated with a behaviour being performed. Specifically, I introduce Targeted Neural
Dynamical Modelling (TNDM), a statistical model that jointly models the neural activity
and any external behavioural variables. TNDM decomposes neural dynamics (i.e.
temporal activity patterns) into behaviourally relevant and behaviourally irrelevant dynamics;
the behaviourally relevant dynamics constitute all activity patterns required
to generate the behaviour of interest while behaviourally irrelevant dynamics may be
completely unrelated (e.g. other behavioural or brain states), or even related to behaviour
execution (e.g. dynamics that are associated with behaviour generally but are not
task specific). Again, I implement TNDM using a deep neural network to improve its
scalability and expressivity. On synthetic data and on real recordings from the premotor
(PMd) and primary motor cortex (M1) of a monkey performing a center-out reaching
task, I show that TNDM is able to extract low-dimensional neural dynamics that are
highly predictive of behaviour without sacrificing its fit to the neural data.
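A much-simplified, purely linear sketch of the decomposition idea behind TNDM follows: latent dimensions are extracted from neural activity, and the behaviourally relevant direction is identified by regressing behaviour on the latent space. TNDM itself learns this decomposition jointly with deep networks; all data here are synthetic assumptions.

```python
# Linear sketch: 3 latent dimensions drive 30 neurons; only one latent
# drives behaviour. PCA recovers the latent space, and a regression on the
# latents recovers the behaviourally relevant component.
import numpy as np

rng = np.random.default_rng(6)
T, n_neurons = 500, 30
relevant = np.sin(np.linspace(0, 20, T))[:, None]   # drives behaviour
irrelevant = rng.normal(size=(T, 2))                # does not
latents = np.hstack([relevant, irrelevant])
neural = (latents @ rng.normal(size=(3, n_neurons))
          + rng.normal(0, 0.1, (T, n_neurons)))
behaviour = relevant[:, 0] * 2.0 + rng.normal(0, 0.05, T)

# Extract a 3D latent space by PCA, then regress behaviour on the latents
neural_c = neural - neural.mean(0)
_, _, Vt = np.linalg.svd(neural_c, full_matrices=False)
Z = neural_c @ Vt[:3].T
w, *_ = np.linalg.lstsq(Z, behaviour - behaviour.mean(), rcond=None)
pred = Z @ w + behaviour.mean()
r2 = 1 - np.sum((behaviour - pred) ** 2) / np.sum(
    (behaviour - behaviour.mean()) ** 2)
print(round(r2, 3))  # the relevant subspace explains most behavioural variance
```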
A universal approach to phenomenological compartment models of unit operations
A compartment model describes the transmission of materials and/or energies through a unit operation as a network of flow-connected sub-volumes. Each sub-volume is a well-mixed compartment, formed based on the identification of negligible gradients in the system properties of interest. Ordinary differential equations (ODEs) describe the temporal phenomenological and flow effects imposed on the variables of the system (species mass and compartment enthalpy). Together with the associated initial values of the system, the variable ODEs are numerically solved over time. Compartment modelling is widely used in chemical engineering as it provides a balance between the resolution of flow and phenomena and solution time.
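The ODE structure described above can be sketched with two flow-connected compartments; the volumes, flow rate, and inlet concentration are illustrative assumptions, not CompArt's input language or equation set.

```python
# Two well-mixed compartments in series: each compartment's species mass
# balance is an ODE (inflow minus outflow), solved over time from initial
# values. A stiff-capable solver (BDF) is used, as the text recommends.
import numpy as np
from scipy.integrate import solve_ivp

Q = 1.0             # volumetric flow rate, m^3/s
V1, V2 = 2.0, 4.0   # compartment volumes, m^3
c_in = 1.0          # inlet species concentration, kg/m^3

def balances(t, c):
    c1, c2 = c
    dc1 = Q / V1 * (c_in - c1)   # inflow - outflow for compartment 1
    dc2 = Q / V2 * (c1 - c2)     # compartment 1 feeds compartment 2
    return [dc1, dc2]

sol = solve_ivp(balances, (0.0, 50.0), [0.0, 0.0], method="BDF", rtol=1e-8)
print(sol.y[:, -1])  # both compartments approach the inlet concentration
```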
Despite the profusion of compartment models in the literature, both model development and solution remain bespoke for this approach. Models are either hard-coded ODEs or built through the improvised use of available non-domain-specific tools; the former is especially error prone, and the latter restricts model development to the capability of the tool used. For full modelling flexibility, modellers are required to have knowledge of software design for implementing and solving ODEs with many variables.
CompArt, a universal compartment modelling tool for unit operations, has been developed in this work. It is formed of (i) a universal input language used to describe unit operation compartment models, (ii) an interpretation algorithm for the conversion of the model description into ODEs for solving (utilising a universal compartment modelling equation set developed in this work), and (iii) wrappers around numerical solvers of choice targeting stiff non-linear problems. This addition to the field circumvents the need for modellers to have specialised skills to utilise this modelling approach, allowing their domain of model development to take priority.
The universal compartment modelling system, CompArt, is validated against a benchmark set of 20 models ranging in structural make-up and applied phenomena.