
    Bank deposits, liquidity management and macroeconomy

    In this thesis, we empirically investigate the role of deposits in bank liquidity management and the macroeconomy. The thesis comprises three main chapters. The first chapter investigates how banks managed their liquidity during the COVID-19 pandemic. We evaluate three potential channels through which banks manage their liquidity: the supply side through the exercise of market discipline, the demand side through internal capital markets, and the balance-sheet channel through unused credit commitments and wholesale funding. We provide novel empirical evidence that both market discipline and internal capital markets were absent during the pandemic. Furthermore, we show that banks exposed to higher liquidity risk experienced larger deposit outflows and increased their use of the Fed's liquidity facilities during the pandemic. The second chapter examines the operation of the US tri-party repo market by investigating the effect of dealer riskiness on repo volume and rates during the post-crisis period (2010:Q2-2019:Q4) and the first quarter of the pandemic (2020:Q1). We find that the market's perception of a dealer's risk has a negative impact on the volume of reverse repos the dealer undertakes. The second chapter also investigates the relationship between a bank's Liquidity Mismatch Index (LMI) and the repo volumes and rates it undertakes. We provide empirical evidence that the LMI has good explanatory power for both banks' repo volumes and rates. Finally, the third chapter investigates the role of heterogeneity in deposit rates in predicting the severity of crises and output. We show that an increase in this heterogeneity has strong predictive power for future economic downturns. More importantly, we find that an increase in deposit-rate heterogeneity, coupled with fragile financial conditions, leads to a more severe crisis.
In addition, we show that changes in the effective federal funds rate and in deposit rates have a significant negative impact on households' consumption and income, and that this effect is heterogeneous across households according to their balance sheet positions. This thesis contributes to the ongoing debates on bank liquidity management and the deposits channel of monetary policy transmission. The findings have important policy implications, showing the unique role of bank deposits in bank liquidity management and the macroeconomy. Our main findings suggest that policymakers should be aware of the importance of the liquidity facilities provided by the Federal Reserve and of the repo market, especially during liquidity-stressed periods, as they play an essential role in funding banks' liquidity. Moreover, our findings on households suggest that policymakers should be aware that the effects of interest rate changes on households' consumption are heterogeneous across households based on their balance sheet positions.

    Topics in nonlinear filtering

    In this dissertation, we study the implementation of nonlinear filtering algorithms that can be used in real-time applications. To implement a filtering algorithm, one has to discretize the state space, the observation space, and the time interval. If one discretizes the observation space first, the corresponding equation for the optimal filter is considerably less complicated than in the diffusion case. This is the starting point of our method. First, we focus on the development of a general procedure to solve the filtering problem for Markov semimartingale state processes and jump observation processes. We rewrite the resulting nonlinear equation for the optimal filter as two equations: one describing the evolution of the filter between observation jump times, the other updating the filter at the jump times. We then drop the nonlinear terms in both equations and show that the resulting linear equations have at least one weak solution, which is a finite positive measure. It turns out that normalizing this solution yields the optimal nonlinear filter, which is the unique solution of the filtering problem. Second, we consider the discretization of the state space, which leads to filtering equations that combine ordinary differential equations with linear updating operations. Here we study the problem of dimension reduction and lower-dimensional realizations, in order to reduce the number of variables in the equations and thereby the computation time. We study both exact and approximate dimension reduction. We provide some necessary and sufficient conditions for the problem to have lower-dimensional realizations, using three different approaches: invariant linear subspaces, invariant integral submanifolds, and exact criteria. We have also developed an efficient and applicable procedure for approximate dimension reduction, and we obtain conditions under which the approximate optimal filter converges to the optimal filter of the problem under consideration. Numerical simulations show that our procedure for solving the nonlinear filtering problem and reducing the dimension of the filter equations is efficient and applicable.
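The two-step structure described in the abstract (linear evolution of the unnormalized filter between observation jump times, a linear update at each jump, normalization at the end) can be sketched for a discretized state space. The two-state chain, generator, jump intensities, and observation times below are illustrative assumptions, not values from the dissertation.

```python
import numpy as np

# Hypothetical two-state hidden Markov chain; all parameters are illustrative.
Q = np.array([[-1.0, 1.0],
              [ 2.0, -2.0]])        # generator of the hidden chain
lam = np.array([0.5, 3.0])          # observation jump intensity in each state
Lam = np.diag(lam)

def evolve_between_jumps(rho, dt, n_steps=1000):
    """Euler scheme for the linear equation d rho / dt = rho (Q - Lam)."""
    h = dt / n_steps
    A = Q - Lam
    for _ in range(n_steps):
        rho = rho + h * (rho @ A)
    return rho

def update_at_jump(rho):
    """Linear update of the unnormalized filter at an observation jump time."""
    return rho @ Lam

# A hypothetical observation path: jumps at t = 0.7 and t = 1.5, horizon T = 2.0.
rho = np.array([0.5, 0.5])          # prior distribution of the hidden state
t, jumps, T = 0.0, [0.7, 1.5], 2.0
for tj in jumps:
    rho = evolve_between_jumps(rho, tj - t)
    rho = update_at_jump(rho)
    t = tj
rho = evolve_between_jumps(rho, T - t)
pi = rho / rho.sum()                # normalization yields the (approximate) optimal filter
print(pi)
```

Both sub-steps are linear, which is exactly what makes the weak-solution argument and the final normalization step in the dissertation tractable.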

    Impact of ownership type and firm size on organizational culture and on the organizational culture-effectiveness linkage

    This paper aims to extend the extant (primarily Western) organizational culture literature to emerging economies by explicitly incorporating two key contextual variables, ownership type and firm size, into the organizational culture model. Based on the theoretical model developed by Denison and his colleagues, we examine the impact of ownership type and firm size on organizational culture, as well as the moderating effect of these two contextual variables on the linkage between organizational culture and firm effectiveness. Using survey data from foreign-invested and state-owned firms in China, we find that ownership type and firm size have a significant influence on organizational culture. We also find that different ownership types and firm sizes result in different effects of organizational culture on performance.

    ROBUSTNESS OF ASSET LOCATION DECISION CONSIDERATIONS

    Asset location, which means allocating different types of assets to accounts with different tax treatments, is one of the investment decisions investors should consider. Conventional wisdom holds that it is preferable to hold bonds in taxable accounts and stocks in tax-deferred accounts (TDAs), but recent studies reveal that this is not always true (Reichenstein, Hora, and Jennings, 2012). Our study further investigates the robustness of the results in the aforementioned paper. Researchers believe that in most cases it is preferable to hold bonds in TDAs and stocks in taxable accounts. However, sensitivity analysis shows that the return and risk profiles of the assets, the risk tolerance of investors, and the tax rates applied to different assets can reverse this preference within reasonable ranges. The discussion also uses numerical examples to illustrate optimal weights under different assumptions, and historical data are used to demonstrate the plausibility of the numerical examples.
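The kind of numerical comparison such studies rely on can be sketched as follows. All returns, tax rates, and the 20-year horizon are illustrative assumptions, not figures from the paper: bonds pay interest taxed annually at the ordinary rate, stocks defer gains to a one-time capital-gains tax, and TDA withdrawals are taxed at the ordinary rate.

```python
# Hypothetical parameters for a two-account asset-location comparison.
n = 20                      # investment horizon in years
r_bond, r_stock = 0.04, 0.08
t_ord, t_cg = 0.35, 0.15    # ordinary-income and capital-gains tax rates

def taxable_bond(v):
    # Interest is taxed every year at the ordinary rate.
    return v * (1 + r_bond * (1 - t_ord)) ** n

def taxable_stock(v):
    # Gains are deferred and taxed once at the capital-gains rate.
    g = v * (1 + r_stock) ** n
    return g - (g - v) * t_cg

def tda(v, r):
    # Tax-deferred account: full pre-tax growth, withdrawal taxed at the ordinary rate.
    return v * (1 + r) ** n * (1 - t_ord)

# $1 of pre-tax money in each account under the two possible locations:
conventional = taxable_bond(1.0) + tda(1.0, r_stock)   # bonds taxable, stocks in TDA
researcher   = taxable_stock(1.0) + tda(1.0, r_bond)   # stocks taxable, bonds in TDA
print(conventional, researcher)
```

Under these particular inputs the "stocks taxable, bonds in TDA" location comes out ahead; moving the return, risk, or tax parameters within reasonable ranges can flip the ordering, which is precisely the sensitivity the study probes.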

    LW-ISP: A Lightweight Model with ISP and Deep Learning

    Deep learning (DL)-based methods for low-level tasks have many advantages over the traditional camera pipeline in terms of hardware prospects, error accumulation, and imaging effects. Recently, applications of deep learning to replace the image signal processing (ISP) pipeline have appeared one after another; however, there is still a long way to go before real-world deployment. In this paper, we show that a learning-based method can achieve real-time, high-performance processing in the ISP pipeline. We propose LW-ISP, a novel architecture designed to implicitly learn the image mapping from RAW data to RGB images. Based on the U-Net architecture, we propose a fine-grained attention module and a plug-and-play upsampling block suitable for low-level tasks. In particular, we design a heterogeneous distillation algorithm to distill the implicit features and reconstruction information of the clean image, so as to guide the learning of the student model. Our experiments demonstrate that LW-ISP achieves a 0.38 dB improvement in PSNR over the previous best method, while the model parameters and computation are reduced by 23 times and 81 times, respectively. Inference is accelerated by at least 15 times. Without bells and whistles, LW-ISP achieves quite competitive results in ISP subtasks including image denoising and enhancement. Comment: 16 pages, accepted as a conference paper at BMVC 202

    Be Your Own Teacher: Improve the Performance of Convolutional Neural Networks via Self Distillation

    Convolutional neural networks have been widely deployed in various application scenarios. To extend the applications' boundaries to accuracy-critical domains, researchers have investigated approaches to boost accuracy through deeper or wider network structures, which bring an exponential increase in computational and storage cost and delay the response time. In this paper, we propose a general training framework named self distillation, which notably enhances the performance (accuracy) of convolutional neural networks by shrinking the size of the network rather than enlarging it. Different from traditional knowledge distillation, a knowledge transfer methodology between networks that forces student neural networks to approximate the softmax-layer outputs of pre-trained teacher networks, the proposed self distillation framework distills knowledge within the network itself. The network is first divided into several sections; the knowledge in the deeper portions of the network is then squeezed into the shallower ones. Experiments further prove the generalization ability of the proposed self distillation framework: the average accuracy enhancement is 2.65%, ranging from a minimum of 0.61% on ResNeXt to a maximum of 4.07% on VGG19. In addition, it provides the flexibility of depth-wise scalable inference on resource-limited edge devices. Our code will be released on GitHub soon. Comment: 10 pages
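A minimal numerical sketch of the loss that drives this scheme, assuming the common formulation in which the deepest exit's temperature-softened softmax teaches the shallower exits. The logits, temperature, and weighting below are illustrative, and the paper's additional feature-level distillation is omitted here.

```python
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q):
    # KL divergence per sample between distributions p and q.
    return np.sum(p * (np.log(p) - np.log(q)), axis=-1)

# Hypothetical logits from three exit classifiers attached to successively
# deeper sections of the same network (batch of 2, 4 classes).
shallow = np.array([[1.0, 0.2, 0.1, 0.0], [0.1, 0.9, 0.3, 0.2]])
middle  = np.array([[2.0, 0.1, 0.0, 0.0], [0.0, 1.8, 0.2, 0.1]])
deepest = np.array([[3.0, 0.0, 0.0, 0.1], [0.0, 2.9, 0.1, 0.0]])
labels  = np.array([0, 1])
T = 3.0                                   # distillation temperature

teacher = softmax(deepest, T)             # the deepest exit acts as its own teacher
def exit_loss(logits, alpha=0.5):
    # Cross-entropy on the hard labels plus KL toward the deepest exit's soft targets.
    ce = -np.log(softmax(logits)[np.arange(len(labels)), labels]).mean()
    distill = (T * T) * kl(teacher, softmax(logits, T)).mean()
    return (1 - alpha) * ce + alpha * distill

loss = exit_loss(shallow) + exit_loss(middle)   # shallower exits learn from the deepest
print(loss)
```

Because every exit shares the backbone, the same trained network can later stop inference at any exit, which is the depth-wise scalability the abstract mentions.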

    Chiplet Actuary: A Quantitative Cost Model and Multi-Chiplet Architecture Exploration

    Multi-chip integration is widely recognized as the extension of Moore's Law. Cost saving is a frequently mentioned advantage, but previous works rarely present quantitative demonstrations of the cost superiority of multi-chip integration over a monolithic SoC. In this paper, we build a quantitative cost model and put forward an analytical method for multi-chip systems, based on three typical multi-chip integration technologies, to analyze the cost benefits of yield improvement, chiplet and package reuse, and heterogeneity. We re-examine the actual cost of multi-chip systems from various perspectives and show how to reduce the total cost of a VLSI system through an appropriate multi-chiplet architecture. Comment: Accepted by and to be presented at DAC 202
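The yield-improvement component of such a cost model can be sketched with the standard negative-binomial die-yield formula. The wafer cost, defect density, clustering parameter, and packaging overhead below are assumed values for illustration, not figures from the paper.

```python
import math

# Illustrative parameters: 300 mm wafer, negative-binomial yield model.
WAFER_COST = 10000.0     # $ per wafer (assumed)
D0, ALPHA  = 0.001, 3.0  # defect density per mm^2 and clustering parameter (assumed)
DIAMETER   = 300.0       # wafer diameter in mm

def dies_per_wafer(area):
    # Common approximation: wafer area over die area, minus an edge-loss term.
    return (math.pi * (DIAMETER / 2) ** 2 / area
            - math.pi * DIAMETER / math.sqrt(2 * area))

def die_yield(area):
    # Negative-binomial yield model: Y = (1 + A * D0 / alpha)^(-alpha).
    return (1 + area * D0 / ALPHA) ** (-ALPHA)

def good_die_cost(area):
    return WAFER_COST / (dies_per_wafer(area) * die_yield(area))

monolithic = good_die_cost(800.0)             # one 800 mm^2 SoC
chiplets   = 4 * good_die_cost(200.0) * 1.10  # four 200 mm^2 dies, +10% packaging (assumed)
print(monolithic, chiplets)
```

Smaller dies yield far better, so even with the assumed packaging overhead the four-chiplet split is much cheaper here; a full model along the paper's lines would add package yield, reuse amortization, and the costs of the specific integration technology.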