
    Managing Mortgage Credit Risk: What Went Wrong With the Subprime and Alt-A Markets?

    The purpose of this study is two-fold: first, to explain the demise of the subprime and Alt-A mortgage markets in the U.S. from the viewpoint of measuring and managing mortgage credit risk; and second, to discuss several policy lessons that can be learned from the market meltdown. To that end, three tiers of mortgage credit models are elaborated: the scoring (or risk rank-ordering), risk-based pricing, and "sizing" (the analytics used in determining subordination levels of credit-sensitive mortgage-backed security (MBS) deals) models. Using these as a conceptual underpinning, empirical evidence is surveyed to document key contributing factors to the market demise. Those identified include the unavailability of reliable mortgage performance data, the lack of theory and industry best practices for performing simulation-based mortgage risk assessments, the complex and arcane structures of mortgage-backed securities, and information asymmetry among the parties involved in the security transactions. The overall conclusion is that participants in these market segments surpassed their risk management capabilities in globalizing funding for subprime and Alt-A mortgages. The policy lessons emphasized are the importance of proper infrastructure for risk assessment and risk-based pricing, as well as prudent and transparent MBS products along with periodic information disclosure.

    Keywords: Subprime mortgage; Mortgage-backed securities; Mortgage default; Credit risk management

    Non-Asymptotic Convergence Analysis of Inexact Gradient Methods for Machine Learning Without Strong Convexity

    Many recent applications in machine learning and data fitting call for the algorithmic solution of structured smooth convex optimization problems. Although the gradient descent method is a natural choice for this task, it requires exact gradient computations and hence can be inefficient when the problem size is large or the gradient is difficult to evaluate. Therefore, there has been much interest in inexact gradient methods (IGMs), in which an efficiently computable approximate gradient is used to perform the update in each iteration. Currently, non-asymptotic linear convergence results for IGMs are typically established under the assumption that the objective function is strongly convex, which is not satisfied in many applications of interest, while linear convergence results that do not require the strong convexity assumption are usually asymptotic in nature. In this paper, we combine the best of these two types of results and establish, under the standard assumption that the gradient approximation errors decrease linearly to zero, the non-asymptotic linear convergence of IGMs when applied to a class of structured convex optimization problems. Such a class covers settings where the objective function is not necessarily strongly convex and includes the least squares and logistic regression problems. We believe that our techniques will find further applications in the non-asymptotic convergence analysis of other first-order methods.
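    The update rule the abstract describes can be illustrated on one of the problems it names, least squares. The sketch below is a minimal, hypothetical instance, not code from the paper: the data matrix, step size, and geometric error schedule are illustrative choices. Each iteration uses the exact gradient perturbed by a noise vector whose norm shrinks linearly (geometrically) to zero, matching the stated assumption on the approximation errors.

    ```python
    import numpy as np

    # Inexact gradient method (IGM) on min_x 0.5 * ||A x - b||^2.
    # The true gradient A^T (A x - b) is perturbed by an error term whose
    # norm is at most 0.5^k at iteration k, so the errors decrease
    # linearly to zero as assumed in the convergence analysis.
    rng = np.random.default_rng(0)
    A = rng.standard_normal((20, 5))
    b = rng.standard_normal(20)

    L = np.linalg.norm(A.T @ A, 2)  # Lipschitz constant of the gradient
    step = 1.0 / L
    x = np.zeros(5)

    for k in range(200):
        grad = A.T @ (A @ x - b)                 # exact gradient
        noise = rng.standard_normal(5)
        noise *= (0.5 ** k) / max(np.linalg.norm(noise), 1e-12)
        x = x - step * (grad + noise)            # inexact gradient step

    # Distance to the least-squares solution should be very small by now.
    x_star, *_ = np.linalg.lstsq(A, b, rcond=None)
    err = np.linalg.norm(x - x_star)
    ```

    Because the injected errors decay geometrically faster than the iterates converge, the method retains a linear convergence rate despite never using the exact gradient.
    
    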