
    Term Default, Balloon Risk, and Credit Risk in Commercial Mortgages

    Term default and balloon risk play an interactive role in the pricing of credit risk in commercial mortgages. Most commercial mortgage pricing studies assume a borrower's default decision is based solely on the property value; the mortgage valuation model here also incorporates a property income trigger. The model considers both the risk of default during the term of the loan and the risk of loss at maturity (balloon risk). Monte Carlo simulation analyses reveal that pricing models based solely on property value overestimate the probability of term default and the resulting credit risk premium. Adding a property income default trigger without considering balloon risk, however, underestimates the overall credit risk premium. In essence, a double-trigger default model that incorporates balloon risk is critical for accurate assessment of the credit risk in commercial mortgages.
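    As a minimal illustration of the double-trigger mechanism this abstract describes, the sketch below simulates correlated property value and income paths, counting a term default only when both triggers are breached and a balloon default when the property is worth less than the balloon balance at maturity. All parameters (LTV, drifts, volatilities, correlation) are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

n_paths, n_years = 10_000, 10
loan_balance = 0.75          # hypothetical 75% LTV on a property worth 1.0
debt_service = 0.06          # annual payment as a fraction of initial value
mu_v, sigma_v = 0.02, 0.15   # property value drift / volatility (assumed)
mu_i, sigma_i = 0.01, 0.10   # property income drift / volatility (assumed)
rho = 0.6                    # value-income correlation (assumed)

term_default = np.zeros(n_paths, dtype=bool)
balloon_default = np.zeros(n_paths, dtype=bool)

for p in range(n_paths):
    value, income = 1.0, 0.08
    for t in range(n_years):
        z1, z2 = rng.standard_normal(2)
        z2 = rho * z1 + np.sqrt(1 - rho**2) * z2
        value *= np.exp(mu_v - 0.5 * sigma_v**2 + sigma_v * z1)
        income *= np.exp(mu_i - 0.5 * sigma_i**2 + sigma_i * z2)
        # Double trigger: default requires BOTH negative equity
        # AND income insufficient to cover debt service.
        if value < loan_balance and income < debt_service:
            term_default[p] = True
            break
    else:
        # Balloon risk: at maturity the borrower cannot refinance
        # if the property is worth less than the balloon balance.
        balloon_default[p] = value < loan_balance

print(f"term default rate:    {term_default.mean():.3f}")
print(f"balloon default rate: {balloon_default.mean():.3f}")
```

    A value-only model would trigger a term default on the first condition alone, which is why it overstates term-default frequency relative to the double-trigger rule sketched here.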

    Who Bears the Balloon Risk in Commercial MBS?

    Much of the literature on the pricing of commercial mortgages underlying commercial mortgage-backed securities pools focuses on the effect of term default (default during the term of the loan), and ignores the possibility of balloon risk, the borrower's inability to pay off the mortgage at maturity through refinancing or property sale. A contingent-claims mortgage pricing model that includes two default triggers—a cash flow trigger and an asset value trigger—may be used to assess the effect of balloon risk on the pricing of CMBS tranches. Simulations of cash flows for individual loans in a CMBS framework reveal how individual tranches are affected by balloon risk. Balloon risk is low at the whole-loan level, but under a number of scenarios total credit risk and balloon risk creep into investment-grade CMBS tranches and significantly impact their valuation.
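    The mechanism by which pool-level losses reach investment-grade tranches is the subordination waterfall: losses are absorbed bottom-up, junior classes first. The toy allocation below uses hypothetical tranche sizes and a hypothetical loss figure purely to illustrate how a severe balloon-loss scenario can eat through the junior classes and touch an investment-grade tranche.

```python
# Hypothetical senior/subordinate structure (sizes are illustrative,
# not from the paper): names are ratings, numbers are tranche sizes.
tranches = [("AAA", 70.0), ("BBB", 20.0), ("B", 7.0), ("equity", 3.0)]

def allocate_losses(pool_loss):
    """Allocate a pool-level loss bottom-up across the tranches."""
    remaining, hit = pool_loss, {}
    # Losses are absorbed from the most junior tranche upward.
    for name, size in reversed(tranches):
        absorbed = min(size, remaining)
        hit[name] = absorbed
        remaining -= absorbed
    return hit

# A severe loss scenario (e.g. widespread balloon default) wipes out
# the junior classes and reaches into the investment-grade BBB tranche.
print(allocate_losses(12.0))
# {'equity': 3.0, 'B': 7.0, 'BBB': 2.0, 'AAA': 0.0}
```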

    Testing multiple hypotheses with skewed alternatives

    In many practical multiple hypothesis testing problems, the alternatives cannot be expected to be symmetrically distributed. If it is known a priori that the distributions of the alternatives are skewed, we show that this information yields procedures with higher power than those based on symmetric alternatives. We propose a Bayesian decision theoretic rule for multiple directional hypothesis testing with skewed alternatives, under a constraint on the mixed directional false discovery rate. We compare the proposed rule with the frequentist rule of Benjamini and Yekutieli (2005) using simulations, and apply our method to a well-studied HIV dataset.
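    A toy version of the directional-calling idea is sketched below: z-statistics are drawn from a skewed three-component mixture (positive effects deliberately more common than negative ones), and a direction is declared only when its posterior probability is high. The mixture weights, effect sizes, and the 0.9 posterior threshold are illustrative stand-ins (the threshold plays the role of the mixed directional FDR constraint); none of them come from the paper, where the mixture would be estimated rather than known.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

m = 5000
pi0, pi_pos, pi_neg = 0.8, 0.15, 0.05   # skew: positive effects dominate
truth = rng.choice([0, 1, -1], size=m, p=[pi0, pi_pos, pi_neg])
mu = np.where(truth == 1, 3.0, np.where(truth == -1, -3.0, 0.0))
z = rng.normal(mu, 1.0)

# Posterior probability of each component given z (mixture known here;
# in practice the weights and effect sizes would be estimated).
f0 = pi0 * stats.norm.pdf(z, 0.0, 1.0)
fp = pi_pos * stats.norm.pdf(z, 3.0, 1.0)
fn = pi_neg * stats.norm.pdf(z, -3.0, 1.0)
total = f0 + fp + fn
p_pos, p_neg = fp / total, fn / total

# Declare a direction only when its posterior probability is high;
# errors count both false discoveries and wrong-sign calls.
calls = np.where(p_pos > 0.9, 1, np.where(p_neg > 0.9, -1, 0))
directional_errors = (calls != 0) & (calls != truth)
mdFDR = directional_errors.sum() / max(np.count_nonzero(calls), 1)
print(f"discoveries: {np.count_nonzero(calls)}, empirical mdFDR: {mdFDR:.3f}")
```

    Because the skewed prior puts more weight on positive effects, a given positive z-statistic reaches the posterior threshold sooner than it would under a symmetric prior, which is the source of the power gain the abstract describes.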

    Borrower Self-Selection, Underwriting Costs, and Subprime Mortgage Credit Supply

    In the U.S., households participate in two very different types of credit markets. Personal lending is characterized by continuous risk-based pricing, in which lenders offer households a continuous distribution of borrowing possibilities based on estimates of their creditworthiness. This contrasts sharply with mortgage markets, where lenders specialize in specific risk categories of borrowers and mortgage supply is stepwise linear. The contrast between continuous lending for personal loans and discrete lending by specialized lenders for mortgage credit has led to concerns regarding the efficiency and equity of mortgage lending. This paper sheds both theoretical and empirical light on the differences between the two credit markets. The theory section demonstrates why, in a perfectly competitive credit market where all lenders have the same underwriting technology, mortgage credit supply curves are stepwise linear and lenders specialize in prime or subprime lending. The empirical section then provides evidence that borrowers are effectively sorted by the market based on risk characteristics.

    A Network Coding Approach to Loss Tomography

    Network tomography aims at inferring internal network characteristics based on measurements at the edge of the network. In loss tomography, in particular, the characteristic of interest is the loss rate of individual links, and multicast and/or unicast end-to-end probes are typically used. Independently, recent advances in network coding have shown that there are advantages to allowing intermediate nodes to process and combine packets, in addition to simply forwarding them. In this paper, we study the problem of loss tomography in networks with network coding capabilities. We design a framework for estimating link loss rates, which leverages network coding capabilities, and we show that it improves several aspects of tomography, including the identifiability of links, the trade-off between estimation accuracy and bandwidth efficiency, and the complexity of probe path selection. We discuss the cases of inferring link loss rates in a tree topology and in a general topology. In the latter case, the benefits of our approach are even more pronounced compared to standard techniques, but we also face novel challenges, such as dealing with cycles and multiple paths between sources and receivers. Overall, this work makes the connection between active network tomography and network coding.
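    To make the identifiability benefit concrete, here is a toy simulation (not from the paper) of a two-source reverse tree S1 -> M <- S2, M -> R, where the middle node forwards the XOR of whatever probes it receives. The single receiver then observes one of three coded outcomes (x1⊕x2, x1 alone, x2 alone) or nothing, and the three outcome frequencies suffice to recover all three per-link delivery probabilities, which are assumed values here.

```python
import random

random.seed(2)
a1, a2, a3 = 0.9, 0.8, 0.95         # true per-link delivery probabilities
n = 200_000
counts = {"x1^x2": 0, "x1": 0, "x2": 0, "lost": 0}

for _ in range(n):
    got1 = random.random() < a1     # probe from S1 survives link S1->M
    got2 = random.random() < a2     # probe from S2 survives link S2->M
    # M forwards the XOR of whatever arrived; the result survives M->R
    # with probability a3.
    if (got1 or got2) and random.random() < a3:
        counts["x1^x2" if (got1 and got2) else ("x1" if got1 else "x2")] += 1
    else:
        counts["lost"] += 1

p12, p1, p2 = (counts[k] / n for k in ("x1^x2", "x1", "x2"))
# The coded outcomes make every link identifiable from edge
# observations alone:
#   p12 = a1*a2*a3,  p1 = a1*(1-a2)*a3,  p2 = (1-a1)*a2*a3
a1_hat = p12 / (p12 + p2)
a2_hat = p12 / (p12 + p1)
a3_hat = p12 / (a1_hat * a2_hat)
print(a1_hat, a2_hat, a3_hat)       # should be near 0.9, 0.8, 0.95
```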

    CapEst: A Measurement-based Approach to Estimating Link Capacity in Wireless Networks

    Estimating link capacity in a wireless network is a complex task because the available capacity at a link is a function not only of the current arrival rate at that link, but also of the arrival rates at interfering links and of the nature of the interference between them. Models that characterize this dependence accurately are too computationally complex to be useful, while tractable models lack accuracy. Further, existing models have a high implementation overhead and make restrictive assumptions, which makes them inapplicable to real networks. In this paper, we propose CapEst, a general, simple yet accurate, measurement-based approach to estimating link capacity in a wireless network. To remain computationally light, CapEst allows inaccuracy in estimation; however, using measurements, it corrects this inaccuracy iteratively and converges to the correct estimate. Our evaluation shows that CapEst always converged to within 5% of the correct value in fewer than 18 iterations. CapEst is model-independent, hence applicable to any MAC/PHY layer, and works with auto-rate adaptation. Moreover, it has a low implementation overhead, can be used with any application that requires an estimate of residual capacity on a wireless link, and can be implemented entirely at the network layer without any support from the underlying chipset.
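    The iterative correct-and-converge idea can be sketched as follows. The true_capacity() function below is a hypothetical stand-in for real over-the-air measurements (it is not CapEst's model), and the 5% stopping tolerance mirrors the accuracy figure quoted in the abstract.

```python
def true_capacity(offered_rate):
    # Toy interference model: the achievable rate degrades as the
    # offered load rises (an illustrative assumption only).
    return 10.0 / (1.0 + 0.05 * offered_rate)

estimate = 1.0                           # deliberately poor initial guess
for iteration in range(1, 50):
    measured = true_capacity(estimate)   # "measurement" at the current point
    if abs(measured - estimate) / measured < 0.05:
        break                            # estimate within 5% of measurement
    estimate = measured                  # correct toward the measurement
print(iteration, round(estimate, 2))
```

    The key design point is that no interference model is ever solved: each iteration just measures the link at its current operating point and feeds the result back, which is what makes the approach MAC/PHY-independent.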

    Large scale probabilistic available bandwidth estimation

    The common utilization-based definition of available bandwidth, and many of the existing tools to estimate it, suffer from several important weaknesses: i) most tools report a point estimate of average available bandwidth over a measurement interval and do not provide a confidence interval; ii) the commonly adopted models used to relate the available bandwidth metric to the measured data are invalid in almost all practical scenarios; iii) existing tools do not scale well and are not suited to the task of multi-path estimation in large-scale networks; iv) almost all tools use ad-hoc techniques to address measurement noise; and v) tools do not provide enough flexibility in terms of accuracy, overhead, latency, and reliability to adapt to the requirements of various applications. In this paper we propose a new definition for available bandwidth and a novel framework that addresses these issues. We define probabilistic available bandwidth (PAB) as the largest input rate at which we can send a traffic flow along a path while achieving, with specified probability, an output rate that is almost as large as the input rate. PAB is expressed directly in terms of the measurable output rate and includes adjustable parameters that allow the user to adapt to different application requirements. Our probabilistic framework to estimate network-wide probabilistic available bandwidth is based on packet trains, Bayesian inference, factor graphs, and active sampling. We deploy our tool on the PlanetLab network and our results show that we can obtain accurate estimates with a much smaller measurement overhead compared to existing approaches.
    Comment: Submitted to Computer Networks
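    A stripped-down sketch of the Bayesian estimation loop: maintain a posterior over candidate bandwidth values, probe at the posterior median (a simple form of active sampling), and update from binary packet-train outcomes (a train "passes" if its output rate nearly matched its input rate). The true bandwidth, the grid, and the 95%/5% noise model are assumptions for the demo, not the paper's setup, and the paper's factor-graph machinery for network-wide inference is omitted.

```python
import numpy as np

rng = np.random.default_rng(3)

true_ab = 42.0                       # Mbps, unknown to the estimator
grid = np.linspace(1, 100, 200)      # candidate PAB values
posterior = np.ones_like(grid) / grid.size

def send_train(rate):
    # A train at a rate below the available bandwidth usually passes;
    # above it, it usually does not (small measurement noise).
    p_pass = 0.95 if rate <= true_ab else 0.05
    return rng.random() < p_pass

for _ in range(40):
    probe_rate = grid[np.argmax(posterior.cumsum() >= 0.5)]  # posterior median
    passed = send_train(probe_rate)
    # Likelihood of the observation for each candidate bandwidth b:
    # a train at probe_rate passes w.p. 0.95 if b >= probe_rate, else 0.05.
    like = np.where(grid >= probe_rate, 0.95, 0.05)
    if not passed:
        like = 1.0 - like
    posterior *= like
    posterior /= posterior.sum()

estimate = grid[np.argmax(posterior.cumsum() >= 0.5)]
print(round(estimate, 1))            # should land near 42 Mbps
```

    Probing at the posterior median is what keeps the measurement overhead low: each train roughly halves the remaining uncertainty instead of sweeping the whole rate range.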