The Evolution of Securitization in Multifamily Mortgage Markets and Its Effect on Lending Rates
Loan purchase and securitization by Freddie Mac, Fannie Mae, and private-label commercial mortgage-backed securities (CMBS) grew rapidly during the 1990s and accounted for more than one-half of the net growth in multifamily debt over the decade. By facilitating the integration of the multifamily mortgage market into the broader capital markets, securitization helped to create new sources of credit as some traditional portfolio investors, savings institutions and life insurers, reduced their share of loan holdings. A model of commercial mortgage rates at life insurers, expressed relative to a comparable-term Treasury yield, was estimated over a twenty-two-year period. The parameter estimates supported an option-based pricing model of rate determination; proxies for CMBS activity showed no significant effect.
Scientific Computing Meets Big Data Technology: An Astronomy Use Case
Scientific analyses commonly compose multiple single-process programs into a dataflow; an end-to-end dataflow of single-process programs is known as a many-task application. Typically, tools from the HPC software stack are used to parallelize these analyses. In this work, we investigate an alternative approach that uses Apache Spark, a modern big data platform, to parallelize many-task applications. We present Kira, a flexible and distributed astronomy image processing toolkit using Apache Spark. We then use the Kira toolkit to implement a Source Extractor application for astronomy images, called Kira SE. With Kira SE as the use case, we study the programming flexibility, dataflow richness, scheduling capacity, and performance of Apache Spark running on the Amazon EC2 cloud. By exploiting data locality, Kira SE achieves a 2.5x speedup over an equivalent C program when analyzing a 1 TB dataset using 512 cores on the Amazon EC2 cloud. Furthermore, we show that by leveraging software originally designed for big data infrastructure, Kira SE achieves performance competitive with the C implementation running on the NERSC Edison supercomputer. Our experience with Kira indicates that emerging big data platforms such as Apache Spark are a performant alternative for many-task scientific applications.
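The many-task pattern the abstract describes — independent per-image tasks fanned out across workers and their results collected — can be sketched in miniature. This is not Kira's code: it uses Python's standard-library process pool as a stand-in for a Spark cluster, and `extract_sources` is a hypothetical toy in place of a real Source Extractor.

```python
from concurrent.futures import ProcessPoolExecutor

def extract_sources(image):
    """Toy stand-in for a per-image source-extraction task:
    count pixel values above a fixed brightness threshold."""
    return sum(1 for px in image if px > 100)

def run_many_task(images, workers=4):
    """Fan each independent single-image task out to a worker pool,
    then collect one result per image -- the map stage of a
    many-task dataflow."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(extract_sources, images))

if __name__ == "__main__":
    images = [[50, 120, 130], [90, 95], [200, 210, 10]]
    print(run_many_task(images, workers=2))  # [2, 0, 2]
```

In a Spark version the per-image function stays the same; the pool is replaced by an RDD of image paths, which is what lets the scheduler place tasks near the data (the locality the abstract credits for the speedup).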
GSE Activity, FHA Feedback, and Implications for the Efficacy of the Affordable Housing Goals
There is a seeming paradox surrounding the "affordable housing goals": GSE activity in targeted communities has increased under the goals, yet there has been little measurable improvement in housing market conditions in those communities. This paper seeks to reconcile this paradox by focusing on the linkage between GSE purchases and FHA activity. We build a simple model, based on credit rationing theory, which suggests that GSE activity can have a feedback effect on the FHA: more aggressive GSE pursuit of targeted borrowers under the affordable housing goals induces the potential FHA borrowers with the best credit quality to use the conventional market instead. In response, the FHA applies stricter underwriting standards in the new market equilibrium, which reduces its loan volumes. On balance, these effects can offset each other, leaving credit supply and homeownership effectively unchanged. Empirical evidence on changes in GSE and FHA lending after the affordable housing goals became more binding is consistent with these theoretical predictions.
Scalable Systems and Algorithms for Genomic Variant Analysis
With the cost of sequencing a human genome dropping below $1,000, population-scale sequencing has become feasible. With projects that sequence more than 10,000 genomes becoming commonplace, there is a strong need for genome analysis tools that can scale across distributed computing resources while reducing analysis cost. Simultaneously, these tools must provide programming interfaces and deployment models that are easily usable by biologists.

In this dissertation, we describe the ADAM system for processing large genomic datasets using distributed computing. ADAM provides a decoupled, stack-based architecture that can accommodate many data formats, deployment models, and data access patterns. Additionally, ADAM defines schemas that describe common genomic datatypes. ADAM's schemas and programming models enable the easy integration of disparate genomic datatypes and datasets into a single analysis.

To validate the ADAM architecture, we implemented an end-to-end variant calling pipeline using ADAM's APIs. To perform parallel alignment, we developed the Cannoli tool, which uses ADAM's APIs to automatically parallelize single-node aligners. We then implemented GATK-style alignment refinement as part of ADAM. Finally, we implemented a biallelic genotyping model and novel reassembly algorithms in the Avocado variant caller. This pipeline provides state-of-the-art SNV calling accuracy, along with high (97%) INDEL calling accuracy. To further validate this pipeline, we reanalyzed 270 samples from the Simons Genome Diversity Dataset.
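The schema idea described above — disparate on-disk formats mapped onto one shared datatype so downstream analysis sees a single representation — can be sketched with a toy example. This is not ADAM's actual Avro schema; `AlignmentRecord`'s fields here are a simplified, hypothetical subset, and `from_sam_fields` is an illustrative adapter for one format.

```python
from dataclasses import dataclass

@dataclass
class AlignmentRecord:
    # Hypothetical, much-simplified stand-in for an ADAM-style read schema.
    read_name: str
    contig: str
    start: int       # 0-based start coordinate
    sequence: str

def from_sam_fields(fields):
    """Map one tab-split SAM-style line onto the shared record type.
    Other formats (BAM, CRAM, ...) would get their own small adapters
    targeting the same schema, so analysis code sees one datatype."""
    name, _flag, contig, pos, _mapq, _cigar = fields[:6]
    return AlignmentRecord(read_name=name, contig=contig,
                           start=int(pos) - 1,  # SAM positions are 1-based
                           sequence=fields[9])  # field 10 is the read sequence

line = "read1\t0\tchr1\t100\t60\t8M\t*\t0\t0\tACGTACGT\t*"
rec = from_sam_fields(line.split("\t"))
print(rec.contig, rec.start)  # chr1 99
```

The design point this illustrates is decoupling: format-specific parsing lives at the bottom of the stack, while genotyping or refinement code depends only on the schema.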