Freddie Mac and Fannie Mae: An Exit Strategy for the Taxpayer
The Fannie Mae-Freddie Mac crisis may have been the most avoidable financial crisis in history. Economists have long complained that the risks posed by the government-sponsored enterprises were large relative to any social benefits. We now realize that the overall policy of promoting home ownership was carried to excess. Even taking as given the goal of expanding home ownership, the public policy case for subsidizing mortgage finance was weak. The case for using the GSEs as a vehicle to subsidize mortgage finance was weaker still. The GSE structure serves to privatize profits and socialize losses. And even if one thought that home ownership was worth encouraging, mortgage debt was worth subsidizing, and the GSE structure was viable, allowing the GSEs to assume a dominant role in mortgage finance was a mistake. The larger they grew, the more precarious our financial markets became. Regulators should contemplate freezing the mortgage purchase activities of the GSEs while at the same time loosening the capital requirements for banks to hold low-risk mortgages. The result would almost surely be an industry much less concentrated than the current duopoly. A housing finance system that does not rely so heavily on Freddie Mac and Fannie Mae will be more robust. We have to assume that sooner or later some of the institutions involved in mortgage finance will fail. The policy should be to promote a housing finance system where mortgage risk is spread among dozens of institutions. That way, the failure of some firms can be resolved through mergers and orderly restructuring, without exposing the financial system to the sort of catastrophic risk that is represented by Fannie Mae and Freddie Mac.
Does the Doctor Need a Boss?
The traditional model of medical delivery, in which the doctor is trained, respected, and compensated as an independent craftsman, is anachronistic. When a patient has multiple ailments, there is no longer a simple doctor-patient or doctor-patient-specialist relationship. Instead, there are multiple specialists who have an impact on the patient, each with a set of interdependencies and difficult coordination issues that increase exponentially with the number of ailments involved. Patients with multiple diagnoses require someone who can organize the efforts of multiple medical professionals. It is not unreasonable to imagine that delivering health care effectively, particularly for complex patients, could require a corporate model of organization. At least two forces stand in the way of robust competition from corporate health care providers. First is the regime of third-party fee-for-service payment, which is heavily entrenched by Medicare, Medicaid, and the regulatory and tax distortions that tilt private health insurance in the same direction. Consumers should control the money that purchases their health insurance, and should be free to choose their insurer and health care providers. Second, state licensing regulations make it difficult for corporations to design optimal work flows for health care delivery. Under institutional licensing, regulators would instead evaluate how well a corporation treats its patients, not the credentials of the corporation's employees. Alternatively, states could recognize clinician licenses issued by other states. That would let corporations operate in multiple states under a single set of rules and put pressure on states to eliminate unnecessarily restrictive regulations.
An Exact Universal Gravitational Lensing Equation
We first define what we mean by gravitational lensing equations in a general
space-time. A set of exact relations are then derived that can be used as the
gravitational lens equations in all physical situations. The caveat is that
these equations require as input a function, a two-parameter family of
solutions to the eikonal equation, not easily obtained, that encodes all the
relevant (conformal) space-time information for this lens-equation
construction. Knowledge of this two-parameter family of solutions replaces
knowledge of the solutions to the geodesic equations.
The formalism is then applied to the Schwarzschild lensing problem.
Comment: 12 pages, submitted to Phys. Rev.
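For orientation (this is standard background, not the paper's exact construction): the eikonal equation referred to above is the null condition on a phase function S, and any exact Schwarzschild lens equation must reduce, in the weak-field thin-lens limit, to the familiar relation between source and image angles:

```latex
% Null eikonal equation: S is a phase function, g^{ab} the (conformal) inverse metric
g^{ab}\,\partial_a S\,\partial_b S = 0

% Weak-field, thin-lens Schwarzschild limit:
% beta = source angle, theta = image angle, b \approx D_l \theta = impact parameter,
% D_s, D_{ls} = observer-source and lens-source distances
\beta = \theta - \frac{D_{ls}}{D_s}\,\hat{\alpha}(\theta),
\qquad
\hat{\alpha}(\theta) \simeq \frac{4GM}{c^{2}\,b}
```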
Impacts of climate change on hydrology, water quality and crop productivity in the Ohio-Tennessee River Basin
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. The eastern part of this region is comprised of the Ohio-Tennessee River Basin (OTRB), which is considered a key contributing area for water pollution and the Northern Gulf of Mexico hypoxic zone. A point of crucial importance in this basin is therefore how intensive corn-based cropping systems for food and fuel production can be sustainable and coexist with a healthy water environment, not only under existing climate but also under climate change conditions in the future. To address this issue, an OTRB integrated modeling system has been built with a greatly refined 12-digit subbasin structure based on the Soil and Water Assessment Tool (SWAT) water quality model, which is capable of estimating landscape and in-stream water and pollutant yields in response to a wide array of alternative cropping and/or management strategies and climatic conditions. The effects of three agricultural management scenarios on crop production and pollutant loads exported from the crop land of the OTRB to streams and rivers were evaluated: (1) expansion of continuous corn across the entire basin, (2) adoption of no-till on all corn and soybean fields in the region, and (3) implementation of a winter cover crop within the baseline rotations. The effects of each management scenario were evaluated both for current climate and projected mid-century (2046-2065) climates from seven global circulation models (GCMs). In both present and future climates each management scenario resulted in reduced erosion and nutrient loadings to surface water bodies compared to the baseline agricultural management, with cover crops causing the highest water pollution reduction. Corn and soybean yields in the region were negligibly influenced by the agricultural management scenarios.
On the other hand, both water quality and crop yield numbers under climate change deviated considerably for all seven GCMs compared to the baseline climate. Future climates from all GCMs led to decreased corn and soybean yields by up to 20% on a mean annual basis, while water quality alterations were either positive or negative depending on the GCM. The study highlights the loss of productivity in the eastern Corn Belt under climate change, the need to consider a range of GCMs when assessing impacts of climate change, and the value of SWAT as a tool to analyze the effects of climate change on parameters of interest at the basin scale.
Cost-effective targeting of conservation investments to reduce the northern Gulf of Mexico hypoxic zone
A seasonally occurring summer hypoxic (low oxygen) zone in the northern Gulf of Mexico is the second largest in the world. Reductions in nutrients from agricultural cropland in its watershed are needed to reduce the hypoxic zone size to the national policy goal of 5,000 km2 (as a 5-y running average) set by the national Gulf of Mexico Task Force's Action Plan. We develop an integrated assessment model linking the water quality effects of cropland conservation investment decisions on the more than 550 agricultural subwatersheds that deliver nutrients into the Gulf with a hypoxic zone model. We use this integrated assessment model to identify the most cost-effective subwatersheds to target for cropland conservation investments. We consider targeting of the location (which subwatersheds to treat) and the extent of conservation investment to undertake (how much cropland within a subwatershed to treat). We use process models to simulate the dynamics of the effects of cropland conservation investments on nutrient delivery to the Gulf and use an evolutionary algorithm to solve the optimization problem. Model results suggest that by targeting cropland conservation investments to the most cost-effective location and extent of coverage, the Action Plan goal of 5,000 km2 can be achieved at a cost of $5.6 billion annually.
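The targeting problem can be caricatured in a few lines: a toy genetic algorithm choosing which subwatersheds to treat so that a nutrient-reduction target is met at least cost. Everything below (data, fitness design, GA parameters) is hypothetical illustration, not the paper's actual model or numbers.

```python
import random

random.seed(0)

# Hypothetical data: per-subwatershed treatment cost and nutrient-load
# reduction from full treatment; NOT values from the study.
N = 20
cost = [random.uniform(1.0, 10.0) for _ in range(N)]
reduction = [random.uniform(0.5, 5.0) for _ in range(N)]
TARGET = 0.45 * sum(reduction)  # demand 45% of the maximum achievable reduction

def fitness(plan):
    """Lower is better; any feasible plan beats any infeasible one."""
    c = sum(ci for ci, x in zip(cost, plan) if x)
    r = sum(ri for ri, x in zip(reduction, plan) if x)
    if r < TARGET:                       # infeasible: rank by shortfall
        return 1e9 + (TARGET - r)
    return c                             # feasible: rank by cost

def evolve(pop_size=60, generations=200, mutation_rate=0.05):
    pop = [[random.randint(0, 1) for _ in range(N)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        survivors = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = random.sample(survivors, 2)      # pick two parents
            cut = random.randrange(1, N)            # one-point crossover
            children.append([1 - g if random.random() < mutation_rate else g
                             for g in a[:cut] + b[cut:]])  # bit-flip mutation
        pop = survivors + children
    return min(pop, key=fitness)

best = evolve()
best_cost = sum(ci for ci, x in zip(cost, best) if x)
achieved = sum(ri for ri, x in zip(reduction, best) if x)
print(f"treat {sum(best)} of {N} subwatersheds: "
      f"cost {best_cost:.1f}, reduction {achieved:.1f} (target {TARGET:.1f})")
```

The real model replaces the `cost` and `reduction` lookups with SWAT-based process simulations of nutrient delivery, which is what makes the evolutionary approach attractive: it only needs to evaluate candidate plans, not differentiate them.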
Least-cost Control of Agricultural Nutrient Contributions to the Gulf of Mexico Hypoxic Zone
In 2008, the hypoxic zone in the Gulf of Mexico, measuring 20,720 km2, was one of the two largest reported since measurement of the zone began in 1985. The extent of the hypoxic zone is related to nitrogen and phosphorus loadings originating on agricultural fields in the upper Midwest. This study combines the tools of evolutionary computation with a water quality model and cost data to develop a trade-off frontier for the Upper Mississippi River Basin specifying the least cost of achieving nutrient reductions and the location of the agricultural conservation practices needed. The frontier allows policymakers and stakeholders to explicitly see the trade-offs between cost and nutrient reductions. For example, the cost of reducing annual nitrate-N loadings by 30% is estimated to be US$370 million/year, with a concomitant 9% reduction in phosphorus.
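The trade-off frontier described here is, formally, the set of non-dominated (cost, reduction) pairs among candidate plans. A minimal sketch of extracting such a frontier, with invented candidate points (not the study's results):

```python
def pareto_frontier(points):
    """Keep (cost, reduction) pairs not dominated by any other point:
    a point is dominated if some other point costs no more and
    reduces at least as much, with at least one strict inequality."""
    frontier = []
    # Sort by ascending cost, breaking cost ties by descending reduction,
    # so a single pass keeps exactly the non-dominated points.
    for c, r in sorted(points, key=lambda p: (p[0], -p[1])):
        if not frontier or r > frontier[-1][1]:
            frontier.append((c, r))
    return frontier

# Hypothetical candidates: (cost in US$ million/year, % nitrate-N reduction)
candidates = [(50, 3), (100, 8), (260, 20), (370, 30), (400, 28), (250, 20)]
frontier = pareto_frontier(candidates)
print(frontier)
```

In the paper the candidate plans come from evolutionary computation over conservation-practice placements; the frontier is the same non-domination filter applied to those evaluated plans.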
LUMINATE: linking agricultural land use, local water quality and Gulf of Mexico hypoxia
In this paper, we discuss the importance of developing integrated assessment models to support the design and implementation of policies to address water quality problems associated with agricultural pollution. We describe a new modelling system, LUMINATE, which links land use decisions made at the field scale in the Upper Mississippi, Ohio and Tennessee Basins through both environmental and hydrological components to downstream water quality effects and hypoxia in the Gulf of Mexico. This modelling system can be used to analyse detailed policy scenarios identifying the costs of the policies and their resulting benefits for improved local and regional water quality. We demonstrate the model's capabilities with a simple scenario where cover crops are incentivised with green payments over a large expanse of the watershed.
A refined regional modeling approach for the Corn Belt – Experiences and recommendations for large-scale integrated modeling
Nonpoint source pollution from agriculture is the main source of nitrogen and phosphorus in the stream systems of the Corn Belt region in the Midwestern US. This region is comprised of two large river basins, the intensely row-cropped Upper Mississippi River Basin (UMRB) and Ohio-Tennessee River Basin (OTRB), which are considered the key contributing areas for the Northern Gulf of Mexico hypoxic zone according to the US Environmental Protection Agency. Thus, in this area it is of utmost importance to ensure that intensive agriculture for food, feed and biofuel production can coexist with a healthy water environment. To address these objectives within a river basin management context, an integrated modeling system has been constructed with the hydrologic Soil and Water Assessment Tool (SWAT) model, capable of estimating river basin responses to alternative cropping and/or management strategies. To improve modeling performance compared to previous studies and provide a spatially detailed basis for scenario development, this SWAT Corn Belt application incorporates a greatly refined subwatershed structure based on 12-digit hydrologic units or "subwatersheds" as defined by the US Geological Survey. The model setup, calibration and validation are time-demanding and challenging tasks for these large systems, given the intensive data requirements at this scale and the need to ensure the reliability of flow and pollutant load predictions at multiple locations. Thus, the objectives of this study are both to comprehensively describe this large-scale modeling approach, providing estimates of pollution and crop production in the region, as well as to present strengths and weaknesses of integrated modeling at such a large scale along with how it can be improved on the basis of the current modeling structure and results.
The predictions were based on a semi-automatic hydrologic calibration approach for large-scale and spatially detailed modeling studies, with the use of the Sequential Uncertainty Fitting algorithm (SUFI-2) and the SWAT-CUP interface, followed by a manual water quality calibration on a monthly basis. The refined modeling approach developed in this study led to successful predictions across most parts of the Corn Belt region and can be used for testing pollution mitigation measures and agricultural economic scenarios, providing useful information to policy makers and recommendations on similar efforts at the regional scale. This article is from Journal of Hydrology 524 (2015): 348–366, doi:10.1016/j.jhydrol.2015.02.039.
Research Needs and Challenges in the FEW System: Coupling Economic Models with Agronomic, Hydrologic, and Bioenergy Models for Sustainable Food, Energy, and Water Systems
On October 12–13, a workshop funded by the National Science Foundation was held at Iowa State University in Ames, Iowa with a goal of identifying research needs related to coupled economic and biophysical models within the FEW system. Approximately 80 people attended the workshop with about half representing the social sciences (primarily economics) and the rest from the physical and natural sciences. The focus and attendees were chosen so that findings would be particularly relevant to SBE research needs while taking into account the critical connectivity needed between social sciences and other disciplines.
We have identified several major gaps in existing scientific knowledge that present substantial impediments to understanding the FEW system. We especially recommend research in these areas as a priority for future funding:
1. Economic models of decision-making in coupled systems
Deliberate human activity has been the dominant factor driving environmental and land-use changes for hundreds of years. While economists have made great strides in modeling and understanding these choices, the coupled systems modeling literature, with some important exceptions, has not reflected these contributions. Several paths forward seem fruitful. First, baseline economic models that assume rationality can be used much more widely than they are currently. Moreover, the current generation of integrated assessment models (IAMs) that include rational agents has emphasized partial equilibrium studies appropriate for smaller systems. To allow this approach to be used to study larger systems, the potential for (and consequences of) general equilibrium effects should be studied as well.
Second, it is important to address shortcomings in these models of economic decision-making. Valuable improvements could be gained from developing coupled models that draw insights from behavioral economics. Many decision-makers deviate systematically from actions that would be predicted by strict rationality, but very few IAMs incorporate this behavior, potentially leading to inaccurate predictions about the effects of policies and regulations. Improved models of human adaptation and induced technological change can also be incorporated into coupled models. Particularly for medium to long-run models, decisions about adaptation and technological change will have substantial effects on the conclusions and policy implications, but more compelling methods for incorporating these changes into modeling are sorely needed. In addition, some economic decisions are intrinsically dynamic yet few coupled models explicitly incorporate dynamic models. Economic models that address uncertainty in decision making are also underutilized in coupled models of the FEW system.
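To make "intrinsically dynamic" concrete, here is a minimal two-period decision under weather uncertainty, solved by backward induction; all payoffs and probabilities are invented for illustration, not drawn from any model discussed at the workshop.

```python
# Period-2 payoffs for each (action, weather state); hypothetical numbers.
payoff = {("plant", "wet"): 100, ("plant", "dry"): 20,
          ("fallow", "wet"): 40, ("fallow", "dry"): 40}
p_wet = 0.6  # subjective probability of a wet season

def expected_value(action):
    """Expected period-2 payoff of an action over weather states."""
    return p_wet * payoff[(action, "wet")] + (1 - p_wet) * payoff[(action, "dry")]

# Backward induction step: solve period 2 first.
action2 = max(("plant", "fallow"), key=expected_value)
v2 = expected_value(action2)

# Period 1: paying 10 now for irrigation raises the dry-state "plant"
# payoff to 70; compare expected values of the two period-1 choices.
ev_no_irrigation = v2
ev_irrigation = -10 + p_wet * 100 + (1 - p_wet) * 70
decision = "irrigate" if ev_irrigation > ev_no_irrigation else "don't irrigate"
print(decision, action2, v2)
```

A static model would evaluate each period in isolation and miss that the period-1 investment is justified only by its effect on the period-2 state-contingent payoffs; that interaction is what the dynamic formulation captures.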
2. Coupling models across disciplines
Despite much recent progress, established models for one component of the FEW system often cannot currently produce outcomes that can be used as inputs for models of other components. This misalignment makes integrated modeling difficult and is especially apparent in linking models of natural phenomena with models of economic decision-making. Economic agents typically act to maximize a form of utility or welfare that is not directly linked to physical processes, and they typically require probabilistic forecasts as an input to their decision-making that many models in the natural sciences cannot directly produce.
We believe that an especially promising approach is the development of "bridge" models that convert outputs from one model into inputs for another. Such models can be viewed as application-specific, reduced-form distillations of a richer and more realistic underlying model. Ideally, these bridge models would be developed in collaborative research projects involving economists, statisticians, and disciplinary specialists, and would contribute to improved understanding in the scientific discipline as well.
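A minimal sketch of the bridge idea, with a stand-in for the expensive biophysical model (everything here is hypothetical): run the process model at a few design points, fit a cheap reduced form, and let the economic side call that instead.

```python
def process_model(fertilizer_kg_ha):
    """Stand-in for an expensive biophysical simulation: here, nitrate
    load (kg/ha) rising linearly with fertilizer rate. Hypothetical."""
    return 5.0 + 0.12 * fertilizer_kg_ha

# Run the expensive model at a handful of design points...
xs = [0, 50, 100, 150, 200]
ys = [process_model(x) for x in xs]

# ...then distill it into a reduced form (ordinary least squares line).
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
intercept = my - slope * mx

def bridge(fertilizer_kg_ha):
    """Cheap emulator an economic decision model can call repeatedly."""
    return intercept + slope * fertilizer_kg_ha

print(round(bridge(120), 2))
```

Real bridge models are of course richer (nonlinear response surfaces, probabilistic emulators), but the division of labor is the same: the disciplinary model is evaluated sparingly offline, while the coupled model works with its distillation.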
3. Model validation and comparison
There is little clarity on how models should be evaluated and compared to each other, both within individual disciplines and as components of larger IAMs. This challenge makes larger integrated modeling exercises extremely difficult. Some potential ways to advance are by developing statistical criteria that measure model performance along the dimensions suitable for inclusion in an IAM as well as infrastructure and procedures to facilitate model comparisons. Focusing on the models' out-of-sample distributional forecasting performance, as well as that of the IAM overall, is especially promising and of particular importance.
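One standard criterion for distributional forecasting performance is the continuous ranked probability score (CRPS), which for an ensemble forecast has a simple closed form. A sketch with invented data (the choice of CRPS here is our illustration, not a workshop prescription):

```python
def crps_ensemble(members, obs):
    """CRPS for an ensemble forecast (lower is better):
    E|X - y| - 0.5 * E|X - X'|, with X, X' drawn from the ensemble."""
    m = len(members)
    term1 = sum(abs(x - obs) for x in members) / m
    term2 = sum(abs(a - b) for a in members for b in members) / (m * m)
    return term1 - 0.5 * term2

# A sharp, well-centred ensemble should outscore a biased one.
obs = 10.0
good = [9.5, 10.0, 10.5]
biased = [12.5, 13.0, 13.5]
print(crps_ensemble(good, obs), crps_ensemble(biased, obs))
```

Unlike point-forecast metrics such as RMSE, CRPS rewards both calibration and sharpness of the whole predictive distribution, which is exactly the property an IAM needs from its component models.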
Moreover, applications of IAMs tend to estimate the effect of hypothetical future policy actions, but there have been very few studies that have used these models to estimate the effect of past policy actions. These exercises should be encouraged. They offer a well-understood test bed for the IAMs, and also contribute to fundamental scientific knowledge through better understanding of the episode in question. The retrospective nature of this form of analysis also presents the opportunity to combine reduced-form estimation strategies with the IAMs as an additional method of validation.
Identifying amyloid pathology-related cerebrospinal fluid biomarkers for Alzheimer's disease in a multicohort study
Introduction: The dynamic range of cerebrospinal fluid (CSF) amyloid β (Aβ1–42) measurement does not parallel cognitive changes in Alzheimer's disease (AD) and cognitively normal (CN) subjects across different studies. Therefore, identifying novel proteins to characterize symptomatic AD samples is important. Methods: Proteins were profiled using a multianalyte platform by Rules Based Medicine (MAP-RBM). Due to underlying heterogeneity and unbalanced sample size, we combined subjects (344 AD and 325 CN) from three cohorts: Alzheimer's Disease Neuroimaging Initiative, Penn Center for Neurodegenerative Disease Research of the University of Pennsylvania, and Knight Alzheimer's Disease Research Center at Washington University in St. Louis. We focused on samples whose cognitive and amyloid status was consistent. We performed linear regression (accounting for age, gender, number of apolipoprotein E (APOE) e4 alleles, and cohort variable) to identify amyloid-related proteins for symptomatic AD subjects in this largest-ever CSF-based MAP-RBM study. ANOVA and Tukey's test were used to evaluate whether these proteins were related to cognitive impairment changes as measured by the Mini-Mental State Examination (MMSE). Results: Seven proteins were significantly associated with Aβ1–42 levels in the combined cohort (false discovery rate adjusted P < .05), of which lipoprotein a (Lp(a)), prolactin (PRL), resistin, and vascular endothelial growth factor (VEGF) have consistent directions of association across every individual cohort. VEGF was strongly associated with MMSE scores, followed by pancreatic polypeptide and immunoglobulin A (IgA), suggesting they may be related to staging of AD. Discussion: Lp(a), PRL, IgA, and tissue factor/thromboplastin have never been reported for AD diagnosis in previous individual CSF-based MAP-RBM studies.
Although some of our reported analytes are related to AD pathophysiology, the roles of others in symptomatic AD samples warrant further exploration.
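The "false discovery rate adjusted P < .05" threshold is typically the Benjamini-Hochberg step-up procedure; a minimal sketch with invented p-values (not the study's data):

```python
def benjamini_hochberg(pvals, alpha=0.05):
    """Return indices of hypotheses rejected at FDR level alpha
    using the Benjamini-Hochberg step-up procedure."""
    order = sorted(range(len(pvals)), key=lambda i: pvals[i])
    m = len(pvals)
    k = 0  # largest rank whose p-value clears its stepped threshold
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= alpha * rank / m:
            k = rank
    # Reject everything up to and including rank k (step-up rule).
    return sorted(order[:k])

# Hypothetical per-protein p-values from amyloid-association regressions
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.2, 0.7]
rejected = benjamini_hochberg(pvals)
print(rejected)
```

Note the step-up behavior: several raw p-values below .05 survive a naive threshold but not the FDR-adjusted one, which is why multianalyte panels like MAP-RBM report adjusted significance.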