
    Network Effects, Congestion Externalities, and Air Traffic Delays: Or Why All Delays Are Not Evil

    We examine two factors that might explain the extent of air traffic delays in the United States: network benefits due to hubbing and congestion externalities. Airline hubs enable passengers to cross-connect to many destinations, creating network benefits that increase with the number of markets served from the hub. Delays are the equilibrium outcome of a hub airline equating the high marginal benefits of hubbing with the marginal cost of delays. Congestion externalities are created when airlines do not consider that adding flights may increase delays for other carriers; in this case, delays represent a market failure. Using data on all domestic flights by major US carriers from 1988 to 2000, we find that delays are increasing in hubbing activity at an airport and decreasing in market concentration, but the hubbing effect dominates empirically. In addition, most delays due to hubbing accrue to the hub carrier itself, primarily because the hub carrier clusters its flights into short spans of time in order to maximize passenger interconnections. Non-hub flights at hub airports operate with minimal additional travel time by avoiding the congested peak connecting times of the hub carrier. These results suggest that an optimal congestion tax would have a relatively small impact on air traffic delays, since hub carriers already internalize most of the costs of hubbing, and a tax that did not take the network benefits of hubbing into account could reduce social welfare.
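    A stylized first-order condition makes this internalization argument concrete. The sketch below is illustrative only, not the authors' model; the benefit function B, delay function D, and flight counts n (hub carrier) and m (other carriers) are assumed for exposition.

```latex
% Illustrative sketch, not the paper's model. A hub carrier choosing its
% flight count n (other carriers operate m flights) trades off network
% benefits B(n) against per-flight delay costs D(n+m):
\[
  \max_{n}\; B(n) - n\,D(n+m)
  \quad\Longrightarrow\quad
  B'(n) = D(n+m) + n\,D'(n+m).
\]
% The carrier already internalizes n D'(n+m), the extra delay its marginal
% flight imposes on its own n flights. A social planner would also count
% delays imposed on the m outside flights:
\[
  B'(n) = D(n+m) + (n+m)\,D'(n+m),
\]
% so the uninternalized externality is only m D'(n+m), which is small when
% the hub carrier operates most flights at its hub (m much smaller than n);
% hence an optimal congestion tax changes behavior little there.
```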

    Automated Classification of Transient Contamination in Stationary Acoustic Data

    An automated procedure for the classification of transient contamination of stationary acoustic data is proposed and analyzed. The procedure requires the assumption that the stationary acoustic data of interest can be modeled as a band-limited, Gaussian random process. It also requires that the transient contamination be of higher variance than the acoustic data of interest. When these assumptions are satisfied, it is a blind separation procedure, aside from the initial input specifying how to subdivide the time series of interest; no a priori threshold criterion is required. Simulation results show that for a sufficient number of blocks, the method performs well, as long as the occasional false positive or false negative is acceptable. The effectiveness of the procedure is demonstrated with an application to experimental wind tunnel acoustic test data that are contaminated by hydrodynamic gusts.
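    The abstract does not spell out the algorithm, but the core idea (block-wise variance screening of a nominally stationary Gaussian record) can be sketched as follows. This is a minimal illustration under stated assumptions; the function name, block length, robust MAD baseline, and z-score threshold are all choices made here, not the paper's.

```python
import numpy as np

def flag_transient_blocks(x, block_len, z_thresh=3.0):
    """Flag blocks of a nominally stationary signal whose sample variance
    is anomalously high, suggesting transient contamination.

    Illustrative sketch only; block_len and z_thresh are assumptions,
    not parameters from the paper."""
    n_blocks = len(x) // block_len
    blocks = x[:n_blocks * block_len].reshape(n_blocks, block_len)
    block_var = blocks.var(axis=1)

    # Robust baseline (median and MAD of block variances), so that the
    # contaminated high-variance blocks do not drag the baseline upward.
    med = np.median(block_var)
    mad = np.median(np.abs(block_var - med)) + 1e-12
    z = (block_var - med) / (1.4826 * mad)  # approximate z-scores

    return z > z_thresh  # True = block likely contaminated

# Example: a Gaussian record with a short high-variance burst injected.
rng = np.random.default_rng(0)
x = rng.normal(size=20_000)
x[5_000:5_400] += rng.normal(scale=5.0, size=400)  # simulated gust
print(np.where(flag_transient_blocks(x, block_len=500))[0])  # -> [10]
```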

    Random Chance or Loaded Dice: The Politics of Judicial Designation

    [Excerpt] “In the 1950s and 1960s, the southern states struggled to respond to the civil rights decisions being issued by the U.S. Supreme Court as well as the new civil rights laws being passed by Congress. The judicial battleground for this perfect storm of evasion and massive resistance was found in the “old” Fifth Circuit Court of Appeals, which encompassed the states of Alabama, Florida, Georgia, Louisiana, Mississippi, and Texas. In the “old” Fifth Circuit, a minority of liberal appeals court judges—sympathetic to the civil rights movement—used all legal and administrative power at their disposal to make sure that the federal district and appeals courts were complying with the U.S. Supreme Court’s mandate in Brown v. Board of Education. In their ground-breaking book A Court Divided: The Fifth Circuit Court of Appeals and the Politics of Judicial Reform, political scientists Deborah J. Barrow and Thomas G. Walker carefully examined the political behavior of these liberal appeals court judges and found evidence that Elbert Parr Tuttle, the Fifth Circuit’s chief judge from 1960 to 1967, was manipulating, or “gerrymandering,” the assignment of appeals court judges to both three-judge district court panels and three-judge appellate court panels to guarantee that the panels had at least two liberal judges who would enforce the Supreme Court’s desegregation rulings.

    Individual Learning About Consumption

    The standard approach to modelling consumption/saving problems is to assume that the decisionmaker is solving a dynamic stochastic optimization problem. However, under realistic descriptions of utility and uncertainty, the optimal consumption/saving decision is so difficult that only recently have economists managed to find solutions, using numerical methods that require previously infeasible amounts of computation. Yet empirical evidence suggests that household behavior conforms fairly well with the prescriptions of the optimal solution, raising the question of how average households can solve problems that economists until recently could not. This paper examines whether consumers might be able to find a reasonably good ‘rule-of-thumb’ approximation to optimal behavior by trial-and-error methods, as Friedman (1953) proposed long ago. We find that such individual learning methods can reliably identify reasonably good rules of thumb only if the consumer is able to spend absurdly large amounts of time searching for a good rule.
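    The trial-and-error learning the abstract studies can be sketched as repeatedly evaluating candidate rules of thumb through simulated experience, where each evaluation is noisy because individual lifetimes differ. Everything below (the rule family c = MPC × wealth, the income process, CRRA utility, and all parameter values) is an illustrative assumption, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(1)

def lifetime_utility(mpc, periods=60, crra=2.0, beta=0.96, trials=20):
    """Average discounted CRRA utility of the rule c = mpc * wealth,
    estimated by Monte Carlo over random income histories.
    Illustrative only; the rule family and parameters are assumptions."""
    total = 0.0
    for _ in range(trials):
        w, u = 1.0, 0.0
        for t in range(periods):
            w += rng.lognormal(mean=0.0, sigma=0.3)   # stochastic income
            c = max(mpc * w, 1e-9)                    # rule of thumb
            u += beta**t * (c**(1 - crra) - 1) / (1 - crra)
            w -= c
        total += u
    return total / trials

# Trial-and-error search: each simulated "life" yields one noisy score
# for a candidate rule, so ranking rules reliably takes many lifetimes
# of experience; the paper's point is that this takes implausibly long.
candidates = np.linspace(0.05, 0.95, 10)
scores = [lifetime_utility(m) for m in candidates]
print(f"best MPC found: {candidates[int(np.argmax(scores))]:.2f}")
```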

    Superstar Cities

    Differences in house price and income growth rates between 1950 and 2000 across metropolitan areas have led to an ever-widening gap in housing values and incomes between the typical and highest-priced locations. We show that the growing spatial skewnesses in house prices and incomes are related and can be explained, at least in part, by inelastic supply of land in some attractive locations combined with an increasing number of high-income households nationally. Scarce land leads to a bidding-up of land prices and a sorting of high-income families relatively more into those desirable, unique, low housing construction markets, which we label “superstar cities.” Continued growth in the number of high-income families in the U.S. provides support for ever-larger differences in house prices across inelastically supplied locations and income-based spatial sorting. Our empirical work confirms a number of equilibrium relationships implied by the superstar cities framework and shows that the superstar phenomenon occurs both at the metropolitan area level and at the sub-MSA level, controlling for MSA characteristics.
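    The price mechanism here can be illustrated with a stylized assignment sketch (an expository assumption, not the paper's model): with a fixed housing stock, the price is set by the marginal buyer, whose income rises as the national pool of high-income households grows.

```latex
% Stylized sketch, not the paper's model. A superstar city has a fixed
% housing stock S (inelastic land supply). Rank households nationally by
% income, y_(1) >= y_(2) >= ... ; the price is the willingness to pay
% W(y) of the marginal, S-th ranked bidder:
\[
  P_t = W\!\bigl(y_{(S)}(t)\bigr), \qquad W' > 0 .
\]
% As the national number of high-income households grows, the income of
% the S-th ranked bidder rises, so P_t rises even if nothing about the
% city itself changes; in elastically supplied markets, new construction
% instead holds prices near construction costs, widening the gap.
```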

    Assessing High House Prices: Bubbles, Fundamentals, and Misperceptions

    We construct measures of the annual cost of single-family housing for 46 metropolitan areas in the United States over the last 25 years and compare them with local rents and incomes as a way of judging the level of housing prices. Conventional metrics like the growth rate of house prices, the price-to-rent ratio, and the price-to-income ratio can be misleading because they fail to account for both the time-series pattern of real long-term interest rates and predictable differences in the long-run growth rates of house prices across local markets. These factors are especially important in recent years because house prices are theoretically more sensitive to interest rates when rates are already low, and more sensitive still in those cities where the long-run rate of house price growth is high. During the 1980s, our measures show that houses looked most overvalued in many of the same cities that subsequently experienced the largest house price declines. We find that from the trough of 1995 to 2004, the cost of owning rose somewhat relative to the cost of renting, but not, in most cities, to levels that made houses look overvalued.
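    The abstract's claim about interest-rate sensitivity follows from a standard user-cost relation; the sketch below is the textbook version, consistent with but not necessarily identical to the authors' measure.

```latex
% Textbook user-cost sketch. In equilibrium the annual cost of owning
% equals rent R: with house price P, real interest rate r, taxes and
% maintenance tau, and expected long-run local price growth g,
\[
  R = P\,(r + \tau - g)
  \qquad\Longrightarrow\qquad
  \frac{P}{R} = \frac{1}{r + \tau - g},
  \qquad
  \frac{\partial (P/R)}{\partial r} = -\frac{1}{(r + \tau - g)^{2}} .
\]
% The magnitude of the derivative grows as r falls and as g rises, so
% price-to-rent ratios are most misleading precisely when rates are low
% and local long-run growth is high, as the abstract argues.
```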