
    Use of composite rotations to correct systematic errors in NMR quantum computation

    We implement an ensemble quantum counting algorithm on three NMR spectrometers with 1H resonance frequencies of 500, 600 and 750 MHz. At higher frequencies, the results deviate markedly from naive theoretical predictions. These systematic errors can be attributed almost entirely to off-resonance effects, which can be substantially corrected for using fully-compensating composite rotation pulse sequences originally developed by Tycko. We also derive an analytic expression for generating such sequences with arbitrary rotation angles. Comment: 8 pages RevTeX including 7 PostScript figures (18 subfigures).
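    The off-resonance error described above can be illustrated with a short numerical sketch (illustrative only, not code or parameters from the paper): in the rotating frame a hard x-pulse applied at a resonance offset rotates the spin about a tilted axis and through a larger effective angle, so a nominal 180-degree pulse no longer gives a perfect inversion, and the error grows with the offset-to-nutation-frequency ratio, which tends to increase at higher spectrometer fields for fixed pulse power.

```python
# Minimal sketch of an off-resonant hard pulse on a single spin (illustrative
# assumptions only). f = Delta / omega_1 is the ratio of resonance offset to
# nutation frequency; the effective rotation axis tilts toward z and the
# effective flip angle grows, so a nominal pi pulse inverts imperfectly.
import numpy as np

I2 = np.eye(2, dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def off_resonant_pulse(theta, f):
    """Propagator of a nominal x-rotation of angle theta at offset ratio f."""
    axis = (sx + f * sz) / np.sqrt(1 + f**2)   # tilted effective field direction
    angle = theta * np.sqrt(1 + f**2)          # effective rotation angle
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * axis

ideal = off_resonant_pulse(np.pi, 0.0)         # perfect 180_x rotation
for f in (0.0, 0.05, 0.1, 0.2):
    U = off_resonant_pulse(np.pi, f)
    gate_fidelity = abs(np.trace(ideal.conj().T @ U)) / 2   # propagator overlap
    inversion = abs(U[1, 0]) ** 2                            # |0> -> |1> probability
    print(f"f = {f:.2f}   gate fidelity = {gate_fidelity:.4f}   inversion = {inversion:.4f}")
```

    The composite sequences discussed in the paper are designed to suppress this loss of fidelity; they are not reproduced here.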

    Analyzing Firm Performance in the Insurance Industry Using Frontier Efficiency Methods

    In this introductory chapter to an upcoming book, the authors discuss the two principal types of efficiency frontier methodologies - the econometric (parametric) approach and the mathematical programming (non-parametric) approach. Frontier efficiency methodologies are useful in a variety of contexts: they can be used for testing economic hypotheses; providing guidance to regulators and policymakers; comparing economic performance across countries; and informing management of the effects of procedures and strategies adopted by the firm. The econometric approach requires the specification of a production, cost, revenue, or profit function as well as assumptions about error terms. This methodology is, however, vulnerable to errors in the specification of the functional form or error term. The mathematical programming or linear programming approach avoids this type of error and measures any departure from the frontier as relative inefficiency. Because each of these methods has advantages and disadvantages, it is recommended to estimate efficiency using more than one method. An important step in efficiency analysis is the definition of inputs and outputs and their prices. Insurer inputs can be classified into three principal groups: labor, business services and materials, and capital. Three principal approaches have been used to measure outputs in the financial services sector: the asset or intermediation approach, the user-cost approach, and the value-added approach. The asset approach treats firms as pure financial intermediaries and would be inappropriate for insurers because they provide other services. The user-cost method determines whether a financial product is an input or output based on its net contribution to the revenues of the firm. This method requires precise data on products, revenues and opportunity costs, which are difficult to estimate in insurance. The value-added approach is judged the most appropriate method for studying insurance efficiency. It considers all asset and liability categories to have some output characteristics rather than distinguishing inputs from outputs. In order to measure efficiency in the insurance industry, in which outputs are mostly intangible, measurable services must be defined. The three principal services provided by insurance companies are risk pooling and risk-bearing, "real" financial services relating to insured losses, and intermediation. The authors discuss how these services can be measured as outputs in value-added analysis. They then summarize the existing efficiency literature.
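    As a concrete illustration of the mathematical programming approach described above, the sketch below solves the standard input-oriented, constant-returns-to-scale DEA linear program for each firm in a toy data set; the three-input grouping mirrors the labor, business services and materials, and capital classification, while the numbers and the single output are hypothetical. A score of 1 places a firm on the estimated frontier; scores below 1 measure relative inefficiency.

```python
# Input-oriented, constant-returns-to-scale DEA score for one firm, solved as a
# linear program: minimize theta subject to lambda-weighted peer inputs <= theta
# times the firm's inputs and peer outputs >= the firm's outputs. Toy data only.
import numpy as np
from scipy.optimize import linprog

def dea_input_efficiency(X, Y, firm):
    """X: (m inputs x n firms), Y: (s outputs x n firms). Returns theta <= 1."""
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimize theta
    A_inputs = np.hstack([-X[:, [firm]], X])        # X @ lam - theta * x_firm <= 0
    A_outputs = np.hstack([np.zeros((s, 1)), -Y])   # -Y @ lam <= -y_firm
    A = np.vstack([A_inputs, A_outputs])
    b = np.r_[np.zeros(m), -Y[:, firm]]
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (n + 1), method="highs")
    return res.x[0]

# hypothetical inputs (labor, business services/materials, capital) and one output
X = np.array([[4.0, 6.0, 5.0, 8.0],
              [2.0, 3.0, 2.0, 5.0],
              [3.0, 2.0, 4.0, 6.0]])
Y = np.array([[10.0, 12.0, 9.0, 11.0]])
print([round(dea_input_efficiency(X, Y, j), 3) for j in range(X.shape[1])])
```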

    Can Insurers Pay for the "Big One"? Measuring the Capacity of an Insurance Market to Respond to Catastrophic Losses

    This paper presents a theoretical and empirical analysis of the capacity of the U.S. property-liability insurance industry to finance major catastrophic property losses. The topic is important because catastrophic events such as the Northridge earthquake and Hurricane Andrew have raised questions about the ability of the insurance industry to respond to the "Big One," usually defined as a hurricane or earthquake in the $100 billion range. At first glance, the U.S. property-liability insurance industry, with equity capital of more than $300 billion, should be able to sustain a loss of this magnitude. However, the reality could be different, depending on the distribution of damage and the spread of coverage as well as the correlations between insurer losses and industry losses. Thus, the prospect of a mega catastrophe brings the real threat of widespread insurance failures and unpaid insurance claims. Our theoretical analysis takes as its starting point the well-known article by Borch (1962), which shows that the Pareto optimal result in a market characterized by risk averse insurers is for each insurer to hold a proportion of the "market portfolio" of insurance contracts. Each insurer pays a proportion of total industry losses, and the industry behaves as a single firm, paying 100 percent of losses up to the point where industry net premiums and equity are exhausted. Borch's theorem gives rise to a natural definition of industry capacity as the amount of industry resources that are deliverable conditional on an industry loss of a given size. In our theoretical analysis, we show that the necessary condition for industry capacity to be maximized is that all insurers hold a proportionate share of the industry underwriting portfolio. The sufficient condition for capacity maximization, given a level of total resources in the industry, is for all insurers to hold a net-of-reinsurance underwriting portfolio which is perfectly correlated with aggregate industry losses. Based on these theoretical results, we derive an option-like model of insurer responses to catastrophes, leading to an insurer response function where the total payout, conditional on total industry losses, is a function of the industry and company expected losses, industry and company standard deviation of losses, company net worth, and the correlation between industry and company losses. The industry response function is obtained by summing the company response functions, giving the capacity of the industry to respond to losses of various magnitudes. We utilize 1997 insurer financial statement data to estimate the capacity of the industry to respond to catastrophic losses. Two samples of insurers are utilized - a national sample, to measure the capacity of the industry as a whole to respond to a national event, and a Florida sample, to measure the capacity of the industry to respond to a Florida hurricane. The empirical analysis estimates the capacity of the industry to bear losses ranging from the expected value of loss up to a loss equal to total company resources. We develop a measure of industry efficiency equal to the difference between the loss that would be paid if the industry acts as a single firm and the actual estimated payment based on our option model.
The results indicate that national industry efficiency ranges from about 78 to 85 percent, based on catastrophe losses ranging from zero to $300 billion, and from 70 to 77 percent, based on catastrophe losses ranging from $200 to $300 billion. The industry has more than adequate capacity to pay for catastrophes of moderate size. For example, based on both the national and Florida samples, the industry could pay at least 98.6 percent of a $20 billion catastrophe. For a catastrophe of $100 billion, the industry could pay at least 92.8 percent. However, even if most losses would be paid for an event of this magnitude, a significant number of insolvencies would occur, disrupting the normal functioning of the insurance market, not only for property insurance but also for other coverages. We also compare the capacity of the industry to respond to catastrophic losses based on 1997 capitalization levels with its capacity based on 1991 capitalization levels. The comparison is motivated by the sharp increase in capitalization following Hurricane Andrew and the Northridge earthquake. In 1991, the industry had $0.88 in equity capital per dollar of incurred losses, whereas in 1997 this ratio had increased to $1.56. Capacity results based on our model indicate a dramatic increase in capacity between 1991 and 1997. For a catastrophe of $100 billion, our lower bound estimate of industry capacity in 1991 is only 79.6 percent, based on the national sample, compared to 92.8 percent in 1997. For the Florida sample, we estimate that insurers could have paid at least 72.2 percent of a $100 billion catastrophe in 1991 and 89.7 percent in 1997. Thus, the industry is clearly much better capitalized now than it was prior to Andrew. The results suggest that the gaps in catastrophic risk financing are presently not sufficient to justify Federal government intervention in private insurance markets in the form of Federally sponsored catastrophe reinsurance. However, even though the industry could adequately fund the "Big One," doing so would disrupt the functioning of insurance markets and cause price increases for all types of property-liability insurance. Thus, it appears that there is still a gap in capacity that provides a role for privately and publicly traded catastrophic loss derivative contracts.
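A simplified stand-in for the option-like response function described above (illustrative only, not the paper's estimated model) is sketched below: conditional on an industry loss, each insurer's own loss is treated as lognormal around its market-share portion of that loss, its payout is capped at its resources, and industry capacity is the ratio of the summed expected payouts to the industry loss. The market shares, volatilities, and resource figures are hypothetical.

```python
# Simplified capacity sketch: each insurer pays min(own loss, own resources),
# with the own loss taken as lognormal around a market-share portion of the
# industry loss L. Capacity(L) = sum of expected payouts / L. All figures are
# hypothetical; the paper's response function also uses loss correlations and
# company-level standard deviations estimated from financial statement data.
import numpy as np
from scipy.stats import norm

def expected_capped_payout(mean_loss, cv, cap):
    """E[min(X, cap)] for a lognormal X with given mean and coefficient of variation."""
    sigma = np.sqrt(np.log(1 + cv ** 2))
    mu = np.log(mean_loss) - sigma ** 2 / 2
    d = (np.log(cap) - mu) / sigma
    return mean_loss * norm.cdf(d - sigma) + cap * (1 - norm.cdf(d))

def industry_capacity(L, shares, cvs, resources):
    payouts = [expected_capped_payout(s * L, cv, r)
               for s, cv, r in zip(shares, cvs, resources)]
    return sum(payouts) / L

shares = [0.4, 0.3, 0.2, 0.1]          # hypothetical market shares
cvs = [0.3, 0.5, 0.7, 1.0]             # coefficient of variation of each insurer's loss
resources = [150.0, 100.0, 60.0, 30.0] # net premiums plus equity, $ billions
for L in (20.0, 100.0, 300.0):         # industry catastrophe loss, $ billions
    print(f"L = ${L:.0f} billion   capacity = {industry_capacity(L, shares, cvs, resources):.3f}")
```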

    Organizational Form and Efficiency: An Analysis of Stock and Mutual Property-Liability Insurers

    This paper analyzes the efficiency of stock and mutual organizational forms in the property-liability insurance industry using nonparametric frontier efficiency methods. We test the managerial discretion hypothesis, which predicts that the market will sort organizational forms into market segments where they have comparative advantages in minimizing the costs of production, including agency costs. Both production and cost frontiers are estimated. The results indicate that stocks and mutuals are operating on separate production and cost frontiers and thus represent distinct technologies. The stock technology dominates the mutual technology for producing stock output vectors and the mutual technology dominates the stock technology for producing mutual output vectors. However, the stock cost frontier dominates the mutual cost frontier for the majority of both stock and mutual firms. Thus, the mutuals' technological advantage is eroded because they are less successful than stocks in choosing cost-minimizing combinations of inputs. The finding of separate frontiers and organization-specific technological advantages is consistent with the managerial discretion hypothesis, but we also find evidence that stocks are more successful than mutuals in minimizing costs.

    Consolidation and Efficiency in the U.S. Life Insurance Industry

    This paper examines the relationship between mergers and acquisitions, efficiency, and scale economies in the US life insurance industry. We estimate cost and revenue efficiency over the period 1988-1995 using data envelopment analysis (DEA). The Malmquist methodology is used to measure changes in efficiency over time. We find that acquired firms achieve greater efficiency gains than firms that have not been involved in mergers or acquisitions. Firms operating with non-decreasing returns to scale and financially vulnerable firms are more likely to be acquisition targets. Overall, mergers and acquisitions in the life insurance industry have had a beneficial effect on efficiency. Journal of Economic Literature classification codes: G2, G22, G34, L11. Keywords: efficiency, life insurance, mergers and acquisitions, scale economies, data envelopment analysis.
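    The Malmquist index mentioned above has a standard definition, reproduced here in its usual form (this is the textbook formula, not the paper's estimates): it compares a firm's distance to the frontier in two periods, evaluated against both periods' technologies, and values above one indicate productivity improvement.

```latex
% Output-oriented Malmquist productivity index between periods t and t+1,
% where D^t(.) is the distance function measured against the period-t frontier.
% The second form splits the index into efficiency change and technical change.
M_{t,t+1}
  = \left[
      \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}\,
      \frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t},y^{t})}
    \right]^{1/2}
  = \underbrace{\frac{D^{t+1}(x^{t+1},y^{t+1})}{D^{t}(x^{t},y^{t})}}_{\text{efficiency change}}
    \underbrace{\left[
      \frac{D^{t}(x^{t+1},y^{t+1})}{D^{t+1}(x^{t+1},y^{t+1})}\,
      \frac{D^{t}(x^{t},y^{t})}{D^{t+1}(x^{t},y^{t})}
    \right]^{1/2}}_{\text{technical change}}
```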

    The Incentive Effects of No Fault Automobile Insurance

    This paper presents a theoretical and empirical analysis of the effects of no fault automobile insurance on accident rates. As a mechanism for compensating the victims of automobile accidents, no fault has several important advantages over the tort system. However, by restricting access to tort, no fault may weaken incentives for careful driving, leading to higher accident rates. We conduct an empirical analysis of automobile accident fatality rates in all U.S. states over the period 1982-1994, controlling for the potential endogeneity of no fault laws. The results support the hypothesis that no fault is significantly associated with higher fatal accident rates than tort.
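    One way to see the endogeneity concern noted above is a small two-stage least squares sketch on synthetic data (the instrument, variable names, and data-generating process below are hypothetical and are not the paper's specification): an unobserved factor that raises the chance of no-fault adoption while lowering fatality rates biases a naive regression, whereas instrumenting the no-fault indicator recovers an estimate much closer to the true effect built into the simulation.

```python
# Two-stage least squares on synthetic data (hypothetical specification): the
# unobserved factor u drives both no-fault adoption and fatality rates, so the
# naive regression is biased; instrumenting adoption with z recovers an
# estimate close to the true coefficient of 0.3 used to generate the data.
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                              # hypothetical instrument
u = rng.normal(size=n)                              # unobserved confounder
no_fault = (0.8 * z + 0.5 * u + rng.normal(size=n) > 0).astype(float)
fatality = 1.0 + 0.3 * no_fault - 0.4 * u + rng.normal(scale=0.5, size=n)

ones = np.ones((n, 1))                              # intercept (add controls here)

# stage 1: project the endogenous no-fault indicator on the instrument
Z = np.hstack([ones, z[:, None]])
no_fault_hat = Z @ np.linalg.lstsq(Z, no_fault, rcond=None)[0]

# stage 2: regress the fatality rate on the fitted indicator
beta_2sls = np.linalg.lstsq(np.hstack([ones, no_fault_hat[:, None]]), fatality, rcond=None)[0]
beta_ols = np.linalg.lstsq(np.hstack([ones, no_fault[:, None]]), fatality, rcond=None)[0]
print(f"naive OLS coefficient: {beta_ols[1]:.3f}   2SLS coefficient: {beta_2sls[1]:.3f}")
```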

    The Coexistence of Multiple Distribution Systems for Financial Services: The Case of Property-Liability Insurance

    Property-liability insurance is distributed by two different types of firms: those that distribute their product through independent agents, who represent more than one insurer, and direct writing insurers that distribute insurance through exclusive agents, who represent only one insurer. This paper analyzes the reasons for the long-term coexistence of the independent agency and direct writing distribution systems. Two primary hypotheses explain the coexistence of independent and exclusive agents. The market imperfections hypothesis suggests that firms that use independent agents survive while providing essentially the same service as firms using exclusive agents because of market imperfections such as price regulation, slow diffusion of information in insurance markets, or search costs that permit inefficient firms to survive alongside efficient firms. Efficient firms are expected to earn super-normal risk-adjusted profits, while inefficient firms will earn risk-adjusted profits closer to normal levels. The product quality hypothesis suggests that the higher costs of independent agents represent unobserved differences in product quality or service intensity, such as providing additional customer assistance with claims settlement, offering a greater variety of product choices, and reducing policyholder search costs. This hypothesis predicts normal risk-adjusted profits for both independent and exclusive agency firms. Because product quality in insurance is essentially unobserved, researchers have been unable to reach consensus on whether the market imperfections hypothesis or the product quality hypothesis is more consistent with the observed cost data. This lack of consensus leaves open the economic question of whether the market works well in solving the problem of minimizing product distribution costs and leaves unresolved the policy issue of whether marketing costs in property-liability insurance are excessive and perhaps should receive regulatory attention. The authors propose a new methodology for distinguishing between market imperfections and product quality using frontier efficiency methods. They estimate both profit efficiency and cost efficiency for a sample of independent and exclusive agency insurers. Measuring profit efficiency helps to identify unobserved product quality differences because customers should be willing to pay extra for higher quality. This approach allows for the possibility that some firms may incur additional costs providing superior service and be compensated for these costs through higher revenues. Profit efficiency also implicitly incorporates the quality of loss control and risk management services, since insurers that more effectively control losses and manage risk should have higher average risk-adjusted profits but not necessarily lower costs than less effective insurers. The empirical results confirm that independent agency firms have higher costs on average than do direct writers. The principal finding of the study is that most of the average differential between the two groups of firms disappears in the profit function analysis. This is a robust result that holds both in the authors' tables of averages and in the regression analysis and applies to both the standard and non-standard profit functions.
Based on averages, the profit inefficiency differential is at most one-third as large as the cost inefficiency differential. Based on the regression analysis, the profit inefficiency differential is at most one-fourth as large as the cost inefficiency differential, and the profit inefficiency differential is not statistically significant in the more fully specified models that control for size, organizational form and business mix. The results provide strong support for the product quality hypothesis and do not support the market imperfections hypothesis. The higher costs of independent agents appear to be due almost entirely to the provision of higher quality services, which are compensated for by additional revenues. A significant public policy implication is that regulatory decisions should not be based on costs alone. The authors' findings imply that marketing cost differentials among insurers are mostly attributable to service differentials rather than to inefficiency and therefore do not represent social costs. The profit inefficiency results show that there is room for improvement in both the independent and direct writing segments of the industry. However, facilitating competition is likely to be a more effective approach to increasing efficiency than restrictive price regulation.
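A small numerical illustration of the logic behind this cost-versus-profit comparison (hypothetical figures, not the paper's frontier estimates): if an independent-agency insurer's higher distribution cost is matched by higher revenue for the extra service, its profit matches the benchmark even though a cost comparison alone makes it look inefficient.

```python
# Hypothetical one-firm comparison: the independent-agency insurer spends more
# on distribution but earns matching extra revenue for the added service, so it
# looks inefficient on a cost comparison yet fully efficient on profit.
benchmark   = {"revenue": 100.0, "cost": 80.0}   # exclusive-agency benchmark firm
independent = {"revenue": 112.0, "cost": 92.0}   # higher cost, higher revenue

cost_efficiency = benchmark["cost"] / independent["cost"]                # about 0.87
profit_efficiency = (independent["revenue"] - independent["cost"]) / \
                    (benchmark["revenue"] - benchmark["cost"])           # exactly 1.0
print(f"cost efficiency = {cost_efficiency:.2f}, profit efficiency = {profit_efficiency:.2f}")
```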

    Generating-function method for fusion rules

    This is the second of two articles devoted to an exposition of the generating-function method for computing fusion rules in affine Lie algebras. The present paper focuses on fusion rules, using the machinery developed for tensor products in the companion article. Although the Kac-Walton algorithm provides a method for constructing a fusion generating function from the corresponding tensor-product generating function, we describe a more powerful approach which starts by first defining the set of fusion elementary couplings from a natural extension of the set of tensor-product elementary couplings. A set of inequalities involving the level is derived from this set using Farkas' lemma. These inequalities, taken in conjunction with the inequalities defining the tensor products, define what we call the fusion basis. Given this basis, the machinery of our previous paper may be applied to construct the fusion generating function. New generating functions for sp(4) and su(4), together with a closed-form expression for their threshold levels, are presented. Comment: Harvmac (b mode : 47 p) and Pictex; to appear in J. Math. Phys.
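    For a concrete handle on these objects, the sketch below works out the simplest case, affine su(2) at level k, where the classical tensor-product range of spins is truncated by the level and each allowed coupling first appears at a threshold level k0 = j1 + j2 + j3. This is only an illustration of how level-dependent inequalities enter fusion rules; the paper's generating-function machinery targets higher-rank algebras such as sp(4) and su(4), which are not treated here.

```python
# Fusion coefficients of affine su(2) at level k (spins j are multiples of 1/2):
# N_{j1 j2}^{j3} = 1 iff j1 + j2 + j3 is an integer and
# |j1 - j2| <= j3 <= min(j1 + j2, k - j1 - j2), otherwise 0. The level-dependent
# upper bound is the kind of inequality a fusion basis encodes in general.
def su2_fusion(j1, j2, j3, k):
    if (round(2 * j1) + round(2 * j2) + round(2 * j3)) % 2 == 1:
        return 0                                   # j1 + j2 + j3 must be an integer
    if abs(j1 - j2) <= j3 <= min(j1 + j2, k - j1 - j2):
        return 1
    return 0

def threshold_level(j1, j2, j3):
    """Smallest level at which an allowed tensor-product coupling appears in fusion."""
    return int(j1 + j2 + j3)

k = 2
weights = [0, 0.5, 1]                              # integrable su(2) weights at level 2
for j1 in weights:
    for j2 in weights:
        allowed = [j3 for j3 in weights if su2_fusion(j1, j2, j3, k)]
        print(f"{j1} x {j2} -> {allowed}")
print("threshold level of the coupling (1/2, 1/2, 1):", threshold_level(0.5, 0.5, 1))
```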