
    Use of composite rotations to correct systematic errors in NMR quantum computation

    We implement an ensemble quantum counting algorithm on three NMR spectrometers with 1H resonance frequencies of 500, 600 and 750 MHz. At higher frequencies, the results deviate markedly from naive theoretical predictions. These systematic errors can be attributed almost entirely to off-resonance effects, which can be substantially corrected for using fully compensating composite rotation pulse sequences originally developed by Tycko. We also derive an analytic expression for generating such sequences with arbitrary rotation angles. Comment: 8 pages RevTex including 7 PostScript figures (18 subfigures).
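
    The off-resonance effect described in this abstract is easy to reproduce numerically. The sketch below is a minimal illustration only: it propagates a single spin-1/2 through pulses whose Hamiltonian includes a detuning term and compares the inversion achieved by a plain 180-degree pulse with that of a generic three-pulse composite (90x-180y-90x). The composite sequence, detuning values and normalisation are assumptions chosen for illustration; they are not the Tycko sequences analysed in the paper.

# Minimal numerical sketch (not the paper's pulse sequences): propagate a
# spin-1/2 through pulses whose Hamiltonian includes an off-resonance
# detuning term, and compare the inversion of a plain 180 pulse with an
# illustrative 90x-180y-90x composite. omega_1 is set to 1, so the detuning
# delta is expressed as a fraction of the RF field strength.
import numpy as np
from scipy.linalg import expm

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def pulse(theta, phi, delta):
    """Propagator for a pulse of nominal flip angle theta (rad) and RF phase phi (rad)."""
    h = 0.5 * (np.cos(phi) * sx + np.sin(phi) * sy) + 0.5 * delta * sz
    return expm(-1j * theta * h)

def inversion(sequence, delta):
    """Population transferred from |0> to |1> after applying the pulses in order."""
    u = np.eye(2, dtype=complex)
    for theta, phi in sequence:
        u = pulse(theta, phi, delta) @ u
    return abs(u[1, 0]) ** 2

plain = [(np.pi, 0.0)]                                                # 180x
composite = [(np.pi / 2, 0.0), (np.pi, np.pi / 2), (np.pi / 2, 0.0)]  # 90x 180y 90x

for delta in (0.0, 0.1, 0.2, 0.3):
    print(f"delta/omega1 = {delta:.1f}: plain = {inversion(plain, delta):.4f}, "
          f"composite = {inversion(composite, delta):.4f}")

    As the detuning grows, the plain pulse loses inversion fidelity, which is the qualitative off-resonance behaviour the abstract attributes to the higher-field spectrometers.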

    Analyzing Firm Performance in the Insurance Industry Using Frontier Efficiency Methods

    In this introductory chapter to an upcoming book, the authors discuss the two principal types of efficiency frontier methodologies: the econometric (parametric) approach and the mathematical programming (non-parametric) approach. Frontier efficiency methodologies are useful in a variety of contexts: they can be used for testing economic hypotheses; providing guidance to regulators and policymakers; comparing economic performance across countries; and informing management of the effects of procedures and strategies adopted by the firm. The econometric approach requires the specification of a production, cost, revenue, or profit function as well as assumptions about error terms, but it is vulnerable to errors in the specification of the functional form or error term. The mathematical programming or linear programming approach avoids this type of error and measures any departure from the frontier as a relative inefficiency. Because each of these methods has advantages and disadvantages, it is recommended to estimate efficiency using more than one method. An important step in efficiency analysis is the definition of inputs and outputs and their prices. Insurer inputs can be classified into three principal groups: labor, business services and materials, and capital. Three principal approaches have been used to measure outputs in the financial services sector: the asset or intermediation approach, the user-cost approach, and the value-added approach. The asset approach treats firms as pure financial intermediaries and would be inappropriate for insurers because they provide other services. The user-cost method determines whether a financial product is an input or output based on its net contribution to the revenues of the firm; this method requires precise data on products, revenues and opportunity costs, which are difficult to estimate in insurance. The value-added approach is judged the most appropriate method for studying insurance efficiency. It considers all asset and liability categories to have some output characteristics rather than distinguishing inputs from outputs. In order to measure efficiency in the insurance industry, in which outputs are mostly intangible, measurable services must be defined. The three principal services provided by insurance companies are risk pooling and risk-bearing, "real" financial services relating to insured losses, and intermediation. The authors discuss how these services can be measured as outputs in value-added analysis. They then summarize the existing efficiency literature.
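
    As a concrete illustration of the mathematical programming approach discussed above, the sketch below solves a minimal input-oriented, constant-returns-to-scale data envelopment analysis (DEA) model as a linear programme with scipy. The three firms, the two inputs (labour, capital) and the single output are invented numbers used purely to show the mechanics; they are not data from the chapter.

# Minimal sketch of an input-oriented, constant-returns-to-scale DEA model
# (the "mathematical programming" approach) solved as a linear programme.
# The firms, inputs and output below are invented, purely for illustration.
import numpy as np
from scipy.optimize import linprog

# rows = firms, columns = inputs / outputs
X = np.array([[20.0, 300.0],   # firm A: labour, capital
              [30.0, 200.0],   # firm B
              [40.0, 500.0]])  # firm C
Y = np.array([[100.0],         # firm A: output (e.g. a premium or loss measure)
              [120.0],
              [150.0]])

def dea_efficiency(o):
    """Technical efficiency of firm o relative to the frontier spanned by all firms.
    Decision variables: [theta, lambda_1, ..., lambda_n]."""
    n, m = X.shape          # n firms, m inputs
    s = Y.shape[1]          # s outputs
    c = np.r_[1.0, np.zeros(n)]                    # minimise theta
    # inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[[o]].T, X.T])
    b_in = np.zeros(m)
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out])
    return res.x[0]

for o, name in enumerate("ABC"):
    print(f"firm {name}: efficiency = {dea_efficiency(o):.3f}")

    The score theta for each firm is the smallest factor by which its inputs could be scaled down while still producing its outputs within the technology spanned by the sample; a score of 1 places the firm on the frontier.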

    Feed intake pattern, behaviour, rumen characteristics and blood metabolites of finishing beef steers offered total mixed rations constituted at feeding or ensiling

    Two experiments were undertaken. In Experiment 1, behaviour, intake pattern and blood metabolites were recorded for steers offered total mixed rations (TMR) based on grass silage and concentrates, and constituted either at ensiling (E-TMR) or feedout (F-TMR). Fourteen continental crossbred steers (mean starting weight 505 (s.d. 41.5) kg) were assigned to each of the following eight treatments: grass silage offered ad libitum (SO), E-TMR diets constituted in approximate dry matter (DM) ratios of grass silage:concentrates of 75:25 (EL), 50:50 (EM) and 25:75 (EH), F-TMR diets constituted in approximate DM ratios of grass silage:concentrates of 75:25 (FL), 50:50 (FM) and 25:75 (FH), and finally concentrates ad libitum (AL). Total DM intake increased linearly (P < 0.001) and the time spent eating and ruminating decreased linearly (P < 0.001) with increasing concentrate proportion. Animals on the F-TMR diets had higher total DM intakes (P < 0.05) and plasma glucose (P < 0.05) and urea (P < 0.001) concentrations than animals on the corresponding E-TMR diets. No effect of method of feed preparation on intake pattern or behaviour was recorded. In Experiment 2, four ruminally cannulated Holstein-Friesian steers of mean initial live weight 630 (s.d. 23.2) kg were used to evaluate rumen characteristics for four of the above diets (FL, EL, FH and EH) in a 4 × 4 Latin square design. Higher concentrate diets resulted in lower rumen pH (P < 0.05), higher lactic acid (P < 0.001) and ammonia (P < 0.05) concentrations and a lower acetate:propionate ratio (P < 0.05). F-TMR was associated with a higher (P < 0.05) rumen volatile fatty acid concentration but no difference in other rumen fermentation characteristics compared to E-TMR. Concentrate proportion and method of feed preparation had no effect (P > 0.05) on rumen pool sizes, but animals consuming the high concentrate diet had a faster (P < 0.05) rumen passage rate of NDF than animals on the low concentrate diet. B. Cummins was in receipt of a Teagasc Walsh Fellowship.

    Conservation characteristics of grass and dry sugar beet pulp co-ensiled after different degrees of mixing

    The objective of this experiment was to quantify the effects of the degree of mixing of dry molassed sugar beet pulp (BP) with grass on silage conservation characteristics. Herbage from a timothy (Phleum pratense) sward was precision chopped and treated with a formic acid based additive (3 l/t grass). Units of 50 kg grass, without or with 2.5 kg BP, were randomly allocated among four replicates on each of seven treatments. The treatments were (1) no BP (NONE), (2) BP evenly mixed through the grass (EVEN), (3) BP evenly mixed through the lower 25 kg grass (LOWH), (4) BP evenly mixed through the lower 12.5 kg grass (LOWQ), (5) 0.625 kg BP mixed through the top 25 kg grass and 1.875 kg BP mixed through the lower 25 kg grass (25/75), (6) BP placed in 0.5 kg layers beneath each 10 kg grass (LAYR), and (7) BP placed in a single layer under all of the grass (BOTM). Laboratory silos were filled and sealed, and stored at 15 °C for 163 days. Effluent was collected and weighed from each silo throughout the ensilage period. At opening, silage composition and aerobic stability measurements were made. Total outflow of effluent was reduced (P<0.001) by the addition of BP; LAYR had a greater effect (P<0.001) than any of the other treatments. Effluent dry matter (DM) concentration was highest (P<0.05) for BOTM and lowest (P<0.01) for NONE. All treatments underwent similar lactic-acid dominant fermentations. Incorporation of BP with grass increased silage DM concentration (P<0.001), in vitro DM digestibility (P<0.05) and water soluble carbohydrate concentration (P<0.001), and reduced acid detergent fibre concentration (P<0.001). Aerobic stability was similar across treatments and aerobic deterioration at 192 h was higher (P<0.05) for LOWQ, 25/75, LAYR and BOTM than for NONE. In conclusion, the incorporation of BP increased silage DM digestibility but had relatively little effect on fermentation or aerobic stability. Placing BP in layers gave the largest and most sustained restriction in effluent output. B. Cummins acknowledges receipt of a Walsh Fellowship provided by Teagasc.

    Effects of breed type, silage harvest date and pattern of offering concentrates on intake, performance and carcass traits of finishing steers

    The objective of this experiment was to investigate the effects and interactions of breed type, silage harvest date and pattern of offering concentrates on intake, performance and carcass traits of finishing steers. Seventy-two steers (36 Friesian and 36 beef cross) were blocked on weight within breed type and assigned to a pre-experimental slaughter group or to one of 4 dietary treatments in a 2 (breed type) × 2 (early- or late-cut silage) × 2 (flat-rate or varied pattern of offering concentrates) factorial arrangement of treatments. The flat-rate feeding pattern was silage ad libitum plus 5 kg concentrates per head daily to slaughter. The varied feeding pattern was silage only for 79 days followed by concentrates ad libitum to slaughter. All animals were slaughtered together after 164 days when the groups on the two feeding patterns had consumed the same total quantity of concentrates. Friesians had a higher (P < 0.001) silage dry matter (DM) intake and a higher (P < 0.01) total DM intake than the beef crosses. Live-weight gain was similar for both breed types but the beef-cross animals had a higher (P < 0.001) kill-out proportion, higher (P < 0.01) carcass gain, and better (P < 0.001) carcass conformation than the Friesians. The beef-cross type also had a higher (P < 0.001) proportion of muscle and a lower (P < 0.001) proportion of bone in the carcass. Silage harvest date had no effect on silage or total DM intakes but the early-cut silage did result in higher (P < 0.01) carcass gain. Animals on the varied feeding pattern consumed less (P < 0.01) silage DM and less (P < 0.001) total DM than those on the flat-rate feeding pattern. Live-weight gain and carcass gain were similar for the two feeding patterns. It is concluded that Friesians had a higher intake but lower carcass gain than the beef-cross type. Animals on the early-cut silage had higher carcass gain than those on the late-cut silage. The varied feeding pattern resulted in lower DM intake but efficiency of feed energy utilisation was similar for both feeding patterns. Interactions were generally not statistically significant.

    Can Insurers Pay for the "Big One"? Measuring the Capacity of an Insurance Market to Respond to Catastrophic Losses

    This paper presents a theoretical and empirical analysis of the capacity of the U.S. property-liability insurance industry to finance major catastrophic property losses. The topic is important because catastrophic events such as the Northridge earthquake and Hurricane Andrew have raised questions about the ability of the insurance industry to respond to the "Big One," usually defined as a hurricane or earthquake in the $100 billion range. At first glance, the U.S. property-liability insurance industry, with equity capital of more than $300 billion, should be able to sustain a loss of this magnitude. However, the reality could be different, depending on the distribution of damage and the spread of coverage as well as the correlations between insurer losses and industry losses. Thus, the prospect of a mega-catastrophe brings the real threat of widespread insurance failures and unpaid insurance claims. Our theoretical analysis takes as its starting point the well-known article by Borch (1962), which shows that the Pareto-optimal result in a market characterized by risk-averse insurers is for each insurer to hold a proportion of the "market portfolio" of insurance contracts. Each insurer pays a proportion of total industry losses, and the industry behaves as a single firm, paying 100 percent of losses up to the point where industry net premiums and equity are exhausted. Borch's theorem gives rise to a natural definition of industry capacity as the amount of industry resources that are deliverable conditional on an industry loss of a given size. In our theoretical analysis, we show that the necessary condition for industry capacity to be maximized is that all insurers hold a proportionate share of the industry underwriting portfolio. The sufficient condition for capacity maximization, given a level of total resources in the industry, is for all insurers to hold a net-of-reinsurance underwriting portfolio which is perfectly correlated with aggregate industry losses. Based on these theoretical results, we derive an option-like model of insurer responses to catastrophes, leading to an insurer response function where the total payout, conditional on total industry losses, is a function of the industry and company expected losses, industry and company standard deviation of losses, company net worth, and the correlation between industry and company losses. The industry response function is obtained by summing the company response functions, giving the capacity of the industry to respond to losses of various magnitudes. We utilize 1997 insurer financial statement data to estimate the capacity of the industry to respond to catastrophic losses. Two samples of insurers are utilized: a national sample, to measure the capacity of the industry as a whole to respond to a national event, and a Florida sample, to measure the capacity of the industry to respond to a Florida hurricane. The empirical analysis estimates the capacity of the industry to bear losses ranging from the expected value of loss up to a loss equal to total company resources. We develop a measure of industry efficiency equal to the difference between the loss that would be paid if the industry acts as a single firm and the actual estimated payment based on our option model. The results indicate that national industry efficiency ranges from about 78 to 85 percent, based on catastrophe losses ranging from zero to $300 billion, and from 70 to 77 percent, based on catastrophe losses ranging from $200 to $300 billion. The industry has more than adequate capacity to pay for catastrophes of moderate size. For example, based on both the national and Florida samples, the industry could pay at least 98.6 percent of a $20 billion catastrophe. For a catastrophe of $100 billion, the industry could pay at least 92.8 percent. However, even if most losses would be paid for an event of this magnitude, a significant number of insolvencies would occur, disrupting the normal functioning of the insurance market, not only for property insurance but also for other coverages. We also compare the capacity of the industry to respond to catastrophic losses based on 1997 capitalization levels with its capacity based on 1991 capitalization levels. The comparison is motivated by the sharp increase in capitalization following Hurricane Andrew and the Northridge earthquake. In 1991, the industry had $0.88 in equity capital per dollar of incurred losses, whereas in 1997 this ratio had increased to $1.56. Capacity results based on our model indicate a dramatic increase in capacity between 1991 and 1997. For a catastrophe of $100 billion, our lower-bound estimate of industry capacity in 1991 is only 79.6 percent, based on the national sample, compared to 92.8 percent in 1997. For the Florida sample, we estimate that insurers could have paid at least 72.2 percent of a $100 billion catastrophe in 1991 and 89.7 percent in 1997. Thus, the industry is clearly much better capitalized now than it was prior to Andrew. The results suggest that the gaps in catastrophic risk financing are presently not sufficient to justify Federal government intervention in private insurance markets in the form of Federally sponsored catastrophe reinsurance. However, even though the industry could adequately fund the "Big One," doing so would disrupt the functioning of insurance markets and cause price increases for all types of property-liability insurance. Thus, it appears that there is still a gap in capacity that provides a role for privately and publicly traded catastrophic loss derivative contracts.
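
    The payout logic behind the response function can be illustrated with a small simulation. The sketch below is a stylised Monte Carlo version of the idea only: each insurer pays the smaller of its own loss and its resources, and summing those payments and dividing by the industry loss gives the fraction of a catastrophe the industry absorbs. The number of firms, resource levels, market shares, lognormal shocks and correlation are invented for illustration and are not the paper's calibrated option model.

# Stylised Monte Carlo illustration of the "response function" idea: each
# insurer pays min(its own loss, its resources); summing across insurers and
# dividing by the industry loss gives the fraction of a catastrophe the
# industry can absorb. All numbers below are invented for illustration only.
import numpy as np

rng = np.random.default_rng(0)

n_firms = 50
resources = rng.uniform(2.0, 10.0, n_firms)   # net worth plus premiums per firm, $bn (invented)
shares = rng.dirichlet(np.ones(n_firms))      # market shares of industry losses (invented)

def fraction_paid(industry_loss, corr=0.8, sigma=0.3, n_sims=20_000):
    """Expected fraction of an industry loss (in $bn) actually paid when each firm's
    loss is its share of the industry loss times a mean-one lognormal shock that is
    imperfectly correlated across firms."""
    common = rng.standard_normal((n_sims, 1))
    idio = rng.standard_normal((n_sims, n_firms))
    z = np.sqrt(corr) * common + np.sqrt(1.0 - corr) * idio
    shocks = np.exp(sigma * z - 0.5 * sigma ** 2)    # mean-one lognormal shocks
    firm_losses = industry_loss * shares * shocks
    payments = np.minimum(firm_losses, resources)    # no firm pays more than it has
    return payments.sum(axis=1).mean() / firm_losses.sum(axis=1).mean()

for loss in (20, 100, 200, 300):
    print(f"industry loss of ${loss} billion: fraction paid = {fraction_paid(loss):.3f}")

    With these invented numbers the fraction paid is close to one for small events and declines as the industry loss approaches total industry resources, which is the qualitative pattern the abstract reports.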

    Organizational Form and Efficiency: An Analysis of Stock and Mutual Property-Liability Insurers

    This paper analyzes the efficiency of stock and mutual organizational forms in the property-liability insurance industry using nonparametric frontier efficiency methods. We test the managerial discretion hypothesis, which predicts that the market will sort organizational forms into market segments where they have comparative advantages in minimizing the costs of production, including agency costs. Both production and cost frontiers are estimated. The results indicate that stocks and mutuals are operating on separate production and cost frontiers and thus represent distinct technologies. The stock technology dominates the mutual technology for producing stock output vectors and the mutual technology dominates the stock technology for producing mutual output vectors. However, the stock cost frontier dominates the mutual cost frontier for the majority of both stock and mutual firms. Thus, the mutuals' technological advantage is eroded because they are less successful than stocks in choosing cost-minimizing combinations of inputs. The finding of separate frontiers and organization specific technological advantages is consistent with the managerial discretion hypothesis, but we also find evidence that stocks are more successful than mutuals in minimizing costs.
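
    The cross-frontier comparison described above can be sketched in the same DEA framework used for efficiency analysis: a firm's efficiency is computed twice, once against the frontier spanned by its own organizational form and once against the frontier spanned by the other form. The sketch below uses invented data and an invented stock/mutual split purely to show how the reference set changes; it is not the paper's estimation.

# Minimal sketch of the cross-frontier idea: a firm's efficiency is measured
# against a frontier spanned only by a chosen reference group of firms.
# The data and the split into "stock" and "mutual" firms are invented.
import numpy as np
from scipy.optimize import linprog

X = np.array([[20.0, 300.0], [30.0, 200.0], [40.0, 500.0], [25.0, 350.0]])  # inputs
Y = np.array([[100.0], [120.0], [150.0], [110.0]])                          # outputs
is_stock = np.array([True, True, False, False])                             # invented labels

def efficiency(o, reference):
    """Input-oriented CRS efficiency of firm o against the frontier of `reference` firms."""
    Xr, Yr = X[reference], Y[reference]
    n = Xr.shape[0]
    c = np.r_[1.0, np.zeros(n)]                       # minimise theta
    A_ub = np.vstack([np.hstack([-X[[o]].T, Xr.T]),   # input constraints
                      np.hstack([np.zeros((Y.shape[1], 1)), -Yr.T])])  # output constraints
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    return linprog(c, A_ub=A_ub, b_ub=b_ub).x[0]

o = 0  # a "stock" firm
own = efficiency(o, is_stock)      # against its own group's frontier
cross = efficiency(o, ~is_stock)   # against the other group's frontier
print(f"own-frontier: {own:.3f}, cross-frontier: {cross:.3f}")

    A cross-frontier score below one means the other group's technology could produce the firm's output vector with proportionally fewer inputs, which is the kind of dominance comparison the paper carries out across the full stock and mutual samples.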