
    Increased hazard of myocardial infarction with insulin‐provision therapy in actively smoking patients with diabetes mellitus and stable ischemic heart disease: The BARI 2D (Bypass Angioplasty Revascularization Investigation 2 Diabetes) trial

    Get PDF
    Background In the BARI 2D (Bypass Angioplasty Revascularization Investigation 2 Diabetes) trial, randomization of diabetic patients with stable ischemic heart disease to insulin-provision (IP) therapy, as opposed to insulin-sensitization (IS) therapy, resulted in biochemical evidence of impaired fibrinolysis but no increase in adverse clinical outcomes. We hypothesized that the prothrombotic effect of IP therapy, combined with the hypercoagulable state induced by active smoking, would result in an increased risk of myocardial infarction (MI). Methods and Results We analyzed BARI 2D patients who were active smokers randomized to IP or IS therapy. The primary end point was fatal or nonfatal MI. PAI-1 (plasminogen activator inhibitor 1) activity was analyzed at 1, 3, and 5 years. Of 295 active smokers, MI occurred in 15.4% randomized to IP and 6.8% randomized to IS over the 5.3-year follow-up (P=0.023). IP therapy was associated with a 3.2-fold increase in the hazard of MI compared with IS therapy (hazard ratio: 3.23; 95% confidence interval, 1.43–7.28; P=0.005). Baseline PAI-1 activity (19.0 versus 17.5 AU/mL, P=0.70) was similar in actively smoking patients randomized to IP or IS therapy. However, IP therapy resulted in significantly increased PAI-1 activity at 1 year (23.0 versus 16.0 AU/mL, P=0.001), 3 years (24.0 versus 18.0 AU/mL, P=0.049), and 5 years (29.0 versus 15.0 AU/mL, P=0.004) compared with IS therapy. Conclusions Among diabetic patients with stable ischemic heart disease who were actively smoking, IP therapy was independently associated with a significantly increased hazard of MI. This finding may be explained by higher PAI-1 activity in active smokers treated with IP therapy. Clinical Trial Registration URL: http://www.clinicaltrials.gov. Unique identifier: NCT00006305.
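The reported hazard ratio and confidence interval are consistent with an interval that is symmetric on the log scale, as is standard for Cox proportional-hazards estimates. A minimal sanity-check sketch (not the trial's actual analysis code) recovers the point estimate as the geometric midpoint of the CI bounds:

```python
import math

def hr_from_ci(lo, hi, z=1.96):
    """Recover the point estimate and log-scale standard error from a 95% CI
    that is symmetric on the log scale (standard for Cox hazard ratios)."""
    log_lo, log_hi = math.log(lo), math.log(hi)
    hr = math.exp((log_lo + log_hi) / 2)  # geometric midpoint of the CI
    se = (log_hi - log_lo) / (2 * z)      # standard error of log-HR
    return hr, se

hr, se = hr_from_ci(1.43, 7.28)  # hr comes out ~3.23, matching the abstract
```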

    DEMAND ESTIMATION FOR AGRICULTURAL PROCESSING CO-PRODUCTS

    Get PDF
    Co-products of processing agricultural commodities are often marketed through private transactions rather than through public markets or markets in which public transaction information is recorded or available. The resulting lack of historical price information prohibits the use of positive time-series techniques to estimate demand. Demand estimates for co-products are of value both to livestock producers, who obtain them for use in livestock rations, and to processors, who must sell or otherwise dispose of them. Linear programming has long been used, first by researchers and later as a mainstream tool for nutritionists and producers, to formulate least-cost livestock rations. Here it is used as a normative technique to estimate step-function demand schedules for co-products by individual livestock classes within a crop-reporting district. Regression is then used to smooth the step-function demand schedules by fitting demand data to generalized Leontief cost functions. Seemingly unrelated regression is used to estimate factor demand, first adjusted for data censoring using probit analysis. Demand by individual livestock classes is aggregated over the number of livestock within a region. Quantities demanded by beef cows for each of the three co-products considered, sugarbeet pulp, wheat middlings, and potato waste, are large relative to other species because of the predominance of beef cows in the district. At the current price for sugarbeet pulp, the quantity demanded by district livestock is low. However, quantity demanded is price elastic and becomes much greater at lower prices. Wheat middlings can be an important component of livestock rations, even at higher prices. At a price slightly below the current price, local livestock demand would exhaust the wheat middlings produced at the district's only wheat processing plant. Potato waste is most appropriate for ruminant diets because these animals are able to consume a large quantity of this high-moisture feedstuff.
Potato waste can be a cost-effective component in beef and dairy rations. Practically, livestock markets for potato waste must be in close proximity to a potato processing plant: its high moisture content limits the distance it can be economically transported. At current prices, potato waste can be economically included in the ration for beef cows on a farm nearly 100 miles from the processing plant, although storage challenges may restrict use of the feed to closer operations. Keywords: co-products, demand estimation, econometrics, linear programming, Agribusiness.
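The least-cost ration formulation described above is a standard linear program: minimize feed cost subject to minimum nutrient requirements. A minimal sketch (feed prices and nutrient values are hypothetical, not the study's data, and scipy stands in for whatever solver the authors used):

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical feeds and costs ($/lb): corn, sugarbeet pulp, wheat middlings
cost = np.array([0.08, 0.05, 0.06])

# Hypothetical nutrient content per lb of feed: [energy (Mcal), protein (lb)]
nutrients = np.array([[1.5, 0.09],   # corn
                      [1.2, 0.10],   # sugarbeet pulp
                      [1.3, 0.17]])  # wheat middlings

requirements = np.array([25.0, 2.0])  # daily minimums for one animal

# linprog minimizes cost @ x subject to A_ub @ x <= b_ub, so the
# "at least" nutrient constraints are negated into "<=" form
res = linprog(cost, A_ub=-nutrients.T, b_ub=-requirements, bounds=(0, None))
```

Repeating the solve over a grid of co-product prices traces out the step-function demand schedule the abstract describes: at each price, the optimal ration's co-product quantity is one point on the schedule.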


    Pressure on charged domain walls and additional imprint mechanism in ferroelectrics

    Full text link
    The impact of free charges on the local pressure exerted on a charged ferroelectric domain wall by an electric field has been analyzed. A general formula for the local pressure on a charged domain wall is derived, considering full or partial compensation of bound polarization charges by free charges. It is shown that the compensation can lead to a very strong reduction of the pressure imposed on the wall by the electric field. In some cases this pressure can be governed by small nonlinear effects. It is concluded that free-charge compensation of bound polarization charges can lead to a substantial reduction of the domain wall mobility even when the mobility of free charge carriers is high. This mobility reduction gives rise to an additional imprint mechanism which may play an essential role in the switching properties of ferroelectric materials. The effect of the pressure reduction on compensated charged domain walls is illustrated for the case of 180-degree ferroelectric domain walls and of 90-degree ferroelectric domain walls with the head-to-head configuration of the spontaneous polarization vectors. Comment: submitted to PRB. This version is extended by an appendix.

    Time relaxation of interacting single-molecule magnets

    Full text link
    We study the relaxation of interacting single-molecule magnets (SMMs) in both spatially ordered and disordered systems. The tunneling window is assumed to be, as in Fe_8, much narrower than the dipolar field spread. We show that relaxation in disordered systems differs qualitatively from relaxation in fully occupied cubic and Fe_8 lattices. We also study how the line shapes that develop in "hole-digging" experiments evolve with time t in these fully occupied lattices. We show (1) that the dipolar field h scales as t^p in these hole line shapes and (2) how p varies with lattice structure. Line shapes are not, in general, Lorentzian. More specifically, in the lower portion of the hole, they behave as (h/t^p)^{(1/p)-1} if h is outside the tunnel window. This is in agreement with experiment and with our own Monte Carlo results. Comment: 21 LaTeX pages, 6 eps figures. Submitted to PRB on 15 June 2005. Accepted on 13 August 200
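The quoted lower-hole form depends on h and t only through the combination h/t^p, so hole profiles measured at different times collapse onto a single curve when plotted against the rescaled field. A minimal sketch of that scaling (parameter values are illustrative, not from the paper):

```python
def hole_lineshape(h, t, p):
    """Lower-portion hole line shape (h/t^p)^{(1/p)-1}, the scaling form
    quoted in the abstract for fields h outside the tunnel window."""
    return (h / t**p) ** (1.0 / p - 1.0)

# Scaling collapse: rescaling t -> s*t and h -> s^p * h leaves the
# line shape unchanged, because it depends only on h/t^p
f1 = hole_lineshape(0.2, 1.0, 0.5)
f2 = hole_lineshape(0.2 * 2**0.5, 2.0, 0.5)  # s = 2, p = 0.5
```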

    The optimal polarizations for achieving maximum contrast in radar images

    Get PDF
    There is considerable interest in determining the optimal polarizations that maximize contrast between two scattering classes in polarimetric radar images. A systematic approach is presented for obtaining the optimal polarimetric matched filter, i.e., the filter that produces maximum contrast between two scattering classes. The maximization procedure involves solving an eigenvalue problem in which the eigenvector corresponding to the maximum contrast ratio is the optimal polarimetric matched filter. To exhibit the physical significance of this filter, it is transformed into its associated transmitting and receiving polarization states, written in terms of horizontal and vertical vector components. For the special case where the transmitting polarization is fixed, the receiving polarization that maximizes the contrast ratio is also obtained. Polarimetric filtering is then applied to synthetic aperture radar images obtained from the Jet Propulsion Laboratory. It is shown, both numerically and through the use of radar imagery, that maximum image contrast can be realized when data are processed with the optimal polarimetric matched filter.
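The eigenvalue formulation can be sketched as a generalized eigenproblem: if A and B are covariance-like matrices for the two scattering classes (here randomly generated stand-ins, not radar data), the filter w maximizing the contrast ratio (w^T A w)/(w^T B w) is the top generalized eigenvector of the pencil (A, B):

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

def spd(n):
    """Random symmetric positive-definite matrix, standing in for a class
    covariance; real use would estimate these from polarimetric data."""
    a = rng.standard_normal((n, n))
    return a @ a.T + n * np.eye(n)

A, B = spd(3), spd(3)  # hypothetical covariances of the two classes

# The contrast ratio (w^T A w)/(w^T B w) is a generalized Rayleigh
# quotient; its maximum is the largest generalized eigenvalue of (A, B)
vals, vecs = eigh(A, B)
w = vecs[:, -1]                       # filter achieving maximum contrast
contrast = (w @ A @ w) / (w @ B @ w)  # equals the largest eigenvalue
```

Any other filter vector yields a smaller ratio, which is exactly the optimality property the abstract describes.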

    Radius Dependent Luminosity Evolution of Blue Galaxies in GOODS-N

    Get PDF
    We examine the radius-luminosity (R-L) relation for blue galaxies in the Team Keck Redshift Survey (TKRS) of GOODS-N. We compare with a volume-limited Sloan Digital Sky Survey sample and find that the R-L relation has evolved to lower surface brightness since z=1. Based on the detection limits of GOODS, this cannot be explained by incompleteness in low-surface-brightness galaxies. Number density arguments rule out a pure radius evolution. It can be explained by a radius-dependent decline in B-band luminosity with time. Assuming a linear shift in M_B with z, we use a maximum likelihood method to quantify the evolution. Under these assumptions, large (R_{1/2} > 5 kpc) and intermediate-sized (3 < R_{1/2} < 5 kpc) galaxies have experienced Delta M_B = 1.53 (-0.10, +0.13) and 1.65 (-0.18, +0.08) magnitudes of dimming since z=1. A simple exponential decline in star formation with an e-folding time of 3 Gyr can result in this amount of dimming. Meanwhile, small galaxies, or some subset thereof, have experienced more evolution, 2.55 (+/- 0.38) magnitudes. This factor-of-ten decline in luminosity can be explained by sub-samples of starbursting dwarf systems that fade rapidly, coupled with a decline in burst strength or frequency. Samples of bursting, luminous, blue, compact galaxies at intermediate redshifts have been identified by various previous studies. If there has been some growth in galaxy size with time, these measurements are upper limits on luminosity fading. Comment: 34 total pages, 15 written pages, 19 pages of data tables, 13 figures, accepted for publication in Ap
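For a luminosity that declines as exp(-t/tau), the fading in magnitudes is Delta M = 2.5 log10(e) * Delta t / tau, about 1.086 magnitudes per e-folding time. A minimal sketch of this conversion (illustrative only: the B-band light of a real stellar population does not track the instantaneous star-formation rate exactly, so the paper's quoted dimming values require population-synthesis modeling):

```python
import math

def dimming_mag(delta_t_gyr, tau_gyr):
    """Magnitudes of fading for luminosity L(t) = L0 * exp(-t/tau):
    Delta M = 2.5 * log10(L0 / L(t)) = 2.5 * log10(e) * (t / tau)."""
    return 2.5 * math.log10(math.e) * delta_t_gyr / tau_gyr
```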

    Suspensions of supracolloidal magnetic polymers: self-assembly properties from computer simulations

    Full text link
    We study self-assembly in suspensions of supracolloidal polymer-like structures made of crosslinked magnetic particles. Inspired by self-assembly motifs observed for dipolar hard spheres, we focus on four different topologies of the polymer-like structures: linear chains, rings, Y-shaped and X-shaped polymers. We show how the presence of the crosslinkers, the number of beads in the polymer, and the magnetic interparticle interaction affect the structure of the suspension. It turns out that for the same set of parameters, the rings are the least active in assembling larger structures, whereas systems of Y-shaped and especially X-shaped magnetic polymers tend to form very large loose aggregates.
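The magnetic interparticle interaction invoked here is, in the dipolar-hard-sphere picture, the standard point-dipole pair energy, which favors head-to-tail chaining. A minimal sketch in SI units (steric, crosslinker, and many-body contributions that such simulations also include are omitted):

```python
import numpy as np

MU0_OVER_4PI = 1e-7  # magnetic constant mu_0 / (4 pi), in N/A^2

def dipole_energy(m1, m2, r):
    """Point-dipole pair energy for moments m1, m2 separated by vector r.
    Negative for head-to-tail alignment (attractive), positive for
    side-by-side parallel moments (repulsive)."""
    m1, m2, r = map(np.asarray, (m1, m2, r))
    d = np.linalg.norm(r)
    rhat = r / d
    return MU0_OVER_4PI * (m1 @ m2 - 3 * (m1 @ rhat) * (m2 @ rhat)) / d**3

head_to_tail = dipole_energy([0, 0, 1], [0, 0, 1], [0, 0, 2])
side_by_side = dipole_energy([0, 0, 1], [0, 0, 1], [2, 0, 0])
```

The head-to-tail minimum is what drives chain formation; crosslinking into Y- and X-shaped topologies then multiplies the number of chain ends available for further aggregation.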

    Core measures of inflation as predictors of total inflation

    Get PDF
    Two rationales offered for policymakers' focus on core measures of inflation as a guide to underlying inflation are that core inflation omits food and energy prices, which are thought to be more volatile than other components, and that core inflation is thought to be a better predictor of total inflation over time horizons of import to policymakers. The authors' investigation finds little support for either rationale. They find that food and energy prices are not the most volatile components of inflation and that, depending on which inflation measure is used, core inflation is not necessarily the best predictor of total inflation. However, they do find that combining CPI and PCE inflation measures can lead to statistically significantly more accurate forecasts of each inflation measure, suggesting that each measure includes independent information that can be exploited to yield better forecasts. Keywords: Inflation (Finance).
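The gain from combining the two measures is the usual forecast-combination effect: when two forecasts have imperfectly correlated errors, even a simple equal-weight average has lower error variance than either forecast alone. A simulated sketch (the error variances and the 0.5 correlation are hypothetical, not estimates from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two forecast-error series with equal variance and correlation ~0.5,
# standing in for errors of CPI- and PCE-based inflation forecasts
e1 = rng.standard_normal(n)
e2 = 0.5 * e1 + np.sqrt(1 - 0.25) * rng.standard_normal(n)

# Error of the equal-weight combined forecast; its variance is
# (var1 + var2 + 2*cov)/4 = 0.75 here, below either single forecast
combined = 0.5 * (e1 + e2)

var_combined = combined.var()
var_best_single = min(e1.var(), e2.var())
```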