
    Treatment of atherosclerotic renovascular hypertension: review of observational studies and a meta-analysis of randomized clinical trials.

    Atherosclerotic renal artery stenosis can cause ischaemic nephropathy and arterial hypertension. We herein review the observational studies and randomized clinical trials (RCTs) comparing medical and endovascular treatment for control of hypertension and renal function preservation. Using the Population Intervention Comparison Outcome (PICO) strategy, we identified the relevant studies and performed a novel meta-analysis of all RCTs to determine the efficacy and safety of endovascular treatment compared with medical therapy. The following outcomes were examined: baseline-to-follow-up difference in mean systolic and diastolic blood pressure (BP), serum creatinine, number of drugs at follow-up, incident events (heart failure, stroke, and worsening renal function), mortality, and cumulative relative risk of heart failure, stroke, and worsening renal function. Seven studies comprising a total of 2155 patients (1741 available at follow-up) were considered, including the recently reported CORAL Study. Compared with baseline, diastolic BP fell more at follow-up in patients in the endovascular than in the medical treatment arm (standard difference in means -0.21, 95% confidence interval (CI): -0.342 to -0.078, P = 0.002) despite a greater reduction in the mean number of antihypertensive drugs (standard difference in means -0.201, 95% CI: -0.302 to -0.1, P < 0.001). By contrast, follow-up changes (from baseline) in systolic BP, serum creatinine, and incident cardiovascular event rates did not differ between treatment arms. Thus, patients with atherosclerotic renal artery stenosis receiving endovascular treatment required fewer antihypertensive drugs at follow-up than those treated medically.
Notwithstanding this, they achieved better control of diastolic BP. Caielli P; Frigo AC; Pengo MF; Rossitto G; Maiolino G; Seccia TM; Calò LA; Miotto D; Rossi GP
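The pooling step behind a meta-analysis like this can be illustrated with a minimal fixed-effect inverse-variance sketch. The per-study standardized mean differences and standard errors below are hypothetical values for illustration, not the trial data from the review:

```python
import math

def pool_smd(smds, ses):
    """Fixed-effect inverse-variance pooling of standardized mean
    differences (SMDs). Returns the pooled SMD and its 95% CI."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Hypothetical per-study SMDs and standard errors (illustration only)
smds = [-0.30, -0.15, -0.25]
ses = [0.10, 0.12, 0.15]
d, (lo, hi) = pool_smd(smds, ses)
```

A random-effects model (e.g. DerSimonian-Laird) would additionally estimate between-study heterogeneity before weighting; the fixed-effect form above shows only the core inverse-variance idea.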

    Tracking power system events with accuracy-based PMU adaptive reporting rate

    Fast dynamics and transient events are becoming increasingly frequent in power systems, due to the high penetration of renewable energy sources and the consequent lack of inertia. In this scenario, Phasor Measurement Units (PMUs) are expected to track the monitored quantities. Such functionality is related not only to the PMU accuracy (as per the IEC/IEEE 60255-118-1 standard) but also to the PMU reporting rate (RR). High RRs allow tracking fast dynamics but produce many redundant measurement data in normal conditions. In view of an effective tradeoff, the present paper proposes an adaptive RR mechanism based on a real-time selection of the measurements, with the target of preserving the information content while reducing the data rate. The proposed method has been tested on real-world datasets and applied to four different PMU algorithms. The results prove the method's effectiveness in reducing the average data throughput, as well as its scalability at the PMU concentrator or storage level.
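The real-time selection idea can be sketched as a simple deadband rule: a measurement is forwarded only when it departs from the last reported value by more than a threshold, so steady-state samples are dropped while transients are reported densely. This is a minimal illustration, not the paper's actual accuracy-based criterion; the threshold and sample stream below are invented:

```python
def adaptive_report(samples, threshold):
    """Forward a measurement only when it deviates from the last
    reported value by more than the threshold; always report the
    first sample. Returns the kept (index, value) pairs."""
    kept = []
    last = None
    for i, x in enumerate(samples):
        if last is None or abs(x - last) > threshold:
            kept.append((i, x))
            last = x
    return kept

# Steady state around 50 Hz with a step transient: only the first
# sample and the transient samples pass the deadband
stream = [50.0, 50.001, 49.999, 50.2, 50.5, 50.5, 50.001]
kept = adaptive_report(stream, threshold=0.05)
```

In a PMU context the threshold would be tied to the measurement accuracy class, so that discarded samples are reconstructible within the declared uncertainty.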

    New magnetron configurations for sputtered Nb onto Cu

    The microstructure and morphology of sputtered niobium films, and consequently their superconducting properties, strongly depend on the target-substrate deposition angle. In order to improve the Nb film quality for 1.5 GHz cavity coatings, we investigated the application of three main ideas to the sputtering process: (i) making niobium atoms impinge perpendicularly on the substrate surface, (ii) promoting the effect of plasma bombardment of the growing film, and (iii) increasing the sputtering rate. Accordingly, several different sputtering configurations are under development. The effect of Nb atoms arriving perpendicular to the substrate is explored either by using a cathode that follows the cavity shape or by increasing the plasma confinement efficiency by means of a target parallel to the magnetic field lines. The removal of adsorbed impurities from the film surface and the increase of the film density are investigated by means of a biased third electrode that promotes positive-ion bombardment of the growing film. A mixed bias-magnetron has been built using a positively charged metal grid positioned all around the cathode.

    Bio-additives for CI engines from one-pot alcoholysis reaction of lignocellulosic biomass: An experimental activity

    In recent years, the progressive decrease in fossil petroleum resources and the gradual degradation of the environment have attracted increasing interest in the use of biomass as a renewable carbon source for the production of chemicals and transportation fuels. In particular, lignocellulosic biomass represents an abundant, inexpensive, and non-polluting renewable resource with high carbon sequestration ability. In this paper, the valorisation of mixtures of n-butanol (n-BuOH), butyl levulinate (BL), and dibutyl ether (DBE), in different percentages, as additive fuels for compression ignition (CI) internal combustion engines (ICEs) was studied. These mixtures can be obtained directly from the catalytic alcoholysis reaction of the cellulosic fraction of raw and pre-treated lignocellulosic biomasses. Moreover, the possibility of recycling and reusing the excess alcohol (n-butanol) during the catalytic alcoholysis reaction has been considered, since it represents an opportunity to reduce the overall costs of the process. Therefore, a blend constituted only of BL and DBE has also been tested. The model mixtures were prepared using commercial reactants and had compositions analogous to those of the reaction mixtures. These model mixtures were tested as blends with diesel fuel in a CI-ICE, with measurement of pollutant emissions and performance. Results were compared with those obtained by fuelling the engine with a commercial diesel fuel. As a whole, the test results demonstrate the potential of these novel blending mixtures to reduce particulate emissions without any significant increase in the other pollutants and with negligible changes in engine power and efficiency.

    The nature and evolution of Nova Cygni 2006

    AIMS: Nova Cyg 2006 has been intensively observed throughout its full outburst. We investigate the energetics and evolution of the central source and of the expanding ejecta, their chemical abundances and ionization structure, and the formation of dust. METHOD: We recorded low-, medium-, and/or high-resolution spectra (calibrated into accurate absolute fluxes) on 39 nights, along with 2353 photometric UBVRcIc measures on 313 nights, and complemented them with IR data from the literature. RESULTS: The nova initially displayed the normal photometric and spectroscopic evolution of a fast nova of the FeII type. Pre-maximum, principal, diffuse-enhanced, and Orion absorption systems developed in a normal way. After the initial outburst, the nova progressively slowed its fading pace until the decline reversed and a second maximum was reached (eight months later), accompanied by large spectroscopic changes. Following the rapid decline from the second maximum, the nova finally entered the nebular phase and formed optically thin dust. We computed the amount of dust formed and performed a photo-ionization analysis of the emission-line spectrum during the nebular phase, which showed a strong enrichment of the ejecta in nitrogen and oxygen, and none in neon, in agreement with theoretical predictions for the estimated 1.0 Msun white dwarf in Nova Cyg 2006. The similarities with the poorly investigated V1493 Nova Aql 1999a are discussed. Comment: in press in Astronomy and Astrophysics

    Autotuning Algorithmic Choice for Input Sensitivity

    Empirical autotuning is increasingly being used in many domains to achieve optimized performance in a variety of different execution environments. A daunting challenge faced by such autotuners is input sensitivity, where the best autotuned configuration may vary with different input sets. In this paper, we propose a two-level solution that: first, clusters to find input sets that are similar in input feature space; then, uses an evolutionary autotuner to build an optimized program for each of these clusters; and, finally, builds an adaptive, overhead-aware classifier which assigns each input to a specific input-optimized program. Our approach addresses the complex trade-off between using expensive features, to accurately characterize an input, and cheaper features, which can be computed with less overhead. Experimental results show that by adapting to different inputs one can obtain up to a 3x speedup over using a single configuration for all inputs.
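The two-level scheme can be sketched as clustering in input feature space followed by a nearest-centroid classifier that routes each new input to its cluster's tuned program. The feature vectors, cluster count, and configuration names below are hypothetical, and a trivial k-means stands in for the paper's evolutionary autotuner:

```python
import numpy as np

# Hypothetical cheap feature vectors for training inputs (e.g. size, density)
features = np.array([[100, 0.10], [120, 0.12], [5000, 0.90], [5200, 0.85]])

# Level 1: cluster inputs in feature space (2 clusters, fixed init for clarity)
centroids = features[[0, 2]].astype(float)
for _ in range(10):
    labels = np.argmin(((features[:, None] - centroids[None]) ** 2).sum(-1),
                       axis=1)
    centroids = np.array([features[labels == k].mean(0) for k in range(2)])

# Level 2: one autotuned configuration per cluster (hypothetical names;
# in the paper these would come from the evolutionary autotuner)
configs = {0: "small-input-variant", 1: "blocked-parallel-variant"}

def classify(x):
    """Classifier stub: nearest centroid on cheap features only."""
    k = int(np.argmin(((centroids - x) ** 2).sum(-1)))
    return configs[k]
```

The overhead-aware aspect of the paper's classifier would additionally weigh the cost of computing each feature against its discriminative value; this sketch uses only cheap features.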

    High Frame-rate Imaging Based Photometry, Photometric Reduction of Data from Electron-multiplying Charge Coupled Devices (EMCCDs)

    The EMCCD is a type of CCD that delivers fast readout times and negligible readout noise, making it an ideal detector for high frame-rate applications that improve resolution, such as lucky imaging or shift-and-add. This improvement in resolution can potentially improve the photometry of faint stars in extremely crowded fields significantly by alleviating crowding. Alleviating crowding is a prerequisite for observing gravitational microlensing in main-sequence stars towards the Galactic bulge. However, the photometric stability of this device has not been assessed. The EMCCD has sources of noise not found in conventional CCDs, and new methods for handling these must be developed. We aim to investigate how the normal photometric reduction steps for conventional CCDs should be adjusted to be applicable to EMCCD data. One complication is that a bias frame cannot be obtained conventionally, as the output from an EMCCD is not normally distributed. Also, the readout process generates spurious charges in any CCD, but in EMCCD data these charges are visible, unlike in conventional CCDs. Furthermore, we aim to eliminate the photon waste associated with lucky imaging by combining this method with shift-and-add. A simple probabilistic model for the dark output of an EMCCD is developed. Fitting this model with the expectation-maximization algorithm allows us to estimate the bias, readout noise, amplification, and spurious charge rate per pixel, and thus correct for these phenomena. To investigate the stability of the photometry, corrected frames of a crowded field are reduced with a PSF-fitting photometry package, where a lucky image is used as a reference. We find that it is possible to develop an algorithm that elegantly reduces EMCCD data and produces stable photometry at the 1% level in an extremely crowded field. Comment: Submitted to Astronomy and Astrophysics
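The expectation-maximization idea can be illustrated with a toy two-component mixture fit to synthetic dark output: a Gaussian for bias plus readout noise, and an exponential tail above the bias for amplified spurious charge. This is not the paper's model; the parameters (bias 100 ADU, readout noise 2 ADU, 2% spurious charges, mean gain 50 ADU) are invented for the synthetic data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic dark-frame pixel values (hypothetical parameters)
n = 20000
x = 100.0 + rng.normal(0.0, 2.0, n)          # bias + readout noise
spurious = rng.random(n) < 0.02              # spurious-charge pixels
x[spurious] += rng.exponential(50.0, spurious.sum())  # amplified charge

# Robust initial guesses for bias b, readout noise s, gain g, rate p
b = np.median(x)
s = 1.4826 * np.median(np.abs(x - b))
g = (x[x > b + 5 * s] - b).mean()
p = 0.05

# EM iterations for the Gaussian + shifted-exponential mixture
for _ in range(100):
    dx = x - b
    f_g = np.exp(-0.5 * (dx / s) ** 2) / (s * np.sqrt(2 * np.pi))
    f_e = np.where(dx > 0, np.exp(-np.clip(dx, 0, None) / g) / g, 0.0)
    r = p * f_e / (p * f_e + (1 - p) * f_g + 1e-300)   # E-step
    w = 1.0 - r                                        # M-step
    b = (w * x).sum() / w.sum()
    s = np.sqrt((w * (x - b) ** 2).sum() / w.sum())
    g = (r * np.clip(x - b, 0, None)).sum() / max(r.sum(), 1e-12)
    p = r.mean()
```

After convergence, b, s, g, and p recover the injected bias, readout noise, gain, and spurious-charge rate to within a few percent, which is the kind of per-pixel correction the paper's (more complete) model enables.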

    The Domain Chaos Puzzle and the Calculation of the Structure Factor and Its Half-Width

    The disagreement in the scaling of the correlation length xi between experiment and the Ginzburg-Landau (GL) model for domain chaos was resolved. The Swift-Hohenberg (SH) domain-chaos model was integrated numerically to acquire test images and study the effect of a finite image size on the extraction of xi from the structure factor (SF). The finite image size had a significant effect on the SF determined with the Fourier-transform (FT) method. The maximum entropy method (MEM) was able to overcome this finite image-size problem and produced fairly accurate SFs for the relatively small image sizes provided by experiments. Correlation lengths have often been determined from the second moment of the SF of chaotic patterns because the functional form of the SF is not known. Integration of several test functions provided analytic results indicating that this may not be a reliable method of extracting xi. For both a Gaussian and a squared-SH form, the correlation length xibar = 1/sigma, determined from the variance sigma^2 of the SF, has the same dependence on the control parameter epsilon as the length xi contained explicitly in the functional forms. However, for the SH and the Lorentzian forms we find xibar ~ xi^(1/2). Results for xi determined from new experimental data by fitting the functional forms directly to the experimental SF yielded xi ~ epsilon^(-nu) with nu ~= 1/4 for all four functions in the case of the FT method, but nu ~= 1/2, in agreement with the GL prediction, in the case of the MEM. Over a wide range of epsilon and wave number k, the experimental SFs collapsed onto a unique curve when appropriately scaled by xi.
Comment: 15 pages, 26 figures, 1 table
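The second-moment pitfall can be reproduced numerically: over a fixed wave-number window, the length xibar = 1/sigma recovered from the variance of a Lorentzian structure factor grows only as xi^(1/2), while for a Gaussian it tracks xi linearly. The window and the xi values below are arbitrary choices for the demonstration:

```python
import numpy as np

k = np.linspace(-10.0, 10.0, 20001)   # fixed analysis window in wave number

def width_length(S):
    """xibar = 1/sigma from the normalized variance of a structure factor."""
    S = S / S.sum()
    mean = (k * S).sum()
    var = ((k - mean) ** 2 * S).sum()
    return 1.0 / np.sqrt(var)

xis = np.array([4.0, 8.0, 16.0, 32.0])
xb_lor = [width_length(1.0 / (1.0 + (k * xi) ** 2)) for xi in xis]
xb_gau = [width_length(np.exp(-(k * xi) ** 2)) for xi in xis]

# log-log slopes of xibar vs. xi: ~1/2 for the Lorentzian, ~1 for the Gaussian
slope_lor = np.polyfit(np.log(xis), np.log(xb_lor), 1)[0]
slope_gau = np.polyfit(np.log(xis), np.log(xb_gau), 1)[0]
```

The xi^(1/2) behaviour arises because the Lorentzian's second moment is dominated by the fixed window edges rather than by the line width, which is exactly why fitting the functional form directly is the more reliable way to extract xi.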

    A Wavelet-Based Algorithm for the Spatial Analysis of Poisson Data

    Wavelets are scalable, oscillatory functions that deviate from zero only within a limited spatial regime and have an average value of zero. In addition to their use as source characterizers, wavelet functions are rapidly gaining currency within the source detection field. Wavelet-based source detection involves the correlation of scaled wavelet functions with binned, two-dimensional image data. If the chosen wavelet function exhibits the property of vanishing moments, significantly non-zero correlation coefficients will be observed only where there are high-order variations in the data; e.g., they will be observed in the vicinity of sources. In this paper, we describe the mission-independent, wavelet-based source detection algorithm WAVDETECT, part of the CIAO software package. Aspects of our algorithm include: (1) the computation of local, exposure-corrected normalized (i.e. flat-fielded) background maps; (2) the correction for exposure variations within the field of view; (3) its applicability within the low-counts regime, as it does not require a minimum number of background counts per pixel for the accurate computation of source detection thresholds; (4) the generation of a source list in a manner that does not depend upon a detailed knowledge of the point spread function (PSF) shape; and (5) error analysis. These features make our algorithm considerably more general than previous methods developed for the analysis of X-ray image data, especially in the low-count regime. We demonstrate the algorithm's robustness by applying it to various images. Comment: Accepted for publication in Ap. J. Supp. (v. 138, Jan. 2002). 61 pages, 23 figures, expands to 3.8 Mb. Abstract abridged for astro-ph submission
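The core correlation step can be sketched with a Mexican-hat wavelet (a standard choice with vanishing moments) applied to a synthetic Poisson image: the coefficient map peaks where the data show high-order structure, i.e. at the source. The source position, wavelet scale, and count rates are invented, and this omits WAVDETECT's exposure correction and threshold calibration entirely:

```python
import numpy as np

def mexican_hat(scale, size=31):
    """2-D Mexican-hat wavelet kernel; zero mean is enforced numerically."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    r2 = (x ** 2 + y ** 2) / scale ** 2
    w = (2.0 - r2) * np.exp(-r2 / 2.0)
    return w - w.mean()

def correlate(image, kernel):
    """Cross-correlation via FFT (circular boundary; fine away from edges)."""
    pad = np.zeros_like(image, dtype=float)
    s = kernel.shape[0]
    pad[:s, :s] = kernel
    pad = np.roll(pad, (-(s // 2), -(s // 2)), axis=(0, 1))
    return np.fft.irfft2(np.fft.rfft2(image) * np.conj(np.fft.rfft2(pad)),
                         s=image.shape)

# Synthetic binned image: flat Poisson background plus one faint source
rng = np.random.default_rng(1)
image = rng.poisson(2.0, (128, 128)).astype(float)
yy, xx = np.mgrid[:128, :128]
image += rng.poisson(20.0 * np.exp(-((yy - 64) ** 2 + (xx - 40) ** 2) / 8.0))

# The wavelet coefficient map peaks at the source position
coeff = correlate(image, mexican_hat(scale=3.0))
peak = np.unravel_index(np.argmax(coeff), coeff.shape)
```

Because the wavelet has zero mean, the flat background correlates to roughly zero everywhere, which is what lets detection thresholds be set on the coefficient distribution even in the low-counts regime.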