488 research outputs found

    A Vision of a Decisional Model for Re-optimizing Query Execution Plans Based on Machine Learning Techniques

    Many existing cloud database query optimization algorithms aim to reduce the monetary cost paid to cloud service providers in addition to query response time. These algorithms rely on accurate cost estimation so that the optimal query execution plan (QEP) is selected. The cloud environment is dynamic: hardware configuration, data usage, and workload allocations change continuously. These dynamic changes make accurate query cost estimation difficult to obtain, and the query execution plan must be adjusted automatically to address them. To optimize the QEP with a more accurate cost estimate, the query needs to be optimized multiple times during execution, with the most up-to-date estimate used for each optimization. However, deciding when to pause execution so that the overhead is minimal is itself difficult. In this paper, we present our vision of a method that uses machine learning techniques to predict the best timings for re-optimization during execution.
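The core trade-off the abstract describes can be sketched as a simple rule: pause and re-optimize only when the expected saving from a better plan outweighs the pause overhead. The function name, inputs, and threshold logic below are illustrative assumptions; the paper envisions a learned model, not a hand-written rule.

```python
# Hypothetical decision rule for mid-execution re-optimization.
# All names and the linear saving model are assumptions for illustration.

def should_reoptimize(est_remaining_cost, est_error, reopt_overhead):
    """Pause and re-optimize only if the expected saving from a better
    plan (taken here as proportional to the cost-estimation error)
    exceeds the overhead of pausing execution."""
    expected_saving = est_remaining_cost * est_error
    return expected_saving > reopt_overhead

# A large remaining cost with a very uncertain estimate favors pausing;
# a nearly accurate estimate does not justify the overhead.
pause_now = should_reoptimize(100.0, 0.5, 10.0)
keep_going = should_reoptimize(100.0, 0.05, 10.0)
```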

    The Evolution of Helium and Hydrogen Ionization Corrections as HII Regions Age

    Helium and hydrogen recombination lines observed in low-metallicity, extragalactic, HII regions provide the data used to infer the primordial helium mass fraction, Y_P. In deriving abundances from observations, the correction for unseen neutral helium or hydrogen is usually assumed to be absent; i.e., the ionization correction factor is taken to be unity (icf = 1). In a previous paper (VGS), we revisited the question of the icf, confirming a "reverse" ionization correction: icf < 1. In VGS the icf was calculated using more nearly realistic models of inhomogeneous HII regions, suggesting that the published values of Y_P needed to be reduced by an amount of order 0.003. As star clusters age, their stellar spectra evolve and so, too, will their icfs. Here the evolution of the icf is studied, along with that of two alternate measures of the "hardness" of the radiation spectrum. The differences between the icf for radiation-bounded and matter-bounded models are also explored, along with the effect on the icf of the He/H ratio (since He and H compete for some of the same ionizing photons). Particular attention is paid to the amount of doubly-ionized helium predicted, leading us to suggest that observations of, or bounds to, He++ may help to discriminate among models of HII regions ionized by starbursts of different ages and spectra. We apply our analysis to the Izotov & Thuan (IT) data set utilizing the radiation softness parameter, the [OIII]/[OI] ratio, and the presence or absence of He++ to find 0.95 < icf < 0.99. This suggests that the IT estimate of the primordial helium abundance should be reduced by Delta-Y = 0.006 +- 0.002, from 0.244 +- 0.002 to 0.238 +- 0.003. Comment: 27 double-spaced pages, 11 figures, 5 equations; revised to match the version accepted for publication in the ApJ
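The quoted reduction can be checked arithmetically. The mapping used below, Y_corr = icf × Y_obs, is a deliberate simplification for illustration (the paper's actual correction procedure is more involved), and icf = 0.975 is just a representative value inside the quoted 0.95 < icf < 0.99 range.

```python
# Worked numbers from the abstract, under a simplified linear correction.
# icf = 0.975 is an assumed representative value, not a result of the paper.

def corrected_yp(y_obs, icf):
    """Apply an ionization correction factor to an observed helium
    mass fraction (simplified multiplicative model)."""
    return icf * y_obs

y_obs = 0.244          # IT estimate quoted in the abstract
y_corr = corrected_yp(y_obs, 0.975)
delta_y = y_obs - y_corr   # about 0.006, matching the quoted Delta-Y
```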

    Temperature Fluctuations and Abundances in HII Galaxies

    There is evidence for temperature fluctuations in Planetary Nebulae and in Galactic HII regions. If such fluctuations occur in the low-metallicity, extragalactic HII regions used to probe the primordial helium abundance, the derived 4He mass fraction, Y_P, could be systematically different from the true primordial value. For cooler, mainly high-metallicity HII regions the derived helium abundance may be nearly unchanged, but the oxygen abundance could have been seriously underestimated. For hotter, mainly low-metallicity HII regions the oxygen abundance is likely accurate, but the helium abundance could be underestimated. The net effect is to tilt the Y vs. Z relation, making it flatter and resulting in a higher inferred Y_P. Although this effect could be large, there are no data which allow us to estimate the size of the temperature fluctuations for the extragalactic HII regions. Therefore, we have explored this effect via Monte Carlo simulations in which the abundances derived from a fiducial data set are modified by \Delta-T chosen from a distribution with 0 < \Delta-T < \Delta-T_max, where \Delta-T_max is varied from 500K to 4000K. It is interesting that although this effect shifts the locations of the HII regions in the Y vs. O/H plane, it does not introduce any significant additional dispersion. Comment: 11 pages, 9 postscript figures; submitted to the ApJ
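The Monte Carlo step described above can be sketched as drawing a uniform temperature offset per region. The uniform draw over [0, ΔT_max] follows the abstract; the fiducial temperatures below are made-up inputs, and re-deriving abundances from the perturbed temperatures (the physics step) is omitted.

```python
import random

# Sketch of the temperature-perturbation Monte Carlo: each H II region's
# temperature gets an offset Delta-T drawn uniformly from [0, Delta-T_max].
# Fiducial temperatures are illustrative placeholders.

def perturb_temperatures(temps, dt_max, rng):
    """Return a new list with a uniform Delta-T in [0, dt_max] added
    independently to each input temperature (in K)."""
    return [t + rng.uniform(0.0, dt_max) for t in temps]

rng = random.Random(42)            # fixed seed for reproducibility
fiducial = [12000.0, 15000.0, 18000.0]   # K, made-up fiducial set
perturbed = perturb_temperatures(fiducial, 2000.0, rng)
```

Repeating this draw many times and re-deriving Y and O/H each time yields the scatter (or lack of it) the abstract reports.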

    Simulation of Main Memory Database Recovery

    In a main memory database (MMDB), the primary copy of the database may reside permanently in a volatile memory. When a system failure occurs, the database must be reloaded efficiently from archive memory into main memory. This paper presents four different reload schemes and the simulation models constructed to compare the algorithms. Simulation results indicate that the reload scheme based on frequency of data access gives the best overall performance in terms of transaction response time and system throughput.
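The frequency-based scheme the results favor amounts to restoring the hottest data first, so transactions that touch popular pages can resume before the full reload completes. The page identifiers and counts below are illustrative, not from the paper.

```python
# Sketch of frequency-ordered reload: pages are brought back into main
# memory in descending order of observed access frequency. Ids and
# counts are made up for illustration.

def reload_order(access_counts):
    """access_counts: dict mapping page_id -> access frequency.
    Returns page ids in the order they should be reloaded."""
    return sorted(access_counts, key=access_counts.get, reverse=True)

order = reload_order({"p1": 5, "p2": 40, "p3": 12})
```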

    From (p)reheating to nucleosynthesis

    This article gives a brief qualitative description of the possible evolution of the early Universe between the end of an inflationary epoch and the end of Big Bang nucleosynthesis. After a general introduction, establishing the minimum requirements cosmologists impose on this cosmic evolutionary phase, namely successful baryogenesis, the production of cosmic dark matter, and successful light-element nucleosynthesis, a more detailed discussion of some recent developments follows. The latter includes the physics of preheating, the putative production of (alternative) dark matter, and the current status of Big Bang nucleosynthesis. Comment: 18 pages, 6 figures, to be published in "Classical and Quantum Gravity", article based on a talk presented at "The Early Universe and Cosmological Observations: a Critical Review", Cape Town, July 200

    The evolution of planetary nebulae IV. On the physics of the luminosity function

    The nebular evolution is followed from the vicinity of the asymptotic-giant branch across the Hertzsprung-Russell diagram until the white-dwarf domain is reached, using various central-star models coupled to different initial envelope configurations. Along each sequence the relevant line emissions of the nebulae are computed and analysed. Maximum line luminosities in Hbeta and [OIII] 5007A are achieved at stellar effective temperatures of about 65000K and 95000-100000K, respectively, provided the nebula remains optically thick for ionising photons. In the optically thin case, the maximum line emission occurs at or shortly after the thick/thin transition. Our models suggest that most planetary nebulae with hotter (>~ 45000K) central stars are optically thin in the Lyman continuum, and that their [OIII] 5007A emission fails to explain the bright end of the observed planetary nebulae luminosity function. However, sequences with central stars of >~ 0.6 Msun and rather dense initial envelopes remain virtually optically thick and are able to populate the bright end of the luminosity function. Individual luminosity functions depend strongly on the central-star mass and on the variation of the nebular optical depth with time. Hydrodynamical simulations of planetary nebulae are essential for any understanding of the basic physics behind their observed luminosity function. In particular, our models do not support the claim of Marigo et al. (2004) according to which the maximum 5007A luminosity occurs during the recombination phase well beyond 100 000K, when the stellar luminosity declines and the nebular models become, at least partially, optically thick. Consequently, there is no need to invoke relatively massive central stars of, say, > 0.7 Msun to account for the bright end of the luminosity function. Comment: 19 pages, 20 figures, A&A, in press

    Cohort analysis of programme data to estimate HIV incidence and uptake of HIV-related services among female sex workers in Zimbabwe, 2009-14.

    BACKGROUND: HIV epidemiology and intervention uptake among female sex workers (FSW) in sub-Saharan Africa remain poorly understood. Data from outreach programmes are a neglected resource. METHODS: Analysis of data from FSW consultations with Zimbabwe's National Sex Work programme, 2009-14. At each visit, data were collected on socio-demographic characteristics, HIV testing history, HIV tests conducted by the programme and antiretroviral (ARV) history. Characteristics at first visit and longitudinal data on programme engagement, repeat HIV testing and HIV sero-conversion were analysed using a cohort approach. RESULTS: Data were available for 13360 women, 31389 visits, 14579 reported HIV tests, 2750 tests undertaken by the programme and 2387 reported ARV treatment initiations. At first visit, 72% of FSW had tested for HIV; 50% of these reported being HIV-positive. Among HIV-positive women, 41% reported being on ARV. 56% of FSW attended the programme only once. FSW who had not previously tested HIV-positive had been tested within the previous 6 months during 27% of follow-up time. After testing HIV-positive, women started on ARV at a rate of 23 / 100 person-years of follow-up. Among those with two or more HIV tests, the HIV sero-conversion rate was 9.8 / 100 person-years of follow-up (95% confidence interval 7.1-15.9). CONCLUSIONS: Individual-level outreach programme data can be used to estimate HIV incidence and intervention uptake among FSW in Zimbabwe. Current data suggest very high HIV prevalence and incidence among this group and help identify areas for programme improvement. Further methodological validation is required.
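The rates quoted above are events per 100 person-years of follow-up, a standard cohort measure. The event and person-year counts below are made-up inputs chosen only to reproduce the quoted 9.8 figure; the study's actual counts are not given in the abstract.

```python
# Rate per 100 person-years, the measure used for sero-conversion and
# ARV initiation above. Event/person-year counts are illustrative.

def rate_per_100py(events, person_years):
    """Incidence-style rate: events per 100 person-years of follow-up."""
    return 100.0 * events / person_years

r = rate_per_100py(49, 500.0)   # hypothetical: 49 events over 500 PY
```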

    Power Spectrum Analysis of BNL Decay-Rate Data

    Evidence for an anomalous annual periodicity in certain nuclear decay data has led to speculation concerning a possible solar influence on nuclear processes. As a test of this hypothesis, we here search for evidence in decay data that might be indicative of a process involving solar rotation, focusing on data for 32Si and 36Cl decay rates acquired at the Brookhaven National Laboratory. Examination of the power spectrum over a range of frequencies (10 - 15 year^-1) appropriate for solar synodic rotation rates reveals several periodicities, the most prominent being one at 11.18 year^-1 with power 20.76. We evaluate the significance of this peak in terms of the false-alarm probability, by means of the shuffle test, and also by means of a new test (the "shake" test) that involves small random time displacements. The last two tests indicate that the peak at 11.18 year^-1 would arise by chance only once out of about 10^7 trials. Since there are several peaks in the search band, we also investigate the running mean of the power spectrum, and identify a major peak at 11.93 year^-1 with peak running-mean power 4.08. Application of the shuffle test and the shake test indicates that there is less than one chance in 10^11, and one chance in 10^15, respectively, of finding by chance a value as large as 4.08. Comment: 12 pages, 17 figures, to be published in Astroparticle Physics
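The shuffle test described above can be sketched as follows: the measurements are randomly re-assigned to the measurement times, the spectral power at the candidate frequency is recomputed, and the false-alarm probability is estimated as the fraction of shuffles whose power reaches the observed value. The Schuster-style power and the synthetic sinusoidal data below are simplifications for illustration; the paper uses a likelihood-based power spectrum of the actual BNL measurements.

```python
import math
import random

# Simplified Schuster power at a single frequency (cycles per year),
# standing in for the paper's likelihood-based spectrum.

def schuster_power(times, values, freq):
    n = len(values)
    c = sum(v * math.cos(2 * math.pi * freq * t) for t, v in zip(times, values))
    s = sum(v * math.sin(2 * math.pi * freq * t) for t, v in zip(times, values))
    return (c * c + s * s) / n

def shuffle_test(times, values, freq, n_shuffles, rng):
    """Fraction of random value-to-time reassignments whose power at
    `freq` matches or exceeds the observed power (false-alarm estimate)."""
    observed = schuster_power(times, values, freq)
    vals = list(values)
    hits = 0
    for _ in range(n_shuffles):
        rng.shuffle(vals)
        if schuster_power(times, vals, freq) >= observed:
            hits += 1
    return hits / n_shuffles

# Synthetic example: ~5 years of 10-day sampling with a pure 11.18 yr^-1
# signal, so the shuffle test should report a very small probability.
rng = random.Random(0)
times = [i / 36.0 for i in range(180)]            # years
values = [math.sin(2 * math.pi * 11.18 * t) for t in times]
p = shuffle_test(times, values, 11.18, 200, rng)
```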