
    Kepler-91b: a planet at the end of its life. Planet and giant host star properties via light-curve variations

    The evolution of planetary systems is intimately linked to the evolution of their host star. Our understanding of the whole planetary evolution process is based on the large planet diversity observed so far. To date, only a few tens of planets have been discovered orbiting stars ascending the Red Giant Branch. Although several theories have been proposed, the question of how planets die remains open due to the small number statistics. In this work we study the giant star Kepler-91 (KOI-2133) in order to determine the nature of a transiting companion. This system was detected by the Kepler Space Telescope, but its planetary nature remained to be confirmed. We confirm the planetary nature of the object transiting the star Kepler-91 by deriving a mass of $M_p = 0.88^{+0.17}_{-0.33}\,M_{\rm Jup}$ and a planetary radius of $R_p = 1.384^{+0.011}_{-0.054}\,R_{\rm Jup}$. Asteroseismic analysis produces a stellar radius of $R_\star = 6.30 \pm 0.16\,R_\odot$ and a mass of $M_\star = 1.31 \pm 0.10\,M_\odot$. We find that its eccentric orbit ($e = 0.066^{+0.013}_{-0.017}$) brings the planet to just $1.32^{+0.07}_{-0.22}\,R_\star$ above the stellar atmosphere at pericenter. Kepler-91b could represent the stage preceding the planet engulfment recently detected for BD+48 740. Our estimates show that Kepler-91b will be swallowed by its host star in less than 55 Myr. Among the confirmed planets around giant stars, this is the planetary-mass body closest to its host star. At pericenter passage, the star subtends an angle of $48^\circ$, covering around 10% of the sky as seen from the planet. The planetary atmosphere appears to be inflated, probably due to the high stellar irradiation. Comment: 21 pages, 8 tables and 11 figures
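
    As a quick check, the quoted viewing geometry follows from the pericenter distance alone. A minimal Python sketch, assuming (our interpretation) that the pericenter lies 1.32 R* above the stellar surface, i.e. about 2.32 R* from the stellar center, and taking "the sky" to mean the 2*pi sr hemisphere visible from the planet:

```python
import math

# Interpretation (assumption): pericenter lies 1.32 R_star above the
# stellar surface, so the center-to-planet distance is ~2.32 R_star.
d = 2.32  # pericenter distance in units of R_star

# Angular diameter of the stellar disc seen from the planet.
theta = 2 * math.degrees(math.asin(1.0 / d))
print(f"angular diameter ~ {theta:.0f} deg")  # ~51 deg; the quoted 48 deg
                                              # comes from the full orbital solution

# Fraction of the visible hemisphere covered by the spherical cap
# of half-angle alpha = arcsin(R_star / d).
alpha = math.asin(1.0 / d)
print(f"sky fraction ~ {1 - math.cos(alpha):.2f}")  # ~0.10, matching the ~10% quoted
```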

    Retrodiction as a tool for micromaser field measurements

    We use retrodictive quantum theory to describe cavity field measurements by successive atomic detections in the micromaser. We calculate the state of the micromaser cavity field prior to detection of sequences of atoms in either the excited or ground state, for atoms that are initially prepared in the excited state. This provides the POM (probability operator measure) elements, which describe such sequences of measurements. Comment: 20 pages, 4(8) figures
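
    The POM elements in question can be illustrated in the simplest setting: a single resonant Jaynes-Cummings interaction with an atom injected in the excited state. The numpy sketch below is a generic illustration of that formalism, not the paper's full calculation; the coupling-time product and the Fock-space truncation are placeholders:

```python
import numpy as np

N = 30    # Fock-space truncation (assumption)
gt = 0.5  # dimensionless coupling x interaction time (assumption)
n = np.arange(N)

# Resonant Jaynes-Cummings model, atom injected excited:
# |e,n> -> cos(gt*sqrt(n+1)) |e,n> - i sin(gt*sqrt(n+1)) |g,n+1>.
# Kraus operators acting on the field for each atomic outcome:
c = np.cos(gt * np.sqrt(n + 1))
s = np.sin(gt * np.sqrt(n + 1))
K_e = np.diag(c).astype(complex)          # atom detected still excited
K_g = np.diag(-1j * s[:N - 1], k=-1)      # atom detected in the ground state

# POM elements for a single detection: Pi_r = K_r^dag K_r
# (diagonal cos^2 / sin^2 of gt*sqrt(n+1); completeness holds up to truncation).
Pi_e = K_e.conj().T @ K_e
Pi_g = K_g.conj().T @ K_g

# For a sequence of outcomes the Kraus operators compose, e.g. two
# consecutive excited-atom detections:
K_ee = K_e @ K_e
Pi_ee = K_ee.conj().T @ K_ee

# Retrodictive reading: with a flat prior over Fock states, the probability
# that the field held n photons before the measurement is prop. to <n|Pi_r|n>.
p_n = np.real(np.diag(Pi_g))
p_n /= p_n.sum()
```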

    Six Peaks Visible in the Redshift Distribution of 46,400 SDSS Quasars Agree with the Preferred Redshifts Predicted by the Decreasing Intrinsic Redshift Model

    The redshift distribution of all 46,400 quasars in the Sloan Digital Sky Survey (SDSS) Quasar Catalog III, Third Data Release, is examined. Six peaks that fall within the redshift window below z = 4 are visible. Their positions agree with the preferred redshift values predicted by the decreasing intrinsic redshift (DIR) model, even though this model was derived using completely independent evidence. A power spectrum analysis of the full dataset confirms the presence of a single, significant power peak at the expected redshift period. Power peaks with the predicted period are also obtained when the upper and lower halves of the redshift distribution are examined separately. The periodicity detected is in linear z, as opposed to log(1+z). Because the peaks in the SDSS quasar redshift distribution agree well with the preferred redshifts predicted by the intrinsic redshift relation, we conclude that this relation and the peaks in the redshift distribution likely have a common origin: either intrinsic redshifts or a shared selection effect. However, given the way the intrinsic redshift relation was determined, it seems unlikely that one selection effect could be responsible for both. Comment: 12 pages, 12 figures, accepted for publication in the Astrophysical Journal
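
    The power spectrum analysis described here can be sketched as: bin the redshifts on a uniform grid in linear z, remove the smooth trend, and Fourier transform the residuals. The data below are synthetic (a uniform background plus an arbitrary comb of peaks with spacing dz = 0.6, which is not the DIR prediction), standing in for the SDSS catalogue:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the catalogue: smooth background plus a
# periodic comb of peaks at an arbitrary demo spacing dz = 0.6.
z_bg = rng.uniform(0.0, 4.0, 40000)
centers = np.arange(0.3, 4.0, 0.6)
z_pk = rng.choice(centers, 6400) + rng.normal(0, 0.03, 6400)
z = np.concatenate([z_bg, z_pk])

# Bin on a uniform grid in linear z (the claimed periodicity is in z,
# not log(1+z)).
bins = np.arange(0.0, 4.0, 0.01)
counts, _ = np.histogram(z, bins=bins)

# Detrend so the power spectrum reflects the peaks, not the overall
# shape of N(z); a simple moving average suffices here.
trend = np.convolve(counts, np.ones(51) / 51, mode="same")
resid = counts - trend

# A comb with spacing dz shows up as power at frequency 1/dz.
power = np.abs(np.fft.rfft(resid)) ** 2
freq = np.fft.rfftfreq(len(resid), d=0.01)  # cycles per unit z
print("strongest period in z:", 1.0 / freq[np.argmax(power[1:]) + 1])  # ~0.6
```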

    Towards virtual machine energy-aware cost prediction in clouds

    Pricing mechanisms employed by different service providers significantly influence the role of cloud computing within the IT industry. With the increasing cost of electricity, Cloud providers consider power consumption one of the major cost factors to be maintained within their infrastructures. Consequently, modelling a new pricing mechanism that allows Cloud providers to determine the potential cost of resource usage and power consumption has attracted the attention of many researchers. Furthermore, predicting the future cost of Cloud services can help service providers offer suitable services that meet customers' requirements. This paper introduces an Energy-Aware Cost Prediction Framework to estimate the total cost of Virtual Machines (VMs) by considering both resource usage and power consumption. The VMs' workload is first predicted with an Autoregressive Integrated Moving Average (ARIMA) model, and the power consumption is then predicted using regression models. The comparison between the predicted and actual results obtained in a real Cloud testbed shows that this framework is capable of predicting the workload, power consumption and total cost of different VMs with good accuracy, e.g. 0.06 absolute percentage error for the predicted total cost of the VMs
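
    A minimal sketch of the framework's two-stage pipeline follows. The utilisation history, ARIMA order, power model and prices are all hypothetical placeholders; the paper's actual models are fitted to testbed measurements:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical history: hourly CPU utilisation (%) of one VM over a week.
cpu = 40 + 10 * np.sin(np.arange(168) * 2 * np.pi / 24) + rng.normal(0, 2, 168)

# Stage 1: forecast the workload with an ARIMA model
# (the order is a placeholder; in practice it is chosen by model selection).
workload_model = ARIMA(cpu, order=(2, 0, 1)).fit()
cpu_pred = workload_model.forecast(steps=24)

# Stage 2: map utilisation to power draw with a regression model fitted
# on measured (utilisation, power) pairs; the watts here are synthetic.
power_meas = 100 + 1.2 * cpu + rng.normal(0, 3, 168)
reg = LinearRegression().fit(cpu.reshape(-1, 1), power_meas)
power_pred = reg.predict(cpu_pred.reshape(-1, 1))

# Stage 3: total cost = resource usage cost + energy cost (assumed prices).
vm_price_per_h = 0.05  # $/hour for the VM's resources
energy_price = 0.0001  # $/Wh
cost = 24 * vm_price_per_h + energy_price * power_pred.sum()
print(f"predicted 24 h cost: ${cost:.2f}")
```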

    The equivalence of fluctuation scale dependence and autocorrelations

    We define optimal per-particle fluctuation and correlation measures, relate fluctuations and correlations through an integral equation, and show how to invert that equation to obtain precise autocorrelations from fluctuation scale dependence. We test the precision of the inversion with Monte Carlo data and compare autocorrelations to the conditional distributions conventionally used to study high-$p_t$ jet structure. Comment: 10 pages, 9 figures; proceedings, MIT workshop on correlations and fluctuations in relativistic nuclear collisions
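
    The fluctuation-autocorrelation connection can be illustrated with the standard identity for a stationary series: the variance of window sums is a triangular-kernel sum over the autocorrelation, V(L) = L*C(0) + 2*sum_{u=1}^{L-1} (L-u)*C(u), whose inversion reduces to second differences in L. This is a generic analogue standing in for the paper's per-particle measures, which are defined in the text:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stationary series: MA(1) noise with known autocorrelation
# C(0) = 1.25, C(1) = 0.5, C(u >= 2) = 0.
x = rng.normal(size=200000)
x[1:] += 0.5 * x[:-1]

# "Fluctuation scale dependence": variance of disjoint window sums vs scale L.
M = 20
V = np.zeros(M + 1)
for L in range(1, M + 1):
    n = (len(x) // L) * L
    V[L] = x[:n].reshape(-1, L).sum(axis=1).var()

# Invert V(L) = L*C(0) + 2*sum_{u=1}^{L-1}(L-u)*C(u) via second differences:
# C(0) = V(1) and C(u) = (V(u+1) - 2*V(u) + V(u-1)) / 2 for u >= 1.
C0 = V[1]
C = np.array([(V[u + 1] - 2 * V[u] + V[u - 1]) / 2 for u in range(1, M)])

# Compare with the directly estimated autocorrelation.
direct = np.array([np.mean(x[:-u] * x[u:]) for u in range(1, M)])
print(np.round(C[:5], 3), np.round(direct[:5], 3))  # both ~ [0.5, 0, 0, 0, 0]
```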

    Damage function for historic paper. Part I: Fitness for use

    Background: In heritage science literature and in preventive conservation practice, damage functions are used to model material behaviour, and specifically damage (unacceptable change), resulting from the presence of a stressor over time. For such functions to be of use in the context of collection management, it is important to define a range of parameters, such as who the stakeholders are (e.g. the public, curators, researchers), the mode of use (e.g. display, storage, manual handling), the long-term planning horizon (i.e. when in the future it is deemed acceptable for an item to become damaged or unfit for use), and the threshold of damage, i.e. the extent of physical change assessed as damage.
    Results: In this paper, we explore the threshold of fitness for use for archival and library paper documents used for display or reading in the context of access in reading rooms by the general public. Change is considered in the context of discolouration and mechanical deterioration such as tears and missing pieces: forms of physical deterioration that accumulate with time in libraries and archives. We also explore whether the threshold of fitness for use is defined differently for objects perceived to be of different value, and for different modes of use. The data were collected in a series of fitness-for-use workshops carried out with readers/visitors in heritage institutions using principles of Design of Experiments.
    Conclusions: The results show that when no particular value is pre-assigned to an archival or library document, missing pieces influence readers/visitors' subjective judgements of fitness for use to a greater extent than discolouration and tears (which have little or no influence). This finding was most apparent in the display context in comparison to the reading room context, and applied best when readers/visitors were not given a value scenario (in comparison to when they were asked to think about the document having personal or historic value). In general, items can be estimated to become unfit when text is evidently missing. However, if the visitor/reader is prompted to think of a document in terms of its historic value, then change in a document has little impact on fitness for use
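
    One plausible way to analyse workshop judgements of this kind (not necessarily the authors' method) is a logistic model of fit/unfit responses against the three deterioration factors; all data and effect sizes below are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Hypothetical workshop responses: each row is one document shown to a
# reader/visitor, coded by discolouration, tears and missing pieces
# (0 = none ... 3 = severe); y = 1 if judged "unfit for use".
X = rng.integers(0, 4, size=(500, 3)).astype(float)
logit = -2.0 + 0.1 * X[:, 0] + 0.2 * X[:, 1] + 1.5 * X[:, 2]  # assumed effects
y = rng.random(500) < 1 / (1 + np.exp(-logit))

model = LogisticRegression().fit(X, y)
print(dict(zip(["discolouration", "tears", "missing"], model.coef_[0].round(2))))
# A dominant "missing pieces" coefficient would mirror the finding that
# missing text drives unfitness more than discolouration or tears.
```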

    Stochastic simulations of conditional states of partially observed systems, quantum and classical

    In a partially observed quantum or classical system, the information that we cannot access results in our description of the system becoming mixed, even if we have perfect initial knowledge. That is, if the system is quantum the conditional state is given by a state matrix $\rho_r(t)$, and if classical the conditional state is given by a probability distribution $P_r(x,t)$, where $r$ is the result of the measurement. Determining the evolution of this conditional state under continuous-in-time monitoring thus requires an expensive numerical calculation. In this paper we demonstrate a numerical technique based on linear measurement theory that allows us to determine the conditional state using only pure states. That is, our technique reduces the problem size by a factor of $N$, the number of basis states for the system. Furthermore, we show that our method can be applied to joint classical and quantum systems, as arise in modeling realistic measurements. Comment: 16 pages, 11 figures
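
    For context, tracking a conditional state with pure states can be sketched with a standard quantum-jump (Monte Carlo wave-function) unravelling for a decaying two-level atom under photodetection. This is a generic illustration of pure-state conditioning, not the authors' linear-measurement-theory technique, which is defined in the text:

```python
import numpy as np

rng = np.random.default_rng(4)

# Two-level atom with decay rate gamma, basis order (|g>, |e>).
gamma, dt, steps = 1.0, 0.001, 5000
sm = np.array([[0, 1], [0, 0]], dtype=complex)  # lowering operator |g><e|
H_eff = -0.5j * gamma * sm.conj().T @ sm        # non-Hermitian no-click drift

psi = np.array([0, 1], dtype=complex)           # start in the excited state
traj = []
for _ in range(steps):
    p_jump = gamma * dt * np.vdot(psi, (sm.conj().T @ sm) @ psi).real
    if rng.random() < p_jump:
        psi = sm @ psi                          # photon detected: jump to |g>
    else:
        psi = psi - 1j * dt * (H_eff @ psi)     # no-click evolution
    psi /= np.linalg.norm(psi)                  # state stays pure and normalised
    traj.append(abs(psi[1]) ** 2)               # conditional P(excited)

# Each run is the conditional state for one measurement record r; averaging
# many runs recovers the mixed state rho(t) of the unconditioned evolution.
```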

    Physical Conditions of Fast Glacier Flow 1: Measurements From Boreholes Drilled to the Bed of Store Glacier, West Greenland

    Marine-terminating outlet glaciers of the Greenland ice sheet make significant contributions to global sea level rise, yet the conditions that facilitate their fast flow remain poorly constrained owing to a paucity of data. We drilled and instrumented seven boreholes on Store Glacier, Greenland, to monitor subglacial water pressure, temperature, electrical conductivity and turbidity, along with englacial ice temperature and deformation. These observations were supplemented by surface velocity and meteorological measurements to gain insight into the conditions and mechanisms of fast glacier flow. Located 30 km from the calving front, each borehole drained rapidly on attaining 600 m depth, indicating a direct connection with an active subglacial hydrological system. Persistently high subglacial water pressures indicate low effective pressure (180-280 kPa), with small-amplitude variations correlated with notable peaks in surface velocity driven by the diurnal melt cycle and longer periods of melt and rainfall. The englacial deformation profile determined from borehole tilt measurements indicates that 63-71% of total ice motion occurred at the bed, with the remaining 29-37% predominantly attributed to enhanced deformation in the lowermost 50-100 m of the ice column. We interpret this lowermost 100 m to be formed of warmer, pre-Holocene ice overlying a thin (0-8 m) layer of temperate basal ice. Our observations are consistent with a spatially extensive and persistently inefficient subglacial drainage system that we hypothesize comprises drainage both at the ice-sediment interface and through subglacial sediments. This configuration has similarities to that interpreted beneath dynamically analogous Antarctic ice streams, Alaskan tidewater glaciers, and surging glaciers.
    This research was funded by UK National Environment Research Council grants NE/K006126 and NE/K005871/1 and an Aberystwyth University Capital Equipment grant to B. H. A. H. gratefully acknowledges support from the BBC's Operation Iceberg program for the deployment of the GPS reference station and a Professorial Fellowship from the Centre for Arctic Gas Hydrate, Environment and Climate, funded by the Research Council of Norway through its Centres of Excellence (grant 223259). The authors thank the crew of SV Gambo for logistical support, Ann Andreasen and the Uummannaq Polar Institute for hospitality, technicians Barry Thomas and Dave Kelly for assembly of the borehole sensors, Joe Todd for producing a bed elevation model from mass conservation that proved useful in selecting the drill site, and Leo Nathan for assistance in the field. NCEP/NCAR Reanalysis data were provided by the NOAA/OAR/ESRL PSD, Boulder, Colorado, USA, from www.esrl.noaa.gov/psd/. The data sets presented in this paper are available for download from https://doi.org/10.6084/m9.figshare.5745294
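
    The quoted effective pressures follow directly from ice overburden minus subglacial water pressure. A minimal sketch using the abstract's ~600 m ice thickness and a hypothetical borehole water level chosen to land in the reported 180-280 kPa range:

```python
# Effective pressure N = ice overburden - subglacial water pressure.
RHO_ICE, RHO_WATER, G = 917.0, 1000.0, 9.81  # kg/m^3, kg/m^3, m/s^2

H = 600.0           # ice thickness (m); boreholes drained at ~600 m depth
water_level = 73.0  # hypothetical borehole water level below surface (m),
                    # chosen so N falls in the reported range

p_ice = RHO_ICE * G * H                      # ~5400 kPa overburden
p_water = RHO_WATER * G * (H - water_level)  # pressure of the borehole water column
N = p_ice - p_water
print(f"overburden {p_ice/1e3:.0f} kPa, water {p_water/1e3:.0f} kPa, "
      f"N = {N/1e3:.0f} kPa")                # ~230 kPa, within 180-280 kPa
```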

    Modeling and Optimization of Lactic Acid Synthesis by the Alkaline Degradation of Fructose in a Batch Reactor

    The present work deals with the determination of the optimal operating conditions for lactic acid synthesis by the alkaline degradation of fructose. This is a complex transformation for which detailed kinetic knowledge is not available, carried out in a batch or semi-batch reactor. The "Tendency Modeling" approach, which consists of developing an approximate stoichiometric and kinetic model, has been used. An experimental planning method provided the database for model development and allows comparison between the experimental and model responses. The model is then used in an optimization procedure to compute the optimal process. The optimal control problem is converted into a nonlinear programming problem, solved using a sequential quadratic programming procedure coupled with the golden-section search method. The strategy developed allows the different variables, which may be constrained, to be optimized simultaneously. The validity of the methodology is illustrated by the determination of the optimal operating conditions for lactic acid production
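
    A toy version of the optimization step is sketched below, assuming an invented first-order Arrhenius tendency model (fructose degrading to lactic acid and, in parallel, to by-products) and using SciPy's SLSQP implementation of sequential quadratic programming with simple bounds; the paper's actual model, variables and constraints are given in the text:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import minimize

# Toy tendency model (all constants hypothetical): fructose F degrades to
# lactic acid L (k1) and to by-products (k2), with Arrhenius rate laws.
def neg_yield(x):
    T, t_f = x  # temperature (K), batch time (h)
    k1 = 1e6 * np.exp(-6000.0 / T)
    k2 = 1e8 * np.exp(-8000.0 / T)

    def rhs(t, y):
        F, L = y
        return [-(k1 + k2) * F, k1 * F]

    sol = solve_ivp(rhs, (0.0, t_f), [1.0, 0.0])
    return -sol.y[1, -1]  # negative: we maximise final lactic acid L(t_f)

# SQP (SLSQP) over the operating variables, with bound constraints.
res = minimize(neg_yield, x0=[360.0, 2.0], method="SLSQP",
               bounds=[(330.0, 400.0), (0.5, 5.0)])
T_opt, t_opt = res.x
print(f"optimal T = {T_opt:.0f} K, batch time = {t_opt:.1f} h, "
      f"yield = {-res.fun:.2f}")
```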