Kepler-91b: a planet at the end of its life. Planet and giant host star properties via light-curve variations
The evolution of planetary systems is intimately linked to the evolution of
their host star. Our understanding of the whole planetary evolution process is
based on the large planet diversity observed so far. To date, only a few tens of planets have been discovered orbiting stars ascending the Red Giant Branch. Although several theories have been proposed, the question of how planets die remains open due to small-number statistics. In this work we study the
giant star Kepler-91 (KOI-2133) in order to determine the nature of a
transiting companion. This system was detected by the Kepler Space Telescope.
However, its planetary nature still requires confirmation. We confirm the planetary nature of the object transiting the star Kepler-91 by deriving its mass and planetary radius, and we obtain the stellar radius and mass from an asteroseismic analysis. We find that its eccentric orbit brings the planet very close to the stellar atmosphere at pericenter. Kepler-91b could thus represent the stage immediately preceding planet engulfment, a process recently detected for BD+48 740. Our estimates show that Kepler-91b will be swallowed by its host star in less than 55 Myr. Among the confirmed planets around giant stars, this is the planetary-mass body closest to its host star. At pericenter passage, the star subtends a large angle, covering around 10% of the sky as seen from the planet. The planetary atmosphere appears to be inflated, probably due to the high stellar irradiation.
Comment: 21 pages, 8 tables and 11 figures
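For context, the quoted sky coverage follows from standard solid-angle geometry: a star of radius R_* seen from a distance d (symbols introduced here for illustration, not values from the paper) covers a spherical cap of half-angle \theta with \sin\theta = R_*/d, so the fraction of the full sky it occupies is
\[
\frac{\Omega}{4\pi} \;=\; \frac{1-\cos\theta}{2} \;=\; \frac{1}{2}\left(1-\sqrt{1-\left(R_\ast/d\right)^{2}}\right).
\]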
Retrodiction as a tool for micromaser field measurements
We use retrodictive quantum theory to describe cavity field measurements by
successive atomic detections in the micromaser. We calculate the state of the
micromaser cavity field prior to detection of sequences of atoms in either the
excited or ground state, for atoms that are initially prepared in the excited
state. This provides the probability operator measure (POM) elements that describe such sequences of measurements.
Comment: 20 pages, 4(8) figures
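As a rough illustration of what such POM elements look like, the sketch below assumes resonant Jaynes-Cummings dynamics with atoms injected in the excited state; the coupling, interaction time and photon-number cutoff are illustrative, and this is not the retrodictive calculation of the paper itself.

```python
# Illustrative sketch: POM elements for a sequence of atomic detections in a
# micromaser, assuming resonant Jaynes-Cummings dynamics and atoms prepared
# in the excited state. Parameter values (g, tau, N_MAX) are made up.
import numpy as np

N_MAX = 30                      # photon-number cutoff for the cavity field
g, tau = 1.0, 0.5               # coupling strength and interaction time

n = np.arange(N_MAX)
phi = g * tau * np.sqrt(n + 1)  # Rabi phase for |e,n> <-> |g,n+1>

# Kraus operators on the field for detecting the atom in |e> or |g>:
# |e,n> -> cos(phi_n)|e,n> - i sin(phi_n)|g,n+1>
K_e = np.diag(np.cos(phi))                       # atom exits still excited
K_g = np.zeros((N_MAX, N_MAX), dtype=complex)
K_g[1:, :-1] = np.diag(-1j * np.sin(phi[:-1]))   # photon added on emission

def pom_element(sequence):
    """POM element for a given detection sequence, e.g. 'eeg' (time ordered)."""
    K = np.eye(N_MAX, dtype=complex)
    for outcome in sequence:
        K = (K_e if outcome == 'e' else K_g) @ K
    return K.conj().T @ K       # Pi_seq = K^dagger K

# Probability of the record 'eg' given a prior (predictive) field state rho.
rho = np.zeros((N_MAX, N_MAX)); rho[0, 0] = 1.0   # vacuum, for illustration
p_eg = np.real(np.trace(pom_element('eg') @ rho))
print(p_eg)
```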
Six Peaks Visible in the Redshift Distribution of 46,400 SDSS Quasars Agree with the Preferred Redshifts Predicted by the Decreasing Intrinsic Redshift Model
The redshift distribution of all 46,400 quasars in the Sloan Digital Sky
Survey (SDSS) Quasar Catalog III, Third Data Release, is examined. Six peaks that fall within the redshift window below z = 4 are visible. Their positions
agree with the preferred redshift values predicted by the decreasing intrinsic
redshift (DIR) model, even though this model was derived using completely
independent evidence. A power spectrum analysis of the full dataset confirms
the presence of a single, significant power peak at the expected redshift
period. Power peaks with the predicted period are also obtained when the upper
and lower halves of the redshift distribution are examined separately. The
periodicity detected is in linear z, as opposed to log(1+z). Because the peaks
in the SDSS quasar redshift distribution agree well with the preferred
redshifts predicted by the intrinsic redshift relation, we conclude that this
relation, and the peaks in the redshift distribution, likely both have the same
origin, and this may be intrinsic redshifts, or a common selection effect.
However, because of the way the intrinsic redshift relation was determined, it seems unlikely that one selection effect could have been responsible for both.
Comment: 12 pages, 12 figures, accepted for publication in the Astrophysical Journal
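The kind of periodicity test described above can be sketched as follows; this is not the authors' analysis pipeline, and the names (redshift_power_spectrum, the placeholder redshift array) are assumptions for illustration only.

```python
# Minimal sketch of a power-spectrum test for periodicity in a redshift
# distribution (not the authors' pipeline). `redshifts` is assumed to be a
# 1-D array of quasar redshifts, e.g. loaded from the SDSS quasar catalog.
import numpy as np

def redshift_power_spectrum(redshifts, z_max=4.0, bin_width=0.01):
    """Periodogram of the binned redshift distribution in linear z."""
    bins = np.arange(0.0, z_max + bin_width, bin_width)
    counts, _ = np.histogram(redshifts, bins=bins)
    fluct = counts - counts.mean()          # remove the mean level
    power = np.abs(np.fft.rfft(fluct)) ** 2
    freqs = np.fft.rfftfreq(len(fluct), d=bin_width)  # cycles per unit z
    return freqs, power

# A peak at frequency f corresponds to a redshift period of 1/f.
rng = np.random.default_rng(0)
fake_z = rng.uniform(0.0, 4.0, 46400)       # placeholder data, not SDSS
freqs, power = redshift_power_spectrum(fake_z)
print("strongest period in z:", 1.0 / freqs[1:][np.argmax(power[1:])])
```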
Towards virtual machine energy-aware cost prediction in clouds
Pricing mechanisms employed by different service providers significantly influence the role of cloud computing within the IT industry. With the increasing cost of electricity, Cloud providers consider power consumption as one of the major cost factors to be maintained within their infrastructures. Consequently, modelling a new pricing mechanism that allows Cloud providers to determine the potential cost of resource usage and power consumption has attracted the attention of many researchers. Furthermore, predicting the future cost of Cloud services can help service providers to offer customers suitable services that meet their requirements. This paper introduces an Energy-Aware Cost Prediction Framework to estimate the total cost of Virtual Machines (VMs) by considering both resource usage and power consumption. The VMs’ workload is first predicted with an Autoregressive Integrated Moving Average (ARIMA) model, and the power consumption is then predicted using regression models. The comparison between the predicted and actual results obtained in a real Cloud testbed shows that this framework is capable of predicting the workload, power consumption and total cost for different VMs with good prediction accuracy, e.g. with a 0.06 absolute percentage error for the predicted total cost of the VMs.
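A minimal sketch of the framework's three prediction steps, assuming statsmodels and scikit-learn as the modelling libraries and invented workload, power and tariff values (this is not the authors' implementation):

```python
# Hedged sketch of the prediction chain: ARIMA forecasts the VM workload, a
# regression model maps workload to power, and a simple tariff turns both into
# a cost estimate. All numbers below are illustrative only.
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.linear_model import LinearRegression

# Historical per-interval CPU utilisation (%) and measured power (W) of a VM.
cpu_history = np.array([20, 25, 31, 28, 35, 40, 38, 45, 50, 48], dtype=float)
power_history = np.array([95, 98, 102, 100, 105, 109, 107, 112, 116, 114.0])

# 1) Workload prediction with ARIMA (order chosen arbitrarily here).
workload_model = ARIMA(cpu_history, order=(1, 1, 1)).fit()
cpu_forecast = workload_model.forecast(steps=3)

# 2) Power prediction with a regression model fitted on utilisation vs power.
power_model = LinearRegression().fit(cpu_history.reshape(-1, 1), power_history)
power_forecast = power_model.predict(cpu_forecast.reshape(-1, 1))

# 3) Total cost = energy cost + resource-usage cost (illustrative tariffs).
interval_hours = 1.0
price_per_kwh = 0.12          # electricity price, assumed
price_per_vm_hour = 0.05      # resource-usage price, assumed
energy_kwh = power_forecast * interval_hours / 1000.0
total_cost = np.sum(energy_kwh * price_per_kwh
                    + price_per_vm_hour * interval_hours)
print(f"predicted 3-interval VM cost: ${total_cost:.4f}")
```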
Damage function for historic paper. Part I: Fitness for use
Background: In heritage science literature and in preventive conservation practice, damage functions are used to model material behaviour, and specifically damage (unacceptable change), as a result of the presence of a stressor over time. For such functions to be of use in the context of collection management, it is important to define a range of parameters, such as who the stakeholders are (e.g. the public, curators, researchers), the mode of use (e.g. display, storage, manual handling), the long-term planning horizon (i.e. when in the future it is deemed acceptable for an item to become damaged or unfit for use), and the threshold of damage, i.e. the extent of physical change assessed as damage.
Results: In this paper, we explore the threshold of fitness for use for archival and library paper documents used for display or reading in the context of access in reading rooms by the general public. Change is considered in the context of discolouration and mechanical deterioration such as tears and missing pieces: forms of physical deterioration that accumulate with time in libraries and archives. We also explore whether the threshold of fitness for use is defined differently for objects perceived to be of different value, and for different modes of use. The data were collected in a series of fitness-for-use workshops carried out with readers/visitors in heritage institutions using principles of Design of Experiments.
Conclusions: The results show that when no particular value is pre-assigned to an archival or library document, missing pieces influenced readers'/visitors' subjective judgements of fitness for use to a greater extent than did discolouration and tears (which had little or no influence). This finding was most apparent in the display context in comparison to the reading room context. It also applied most clearly when readers/visitors were not given a value scenario (in comparison to when they were asked to think about the document having personal or historic value). It can be estimated that, in general, items become unfit when text is evidently missing. However, if the visitor/reader is prompted to think of a document in terms of its historic value, then change in a document has little impact on fitness for use.
The equivalence of fluctuation scale dependence and autocorrelations
We define optimal per-particle fluctuation and correlation measures, relate
fluctuations and correlations through an integral equation and show how to
invert that equation to obtain precise autocorrelations from fluctuation scale
dependence. We test the precision of the inversion with Monte Carlo data and
compare autocorrelations to conditional distributions conventionally used to
study high transverse momentum jet structure.
Comment: 10 pages, 9 figures, proceedings, MIT workshop on correlations and fluctuations in relativistic nuclear collisions
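A generic sketch of this kind of inversion is given below; the kernel, scales and regularization are invented for illustration and do not reproduce the specific per-particle measures defined in the paper.

```python
# Generic numerical sketch of inverting a fluctuation-vs-scale relation to
# recover an autocorrelation. We assume a discretized relation
#   F(m) = sum_k K[m, k] * A(k)
# where F is a fluctuation measure at bin scale m, A the autocorrelation at
# lag k, and K a known kernel; the inversion is regularized least squares.
import numpy as np

n_scales, n_lags = 40, 40
m = np.arange(1, n_scales + 1)[:, None]   # bin scale index
k = np.arange(n_lags)[None, :]            # lag index

# Illustrative triangular kernel: lags up to the bin scale contribute,
# with weight decreasing linearly (a common form for bin-averaged sums).
K = np.clip(1.0 - k / m, 0.0, None)

# Synthetic "true" autocorrelation and the fluctuation scale dependence it
# would produce, with a little noise standing in for Monte Carlo statistics.
A_true = np.exp(-np.arange(n_lags) / 5.0)
F = K @ A_true + 1e-3 * np.random.default_rng(1).standard_normal(n_scales)

# Tikhonov-regularized inversion: minimize |K A - F|^2 + lam |A|^2.
lam = 1e-4
A_rec = np.linalg.solve(K.T @ K + lam * np.eye(n_lags), K.T @ F)

print("max recovery error:", np.max(np.abs(A_rec - A_true)))
```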
Modeling and Optimization of Lactic Acid Synthesis by the Alkaline Degradation of Fructose in a Batch Reactor
The present work deals with the determination of the optimal operating conditions of lactic acid synthesis by the alkaline degradation of fructose. It is a complex transformation for which detailed knowledge is not available. It is carried out in a batch
or semi-batch reactor. The "Tendency Modeling" approach, which consists of the development of an approximate stoichiometric and kinetic model, has been used. An experimental design (planning) method has been used to generate the database for model development. Applying this experimental design methodology allows comparison between the experimental and model responses. The model is then used in an optimization procedure to compute the optimal process. The optimal control problem is converted into a nonlinear programming problem and solved using a sequential quadratic programming procedure coupled with the golden section search method. The strategy developed allows the different variables, which may be constrained, to be optimized simultaneously. The validity of the methodology is illustrated by the determination of the optimal operating conditions for lactic acid production.
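A toy sketch of the optimization step, assuming a made-up tendency-style kinetic model (A -> B -> C with B the desired product) and SciPy's SQP solver in place of the paper's SQP/golden-search combination:

```python
# Illustrative sketch only: a toy kinetic model A -> B -> C is optimized with
# SciPy's SLSQP solver. Kinetic constants, bounds and Arrhenius parameters
# are invented for the example and are not the paper's tendency model.
import numpy as np
from scipy.optimize import minimize

R = 8.314           # J/(mol K)
CA0 = 1.0           # initial substrate concentration, mol/L

def rate_constants(T):
    """Arrhenius rate constants for A->B and B->C (illustrative values)."""
    k1 = 1.0e6 * np.exp(-50_000.0 / (R * T))
    k2 = 5.0e7 * np.exp(-70_000.0 / (R * T))
    return k1, k2

def product_conc(x):
    """Concentration of B at final time t for series reactions A->B->C."""
    T, t = x
    k1, k2 = rate_constants(T)
    return CA0 * k1 / (k2 - k1) * (np.exp(-k1 * t) - np.exp(-k2 * t))

# Maximize the product concentration over temperature (K) and batch time
# (in the time units of the rate constants) by minimizing its negative,
# subject to simple operating bounds.
result = minimize(lambda x: -product_conc(x),
                  x0=[340.0, 2.0],
                  method='SLSQP',
                  bounds=[(300.0, 370.0), (0.1, 10.0)])

T_opt, t_opt = result.x
print(f"optimal T = {T_opt:.1f} K, batch time = {t_opt:.2f}, "
      f"yield = {product_conc(result.x):.3f} mol/L")
```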
The Distribution of Redshifts in New Samples of Quasi-stellar Objects
Two new samples of QSOs have been constructed from recent surveys to test the hypothesis that the redshift distribution of bright QSOs is periodic. The first of these comprises 57 different redshifts among all known close pairs or multiple QSOs with image separations of at most 10 arcsec, and the second consists of 39 QSOs selected through their X-ray emission and their proximity to bright, comparatively nearby active galaxies. The redshift distributions of the samples are found to exhibit distinct peaks with a periodic separation identical to that claimed in earlier samples, but now extended out to higher redshift peaks up to z = 4.47, predicted by the formula but never seen before. The periodicity is also seen in a third sample, the 78 QSOs of the 3C and 3CR catalogues. It is present in these three datasets at a high overall significance level, and appears not to be explicable by spectroscopic or similar selection effects. Possible interpretations are briefly discussed.
Comment: submitted for publication in the Astronomical Journal, 15 figures
Stochastic simulations of conditional states of partially observed systems, quantum and classical
In a partially observed quantum or classical system the information that we
cannot access results in our description of the system becoming mixed even if
we have perfect initial knowledge. That is, if the system is quantum the conditional state will be given by a state matrix, and if classical the conditional state will be given by a probability distribution, in both cases conditioned on the result of the measurement. Determining the evolution of this conditional state under continuous-in-time monitoring therefore requires an expensive numerical calculation. In this paper we demonstrate a numerical technique based on linear measurement theory that allows us to determine the conditional state using only pure states. That is, our technique reduces the problem size by a factor equal to the number of basis states of the system. Furthermore, we show that our method can be applied to joint classical and quantum systems, as arises in modeling realistic measurements.
Comment: 16 pages, 11 figures
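A toy discrete-time sketch of the underlying idea (not the paper's linear-measurement technique): averaging unnormalized pure-state branches over the unobserved outcomes reproduces the mixed conditional state obtained by evolving the full state matrix. All operators, the record and the parameters below are invented for illustration.

```python
# Toy illustration: a qubit is measured each step through two channels, one
# observed (record r) and one whose outcome u is discarded. Tracking one
# unnormalized pure state per unobserved branch and summing the outer
# products reproduces the mixed conditional state from the state-matrix map.
import numpy as np

# Kraus operators M[(r, u)] for joint outcomes (observed r, unobserved u);
# they satisfy sum_{r,u} M^dag M = I (weak sigma_z and sigma_x measurements).
theta, phi = 0.3, 0.2
sz = np.array([[1, 0], [0, -1]], dtype=complex)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
I2 = np.eye(2, dtype=complex)
A = {0: (np.cos(theta) * I2 + np.sin(theta) * sz) / np.sqrt(2),
     1: (np.cos(theta) * I2 - np.sin(theta) * sz) / np.sqrt(2)}
B = {0: (np.cos(phi) * I2 + np.sin(phi) * sx) / np.sqrt(2),
     1: (np.cos(phi) * I2 - np.sin(phi) * sx) / np.sqrt(2)}
M = {(r, u): B[u] @ A[r] for r in (0, 1) for u in (0, 1)}

record = [0, 1, 1, 0]                     # an assumed observed record
psi0 = np.array([1.0, 0.0], dtype=complex)

# (a) State-matrix evolution conditioned on r, averaged over unobserved u.
rho = np.outer(psi0, psi0.conj())
for r in record:
    rho = sum(M[(r, u)] @ rho @ M[(r, u)].conj().T for u in (0, 1))
rho /= np.trace(rho)

# (b) Pure-state branches: one unnormalized vector per unobserved history.
branches = [psi0]
for r in record:
    branches = [M[(r, u)] @ psi for psi in branches for u in (0, 1)]
rho_pure = sum(np.outer(psi, psi.conj()) for psi in branches)
rho_pure /= np.trace(rho_pure)

print("max difference:", np.max(np.abs(rho - rho_pure)))   # ~1e-16
```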
R. A. Fisher, design theory, and the Indian connection
Design Theory, a branch of mathematics, was born out of the experimental
statistics research of the population geneticist R. A. Fisher and of Indian
mathematical statisticians in the 1930s. The field combines elements of
combinatorics, finite projective geometries, Latin squares, and a variety of
further mathematical structures, brought together in surprising ways. This
essay will present these structures and ideas as well as how the field came
together, in itself an interesting story.
Comment: 11 pages, 3 figures
