On the Mahler measure of hyperelliptic families
We prove Boyd's "unexpected coincidence" of the Mahler measures for two
families of two-variate polynomials defining curves of genus 2. We further
equate the same measures to the Mahler measures of polynomials
whose zero loci define elliptic curves for certain parameter values. Comment: 14 pages
Is economic convergence in New Member States sufficient for an adoption of the Euro?
The New European Member States (NMS) are expected to adopt the euro as soon as they fulfil the Maastricht criteria, which means that their nominal convergence has been achieved. But the question is: should these new European members adopt the euro as soon as possible, or should they join the euro zone later on, when the real convergence of their economies is well underway? In the meantime, what currency system should the new European members adopt before joining the euro zone? Besides, where exactly do these NMS stand in terms of nominal convergence? In terms of real convergence, is the Optimal Currency Area (OCA) theory relevant to the new European members? The OCA theory states that countries are better suited to belong to a monetary union when they meet certain criteria related to the real convergence of an economy: a high degree of external openness, mobility of factors of production, and diversification of production structures. According to this theory, if there is clear convergence between the business cycles of countries that are willing to join the monetary union and the business cycle within the currency area, this tends to show that these countries are ready to enter the currency area. In this paper, we first see where the NMS stand with regard to the Maastricht criteria; we then examine whether these NMS fulfil the criteria identified by the OCA theory, which are linked to the real convergence of an economy. Finally, after a survey of the literature devoted to business-cycle synchronisation, we seek to determine whether there is a clear correlation between those countries' business cycles and the European cycle, which would stand in favour of an early adoption of the euro in these countries.
Keywords: New European Member States; Euro; Enlargement of EMU; Maastricht criteria; Central and Eastern European Countries; CEECs; Optimal Currency Area theory
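The business-cycle synchronisation test discussed in this abstract can be made concrete with a small sketch: compute the correlation between two detrended GDP-cycle series. The data below are synthetic and the function name is hypothetical, not taken from the paper; real studies would use filtered (e.g. HP-detrended) quarterly GDP.

```python
import numpy as np

def cycle_correlation(gdp_a, gdp_b):
    """Pearson correlation of two detrended (cyclical) GDP series."""
    a = np.asarray(gdp_a, dtype=float) - np.mean(gdp_a)
    b = np.asarray(gdp_b, dtype=float) - np.mean(gdp_b)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(0)
t = np.arange(80)                                   # 20 years of quarterly data
euro_cycle = np.sin(2 * np.pi * t / 32)             # stylised euro-area cycle
nms_cycle = euro_cycle + 0.3 * rng.standard_normal(80)  # partly synchronised NMS cycle
r = cycle_correlation(nms_cycle, euro_cycle)
print(round(r, 2))
```

A value of r close to 1 would, in the spirit of the OCA argument above, count as evidence in favour of early euro adoption.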
On the second moment of the number of crossings by a stationary Gaussian process
Cramér and Leadbetter introduced in 1967 the sufficient condition $\int_{0^+} \frac{r''(t) - r''(0)}{t}\,dt < \infty$ for the number of zeros of a centered stationary Gaussian process with twice differentiable covariance function $r$ to have finite variance. This condition is
known as the Geman condition, since Geman proved in 1972 that it was also a
necessary condition. Up to now, no such criterion was known for counts of crossings of a level other than the mean. This paper shows that the Geman condition is still necessary and sufficient for the number of crossings of any fixed level to have finite variance. For the generalization to the number of crossings of a curve, a condition on the curve has to be added to the Geman condition. Comment: Published at http://dx.doi.org/10.1214/009117906000000142 in the Annals of Probability (http://www.imstat.org/aop/) by the Institute of Mathematical Statistics (http://www.imstat.org)
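The quantity studied in this abstract can be explored empirically: simulate many paths of a smooth stationary Gaussian process and look at the sample mean and variance of the number of level crossings. The sketch below is purely illustrative (discretised paths obtained by Gaussian-kernel smoothing of white noise), not the paper's method.

```python
import numpy as np

def count_crossings(x, u):
    """Number of crossings of level u by the sampled path x (sign changes of x - u)."""
    s = np.sign(x - u)
    return int(np.sum(s[:-1] * s[1:] < 0))

rng = np.random.default_rng(1)
n, dt = 2000, 0.01
t = np.arange(-3, 3, dt)
kernel = np.exp(-t ** 2)
kernel /= np.sqrt(np.sum(kernel ** 2) * dt)   # normalise so the process has unit variance

counts = []
for _ in range(200):
    w = rng.standard_normal(n + len(kernel) - 1) * np.sqrt(dt)
    x = np.convolve(w, kernel, mode="valid")  # approximate stationary Gaussian path
    counts.append(count_crossings(x, u=1.0))
counts = np.array(counts)
print(counts.mean(), counts.var())
```

For a smooth covariance like this one the Geman condition holds, so the empirical variance should stabilise as the number of simulated paths grows.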
Improving Workplace Expertise to Meet Increasing Customer Requirements: The Impact of Training
This article focuses on the training of engineers at a factory producing integrated circuits. Inadequate use of statistical process techniques by the engineers meant that the production process was not being optimised in the context of increasing customer requirements. A training needs analysis was undertaken, and a training programme was developed, implemented and evaluated. The results of this programme are presented and conclusions are drawn.
Relative Entailment Among Probabilistic Implications
We study a natural variant of the implicational fragment of propositional
logic. Its formulas are pairs of conjunctions of positive literals, related
together by an implicational-like connective; the semantics of this sort of
implication is defined in terms of a threshold on a conditional probability of
the consequent, given the antecedent: we are dealing with what the data
analysis community calls confidence of partial implications or association
rules. Existing studies of redundancy among these partial implications have
characterized so far only entailment from one premise and entailment from two
premises, both in the stand-alone case and in the case of presence of
additional classical implications (this is what we call "relative entailment").
By exploiting a previously noted alternative view of the entailment in terms of
linear programming duality, we characterize exactly the cases of entailment
from arbitrary numbers of premises, again both in the stand-alone case and in
the case of presence of additional classical implications. As a result, we
obtain decision algorithms of better complexity; additionally, for each
potential case of entailment, we identify a critical confidence threshold and
show that it is, actually, intrinsic to each set of premises and antecedent of
the conclusion.
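The notion of confidence of a partial implication used in this abstract can be computed directly on a toy dataset: conf(A → B) is the conditional relative frequency of the consequent among the transactions that contain the antecedent. The transactions below are made up for illustration.

```python
def confidence(transactions, antecedent, consequent):
    """conf(A -> B) = |{t : A ∪ B ⊆ t}| / |{t : A ⊆ t}|, or 0 if A occurs nowhere."""
    ant = [t for t in transactions if antecedent <= t]
    if not ant:
        return 0.0
    both = [t for t in ant if consequent <= t]
    return len(both) / len(ant)

T = [
    {"a", "b", "c"},
    {"a", "b"},
    {"a", "c"},
    {"b", "c"},
    {"a", "b", "c"},
]
print(confidence(T, {"a"}, {"b"}))  # 3 of the 4 transactions containing a also contain b -> 0.75
```

An entailment question of the kind studied in the paper then asks: given thresholds on the confidences of a set of premises, what threshold on the conclusion's confidence is forced?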
Numerical study of Bose-Einstein condensation in the Kaniadakis-Quarati model for bosons
Kaniadakis and Quarati (1994) proposed a Fokker--Planck equation with
quadratic drift as a PDE model for the dynamics of bosons in the spatially
homogeneous setting. It is an open question whether this equation has solutions
exhibiting condensates in finite time. The main analytical challenge lies in
the continuation of exploding solutions beyond their first blow-up time while
having a linear diffusion term. We present a thoroughly validated time-implicit
numerical scheme capable of simulating solutions for arbitrarily large time,
and thus enabling a numerical study of the condensation process in the
Kaniadakis--Quarati model. We show strong numerical evidence that above the
critical mass rotationally symmetric solutions of the Kaniadakis--Quarati model
in 3D form a condensate in finite time and converge in entropy to the unique
minimiser of the natural entropy functional at an exponential rate. Our
simulations further indicate that the spatial blow-up profile near the origin
follows a universal power law and that transient condensates can occur for
sufficiently concentrated initial data. Comment: To appear in Kinet. Relat. Models
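The paper's scheme itself is not reproduced here, but the basic ingredient it relies on, a time-implicit (backward-Euler) step that stays stable for large time steps, can be sketched on a plain 1D diffusion equation. The discretisation and zero-flux boundary treatment below are generic assumptions, not the Kaniadakis–Quarati scheme.

```python
import numpy as np

def implicit_heat_step(u, dt, dx):
    """One backward-Euler step for u_t = u_xx with zero-flux boundaries:
    solve (I - dt * Laplacian) u_new = u_old."""
    n = len(u)
    r = dt / dx ** 2
    A = np.zeros((n, n))
    np.fill_diagonal(A, 1 + 2 * r)
    A[0, 0] = A[-1, -1] = 1 + r          # Neumann (zero-flux) ends
    idx = np.arange(n - 1)
    A[idx, idx + 1] = -r                 # superdiagonal
    A[idx + 1, idx] = -r                 # subdiagonal
    return np.linalg.solve(A, u)

x = np.linspace(0, 1, 101)
u = np.exp(-100 * (x - 0.5) ** 2)        # concentrated initial datum
mass0 = u.sum()
for _ in range(50):
    u = implicit_heat_step(u, dt=1e-3, dx=x[1] - x[0])
print(round(u.sum() / mass0, 6))         # discrete mass is conserved exactly
```

The implicit solve is what allows arbitrarily large final times, which is exactly the property the abstract needs in order to continue past blow-up numerically.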
Modelling Spatial Compositional Data: Reconstructions of past land cover and uncertainties
In this paper, we construct a hierarchical model for spatial compositional
data, which is used to reconstruct past land-cover compositions (in terms of
coniferous forest, broadleaved forest, and unforested/open land) for five past time
periods over Europe. The model consists of a
Gaussian Markov Random Field (GMRF) with Dirichlet observations. A block
updated Markov chain Monte Carlo (MCMC), including an adaptive Metropolis
adjusted Langevin step, is used to estimate model parameters. The sparse
precision matrix in the GMRF provides computational advantages leading to a
fast MCMC algorithm. Reconstructions are obtained by combining pollen-based
estimates of vegetation cover at a limited number of locations with scenarios
of past deforestation and output from a dynamic vegetation model. To evaluate
uncertainties in the predictions a novel way of constructing joint confidence
regions for the entire composition at each prediction location is proposed. The
hierarchical model's ability to reconstruct past land cover is evaluated
through cross validation for all time periods, and by comparing reconstructions
for the recent past to a present day European forest map. The evaluation
results are promising and the model is able to capture known structures in past
land-cover compositions.
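The computational point made in this abstract, that a sparse GMRF precision matrix gives a fast MCMC, can be illustrated on a toy first-order GMRF. The construction below (graph Laplacian of a grid plus a small ridge, assembled densely for readability) is a generic sketch, not the paper's model, and the parameter name kappa is an assumption.

```python
import numpy as np

def grid_precision(m, kappa=0.1):
    """Precision matrix Q = kappa*I + graph Laplacian of an m-by-m grid.
    Each row has at most 5 nonzeros, the sparsity a GMRF solver exploits."""
    n = m * m
    Q = np.zeros((n, n))
    for i in range(m):
        for j in range(m):
            k = i * m + j
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                a, b = i + di, j + dj
                if 0 <= a < m and 0 <= b < m:
                    Q[k, k] += 1.0
                    Q[k, a * m + b] -= 1.0
            Q[k, k] += kappa
    return Q

rng = np.random.default_rng(2)
Q = grid_precision(8)
L = np.linalg.cholesky(Q)                 # Q = L L^T
z = rng.standard_normal(Q.shape[0])
x = np.linalg.solve(L.T, z)               # x ~ N(0, Q^{-1}): cov(x) = (L L^T)^{-1}
print(Q.shape)
```

In practice the Cholesky factor of such a sparse, nearly banded precision matrix is itself sparse, which is what makes block-updated MCMC steps cheap.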