Bias Analysis in Entropy Estimation
We consider the problem of finite sample corrections for entropy estimation.
New estimates of the Shannon entropy are proposed and their systematic error
(the bias) is computed analytically. We find that our results cover the
correction formulas for current entropy estimates recently discussed in the
literature. The
trade-off between bias reduction and the increase of the corresponding
statistical error is analyzed.
Comment: 5 pages, 3 figures
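The finite-sample bias discussed in this abstract can be illustrated with the classic Miller-Madow correction, one standard correction formula of the kind such analyses cover. A minimal NumPy sketch (the function names are ours, not the paper's):

```python
import numpy as np

def plugin_entropy(counts):
    """Maximum-likelihood ("plug-in") Shannon entropy in nats."""
    p = counts[counts > 0] / counts.sum()
    return -np.sum(p * np.log(p))

def miller_madow_entropy(counts):
    """Plug-in estimate plus the leading-order bias correction
    (m - 1) / (2 N), where m is the number of occupied bins and
    N the sample size; the plug-in estimator underestimates the
    true entropy by roughly this amount."""
    m = np.count_nonzero(counts)
    N = counts.sum()
    return plugin_entropy(counts) + (m - 1) / (2 * N)

# Example: histogram of 100 draws from a uniform 8-symbol source
rng = np.random.default_rng(0)
counts = np.bincount(rng.integers(0, 8, size=100), minlength=8)
print(plugin_entropy(counts), miller_madow_entropy(counts))
```

Correcting the bias this way raises the variance of the estimate slightly, which is the bias/statistical-error trade-off the abstract analyzes.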
Random perfect lattices and the sphere packing problem
Motivated by the search for the best lattice sphere packings in Euclidean spaces
of large dimensions we study randomly generated perfect lattices in moderately
large dimensions (up to d=19 included). Perfect lattices are relevant in the
solution of the problem of lattice sphere packing, because the best lattice
packing is a perfect lattice and because they can be generated easily by an
algorithm. Their number, however, grows super-exponentially with the dimension,
so to get an idea of their properties we propose to study a randomized version
of the algorithm and to define a random ensemble with an effective temperature,
in a way reminiscent of a Monte-Carlo simulation. We then study the
distribution of packing fractions and kissing numbers of these ensembles and
show how, as the temperature is decreased, the best known packers are easily
recovered. We find that, even at infinite temperature, the typical perfect
lattices are considerably denser than known families (like A_d and D_d) and we
propose two hypotheses between which we cannot distinguish in this paper: one
in which they improve on Minkowski's bound, phi ~ 2^{-(0.84±0.06)d}, and a
competitor, in which their packing fraction decreases super-exponentially,
namely phi ~ d^{-ad}, but with a very small coefficient a = 0.06±0.04. We also
find properties of the random walk which are suggestive of a glassy system
already for moderately small dimensions. We also analyze the local structure of
the network of perfect lattices, conjecturing that it is a scale-free network in
all dimensions with a constant scaling exponent of 2.6±0.1.
Comment: 19 pages, 22 figures
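For comparison with the classical families named in the abstract, the packing fractions of A_d and D_d follow directly from their standard lattice data (minimal squared norm 2; fundamental-cell volumes sqrt(d+1) and 2, respectively). A short sketch under those textbook values:

```python
from math import pi, gamma, sqrt

def ball_volume(d):
    """Volume of the unit ball in R^d: pi^(d/2) / Gamma(d/2 + 1)."""
    return pi ** (d / 2) / gamma(d / 2 + 1)

def packing_fraction(d, min_norm, covolume):
    """Sphere-packing fraction of a lattice with given minimal squared
    norm and fundamental-cell volume: phi = V_d * r^d / covolume,
    with packing radius r = sqrt(min_norm) / 2."""
    r = sqrt(min_norm) / 2
    return ball_volume(d) * r ** d / covolume

def phi_A(d):  # A_d: minimal norm 2, covolume sqrt(d+1)
    return packing_fraction(d, 2, sqrt(d + 1))

def phi_D(d):  # D_d: minimal norm 2, covolume 2
    return packing_fraction(d, 2, 2)

for d in (3, 8, 19):
    print(d, phi_A(d), phi_D(d))
```

At d = 3 both families reduce to the face-centred cubic lattice with phi = pi/(3*sqrt(2)) ≈ 0.7405, a useful sanity check; their fractions then fall off quickly with d, which is the decay the random perfect lattices are measured against.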
Study of the 12C+12C fusion reactions near the Gamow energy
The fusion reactions 12C(12C,a)20Ne and 12C(12C,p)23Na have been studied from
E = 2.10 to 4.75 MeV by gamma-ray spectroscopy using a C target with ultra-low
hydrogen contamination. The deduced astrophysical S(E)* factor exhibits new
resonances at E <= 3.0 MeV, in particular a strong resonance at E = 2.14 MeV,
which lies at the high-energy tail of the Gamow peak. The resonance increases
the present non-resonant reaction rate of the alpha channel by a factor of 5
near T = 8x10^8 K. Due to the resonance structure, extrapolation to the Gamow
energy E_G = 1.5 MeV is quite uncertain. An experimental approach based on an
underground accelerator placed in a salt mine, in combination with a
high-efficiency detection setup, could provide data over the full E_G energy range.
Comment: 4 pages, 4 figures, accepted for publication in Phys. Rev. Lett.
Improved treatment of the molecular final-states uncertainties for the KATRIN neutrino-mass measurement
The KArlsruhe TRItium Neutrino experiment (KATRIN) aims to determine the
effective mass of the electron antineutrino via a high-precision measurement of
the tritium beta-decay spectrum in its end-point region. The target
neutrino-mass sensitivity of 0.2 eV / c^2 at 90% C.L. can only be achieved in
the case of high statistics and a good control of the systematic uncertainties.
One key systematic effect originates from the calculation of the molecular
final states of T_2 beta decay. In the first neutrino-mass analyses of KATRIN
the contribution of the uncertainty of the molecular final-states distribution
(FSD) was estimated via a conservative phenomenological approach to be 0.02
eV^2 / c^4. In this paper a new procedure is presented for estimating the
FSD-related uncertainties by considering the details of the final-states
calculation, i.e. the uncertainties of constants, parameters, and functions
used in the calculation as well as its convergence itself as a function of the
basis-set size used in expanding the molecular wave functions. The calculated
uncertainties are directly propagated into the experimental observable, the
squared neutrino mass m_nu^2. With the new procedure the FSD-related
uncertainty is constrained to 0.0013 eV^2 / c^4, for the experimental
conditions of the first KATRIN measurement campaign.
Logic, logical form and the disunity of truth
Monists say that the nature of truth is invariant, whichever sentence you consider; pluralists say that the nature of truth varies between different sets of sentences. The orthodoxy is that logic and logical form favour monism: there must be a single property that is preserved in any valid inference; and any truth-functional complex must be true in the same way as its components. The orthodoxy, I argue, is mistaken. Logic and logical form impose only structural constraints on a metaphysics of truth. Monistic theories are not guaranteed to satisfy these constraints, and there is a pluralistic theory that does so.
The use of e-learning tools in higher education
The use of information technology in the educational process makes it more engaging and more effective, fosters creative thinking, and encourages students toward self-development and independent work. The article explores the potential of e-learning tools for optimizing the learning process in higher education.
A New Elimination Rule for the Calculus of Inductive Constructions
Published in the post-proceedings of TYPES but not actually presented orally at the conference.
In Type Theory, definition by dependently-typed case analysis can be expressed by means of a set of equations (the semantic approach) or by an explicit pattern-matching construction (the syntactic approach). We aim to combine the best of both approaches by extending the pattern-matching construction found in the Coq proof assistant, in order to obtain the expressivity and flexibility of equation-based case analysis while remaining in a syntax-based setting, thus making dependently-typed programming more tractable in the Coq system. We provide a new rule that permits the omission of impossible cases, handles the propagation of inversion constraints, and allows one to derive Streicher's K axiom. We show that subject reduction holds, and sketch a proof of relative consistency.
Algebraic totality, towards completeness
Finiteness spaces constitute a categorical model of Linear Logic (LL) whose
objects can be seen as linearly topologised spaces (a class of topological
vector spaces introduced by Lefschetz in 1942) and morphisms as continuous
linear maps. First, we recall definitions of finiteness spaces and describe
their basic properties deduced from the general theory of linearly topologised
spaces. Then we give an interpretation of LL based on linear algebra. Second,
thanks to separation properties, we can introduce an algebraic notion of
totality candidate in the framework of linearly topologised spaces: a totality
candidate is a closed affine subspace which does not contain 0. We show that
finiteness spaces with totality candidates constitute a model of classical LL.
Finally, we give a barycentric simply typed lambda-calculus, with booleans
and a conditional operator, which can be interpreted in this
model. We prove completeness at type Bool^n for every n by an algebraic method.