Algorithmic statistics: forty years later
Algorithmic statistics has two different (and almost orthogonal) motivations. From the philosophical point of view, it tries to formalize how statistics works and why some statistical models are better than others. Once this notion of a "good model" is introduced, a natural question arises: is it possible that for some piece of data there is no good model? If so, how often do such bad ("non-stochastic") data appear "in real life"?
Another, more technical motivation comes from algorithmic information theory. This theory introduces a notion of the complexity of a finite object (the amount of information in the object); it assigns to every object a number, called its algorithmic complexity (or Kolmogorov complexity). Algorithmic statistics provides a more fine-grained classification: for each finite object a curve is defined that characterizes its behavior. It turns out that several different definitions give (approximately) the same curve.
In this survey we try to provide an exposition of the main results in the field (including full proofs for the most important ones), as well as some historical comments. We assume that the reader is familiar with the main notions of algorithmic information theory (Kolmogorov complexity).
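For orientation, the "curve" mentioned above is commonly the Kolmogorov structure function; a minimal sketch of one standard definition (the survey may use an equivalent variant) is:

% Kolmogorov structure function of a finite string x (one standard formulation;
% details such as plain vs. prefix complexity vary between papers).
% K(S) is the Kolmogorov complexity of a finite set S of strings containing x.
\[
  h_x(\alpha) \;=\; \min\bigl\{\, \log_2 |S| \;:\; x \in S,\ K(S) \le \alpha \,\bigr\}.
\]
% Sets S attaining the minimum are the candidate "models" for x; x has a good
% model at complexity level \alpha when K(S) + \log_2 |S| is close to K(x).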
Model pruning in depth completion CNNs for forestry robotics with simulated annealing
In this article, we present an analysis of model compression in depth completion neural networks for forestry robotics, considering the increasing demands of real-time autonomous solutions. Specifically, we implement a single-state simulated annealing meta-heuristic for model pruning in the ENet and MSG-CHN neural networks for depth completion. We run experiments on three different datasets and analyze how different levels of pruning affect the accuracy and speed of the models. Experimental tests show that increasing sparsity has different effects depending on the neural network and dataset. ENet shows a negligible difference in accuracy and would greatly benefit from lowering the number of FLOPs, while MSG-CHN displays inconsistent behavior depending on the dataset. This suggests that while both models benefit from model compression techniques, the optimal sparsity level depends on the environment, dataset and neural network.
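As an illustration of the kind of search the abstract describes, the sketch below runs a single-state simulated annealing loop over per-layer pruning ratios. It is not the authors' implementation: the cost function, neighbourhood move and cooling schedule are assumptions, and a real run would prune the ENet or MSG-CHN weights and re-evaluate depth-completion error on a validation set.

# Minimal single-state simulated annealing over per-layer pruning ratios
# (illustrative sketch only; names and the toy cost are hypothetical).
import math
import random
from typing import Callable, List

def anneal_sparsity(num_layers: int,
                    cost: Callable[[List[float]], float],
                    steps: int = 500,
                    t_start: float = 1.0,
                    t_end: float = 0.01) -> List[float]:
    """Search for per-layer sparsity levels in [0, 0.9] that minimise `cost`."""
    state = [0.5] * num_layers                    # current pruning ratio per layer
    best, best_cost = state[:], cost(state)
    cur_cost = best_cost
    for i in range(steps):
        # Geometric cooling from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (i / max(steps - 1, 1))
        cand = state[:]
        k = random.randrange(num_layers)          # perturb one layer's ratio
        cand[k] = min(0.9, max(0.0, cand[k] + random.uniform(-0.1, 0.1)))
        c = cost(cand)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if c < cur_cost or random.random() < math.exp((cur_cost - c) / t):
            state, cur_cost = cand, c
            if c < best_cost:
                best, best_cost = cand[:], c
    return best

if __name__ == "__main__":
    # Toy cost: accuracy loss grows quadratically away from 60% sparsity, while
    # retained FLOPs (unpruned weights) are mildly penalised. A real cost would
    # prune the network and measure depth-completion error and runtime.
    toy_cost = lambda s: sum((x - 0.6) ** 2 for x in s) + 0.1 * sum(1.0 - x for x in s)
    print(anneal_sparsity(num_layers=8, cost=toy_cost, steps=200))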
Depth, Highness and DNR Degrees
A sequence is Bennett deep [5] if every recursive approximation of the
Kolmogorov complexity of its initial segments from above satisfies that the difference
between the approximation and the actual value of the Kolmogorov complexity of
the initial segments dominates every constant function. For different lower bounds r on this difference between the approximation and the actual value of the initial-segment complexity, we study which properties the corresponding r(n)-deep sets have. We prove that for r(n) = εn, depth coincides with highness on the Turing degrees. For smaller choices of r, i.e., for r any recursive order function, we show that depth implies either highness or diagonal non-recursiveness (DNR). In particular, for left-r.e. sets, order depth already implies highness. As a corollary, we obtain that weakly-useful sets are
either high or DNR. We prove that not all deep sets are high by constructing a low
order-deep set.
Bennett's depth is defined using prefix-free Kolmogorov complexity. We show that
if one replaces prefix-free by plain Kolmogorov complexity in Bennett's depth definition,
one obtains a notion which no longer satisfies the slow growth law (which
stipulates that no shallow set truth-table computes a deep set); however, under this
notion, random sets are not deep (at the unbounded recursive order magnitude). We
improve Bennett's result that recursive sets are shallow by proving all K-trivial sets
are shallow; our result is close to optimal.
For Bennett's depth, the magnitude of compression improvement has to be achieved
almost everywhere on the set. Bennett observed that relaxing to infinitely often is
meaningless because every recursive set is infinitely often deep. We propose an alternative
infinitely often depth notion that does not suffer from this limitation (called i.o. depth). We show that every hyperimmune degree contains an i.o. deep set of magnitude εn, and construct a Π01 class where every member is an i.o. deep set of magnitude εn. We prove that every non-recursive, non-DNR hyperimmune-free set is i.o. deep of constant magnitude, and that every non-recursive many-one degree contains such a set.
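In symbols, one common way to make the opening definition precise (using time-bounded prefix-free complexity; the paper's exact formulation may differ in inessential details) is the following sketch.

% Sketch of r(n)-depth. K is prefix-free Kolmogorov complexity, X\restriction n
% is the length-n initial segment of X, and K^t is the approximation of K from
% above obtained by admitting only programs that halt within a recursive time bound t.
\[
  X \text{ is } r\text{-deep} \;\iff\;
  \forall\, \text{recursive } t \;\; \forall^{\infty} n :\;
  K^{t}(X\restriction n) - K(X\restriction n) \;\ge\; r(n).
\]
% Bennett's original notion corresponds to the difference dominating every
% constant function, i.e. to r ranging over the constant functions.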
Techniques for Arbuscular Mycorrhiza Inoculum Reduction
It is well established that arbuscular mycorrhizal (AM) fungi can play a significant role in sustainable crop production and environmental conservation. With the increasing awareness of the ecological significance of mycorrhizas and their diversity, research needs to be directed away from simple records of their occurrence or casual speculation of their function (Smith and Read 1997). Rather, the need is for empirical studies and investigations of the quantitative aspects of the distribution of different types and their contribution to the function of ecosystems.
There is no such thing as a fungal effect or a plant effect, but there is an interaction between both symbionts. This results from the AM fungi and plant community size and structure, soil and climatic conditions, and the interplay between all these factors (Kahiluoto et al. 2000). Consequently, it is readily understood that it is the problems associated with methodology that limit our understanding of the functioning and effects of AM fungi within field communities.
Given the ubiquitous presence of AM fungi, a major constraint to the evaluation of the activity of AM colonisation has been the need to account for the indigenous (native) soil inoculum. This has to be controlled (i.e. reduced or eliminated) if we are to obtain a true control treatment for analysis of arbuscular mycorrhizas in natural substrates. There are various procedures possible for achieving such an objective, and the purpose of this chapter is to provide details of a number of techniques and present some evaluation of their advantages and disadvantages.
Although there have been a large number of experiments to investigate the effectiveness of different sterilization procedures for reducing pathogenic soil fungi, little information is available on their impact on beneficial organisms such as AM fungi. Furthermore, some of the techniques have been shown to affect physical and chemical soil characteristics as well as eliminate soil microorganisms that can interfere with the development of mycorrhizas, and this creates difficulties in the interpretation of results simply in terms of possible mycorrhizal activity.
An important subject is the differentiation of methods that involve sterilization from those focussed on indigenous inoculum reduction. Soil sterilization aims to destroy or eliminate microbial cells while maintaining the existing chemical and physical characteristics of the soil (Wolf and Skipper 1994). Consequently, it is often used for experiments focussed on specific AM fungi, or to establish a negative control in some other types of study. In contrast, the purpose of inoculum reduction techniques is to create a perturbation that will interfere with mycorrhizal formation, although not necessarily eliminating any component group within the inoculum. Such an approach allows the establishment of different degrees of mycorrhizal formation between treatments and the study of relative effects.
Frequently the basic techniques used to achieve complete sterilization or just an inoculum reduction may be similar, but the desired outcome is accomplished by adjusting the dosage or intensity of the treatment. The ultimate choice of methodology for establishing an adequate non-mycorrhizal control depends on the design of the particular experiments, the facilities available and the amount of soil requiring treatment.
Production of Medical Radioisotopes with High Specific Activity in Photonuclear Reactions with Beams of High Intensity and Large Brilliance
We study the production of radioisotopes for nuclear medicine in (γ,n) photonuclear reactions or (γ,γ') photoexcitation reactions with high-flux, small-diameter, small-bandwidth γ beams produced by Compton back-scattering of laser light from relativistic brilliant electron beams. We compare them to (ion, np) reactions with ions (p, d, α) from particle accelerators like cyclotrons and to (n,γ) or (n,f) reactions from nuclear reactors. For photonuclear reactions with a narrow γ beam, the energy deposition in the target can be managed by using a stack of thin target foils or wires, hence avoiding direct stopping of the Compton and pair electrons (positrons).
Isomer production via specially selected cascades makes it possible to achieve high specific activity in multiple excitations, where no back-pumping of the isomer to the ground state occurs. We discuss in detail many specific radioisotopes for diagnostic and therapy applications.
Photonuclear reactions with γ beams allow certain radioisotopes, e.g. of Sc, Ti, Cu, Pd, Sn, Er, Pt or Ac, to be produced with higher specific activity and/or more economically than with classical methods. This will open the way for completely new clinical applications of radioisotopes. For example, a Pt isotope could be used to verify the patient's response to chemotherapy with platinum compounds before a complete course of treatment is performed. Innovative isotopes of Sc, Cu and Ac could also be produced for the first time in sufficient quantities for large-scale application in targeted radionuclide therapy.
QED3 theory of underdoped high temperature superconductors
Low-energy theory of d-wave quasiparticles coupled to fluctuating vortex
loops that describes the loss of phase coherence in a two dimensional d-wave
superconductor at T=0 is derived. The theory has the form of 2+1 dimensional
quantum electrodynamics (QED3), and is proposed as an effective description of
the T=0 superconductor-insulator transition in underdoped cuprates. The
coupling constant ("charge") in this theory is proportional to the dual order
parameter of the XY model, which is assumed to be describing the quantum
fluctuations of the phase of the superconducting order parameter. The principal
result is that the destruction of phase coherence in d-wave superconductors
typically, and immediately, leads to antiferromagnetism. The transition can be
understood in terms of the spontaneous breaking of an approximate "chiral"
SU(2) symmetry, which may be discerned at low enough energies in the standard
d-wave superconductor. The mechanism of the symmetry breaking is analogous to
the dynamical mass generation in the QED3, with the "mass" here being
proportional to staggered magnetization. Other insulating phases that break
chiral symmetry include the translationally invariant "d+ip" and "d+is"
insulators, and various one dimensional charge-density and spin-density waves.
The theory offers an explanation for the rounded d-wave-like dispersion seen in
ARPES experiments on Ca2CuO2Cl2 (F. Ronning et al., Science 282, 2067 (1998)).
Bioavailability in soils
The consumption of locally-produced vegetables by humans may be an important exposure pathway for soil contaminants in many urban settings and for agricultural land use. Hence, prediction of metal and metalloid uptake by vegetables from contaminated soils is an important part of the Human Health Risk Assessment procedure. The behaviour of metals (cadmium, chromium, cobalt, copper, mercury, molybdenum, nickel, lead and zinc) and metalloids (arsenic, boron and selenium) in contaminated soils depends to a large extent on the intrinsic charge, valence and speciation of the contaminant ion, and on soil properties such as pH, redox status and contents of clay and/or organic matter. However, the chemistry and behaviour of the contaminant in soil alone cannot predict soil-to-plant transfer. Root uptake, root selectivity, ion interactions, rhizosphere processes, leaf uptake from the atmosphere, and plant partitioning are important processes that ultimately govern the accumulation of metals and metalloids in edible vegetable tissues. Mechanistic models to accurately describe all these processes have not yet been developed, let alone validated under field conditions. Hence, to estimate risks from vegetable consumption, empirical models have been used to correlate concentrations of metals and metalloids in contaminated soils, soil physico-chemical characteristics, and concentrations of elements in vegetable tissues. These models should only be used within the bounds of their calibration, and often need to be re-calibrated or validated using local soil and environmental conditions on a regional or site-specific basis.
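As an illustration of the empirical models referred to above, a common log-linear soil-to-plant transfer form found in this literature (the covariates shown are assumptions for illustration, not necessarily those used in this chapter) is:

% Illustrative empirical soil-to-plant transfer regression (assumed form).
\[
  \log_{10} C_{\mathrm{plant}} \;=\; a \;+\; b\,\log_{10} C_{\mathrm{soil}}
  \;+\; c\,\mathrm{pH} \;+\; d\,\log_{10}(\mathrm{OM}),
\]
% where C_plant and C_soil are the metal or metalloid concentrations in edible
% tissue and in soil, OM is soil organic matter content, and the coefficients
% a-d are fitted separately per element, crop and region.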
Performance of CMS muon reconstruction in pp collision events at sqrt(s) = 7 TeV
The performance of muon reconstruction, identification, and triggering in CMS
has been studied using 40 inverse picobarns of data collected in pp collisions
at sqrt(s) = 7 TeV at the LHC in 2010. A few benchmark sets of selection
criteria covering a wide range of physics analysis needs have been examined.
For all considered selections, the efficiency to reconstruct and identify a
muon with a transverse momentum pT larger than a few GeV is above 95% over the
whole region of pseudorapidity covered by the CMS muon system, abs(eta) < 2.4,
while the probability to misidentify a hadron as a muon is well below 1%. The
efficiency to trigger on single muons with pT above a few GeV is higher than
90% over the full eta range, and typically substantially better. The overall
momentum scale is measured to a precision of 0.2% with muons from Z decays. The
transverse momentum resolution varies from 1% to 6% depending on pseudorapidity
for muons with pT below 100 GeV and, using cosmic rays, it is shown to be
better than 10% in the central region up to pT = 1 TeV. Observed distributions
of all quantities are well reproduced by the Monte Carlo simulation.
X-ray emission from the Sombrero galaxy: discrete sources
We present a study of discrete X-ray sources in and around the
bulge-dominated, massive Sa galaxy, Sombrero (M104), based on new and archival
Chandra observations with a total exposure of ~200 ks. With a detection limit
of L_X = 1E37 erg/s and a field of view covering a galactocentric radius of ~30
kpc (11.5 arcminute), 383 sources are detected. Cross-correlation with Spitler
et al.'s catalogue of Sombrero globular clusters (GCs) identified from HST/ACS
observations reveals 41 X-ray sources in GCs, presumably low-mass X-ray
binaries (LMXBs). We quantify the differential luminosity functions (LFs) for
both the detected GC and field LMXBs, whose power-law indices (~1.1 for the
GC-LF and ~1.6 for field-LF) are consistent with previous studies for
elliptical galaxies. With precise sky positions of the GCs without a detected
X-ray source, we further quantify, through a fluctuation analysis, the GC LF at
fainter luminosities down to 1E35 erg/s. The derived index rules out a
faint-end slope flatter than 1.1 at a 2 sigma significance, contrary to recent
findings in several elliptical galaxies and the bulge of M31. On the other
hand, the 2-6 keV unresolved emission places a tight constraint on the field
LF, implying a flattened index of ~1.0 below 1E37 erg/s. We also detect 101
sources in the halo of Sombrero. The presence of these sources cannot be
interpreted as galactic LMXBs whose spatial distribution empirically follows
the starlight. Their number is also higher than the expected number of cosmic
AGNs (52+/-11 [1 sigma]) whose surface density is constrained by deep X-ray
surveys. We suggest that either the cosmic X-ray background is unusually high
in the direction of Sombrero, or a distinct population of X-ray sources is
present in the halo of Sombrero.Comment: 11 figures, 5 tables, ApJ in pres