Numerical Representations of Acceptance
Accepting a proposition means that our confidence in this proposition is
strictly greater than our confidence in its negation. This paper investigates
the subclass of confidence-expressing uncertainty measures that capture this
idea of acceptance, which we call acceptance functions. Due to the monotonicity
property of confidence measures, the acceptance of a proposition entails the
acceptance of any of its logical consequences. In agreement with the idea that
a belief set (in the sense of Gardenfors) must be closed under logical
consequence, it is also required that the separate acceptance of two
consequence, it is also required that the separate acceptance o two
propositions entail the acceptance of their conjunction. Necessity (and
possibility) measures agree with this view of acceptance while probability and
belief functions generally do not. General properties of acceptance functions
are established. The motivation behind this work is the investigation of a
setting for belief revision more general than the one proposed by Alchourron,
Gardenfors and Makinson, in connection with the notion of conditioning.
Comment: Appears in Proceedings of the Eleventh Conference on Uncertainty in
Artificial Intelligence (UAI1995).
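As a compact restatement of the conditions just described (the symbol g for
the underlying confidence measure is our notation, not necessarily the
paper's):

\[ A \text{ is accepted} \iff g(A) > g(\neg A) \]
\[ \text{(monotonicity)} \quad A \models B \;\Rightarrow\; g(A) \le g(B) \]
\[ \text{(conjunctive closure)} \quad A, B \text{ accepted} \;\Rightarrow\; A \wedge B \text{ accepted} \]

Monotonicity propagates acceptance to logical consequences: if A is accepted
and \( A \models B \), then \( g(\neg B) \le g(\neg A) < g(A) \le g(B) \), so
B is accepted.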
SU(3) lattice gauge theory with a mixed fundamental and adjoint plaquette action: Lattice artefacts
We study the four-dimensional SU(3) gauge model with a fundamental and an
adjoint plaquette term in the action. We investigate whether corrections to
scaling can be reduced by using a negative value of the adjoint coupling. To
this end, we have studied the finite temperature phase transition, the static
potential and the mass of the 0^{++} glueball. In order to compute these
quantities we have implemented variance reduced estimators that have been
proposed recently. Corrections to scaling are analysed in dimensionless
combinations such as T_c/\sqrt{\sigma} and m_{0^{++}}/T_c. We find that indeed
the lattice artefacts in e.g. m_{0^{++}}/T_c can be reduced considerably
compared with the pure Wilson (fundamental) gauge action at the same lattice
spacing. Comment: 36 pages, 12 figures.
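For orientation, the mixed fundamental-adjoint action referred to in the title
is conventionally written as follows (a standard parameterization;
normalizations may differ from the paper's own conventions):

\[ S = \beta_F \sum_P \Big( 1 - \tfrac{1}{3}\, \mathrm{Re}\, \mathrm{Tr}\, U_P \Big)
     + \beta_A \sum_P \Big( 1 - \tfrac{1}{9}\, \big| \mathrm{Tr}\, U_P \big|^2 \Big), \]

where \( U_P \) is the ordered product of SU(3) link matrices around the
plaquette P and the adjoint trace obeys
\( \mathrm{Tr}_A\, U = |\mathrm{Tr}\, U|^2 - 1 \); the negative-\( \beta_A \)
regime is the one explored here.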
Random numbers from the tails of probability distributions using the transformation method
The speed of many one-line transformation methods for the production of, for
example, Levy alpha-stable random numbers, which generalize Gaussian ones, and
Mittag-Leffler random numbers, which generalize exponential ones, is very high
and satisfactory for most purposes. However, for the class of decreasing
probability densities, fast rejection implementations like the Ziggurat by
Marsaglia and Tsang promise a significant speed-up if it is possible to
complement them with a method that samples the tails of the infinite support.
This requires the fast generation of random numbers greater than or smaller
than a given value. We present a method to achieve this, and also to generate random
numbers within any arbitrary interval. We demonstrate the method by showing
the properties of the transform maps of the above-mentioned distributions,
with stable and geometric stable random numbers, as used in the stochastic
solution of the space-time fractional diffusion equation, as examples.
Comment: 17 pages, 7 figures, submitted to a peer-reviewed journal.
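As a minimal sketch of the interval-restricted transformation (inverse-CDF)
method the abstract describes, here it is for the exponential distribution,
whose inverse CDF has a closed form; all names are ours, and the paper's
actual transform maps for stable and Mittag-Leffler variates are not
reproduced:

    import math
    import random

    def sample_exponential_interval(lam, lo=0.0, hi=float("inf")):
        """Draw X ~ Exp(lam) conditioned on lo <= X <= hi.

        Inverse-transform sampling restricted to an interval: draw U
        uniformly on (F(lo), F(hi)) and return F^{-1}(U), where
        F(x) = 1 - exp(-lam * x).
        """
        F_lo = 1.0 - math.exp(-lam * lo)
        F_hi = 1.0 if math.isinf(hi) else 1.0 - math.exp(-lam * hi)
        u = random.uniform(F_lo, F_hi)
        return -math.log(1.0 - u) / lam  # F^{-1}(u)

    # Tail sampling (X > 5), e.g. to complement a Ziggurat-type core sampler:
    x = sample_exponential_interval(1.0, lo=5.0)

The same reweighting of the uniform input works for any distribution with a
computable CDF and inverse CDF, which is the heart of the method.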
Music and Speech in Auditory Interfaces: When is One Mode More Appropriate Than the Other?
A number of experiments carried out using non-speech auditory interfaces are reviewed, and the advantages and disadvantages of each are discussed. The possible advantages of using non-speech audio media such as music are examined – the richness of the representations possible, the aesthetic appeal, and the potential of such interfaces to handle abstraction and consistency across the interface.
Nonparametric Weight Initialization of Neural Networks via Integral Representation
A new initialization method for hidden parameters in a neural network is
proposed. Derived from the integral representation of the neural network, a
nonparametric probability distribution of hidden parameters is introduced. In
this proposal, hidden parameters are initialized by samples drawn from this
distribution, and output parameters are fitted by ordinary linear regression.
Numerical experiments show that backpropagation with the proposed
initialization converges faster than with uniformly random initialization. It
is also shown that the proposed method achieves sufficient accuracy by itself,
without backpropagation, in some cases. Comment: For ICLR2014, revised into 9
pages; revised into 12 pages (with supplements).
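The two-step scheme the abstract describes, sample hidden parameters and then
fit the output layer by linear regression, can be sketched as follows; the
paper derives the sampling distribution from an integral representation of the
network, and the data-scaled Gaussian below is only our placeholder for it:

    import numpy as np

    def init_and_fit(X, y, n_hidden=100, seed=None):
        """Initialize hidden parameters by sampling, then fit the output
        layer by ordinary linear regression (no backpropagation).

        The sampling distribution here is a data-scaled Gaussian, an
        assumption standing in for the paper's nonparametric,
        integral-representation-derived distribution.
        """
        rng = np.random.default_rng(seed)
        d = X.shape[1]
        scale = 1.0 / np.maximum(X.std(axis=0), 1e-8)  # placeholder scaling
        W = rng.normal(size=(n_hidden, d)) * scale     # hidden weights
        b = rng.normal(size=n_hidden)                  # hidden biases
        H = np.tanh(X @ W.T + b)                       # hidden activations
        beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # output weights
        return W, b, beta

Backpropagation can then start from (W, b, beta), or be skipped entirely when
the fitted network is already accurate enough.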
CoFeD: A visualisation framework for comparative quality evaluation
Evaluation for the purpose of selection can be a challenging task particularly when there is a plethora of choices available. Short-listing, comparisons and eventual choice(s) can be aided by visualisation techniques. In this paper we use Feature Analysis, Tabular and Tree Representations and Composite Features Diagrams (CFDs) for profiling user requirements and for top-down profiling and evaluation of items (methods, tools, techniques, processes and so on) under evaluation. The resulting framework CoFeD enables efficient visual comparison and initial short-listing. The second phase uses bottom-up quantitative evaluation which aids the elimination of the weakest items and hence the effective selection of the most appropriate item.
The versatility of the framework is illustrated by a case-study comparison and evaluation of two agile methodologies. The paper concludes with limitations and indications of further work.
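The bottom-up quantitative phase can be pictured as a weighted-score
comparison across the profiled features; the scoring rule below is an
illustrative stand-in of ours, not CoFeD's actual metric:

    def weighted_scores(items, weights):
        """Rank candidate items by a weighted sum of per-feature scores.

        items:   mapping item name -> {feature: score}
        weights: mapping feature -> importance weight
        Illustrative only; CoFeD's actual aggregation is not reproduced.
        """
        return sorted(
            ((sum(weights[f] * s for f, s in feats.items()), name)
             for name, feats in items.items()),
            reverse=True,
        )

    # Example with made-up numbers for two candidate methodologies:
    ranking = weighted_scores(
        {"MethodA": {"docs": 3, "tooling": 4, "agility": 5},
         "MethodB": {"docs": 5, "tooling": 2, "agility": 4}},
        {"docs": 0.2, "tooling": 0.3, "agility": 0.5},
    )

The weakest-scoring items are eliminated first, narrowing the short-list to
the most appropriate choice.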
Strong Double Higgs Production at the LHC
The hierarchy problem and the electroweak data, together, provide a plausible
motivation for considering a light Higgs emerging as a pseudo-Goldstone boson
from a strongly-coupled sector. In that scenario, the rates for Higgs
production and decay differ significantly from those in the Standard Model.
However, one genuine strong coupling signature is the growth with energy of the
scattering amplitudes among the Goldstone bosons, that is, the longitudinally
polarized vector bosons as well as the Higgs boson itself. The rate for double Higgs
production in vector boson fusion is thus enhanced with respect to its
negligible rate in the SM. We study that reaction in pp collisions, where the
production of two Higgs bosons at high pT is associated with the emission of
two forward jets. We concentrate on the decay mode hh -> WW^(*)WW^(*) and study
the semi-leptonic decay chains of the W's with 2, 3 or 4 leptons in the final
states. While the 3 lepton final states are the most relevant and can lead to a
3 sigma signal significance with 300 fb^{-1} collected at a 14 TeV LHC, the two
same-sign lepton final states provide complementary information. We also
comment on the prospects for improving the detectability of double Higgs
production at the foreseen LHC energy and luminosity upgrades. Comment: 54
pages, 26 figures. v2: typos corrected, a few comments and one table added.
Version published in JHEP.
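The energy growth mentioned above is commonly parameterized by couplings a and
b that rescale the SM hVV and hhVV vertices; in that notation the
hard-scattering amplitude behaves at large s as (a standard composite-Higgs
result quoted from the literature, not verbatim from this abstract):

\[ \mathcal{A}(W_L W_L \to hh) \;\simeq\; \frac{s}{v^2}\,\big(b - a^2\big), \]

so the Standard Model point \( a = b = 1 \) cancels the growth, while a
pseudo-Goldstone Higgs has \( b \neq a^2 \) and an enhanced double-Higgs rate.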
A multi-resolution, non-parametric, Bayesian framework for identification of spatially-varying model parameters
This paper proposes a hierarchical, multi-resolution framework for the
identification of model parameters and their spatial variability from noisy
measurements of the response or output. Such parameters are frequently
encountered in PDE-based models and correspond to quantities such as density or
pressure fields, elasto-plastic moduli and internal variables in solid
mechanics, conductivity fields in heat diffusion problems, permeability fields
in fluid flow through porous media, etc. The proposed model has all the
advantages of traditional Bayesian formulations such as the ability to produce
measures of confidence for the inferences made and to provide not only
predictive estimates but also quantitative measures of the predictive
uncertainty. In contrast to existing approaches it utilizes a parsimonious,
non-parametric formulation that favors sparse representations and whose
complexity can be determined from the data. The proposed framework is
non-intrusive and makes use of a sequence of forward solvers operating at
various resolutions. As a result, inexpensive, coarse solvers are used to
identify the most salient features of the unknown field(s) which are
subsequently enriched by invoking solvers operating at finer resolutions. This
leads to significant computational savings particularly in problems involving
computationally demanding forward models but also improvements in accuracy. It
is based on a novel, adaptive scheme built on Sequential Monte Carlo sampling,
which is embarrassingly parallelizable and circumvents the slow mixing
encountered in Markov Chain Monte Carlo schemes.
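The coarse-to-fine Sequential Monte Carlo pass can be sketched as the
reweight-resample skeleton below; the paper's bridging and enrichment details
are not reproduced, and all function names are our assumptions:

    import numpy as np

    def multiresolution_smc(solvers, log_lik, prior_sample,
                            n_particles=500, seed=None):
        """Propagate a particle approximation of the posterior from a
        coarse forward solver to finer ones.

        solvers:      list of forward solvers, coarsest first
        log_lik:      log_lik(prediction) -> log-likelihood of the data
        prior_sample: prior_sample(rng, n) -> (n, d) parameter draws
        """
        rng = np.random.default_rng(seed)
        theta = prior_sample(rng, n_particles)
        prev = np.zeros(n_particles)       # log-lik at previous resolution
        for solve in solvers:              # cheap, coarse solvers first
            cur = np.array([log_lik(solve(t)) for t in theta])
            logw = cur - prev              # incremental importance weights
            w = np.exp(logw - logw.max())
            w /= w.sum()
            idx = rng.choice(n_particles, size=n_particles, p=w)
            theta, prev = theta[idx], cur[idx]
        return theta                       # samples at finest resolution

Each likelihood evaluation is independent across particles, which is what
makes the scheme embarrassingly parallelizable.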