Neutrino masses and Neutrinoless Double Beta Decay: Status and expectations
Two outstanding questions puzzle the world of neutrino physics:
the possible Majorana nature of neutrinos and their absolute mass scale. Direct
neutrino mass measurements and neutrinoless double beta decay (0nuDBD) are the
present strategy to solve the puzzle. Neutrinoless double beta decay violates
lepton number by two units and can occur only if neutrinos are massive
Majorana particles. A positive observation would therefore necessarily imply a
new regime of physics beyond the standard model, providing fundamental
information on the nature of the neutrinos and on their absolute mass scale.
After the observation of neutrino oscillations, and given the present knowledge
of neutrino masses and mixing parameters, it may actually be possible to observe
0nuDBD at a neutrino mass scale in the range 10-50 meV. This is the real
challenge faced by a number of newly proposed projects. The present status and
future perspectives of neutrinoless double-beta decay experimental searches are
reviewed. The most important parameters contributing to the experimental
sensitivity are outlined. A short discussion on nuclear matrix element
calculations is also given. Complementary measurements to assess the absolute
neutrino mass scale (cosmology and single beta decays) are also discussed.

Comment: Presented at the "European Strategy for Future Neutrino Physics"
Workshop, CERN October 1-3 200
Challenges in Double Beta Decay
After nearly 80 years since its existence was first conjectured, the neutrino
still escapes our insight: the mass and the true nature (Majorana or Dirac) of
this particle are still unknown. In the past ten years, neutrino oscillation
experiments have finally provided the incontrovertible evidence that neutrinos
mix and have finite masses. These results represent the strongest demonstration
that the Standard Model of electroweak interactions is incomplete and that new
Physics beyond it must exist. None of these experimental efforts could however
shed light on some of the basic features of neutrinos. Indeed, the absolute scale
and ordering of the masses of the three generations as well as charge
conjugation and lepton number conservation properties are still unknown. In
this scenario, a unique role is played by the Neutrinoless Double Beta Decay
searches: these experiments can probe lepton number conservation, investigate
the Dirac/Majorana nature of the neutrinos and their absolute mass scale
(hierarchy problem) with unprecedented sensitivity. Today Neutrinoless Double
Beta Decay faces a new era where large scale experiments with a sensitivity
approaching the so-called degenerate-hierarchy region are nearly ready to start
and where the challenge for the near future is the construction of detectors
characterized by a tonne-scale size and an incredibly low background, to fully
probe the inverted-hierarchy region. A number of newly proposed projects have
taken up this challenge. These are based either on large expansions of the
present experiments or on new ideas to improve the technical performance and/or
reduce the background contributions. In this paper, a review of the most relevant
ongoing experiments is given. The most relevant parameters contributing to the
experimental sensitivity are discussed and a critical comparison of the future
projects is proposed.

Comment: 70 pages, 16 figures, 6 tables. arXiv admin note: text overlap with
arXiv:1109.5515, arXiv:hep-ex/0501010, arXiv:0910.2994 by other author
Expectations for a new calorimetric neutrino mass experiment
A large calorimetric neutrino mass experiment using thermal detectors is
expected to play a crucial role in the challenge for directly assessing the
neutrino mass. We discuss and compare here two approaches to the estimation of
the experimental sensitivity of such an experiment. The first method uses an
analytic formulation and allows one to readily obtain a sensible estimate over a
wide range of experimental configurations. The second method is based on a
frequentist Monte Carlo technique and is more precise and reliable. The
Monte Carlo approach is then exploited to study the main sources of systematic
uncertainties peculiar to calorimetric experiments. Finally, the tools are
applied to investigate the optimal experimental configuration for a
calorimetric experiment with rhenium-based thermal detectors.

Comment: 25 pages, 16 figures
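As a generic illustration of the two approaches the abstract compares (toy counting statistics only; the background level, confidence level, and observable are assumptions, not the paper's actual detector model or formulas), one can contrast an analytic sensitivity estimate with a frequentist Monte Carlo one:

```python
# Toy comparison of an analytic vs. a frequentist Monte Carlo sensitivity
# estimate.  Toy observable: the 90% C.L. upward fluctuation of a counting
# experiment with a known expected background b (all values assumed).
import random

random.seed(42)
b = 100.0                      # expected background counts (assumed)

# 1) Analytic estimate: Gaussian counting statistics give a 90% one-sided
#    fluctuation of about 1.28 standard deviations, i.e. 1.28 * sqrt(b).
analytic = 1.28 * b ** 0.5

# 2) Frequentist Monte Carlo: simulate many background-only experiments
#    and read off the 90th percentile of the fluctuation above the mean.
toys = sorted(random.gauss(b, b ** 0.5) - b for _ in range(20000))
monte_carlo = toys[int(0.9 * len(toys))]

print(f"analytic: {analytic:.2f}  Monte Carlo: {monte_carlo:.2f}")
```

In this simple Gaussian toy the two estimates agree closely; the value of the Monte Carlo approach, as the abstract notes, is that it remains reliable once systematic effects with no analytic form are folded in.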
Compact Markov-modulated models for multiclass trace fitting
Markov-modulated Poisson processes (MMPPs) are stochastic models for fitting empirical traces for simulation, workload characterization and queueing analysis purposes. In this paper, we develop the first counting process fitting algorithm for the marked MMPP (M3PP), a generalization of the MMPP for modeling traces with events of multiple types. We initially explain how to fit two-state M3PPs to empirical traces of counts. We then propose a novel form of composition, called interposition, which enables the approximate superposition of several two-state M3PPs without incurring state-space explosion. Compared to exact superposition, where the state space grows exponentially in the number of composed processes, in interposition the state space grows linearly in the number of composed M3PPs. Experimental results indicate that the proposed interposition methodology provides accurate results on artificial and real-world traces, with a significantly smaller state space than superposed processes.
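The state-space argument above can be made concrete with a toy size comparison (the linear growth is modeled here as 2k states for k composed two-state M3PPs, an assumption for illustration; the paper's exact construction may use a different constant):

```python
# Toy illustration of the state-space sizes from the abstract (sizes only,
# not the fitting algorithm): exact superposition of k two-state M3PPs
# lives on the Kronecker-product state space (2**k states), while
# interposition keeps the state space linear in k (taken as 2*k here).

def superposition_states(k, states_per_process=2):
    # Exact superposition: product of the component state spaces.
    return states_per_process ** k

def interposition_states(k, states_per_process=2):
    # Interposition: state count grows additively with the processes.
    return states_per_process * k

for k in (2, 5, 10):
    print(f"k={k}: superposition={superposition_states(k)}, "
          f"interposition={interposition_states(k)}")
```

Already at k = 10 the exact superposition needs 1024 states against 20 for the linear composition, which is the gap the abstract's interposition technique exploits.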
Deriving item features relevance from collaborative domain knowledge
An item-based recommender system works by computing a similarity between
items, which can exploit past user interactions (collaborative filtering) or
item features (content-based filtering). Collaborative algorithms have been
proven to achieve better recommendation quality than content-based algorithms
in a variety of scenarios, being more effective in modeling user behaviour.
However, they cannot be applied when items have no interactions at all, i.e.
cold start items. Content-based algorithms, which are applicable to cold start
items, often require a lot of feature engineering in order to generate useful
recommendations. This issue is specifically relevant as the content descriptors
become large and heterogeneous. The focus of this paper is on how to use a
collaborative model's domain-specific knowledge to build a wrapper feature
weighting method which embeds collaborative knowledge in a content-based
algorithm. We present a comparative study of different state-of-the-art
algorithms and present a more general model. This machine learning approach to
feature weighting shows promising results and high flexibility.
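As a minimal sketch of the wrapper idea (toy data and a plain least-squares objective, both assumptions; not the paper's actual algorithm), one can learn per-feature weights so that a weighted content-based similarity reproduces a given collaborative similarity, making the weights usable for cold-start items:

```python
# Toy wrapper feature weighting: fit per-feature weights by gradient
# descent so that a weighted feature-overlap similarity matches target
# collaborative similarities.  Data and objective are illustrative.

# Binary feature vectors for 3 items over 4 features (assumed data).
features = [
    [1, 0, 1, 0],
    [1, 1, 0, 0],
    [0, 1, 1, 1],
]
# Collaborative similarities for item pairs (i, j), i < j (assumed data).
collab_sim = {(0, 1): 0.9, (0, 2): 0.2, (1, 2): 0.4}

n_feat = len(features[0])
w = [0.5] * n_feat           # initial feature weights
lr = 0.05                    # gradient-descent step size

def content_sim(i, j, w):
    """Weighted overlap of the features shared by items i and j."""
    return sum(w[f] * features[i][f] * features[j][f] for f in range(n_feat))

# Minimise the squared error between content and collaborative similarity.
for _ in range(500):
    for (i, j), target in collab_sim.items():
        err = content_sim(i, j, w) - target
        for f in range(n_feat):
            if features[i][f] and features[j][f]:
                w[f] -= lr * 2 * err

for (i, j), target in collab_sim.items():
    print(f"pair {(i, j)}: content={content_sim(i, j, w):.2f} "
          f"collab={target:.2f}")
```

The learned weights then transfer to cold-start items through the content similarity alone, which is the point of embedding collaborative knowledge in a content-based algorithm.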