Experimental Requirements to Determine the Neutrino Mass Hierarchy Using Reactor Neutrinos
This paper presents experimental requirements to determine the neutrino mass
hierarchy using reactor neutrinos. The detector shall be located at a baseline
around 58 km from the reactor(s) to precisely measure the energy spectrum of
electron antineutrinos (ν̄_e). By applying Fourier cosine and sine transforms
to the L/E spectrum, features of the neutrino mass hierarchy can be extracted
from the |Δm²₃₁| and |Δm²₃₂| oscillations. To determine the neutrino mass
hierarchy with a probability above 90%, the requirements on the baseline, the
energy resolution, the energy scale uncertainty, the detector mass, and the
event statistics are studied at different values of sin²(2θ₁₃).
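The Fourier cosine-transform analysis of the L/E spectrum can be sketched numerically. The toy below is our illustration, not the paper's code: it builds a reactor ν̄_e survival-probability curve over L/E (all oscillation-parameter values are assumed, not taken from the paper) and scans a Fourier cosine transform over trial Δm² values, whose peak tracks the atmospheric mass splitting.

```python
import numpy as np

def survival_probability(l_over_e, dm2_31=2.5e-3, dm2_21=7.5e-5,
                         sin2_2th13=0.09, sin2_2th12=0.86):
    """Reactor antineutrino survival probability; l_over_e in km/MeV,
    mass splittings in eV^2 (illustrative parameter values)."""
    th13 = 0.5 * np.arcsin(np.sqrt(sin2_2th13))
    th12 = 0.5 * np.arcsin(np.sqrt(sin2_2th12))
    phase = 1.267e3 * np.asarray(l_over_e)   # 1.267 * Δm² * L/E, L/E in km/MeV
    d21 = phase * dm2_21
    d31 = phase * dm2_31
    d32 = d31 - d21                          # Δ32 = Δ31 - Δ21
    return (1.0
            - sin2_2th13 * (np.cos(th12)**2 * np.sin(d31)**2
                            + np.sin(th12)**2 * np.sin(d32)**2)
            - np.cos(th13)**4 * sin2_2th12 * np.sin(d21)**2)

def fourier_cosine_transform(x, spectrum, dm2_grid):
    """FCT of an L/E spectrum evaluated on a grid of trial Δm² values."""
    omega = 2.0 * 1.267e3 * dm2_grid[:, None]    # sin²Δ oscillates at 2Δ
    return np.trapz(spectrum * np.cos(omega * x), x, axis=1)

x = np.linspace(10.0, 35.0, 4000)                # L/E window around 58 km / few MeV
spec = survival_probability(x)
dm2_grid = np.linspace(2.0e-3, 3.0e-3, 200)
fct = fourier_cosine_transform(x, spec - spec.mean(), dm2_grid)
peak = dm2_grid[np.argmax(np.abs(fct))]          # lands near the |Δm²₃₁|/|Δm²₃₂| scale
```

Because |Δm²₃₁| and |Δm²₃₂| differ only by Δm²₂₁, the two frequencies merge into one peak here; the hierarchy signature in the paper comes from the finer structure of the cosine and sine transforms around that peak.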
Structure-Preserving Graph Representation Learning
Although graph representation learning (GRL) has made significant progress, it
remains challenging to adequately extract and embed the rich topological
structure and feature information of graphs. Most existing methods focus on
local structure and fail to fully incorporate the global topological structure. To
this end, we propose a novel Structure-Preserving Graph Representation Learning
(SPGRL) method, to fully capture the structure information of graphs.
Specifically, to reduce the uncertainty and misinformation of the original
graph, we construct a feature graph as a complementary view via the k-nearest-
neighbor method. The feature graph can be contrasted with the original graph at
the node level to capture local relations. In addition, we retain the global
topological structure information by maximizing the mutual information (MI)
between the whole graph and the feature embeddings, which theoretically reduces
to exchanging the feature embeddings of the feature graph and the original
graph to reconstruct each other.
Extensive experiments show that our method achieves superior performance on the
semi-supervised node classification task and excellent robustness under noise
perturbations of the graph structure or node features.
Comment: Accepted by the IEEE International Conference on Data Mining (ICDM)
2022. arXiv admin note: text overlap with arXiv:2108.0482
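The complementary feature-graph view can be made concrete with a minimal sketch of a k-nearest-neighbor graph built from node-feature similarity. This is our illustration only: the function name, the cosine-similarity choice, and the value of k are assumptions, not details from the SPGRL paper.

```python
import numpy as np

def knn_feature_graph(features, k=3):
    """Build a symmetric {0,1} adjacency matrix linking each node to its
    k most cosine-similar nodes (self-loops excluded)."""
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T                            # pairwise cosine similarity
    np.fill_diagonal(sim, -np.inf)           # never pick a node as its own neighbor
    n = sim.shape[0]
    adj = np.zeros((n, n))
    nbrs = np.argsort(-sim, axis=1)[:, :k]   # indices of the top-k neighbors
    adj[np.arange(n)[:, None], nbrs] = 1.0
    return np.maximum(adj, adj.T)            # symmetrize the directed kNN edges

rng = np.random.default_rng(0)
feats = rng.normal(size=(20, 16))            # 20 nodes, 16-dim features
A = knn_feature_graph(feats, k=3)            # the "feature graph" view
```

The resulting adjacency would then play the role of the second view for node-level contrast against the original graph topology.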
A New Optical Model for Photomultiplier Tubes
In many applications, it is critical to construct an accurate optical model of
photomultiplier tubes (PMTs) to describe the angular and spectral responses of
the photon detection efficiency (PDE) of the PMTs in their working
media. In this study, we propose a new PMT optical model to describe both light
interactions with the PMT window and optical processes inside PMTs with
reasonable accuracy based on the optics theory and a GEANT4-based simulation
toolkit. The proposed model relates the PDE to the underlying optical processes
on which it depends. This model also provides a tool to
transform the PDE measured in one working medium (like air) to the PDE in other
media (like water, liquid scintillator, etc). Using two 20" MCP-PMTs and one
20" dynode PMT, we demonstrate a complete procedure to obtain the key
parameters used in the model from experimental data, such as the optical
properties of the antireflective coating and photocathode of the three PMTs.
The proposed model can effectively reproduce the angular responses of the
quantum efficiency of PMTs, even though an ideally uniform photocathode is
assumed in the model. Interestingly, the proposed model predicts a light yield
excess, relative to the predictions made at the detector design stage, similar
in level to that observed in the experimental data of many liquid
scintillator-based neutrino detectors. This excess had never been explained,
and the proposed PMT model provides a good explanation for it, highlighting the
imperfections of the PMT models used in those detector simulations.
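One ingredient of the medium dependence such a model must capture can be illustrated with plain Fresnel optics: the fraction of light transmitted into the PMT window glass depends on the refractive index of the external working medium. The sketch below is our illustration only; the refractive-index values are assumptions, and the actual model additionally treats the antireflective coating, photocathode thin-film optics, and processes inside the PMT.

```python
import numpy as np

def fresnel_transmittance(n1, n2, theta_i):
    """Unpolarized power transmittance from medium n1 into medium n2 at
    incidence angle theta_i (radians); assumes no total internal reflection."""
    theta_t = np.arcsin(np.clip(n1 * np.sin(theta_i) / n2, -1.0, 1.0))
    rs = ((n1 * np.cos(theta_i) - n2 * np.cos(theta_t))
          / (n1 * np.cos(theta_i) + n2 * np.cos(theta_t)))   # s-polarized amplitude
    rp = ((n1 * np.cos(theta_t) - n2 * np.cos(theta_i))
          / (n1 * np.cos(theta_t) + n2 * np.cos(theta_i)))   # p-polarized amplitude
    return 1.0 - 0.5 * (rs**2 + rp**2)       # average the two reflectances

N_GLASS = 1.52  # illustrative refractive index for PMT window glass
# Normal-incidence transmittance from different working media into the window:
transmittance = {medium: fresnel_transmittance(n, N_GLASS, 0.0)
                 for medium, n in [("air", 1.00), ("water", 1.33),
                                   ("liquid scintillator", 1.48)]}
```

Because the index mismatch shrinks in water or liquid scintillator, more light enters the window than in air, which is why a PDE measured in air cannot be applied to another medium without such a transformation.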
Self-Supervision Can Be a Good Few-Shot Learner
Existing few-shot learning (FSL) methods rely on training with a large
labeled dataset, which prevents them from leveraging abundant unlabeled data.
From an information-theoretic perspective, we propose an effective unsupervised
FSL method, learning representations with self-supervision. Following the
InfoMax principle, our method learns comprehensive representations by capturing
the intrinsic structure of the data. Specifically, we maximize the mutual
information (MI) of instances and their representations with a low-bias MI
estimator to perform self-supervised pre-training. Unlike supervised
pre-training, which focuses on the discriminative features of the seen classes,
our self-supervised model is less biased toward the seen classes, resulting in
better generalization to unseen classes. We explain that supervised
pre-training and self-supervised pre-training are actually maximizing different
MI objectives. Extensive experiments are further conducted to analyze their FSL
performance with various training settings. Surprisingly, the results show that
self-supervised pre-training can outperform supervised pre-training under the
appropriate conditions. Compared with state-of-the-art FSL methods, our
approach achieves comparable performance on widely used FSL benchmarks without
any labels of the base classes.
Comment: ECCV 2022, code: https://github.com/bbbdylan/unisia
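The idea of maximizing MI between instances and their representations can be made concrete with the standard InfoNCE lower bound, shown here only as an illustration of contrastive MI estimation: the paper's low-bias estimator is a different construction, and every name in this sketch is ours.

```python
import numpy as np

def infonce_bound(z1, z2, temperature=0.1):
    """InfoNCE lower bound on I(z1; z2) for row-aligned pairs of views.
    The bound is capped at log(N), one source of the bias the paper's
    low-bias estimator is designed to reduce."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature                 # (N, N) similarity scores
    logits -= logits.max(axis=1, keepdims=True)        # numerical stability
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = z1.shape[0]
    # Positive pairs sit on the diagonal; all other pairs act as negatives.
    return np.log(n) + log_softmax[np.arange(n), np.arange(n)].mean()

rng = np.random.default_rng(1)
z = rng.normal(size=(64, 32))
aligned = infonce_bound(z, z + 0.01 * rng.normal(size=z.shape))  # matched views
shuffled = infonce_bound(z, z[rng.permutation(64)])              # broken pairing
```

Matched views drive the estimate toward its log(N) ceiling, while shuffled pairings keep it near zero, which is the behavior a pre-training objective exploits.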