Tentative evidence of spatially extended GeV emission from SS433/W50
We analyze 10 years of Fermi-LAT data towards the SS433/W50 region. With the
latest source catalog and diffuse background models, the gamma-ray excess from
SS433/W50 is detected with a significance of 6σ in the photon energy
range of 500 MeV - 10 GeV. Our analysis indicates that an extended flat disk
morphology is preferred over a point-source description, suggesting that the
GeV emission region is much larger than that of the TeV emission detected by
HAWC. The size of the GeV emission is instead consistent with the extent of the
radio nebula W50, a supernova remnant being distorted by the jets, so we
suggest that the GeV emission may originate from this supernova remnant. The
spectral result of the GeV emission is also consistent with a supernova
remnant origin. We also derive GeV flux upper limits on the TeV emission
region, which place moderate constraints on leptonic models for the
multiwavelength data.
Comment: 7 pages, 4 figures, accepted for publication in A&
VLBI astrometry of two millisecond pulsars
We present astrometric results on two millisecond pulsars, PSR B1257+12 and
PSR J1022+1001, carried out with VLBI. For PSR B1257+12, a
model-independent distance (in pc) and proper motion (in mas/yr) were
obtained from 5 epochs of VLBA and 4 epochs of EVN observations,
spanning about 2 years. The two-dimensional proper motion of PSR J1022+1001
(in mas/yr) was also estimated using 3 epochs of EVN observations. Based on
our results, the
X-ray efficiency of PSR B1257+12 should be in the same range as other
millisecond pulsars, and not as low as previously thought.
Comment: Proceedings of IAUS 291 "Neutron Stars and Pulsars: Challenges and
Opportunities after 80 years", J. van Leeuwen (ed.); 3 page
Certified Monotonic Neural Networks
Learning models that are monotonic with respect to a subset of the inputs is
desirable for addressing fairness, interpretability, and generalization
issues in practice. Existing methods for learning monotonic
neural networks either require specifically designed model structures to ensure
monotonicity, which can be too restrictive/complicated, or enforce monotonicity
by adjusting the learning process, which cannot provably guarantee the learned
model is monotonic on the selected features. In this work, we propose to
certify the monotonicity of general piecewise-linear neural networks by
solving a mixed-integer linear programming problem. This provides a new,
general approach for learning monotonic neural networks with arbitrary model
structures. Our
method allows us to train neural networks with heuristic monotonicity
regularizations, and we can gradually increase the regularization magnitude
until the learned network is certified monotonic. Compared to prior works, our
approach does not require human-designed constraints on the weight space and
also yields more accurate approximations. Empirical studies on various
datasets demonstrate the efficiency of our approach over state-of-the-art
methods such as Deep Lattice Networks.
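The certification idea can be illustrated in one dimension. For a one-hidden-layer ReLU network with scalar input, the function is piecewise linear with a constant slope on each region between ReLU breakpoints, so monotonicity can be certified exactly by enumerating those regions. The sketch below is a minimal stand-in for the paper's mixed-integer-programming certificate, which handles general architectures and higher-dimensional inputs; all function names here are illustrative, not from the paper's code:

```python
import numpy as np

def slopes_by_region(w1, b1, w2):
    """Exact per-region slopes of f(x) = sum_i w2[i] * relu(w1[i]*x + b1[i]).

    The function is piecewise linear; its slope only changes at the
    breakpoints x = -b1[i]/w1[i] where a ReLU switches on or off.
    """
    bps = sorted(-b / w for w, b in zip(w1, b1) if w != 0)
    if not bps:                      # all w1 == 0: f is constant
        probes = [0.0]
    else:                            # one probe point inside each region
        probes = ([bps[0] - 1.0]
                  + [(a + b) / 2 for a, b in zip(bps, bps[1:])]
                  + [bps[-1] + 1.0])
    slopes = []
    for x in probes:
        active = (w1 * x + b1) > 0   # which ReLUs are on in this region
        slopes.append(float(np.sum(w2 * w1 * active)))
    return slopes

def certify_monotone(w1, b1, w2):
    """Certified monotone non-decreasing iff every region slope is >= 0."""
    return all(s >= 0 for s in slopes_by_region(w1, b1, w2))

# Monotone example: both units contribute non-negative slope everywhere.
w1 = np.array([1.0, 2.0])
b1 = np.array([0.0, -1.0])
print(certify_monotone(w1, b1, np.array([1.0, 0.5])))   # True
# Flipping one outer weight makes the slope negative for x > 0.5.
print(certify_monotone(w1, b1, np.array([1.0, -2.0])))  # False
```

In the training procedure the abstract describes, one would retrain with a progressively larger monotonicity-regularization weight until a check of this kind returns true for the learned network.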