Unveiling the Cosmic History of Light
The Universe began with the Big Bang 13.7 billion years ago, and the first stars and galaxies formed roughly 400 million years later. All the light ever emitted at ultraviolet, optical, and infrared wavelengths, from the epoch of those first stars to the present day, makes up the Extragalactic Background Light (EBL). This diffuse background interacts with photons emitted by distant high-energy sources in the GeV and TeV regime: through photon-photon interactions, the high-energy photon is annihilated and an electron-positron pair is produced. This gives researchers a powerful technique to study the EBL by analyzing the imprint it leaves on the spectra of distant γ-ray sources. For my PhD thesis project, I used this method to study the attenuated spectra of two major classes of high-energy sources, γ-ray bursts and active galactic nuclei, observed with the Fermi Large Area Telescope and Cherenkov telescopes. While similar studies have been performed in the past, most of the derived measurements came from simply scaling the optical depth due to the EBL to match the observed spectra, leaving the estimated EBL spectral intensity uncertain. To tackle this, we recently developed a dedicated technique that deconvolves the EBL into smaller energy and redshift bins. Using this technique together with an extensive GeV+TeV source sample, we obtained the first homogeneous set of measurements of the EBL spectral intensity covering UV to IR wavelengths. Additionally, we used this result to investigate several still-debated astrophysical topics, such as the star formation history of the Universe, the Hubble constant (H₀), and the matter density (Ω_m).
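The attenuation effect described above can be sketched numerically. In this hypothetical example, both the power-law intrinsic spectrum and the optical-depth function are illustrative stand-ins (not a real EBL model); the key relation is that the observed flux is the intrinsic flux suppressed by exp(−τ):

```python
import numpy as np

def intrinsic_flux(E, N0=1e-10, gamma=2.0):
    """Toy intrinsic power-law spectrum; E in TeV, units arbitrary."""
    return N0 * E ** (-gamma)

def toy_tau(E, z):
    """Illustrative (not physical) optical depth: grows with energy and redshift."""
    return 0.5 * E * (1 + z) ** 2

def observed_flux(E, z):
    # EBL attenuation: each gamma ray survives pair production
    # with probability exp(-tau(E, z))
    return intrinsic_flux(E) * np.exp(-toy_tau(E, z))

E = np.logspace(-1, 1, 5)  # 0.1 - 10 TeV
print(observed_flux(E, z=0.5) / intrinsic_flux(E))  # attenuation factor per energy
```

The attenuation factor falls steeply with energy and redshift, which is precisely the spectral imprint the deconvolution technique exploits.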
Exploring the Galactic neutrino flux origins using IceCube datasets
Astrophysical neutrinos detected by the IceCube observatory can be of
Galactic or extragalactic origin. The collective contribution of all the
detected neutrinos allows us to measure the total diffuse neutrino Galactic and
extragalactic signal. In this work, we describe a simulation package that makes
use of this diffuse Galactic contribution information to simulate a population
of Galactic sources distributed in a manner similar to our own galaxy. This is
then compared with the sensitivities reported by different IceCube data samples
to estimate the number of sources that IceCube can detect. We provide the
results of the simulation, which allow us to make statements about the nature
of the sources contributing to the IceCube diffuse signal.
Comment: Presented at the 38th International Cosmic Ray Conference (ICRC2023).
See arXiv:2307.13047 for all IceCube contributions.
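A minimal version of such a population study can be sketched as follows. Everything here is a hypothetical placeholder (source counts, flux distribution, and the sensitivity threshold) for quantities the actual simulation package draws from IceCube measurements; the point is only the structure: draw a population normalized to the diffuse flux, then count sources above a sample's sensitivity.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_population(n_sources, total_flux, alpha=1.5):
    """Draw source fluxes from a power-law-like (Pareto) distribution and
    rescale so they sum to the assumed diffuse Galactic flux (arbitrary units)."""
    raw = rng.pareto(alpha, n_sources) + 1.0
    return raw * total_flux / raw.sum()

def n_detectable(fluxes, sensitivity):
    # a source counts as "detected" if its flux exceeds the sample's sensitivity
    return int(np.sum(fluxes > sensitivity))

fluxes = simulate_population(n_sources=500, total_flux=1.0)
print(n_detectable(fluxes, sensitivity=0.01))
```

Repeating this over many random populations and several data samples yields the detection statements described in the abstract.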
Probing the EBL evolution at high redshift using GRBs detected with the Fermi-LAT
The extragalactic background light (EBL), from ultraviolet to infrared
wavelengths, is predominantly due to emission from stars, accreting black holes
and reprocessed light due to Galactic dust. The EBL can be studied through the
imprint it leaves, via γγ absorption of high-energy photons, in
the spectra of distant γ-ray sources. The EBL has been probed through
the search for the attenuation it produces in the spectra of BL Lacertae (BL
Lac) objects and individual γ-ray bursts (GRBs). GRBs have significant
advantages over blazars for the study of the EBL especially at high redshifts.
Here we analyze a combined sample of twenty-two GRBs, detected by the Fermi
Large Area Telescope between 65 MeV and 500 GeV. We report a marginal detection
(at the ~2.8σ level) of the EBL attenuation in the stacked spectra of
the source sample. This measurement represents a first constraint of the EBL at
an effective redshift of ~1.8. We combine our results with prior EBL
constraints and conclude that Fermi-LAT is instrumental to constrain the UV
component of the EBL. We discuss the implications on existing empirical models
of EBL evolution.
Comment: on behalf of the Fermi-LAT collaboration, accepted for publication in Ap
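The "scaling" approach that such stacked analyses build on fits a single normalization b against a model optical depth, F_obs = F_int · exp(−b·τ_model), with b ≈ 1 indicating agreement with the model. A toy least-squares version, using a made-up τ_model in place of a tabulated EBL model, looks like:

```python
import numpy as np

def model_tau(E, z):
    """Stand-in for a tabulated EBL-model optical depth (E in TeV)."""
    return 0.3 * E * (1 + z)

def fit_scaling(E, z, F_obs, F_int):
    # With F_obs = F_int * exp(-b * tau), b follows from a linear fit:
    # ln(F_int / F_obs) = b * tau  ->  least-squares slope through the origin
    tau = model_tau(E, z)
    y = np.log(F_int / F_obs)
    return float(np.sum(tau * y) / np.sum(tau ** 2))

E = np.linspace(0.1, 0.5, 10)   # TeV
z = 1.8                          # effective redshift, as in the GRB stack
F_int = E ** -2.0
b_true = 1.2
F_obs = F_int * np.exp(-b_true * model_tau(E, z))
print(fit_scaling(E, z, F_obs, F_int))  # recovers b ~ 1.2
```

In the real analysis the fit is a joint likelihood over all bursts rather than a per-spectrum least squares, but the estimated quantity is the same scaling.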
Native Kidney Renal Cell Carcinoma in Renal Allograft Transplant Patients – Our Experience
The immunosuppression administered to renal transplant recipients to safeguard renal function elevates their susceptibility to renal cancer, which is estimated to be 15 times higher than in the general population. The current study aimed to analyze various aspects of native kidney renal cell carcinoma (RCC) in renal transplant recipients. We retrospectively analyzed 11 patients who underwent nephrectomy for RCC of the native kidney among renal transplant recipients at our institution since 1992. Our institutional incidence was 0.4%. The median age at presentation was 57 (49–60) years, and the male:female ratio was 10:1. Most patients were asymptomatic at presentation, and the native kidney disease before transplantation was undetermined. In our study, the median interval between transplantation and diagnosis of RCC was 9.1 (8.4–11.2) years. All patients underwent native kidney nephrectomy. Clear cell type was more common than papillary type (3.5 [2.5–4.2]). Ten patients were diagnosed with stage I disease and one patient had stage IV disease. Fuhrman nuclear grading revealed low grades in nine patients, while three patients had Grade 3. Immunosuppressive therapy was modified in nine patients. Meticulous follow-up of renal transplant patients is essential for earlier diagnosis and appropriate treatment of native kidney RCC in transplant recipients. The authors recommend yearly follow-up of transplant recipients, with special emphasis on ultrasound of the native kidneys.
Oracle-Efficient Differentially Private Learning with Public Data
Due to statistical lower bounds on the learnability of many function classes
under privacy constraints, there has been recent interest in leveraging public
data to improve the performance of private learning algorithms. In this model,
algorithms must always guarantee differential privacy with respect to the
private samples while also ensuring learning guarantees when the private data
distribution is sufficiently close to that of the public data. Previous work
has demonstrated that when sufficient public, unlabelled data is available,
private learning can be made statistically tractable, but the resulting
algorithms have all been computationally inefficient. In this work, we present
the first computationally efficient algorithms that provably leverage public
data to learn privately whenever a function class is learnable non-privately,
where our notion of computational efficiency is with respect to the number of
calls to an optimization oracle for the function class. In addition to this
general result, we provide specialized algorithms with improved sample
complexities in the special cases when the function class is convex or when the
task is binary classification.
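The public/private split in this model can be illustrated with a classic differential-privacy building block, the exponential mechanism. This is a generic ε-DP selection primitive, not the oracle-efficient algorithm of the paper: public data proposes candidate hypotheses, and the private data then selects among them under a formal privacy guarantee.

```python
import numpy as np

rng = np.random.default_rng(1)

def exponential_mechanism(candidates, scores, epsilon, sensitivity=1.0):
    """epsilon-DP selection: sample a candidate with probability
    proportional to exp(epsilon * score / (2 * sensitivity))."""
    logits = epsilon * np.asarray(scores, dtype=float) / (2 * sensitivity)
    p = np.exp(logits - logits.max())  # subtract max for numerical stability
    p /= p.sum()
    return candidates[rng.choice(len(candidates), p=p)]

# Public data proposes candidate thresholds; private data scores them.
# The labeling rule below is hypothetical, purely for illustration.
public = rng.normal(0.0, 1.0, 200)
private = rng.normal(0.1, 1.0, 200)
thresholds = np.quantile(public, [0.25, 0.5, 0.75])
labels = (private > 0).astype(int)
# counting-query score: agreement count has sensitivity 1 per private record
scores = [np.sum((private > t).astype(int) == labels) for t in thresholds]
chosen = exponential_mechanism(list(thresholds), scores, epsilon=1.0)
print(chosen)
```

The privacy budget ε only pays for the selection step over the private samples; the candidate generation from public data is free, which is the intuition behind leveraging public data.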
A new measurement of the Hubble constant and matter content of the Universe using extragalactic background light γ-ray attenuation
The Hubble constant (H₀) and matter density (Ω_m) of the Universe are
measured using the latest γ-ray attenuation results from Fermi-LAT and
Cherenkov telescopes. This methodology is based upon the fact that the
extragalactic background light supplies opacity for very high energy photons
via photon-photon interaction. The amount of γ-ray attenuation along the
line of sight depends on the expansion rate and matter content of the Universe.
This novel strategy yields values of H₀ (in km s⁻¹ Mpc⁻¹) and of Ω_m. These
estimates are independent of, and complementary to, those based on the
distance ladder, cosmic microwave background (CMB), clustering with weak
lensing, and strong lensing data. We also produce a joint likelihood analysis
of our results from γ rays and those from more mature methodologies, excluding
the CMB, yielding combined values of H₀ (in km s⁻¹ Mpc⁻¹) and Ω_m.
Comment: 9 pages, 6 figures, 1 table. Accepted by Ap
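The cosmological dependence enters through the line-of-sight distance over which γ rays accumulate opacity. A minimal flat-ΛCDM sketch (trapezoidal integration, with hypothetical parameter values) shows how that distance, and hence the attenuation at fixed EBL density, shrinks as H₀ grows:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def comoving_distance(z, H0, Om, n=1000):
    """Flat LCDM comoving distance in Mpc.
    The EBL optical depth inherits this dependence on H0 and Omega_m,
    which is what allows gamma-ray attenuation to constrain both."""
    zs = np.linspace(0.0, z, n)
    Ez = np.sqrt(Om * (1 + zs) ** 3 + (1 - Om))  # H(z)/H0 in flat LCDM
    f = 1.0 / Ez
    integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(zs))  # trapezoid rule
    return (C_KM_S / H0) * integral

# larger H0 -> shorter distances -> less attenuation along the line of sight
print(comoving_distance(1.0, H0=67.0, Om=0.32))
print(comoving_distance(1.0, H0=74.0, Om=0.32))
```

Comparing observed attenuation against this distance scaling, for a known EBL density, is the essence of the method.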
Autonomous Robotic Reinforcement Learning with Asynchronous Human Feedback
Ideally, we would place a robot in a real-world environment and leave it
there improving on its own by gathering more experience autonomously. However,
algorithms for autonomous robotic learning have been challenging to realize in
the real world. While this has often been attributed to the challenge of sample
complexity, even sample-efficient techniques are hampered by two major
challenges - the difficulty of providing well "shaped" rewards, and the
difficulty of continual reset-free training. In this work, we describe a system
for real-world reinforcement learning that enables agents to show continual
improvement by training directly in the real world without requiring
painstaking effort to hand-design reward functions or reset mechanisms. Our
system leverages occasional non-expert human-in-the-loop feedback from remote
users to learn informative distance functions to guide exploration while
leveraging a simple self-supervised learning algorithm for goal-directed policy
learning. We show that in the absence of resets, it is particularly important
to account for the current "reachability" of the exploration policy when
deciding which regions of the space to explore. Based on this insight, we
instantiate a practical learning system - GEAR, which enables robots to simply
be placed in real-world environments and left to train autonomously without
interruption. The system streams robot experience to a web interface only
requiring occasional asynchronous feedback from remote, crowdsourced,
non-expert humans in the form of binary comparative feedback. We evaluate this
system on a suite of robotic tasks in simulation and demonstrate its
effectiveness at learning behaviors both in simulation and the real world.
Project website: https://guided-exploration-autonomous-rl.github.io/GEAR/.
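Binary comparative feedback of the kind described above is commonly turned into a distance function via a Bradley-Terry preference model. The following self-contained sketch (synthetic 2-D states and a quadratic feature map, not GEAR's actual network) learns a goal distance from "which state looks closer to the goal?" comparisons:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical setup: states are 2-D points, goal at the origin.
# A human comparison labels pair (a, b) with 1 if a looks closer to the goal.
goal = np.zeros(2)
states_a = rng.uniform(-1, 1, (500, 2))
states_b = rng.uniform(-1, 1, (500, 2))
labels = (np.linalg.norm(states_a - goal, axis=1)
          < np.linalg.norm(states_b - goal, axis=1)).astype(float)

# Learn d(s) = w . phi(s) with phi(s) = s**2 (distance as a quadratic form).
# Bradley-Terry: P(a preferred over b) = sigmoid(d(b) - d(a))
w = np.zeros(2)
phi_a, phi_b = states_a ** 2, states_b ** 2
for _ in range(2000):
    p = sigmoid((phi_b - phi_a) @ w)
    grad = (phi_b - phi_a).T @ (labels - p) / len(labels)
    w += 0.5 * grad  # gradient ascent on the preference log-likelihood
print(w)  # both weights positive: learned distance grows away from the goal
```

The learned distance can then guide exploration toward under-visited but reachable regions, which is the role the human feedback plays in the full system.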