55 research outputs found
Dark matter capture in celestial objects: light mediators, self-interactions, and complementarity with direct detection
We generalize the formalism for DM capture in celestial bodies to account for
arbitrary mediator mass, and update the existing and projected astrophysical
constraints on the DM-nucleon scattering cross section from observations of
neutron stars. We show that the astrophysical constraints on the DM-nucleon
interaction strength, which were thought to be the most stringent, weaken
drastically for light mediators and can be voided entirely. For asymmetric DM, existing
astrophysical constraints are completely washed out for mediators lighter than
5 MeV, and for annihilating DM the projected constraints are washed out for
mediators lighter than 0.25 MeV. Related terrestrial direct detection bounds
also weaken, but in a complementary fashion: they supersede the astrophysical
capture bounds at small DM masses for asymmetric DM and at large DM masses for
annihilating DM. Repulsive self-interactions of DM have an insignificant impact
on the total capture rate, but a significant impact on the black hole formation
criterion. This further weakens the constraints on DM-nucleon interaction
strength for asymmetric self-repelling DM, whereas constraints remain unaltered
for annihilating self-repelling DM. We use the correct Hawking evaporation rate
of the newly formed black hole, which previous studies approximated as a
blackbody, and show that, although this further alleviates collapse, the
observation of a neutron star collapse can still probe a wide range of DM
self-interaction strengths.
Comment: v1: 28 pages, 9 figures, comments welcome
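For orientation, the blackbody approximation referred to above is the standard Stefan-Boltzmann estimate of Hawking emission from a Schwarzschild black hole (massless photon emission only); this is the textbook expression, not the corrected greybody rate used in the paper:

```latex
% Hawking temperature of a Schwarzschild black hole of mass M,
% and the blackbody (Stefan-Boltzmann) mass-loss estimate:
T_H = \frac{\hbar c^3}{8\pi G M k_B}, \qquad
\frac{dM}{dt} \simeq -\frac{\hbar c^4}{15360\,\pi\, G^2 M^2}.
```

A full greybody treatment multiplies the emitted spectrum by frequency-dependent absorption probabilities, which is what distinguishes the correct evaporation rate from this approximation.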
DIVAS: An LLM-based End-to-End Framework for SoC Security Analysis and Policy-based Protection
Securing critical assets in a bus-based System-On-Chip (SoC) is imperative to
mitigate potential vulnerabilities and prevent unauthorized access, ensuring
the integrity, availability, and confidentiality of the system. Ensuring
security throughout the SoC design process is a formidable task owing to the
inherent intricacies in SoC designs and the dispersion of assets across diverse
IPs. Large Language Models (LLMs), exemplified by ChatGPT (OpenAI) and BARD
(Google), have showcased remarkable proficiency across various domains,
including security vulnerability detection and prevention in SoC designs. In
this work, we propose DIVAS, a novel framework that leverages the knowledge
base of LLMs to identify security vulnerabilities from user-defined SoC
specifications, map them to the relevant Common Weakness Enumerations (CWEs),
generate equivalent assertions, and enforce security measures through security
policies. The proposed framework is implemented using multiple ChatGPT and BARD
models, and their performance is analyzed while generating relevant CWEs from
the SoC specifications provided.
The experimental results obtained from open-source SoC benchmarks demonstrate
the efficacy of our proposed framework.
Comment: 15 pages, 7 figures, 8 tables
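As a rough illustration of the spec-to-CWE-to-assertion flow described in this abstract, the sketch below replaces the LLM query with a toy keyword lookup so it runs self-contained; `CWE_HINTS`, `map_spec_to_cwes`, and the assertion template are hypothetical stand-ins, not DIVAS's actual implementation.

```python
# Hypothetical sketch of a DIVAS-style flow: spec -> CWE -> assertion.
# The real framework queries an LLM (ChatGPT/BARD); a keyword lookup
# stands in here so the example is runnable without an API.
from dataclasses import dataclass

# Toy knowledge base: keyword -> (CWE ID, short description).
# The IDs are real CWE entries; the keyword mapping is illustrative.
CWE_HINTS = {
    "debug": ("CWE-1244", "Internal asset exposed to unsafe debug access"),
    "lock": ("CWE-1234", "Debug modes allow override of locks"),
}

@dataclass
class Finding:
    spec_line: str
    cwe_id: str
    description: str
    assertion: str

def map_spec_to_cwes(spec_lines):
    """Map SoC spec statements to candidate CWEs (stub for an LLM query)."""
    findings = []
    for line in spec_lines:
        for keyword, (cwe_id, desc) in CWE_HINTS.items():
            if keyword in line.lower():
                # Emit a placeholder SystemVerilog-style assertion string.
                assertion = (f"assert property (@(posedge clk) "
                             f"!({keyword}_mode && asset_readable));")
                findings.append(Finding(line, cwe_id, desc, assertion))
    return findings

spec = [
    "The debug port exposes internal registers after boot.",
    "Configuration registers are lock-protected once fused.",
]
for f in map_spec_to_cwes(spec):
    print(f.cwe_id, "->", f.assertion)
```

In the actual framework, each stage (vulnerability identification, CWE mapping, assertion generation) would be a separate LLM prompt rather than a dictionary lookup, with the generated assertions then checked against the design.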
Rankers, Rankees, & Rankings: Peeking into the Pandora's Box from a Socio-Technical Perspective
Algorithmic rankers have a profound impact on our increasingly data-driven
society. From leisure activities, such as the movies we watch and the
restaurants we patronize, to highly consequential decisions, like making
educational and occupational choices or getting hired by companies -- these are
all driven by sophisticated yet mostly inaccessible rankers. A small change to
how these algorithms process the rankees (i.e., the data items that are ranked)
can have profound consequences. For example, a change in rankings can erode
the prestige of a university or have drastic consequences for a job candidate
who misses out on a place in an organization's preferred top-k list. This paper
is a call to action to the human-centered data science
research community to develop principled methods, measures, and metrics for
studying the interactions among the socio-technical context of use,
technological innovations, and the resulting consequences of algorithmic
rankings on multiple stakeholders. Given the spate of new legislation on
algorithmic accountability, it is imperative that researchers from social
science, human-computer interaction, and data science work in unison for
demystifying how rankings are produced, who has agency to change them, and what
metrics of socio-technical impact one must use for informing the context of
use.
Comment: Accepted for Interrogating Human-Centered Data Science workshop at
CHI'2
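A minimal sketch (not from the paper) of the sensitivity the abstract describes: a small change to a ranker's feature weights reshuffles who makes the top-k. The candidates, features, and weights below are invented for illustration.

```python
# Tiny linear ranker: score each rankee by a weighted sum of features.
def rank(items, weights):
    """Return rankee names ordered best-first under the given weights."""
    scored = sorted(items,
                    key=lambda it: -sum(w * f for w, f in zip(weights, it[1])))
    return [name for name, _ in scored]

# Rankees with two features, e.g. (test_score, experience).
candidates = [
    ("Ana", (0.90, 0.90)),
    ("Bo",  (0.40, 1.00)),
    ("Cy",  (0.95, 0.42)),
]

# Equal weights: Bo's experience puts him in the top-2.
print(rank(candidates, (0.50, 0.50))[:2])  # ['Ana', 'Bo']

# A 5-point shift toward test_score swaps Bo out for Cy.
print(rank(candidates, (0.55, 0.45))[:2])  # ['Ana', 'Cy']
```

The 0.05 weight perturbation is invisible to anyone outside the ranker's implementation, yet it decides which candidate "missed out" on the top-k, which is precisely the socio-technical opacity the paper targets.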
Enhanced defect generation in gate oxides of P-channel MOS transistors in the presence of water