
    Monochromatic Gamma Rays from Dark Matter Annihilation to Leptons

    We investigate the relation between the annihilation of dark matter (DM) particles into lepton pairs and into 2-body final states including one or two photons. We parametrize the DM interactions with leptons in terms of contact interactions, and calculate the loop-level annihilation into monochromatic gamma rays, specifically computing the ratio of the DM annihilation cross sections into two gamma rays versus lepton pairs. While the loop-level processes are generically suppressed in comparison with the tree-level annihilation into leptons, we find that some choices for the mediator spin and coupling structure lead to large branching fractions into gamma-ray lines. This result has implications for a dark matter contribution to the AMS-02 positron excess. We also explore the possibility of mediators which are charged under a dark symmetry and find that, for these loop-level processes, an effective field theory description is accurate for DM masses up to about half the mediator mass.
    Comment: 21 pages plus appendices, 7 figures. v2: added experimental constraints from CMB and Fermi, expanded and reorganized discussion throughout. Accepted by JHE
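A back-of-the-envelope way to see the generic loop suppression mentioned in the abstract (an illustrative order-of-magnitude estimate, not the paper's computed ratios): the tree-level annihilation into leptons proceeds directly through the contact interaction, while the two-photon line requires the leptons to run in a loop, costing one power of the electromagnetic coupling per photon vertex together with the usual loop factor of $1/\pi$ per coupling:

```latex
R \;\equiv\; \frac{\langle\sigma v\rangle_{\gamma\gamma}}{\langle\sigma v\rangle_{\ell^+\ell^-}}
\;\sim\; \left(\frac{\alpha_{\mathrm{em}}}{\pi}\right)^{2}
\;\approx\; 5\times10^{-6}
```

Under this naive estimate the branching fraction into gamma-ray lines is tiny; the result highlighted above is that particular mediator spins and coupling structures evade it.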

    Use of Controlled Vocabularies: Potential applications to time series data [talk]

    Presented at EarthCube Ocean Time Series Workshop, Honolulu, HI, September 13-15, 2019. NSF #1435578, #192461

    Mayall: a framework for desktop JavaScript auditing and post-exploitation analysis

    Writing desktop applications in JavaScript offers developers the opportunity to write cross-platform applications with cutting-edge capabilities. However, in doing so, they potentially submit their code to a number of unsanctioned modifications from malicious actors. Electron is one such JavaScript application framework which facilitates this multi-platform, out-of-the-box paradigm and is based upon the Node.js JavaScript runtime, an increasingly popular server-side technology. In bringing this technology to the client-side environment, previously unrealized risks are exposed to users due to the powerful system programming interface that Node.js exposes. In a concerted effort to highlight previously unexposed risks in these rapidly expanding frameworks, this paper presents the Mayall Framework, an extensible toolkit aimed at JavaScript security auditing and post-exploitation analysis. The paper also examines fifteen highly popular Electron applications and demonstrates that two thirds of them use known vulnerable elements with high CVSS scores. Moreover, this paper discloses a wide-reaching and overlooked vulnerability within the Electron Framework which is a direct by-product of shipping the runtime unaltered with each application, allowing malicious actors to modify source code and inject covert malware inside verified and signed applications without restriction. Finally, a number of injection vectors are explored and appropriate remediations are proposed.
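The tampering risk described above stems from shipping application code (e.g. the `app.asar` archive) without integrity protection. A minimal sketch of the kind of mitigation involved is a digest check against a known-good hash at startup; this is an illustrative example, not the Mayall Framework's actual tooling or Electron's remediation:

```python
import hashlib
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def verify_asar(asar_path: Path, expected_digest: str) -> bool:
    """Compare an application archive against a known-good digest.

    A mismatch indicates the shipped code was modified after signing.
    """
    return sha256_of(asar_path) == expected_digest
```

In practice the expected digest itself must be stored somewhere an attacker cannot rewrite (for example, inside the signed binary), otherwise the check can be patched out along with the payload.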

    Repository as a service (RaaS)

    In his oft-quoted seminal paper ‘Institutional Repositories: Essential Infrastructure For Scholarship In The Digital Age’, Clifford Lynch (2003) described the Institutional Repository as “a set of services that a university offers to the members of its community for the management and dissemination of digital materials created by the institution and its community members.” This paper seeks instead to define the repository service at a more primitive level, without the specialism of being an ‘Institutional Repository’, and looks at how it can be viewed as providing a service within appropriate boundaries, and what that could mean for the future development of repositories, our expectations of what repositories should be, and how they could fit into the set of services required to deliver an Institutional Repository service as described by Lynch.

    Geoscience data publication: practices and perspectives on enabling the FAIR guiding principles

    © The Author(s), 2021. This article is distributed under the terms of the Creative Commons Attribution License. The definitive version was published in Kinkade, D., & Shepherd, A. Geoscience data publication: practices and perspectives on enabling the FAIR guiding principles. Geoscience Data Journal, (2021): https://doi.org/10.1002/gdj3.120. Introduced in 2016, the FAIR Guiding Principles endeavour to significantly improve the process of today's data-driven research. The Principles present a concise set of fundamental concepts that can facilitate the findability, accessibility, interoperability and reuse (FAIR) of digital research objects by both machines and human beings. The emergence of FAIR has initiated a flurry of activity within the broader data publication community, yet the principles are still not fully understood by many community stakeholders. This has led to challenges such as misinterpretation and co-opted use, along with persistent gaps in current data publication culture, practices and infrastructure that need to be addressed to achieve a FAIR data end-state. This paper presents an overview of the practices and perspectives related to the FAIR Principles within the Geosciences and offers discussion on the value of the principles in the larger context of what they are trying to achieve. The authors of this article recommend using the principles as a tool to bring awareness to the types of actions that can improve the practice of data publication to meet the needs of all data consumers. FAIR Guiding Principles should be interpreted as an aspirational guide to focus behaviours that lead towards a more FAIR data environment. The intentional discussions and incremental changes that bring us closer to these aspirations provide the best value to our community as we build the capacity that will support and facilitate new discovery of earth systems. The writing of this article was supported by the NSF, grant no. 1924618

    Leveraging the GeoLink Knowledge Base for Cruise Information

    Linked Open Data (LOD) is providing an excellent opportunity for repositories, libraries, and archives to expand the use of their holdings and advance the work of researchers. The implementation of the GeoLink Knowledgebase has created an exciting LOD framework for organizations specializing in Earth Sciences. As an NSF EarthCube Building Block, GeoLink brings together several powerful data sources, such as BCO-DMO, Rolling Deck to Repository (R2R), DataONE, IEDA, IODP, and LTER, with publication providers such as the MBLWHOI Library’s Woods Hole Open Access Server (WHOAS), ESIP, and AGU. While publishing to the GeoLink knowledgebase offers a great way to make collections and metadata more findable and relevant, becoming a linked data publisher is not the only way to engage with linked data or the GeoLink project. Any repository can use simple, easily customizable code developed by members of the GeoLink team to add live GeoLink content to a page based on the item's metadata, leveraging GeoLink’s powerful framework for searching across repositories, organizations, and disciplines. GeoLink was funded by the National Science Foundation: EAGER: Collaborative Research: Building Blocks, Leveraging Semantics and Linked Data for Geoscience Data Sharing and Discovery; EarthCube Building Blocks: Collaborative Proposal: GeoLink – Leveraging Semantics and Linked Data for Data Sharing and Discovery in the Geosciences
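Consuming a linked-data knowledge base like GeoLink's typically means sending a SPARQL query to its endpoint and flattening the standard SPARQL 1.1 JSON results into something a page template can render. A minimal sketch follows; the `gl:Cruise` graph pattern and prefix URI are illustrative placeholders, not GeoLink's actual schema or endpoint:

```python
import json

# Hypothetical query for cruise records; the ontology terms are assumptions.
CRUISE_QUERY = """
PREFIX gl: <http://schema.geolink.org/1.0/base/main#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
SELECT ?cruise ?label WHERE {
  ?cruise a gl:Cruise ;
          rdfs:label ?label .
} LIMIT 10
"""


def parse_sparql_json(payload: str) -> list:
    """Flatten a SPARQL 1.1 JSON results document into simple dicts.

    Each binding maps variable names to their bound values, dropping
    the type/datatype metadata that page templates rarely need.
    """
    doc = json.loads(payload)
    return [
        {var: cell["value"] for var, cell in binding.items()}
        for binding in doc["results"]["bindings"]
    ]
```

The flattened rows can then be dropped straight into an item page, which is the "live GeoLink content" pattern the abstract describes.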

    Apparatus and method for gelling liquefied gasses

    The present invention is a method and apparatus for gelling liquid propane and other liquefied gasses. The apparatus includes a temperature controlled churn mixer, vacuum pump, liquefied gas transfer tank, and means for measuring amount of material entering the mixer. The method uses gelling agents such as silicon dioxide, clay, carbon, or organic or inorganic polymers, as well as dopants such as titanium, aluminum, and boron powders. The apparatus and method are particularly useful for the production of high quality rocket fuels and propellants

    The Frictionless Data Package : data containerization for addressing big data challenges [poster]

    Presented at AGU Ocean Sciences, 11 - 16 February 2018, Portland, OR. At the Biological and Chemical Oceanography Data Management Office (BCO-DMO), Big Data challenges have been steadily increasing. The sizes of data submissions have grown as instrumentation improves. Complex data types can sometimes be stored across different repositories. This signals a paradigm shift where data and information that is meant to be tightly-coupled and has traditionally been stored under the same roof is now distributed across repositories and data stores. For domain-specific repositories like BCO-DMO, a new mechanism for assembling data, metadata and supporting documentation is needed. Traditionally, data repositories have relied on a human's involvement throughout discovery and access workflows. This human could assess fitness for purpose by reading loosely coupled, unstructured information from web pages and documentation. Distributed storage was something that could be communicated in text that a human could read and understand. However, as machines play larger roles in the process of discovery and access of data, distributed resources must be described and packaged in ways that fit into machine-automated workflows of discovery and access for assessing fitness for purpose by the end-user. Once machines have recommended a data resource as relevant to an investigator's needs, the data should be easy to integrate into that investigator's toolkits for analysis and visualization. BCO-DMO is exploring the idea of data containerization, or packaging data and related information for easier transport, interpretation, and use. Data containerization reduces friction not only for data repositories trying to describe complex data resources, but also for end-users trying to access data with their own toolkits.
In researching the landscape of data containerization, we found that the Frictionless Data Package (http://frictionlessdata.io/) provides a number of valuable advantages over similar solutions. This presentation will focus on these advantages and on how the Frictionless Data Package addresses a number of real-world use cases in data discovery, access, analysis and visualization in the age of Big Data. NSF #1435578, NSF #163971
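The containerization idea above centres on a `datapackage.json` descriptor that travels with the data files and declares their structure. A minimal sketch, following the published Frictionless Data specification (`name`, `resources`, `path`, `schema.fields`); the dataset name, file path, and field names are illustrative, not BCO-DMO's actual holdings:

```python
import json

# A minimal Data Package descriptor (datapackage.json); names and
# paths below are hypothetical examples for an ocean time series.
descriptor = {
    "name": "example-timeseries",
    "title": "Example ocean time-series data",
    "resources": [
        {
            "name": "ctd-casts",
            "path": "data/ctd_casts.csv",
            "format": "csv",
            "schema": {
                "fields": [
                    {"name": "timestamp", "type": "datetime"},
                    {"name": "depth_m", "type": "number"},
                    {"name": "temperature_c", "type": "number"},
                ]
            },
        }
    ],
}

# Serialize the descriptor so it can be shipped alongside the data files.
package_json = json.dumps(descriptor, indent=2)
```

Because the schema is machine-readable, a downstream toolkit can validate, type-cast, and load the resource without a human reading loosely coupled documentation, which is exactly the workflow shift the abstract describes.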