
    Collisional formation of massive exomoons of super-terrestrial exoplanets

    Exomoons orbiting terrestrial or super-terrestrial exoplanets have not yet been discovered; their possible existence and properties are therefore still an unresolved question. Here we explore the collisional formation of exomoons through giant planetary impacts. We make use of smoothed particle hydrodynamics (SPH) collision simulations and survey a large phase space of terrestrial/super-terrestrial planetary collisions. We characterize the properties of such collisions, finding one rare case in which an exomoon forms through a graze&capture scenario, in addition to a few graze&merge or hit&run scenarios. Typically, however, our collisions form massive circumplanetary discs, for which we use follow-up N-body simulations in order to derive lower-limit mass estimates for the ensuing exomoons. We investigate the mass, long-term tidal stability, composition and origin of material in both the discs and the exomoons. Our giant-impact models often generate relatively iron-rich moons that form beyond the synchronous radius of the planet, and would thus tidally evolve outward with stable orbits, rather than be destroyed. Our results suggest that it is extremely difficult to collisionally form currently detectable exomoons orbiting super-terrestrial planets through single giant impacts. It might be possible to form massive, detectable exomoons through several mergers of smaller exomoons formed by multiple impacts; however, more studies are required to reach a firm conclusion. Given the current observational initiatives, the search should focus primarily on more massive planet categories. However, about a quarter of the exomoons predicted by our models are approximately Mercury-mass or more, and are much more likely to be detectable given a factor 2 improvement in the detection capability of future instruments, providing further motivation for their development.
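    The tidal-stability argument above hinges on the synchronous (corotation) radius. A standard expression for it, stated here for orientation (the symbols are not from the abstract itself):

    ```latex
    % Synchronous radius: the orbital distance at which a moon's mean motion
    % equals the planet's spin angular velocity \Omega_p (M_p = planet mass)
    a_{\mathrm{sync}} = \left( \frac{G M_p}{\Omega_p^{2}} \right)^{1/3}
    ```

    For a moon with orbital radius $a > a_{\mathrm{sync}}$, the tidal bulge it raises on the planet leads the moon, transferring spin angular momentum to the orbit and driving outward migration rather than infall and destruction.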

    The NASA Astrophysics Data System: Architecture

    The powerful discovery capabilities available in the ADS bibliographic services are possible thanks to the design of a flexible search and retrieval system based on a relational database model. Bibliographic records are stored as a corpus of structured documents containing fielded data and metadata, while discipline-specific knowledge is segregated in a set of files independent of the bibliographic data itself. The creation and management of links to both internal and external resources associated with each bibliography in the database is made possible by representing them as a set of document properties and their attributes. To improve global access to the ADS data holdings, a number of mirror sites have been created by cloning the database contents and software on a variety of hardware and software platforms. The procedures used to create and manage the database and its mirrors have been written as a set of scripts that can be run in either an interactive or unsupervised fashion. The ADS can be accessed at http://adswww.harvard.edu
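    The "links as document properties" idea can be sketched roughly as follows. This is an illustrative sketch only; the class, the field names, and the `EJOURNAL` link type layout are assumptions for this example, not the actual ADS schema:

    ```python
    # Sketch: a bibliographic record with fielded data, plus resource links
    # represented as document properties with attributes (names are assumed).
    from dataclasses import dataclass, field

    @dataclass
    class BibRecord:
        bibcode: str                                     # unique record identifier
        fields: dict = field(default_factory=dict)       # fielded data (title, authors, ...)
        properties: dict = field(default_factory=dict)   # link type -> list of attribute dicts

    def add_link(record: BibRecord, link_type: str, url: str, **attrs) -> None:
        """Attach an internal or external resource as a document property."""
        record.properties.setdefault(link_type, []).append({"url": url, **attrs})

    rec = BibRecord("1999ApJS..125..409A",
                    fields={"title": "The NASA ADS: Architecture"})
    add_link(rec, "EJOURNAL", "https://example.org/fulltext", mirror="primary")
    ```

    Keeping links in a separate property map, rather than in the bibliographic fields, is what lets mirrors and external resources change without touching the record corpus itself.
    
    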

    Crystallography Open Database (COD): an open-access collection of crystal structures and platform for world-wide collaboration

    Using an open-access distribution model, the Crystallography Open Database (COD, http://www.crystallography.net) collects all known ‘small molecule / small to medium sized unit cell’ crystal structures and makes them available freely on the Internet. As of today, the COD has aggregated ∼150 000 structures, offering basic search capabilities and the possibility to download the whole database, or parts thereof, using a variety of standard open communication protocols. A newly developed website provides capabilities for all registered users to deposit published and so far unpublished structures as personal communications or pre-publication depositions. Such a setup enables extension of the COD database by many users simultaneously. This increases the possibilities for growth of the COD database, and is a first step towards establishing a world-wide Internet-based collaborative platform dedicated to the collection and curation of structural knowledge.
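    Individual entries can be retrieved over plain HTTP. A minimal sketch, assuming the COD's public per-entry file layout (`<base>/<7-digit id>.cif`); verify the pattern against the site before relying on it:

    ```python
    # Sketch: build the download URL for the CIF file of one COD entry.
    # The URL pattern is an assumption based on the COD's public file layout.
    COD_BASE = "http://www.crystallography.net/cod"

    def cod_cif_url(cod_id: int) -> str:
        """Return the CIF download URL for a single COD entry."""
        if not (1000000 <= cod_id <= 9999999):
            raise ValueError("COD IDs are 7-digit numbers")
        return f"{COD_BASE}/{cod_id}.cif"

    url = cod_cif_url(1010064)
    ```

    The returned URL can then be fetched with any HTTP client to obtain the structure in CIF format.
    
    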

    Decentralization of multimedia content in a heterogeneous environment

    The aim of this study has been the decentralization of multimedia content in a heterogeneous environment. The environment consisted of the research networks connecting the European Organization for Nuclear Research and the Finnish University and Research Network. The European Organization for Nuclear Research produces multimedia content which can be used as study material all over the world. The Web University pilot in the European Organization for Nuclear Research has been developing a multimedia content delivery service for years. Delivering the multimedia content requires plenty of capacity from the network infrastructure. Different content can have different demands on the network. In a heterogeneous environment, like the Internet, fulfilling all the demands can be a problem. Several methods exist to improve the situation. Decentralization of the content is one of the most popular solutions. Mirroring and caching are the main methods for decentralization. Recently developed content delivery networks use both of these techniques to satisfy the demands of the content. The practical application consisted of measurements of the network connection between the multimedia server in the European Organization for Nuclear Research and the Finnish University and Research Network, and planning and building a decentralization system for the multimedia content. After the measurements, it became clear that there is no need for decentralization of the multimedia content for users that are able to utilise the Finnish University and Research Network: even at double today's usage there would be no capacity problems. However, the European Organization for Nuclear Research routes all traffic that comes from outside research networks through a gateway in the USA. This affects every connection that is made from Finland: users are not able to use the international connection offered by the Finnish University and Research Network. For these users I designed and built a simple, modular and portable decentralization system.
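    Of the two decentralization methods named above, the caching half can be sketched as a fixed-size LRU cache in front of a slow origin fetch. This is a generic illustration, not the thesis's actual system; names and sizes are made up:

    ```python
    # Sketch: LRU caching of multimedia objects fetched from a remote origin.
    from collections import OrderedDict

    class ContentCache:
        def __init__(self, capacity: int = 2):
            self.capacity = capacity
            self._store = OrderedDict()
            self.hits = self.misses = 0

        def get(self, key, fetch_from_origin):
            if key in self._store:
                self.hits += 1
                self._store.move_to_end(key)        # mark as most recently used
                return self._store[key]
            self.misses += 1
            value = fetch_from_origin(key)          # expensive origin request
            self._store[key] = value
            if len(self._store) > self.capacity:
                self._store.popitem(last=False)     # evict least recently used
            return value

    cache = ContentCache(capacity=2)
    origin = lambda k: f"video:{k}"                 # stand-in for a network fetch
    cache.get("a", origin); cache.get("b", origin); cache.get("a", origin)
    ```

    Mirroring, by contrast, replicates the whole content set ahead of demand; caching only stores what has actually been requested.
    
    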

    ANSI/NISO Z39.99-2017 ResourceSync Framework Specification

    This ResourceSync specification describes a synchronization framework for the web consisting of various capabilities that allow third-party systems to remain synchronized with a server’s evolving resources. The capabilities may be combined in a modular manner to meet local or community requirements. This specification also describes how a server should advertise the synchronization capabilities it supports and how third-party systems may discover this information. The specification repurposes the document formats defined by the Sitemap protocol and introduces extensions to them.
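    In practice, a ResourceSync document is a Sitemap with elements from the ResourceSync namespace added; a client can read the declared capability with a namespace-aware XML parser. The two namespace URIs below are from the Sitemap protocol and the ResourceSync specification, but the sample document itself is made up for illustration:

    ```python
    # Sketch: extract the capability and resource URLs from a ResourceSync
    # document (a Sitemap <urlset> extended with rs: elements).
    import xml.etree.ElementTree as ET

    NS = {
        "sm": "http://www.sitemaps.org/schemas/sitemap/0.9",
        "rs": "http://www.openarchives.org/rs/terms/",
    }

    sample = """\
    <urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9"
            xmlns:rs="http://www.openarchives.org/rs/terms/">
      <rs:md capability="resourcelist" at="2017-01-03T09:00:00Z"/>
      <url><loc>http://example.com/res1</loc></url>
    </urlset>
    """

    root = ET.fromstring(sample)
    capability = root.find("rs:md", NS).get("capability")        # e.g. "resourcelist"
    resources = [loc.text for loc in root.findall("sm:url/sm:loc", NS)]
    ```

    The same parsing pattern applies to the other capability documents (change lists, capability lists, and so on), since they all reuse the Sitemap formats.
    
    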

    Proposal for Persistent & Unique Entity Identifiers

    This proposal argues for the establishment of persistent and unique identifiers for page-level content. The page is a key conceptual entity within the HathiTrust Research Center (HTRC) framework. Volumes are composed of pages, and pages are the unit of data that the HTRC’s analytics modules consume and execute algorithms across. The need for infrastructure that supports persistent and unique identity for pages is best described by seven use cases: 1. Persistent Citability: Scholars engaging in the analysis of HTRC resources have a clear need to cite those resources in a persistent manner independent of those resources’ relative positions within other entities. 2. Point-in-time Citability: Scholars engaging in the analysis of HTRC resources have a clear need to cite resources in an unambiguous way that is persistent with respect to time. 3. Reproducibility: Scholars need methods by which the resources that they cite can be shared so that their work conforms to the norms of peer review and reproducibility of results. 4. Supporting “non-consumptive” Usage: Anonymizing page-level content by disassociating it from the volumes that it is conceptually a part of increases the difficulty of leveraging HTRC analytics modules for the direct reproduction of HathiTrust (HT) content. 5. Improved Granularity: Since many features that scholars are interested in exist at the conceptual level of a page rather than at the level of a volume, unique page-level entities expand the types of methods by which worksets can be gathered and by which analytics modules can be constructed. 6. Expanded Workset Membership: In the near future we would like to empower scholars with options for creating worksets from arbitrary resources at arbitrary levels of granularity, including constructing worksets from collections of arbitrary pages. 7. Supporting Graph Representations: Unique identifiers for page-level content facilitate the creation of more conceptually accurate and functional graph representations of the HT corpus.
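    A hypothetical identifier scheme illustrating use cases 1 and 2: an ID that names the page independently of any container and also binds it to the page's content at a moment in time via a content digest. The `htrc.page` prefix and the overall layout are inventions for this sketch, not the proposal's actual design:

    ```python
    # Sketch: a page-level identifier combining a volume ID, a zero-padded
    # page sequence number, and a truncated content hash (all layout choices
    # here are hypothetical).
    import hashlib

    def page_identifier(volume_id: str, page_seq: int, page_text: str) -> str:
        digest = hashlib.sha256(page_text.encode("utf-8")).hexdigest()[:12]
        return f"htrc.page/{volume_id}/{page_seq:08d}/{digest}"

    pid = page_identifier("mdp.39015012345678", 17, "It was the best of times...")
    ```

    Because the digest changes whenever the page text changes, two citations of the same identifier are guaranteed to refer to the same point-in-time content, which is the property use case 2 asks for.
    
    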

    Towards usable and fine-grained security for HTTPS with middleboxes

    Over the past few years, technology firms have adopted end-to-end encryption for their services while also calling for increased in-network functionality. Most firms deploy TLS and middleboxes together by performing man-in-the-middle (MITM) interception of network sessions. In practice, there are no official guidelines for performing MITM, and the various ad hoc tweaks in use often result in less secure systems. TLS was designed for exactly two parties, and introducing a third party by doing MITM breaks TLS and the security benefits it offers. With increasing debate over how to deploy middleboxes cleanly with TLS, our work surveys the literature and introduces a benchmark based on the Usability-Deployability-Security (UDS) framework for evaluating existing TLS middlebox interception proposals. Our benchmark encompasses and helps understand the current benefits, solutions and challenges in the existing approaches to incorporating TLS with middleboxes. We perform a comparative and qualitative evaluation of the schemes and summarize the results in a single table. We propose Triraksha, an alternative to the currently deployed middlebox interception models. Triraksha provides a packet inspection service for end-to-end encrypted connections while maintaining fine-grained confidentiality for endpoints. We evaluate a prototype implementation of our scheme on local and remote servers and show that the overhead in terms of latency and throughput is minimal. Our scheme is easily deployable, as only a few software additions are made at the middlebox and client end.
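    The single-table, UDS-style comparison mentioned above can be sketched as a simple property matrix. The scheme names and property values below are placeholders for illustration, not the paper's actual benchmark results or property set:

    ```python
    # Sketch: a UDS-style comparison table scoring interception schemes
    # against usability/deployability/security properties (all data invented).
    PROPERTIES = ["usable", "deployable", "secure-endpoints"]

    schemes = {
        "MITM-proxy-with-custom-CA": {"usable": True, "deployable": True,
                                      "secure-endpoints": False},
        "hypothetical-fine-grained": {"usable": True, "deployable": False,
                                      "secure-endpoints": True},
    }

    def summary_row(name: str, props: dict) -> str:
        marks = ["Y" if props[p] else "-" for p in PROPERTIES]
        return f"{name:28s} " + " ".join(marks)

    table = [summary_row(n, p) for n, p in schemes.items()]
    ```

    A matrix like this makes the trade-offs visible at a glance: conventional MITM proxies score on deployability but sacrifice endpoint security, which is the gap fine-grained proposals aim to close.
    
    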

    RRS Discovery Cruise DY054, 27 Jul - 17 Aug 2016, Reykjavik to Southampton. OSNAP 2016 mooring refurbishment cruise, Leg 2

    Cruise DY054 was the second leg of the 2016 UK OSNAP mooring refurbishment programme on RRS Discovery. Following the first leg (DY053, Cunningham et al 2016), which serviced and re-deployed US and UK moorings in the Iceland Basin and Rockall Trough, the scientific objectives of DY054 were: to service (recover and re-deploy) the 5 UK OSNAP moorings in the western Irminger Sea (the Deep Western Boundary Array, M1-M5); to service the 5 Dutch OSNAP moorings in the eastern Irminger Sea (the Irminger Current Array, IC0-IC4); to service the Dutch LOCO mooring in the central Irminger Sea; to complete a CTD/LADCP section across the Irminger Basin, from the Greenland coast to the Mid-Atlantic Ridge; to collect and freeze nutrient samples from CTD stations for later analysis by NIOZ; to deploy an Argo float for the UK Met Office; to deploy a series of OSNAP RAFOS floats in the overflow waters; to deploy a new WHOI sound source in the Maury Channel of the Iceland Basin; and to collect material for outreach programmes, including film footage, audio recordings and photography for a US OSNAP website, and material to be used in an art project. All objectives were achieved. All OSNAP moorings were previously deployed in 2014, refurbished in 2015, and will be recovered in 2018. The moorings and the CTD profiles will be used to measure the mean and variability of the surface-to-seafloor currents, and to compute the volume, heat and freshwater transport within the currents. They are part of a large international programme, OSNAP (Overturning in the Subpolar North Atlantic Programme), which has other moorings in the Labrador Sea, Iceland Basin and Rockall Trough.
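    For orientation, the volume and heat transports mentioned at the end are computed from the measured velocity field in the standard oceanographic form (the notation below is conventional, not taken from the report):

    ```latex
    % Transports across a section: v = velocity normal to the section,
    % \theta = potential temperature, \rho c_p = heat capacity per unit volume,
    % integrated over along-section distance x and depth z
    Q_{\mathrm{vol}} = \iint v \,\mathrm{d}x\,\mathrm{d}z, \qquad
    Q_{\mathrm{heat}} = \rho c_p \iint v\,\theta \,\mathrm{d}x\,\mathrm{d}z
    ```

    The moored current meters supply $v$ at the array sites, while the CTD/LADCP section constrains the density and temperature fields between them.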