733 research outputs found

    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    Full text link
    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely-used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
    Comment: 8 pages, 6 figures. PEARC '18: Practice and Experience in Advanced Research Computing, July 22–26, 2018, Pittsburgh, PA, USA.
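    The capability model described above can be illustrated with a minimal, self-contained sketch: the token carries a scope claim naming exactly the operations a workflow needs, and the storage service checks that claim rather than the bearer's identity. The signing key, claim names, and scope strings below are illustrative assumptions for the sketch; real SciTokens are JWTs signed with asymmetric keys and validated against published issuer metadata.

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-signing-key"  # illustrative; real SciTokens use asymmetric issuer keys


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as in JWT compact serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def issue_token(subject: str, scope: str, lifetime: int = 600) -> str:
    """Issue a compact JWT-style token carrying a capability scope claim."""
    header = {"alg": "HS256", "typ": "JWT"}
    payload = {"sub": subject, "scope": scope, "exp": int(time.time()) + lifetime}
    signing_input = f"{b64url(json.dumps(header).encode())}.{b64url(json.dumps(payload).encode())}"
    sig = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"


def authorize(token: str, required_scope: str) -> bool:
    """Check signature and expiry, then verify the scope covers the request."""
    signing_input, _, sig_b64 = token.rpartition(".")
    expected = hmac.new(SECRET, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(b64url(expected), sig_b64):
        return False  # tampered or foreign token
    payload_b64 = signing_input.split(".")[1]
    pad = "=" * (-len(payload_b64) % 4)
    payload = json.loads(base64.urlsafe_b64decode(payload_b64 + pad))
    if payload["exp"] < time.time():
        return False  # expired: short lifetimes limit the damage of leaks
    return required_scope in payload["scope"].split()


tok = issue_token("ligo-workflow", "read:/ligo/frames write:/ligo/results")
print(authorize(tok, "read:/ligo/frames"))  # True: capability covered
print(authorize(tok, "write:/lsst/raw"))    # False: outside the granted scope
```

    A job holding this token can read LIGO frame data but cannot write to an LSST path, so a compromised job leaks only a narrow, short-lived capability rather than a general-purpose credential.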

    BlogForever D3.2: Interoperability Prospects

    Get PDF
    This report evaluates the interoperability prospects of the BlogForever platform. To this end, existing interoperability models are reviewed; a Delphi study is conducted to identify crucial aspects of interoperability between web archives and digital libraries; technical interoperability standards and protocols are reviewed for their relevance to BlogForever; a simple approach to considering interoperability in specific usage scenarios is proposed; and a tangible approach to developing a succession plan that would allow a reliable transfer of content from the current digital archive to other digital repositories is presented.

    Metajournals. A federalist proposal for scholarly communication and data aggregation

    Get PDF
    While the EU is building an open access infrastructure of archives (e.g., OpenAIRE) and is trying to implement it in the Horizon 2020 program, the gap between the tools and the human beings – researchers, citizen scientists, students, ordinary people – is still wide. The necessity of dictating open access publishing as a mandate for EU-funded research – ten years after the BOAI – is an obvious symptom of it: there is a chasm between the net and the public use of reason. To accelerate the advancement and reuse of research, we should federate the multitude of already existing open access journals into federal open overlay journals that receive their contents from the member journals and boost them with their aggregation power and semantic web tools. The article contains both the theoretical basis and the guidelines for a project whose goals are: 1. making open access journals visible, highly cited and powerful, by federating them into wide disciplinary overlay journals; 2. avoiding the traps of the “authors pay” open access business model, by exploiting one of the virtues of federalism: the federated journals can remain small and affordable if they gain visibility from the power of the federal overlay journal aggregating them; 3. enriching the overlay journals both through semantic annotation tools and by means of open platforms dedicated to hosting ex post peer review and expert comments; 4. making the selection and evaluation processes and their resulting data as public and open as possible, to avoid the pitfalls (e.g., the serials price crisis) experienced by the closed access publishing model. It is about time to free academic publishing from its expensive walled gardens and to put to the test the tools that can help us transform it into one open forest, with one hundred flowers – and one hundred trailblazers.
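    The federal overlay model sketched above can be made concrete in a few lines: an overlay journal holds no articles itself but aggregates the metadata records its member journals expose (in practice via a harvesting protocol such as OAI-PMH). The record fields and journal names below are hypothetical placeholders, not part of the article's proposal.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Record:
    """Minimal stand-in for a harvested journal metadata record."""
    doi: str
    title: str
    journal: str


def aggregate(member_feeds: dict) -> list:
    """Merge member-journal feeds into one overlay view, deduplicating by DOI."""
    seen, overlay = set(), []
    for journal, records in member_feeds.items():
        for rec in records:
            if rec.doi not in seen:  # keep the first copy of each article
                seen.add(rec.doi)
                overlay.append(rec)
    return overlay


# Two hypothetical member journals; one article is exposed by both.
feeds = {
    "J-Phil-A": [Record("10.1/abc", "Open peer review at scale", "J-Phil-A")],
    "J-Phil-B": [Record("10.1/xyz", "Semantic annotation for overlays", "J-Phil-B"),
                 Record("10.1/abc", "Open peer review at scale", "J-Phil-B")],
}
print(len(aggregate(feeds)))  # 2 unique records across member journals
```

    The member journals stay small; the overlay gains visibility purely from the union of their records, which is the aggregation-power argument of the proposal.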

    Building an Open-Source Archive for Born-Digital Dissertations

    No full text
    This proposal for a Level I Digital Humanities Start-Up Grant would support an interdisciplinary workshop aimed at identifying the issues, opportunities and requirements for developing an open-source system into which born-digital dissertations (e.g., interactive webtexts, software, games, etc.) can be deposited and maintained, and through which they can be accessed and cross-referenced. The workshop will build upon the framework set up by the Networked Digital Library of Theses and Dissertations (NDLTD) and the United States Electronic Thesis and Dissertation Association (USETDA), which support the creation and dissemination of digital dissertations but, despite best efforts, do not currently offer a comprehensive, central repository or index of born-digital dissertations such as exists for print (e.g., ProQuest). One of the primary goals of this workshop will be to develop a plan for building such a tool, as well as to identify a project advisory board.

    Enabling Lightweight Video Annotation and Presentation for Cultural Heritage

    Get PDF
    Collaboration-intensive research is increasingly becoming the norm in the humanities and social sciences. eResearch tools such as online repositories offer researchers the opportunity to access and interact with data online. For the last 20 years, video has formed an important part of humanities research, although dealing with multimedia in an online setting has proven difficult with existing tools. File size limitations, lack of interoperability with existing security systems, and the inability to include rich supporting detail about files have hampered the use of video. This paper describes a collaboration and data management solution for video and other files using a combination of existing tools (SRB and Plone integrated with Shibboleth) and a custom application for video upload and annotation (Mattotea). Rather than creating new proprietary systems, this development has examined the reuse of existing technologies with the addition of custom extensions to provide full-featured access to research data.

    A Distributed Software Platform for Additive Manufacturing

    Get PDF
    Additive Manufacturing (AM), a cornerstone of Industry 4.0, is expected to revolutionise production in practically all industries. However, multiple production challenges still exist, hindering its wider adoption. In recent years, Machine Learning algorithms have been employed to overcome these hurdles. Nonetheless, the usage of these algorithms is constrained by the scarcity of data, together with the challenges of accessing and integrating the information generated along the AM pipeline. In this work, we present a vendor-agnostic platform for AM that enables collecting, storing, analysing and linking the heterogeneous data of the complete AM process. We conducted an extensive analysis of the different AM data types and identified the most suitable technologies for storing them. Furthermore, we performed an in-depth study of the requirements of different AM stakeholders to develop a rich and intuitive Graphical User Interface. We showcase the specific usage of the platform for Powder Bed Fusion, one of the most popular AM processes, in a real industrial scenario, integrating existing modules for in-situ monitoring and real-time defect detection.
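    One way to picture the "linking" role such a platform plays is a store that keys every artefact of the pipeline to a shared build identifier, so design files, in-situ monitoring streams, and defect reports can be traced together. The class, stage names, and payloads below are illustrative assumptions for the sketch, not the paper's actual schema or technology choices.

```python
from collections import defaultdict


class AMDataStore:
    """Toy store that links heterogeneous AM artefacts via a shared build ID."""

    def __init__(self):
        # build_id -> {pipeline stage -> artefact metadata}
        self._by_build = defaultdict(dict)

    def record(self, build_id, stage, payload):
        """Attach one pipeline artefact (e.g. design, monitoring) to a build."""
        self._by_build[build_id][stage] = payload

    def trace(self, build_id):
        """Return every pipeline stage recorded for one build."""
        return dict(self._by_build[build_id])


store = AMDataStore()
store.record("build-42", "design", {"format": "STL", "layers": 812})
store.record("build-42", "in_situ_monitoring", {"sensor": "photodiode"})
store.record("build-42", "defect_detection", {"defects_found": 0})
print(sorted(store.trace("build-42")))  # the three recorded stages for this build
```

    In the platform described, each data type would additionally live in a storage backend suited to it (files, time series, relational records); the build-keyed index is what makes the heterogeneous pieces navigable as one process history.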

    Maintenance of Enterprise Architecture Models

    Get PDF
    Enterprise architecture (EA) models are tools for analysis, communication, and support of enterprise transformation. These models need a suitable maintenance process to sustain comprehensive knowledge of the enterprise’s structure and dynamics. This study aims to identify and discuss the existing approaches to EA model maintenance published in the scientific literature. A systematic literature review was employed as the research method. A keyword-based search in six databases identified a total of 4495 papers, from which 31 primary studies were selected. A total of nine categories of EA model maintenance approaches were identified, spanning both the information systems and enterprise engineering fields of research. The growing body of research on EA model maintenance suggests that the topic still offers opportunities for research contributions. This study also proposes future lines of research based on the results identified in the theoretical corpus.
