
    SciTokens: Capability-Based Secure Access to Remote Scientific Data

    The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation, addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems. (8 pages, 6 figures; PEARC '18: Practice and Experience in Advanced Research Computing, July 22–26, 2018, Pittsburgh, PA, USA)
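
    To make the capability model concrete, here is a minimal sketch of minting and checking a scope-carrying token with the PyJWT library. The claim names ("sub", "iss", "exp", "scope") follow the OAuth/JWT conventions SciTokens builds on, but the secret key, issuer URL, paths, and helper functions are illustrative placeholders, not the SciTokens implementation.

    import time
    import jwt  # PyJWT: pip install pyjwt

    SECRET = "demo-secret"  # real deployments use asymmetric keys (e.g., RS256)

    def mint_token(subject, scope, issuer, lifetime=600):
        """Mint a short-lived token carrying only the needed capabilities."""
        now = int(time.time())
        claims = {
            "sub": subject,          # who the token acts for
            "iss": issuer,           # who minted it
            "iat": now,
            "exp": now + lifetime,   # short lifetime limits abuse if leaked
            "scope": scope,          # capabilities, e.g. "read:/ligo/frames"
        }
        return jwt.encode(claims, SECRET, algorithm="HS256")

    def authorize(token, needed, issuer):
        """Grant access only if the token carries the required capability."""
        claims = jwt.decode(token, SECRET, algorithms=["HS256"], issuer=issuer)
        return needed in claims.get("scope", "").split()

    tok = mint_token("alice", "read:/ligo/frames write:/ligo/results",
                     issuer="https://issuer.example.org")
    print(authorize(tok, "read:/ligo/frames", "https://issuer.example.org"))  # True
    print(authorize(tok, "write:/lsst/raw", "https://issuer.example.org"))    # False

    Because the token names exactly what a job may read or write, a leaked token exposes far less than a leaked password or long-lived impersonation credential.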

    Temporal Structuring of Web Content for Adaptive Reduction in Obsolescence Delivery over Internet

    With the increased penetration of the Internet into civil society and the popularity of web services, websites are now regarded as web-based information systems [1]. Smooth online services have multiplied the flow of information; at the same time, the lack of appropriate technology to detect content obsolescence [2] has equally multiplied the flow of obsolete or stale information, which in turn leads to litigation among stakeholders. Since web content is transferred between two nodes over the Internet as HTML documents [3], adequate temporal structuring of HTML documents would enable delivery platforms [4] to detect and filter obsolescence in the content being delivered automatically. Although this can be achieved with server-side scripts [5], that route is difficult for naïve users because server-side scripting demands advanced web-programming skill. The present paper addresses this problem by proposing an extended attribute set for HTML tags [6] that beginners can use when authoring web pages. Two new attributes are introduced in a very simple format: 'pubDate', the date the content was published, and 'expDate', the date it expires. Together they define the age of web content, i.e., its life span, at authoring time [2]. The paper demonstrates detection and filtering of obsolete content from the delivery of static HTML documents by the proposed content-delivery framework and highlights further benefits of the framework. DOI: 10.17762/ijritcc2321-8169.15020
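
    A minimal sketch of how a delivery platform might honor these attributes, assuming well-formed markup; the 'pubDate'/'expDate' names come from the paper, but the filtering code itself is an illustrative guess, not the authors' framework:

    from datetime import date
    import xml.etree.ElementTree as ET

    SNIPPET = """<div>
      <p pubDate="2014-01-10" expDate="2014-06-30">Admissions close on 30 June 2014.</p>
      <p pubDate="2015-02-01">Our campus address is 12 Example Road.</p>
    </div>"""

    def filter_obsolete(fragment, today):
        """Drop any element whose expDate has passed; deliver everything else."""
        root = ET.fromstring(fragment)  # assumes well-formed (XHTML-style) markup
        for parent in list(root.iter()):
            for child in list(parent):
                exp = child.get("expDate")
                if exp and date.fromisoformat(exp) < today:
                    parent.remove(child)  # expired content is filtered out
        return ET.tostring(root, encoding="unicode")

    # On 2015-03-01 the admissions notice has expired and is dropped, while
    # the paragraph with no expiry date is delivered unchanged.
    print(filter_obsolete(SNIPPET, date(2015, 3, 1)))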

    TOFEC: Achieving Optimal Throughput-Delay Trade-off of Cloud Storage Using Erasure Codes

    Our paper presents solutions that combine erasure coding, parallel connections to cloud storage, and limited chunking (i.e., dividing an object into a few smaller segments) to significantly improve the delay performance of uploading data to and downloading data from cloud storage. TOFEC is a strategy that helps a front-end proxy adapt to the level of workload by treating scalable cloud storage (e.g., Amazon S3) as a shared resource requiring admission control. Under light workloads, TOFEC creates more, smaller chunks and uses more parallel connections per file, minimizing service delay. Under heavy workloads, TOFEC automatically reduces the level of chunking (fewer chunks of larger size) and uses fewer parallel connections to reduce overhead, resulting in higher throughput and preventing queueing delay. Our trace-driven simulation results show that TOFEC's adaptation mechanism converges to an appropriate code that provides the optimal delay-throughput trade-off without reducing system capacity. Compared to a non-adaptive strategy optimized for throughput, TOFEC delivers 2.5x lower latency under light workloads; compared to a non-adaptive strategy optimized for latency, TOFEC can scale to support over 3x as many requests.
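
    The adaptation rule can be pictured with a small sketch; the thresholds and the backlog-to-chunking mapping below are invented for illustration, whereas the paper derives its policy from analysis of the measured workload:

    def choose_chunking(backlog, max_chunks=8):
        """Map the proxy's backlog to (chunks per file, parallel connections).

        Light load  -> many small chunks, many connections (minimize delay).
        Heavy load  -> fewer, larger chunks, fewer connections (keep throughput).
        """
        if backlog <= 2:        # light workload: chase minimum service delay
            k = max_chunks
        elif backlog <= 8:      # moderate workload: back off to limit overhead
            k = max_chunks // 2
        else:                   # heavy workload: avoid queueing delay
            k = 1
        return k, k             # one connection per chunk in this sketch

    for backlog in (0, 5, 20):
        print(backlog, choose_chunking(backlog))

    With an (n, k) erasure code the proxy can additionally issue n > k chunk requests and finish a transfer as soon as the fastest k complete, which is where the delay gain under light load comes from.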

    Optimal Cache Allocation Policies in Competitive Content Distribution Networks

    Exponential growth in network size and user traffic has created substantial congestion on the Internet, increasing the delays users perceive while downloading web pages. Users have little patience: if they do not start receiving information within a short while, they stop browsing the requested web page. As the commercial value of the Internet has grown, keeping users at a site has come to translate directly into business value. Proxy caching can alleviate the problems caused by increased user traffic. In this paper, we consider the effects of real-world non-cooperative behavior of the network agents (servers and proxy caches) on overall network performance. Specifically, we consider a system in which the proxy caches sell their caching space to the servers, and the servers invest in these caches to provide lower latency to their users, keeping them browsing their web pages and in turn increasing revenue. We determine the optimal strategies of the agents that maximize their benefits, and we show that such a system has an equilibrium point at which no agent can increase its benefit by unilaterally updating its strategy. We show that under certain conditions this equilibrium leads to optimal cache allocation, and that an algorithm derived from this analysis is superior to currently implemented caching algorithms.
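
    The equilibrium notion can be illustrated with a toy best-response iteration, assuming a simple proportional-sharing payoff; the utility function, prices, and valuations below are invented for illustration and are far simpler than the paper's model:

    import numpy as np

    def best_response(i, bids, cache_size, value, price, grid):
        """Server i's best bid for cache space, holding the other bids fixed."""
        others = bids.sum() - bids[i]
        # Space is shared in proportion to bids; benefit is concave in space won.
        space = cache_size * grid / (grid + others + 1e-9)
        utility = value[i] * np.log1p(space) - price * grid
        return grid[np.argmax(utility)]

    def find_equilibrium(value, cache_size=100.0, price=0.05, iters=200):
        bids = np.ones_like(value)
        grid = np.linspace(0.0, 50.0, 501)  # candidate bid levels
        for _ in range(iters):
            old = bids.copy()
            for i in range(len(bids)):
                bids[i] = best_response(i, bids, cache_size, value, price, grid)
            if np.allclose(bids, old, atol=1e-6):
                break  # no server gains by unilaterally deviating: equilibrium
        return bids

    print(find_equilibrium(np.array([1.0, 2.0, 3.0])))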