Proxy dynamic delegation in grid gateway
Nowadays one of the main obstacles research comes up against is the
difficulty of accessing the required computational resources. The Grid can
offer the user a wide set of resources, but they are often too hard for
non-expert end users to exploit. Simplification of use has today become
common practice in the access and utilization of Cloud, Grid, and data
center resources. With the launch of the L-GRID gateway, we introduced a new
way to deal with Grid portals. L-GRID is an extremely light portal developed
to access the EGI Grid infrastructure via the Web, allowing users to submit
their jobs from any Web browser in a few minutes, without any knowledge of
the underlying Grid infrastructure.
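The job-submission simplification described above can be illustrated with a small sketch: a light portal of this kind typically turns a handful of web-form fields into the job description the Grid middleware expects. The function below is purely illustrative, assuming a gLite-style JDL target; it is not L-GRID's actual API.

```python
# Hypothetical sketch of the translation a light-weight Grid portal
# performs: simple web-form fields in, a gLite-style JDL document out.
# All names here are illustrative, not L-GRID's real interface.

def form_to_jdl(executable, arguments="", input_files=None):
    """Build a minimal gLite-style JDL document from web-form fields."""
    input_files = input_files or []
    lines = [
        'Type = "Job";',
        f'Executable = "{executable}";',
    ]
    if arguments:
        lines.append(f'Arguments = "{arguments}";')
    if input_files:
        sandbox = ", ".join(f'"{f}"' for f in input_files)
        lines.append(f"InputSandbox = {{{sandbox}}};")
    lines.append('StdOutput = "std.out";')
    lines.append('StdError = "std.err";')
    return "[\n  " + "\n  ".join(lines) + "\n]"
```

The point of such a gateway is that the user never sees the generated JDL or the submission machinery behind it.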
Tool support for security-oriented virtual research collaborations
Collaboration is at the heart of e-Science and e-Research
more generally. Successful collaborations must address both
the needs of the end user researchers and the providers
that make resources available. Usability and security are
two fundamental requirements that are demanded by many
collaborations and both concerns must be considered from
both the researcher and resource provider perspective. In
this paper we outline tools and methods developed at the
National e-Science Centre (NeSC) that provide users with
seamless, secure access to distributed resources through
security-oriented research environments, whilst also allowing resource providers to define and enforce their own local access and usage policies through intuitive user interfaces. We describe these tools and illustrate their application in the ESRC-funded Data Management through e-Social Science (DAMES) and the JISC-funded SeeGEO projects.
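The provider-side policy enforcement mentioned above can be sketched as a simple role-based check: each provider declares which collaboration roles may perform which actions on its resources. The policy structure and names below are hypothetical, not the NeSC tools' actual interface.

```python
# Illustrative sketch of a local access policy enforced by a resource
# provider: roles granted by the collaboration are checked against a
# per-resource, per-action allow list. Names are hypothetical.

def is_authorized(policy, user_roles, resource, action):
    """Return True if any of the user's roles grants `action` on `resource`."""
    allowed_roles = policy.get(resource, {}).get(action, set())
    return bool(set(user_roles) & allowed_roles)

# Hypothetical policy a provider in a DAMES-like collaboration might set
dames_policy = {
    "curation-service": {"read": {"researcher", "curator"},
                         "write": {"curator"}},
}
```

The design point is that the policy lives with, and is edited by, the resource provider, while the research environment only asks the yes/no question.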
Cloud Storage Performance and Security Analysis with Hadoop and GridFTP
Even though cloud servers have been around for a few years, most web hosts today have not yet moved to the cloud. If the purpose of a cloud server is to distribute and store files on the Internet, FTP servers predate the cloud, and an FTP server is sufficient to distribute content on the Internet. Is it therefore worth shifting from an FTP server to a cloud server? Cloud storage providers declare high durability and availability for their users, and the ability to scale up storage easily can save users a great deal of money. But do they provide higher performance and better security features? Hadoop is a very popular platform for cloud computing. It is free software under the Apache License, written in Java, and supports large-scale data processing in a distributed environment. Characteristics of Hadoop include partitioning of data, computing across thousands of hosts, and executing application computations in parallel. The Hadoop Distributed File System (HDFS) allows rapid data transfer at scales of thousands of terabytes and is capable of operating even in the case of node failure. GridFTP supports high-speed data transfer for wide-area networks; it is based on FTP and features multiple data channels for parallel transfers. This report describes the technology behind HDFS and the enhancement of Hadoop's security features with Kerberos. Based on the data transfer performance and security features of HDFS and the GridFTP server, we can decide whether we should replace the GridFTP server with HDFS. According to our experimental results, we conclude that the GridFTP server provides better throughput than HDFS, and that Kerberos has minimal impact on HDFS performance. We propose a solution in which users first authenticate with HDFS and then retrieve the file from the HDFS server to the client using GridFTP.
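One concrete feature mentioned above, GridFTP's multiple parallel data channels, can be illustrated with a toy model: a byte buffer is split into contiguous ranges and each range is moved by its own worker. This is a sketch of the idea only, using threads as stand-in channels; it is not the Globus GridFTP API.

```python
# Toy model of GridFTP-style parallel transfer: the payload is split
# into contiguous byte ranges and each range is copied over its own
# "data channel" (here: a thread). Illustrative only.
import threading

def parallel_transfer(data: bytes, channels: int = 4) -> bytes:
    dest = bytearray(len(data))
    chunk = -(-len(data) // channels)  # ceiling division

    def move(offset):
        # each channel copies one contiguous byte range into place
        dest[offset:offset + chunk] = data[offset:offset + chunk]

    threads = [threading.Thread(target=move, args=(i * chunk,))
               for i in range(channels)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return bytes(dest)
```

Because every range lands at a known offset, the channels need no coordination beyond the final join, which is what makes the striping cheap over high-latency wide-area links.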
A Credential Store for Multi-tenant Science Gateways
Science Gateways bridge multiple computational grids and clouds, acting as overlay cyberinfrastructure. Gateways have three logical tiers: a user interface tier, a resource tier, and a bridging middleware tier. Different groups may operate these tiers, which introduces three security challenges. First, the gateway middleware must manage multiple types of credentials associated with different resource providers. Second, the separation of the user interface and middleware layers means that security credentials must be securely delegated from the user interface to the middleware. Third, the same middleware may serve multiple gateways, so the middleware must correctly isolate user credentials associated with different gateways. We examine each of these three scenarios, concentrating on the requirements and implementation of the middleware layer. We propose and investigate the use of a Credential Store to solve these three security challenges.
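The third challenge above, per-gateway credential isolation, can be sketched as a store whose lookups are always scoped by the requesting gateway, so one tenant can never read tokens stored by another. The class below is a minimal illustration; the actual Credential Store described in the paper also handles delegation and multiple credential types.

```python
# Minimal sketch of multi-tenant credential isolation: entries are
# keyed by (gateway, user), so a lookup from gateway B can never reach
# a credential stored by gateway A. Names are illustrative only.

class CredentialStore:
    def __init__(self):
        self._store = {}

    def put(self, gateway, user, credential):
        """Store a credential under the owning gateway's key space."""
        self._store[(gateway, user)] = credential

    def get(self, gateway, user):
        """Retrieve a credential, scoped to the requesting gateway."""
        try:
            return self._store[(gateway, user)]
        except KeyError:
            raise PermissionError(
                f"no credential for user {user!r} in gateway {gateway!r}")
```

Keying by the tenant rather than trusting the caller to filter is what makes the isolation structural instead of a matter of middleware discipline.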
XSEDE Service Provider Software and Services Baseline v1.2
This document describes the Service Provider Software and
Services of the XSEDE Production System Baseline (version 1.2). The objective of this document is twofold: to describe the software and services which XSEDE service providers (SPs) may, and in some
cases must, deploy into their own environments; and to obtain community feedback on the suitability for purpose of these software and services, and on the manner of their description.
National Science Foundation OCI-1053575
A NeISS collaboration to develop and use e-infrastructure for large-scale social simulation
The National e-Infrastructure for Social Simulation (NeISS) project is focused on
developing e-Infrastructure to support social simulation research. Part of NeISS aims to
provide an interface for running contemporary dynamic demographic social simulation
models as developed in the GENESIS project. These GENESIS models operate at the
individual person level and are stochastic. This paper focuses on support for a simple
demographic change model that has daily time steps and is typically run for a number
of years.
A portal based Graphical User Interface (GUI) has been developed as a set
of standard portlets. One portlet is for specifying model parameters and setting a
simulation running. Another is for comparing the results of different simulation runs.
Other portlets are for monitoring submitted jobs and for interfacing with an archive of
results. A layer of programs enacted by the portlets stage data in and submit jobs to a
Grid computer which then runs a specific GENESIS model program executable. Once a
job is submitted, some details are communicated back to a job monitoring portlet. Once
the job is completed, results are stored and made available for download and further
processing. Collectively we call the system the Genesis Simulator.
Progress in the development of the Genesis Simulator was presented at the UK
e-Science All Hands Meeting in September 2011 by way of a video-based
demonstration of the GUI and an oral presentation of a working paper. Since then,
an automated framework has been developed to run simulations for a number of
years in yearly time steps. The demographic models have also been improved in a
number of ways. This paper summarises the work to date, presents some of the
latest results and considers the next steps we are planning in this work.
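The yearly driver described above can be sketched as a loop that iterates a stochastic daily step 365 times per simulated year and checkpoints the population state at each year boundary. The random birth/death model below is a stand-in for illustration only, not a GENESIS model, and all rates are invented.

```python
# Toy sketch of the yearly simulation driver: a stochastic daily
# demographic step is iterated 365 times per simulated year, and the
# population is checkpointed at each year boundary. The birth and
# death probabilities are arbitrary placeholders.
import random

def run_years(population, years, seed=0):
    """Run daily stochastic steps, returning one snapshot per year."""
    rng = random.Random(seed)  # seeded for reproducible stochastic runs
    snapshots = []
    for _ in range(years):
        for _ in range(365):  # daily time steps within one year
            births = sum(rng.random() < 0.00005 for _ in range(population))
            deaths = sum(rng.random() < 0.00003 for _ in range(population))
            population += births - deaths
        snapshots.append(population)  # yearly checkpoint
    return snapshots
```

In the Genesis Simulator the analogue of each snapshot is archived and exposed through the results portlet, so different runs can be compared after the fact.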