    Global Grids and Software Toolkits: A Study of Four Grid Middleware Technologies

    Grid is an infrastructure that involves the integrated and collaborative use of computers, networks, databases and scientific instruments owned and managed by multiple organizations. Grid applications often involve large amounts of data and/or computing resources that require secure resource sharing across organizational boundaries, which makes Grid application management and deployment a complex undertaking. Grid middleware provides users with seamless computing ability and uniform access to resources in the heterogeneous Grid environment. Several software toolkits and systems, most of them the results of academic research projects, have been developed all over the world. This chapter focuses on four of these middleware technologies: UNICORE, Globus, Legion and Gridbus. It also presents our implementation of a resource broker for UNICORE, as this functionality was not originally supported. A comparison of these systems on the basis of their architecture, implementation model and several other features is included. Comment: 19 pages, 10 figures

    Data management of nanometre-scale CMOS device simulations

    In this paper we discuss the problems arising in managing and curating the data generated by simulations of nanometre-scale CMOS (Complementary Metal–Oxide Semiconductor) transistors, circuits and systems, and describe the software and operational techniques we have adopted to address them. Such simulations pose a number of challenges including, inter alia, multi-TByte data volumes, complex datasets with intricate inter-relations, multi-institutional collaborations spanning multiple specialisms and a mixture of academic and industrial partners, and demanding security requirements driven by commercial imperatives. This work was undertaken as part of the NanoCMOS project. However, the problems, solutions and experience seem likely to be of wider relevance, both within the CMOS design community and more generally in other disciplines.

    Leveraging Public Knowledge Project's Open Conference Systems for Digital Scholarship

    The Media History Exchange (MHX) is an archive, social network, conference management tool, and collaborative workspace for the international, interdisciplinary community of researchers studying the history of journalism and communication. It opens a new scholarly space between the academic conference and the peer-reviewed journal by archiving “born digital” conference papers and abstracts that frequently have not been preserved before. In the spring of 2017, MHX migrated to the Public Knowledge Project’s Open Conference Systems. If your library is interested in expanding its digital scholarship offerings to include conference support, or offers its own library-focused conference, this technology might be exactly what you need. Co-author: Elliot King, Ph.D. (Loyola University Maryland).

    The Locus Algorithm III: A Grid Computing system to generate catalogues of optimised pointings for Differential Photometry

    This paper discusses the hardware and software components of the Grid Computing system used to implement the Locus Algorithm to identify optimum pointings for differential photometry of 61,662,376 stars and 23,799 quasars. The scale of the data, together with initial operational assessments, demanded a High Performance Computing (HPC) system to complete the data analysis, and Grid computing was chosen as the optimum HPC solution available within this project. The physical and logical structure of the National Grid computing Infrastructure informed the approach taken: a layered separation of the different project components to enable maximum flexibility and extensibility.
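
    As an aside on how such a layered, grid-based decomposition can work: the per-target pointing optimisation is naturally parallel, since each star or quasar can be processed independently. The C sketch below illustrates splitting the catalogue into fixed-size chunks, each of which would become one independent grid job. The chunk size and the job-record format are illustrative assumptions, not details taken from the paper.

        #include <stdio.h>

        /* Illustrative sketch only: split a large target catalogue into
         * fixed-size chunks, one grid job per chunk. The chunk size and
         * record format are assumptions, not values from the paper. */

        #define TOTAL_TARGETS 61662376L   /* star count quoted in the abstract */
        #define CHUNK_SIZE      100000L   /* hypothetical targets per grid job */

        int main(void)
        {
            long n_jobs = (TOTAL_TARGETS + CHUNK_SIZE - 1) / CHUNK_SIZE;
            for (long job = 0; job < n_jobs; job++) {
                long first = job * CHUNK_SIZE;
                long last = first + CHUNK_SIZE - 1;
                if (last >= TOTAL_TARGETS)
                    last = TOTAL_TARGETS - 1;
                /* In a real system each line would become one job-submission record. */
                printf("job %ld: targets %ld..%ld\n", job, first, last);
            }
            return 0;
        }

    Because the jobs share no state, they can be scheduled independently across the grid and their output catalogues merged afterwards.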

    MEDIN Feasibility Study: archiving oil and gas industry site survey data

    This report was commissioned by the Marine Environmental and Information Network (MEDIN) to investigate the feasibility of collecting oil and gas industry site surveys conducted on the UKCS (UK Continental Shelf) for archive in the MEDIN DAC (Data Archive Centre) network. The archiving of three principal data types is explored: information about legacy site surveys; catalogues of information about data products associated with site surveys; and actual site survey data, which may include a survey report and enclosures and/or a selection of data, e.g. side-scan or multibeam data, sample descriptions and seismic profiles. The merits of collecting these data types are explored alongside the cost implications, from both an oil and gas industry contractor’s and a marine geoscientist’s perspective, enabling MEDIN to better understand and decide which data to concentrate on. The principles and proposed procedures for collecting these data types are outlined; however, their practical details will require agreement should any decision be made to proceed. At that stage, a further thorough and detailed scope will be required in order to formulate procedures, qualify numbers, define activities, identify resources and plan timescales. The time period for the collection of legacy site surveys will require consideration, i.e. how far back it is feasible to collect this information, and whether requests should be phased to include surveys acquired within predetermined time intervals. The size of the actual site survey data holdings, the storage capacity required to archive them, and the amount of work involved in processing this data into usable and useful formats will also require review. Some of these issues may need to be considered on a case-by-case basis, and the procedures themselves will require regular review depending on the response, i.e. the volume, types and condition of data received.

    HotGrid: Graduated Access to Grid-based Science Gateways

    We describe the idea of a Science Gateway, an application-specific task wrapped as a web service, and some examples of these that are being implemented on the US TeraGrid cyberinfrastructure. We also describe HotGrid, a means of providing simple, immediate access to the Grid through one of these gateways, which we hope will broaden the use of the Grid by drawing in a wide community of users. The secondary purpose of HotGrid is to acclimate a science community to the concepts of certificate use. Our system provides these weakly authenticated users with immediate power to use Grid resources for science, but without the dangerous power of running arbitrary code. We describe the implementation of these Science Gateways with the Clarens secure web server.
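
    The central restriction in HotGrid’s model is that weakly authenticated users may invoke only predefined tasks, never arbitrary code. The C sketch below shows one way a task-whitelist dispatcher of this kind could look; the task names, binary paths and use of system() are hypothetical illustrations, not details of the Clarens or HotGrid implementations.

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        /* Hypothetical gateway dispatcher: map a small set of named science
         * tasks to fixed commands, so users can trigger useful work but can
         * never supply code of their own. */

        struct task {
            const char *name;
            const char *command;
        };

        static const struct task tasks[] = {
            { "mosaic",   "/opt/gateway/bin/run_mosaic"   },  /* hypothetical paths */
            { "spectrum", "/opt/gateway/bin/run_spectrum" },
        };

        int main(int argc, char *argv[])
        {
            if (argc != 2) {
                fprintf(stderr, "usage: %s <task>\n", argv[0]);
                return 1;
            }
            for (size_t i = 0; i < sizeof tasks / sizeof tasks[0]; i++) {
                if (strcmp(argv[1], tasks[i].name) == 0)
                    return system(tasks[i].command) == 0 ? 0 : 1;  /* fixed command only */
            }
            fprintf(stderr, "rejected: '%s' is not a whitelisted task\n", argv[1]);
            return 1;
        }

    The whitelist is the security boundary: the gateway decides what runs, and the user’s weak credential only selects among those options.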

    Secure, performance-oriented data management for nanoCMOS electronics

    The EPSRC pilot project Meeting the Design Challenges of nanoCMOS Electronics (nanoCMOS) is focused upon delivering a production-level e-Infrastructure to meet the challenges facing the semiconductor industry in dealing with the next generation of ‘atomic-scale’ transistor devices. At this scale, previous assumptions on the uniformity of transistor devices in electronic circuit and systems design are no longer valid, and the industry as a whole must deal with variability throughout the design process. Infrastructures tackling this problem must provide seamless access to very large HPC resources for the computationally expensive simulation of statistical ensembles of microscopically varying physical devices, and must manage the many hundreds of thousands of files and associated metadata these simulations produce. A key challenge in undertaking this is protecting the intellectual property associated with the data, simulations and design process as a whole. In this paper we present the nanoCMOS infrastructure and outline an evaluation of the Storage Resource Broker (SRB) and the Andrew File System (AFS), considering in particular the extent to which they meet the performance and security requirements of the nanoCMOS domain. We also describe how metadata management is supported and linked to simulations and results in a scalable and secure manner.

    nsroot: Minimalist Process Isolation Tool Implemented With Linux Namespaces

    Data analyses in the life sciences are moving from tools run on a personal computer to services run on large computing platforms. This creates a need to package tools and dependencies for easy installation, configuration and deployment on distributed platforms. In addition, secure execution on a shared platform requires process isolation. Existing virtual machine and container technologies are often more complex than traditional Unix utilities like chroot, and often require root privileges to set up or use. This is especially challenging on HPC systems, where users typically do not have root access. We therefore present nsroot, a lightweight process isolation tool based on Linux namespaces. It restricts the runtime environment of data analysis tools that may not have been designed with security as a top priority, reducing the risk and consequences of security breaches without requiring any special privileges. The codebase of nsroot is small, and it provides a command-line interface similar to chroot. It can be used on all Linux kernels that implement user namespaces. In addition, we propose combining nsroot with the AppImage format for secure execution of packaged applications. nsroot is open source and available at: https://github.com/uit-no/nsroot
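
    nsroot’s approach rests on unprivileged Linux user namespaces: once a process maps its own UID to root inside a new user namespace, it gains the capabilities needed to chroot without real root privileges. The C sketch below illustrates that underlying mechanism under those assumptions; it is not nsroot’s actual source code.

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sched.h>
        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <unistd.h>

        /* Minimal sketch of the mechanism nsroot builds on: create new user
         * and mount namespaces, map our UID/GID to root inside them, then
         * chroot into a target root filesystem, all without real root. */

        static void write_file(const char *path, const char *text)
        {
            int fd = open(path, O_WRONLY);
            if (fd < 0 || write(fd, text, strlen(text)) < 0) {
                perror(path);
                exit(1);
            }
            close(fd);
        }

        int main(int argc, char *argv[])
        {
            char buf[64];
            unsigned uid = (unsigned)getuid();
            unsigned gid = (unsigned)getgid();

            if (argc < 3) {
                fprintf(stderr, "usage: %s <rootfs> <cmd> [args...]\n", argv[0]);
                return 1;
            }
            /* Works without root on kernels with unprivileged user namespaces. */
            if (unshare(CLONE_NEWUSER | CLONE_NEWNS) < 0) {
                perror("unshare");
                return 1;
            }
            /* setgroups must be denied before an unprivileged gid_map write. */
            write_file("/proc/self/setgroups", "deny");
            snprintf(buf, sizeof buf, "0 %u 1", gid);
            write_file("/proc/self/gid_map", buf);
            snprintf(buf, sizeof buf, "0 %u 1", uid);
            write_file("/proc/self/uid_map", buf);

            /* We now hold full capabilities inside the namespace, so chroot
             * is permitted. */
            if (chroot(argv[1]) < 0 || chdir("/") < 0) {
                perror("chroot");
                return 1;
            }
            execvp(argv[2], &argv[2]);
            perror("execvp");
            return 1;
        }

    A real tool such as nsroot adds bind-mount handling and hardening around this core sequence; the sketch only shows why no special privileges are needed.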

    CCBS – a method to maintain memorability, accuracy of password submission and the effective password space in click-based visual passwords

    Text passwords are vulnerable to many security attacks, in part because of the insecure practices of end users who select weak passwords to ease long-term recall. Visual password (VP) solutions were developed to maintain both the security and the usability of user authentication in collaborative systems. This paper focuses on the challenges facing click-based visual password systems and proposes a novel method in response to them. One challenge is hotspots, which reveal a serious vulnerability: they occur because users are attracted to specific parts of an image and neglect other areas, so image analysis that identifies these high-probability areas can assist dictionary attacks. Another concern is that click-based systems do not guide users towards the correct click-point they are aiming to select. Users might recall the correct spot or area but still fail to place their click within the tolerance distance around the original click-point, which results in more incorrect password submissions. A third issue is long-term retention: the PassPoints study by Wiedenbeck et al. (2005) examined the retention of their VP in comparison with text passwords over the long term. Despite being a cued-recall scheme, its successful submission rate was not superior to that of text passwords, decreasing from 85% (the instant retention on the day of registration) to 55% after 2 weeks, identical to the text-password result in the same experiment. The successful submission rates after 6 weeks were also 55% for both VP and text passwords. This paper addresses these issues and presents a novel method (CCBS) as a usable solution supported by empirical proof. A user study is conducted and the results are evaluated against a comparative study.
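
    To make the tolerance-distance issue concrete, the C sketch below shows the acceptance test common to click-based schemes such as PassPoints: a submitted click is accepted only if it lies within a fixed radius of the registered click-point. The 10-pixel radius and the example coordinates are illustrative assumptions, not values from the paper.

        #include <math.h>
        #include <stdbool.h>
        #include <stdio.h>

        /* A click is accepted if its Euclidean distance from the registered
         * click-point is within the tolerance. The radius is illustrative. */

        #define TOLERANCE_PX 10.0

        static bool click_accepted(double orig_x, double orig_y,
                                   double sub_x, double sub_y)
        {
            double dx = sub_x - orig_x;
            double dy = sub_y - orig_y;
            return sqrt(dx * dx + dy * dy) <= TOLERANCE_PX;
        }

        int main(void)
        {
            /* Recalling roughly the right area is not enough: about 9.2 px
             * away is accepted, but 13 px away is rejected. */
            printf("%s\n", click_accepted(120, 85, 126, 92) ? "accepted" : "rejected");
            printf("%s\n", click_accepted(120, 85, 132, 90) ? "accepted" : "rejected");
            return 0;
        }

    Enlarging the tolerance makes submissions more forgiving but shrinks the effective password space, which is precisely the trade-off the CCBS method, per its title, aims to manage.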