
    The Potential and Problems in using High Performance Computing in the Arts and Humanities: the Researching e-Science Analysis of Census Holdings (ReACH) Project

    Get PDF
    e-Science and high performance computing (HPC) have the potential to allow large datasets to be searched and analysed quickly, efficiently, and in complex and novel ways. The processing power of grid technologies has so far seen little application to humanities data, owing to the scarcity of available large-scale datasets and to limited understanding of, and access to, e-Science technologies. The Researching e-Science Analysis of Census Holdings (ReACH) scoping study, an AHRC-funded e-Science workshop series, was established to investigate the potential application of grid computing to a large dataset of interest to historians, humanists, digital consumers, and the general public: historical census records. Consisting of three one-day workshops held at UCL in Summer 2006, the series brought together expertise across different domains to ascertain how useful, possible, or feasible it would be to analyse datasets from Ancestry and The National Archives using the HPC facilities available at UCL. This article details the academic, technical, managerial, and legal issues highlighted in the project when attempting to apply HPC to historical datasets. Additionally, generic issues facing humanities researchers attempting to utilise HPC technologies in their research are presented.

    Cloud Storage and Bioinformatics in a private cloud deployment: Lessons for Data Intensive research

    No full text
    This paper describes service portability for a private cloud deployment, including a detailed case study of Cloud Storage and bioinformatics services developed as part of the Cloud Computing Adoption Framework (CCAF). Our Cloud Storage design and deployment is based on Storage Area Network (SAN) technologies, and we detail its functionalities, technical implementation, architecture and user support. Experiments on data services (backup automation, data recovery and data migration) were performed, and the results confirm that backup automation completes swiftly and is reliable for data-intensive research. The data recovery results confirm that execution time is proportional to the quantity of recovered data, while the failure rate increases exponentially. The data migration results likewise confirm that execution time is proportional to the disk volume of migrated data, with the failure rate again increasing exponentially. In addition, the benefits of CCAF are illustrated through several bioinformatics examples, such as tumour modelling, brain imaging, insulin molecules and simulations for medical training. The Cloud Storage solution described here offers cost reduction, time savings and user friendliness.
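    The reported behaviour of the data services (execution time roughly linear in data volume, failure rate growing exponentially) can be illustrated with a small back-of-the-envelope model. The sketch below is not taken from the CCAF experiments; the throughput and failure-rate constants are hypothetical, chosen only to show how such a relationship could be used to plan recovery or migration windows.

```python
# Illustrative sketch only: the paper reports that recovery/migration time grows
# linearly with data volume while the failure rate grows exponentially. The constants
# below (THROUGHPUT_GB_PER_MIN, BASE_FAILURE_RATE, GROWTH) are hypothetical, not
# figures from the CCAF experiments.
import math

THROUGHPUT_GB_PER_MIN = 2.5    # assumed linear-rate constant
BASE_FAILURE_RATE = 0.001      # assumed failure probability for a very small job
GROWTH = 0.004                 # assumed exponential growth factor per GB

def estimated_recovery_minutes(volume_gb: float) -> float:
    """Execution time modelled as proportional to the recovered volume."""
    return volume_gb / THROUGHPUT_GB_PER_MIN

def estimated_failure_rate(volume_gb: float) -> float:
    """Failure rate modelled as increasing exponentially with volume, capped at 1."""
    return min(1.0, BASE_FAILURE_RATE * math.exp(GROWTH * volume_gb))

if __name__ == "__main__":
    for gb in (10, 100, 500, 1000):
        print(f"{gb:>5} GB: ~{estimated_recovery_minutes(gb):7.1f} min, "
              f"failure risk ~{estimated_failure_rate(gb):.3f}")
```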

    Cost modelling for cloud computing utilisation in long term digital preservation

    Get PDF
    The rapid increase in the volume of digital information can cause concern among organisations regarding the manageability, costs and security of their information in the long term. Because cloud computing technology is often used for digital preservation and is still evolving, its long-term costs are difficult to determine. This paper presents the development of a generic cost model for public and private cloud utilisation in long-term digital preservation (LTDP), considering the impact of uncertainties and obsolescence issues. The cost model consists of rules and assumptions and was built using a combination of activity-based and parametric cost estimation techniques. After cost breakdown structures were generated for both cloud types, uncertainties and obsolescence were categorised. To quantify the impact of uncertainties on cost, the three-point estimate technique was employed and Monte Carlo simulation was applied to generate a probability distribution for each cost driver. A decision-support cost estimation tool with a dashboard representation of results was developed.
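    As a rough illustration of the estimation approach described above (three-point estimates per cost driver combined via Monte Carlo simulation), the following minimal sketch samples a triangular distribution for a few hypothetical LTDP cost drivers. The driver names and figures are invented for illustration and are not taken from the paper.

```python
# Minimal sketch of three-point estimation plus Monte Carlo simulation: each cost
# driver gets an (optimistic, most likely, pessimistic) estimate, and repeated
# sampling produces a probability distribution for the total annual LTDP cost.
# Driver names and values are hypothetical.
import random

cost_drivers = {
    "storage_per_year":     (8_000, 12_000, 20_000),
    "ingest_and_migration": (5_000, 9_000, 18_000),
    "staff_and_support":    (15_000, 22_000, 30_000),
}

def sample_driver(optimistic: float, most_likely: float, pessimistic: float) -> float:
    # A triangular distribution is a common way to turn a three-point estimate
    # into a sampling distribution.
    return random.triangular(optimistic, pessimistic, most_likely)

def simulate_total_cost(n_runs: int = 10_000) -> list[float]:
    totals = []
    for _ in range(n_runs):
        totals.append(sum(sample_driver(*pts) for pts in cost_drivers.values()))
    return sorted(totals)

if __name__ == "__main__":
    totals = simulate_total_cost()
    p50 = totals[len(totals) // 2]
    p90 = totals[int(len(totals) * 0.9)]
    print(f"Median annual cost estimate: {p50:,.0f}; 90th percentile: {p90:,.0f}")
```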

    Digital curation and the cloud

    Get PDF
    Digital curation involves a wide range of activities, many of which could benefit from cloud deployment to a greater or lesser extent. These range from infrequent, resource-intensive tasks, which benefit from the ability to rapidly provision resources, to day-to-day collaborative activities, which can be facilitated by networked cloud services. The associated benefits are offset by risks such as loss of data or service level, legal and governance incompatibilities, and transfer bottlenecks. Both risks and benefits vary considerably with the service and deployment models adopted and the context in which activities are performed. Some risks, such as legal liabilities, are mitigated by the use of alternative models, e.g. private clouds, but typically at the expense of benefits such as resource elasticity and economies of scale. The Infrastructure as a Service model may provide a basis on which more specialised software services can be offered. Considerable work remains to be done in helping institutions understand the cloud and its associated costs, risks and benefits, and how these compare with their current working methods, so that the most beneficial uses of cloud technologies can be identified. Specific proposals, echoing recent work coordinated by EPSRC and JISC, are the development of advisory, costing and brokering services to facilitate appropriate cloud deployments; the exploration of opportunities for certifying or accrediting cloud preservation providers; and the targeted publicity of outputs from pilot studies to the full range of stakeholders within the curation lifecycle, including data creators and owners, repositories, institutional IT support professionals and senior managers.

    High performance photonic reservoir computer based on a coherently driven passive cavity

    Full text link
    Reservoir computing is a recent bio-inspired approach to processing time-dependent signals. It has enabled a breakthrough in analog information processing, with several experiments, both electronic and optical, demonstrating state-of-the-art performance on hard tasks such as speech recognition, time-series prediction and nonlinear channel equalization. A proof-of-principle experiment using a linear optical circuit on a photonic chip to process digital signals was recently reported. Here we present a photonic implementation of a reservoir computer based on a coherently driven passive fiber cavity processing analog signals. Our experiment achieves error rates as low as or lower than those of previous experiments on a wide variety of tasks, while consuming less power. Furthermore, the analytical model describing our experiment is of interest in its own right, as it constitutes a very simple, high-performance reservoir computing algorithm. Given its good performance, low energy consumption and conceptual simplicity, the present experiment confirms the great potential of photonic reservoir computing for information processing applications ranging from artificial intelligence to telecommunications.
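    For readers unfamiliar with the underlying algorithm, the following is a minimal software sketch of generic reservoir computing (an echo-state-style network with a fixed random reservoir and a trained linear readout), not of the authors' photonic cavity; the reservoir size, spectral radius and the toy sine-prediction task are illustrative assumptions.

```python
# Minimal reservoir-computing sketch: a fixed random recurrent "reservoir" is driven
# by the input, and only a linear readout is trained (here by ridge regression).
# This is a generic software illustration, NOT the photonic system described above.
import numpy as np

rng = np.random.default_rng(0)

N_RES = 100              # reservoir size (assumed)
SPECTRAL_RADIUS = 0.9    # assumed scaling of the recurrent weights

# Random, fixed reservoir and input weights.
W = rng.standard_normal((N_RES, N_RES))
W *= SPECTRAL_RADIUS / np.max(np.abs(np.linalg.eigvals(W)))
W_in = rng.standard_normal(N_RES)

def run_reservoir(u: np.ndarray) -> np.ndarray:
    """Drive the reservoir with input sequence u and collect its states."""
    x = np.zeros(N_RES)
    states = np.empty((len(u), N_RES))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + W_in * u_t)
        states[t] = x
    return states

# Toy task: predict the next sample of a sine wave.
t = np.arange(2000)
u = np.sin(0.1 * t)
X = run_reservoir(u[:-1])
y = u[1:]

# Train only the linear readout, as in standard reservoir computing.
ridge = 1e-6
W_out = np.linalg.solve(X.T @ X + ridge * np.eye(N_RES), X.T @ y)
pred = X @ W_out
print("Training NMSE:", np.mean((pred - y) ** 2) / np.var(y))
```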

    Four Decades of Computing in Subnuclear Physics - from Bubble Chamber to LHC

    Full text link
    This manuscript addresses selected aspects of computing for the reconstruction and simulation of particle interactions in subnuclear physics. Based on personal experience with experiments at DESY and at CERN, I cover the evolution of computing hardware and software from the era of track chambers, where interactions were recorded on photographic film, up to the LHC experiments with their millions of electronic channels.

    Sector skills insights: digital and creative

    Get PDF

    Innovative public governance through cloud computing: Information privacy, business models and performance measurement challenges

    Get PDF
    Purpose: The purpose of this paper is to identify and analyze challenges, and to discuss proposed solutions, for innovative public governance through cloud computing. Innovative technologies, such as federation of services and cloud computing, can contribute greatly to the provision of e-government services through scalable and flexible systems. Furthermore, they can help reduce costs and overcome public information segmentation. Nonetheless, when public agencies use these technologies, they encounter several associated organizational and technical changes, as well as significant challenges. Design/methodology/approach: We followed a multidisciplinary perspective (social, behavioral, business and technical) and conducted a conceptual analysis of the associated challenges. We conducted focus group interviews in two countries to evaluate the performance models that resulted from the conceptual analysis. Findings: This study identifies and analyzes several challenges that may emerge while adopting innovative technologies for public governance and e-government services. Furthermore, it presents suggested solutions derived from the experience of designing a related platform for public governance, including privacy requirements, proposed business models and key performance indicators for public services on cloud computing. Research limitations/implications: The challenges and solutions discussed are based on the experience gained in designing one platform; however, we draw on issues and challenges collected from four countries. Practical implications: The identification of challenges for the innovative design of e-government services through a central portal in Europe, using service federation, is expected to inform practitioners in different roles about the significant changes implied across multiple levels, and may accelerate the resolution of these challenges. Originality/value: This is the first study to discuss, from multiple perspectives and through empirical investigation, the challenges of realizing public governance through innovative technologies. The results emerge from an actual portal that will function at a European level. © Emerald Group Publishing Limited