    A NeISS collaboration to develop and use e-infrastructure for large-scale social simulation

    The National e-Infrastructure for Social Simulation (NeISS) project is focused on developing e-Infrastructure to support social simulation research. Part of NeISS aims to provide an interface for running contemporary dynamic demographic social simulation models as developed in the GENESIS project. These GENESIS models operate at the individual person level and are stochastic. This paper focuses on support for a simple demographic change model with a daily time step, typically run for a number of years. A portal-based Graphical User Interface (GUI) has been developed as a set of standard portlets. One portlet is for specifying model parameters and setting a simulation running; another is for comparing the results of different simulation runs; others are for monitoring submitted jobs and for interfacing with an archive of results. A layer of programs enacted by the portlets stages data in and submits jobs to a Grid computer, which then runs a specific GENESIS model program executable. Once a job is submitted, some details are communicated back to a job-monitoring portlet. Once the job is completed, results are stored and made available for download and further processing. Collectively we call the system the Genesis Simulator. Progress in the development of the Genesis Simulator was presented at the UK e-Science All Hands Meeting in September 2011 by way of a video-based demonstration of the GUI and an oral presentation of a working paper. Since then, an automated framework has been developed to run simulations for a number of years in yearly time steps. The demographic models have also been improved in a number of ways. This paper summarises the work to date, presents some of the latest results and considers the next steps we are planning in this work.
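    An individual-level stochastic model with daily time steps, as described above, can be sketched in a few lines. The rates, population structure, and function names below are illustrative assumptions for the sketch, not the actual GENESIS model:

```python
import random

# Hypothetical sketch of an individual-level stochastic demographic model
# with daily time steps. The per-day rates are illustrative only.
DAILY_DEATH_RATE = 1e-5
DAILY_BIRTH_RATE = 4e-5

def step_day(ages, rng):
    """Advance the population by one day: ageing, deaths, births."""
    survivors = [a + 1 for a in ages if rng.random() > DAILY_DEATH_RATE]
    births = sum(1 for _ in ages if rng.random() < DAILY_BIRTH_RATE)
    return survivors + [0] * births  # newborns start at age 0 days

def run_years(pop_size, years, seed=0):
    """Run the daily-step model for a number of years, as in the paper's setup."""
    rng = random.Random(seed)
    ages = [rng.randrange(0, 80 * 365) for _ in range(pop_size)]
    for _ in range(years * 365):
        ages = step_day(ages, rng)
    return ages
```

    Because the model is stochastic, comparing runs (as the comparison portlet does) means comparing distributions of outcomes across seeds rather than single trajectories.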

    The OMII Software – Demonstrations and Comparisons between two different deployments for Client-Server Distributed Systems

    This paper describes the key elements of the OMII software and the scenarios in which it can be deployed to achieve distributed computing in the UK e-Science community, demonstrating two different deployments for Client-Server distributed systems. Scenarios and experiments for each deployment are described, with their advantages and disadvantages compared and analyzed. We conclude that our first deployment is more relevant for system administrators and developers, while the second is more suitable from a user's perspective, allowing users to submit jobs and check the status of hundreds of job submissions.

    SIMDAT


    Developing High Performance Computing Resources for Teaching Cluster and Grid Computing courses

    High-Performance Computing (HPC) and the ability to process large amounts of data are of paramount importance for UK business and the economy, as outlined by Rt Hon David Willetts MP at the HPC and Big Data conference in February 2014. However, there is a shortage of skills and available training in HPC to prepare and expand the workforce for HPC and Big Data research and development. Currently, HPC skills are acquired mainly by students and staff taking part in HPC-related research projects, MSc courses, and dedicated training centres such as Edinburgh University's EPCC. Few UK universities teach HPC, Cluster and Grid Computing courses at the undergraduate level. To address the issue of skills shortages in HPC, it is essential to provide teaching and training as part of both postgraduate and undergraduate courses. The design and development of such courses is challenging, since the technologies and software in the field of large-scale distributed systems such as Cluster, Cloud and Grid computing are undergoing continuous change. Students completing HPC courses should be proficient in these evolving technologies and equipped with practical and theoretical skills for future jobs in this fast-developing area. In this paper we present our experience in developing the HPC, Cluster and Grid modules, including a review of existing HPC courses offered at UK universities. The topics covered in the modules are described, as well as the coursework projects based on practical laboratory work. We conclude with an evaluation based on our experience over the last ten years in developing and delivering the HPC modules on undergraduate courses, with suggestions for future work.

    Leveraging HTC for UK eScience with very large Condor pools: demand for transforming untapped power into results

    We provide an insight into the demand from the UK eScience community for very large High-Throughput Computing resources and provide an example of such a resource in current production use: the 930-node eMinerals Condor pool at UCL. We demonstrate the significant benefits this resource has provided to UK eScientists via quickly and easily realising results throughout a range of problem areas. We demonstrate the value added by the pool to UCL IS infrastructure and provide a case for the expansion of very large Condor resources within the UK eScience Grid infrastructure. We provide examples of the technical and administrative difficulties faced when scaling up to institutional Condor pools, and propose the introduction of a UK Condor/HTC working group to co-ordinate the mid- to long-term UK eScience Condor development, deployment and support requirements, starting with the inaugural UK Condor Week in October 2004.

    Parallel detrended fluctuation analysis for fast event detection on massive PMU data

    ("(c) 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other users, including reprinting/ republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works.")Phasor measurement units (PMUs) are being rapidly deployed in power grids due to their high sampling rates and synchronized measurements. The devices high data reporting rates present major computational challenges in the requirement to process potentially massive volumes of data, in addition to new issues surrounding data storage. Fast algorithms capable of processing massive volumes of data are now required in the field of power systems. This paper presents a novel parallel detrended fluctuation analysis (PDFA) approach for fast event detection on massive volumes of PMU data, taking advantage of a cluster computing platform. The PDFA algorithm is evaluated using data from installed PMUs on the transmission system of Great Britain from the aspects of speedup, scalability, and accuracy. The speedup of the PDFA in computation is initially analyzed through Amdahl's Law. A revision to the law is then proposed, suggesting enhancements to its capability to analyze the performance gain in computation when parallelizing data intensive applications in a cluster computing environment

    Ocean Energy in Belgium - 2019


    The CEDAR Project

    We describe the plans and objectives of the CEDAR project (Combined e-Science Data Analysis Resource for High Energy Physics), newly funded by the PPARC e-Science programme in the UK. CEDAR will combine the strengths of the well-established and widely used HEPDATA database of HEP data and the innovative JetWeb data/Monte Carlo comparison facility, built on the HZTOOL package, and will exploit developing grid technology. The current status and future plans of both of these individual sub-projects within the CEDAR framework are described, showing how they will cohesively provide (a) an extensive archive of Reaction Data, (b) validation and tuning of Monte Carlo programs against these reaction data sets, and (c) a validated code repository for a wide range of HEP code, such as parton distribution functions and other calculation codes used by particle physicists. Once established, it is envisaged that CEDAR will become an important Grid tool used by LHC experimentalists in their analyses and may well serve as a model in other branches of science where there is a need to compare data and complex simulations.
    Comment: 4 pages, 4 postscript figures, uses CHEP2004.cls. Presented at Computing in High-Energy Physics (CHEP'04), Interlaken, Switzerland, 27th September - 1st October 200