GRIDCC - Providing a real-time grid for distributed instrumentation
The GRIDCC project is extending the use of Grid computing to include access to and control of distributed instrumentation. Access to the instruments will be via an interface to a Virtual Instrument Grid Service (VIGS). VIGS is a new concept, and its design and implementation, together with middleware that can provide the appropriate Quality of Service (QoS), are a key part of the GRIDCC development plan. An overall architecture for GRIDCC has been defined, and some of the application areas, which include distributed power systems, remote control of an accelerator, and the remote monitoring of a large particle physics experiment, are briefly discussed.
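As a purely illustrative aside, the sketch below shows one way a VIGS-style instrument interface with a QoS bound might be expressed in Python. Every name in it (VirtualInstrument, submit_command, qos_deadline_ms) is hypothetical; the abstract does not describe the actual VIGS API.

# Hypothetical sketch of a Virtual Instrument Grid Service (VIGS) client
# interface; names and signatures are illustrative, not the GRIDCC API.
from abc import ABC, abstractmethod
from dataclasses import dataclass

@dataclass
class CommandResult:
    status: str    # e.g. "ok", "timeout", "refused"
    payload: dict  # instrument-specific response data

class VirtualInstrument(ABC):
    """A grid service wrapping one physical instrument."""

    @abstractmethod
    def submit_command(self, name: str, args: dict,
                       qos_deadline_ms: int) -> CommandResult:
        """Run a control command; fail fast if the QoS deadline
        (the real-time bound) cannot be met."""

    @abstractmethod
    def read_monitor(self, channel: str) -> float:
        """Read a monitored value (e.g. beam current, line voltage)."""

The QoS deadline parameter reflects the abstract's emphasis on real-time control: a caller states its timing requirement up front, and the service can refuse work it cannot schedule in time.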
UC Berkeley's Cory Hall: Evaluation of Challenges and Potential Applications of Building-to-Grid Implementation
From September 2009 through June 2010, a team of researchers developed, installed, and tested instrumentation to monitor the energy flows in Cory Hall on the UC Berkeley campus, creating a Building-to-Grid testbed. The UC Berkeley team was headed by Professor David Culler and assisted by members from EnerNex, Lawrence Berkeley National Laboratory, California State University Sacramento (CSUS), and the California Institute for Energy & Environment (CIEE). While the Berkeley team mapped the load tree of the building, EnerNex researched the types of meters, submeters, monitors, and sensors to be used (Task 1). Next, the UC Berkeley team analyzed building needs and designed the network of metering components and data storage/visualization software (Task 2). After meeting with vendors in January, the UCB team procured and installed the components starting in late March (Task 3). Next, the UCB team tested and demonstrated the system (Task 4). Meanwhile, the CSUS team documented the methodology and steps necessary to implement a testbed (Task 5), and Harold Galicer developed a roadmap for the CSUS Smart Grid Center with results from the testbed (Task 5a) and evaluated the Cory Hall implementation process (Task 5b). The CSUS team also worked with local utilities to develop an approach to the energy information communication link between buildings and the utility (Task 6). The UC Berkeley team then prepared a roadmap outlining necessary technology development for Building-to-Grid and presented the results of the project in early July (Task 7). Finally, CIEE evaluated the implementation, noting challenges and potential applications of Building-to-Grid (Task 8). These deliverables are available at the i4Energy site: http://i4energy.org/
A classification of emerging and traditional grid systems
The grid has evolved in numerous distinct phases. It started in the early '90s as a model of metacomputing in which supercomputers share resources; subsequently, researchers added the ability to share data. This is usually referred to as the first-generation grid. By the late '90s, researchers had outlined the framework for second-generation grids, characterized by their use of grid middleware systems to "glue" different grid technologies together. Third-generation grids originated in the early millennium, when Web technology was combined with second-generation grids. As a result, the invisible grid, in which grid complexity is fully hidden through resource virtualization, started receiving attention. Subsequently, grid researchers identified the requirement for semantically rich knowledge grids, in which middleware technologies are more intelligent and autonomic. Recently, the necessity for grids to support and extend the ambient intelligence (AmI) vision has emerged. In AmI, humans are surrounded by computing technologies that are unobtrusively embedded in their surroundings.
However, third-generation grids' current architecture doesn't meet the requirements of next-generation grids (NGG) and service-oriented knowledge utility (SOKU) [4]. A few years ago, a group of independent experts, arranged by the European Commission, identified these shortcomings as a way to identify potential European grid research priorities for 2010 and beyond. The experts envision grid systems' information, knowledge, and processing capabilities as a set of utility services [3]. Consequently, new grid systems are emerging to materialize these visions. Here, we review emerging grids and classify them to motivate further research and help establish a solid foundation in this rapidly evolving area.
funcX: A Federated Function Serving Fabric for Science
Exploding data volumes and velocities, new computational methods and platforms, and ubiquitous connectivity demand new approaches to computation in the sciences. These new approaches must enable computation to be mobile, so that, for example, it can occur near data, be triggered by events (e.g., arrival of new data), be offloaded to specialized accelerators, or run remotely where resources are available. They also require new design approaches in which monolithic applications can be decomposed into smaller components that may in turn be executed separately and on the most suitable resources. To address these needs we present funcX, a distributed function-as-a-service (FaaS) platform that enables flexible, scalable, and high-performance remote function execution. funcX's endpoint software can transform existing clouds, clusters, and supercomputers into function-serving systems, while funcX's cloud-hosted service provides transparent, secure, and reliable function execution across a federated ecosystem of endpoints. We motivate the need for funcX with several scientific case studies, present our prototype design and implementation, show optimizations that deliver throughput in excess of 1 million functions per second, and demonstrate, via experiments on two supercomputers, that funcX can scale to more than 130,000 concurrent workers.
Accepted to the ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC 2020).
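As a brief illustration of the workflow this abstract describes, the sketch below registers a function once and then runs it on a remote endpoint, modeled on the funcX Python SDK's FuncXClient. The endpoint UUID is a placeholder, and the exact SDK surface may differ between funcX releases.

# Sketch of client-side use of a federated FaaS service, modeled on the
# funcX Python SDK; the endpoint UUID is a placeholder.
import time
from funcx.sdk.client import FuncXClient

def matrix_trace(n):
    """Toy workload executed remotely on the chosen endpoint."""
    import numpy as np  # imports live inside the function body,
    return float(np.trace(np.random.rand(n, n)))  # since it runs remotely

fxc = FuncXClient()
func_id = fxc.register_function(matrix_trace)

# Any endpoint in the federation (laptop, cluster, supercomputer) can run it.
endpoint_id = "00000000-0000-0000-0000-000000000000"  # placeholder UUID
task_id = fxc.run(1024, endpoint_id=endpoint_id, function_id=func_id)

while True:  # poll the cloud-hosted service until the result arrives
    try:
        print(fxc.get_result(task_id))
        break
    except Exception:  # task still pending on the endpoint
        time.sleep(2)

The point of the pattern is the separation the abstract describes: the function is registered with the cloud-hosted service once, and the choice of where it executes is deferred to run time.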
SciTokens: Capability-Based Secure Access to Remote Scientific Data
The management of security credentials (e.g., passwords, secret keys) for computational science workflows is a burden for scientists and information security officers. Problems with credentials (e.g., expiration, privilege mismatch) cause workflows to fail to fetch needed input data or store valuable scientific results, distracting scientists from their research by requiring them to diagnose the problems, re-run their computations, and wait longer for their results. In this paper, we introduce SciTokens, open source software to help scientists manage their security credentials more reliably and securely. We describe the SciTokens system architecture, design, and implementation, addressing use cases from the Laser Interferometer Gravitational-Wave Observatory (LIGO) Scientific Collaboration and the Large Synoptic Survey Telescope (LSST) projects. We also present our integration with widely used software that supports distributed scientific computing, including HTCondor, CVMFS, and XrootD. SciTokens uses IETF-standard OAuth tokens for capability-based secure access to remote scientific data. The access tokens convey the specific authorizations needed by the workflows, rather than general-purpose authentication impersonation credentials, to address the risks of scientific workflows running on distributed infrastructure including NSF resources (e.g., LIGO Data Grid, Open Science Grid, XSEDE) and public clouds (e.g., Amazon Web Services, Google Cloud, Microsoft Azure). By improving the interoperability and security of scientific workflows, SciTokens 1) enables use of distributed computing for scientific domains that require greater data protection and 2) enables use of more widely distributed computing resources by reducing the risk of credential abuse on remote systems.
8 pages, 6 figures. PEARC '18: Practice and Experience in Advanced Research Computing, July 22–26, 2018, Pittsburgh, PA, USA.
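To make the capability-versus-impersonation distinction concrete, here is a minimal sketch of scoped-token validation using PyJWT. It illustrates the JWT/OAuth pattern SciTokens builds on rather than the SciTokens library's own API; the issuer URL, scope string, and claim layout are assumptions for illustration.

# Illustrative capability-token check using PyJWT; this demonstrates the
# scoped-token pattern, not the SciTokens library's actual API.
import jwt  # pip install PyJWT

def authorizes(token: str, public_key: str, issuer: str,
               required_scope: str) -> bool:
    """Return True if the token grants one specific capability, e.g.
    'read:/ligo/frames', instead of blanket impersonation rights."""
    try:
        claims = jwt.decode(token, public_key, algorithms=["RS256"],
                            issuer=issuer)
    except jwt.InvalidTokenError:
        return False  # expired, bad signature, wrong issuer, ...
    granted = claims.get("scope", "").split()  # space-separated scopes
    return required_scope in granted

# A storage service would gate each request on the token's scope, e.g.:
# authorizes(tok, key, "https://token.issuer.example", "read:/ligo/frames")

Because the token names exactly what the workflow may do, a stolen or leaked token on a remote worker exposes only that narrow capability, which is the risk reduction the abstract claims.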
CERN openlab Whitepaper on Future IT Challenges in Scientific Research
This whitepaper describes the major IT challenges in scientific research at CERN and several other European and international research laboratories and projects. Each challenge is exemplified through a set of concrete use cases drawn from the requirements of large-scale scientific programs. The paper is based on contributions from many researchers and IT experts of the participating laboratories, as well as input from the existing CERN openlab industrial sponsors. The views expressed in this document are those of the individual contributors and do not necessarily reflect the views of their organisations and/or affiliates.
e-Science Infrastructure for the Social Sciences
When the term "e-Science" became popular, it frequently was referred to as "enhanced science" or "electronic science". More telling is the definition "e-Science is about global collaboration in key areas of science and the next generation of infrastructure that will enable it" (Taylor, 2001). The question arises: to what extent can the social sciences profit from recent developments in e-Science infrastructure? While computing, storage, and network capacities so far were sufficient to accommodate and access social science databases, new capacities and technologies support new types of research, e.g. linking and analysing transactional or audio-visual data. Increasingly, collaborative working by researchers in distributed networks is efficiently supported, and new resources are available for e-learning. Whether these new developments become transformative or just helpful will very much depend on whether their full potential is recognized and creatively integrated into new research designs by theoretically innovative scientists. Progress in e-Science was very much linked to the vision of the Grid as "a software infrastructure that enables flexible, secure, coordinated resource sharing among dynamic collections of individuals, institutions and resources" and virtually unlimited computing capacities (Foster et al. 2000). In the social sciences there has been considerable progress in using modern IT technologies for multilingual access to virtual distributed research databases across Europe and beyond (e.g. NESSTAR, CESSDA Portal), data portals for access to statistical offices, and for linking access to data, literature, project, expert, and other databases (e.g. Digital Libraries, VASCODA/SOWIPORT). Whether future developments will need Grid-enabling of social science databases or can be further developed using Web 2.0 support is currently an open question. The challenges here are seamless integration and interoperability of databases, a requirement that is also stipulated by internationalisation and trans-disciplinary research. This goes along with the need for standards and harmonisation of data and metadata. Progress powered by e-infrastructure is, among other things, dependent on regulatory frameworks and on human capital well trained in both data science and research methods. It is also dependent on a sufficient critical mass of institutional infrastructure to efficiently support a dynamic research community that wants to "take the lead without catching up".