700 research outputs found
Guest Editors' Introduction: Cloud Computing
The guest editors discuss this special issue on cloud computing, exploring how cloud platforms and abstractions can be effectively used to support real-world science and engineering applications
Workshop proceedings: Information Systems for Space Astrophysics in the 21st Century, volume 1
The Astrophysical Information Systems Workshop was one of the three Integrated Technology Planning workshops. Its objectives were to develop an understanding of future mission requirements for information systems, the potential role of technology in meeting these requirements, and the areas in which NASA investment might have the greatest impact. Workshop participants were briefed on the astrophysical mission set with an emphasis on those missions that drive information systems technology, the existing NASA space-science operations infrastructure, and the ongoing and planned NASA information systems technology programs. Program plans and recommendations were prepared in five technical areas: Mission Planning and Operations; Space-Borne Data Processing; Space-to-Earth Communications; Science Data Systems; and Data Analysis, Integration, and Visualization
Extreme-scale visual analytics
The September/October 2004 CG&A introduced the term visual analytics (VA) to the computer science literature.1 In 2005, an international advisory panel with representatives from academia, industry, and government defined VA as "the science of analytical reasoning facilitated by interactive visual interfaces."2 VA has grown rapidly into a vibrant R&D community offering data analytics and exploration solutions to both scientific and nonscientific problems in diverse domains and platforms. This special issue further examines advances related to extreme-scale VA problems, their analytical and computational challenges, and their real-world applications
On the Super-computational Background of the Research Centre JĂĽlich
KFA Jülich is one of the largest big-science research centres in Europe; its scientific and engineering activities range from fundamental research to applied science and technology. KFA's Central Institute for Applied Mathematics (ZAM) runs the large-scale computing facilities and network systems at KFA and provides communication services as well as general-purpose and supercomputer capacity, also for the HLRZ ("Höchstleistungsrechenzentrum"), established in 1987 to further enhance and promote computational science in Germany. Thus, at KFA - and in particular through ZAM - supercomputing has had high priority for more than ten years. What particle accelerators are to experimental physics, supercomputers are to Computational Science and Engineering: supercomputers are the accelerators of theory
Can apparent bystanders distinctively shape an outcome? Global south countries and global catastrophic risk-focused governance of artificial intelligence
Increasingly, there is well-grounded concern that, through the perpetual scaling-up of computation and data, current deep learning techniques will create highly capable artificial intelligence that could pursue goals in a manner not aligned with human values. Such AI could, in turn, lead to a scenario involving serious global-scale damage to human wellbeing. Against this backdrop, a number of researchers and public policy professionals have been developing ideas about how to govern AI in a manner that reduces the chances that it could lead to a global catastrophe. The jurisdictional focus of the vast majority of their assessments so far has been the United States, China, and Europe. That preference seems to reveal an assumption underlying most of the work in this field: that global south countries can play only a marginal role in attempts to govern AI development from a global catastrophic risk-focused perspective. Our paper sets out to undermine this assumption. We argue that global south countries like India and Singapore (and specific coalitions) could in fact be fairly consequential in the global catastrophic risk-focused governance of AI. We support our position with four key claims. Three are constructed from the current ways in which advanced foundational AI models are built and used, while one rests on the strategic roles that global south countries and coalitions have historically played in the design and use of multilateral rules and institutions. As each claim is elaborated, we also suggest ways in which global south countries can play a positive role in designing, strengthening, and operationalizing global catastrophic risk-focused AI governance
Computer animation for virtual humans.
Advances in computer animation techniques have spurred increasing levels of realism and movement in virtual characters that closely mimic physical reality. Increases in computational power and control methods enable the creation of 3D virtual humans for real-time interactive applications. Artificial intelligence techniques and autonomous agents give computer-generated characters a life of their own and let them interact with other characters in virtual worlds. Developments and advances in networking and virtual reality (VR) let multiple participants share virtual worlds and interact with applications or each other
Overview of Digital Library Components and Developments
Digital libraries, the high-end information systems of the future, are being built upon a firm foundation of prior work. A component architecture approach is becoming popular, with well-established support for key components such as the repository, especially through the Open Archives Initiative. We consider digital objects, metadata, harvesting, indexing, searching, browsing, rights management, linking, and powerful interfaces. Flexible interaction will be possible through a variety of architectures, using buses, agents, and other technologies. The field as a whole is undergoing rapid growth, supported by advances in storage, processing, networking, algorithms, and interaction. There are many initiatives and developments, including those supporting education, and these will certainly be of benefit in Latin America
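The harvesting component mentioned in this entry is commonly realized through the Open Archives Initiative's Protocol for Metadata Harvesting (OAI-PMH), in which a repository answers XML responses carrying Dublin Core records. A minimal sketch of parsing such a response follows; the sample XML and record contents are invented for illustration, and a real harvester would fetch the response over HTTP from a repository's OAI-PMH endpoint:

```python
# Sketch: extract Dublin Core title/creator fields from an OAI-PMH
# ListRecords response. The sample below is a made-up illustration.
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

SAMPLE_RESPONSE = """<?xml version="1.0"?>
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <ListRecords>
    <record>
      <metadata>
        <dc xmlns="http://purl.org/dc/elements/1.1/">
          <title>Overview of Digital Library Components</title>
          <creator>Example Author</creator>
        </dc>
      </metadata>
    </record>
  </ListRecords>
</OAI-PMH>"""

def harvest_records(xml_text):
    """Return (titles, creators) pairs for each harvested record."""
    root = ET.fromstring(xml_text)
    records = []
    for record in root.iter(OAI + "record"):
        titles = [e.text for e in record.iter(DC + "title")]
        creators = [e.text for e in record.iter(DC + "creator")]
        records.append((titles, creators))
    return records
```

A service provider would loop over such responses, following the protocol's resumption tokens, and feed the extracted metadata into the indexing and searching components the entry describes.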
Progress Towards Petascale Applications in Biology: Status in 2006
Petascale computing is currently a common topic of discussion in the high performance computing community. Biological applications, particularly protein folding, are often given as examples of the need for petascale computing. There are at present biological applications that scale to execution rates of approximately 55 teraflops on a special-purpose supercomputer and 2.2 teraflops on a general-purpose supercomputer. In comparison, Qbox, a molecular dynamics code used to model metals, has an achieved performance of 207.3 teraflops. It may be useful to increase the extent to which operation rates and total calculations are reported in discussions of biological applications, and to use total operations (integer and floating point combined) rather than (or in addition to) floating point operations as the unit of measure. Increased reporting of such metrics will enable better tracking of progress as the research community strives for the insights that will be enabled by petascale computing.
This research was supported in part by the Indiana Genomics Initiative and the Indiana Metabolomics and Cytomics Initiative. The Indiana Genomics Initiative of Indiana University and the Indiana Metabolomics and Cytomics Initiative of Indiana University are supported in part by Lilly Endowment, Inc. The authors also wish to thank IBM, Inc. for support via Shared University Research Grants and partnerships via IU's relationship as an IBM Life Sciences Institute of Innovation. Indiana University also thanks the TeraGrid partners; IU's participation in the TeraGrid is funded by National Science Foundation grant numbers 0338618, 0504075, and 0451237. The early development of this paper was supported by a Fulbright Senior Scholars award from the Council for International Exchange of Scholars (CIES) and the United States Department of State to Dr. Craig A. Stewart; Matthias Mueller and the Technische Universität Dresden were hosts.
Many reviewers contributed to the improvement of the ideas expressed in this paper and are gratefully acknowledged; Thom Dunning, Robert Germain, Chris Mueller, Jim Phillips, Richard Repasky, Ralph Roskies, and Allan Snavely are thanked particularly for their insights
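The combined-operations metric this entry proposes - counting integer and floating-point operations together rather than flops alone - can be sketched in a few lines. The operation counts below are invented for illustration, not taken from the paper:

```python
# Sketch of the suggested reporting convention: express an
# application's achieved rate as total operations per second
# (integer + floating point combined), in tera-ops/s.

def total_ops_rate(float_ops, int_ops, seconds):
    """Combined operation rate in tera-operations per second."""
    return (float_ops + int_ops) / seconds / 1e12

# Hypothetical run: 3e15 floating-point ops plus 1e15 integer ops
# completed in 2000 seconds -> 2.0 tera-ops/s.
rate = total_ops_rate(3e15, 1e15, 2000)
```

Reported this way, codes that do substantial integer work (common in sequence analysis and other biological applications) are not undercounted by a flops-only figure.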
Cloud-efficient modelling and simulation of magnetic nano materials
Scientific simulations are rarely attempted in a cloud due to the substantial
performance costs of virtualization. Considerable communication overheads,
intolerable latencies, and inefficient hardware emulation are the main reasons why
this emerging technology has not been fully exploited. On the other hand, the
progress of computing infrastructure nowadays is strongly dependent on
perspective storage medium development, where efficient micromagnetic
simulations play a vital role in future memory design.
This thesis addresses both these topics by merging micromagnetic simulations
with the latest OpenStack cloud implementation while providing a time and costeffective alternative to expensive computing centers.
However, many challenges have to be addressed before a high-performance cloud
platform emerges as a solution for problems in micromagnetic research
communities. First, the best solver candidate has to be selected and further
improved, particularly in the parallelization and process communication domain.
Second, a 3-level cloud communication hierarchy needs to be recognized and
each segment adequately addressed. The required steps include breaking the VMisolation for the host’s shared memory activation, cloud network-stack tuning,
optimization, and efficient communication hardware integration.
The project work concludes with practical measurements and confirmation of
successfully implemented simulation into an open-source cloud environment. It is
achieved that the renewed Magpar solver runs for the first time in the OpenStack
cloud by using ivshmem for shared memory communication. Also, extensive
measurements proved the effectiveness of our solutions, yielding from sixty
percent to over ten times better results than those achieved in the standard cloud.Aufgrund der erheblichen Leistungskosten der Virtualisierung werden
wissenschaftliche Simulationen in einer Cloud selten versucht. Beträchtlicher
Kommunikationsaufwand, erhebliche Latenzen und ineffiziente
Hardwareemulation sind die HauptgrĂĽnde, warum diese aufkommende
Technologie nicht vollständig genutzt wurde. Andererseits hängt der Fortschritt der
Computertechnologie heutzutage stark von der Entwicklung perspektivischer
Speichermedien ab, bei denen effiziente mikromagnetische Simulationen eine
wichtige Rolle fĂĽr die zukĂĽnftige Speichertechnologie spielen.
Diese Arbeit befasst sich mit diesen beiden Themen, indem mikromagnetische
Simulationen mit der neuesten OpenStack Cloud-Implementierung
zusammengefĂĽhrt werden, um eine zeit- und kostengĂĽnstige Alternative zu teuren
Rechenzentren bereitzustellen.
Viele Herausforderungen mĂĽssen jedoch angegangen werden, bevor eine
leistungsstarke Cloud-Plattform als Lösung für Probleme in mikromagnetischen
Forschungsgemeinschaften entsteht. Zunächst muss der beste Kandidat für die
Lösung ausgewählt und weiter verbessert werden, insbesondere im Bereich der
Parallelisierung und Prozesskommunikation. Zweitens muss eine 3-stufige CloudKommunikationshierarchie erkannt und jedes Segment angemessen adressiert
werden. Die erforderlichen Schritte umfassen das Aufheben der VM-Isolation, um
den gemeinsam genutzten Speicher zwischen Cloud-Instanzen zu aktivieren, die
Optimierung des Cloud-Netzwerkstapels und die effiziente Integration von
Kommunikationshardware.
Die praktische Arbeit endet mit Messungen und der Bestätigung einer erfolgreich
implementierten Simulation in einer Open-Source Cloud-Umgebung. Als Ergebnis
haben wir erreicht, dass der neu erstellte Magpar-Solver zum ersten Mal in der
OpenStack Cloud ausgefĂĽhrt wird, indem ivshmem fĂĽr die Shared-Memory
Kommunikation verwendet wird. Umfangreiche Messungen haben auch die
Wirksamkeit unserer Lösungen bewiesen und von sechzig Prozent bis zu zehnmal
besseren Ergebnissen als in der Standard Cloud gefĂĽhrt
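The ivshmem device used in this thesis exposes a host shared-memory region to co-located guest VMs, so instances can exchange data without traversing the virtualized network stack. The cross-process principle can be sketched with ordinary POSIX-style shared memory; here two OS processes on one host stand in for two cloud instances, and the names and sizes are illustrative only, not taken from the thesis:

```python
# Conceptual sketch of shared-memory message passing, the kind of
# channel ivshmem provides between co-located VMs. Two processes on
# one host stand in for two cloud instances.
from multiprocessing import Process, shared_memory

def writer(name):
    # "Instance B": attach to the existing region and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

def read_message():
    # "Instance A": create a small shared region, let the other
    # process fill it, then read the bytes back with no socket
    # or pipe in between.
    shm = shared_memory.SharedMemory(create=True, size=16)
    try:
        p = Process(target=writer, args=(shm.name,))
        p.start()
        p.join()
        return bytes(shm.buf[:5])
    finally:
        shm.close()
        shm.unlink()

if __name__ == "__main__":
    msg = read_message()
```

The payoff the thesis measures comes from exactly this shortcut: once VM isolation is broken for the shared region, solver processes on the same host exchange data at memory speed instead of through the emulated network path.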