Enabling adaptive scientific workflows via trigger detection
Next generation architectures necessitate a shift away from traditional
workflows in which the simulation state is saved at prescribed frequencies for
post-processing analysis. While the need to shift to in situ workflows has been
acknowledged for some time, much of the current research is focused on static
workflows, where the analysis that would have been done as a post-process is
performed concurrently with the simulation at user-prescribed frequencies.
More recently, research efforts have been striving to enable adaptive workflows, in which
the frequency, composition, and execution of computational and data
manipulation steps dynamically depend on the state of the simulation. Adapting
the workflow to the state of the simulation in such a data-driven fashion places
extremely strict efficiency requirements on the analysis capabilities that are
used to identify the transitions in the workflow. In this paper we build upon
earlier work on trigger detection using sublinear techniques to drive adaptive
workflows. Here we propose a methodology to detect the time when sudden heat
release occurs in simulations of turbulent combustion. Our proposed method
provides an alternative metric that can be used alongside our earlier metric to
increase the robustness of trigger detection. We show the effectiveness of our
metric empirically for predicting heat release for two use cases.Comment: arXiv admin note: substantial text overlap with arXiv:1506.0825
Diva: A Declarative and Reactive Language for In-Situ Visualization
The use of adaptive workflow management for in situ visualization and
analysis has been a growing trend in large-scale scientific simulations.
However, coordinating adaptive workflows with traditional procedural
programming languages can be difficult because system flow is determined by
unpredictable scientific phenomena, which often appear in an unknown order and
can evade event handling. This makes the implementation of adaptive workflows
tedious and error-prone. Recently, reactive and declarative programming
paradigms have been recognized as well-suited solutions to similar problems in
other domains. However, there is a dearth of research on adapting these
approaches to in situ visualization and analysis. With this paper, we present a
language design and runtime system for developing adaptive systems through a
declarative and reactive programming paradigm. We illustrate how an adaptive
workflow programming system is implemented using our approach and demonstrate
it with a use case from a combustion simulation. (To be published in LDAV 2020.)
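The reactive, declarative style the abstract describes can be illustrated with a small Python sketch. The `ReactiveRuntime` class, its `when`/`step` API, and the `heat_release` field are hypothetical names chosen for illustration, not DIVA's actual syntax; the point is that analysis actions are declared as reactions to conditions on the simulation state rather than scheduled procedurally.

```python
# Minimal sketch of the reactive idea behind declarative in situ
# workflows. Names and API are illustrative, not DIVA's actual syntax:
# actions are declared as reactions to conditions on the simulation
# state, and the runtime decides when they fire.

class ReactiveRuntime:
    def __init__(self):
        self.rules = []  # (condition, action) pairs

    def when(self, condition, action):
        """Declare: run `action(state)` whenever `condition(state)` holds."""
        self.rules.append((condition, action))

    def step(self, state):
        """Called once per simulation timestep with the current state."""
        for condition, action in self.rules:
            if condition(state):
                action(state)

# Hypothetical usage: react to a sudden heat-release event whenever it
# happens, without hand-coding when or in what order events arrive.
runtime = ReactiveRuntime()
events = []

def render_isosurface(state):
    events.append(state["t"])  # stand-in for an expensive render

runtime.when(lambda s: s["heat_release"] > 5.0, render_isosurface)
runtime.step({"t": 0, "heat_release": 1.0})  # nothing fires
runtime.step({"t": 1, "heat_release": 9.0})  # render_isosurface fires
```

Because the runtime, not the user, decides when each rule fires, phenomena that appear in an unknown order cannot evade handling the way they can in procedural event loops.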
Performance Evaluation of Serverless Applications and Infrastructures
Context. Cloud computing has become the de facto standard for deploying modern web-based software systems, which makes its performance crucial to the efficient functioning of many applications. However, the unabated growth of established cloud services, such as Infrastructure-as-a-Service (IaaS), and the emergence of new serverless services, such as Function-as-a-Service (FaaS), have led to an unprecedented diversity of cloud services with different performance characteristics. Measuring these characteristics is difficult in dynamic cloud environments due to performance variability in large-scale distributed systems with limited observability.

Objective. This thesis aims to enable reproducible performance evaluation of serverless applications and their underlying cloud infrastructure.

Method. A combination of literature review and empirical research established a consolidated view of serverless applications and their performance. New solutions were developed through engineering research and used to conduct performance benchmarking field experiments in cloud environments.

Findings. The review of 112 FaaS performance studies from academic and industrial sources found a strong focus on a single cloud platform using artificial micro-benchmarks, and discovered that most studies do not follow reproducibility principles for cloud experimentation. Characterizing 89 serverless applications revealed that they are most commonly used for short-running tasks with low data volume and bursty workloads. A novel trace-based serverless application benchmark shows that external service calls often dominate the median end-to-end latency and cause long tail latency. The latency breakdown analysis further identifies performance challenges of serverless applications, such as long delays through asynchronous function triggers, substantial runtime initialization for cold starts, increased performance variability under bursty workloads, and heavily provider-dependent performance characteristics. The evaluation of different cloud benchmarking methodologies has shown that only selected micro-benchmarks are suitable for estimating application performance, that performance variability depends on the resource type, and that batch testing on the same instance with repetitions should be used for reliable performance testing.

Conclusions. The insights of this thesis can guide practitioners in building performance-optimized serverless applications and researchers in reproducibly evaluating cloud performance using suitable execution methodologies and different benchmark types.
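The latency breakdown analysis the thesis describes can be sketched under one illustrative assumption (this is not the thesis's actual benchmark harness): each request trace is a list of (category, duration) spans, from which median latency, tail latency, and per-category shares are derived.

```python
# Illustrative sketch (not the thesis's actual harness) of a trace-based
# latency breakdown: each request trace is a list of (category,
# duration_ms) spans; we report median and tail end-to-end latency plus
# each category's share of total time.

from statistics import median

def breakdown(traces):
    totals = sorted(sum(d for _, d in t) for t in traces)
    p99 = totals[min(len(totals) - 1, int(0.99 * len(totals)))]
    by_cat = {}
    for t in traces:
        for cat, d in t:
            by_cat[cat] = by_cat.get(cat, 0.0) + d
    grand = sum(by_cat.values())
    return {
        "median_ms": median(totals),
        "p99_ms": p99,
        "share": {c: v / grand for c, v in by_cat.items()},
    }
```

On traces where calls to external services take up most of each request, the `external` share dominates, mirroring the thesis's finding that external service calls often dominate median end-to-end latency.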
Automating Camera Placement for In Situ Visualization
Trends in high-performance computing increasingly require visualization to be carried out using in situ processing. This processing most often occurs without a human in the loop, meaning that the in situ software must be able to carry out its tasks without human guidance. This dissertation explores this topic, focusing on automating camera placement for in situ visualization when there is no a priori knowledge of where to place the camera. We introduce a new approach for this automation process, which depends on Viewpoint Quality (VQ) metrics that quantify how much insight a camera position provides. This research involves three major sub-projects: (1) performing a user survey to determine the viewpoint preferences of scientific users and developing new VQ metrics that predict those preferences 68% of the time; (2) parallelizing VQ metrics and designing search algorithms so they can be executed efficiently in situ; and (3) evaluating the behavior of camera placement for time-varying data to determine how often a new camera placement should be considered. In all, this dissertation shows that automating in situ camera placement for scientific simulations is possible on exascale computers and provides insight into best practices.
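As a concrete illustration of what a VQ metric computes, the sketch below implements viewpoint entropy, a classic metric from the viewpoint-selection literature that scores a camera position by how evenly the projected visible area is spread across the model's faces. The dissertation develops its own VQ metrics; this is only a representative example.

```python
# Sketch of one classic Viewpoint Quality metric, viewpoint entropy:
# given the projected visible area of each face from a candidate
# camera, a view scores higher when visible area is spread across more
# faces. (Illustrative only; the dissertation's own metrics differ.)

from math import log2

def viewpoint_entropy(visible_areas):
    """Shannon entropy of the distribution of projected face areas."""
    total = sum(visible_areas)
    if total == 0:
        return 0.0  # nothing visible from this camera position
    h = 0.0
    for area in visible_areas:
        if area > 0:
            p = area / total
            h -= p * log2(p)
    return h

# Four equally visible faces beat one face dominating the view:
# viewpoint_entropy([1, 1, 1, 1]) -> 2.0
# viewpoint_entropy([4, 0, 0, 0]) -> 0.0
```

Automated camera placement then amounts to searching candidate positions for one that maximizes such a score, which is the computation the in situ parallelization and search algorithms must make efficient.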
A Programmable Streaming Framework for Extreme-Scale Scientific Visualizations
Emerging computational and acquisition technologies are empowering scientists to conduct simulations and experiments on an unprecedented scale. These advancements can push the frontiers of science and technology with groundbreaking discoveries, but they also pose significant challenges to traditional scientific visualization workflows. First, the data generated by modern scientific studies using these technologies tends to be extremely large and complex, often resulting in slow processing and rendering times; this demands visualization algorithms that scale effectively with the size of the data. Second, state-of-the-art simulations and experiments produce data at extraordinary rates, complicating the task of generating valuable visualization results for scientists; there is therefore a pressing need for more adaptive and intelligent visualization workflows. Last, although new computer hardware and architectures can speed up the visualization process, significant performance variations still exist among visualization algorithms due to differing design choices, so optimizing algorithms to better leverage emerging hardware features remains an ongoing necessity.

This dissertation addresses these challenges by introducing a programmable streaming framework, enhanced with implicit neural representation, designed for visualizing extreme-scale scientific data. It presents three methodologies. First, the framework offers a reactive and declarative programming language for streamlining image generation, layout and interaction creation, and I/O processes, eliminating the need for users to manually control all visualization parameters and procedures. This language enables scientists to define highly adaptive visualization workflows through high-level, rule-based grammars; the system then automatically optimizes the low-level implementation according to these specifications, facilitating more efficient visualization workflows with simpler code. Second, the framework features a scalable, hardware-accelerated streaming visualization system that allows visualization processes to run concurrently with I/O operations. This system not only achieves state-of-the-art scalability but also effectively manages complex, multi-resolution data structures; it delivers accurate rendering outcomes, reduces memory usage, and leverages emerging hardware capabilities more efficiently. Finally, the framework integrates implicit neural representation (INR) techniques for data compression and interactive visualization. The use of INRs significantly reduces data size while preserving high-frequency details, and it enables direct access to spatial locations at any desired resolution, obviating the need for decompression or interpolation.

In summary, this dissertation addresses long-standing challenges in extreme-scale scientific visualization by introducing novel designs and methodologies. The presented framework enables more efficient and adaptive visualization workflows while leveraging the latest hardware acceleration and data compression techniques. These advancements pave the way for deeper insights and discoveries across a broad spectrum of scientific studies, representing a significant step forward for the field.
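The INR idea, storing a field as a small coordinate-to-value network so any location can be queried without decompressing a grid, can be sketched as follows. The network size, the sinusoidal activation (a SIREN-style assumption), and the untrained random weights are all illustrative choices, not the dissertation's actual architecture.

```python
# Sketch of the implicit-neural-representation (INR) idea: the field is
# stored as a small coordinate -> value network, so any spatial location
# can be queried at any resolution without decompressing a grid.
# The weights below are untrained random placeholders, and the
# sinusoidal activation is a SIREN-style assumption.

import math
import random

random.seed(0)
HIDDEN = 16  # one hidden layer of 16 units, mapping (x, y, z) -> scalar
W1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(HIDDEN)]
B1 = [random.uniform(-1, 1) for _ in range(HIDDEN)]
W2 = [random.uniform(-1, 1) for _ in range(HIDDEN)]

def query(x, y, z):
    """Evaluate the scalar field at a continuous coordinate (x, y, z)."""
    hidden = [math.sin(sum(w * c for w, c in zip(row, (x, y, z))) + b)
              for row, b in zip(W1, B1)]
    return sum(w * v for w, v in zip(W2, hidden))
```

Because `query` accepts continuous coordinates, a renderer can sample the field at whatever resolution the current view requires, with no decompression or interpolation step in between.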