Diva: A Declarative and Reactive Language for In-Situ Visualization
The use of adaptive workflow management for in situ visualization and
analysis has been a growing trend in large-scale scientific simulations.
However, coordinating adaptive workflows with traditional procedural
programming languages can be difficult because system flow is determined by
unpredictable scientific phenomena, which often appear in an unknown order and
can evade event handling. This makes the implementation of adaptive workflows
tedious and error-prone. Recently, reactive and declarative programming
paradigms have been recognized as well-suited solutions to similar problems in
other domains. However, there is a dearth of research on adapting these
approaches to in situ visualization and analysis. In this paper, we present a
language design and runtime system for developing adaptive systems through a
declarative and reactive programming paradigm. We illustrate how an adaptive
workflow programming system is implemented using our approach and demonstrate
it with a use case from a combustion simulation.
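As a minimal sketch of the declarative, reactive idea (illustrative Python, not DIVA's actual syntax; the `Signal` and `Runtime` names below are hypothetical), an adaptive trigger can be expressed as a rule over a derived value rather than as hand-written event-handling control flow:

```python
# Illustrative sketch of a declarative, reactive in-situ trigger.
# A "signal" is recomputed each time the simulation publishes a new
# step, and actions fire whenever their predicates hold, regardless
# of the order in which phenomena appear.

class Signal:
    """A value recomputed from the latest simulation state."""
    def __init__(self, compute):
        self.compute = compute          # pure function of the state
        self.value = None

    def update(self, state):
        self.value = self.compute(state)
        return self.value

class Runtime:
    def __init__(self):
        self.rules = []                 # (signal, predicate, action)

    def when(self, signal, predicate, action):
        # Declarative rule: only a condition, no explicit control flow.
        self.rules.append((signal, predicate, action))

    def step(self, state):
        # Called by the simulation loop after every timestep.
        for signal, predicate, action in self.rules:
            if predicate(signal.update(state)):
                action(state)

# Usage: render an isosurface only when the peak temperature spikes,
# whenever that happens to occur.
rt = Runtime()
peak_temp = Signal(lambda s: max(s["temperature"]))
rt.when(peak_temp, lambda t: t > 1500.0,
        lambda s: print("rendering isosurface at step", s["step"]))

for step in range(3):
    rt.step({"step": step, "temperature": [300.0, 900.0 + 400.0 * step]})
```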
Generalized techniques for using system execution traces to support software performance analysis
This dissertation proposes generalized techniques to support software performance analysis using system execution traces in the absence of software development artifacts such as source code. The proposed techniques are non-intrusive: they require no modifications to the source code or the software binaries, and they are not tightly coupled to the architecture-specific details of the system being analyzed. This dissertation extends current techniques for using system execution traces to evaluate software performance properties such as response times and service times. It also proposes a novel technique to auto-construct a dataflow model from the system execution trace, which is useful in evaluating software performance properties. Finally, it shows how execution traces can be used in a novel technique to detect the Excessive Dynamic Memory Allocations software performance anti-pattern; to the best of the author's knowledge, this is the first technique to detect that anti-pattern automatically. The contributions of this dissertation ease the laborious process of software performance analysis and provide a foundation for helping software developers quickly locate the causes of negative performance results via execution traces.
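As a rough illustration of the kind of analysis this enables (the trace format, event names, and allocation threshold below are assumptions for the sketch, not the dissertation's actual schema):

```python
# Sketch of non-intrusive trace analysis over an assumed trace format
# ("<timestamp> <event> <request-id>" per line): derive service times
# and flag a crude stand-in for the excessive-allocation anti-pattern.

from collections import defaultdict

def service_times(lines):
    """Pair begin/end events per request id and return service times."""
    begin, times = {}, {}
    for line in lines:
        ts, event, rid = line.split()
        if event == "begin":
            begin[rid] = float(ts)
        elif event == "end":
            times[rid] = float(ts) - begin.pop(rid)
    return times

def excessive_allocations(lines, threshold=3):
    """Flag request ids with more than `threshold` allocations."""
    counts = defaultdict(int)
    for line in lines:
        _, event, rid = line.split()
        if event == "alloc":
            counts[rid] += 1
    return [rid for rid, n in counts.items() if n > threshold]

trace = ["0.0 begin r1", "0.1 alloc r1", "0.2 alloc r1", "0.3 alloc r1",
         "0.4 alloc r1", "0.9 end r1"]
print(service_times(trace))          # {'r1': 0.9}
print(excessive_allocations(trace))  # ['r1']
```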
Computer-aided programming for multiprocessing systems
As both the number of processors and the complexity of the problems to be solved increase, programming multiprocessing systems becomes more difficult and error-prone. This report discusses parallel models of computation and tools for computer-aided programming (CAP). Program development tools are necessary because programmers cannot develop complex parallel programs efficiently by hand. In particular, a CAP tool named Hypertool is described here. It performs scheduling and inserts communication primitives automatically, eliminating many classes of errors. It also generates performance estimates and other program quality measures to help programmers improve their algorithms and programs. Experiments have shown that up to a 300% performance improvement can be achieved by computer-aided programming.
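A minimal sketch of the scheduling step, assuming a simple list-scheduling strategy (Hypertool's actual scheduling and message-insertion algorithms are not reproduced here; the task-graph encoding is an assumption):

```python
# Illustrative list scheduling: assign each task of a dependency DAG
# to the processor on which it can start earliest.

def schedule(tasks, deps, num_procs):
    """tasks: {name: cost}; deps: {name: [predecessor names]}."""
    proc_free = [0.0] * num_procs       # earliest free time per processor
    finish, placed = {}, []
    while len(finish) < len(tasks):
        # Pick a task whose predecessors have all finished.
        ready = [t for t in tasks if t not in finish
                 and all(p in finish for p in deps.get(t, []))]
        task = ready[0]
        earliest = max((finish[p] for p in deps.get(task, [])), default=0.0)
        proc = min(range(num_procs),
                   key=lambda p: max(proc_free[p], earliest))
        start = max(proc_free[proc], earliest)
        finish[task] = start + tasks[task]
        proc_free[proc] = finish[task]
        placed.append((task, proc, start))
    return placed

# A diamond-shaped task graph mapped onto 2 processors.
tasks = {"a": 2, "b": 3, "c": 1, "d": 2}
deps = {"b": ["a"], "c": ["a"], "d": ["b", "c"]}
for task, proc, start in schedule(tasks, deps, 2):
    print(f"{task} on P{proc} at t={start}")
```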
A Fortran Kernel Generation Framework for Scientific Legacy Code
Quality assurance is essential to software development, but the complexity of a software system's modules and structure impedes testing and further development. For complex and poorly designed scientific software, module developers and testers must expend considerable extra effort to monitor the impact of unrelated modules and to test whole-system constraints. In addition, widely used benchmarks cannot give programmers an accurate, program-specific evaluation of system performance, whereas generated kernels can provide considerable insight for performance tuning. Therefore, to greatly improve the productivity of scientific software engineering tasks such as performance tuning, debugging, and verification of simulation results, we developed an automatic compute kernel extraction prototype platform for complex legacy scientific code. Because scientific experiments involve long-running simulations and large data transfers, we apply message-passing-based parallelization and I/O optimization to substantially improve the performance of the kernel extractor framework, using profiling tools to guide the parallel distribution. Abnormal event detection is another important concern in scientific research: with huge observational datasets combined with simulation results, it becomes both essential and extremely difficult. In this dissertation, to detect both high-frequency and low-frequency events, we reconfigured the framework with an in-situ data transfer infrastructure. By combining signal-processing preprocessing (decimation) with a machine learning detection model trained on the streamed data, the framework significantly reduces the volume of data that must be transferred for concurrent analysis between distributed CPU/GPU nodes. Finally, the dissertation presents the implementation of the framework and a case study of the ACME Land Model (ALM) for demonstration. The generated compute kernels can be used at lower cost in performance tuning experiments and quality assurance, including debugging legacy code, verifying simulation results by tracking single or multiple variables, collaborating with compiler vendors, and generating custom benchmark tests.
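The decimate-then-detect idea can be sketched as follows (illustrative only; the framework's actual signal-processing pipeline and machine learning models are assumptions here, replaced by block averaging and a simple deviation test):

```python
# Sketch: downsample a simulation stream before shipping it for
# anomaly detection, cutting the data-transfer volume.

def decimate(samples, factor):
    """Keep one block-averaged value per `factor` samples (crude low-pass)."""
    out = []
    for i in range(0, len(samples) - factor + 1, factor):
        block = samples[i:i + factor]
        out.append(sum(block) / len(block))
    return out

def detect(samples, threshold):
    """Flag indices whose value deviates from the mean by > threshold."""
    mean = sum(samples) / len(samples)
    return [i for i, v in enumerate(samples) if abs(v - mean) > threshold]

stream = [1.0] * 40
stream[23] = 9.0                       # injected event
reduced = decimate(stream, 4)          # 4x less data to transfer
print(detect(reduced, threshold=0.5))  # the event survives decimation: [5]
```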
Role of Artificial Intelligence (AI) art in care of ageing society: focus on dementia
Background: Art enhances both physical and mental wellbeing. The health benefits include reductions in blood pressure, heart rate, and pain perception, briefer inpatient stays, and improved communication skills and self-esteem. In addition, people living with dementia benefit from a reduction in their non-cognitive behavioural changes, enhancement of their cognitive capacities, and being socially active.
Methods: The current study is a narrative general literature review of available studies and knowledge about the contribution of Artificial Intelligence (AI) to the creative arts.
Results: We review AI visual arts technologies and their potential for use among people with dementia and in dementia care, drawing on comparable experiences to date with traditional art in dementia care.
Conclusion: Virtual reality, installations, and the psychedelic properties of AI-created art provide a new avenue for more detailed research into its therapeutic use in dementia.
Static detection of anomalies in transactional memory programs
Dissertation presented at the Faculdade de Ciências e Tecnologia, Universidade Nova de Lisboa, for the degree of Master in Engenharia Informática.
Transactional Memory (TM) is an approach to concurrent programming based on the transactional semantics borrowed from database systems. In this paradigm, a transaction is a sequence of actions that may execute in a single logical instant, as though it were the only one being executed at that moment. Unlike concurrent systems based on locks, TM does not require that only a single thread perform the guarded operations. Instead, as in database systems, transactions execute concurrently, and the effects of a transaction are undone in case of a conflict, as though it had never happened. The advantages of TM are an easier and less error-prone programming model, and a potential increase in scalability and performance.
Despite these advantages, TM is still a young and immature technology and has yet to become an established programming model. It still lacks the paraphernalia of tools and standards that we have come to expect from a widely used programming paradigm. Testing and analysis techniques and algorithms for TM programs are also only beginning to be addressed by the scientific community, making this a leading research work in many of these respects.
This work aims to statically identify possible runtime anomalies in TM programs. We address both low-level data races in TM programs and high-level anomalies resulting from the incorrect splitting of transactions.
We defined and implemented an approach to detect low-level data races in TM programs by converting all memory transactions into monitor-protected critical regions synchronized on a newly generated global lock. To validate the approach, we applied our tool to a set of tests, adapted from the literature, that contain well-documented errors.
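A minimal sketch of that transformation, using Python stand-ins for TM constructs (the thesis operates on real TM source code; the `atomic` helper and lock name below are hypothetical):

```python
# Every memory transaction is rewritten as a critical region guarded by
# one freshly generated global lock, so an off-the-shelf lock-based race
# detector can analyze the result.

import threading

_global_tm_lock = threading.Lock()      # the newly generated global lock

def atomic(body):
    """Stand-in for a memory transaction: run `body` under the global lock."""
    with _global_tm_lock:
        return body()

balance = 100

def withdraw(amount):
    def body():
        global balance
        balance -= amount               # protected: no race reported here
    atomic(body)

def unsafe_peek():
    return balance                      # unprotected read: a lock-based
                                        # detector would flag this access

threads = [threading.Thread(target=withdraw, args=(10,)) for _ in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(balance)                          # 70
```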
We also defined and implemented a new approach to the static detection of high-level concurrency anomalies in TM programs. This approach works by conservatively tracing transactions and matching the interference between each consecutive pair of transactions against a set of defined anomaly patterns. Once again, the approach was validated with well-documented tests adapted from the literature.
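A sketch of the pair-matching idea, assuming transactions summarized by read/write sets (the thesis's actual anomaly catalogue and tracing are richer than this single pattern):

```python
# Flag consecutive transaction pairs of one thread where a value read in
# T_i feeds a write in T_{i+1}: another thread could update the value in
# between, a classic sign of an incorrectly split transaction.

def high_level_anomalies(trace):
    """trace: list of (read_set, write_set) per transaction of one thread."""
    findings = []
    for i in range(len(trace) - 1):
        (reads1, _), (_, writes2) = trace[i], trace[i + 1]
        stale = reads1 & writes2        # read in T_i, written in T_{i+1}
        if stale:
            findings.append((i, i + 1, sorted(stale)))
    return findings

# A thread reads `balance` in one transaction and writes it in the next.
trace = [({"balance"}, set()), (set(), {"balance"})]
print(high_level_anomalies(trace))      # [(0, 1, ['balance'])]
```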