SWISH: SWI-Prolog for Sharing
Recently, we have seen a new type of interface for programmers based on web
technology, for example JSFiddle, IPython Notebook and RStudio. Web
technology enables cloud-based solutions, embedding in tutorial web pages,
attractive rendering of results, web-scale cooperative development, etc. This
article describes SWISH, a web front-end for Prolog. A public website exposes
SWI-Prolog using SWISH, which is used to run small Prolog programs for
demonstration, experimentation and education. We connected SWISH to the
ClioPatria semantic web toolkit, where it allows for collaborative development
of programs and queries related to a dataset, as well as for performing maintenance
tasks on the running server. We also embedded SWISH in the Learn Prolog Now!
online Prolog book.

Comment: International Workshop on User-Oriented Logic Programming (IULP
2015), co-located with the 31st International Conference on Logic Programming
(ICLP 2015), Proceedings of the International Workshop on User-Oriented Logic
Programming (IULP 2015), Editors: Stefan Ellmauthaler and Claudia Schulz,
pages 99-113, August 201
Improving the Accuracy and Efficiency of Time-Independent Trace Replay
Simulation is a popular approach to obtain objective performance indicators for platforms that are not at one's disposal. It may, for example, help in dimensioning compute clusters in large computing centers. In a previous work, we proposed a framework for the off-line simulation of MPI applications. Its main originality with regard to the literature is to rely on time-independent execution traces. This allows us to completely decouple the acquisition process from the actual replay of the traces in a simulation context. We are then able to acquire traces for large application instances without being limited to an execution on a single compute cluster. Finally, our framework is built on top of a scalable, fast, and validated simulation kernel. In this paper, we detail the performance issues that we encountered with the first implementation of our trace replay framework. We propose several modifications to address these issues and analyze their impact. Results show a clear improvement in accuracy and efficiency with regard to the initial implementation.
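To make the idea of a time-independent trace concrete, here is a minimal sketch (hypothetical record format and linear cost model, not the framework's actual implementation, which relies on a full simulation kernel): actions are recorded as volumes of work (flops, bytes) rather than timestamps, so the same trace can be replayed against any simulated platform.

```python
# Each record is (action, volume): compute volume in flops, or
# send/recv volume in bytes -- no timestamps are stored.
trace = [("compute", 1e9), ("send", 8e6), ("compute", 5e8)]

def replay(trace, flops_per_s, latency_s, bandwidth_Bps):
    """Predict the execution time of a time-independent trace on a
    simulated platform, using simple linear cost models as a stand-in
    for a full simulation kernel."""
    t = 0.0
    for action, volume in trace:
        if action == "compute":
            t += volume / flops_per_s
        else:  # send/recv
            t += latency_s + volume / bandwidth_Bps
    return t
```

Because no wall-clock times are captured, the same trace can be replayed against faster CPUs or a different network simply by changing the platform parameters.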
Lightweight monitoring of transactional memory programs
Dissertation for the degree of Master in Informatics Engineering

Concurrent programs can take advantage of multi-core architectures. However, writing
correct and efficient concurrent programs remains a challenging task. Transactional
memory eases the task by providing a high-level programming model for concurrent programming.
Still, tools for analyzing and debugging transactional memory programs are very scarce. Debugging tools for transactional memory have been developed that rely on logging events (start, commit, etc.) to generate a view of the execution.
During the execution, these events are written to a log, associating a CPU-core-dependent timestamp to each event. These clocks are not synchronized, so the events recorded in the log may not respect the real order and may appear inconsistent; e.g., the commit event of a transaction may be recorded as if it happened before the corresponding start. We present a strategy for ordering the events in a trace log in order to reproduce a consistent view of the events recorded in the log.

Fundação para a Ciência e Tecnologia - project Synergy-VM (PTDC/EIA-EIA/113613/2009
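The abstract does not detail the ordering strategy itself; one plausible way to repair such a log is a timestamp-guided topological sort over the per-transaction happens-before constraints. A minimal sketch (hypothetical event format; the only constraint encoded is that a transaction's start must precede its commit):

```python
import heapq

def consistent_order(events):
    """Reorder logged events so that, within every transaction, 'start'
    precedes 'commit', while otherwise following the (possibly skewed)
    per-core timestamps as closely as possible.

    events: list of (tx_id, kind, timestamp), kind in {'start', 'commit'}.
    """
    succ = {i: [] for i in range(len(events))}    # happens-before edges
    indeg = {i: 0 for i in range(len(events))}
    by_tx = {}
    for i, (tx, kind, ts) in enumerate(events):
        by_tx.setdefault(tx, {})[kind] = i
    for d in by_tx.values():
        if "start" in d and "commit" in d:
            succ[d["start"]].append(d["commit"])  # start -> commit
            indeg[d["commit"]] += 1
    # Kahn's topological sort, always emitting the ready event with
    # the smallest recorded timestamp first.
    ready = [(events[i][2], i) for i in indeg if indeg[i] == 0]
    heapq.heapify(ready)
    order = []
    while ready:
        _, i = heapq.heappop(ready)
        order.append(events[i])
        for j in succ[i]:
            indeg[j] -= 1
            if indeg[j] == 0:
                heapq.heappush(ready, (events[j][2], j))
    return order
```

An event whose timestamp places it before its causal predecessor is simply held back until the predecessor has been emitted, yielding a consistent view without requiring synchronized clocks.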
Aspect oriented pluggable support for parallel computing
In this paper, we present an approach to developing parallel applications based on aspect-oriented programming. We propose a collection of aspects to implement group communication mechanisms in parallel applications. In our approach, parallelisation code is developed by composing the collection into the application's core functionality. The approach requires fewer changes to sequential applications to parallelise the core functionality than current alternatives and yields more modular code. The paper presents the collection and shows how the aspects can be used to develop efficient parallel applications.

Fundação para a Ciência e a Tecnologia (FCT) - PPC-VM (Portable Parallel Computing based on Virtual Machines) Project POSI/CHS/47158/2002; SOFTAS (POSI/EIA/60189/2004). Fundo Europeu de Desenvolvimento Regional (FEDER)
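As a loose illustration of the separation the paper advocates (sketched in Python with hypothetical names, rather than an aspect language, and not the paper's actual aspect collection): the core function stays sequential and unaware of parallelism, while a separate "aspect" weaves scatter/gather group communication around it.

```python
from concurrent.futures import ThreadPoolExecutor

# Core functionality: a plain sequential computation,
# written with no knowledge of parallelism.
def process(chunk):
    return [x * x for x in chunk]

# "Aspect": weaves scatter/gather group communication around the
# core function without modifying it.
def parallelize(fn, workers=4):
    def woven(data):
        # Scatter: split the input among the workers.
        chunks = [data[i::workers] for i in range(workers)]
        with ThreadPoolExecutor(max_workers=workers) as pool:
            parts = list(pool.map(fn, chunks))
        # Gather: interleave the partial results back into order.
        out = [None] * len(data)
        for i, part in enumerate(parts):
            out[i::workers] = part
        return out
    return woven

parallel_process = parallelize(process)
```

The sequential version remains usable and testable on its own; composing it with the wrapper is the only change needed to parallelise it, which is the modularity argument the paper makes.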
Tools for analyzing parallel I/O
Parallel application I/O performance often does not meet user expectations. Additionally, slight access pattern modifications may lead to significant changes in performance due to complex interactions between hardware and software. These issues call for sophisticated tools to capture, analyze, understand, and tune application I/O. In this paper, we highlight advances in monitoring tools to help address these issues. We also describe best practices, identify issues in measurement and analysis, and provide practical approaches to translate parallel I/O analysis into actionable outcomes for users, facility operators, and researchers.
Data-driven reduction strategies for Bayesian inverse problems
A persistent central challenge in computational science and engineering (CSE), with both national and global security implications, is the efficient solution of large-scale Bayesian inverse problems. These problems range from estimating material parameters in subsurface simulations to estimating phenomenological parameters in climate models. Despite recent progress, our ability to quantify uncertainties and solve large-scale inverse problems lags well behind our ability to develop the governing forward simulations.
Inverse problems present unique computational challenges that are only magnified as we include larger observational data sets and demand higher-resolution parameter estimates. Even with the current state-of-the-art, solving deterministic large-scale inverse problems is prohibitively expensive. Large-scale uncertainty quantification (UQ), cast in the Bayesian inversion framework, is thus rendered intractable. To conquer these challenges, new methods that target the root causes of computational complexity are needed.
In this dissertation, we propose data-driven strategies for overcoming this “curse of dimensionality.” First, we address the computational complexity induced in large-scale inverse problems by high-dimensional observational data. We propose a randomized misfit approach
(RMA), which uses random projections—quasi-orthogonal, information-preserving transformations—to map the high-dimensional data-misfit vector to a low-dimensional space. We provide the first theoretical explanation for why randomized misfit methods are successful in practice with a small reduced data-misfit dimension (n = O(1)).
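One way to picture the RMA's reduction step is a Johnson–Lindenstrauss-style random sign projection (a sketch with hypothetical helper names, not the dissertation's implementation): the n-dimensional misfit vector is mapped linearly to k dimensions, and the scaling by 1/sqrt(k) keeps its squared norm preserved in expectation.

```python
import math
import random

def projection_matrix(k, n, seed=0):
    """k x n random sign matrix with entries +-1/sqrt(k): a classic
    Johnson-Lindenstrauss-style projection, used here as a stand-in
    for the RMA's random projection operator."""
    rng = random.Random(seed)
    return [[rng.choice((-1.0, 1.0)) / math.sqrt(k) for _ in range(n)]
            for _ in range(k)]

def project(S, v):
    """Map the n-dimensional misfit vector v to k dimensions."""
    return [sum(row[i] * v[i] for i in range(len(v))) for row in S]
```

Every evaluation of the reduced misfit then touches only k components instead of n, which is where the computational savings for large observational data sets come from.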
Next, we develop the randomized geostatistical approach (RGA) for Bayesian subsurface inverse problems with high-dimensional data. We show that the RGA is able to resolve transient groundwater inverse problems with noisy observed data dimensions up to 10^7, whereas a comparison method fails due to out-of-memory errors.
Finally, we address the solution of Bayesian inverse problems with spatially localized data. The motivation is CSE applications that would gain from high-fidelity estimation over a smaller data-local domain, versus expensive and uncertain estimation over the full simulation domain. We propose several truncated domain inversion methods using domain decomposition theory to build model-informed artificial boundary conditions. Numerical investigations of MAP estimation and sampling demonstrate improved fidelity and fewer partial differential equation (PDE) solves with our truncated methods.

Computational Science, Engineering, and Mathematic
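A toy 1D analogue of the truncated-domain idea (purely illustrative; the dissertation targets large-scale PDE-constrained problems): solve the full problem coarsely, then re-solve only a subdomain, with boundary values read off the global solution playing the role of model-informed artificial boundary conditions.

```python
def solve_poisson(n, left, right, f, a=0.0, b=1.0):
    """Solve -u'' = f on (a, b) with Dirichlet values left/right at the
    endpoints, using n interior grid points and the Thomas algorithm."""
    h = (b - a) / (n + 1)
    rhs = [f(a + (i + 1) * h) * h * h for i in range(n)]
    rhs[0] += left            # fold the boundary values into the RHS
    rhs[-1] += right
    # Thomas algorithm for the tridiagonal stencil (-1, 2, -1).
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = -0.5, rhs[0] / 2.0
    for i in range(1, n):
        denom = 2.0 + cp[i - 1]
        cp[i] = -1.0 / denom
        dp[i] = (rhs[i] + dp[i - 1]) / denom
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return u

zero = lambda x: 0.0

# Global coarse solve on (0, 1) with u(0)=0, u(1)=1.
coarse = solve_poisson(3, 0.0, 1.0, zero)   # values at x = 0.25, 0.5, 0.75
# Truncated-domain solve on (0.25, 0.75): its boundary values come from
# the global solution, not from the true (unknown) boundary data.
local = solve_poisson(3, coarse[0], coarse[2], zero, a=0.25, b=0.75)
```

The local solve is much cheaper than a fine solve of the whole domain, and any inversion work (estimation, sampling) can then be confined to the data-local subdomain.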