Efficient and Effective Handling of Exceptions in Java Points-To Analysis
A joint points-to and exception analysis has been shown to yield benefits in both precision and performance. Treating exceptions as regular objects,
however, incurs significant and rather unexpected overhead. We show that in a
typical joint analysis most of the objects computed to flow in and out of a method
are due to exceptional control-flow and not normal call-return control-flow. For
instance, a context-insensitive analysis of the Antlr benchmark from the DaCapo
suite computes 4-5 times more objects going in or out of a method due to exceptional control-flow than due to normal control-flow. As a consequence, the
analysis spends a large amount of its time considering exceptions.
We show that the problem can be addressed both effectively and elegantly by
coarsening the representation of exception objects. An interesting find is that, instead of recording each distinct exception object, we can collapse all exceptions
of the same type, and use one representative object per type, to yield nearly identical precision (loss of less than 0.1%) but with a boost in performance of at least
50% for most analyses and benchmarks and large space savings (usually 40% or
more).
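The coarsening idea described above can be illustrated with a minimal sketch. This is not the paper's actual analysis implementation; the class and method names below are hypothetical, and the sketch only shows the core mapping: every exception of a given type is collapsed to a single representative abstract object, so exceptional control-flow edges propagate far fewer distinct objects.

```python
# Illustrative sketch (hypothetical names): coarsen exception objects in a
# points-to analysis by collapsing all exceptions of the same type into one
# representative abstract object per type.

class AbstractObject:
    """A single abstract heap object tracked by the analysis."""
    def __init__(self, label):
        self.label = label

    def __repr__(self):
        return self.label


class ExceptionCoarsening:
    """Map every thrown exception to one representative per exception type."""
    def __init__(self):
        self._reps = {}

    def representative(self, exc_type):
        # All exceptions of the same type share one abstract object,
        # shrinking the points-to sets that flow along exceptional edges.
        if exc_type not in self._reps:
            self._reps[exc_type] = AbstractObject(f"<exn:{exc_type}>")
        return self._reps[exc_type]


coarsen = ExceptionCoarsening()
a = coarsen.representative("NullPointerException")
b = coarsen.representative("NullPointerException")
c = coarsen.representative("IOException")
print(a is b)  # True: same type collapses to the same representative
print(a is c)  # False: distinct types keep distinct representatives
```

Because two exceptions of the same type are indistinguishable to most clients of the analysis, this coarsening trades almost no precision for much smaller points-to sets, which is consistent with the reported sub-0.1% precision loss.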
McRunjob: A High Energy Physics Workflow Planner for Grid Production Processing
McRunjob is a powerful grid workflow manager used to manage the generation of
large numbers of production processing jobs in High Energy Physics. In use at
both the DZero and CMS experiments, McRunjob has been used to manage large
Monte Carlo production processing since 1999 and is being extended to uses in
regular production processing for analysis and reconstruction. Described at
CHEP 2001, McRunjob converts core metadata into jobs submittable in a variety
of environments. The powerful core metadata description language includes
methods for converting the metadata into persistent forms, job descriptions,
multi-step workflows, and data provenance information. The language features
allow for structure in the metadata by including full expressions, namespaces,
functional dependencies, site specific parameters in a grid environment, and
ontological definitions. It also has simple control structures for
parallelization of large jobs. McRunjob features a modular design which allows
for easy expansion to new job description languages or new application level
tasks.
Comment: CHEP 2003 serial number TUCT00
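The central idea of converting one core metadata description into job descriptions for several target environments can be sketched as follows. This is a hypothetical illustration, not McRunjob's actual API or metadata language; the backend names and the metadata fields are invented for the example.

```python
# Hypothetical sketch: a workflow planner that turns one declarative metadata
# record into submittable job descriptions for more than one environment.

def to_shell_job(meta):
    # Render the job as a plain shell script.
    return "#!/bin/sh\n" + " ".join([meta["executable"]] + meta["args"])

def to_condor_job(meta):
    # Render the same job as a Condor-style submit description.
    lines = [
        f"executable = {meta['executable']}",
        "arguments = " + " ".join(meta["args"]),
        "queue",
    ]
    return "\n".join(lines)

# One metadata record, many job description languages.
BACKENDS = {"shell": to_shell_job, "condor": to_condor_job}

def plan(meta, env):
    return BACKENDS[env](meta)

meta = {"executable": "simulate", "args": ["--events", "1000"]}
print(plan(meta, "shell"))
print(plan(meta, "condor"))
```

A modular design like this, where each target environment is an independent renderer behind a common interface, is what makes it cheap to add new job description languages, which matches the extensibility the abstract claims for McRunjob.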
Knowledge-Intensive Processes: Characteristics, Requirements and Analysis of Contemporary Approaches
Engineering of knowledge-intensive processes (KiPs) is far from being mastered, since they are genuinely knowledge- and data-centric, and require substantial flexibility, at both design- and run-time. In this work, starting from a scientific literature analysis in the area of KiPs and from three real-world domains and application scenarios, we provide a precise characterization of KiPs. Furthermore, we devise some general requirements related to KiPs management and execution. Such requirements contribute to the definition of an evaluation framework to assess current system support for KiPs. To this end, we present a critical analysis of a number of existing process-oriented approaches by discussing their efficacy against the requirements.
Distributed interoperable workflow support for electronic commerce.
This paper describes a flexible distributed transactional workflow environment based on an extensible object-oriented framework built around class libraries, application programming interfaces, and shared services. The purpose of this environment is to support a range of EC-like business activities, including financial transactions and electronic contracts. This environment aims to provide key infrastructure services for mediating and monitoring electronic commerce.