Debugging Scandal: The Next Generation
In 1997, the general lack of debugging tools was termed "the debugging scandal". Today, as new languages emerge to support software evolution, debugging support is once more lagging. The powerful abstractions offered by new languages are compiled away and transformed into complex synthetic structures. Current debugging tools only allow inspection in terms of these complex synthetic structures; they do not support observation of program executions in terms of the original development abstractions. In this position paper, we outline this problem and present two emerging lines of research that ease the burden for debugger implementers and enable developers to debug in terms of development abstractions. For both approaches we identify language-independent debugger components and those that must be implemented for every new language. One approach restores the abstractions through a tool external to the program. The other maintains the abstractions by using a dedicated execution environment that supports the relevant abstractions. Both approaches have the potential to improve debugging support for new languages. We discuss the advantages and disadvantages of both approaches, outline a combination thereof, and discuss open challenges.
Harnessing the Power of Many: Extensible Toolkit for Scalable Ensemble Applications
Many scientific problems require multiple distinct computational tasks to be
executed in order to achieve a desired solution. We introduce the Ensemble
Toolkit (EnTK) to address the challenges of scale, diversity and reliability
they pose. We describe the design and implementation of EnTK, characterize its
performance and integrate it with two distinct exemplar use cases: seismic
inversion and adaptive analog ensembles. We perform nine experiments,
characterizing EnTK overheads, strong and weak scalability, and the performance
of two use case implementations, at scale and on production infrastructures. We
show how EnTK meets the following general requirements: (i) implementing
dedicated abstractions to support the description and execution of ensemble
applications; (ii) support for execution on heterogeneous computing
infrastructures; (iii) efficient scalability up to O(10^4) tasks; and (iv)
fault tolerance. We discuss novel computational capabilities that EnTK enables
and the scientific advantages arising thereof. We propose EnTK as an important
addition to the suite of tools in support of production scientific computing.
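The "dedicated abstractions" requirement above refers to describing an ensemble as tasks grouped into stages, with stages chained into pipelines: tasks in a stage may run concurrently, while stages execute in order. A minimal sketch of that pattern follows; the class names and `run` method are illustrative stand-ins, not EnTK's actual API.

```python
from concurrent.futures import ThreadPoolExecutor

class Task:
    """A single unit of work (hypothetical stand-in for an ensemble task)."""
    def __init__(self, name, fn):
        self.name, self.fn = name, fn

class Stage:
    """A set of tasks that may execute concurrently."""
    def __init__(self, tasks):
        self.tasks = tasks

class Pipeline:
    """Stages execute sequentially; each stage's tasks run in parallel."""
    def __init__(self, stages):
        self.stages = stages

    def run(self, max_workers=4):
        results = []
        for stage in self.stages:  # stage order is preserved
            with ThreadPoolExecutor(max_workers=max_workers) as pool:
                # map preserves task order while running tasks concurrently
                results.append(list(pool.map(lambda t: t.fn(), stage.tasks)))
        return results

# Example: a two-stage ensemble of toy "simulations" followed by a reduction.
pipe = Pipeline([
    Stage([Task(f"sim-{i}", lambda i=i: i * i) for i in range(4)]),
    Stage([Task("reduce", lambda: "done")]),
])
print(pipe.run())  # [[0, 1, 4, 9], ['done']]
```

In a real workload manager the per-task function would submit work to heterogeneous compute resources rather than run in a local thread pool; the sketch only shows the description model.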
Using Dedicated and Opportunistic Networks in Synergy for a Cost-effective Distributed Stream Processing Platform
This paper presents a case for exploiting the synergy of dedicated and
opportunistic network resources in a distributed hosting platform for data
stream processing applications. Our previous studies have demonstrated the
benefits of combining dedicated reliable resources with opportunistic resources
in case of high-throughput computing applications, where timely allocation of
the processing units is the primary concern. Since distributed stream
processing applications demand large volumes of data transmission between the
processing sites at a consistent rate, adequate control over the network
resources is important to assure a steady flow of processing. In this
paper, we propose a system model for the hybrid hosting platform where stream
processing servers installed at distributed sites are interconnected with a
combination of dedicated links and public Internet. Decentralized algorithms
have been developed for allocation of the two classes of network resources
among the competing tasks with an objective towards higher task throughput and
better utilization of expensive dedicated resources. Results from an extensive
simulation study show that, with proper management, systems exploiting the
synergy of dedicated and opportunistic resources yield considerably higher task
throughput and thus a higher return on investment than systems solely using
expensive dedicated resources.
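The allocation objective described above — prefer the expensive dedicated links while their capacity lasts, then fall back to opportunistic Internet paths — can be illustrated with a simple greedy sketch. The function name, the largest-first ordering, and the capacity model are assumptions for illustration, not the paper's actual decentralized algorithm.

```python
def allocate(tasks, dedicated_capacity, opportunistic_capacity):
    """Greedy placement of stream tasks onto two classes of network resources.

    tasks: list of (name, required_bandwidth) pairs.
    Returns a dict mapping task name -> 'dedicated' | 'opportunistic' | 'rejected'.
    """
    placement = {}
    # Place bandwidth-hungry tasks first so they get the reliable links.
    for name, bw in sorted(tasks, key=lambda t: -t[1]):
        if bw <= dedicated_capacity:
            dedicated_capacity -= bw
            placement[name] = "dedicated"
        elif bw <= opportunistic_capacity:
            opportunistic_capacity -= bw
            placement[name] = "opportunistic"
        else:
            placement[name] = "rejected"
    return placement

tasks = [("video", 40), ("sensor", 10), ("log", 5)]
print(allocate(tasks, dedicated_capacity=45, opportunistic_capacity=20))
# {'video': 'dedicated', 'sensor': 'opportunistic', 'log': 'dedicated'}
```

A decentralized version of this idea would have each site run such a policy over its local links and exchange residual-capacity information with neighbors, rather than compute one global placement.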