Common Subexpression Elimination in a Lazy Functional Language
Common subexpression elimination is a well-known compiler optimisation that saves time by avoiding the repetition of the same computation. To our knowledge it has not yet been applied to lazy functional programming languages, although the setting offers several advantages. First, the referential transparency of these languages makes the identification of common subexpressions very simple. Second, more common subexpressions can be recognised because they can be of arbitrary type, whereas standard common subexpression elimination shares only primitive values. However, because lazy functional languages decouple program structure from data-space allocation and control flow, analysing the transformation's effects and deciding under which conditions eliminating a common subexpression is beneficial proves to be quite difficult. We developed and implemented the transformation for the language Haskell by extending the Glasgow Haskell Compiler, and we measured its effectiveness on real-world programs.
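To make the transformation concrete, here is a minimal sketch of what CSE means in a pure language; the names expensive, before, and after are hypothetical and not taken from the paper.

    -- Because of referential transparency, the two occurrences of
    -- `expensive n` below must denote the same value, so a compiler may
    -- share them; note that the shared result can be of any type, not
    -- just a primitive value.
    expensive :: Int -> Int
    expensive n = sum [1 .. n * n]   -- stands in for any costly computation

    -- Before CSE: `expensive n` is evaluated twice.
    before :: Int -> Int
    before n = expensive n + 2 * expensive n

    -- After CSE: one let-bound thunk is shared between both uses. The
    -- trade-off the abstract alludes to: the thunk now occupies heap
    -- space until both consumers have demanded it.
    after :: Int -> Int
    after n = let s = expensive n in s + 2 * s

    main :: IO ()
    main = print (before 1000 == after 1000)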
Program transformations using temporal logic side conditions
This paper describes an approach to program optimisation based on transformations, in which temporal logic is used to specify side conditions and strategies are created that expand the repertoire of transformations and provide a suitable level of abstraction. We demonstrate the power of this approach by developing a set of optimisations in our transformation language and showing how the transformations can be converted into a form that makes them easier to apply, while maintaining trust in the resulting optimisation steps. The approach is illustrated through a transformational case study in which we apply several optimisations to a small program.
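As a rough illustration of the idea (not the paper's actual transformation language), the sketch below guards a dead-assignment rewrite with a temporal side condition over a toy straight-line IR: an assignment to x may be deleted when, in the continuation, x is not read until it is redefined, roughly the weak-until formula not-read(x) W def(x). All types and names here are invented for the example.

    -- Toy intermediate representation: each instruction assigns to one
    -- variable and reads a list of variables.
    data Instr = Assign String [String]
      deriving Show

    -- Temporal side condition on the instruction suffix.
    notReadUntilRedefined :: String -> [Instr] -> Bool
    notReadUntilRedefined _ [] = True       -- toy: x assumed dead at exit
    notReadUntilRedefined x (Assign y rs : rest)
      | x `elem` rs = False                 -- x is read first: condition fails
      | y == x      = True                  -- x is redefined first: condition holds
      | otherwise   = notReadUntilRedefined x rest

    -- The transformation fires only where the side condition holds.
    deadAssign :: [Instr] -> [Instr]
    deadAssign [] = []
    deadAssign (i@(Assign x _) : rest)
      | notReadUntilRedefined x rest = deadAssign rest
      | otherwise                    = i : deadAssign rest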
Specific "scientific" data structures, and their processing
Programming physicists use, as all programmers do, arrays, lists, tuples, records, etc., and this requires some change in their thought patterns when converting their formulae into code, since the "data structures" operated upon while elaborating a theory and its consequences are rather: power series and Padé approximants, differential forms and other instances of differential algebras, functionals (for the variational calculus), trajectories (solutions of differential equations), Young diagrams and Feynman graphs, etc. Such data is often used in a [semi-]numerical setting, not necessarily a "symbolic" one appropriate for computer algebra packages. Modules adapted to such data may be "just libraries", but often they become specific embedded sub-languages, typically mapped into object-oriented frameworks with overloaded mathematical operations. Here we present a functional approach to this philosophy. We show how the use of Haskell datatypes and, fundamentally for our tutorial, the application of lazy evaluation make it possible to operate upon such data (in particular, the "infinite" sequences) in a natural and comfortable manner.

Comment: In Proceedings DSL 2011, arXiv:1109.032
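As a flavour of what lazy evaluation buys here, the following is a minimal sketch in the classic lazy power-series style such tutorials build on (the names Series, mulS, and expS are ours, not the paper's): a power series is an infinite list of coefficients, and only the demanded prefix is ever computed.

    -- A power series a0 + a1*x + a2*x^2 + ... as a lazy coefficient list.
    type Series = [Rational]

    addS :: Series -> Series -> Series
    addS = zipWith (+)

    -- Cauchy product, defined corecursively:
    -- (a0 + x*A) * B  =  a0*b0 : (a0*B' + A*B)
    mulS :: Series -> Series -> Series
    mulS (a : as) bs@(b : bs') = a * b : addS (map (a *) bs') (mulS as bs)
    mulS _ _ = []

    -- The exponential series defined by its own differential equation:
    -- exp' = exp with exp(0) = 1, i.e. a(n+1) = a(n) / (n+1).
    expS :: Series
    expS = 1 : zipWith (/) expS [1 ..]

    main :: IO ()
    main = print (take 6 expS)  -- [1 % 1,1 % 1,1 % 2,1 % 6,1 % 24,1 % 120]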
Artificial intelligence makes computers lazy
This paper looks at the age-old problem of trying to instil some degree of intelligence in computers. Genetic Algorithms (GA) and Genetic Programming (GP) are techniques used to evolve a solution to a problem through processes that mimic natural evolution. This paper reflects on the experience gained while conducting research applying GA and GP to two quite different problems: Medical Diagnosis and Robot Path Planning. We observe that when these algorithms are not applied correctly, the computer seemingly exhibits lazy behaviour, arriving at suboptimal solutions. Using examples, this paper shows how this 'lazy' behaviour can be overcome.
Bayesian Active Edge Evaluation on Expensive Graphs
Robots operate in environments with varying implicit structure. For instance,
a helicopter flying over terrain encounters a very different arrangement of
obstacles than a robotic arm manipulating objects on a cluttered table top.
State-of-the-art motion planning systems do not exploit this structure, thereby
expending valuable planning effort searching for implausible solutions. We are
interested in planning algorithms that actively infer the underlying structure
of the valid configuration space during planning in order to find solutions
with minimal effort. Consider the problem of evaluating edges on a graph to
quickly discover collision-free paths. Evaluating edges is expensive, both for
robots with complex geometries like robot arms, and for robots with limited
onboard computation like UAVs. Until now, this challenge has been addressed via
laziness i.e. deferring edge evaluation until absolutely necessary, with the
hope that edges turn out to be valid. However, all edges are not alike in value
- some have a lot of potentially good paths flowing through them, and some
others encode the likelihood of neighbouring edges being valid. This leads to
our key insight - instead of passive laziness, we can actively choose edges
that reduce the uncertainty about the validity of paths. We show that this is
equivalent to the Bayesian active learning paradigm of decision region
determination (DRD). However, the DRD problem is not only combinatorially hard,
but also requires explicit enumeration of all possible worlds. We propose a
novel framework that combines two DRD algorithms, DIRECT and BISECT, to
overcome both issues. We show that our approach outperforms several
state-of-the-art algorithms on a spectrum of planning problems for mobile
robots, manipulators and autonomous helicopters
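To make the contrast between passive and active laziness concrete, here is a small hedged sketch, not the DIRECT/BISECT machinery itself: candidate paths over edges with independent prior validity probabilities, and a myopic rule that next evaluates the edge whose outcome is most informative. The score used here, Bernoulli variance weighted by path coverage, is a stand-in heuristic of our own.

    import Data.List (maximumBy)
    import Data.Ord (comparing)
    import qualified Data.Map.Strict as M

    type Edge = Int
    type Path = [Edge]

    -- Stand-in informativeness score: how uncertain the edge is, times
    -- how many candidate paths its outcome would decide.
    score :: M.Map Edge Double -> [Path] -> Edge -> Double
    score prior paths e = p * (1 - p) * fromIntegral coverage
      where
        p        = M.findWithDefault 0.5 e prior
        coverage = length (filter (e `elem`) paths)

    -- Active selection: evaluate next the highest-scoring unchecked edge.
    nextEdge :: M.Map Edge Double -> [Path] -> [Edge] -> Edge
    nextEdge prior paths = maximumBy (comparing (score prior paths))

    -- Incorporate an evaluation: an invalid edge kills all paths through it.
    observe :: Edge -> Bool -> [Path] -> [Path]
    observe e valid paths
      | valid     = paths
      | otherwise = filter (e `notElem`) paths

    main :: IO ()
    main = print (nextEdge (M.fromList [(1, 0.9), (2, 0.5)]) [[1, 2], [2]] [1, 2])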
Lazy localization using the Frozen-Time Smoother
We present a new algorithm, the Frozen-Time Smoother (FTS), for solving the global localization problem. Time is 'frozen' in the sense that the belief always refers to the same time instant, instead of following a moving target as Monte Carlo Localization does. The algorithm applies when global localization is formulated as a smoothing problem and a precise estimate of the robot's incremental motion is available; these assumptions correspond to the case in which global localization is used to solve the loop-closing problem in SLAM. We compare the FTS to two Monte Carlo methods designed under the same assumptions. The experiments suggest that a naive implementation of the FTS is more efficient than an extremely optimized equivalent Monte Carlo solution. Moreover, the FTS is intrinsically lazy: it does not need frequent updates (scans can be integrated only once every several meters), and it can process data in arbitrary order. The source code and datasets are available for download.
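As a rough sketch of the frozen-time idea (our toy reconstruction, not the authors' code): the belief is a weighted hypothesis set over the pose at one fixed instant, and a scan taken later is folded in by composing each hypothesis with the known incremental motion and reweighting. Because each scan only reweights the same frozen belief, updates commute, which is why they can be applied sparsely and in arbitrary order.

    data Pose = Pose { px, py, pth :: Double }

    -- Weighted hypotheses about the robot pose at the frozen instant t0.
    type Belief = [(Pose, Double)]

    -- Apply a relative motion (from odometry, assumed accurate) to a pose.
    compose :: Pose -> Pose -> Pose
    compose (Pose x y th) (Pose dx dy dth) =
      Pose (x + cos th * dx - sin th * dy)
           (y + sin th * dx + cos th * dy)
           (th + dth)

    -- Sensor model: likelihood of an observation z from a given pose.
    type ScanModel z = Pose -> z -> Double

    -- Integrating a scan taken at time t only reweights the frozen
    -- belief at t0, given the relative motion t0 -> t.
    integrateScan :: ScanModel z -> Pose -> z -> Belief -> Belief
    integrateScan like motion z belief =
      [ (p, w * like (compose p motion) z) | (p, w) <- belief ]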