Stream Fusion, to Completeness
Stream processing is mainstream (again): Widely-used stream libraries are now
available for virtually all modern OO and functional languages, from Java to C#
to Scala to OCaml to Haskell. Yet expressivity and performance are still
lacking. For instance, the popular, well-optimized Java 8 streams do not
support the zip operator and are still an order of magnitude slower than
hand-written loops. We present the first approach that represents the full
generality of stream processing and eliminates overheads, via the use of
staging. It is based on an unusually rich semantic model of stream interaction.
We support any combination of zipping, nesting (or flat-mapping), sub-ranging,
filtering, and mapping of finite or infinite streams. Our model captures
idiosyncrasies that a programmer uses in optimizing stream pipelines, such as
rate differences and the choice of a "for" vs. a "while" loop. Our approach
delivers hand-written-like code, but automatically. It explicitly avoids the
reliance on black-box optimizers and sufficiently-smart compilers, offering
highest, guaranteed and portable performance. Our approach relies on high-level
concepts that are then readily mapped into an implementation. Accordingly, we
have two distinct implementations: an OCaml stream library, staged via
MetaOCaml, and a Scala library for the JVM, staged via LMS. In both cases, we
derive libraries richer and simultaneously many tens of times faster than past
work. We greatly outperform the standard stream libraries available
in Java, Scala and OCaml, including the well-optimized Java 8 streams.
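The kind of pipeline the abstract describes can be sketched in miniature. This is a hypothetical illustration, not the paper's OCaml/Scala API: Python generators stand in for streams, and the hand-written loop shows the fused form that staging is meant to generate automatically.

```python
import itertools

# Combinators over generators: a stand-in for a stream library's
# map/filter/flatMap/zip operators.
def smap(f, s):
    for x in s:
        yield f(x)

def sfilter(p, s):
    for x in s:
        if p(x):
            yield x

def flat_map(f, s):
    for x in s:
        yield from f(x)

# Pipeline: squares of the evens in 0..9, zipped with an infinite counter.
pipeline = zip(smap(lambda x: x * x, sfilter(lambda x: x % 2 == 0, range(10))),
               itertools.count())
result = list(pipeline)

# Hand-written equivalent (what fusion by staging aims to produce):
# one loop, no intermediate streams or closures.
fused = []
i = 0
for x in range(10):
    if x % 2 == 0:
        fused.append((x * x, i))
        i += 1

assert result == fused == [(0, 0), (4, 1), (16, 2), (36, 3), (64, 4)]
```

The generator version allocates an intermediate iterator per operator; eliminating exactly that per-element overhead, even in the presence of zip and rate differences, is what the staged approach guarantees.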
Towards an Adaptive Skeleton Framework for Performance Portability
The proliferation of widely available, but very different, parallel architectures
makes the ability to deliver good parallel performance
on a range of architectures, or performance portability, highly desirable.
Irregularly-parallel problems, where the number and size
of tasks is unpredictable, are particularly challenging and require
dynamic coordination.
The paper outlines a novel approach to delivering portable parallel
performance for irregularly parallel programs. The approach
combines declarative parallelism with JIT technology, dynamic
scheduling, and dynamic transformation.
We present the design of an adaptive skeleton library, with a task
graph implementation, JIT trace costing, and adaptive transformations.
We outline the architecture of the prototype adaptive skeleton
execution framework in Pycket, describing tasks, serialisation,
and the current scheduler. We report a preliminary evaluation of the
prototype framework using 4 micro-benchmarks and a small case
study on two NUMA servers (24 and 96 cores) and a small cluster
(17 hosts, 272 cores). Key results include Pycket delivering good
sequential performance, e.g. almost as fast as C for some benchmarks;
good absolute speedups on all architectures (up to 120 on
128 cores for sumEuler); and that the adaptive transformations do
improve performance.
Constellation Shaping for WDM systems using 256QAM/1024QAM with Probabilistic Optimization
In this paper, probabilistic shaping is numerically and experimentally
investigated for increasing the transmission reach of wavelength division
multiplexed (WDM) optical communication systems employing quadrature amplitude
modulation (QAM). An optimized probability mass function (PMF) of the QAM
symbols is first found from a modified Blahut-Arimoto algorithm for the optical
channel. A turbo coded bit interleaved coded modulation system is then applied,
which relies on many-to-one labeling to achieve the desired PMF, thereby
achieving shaping gain. Pilot symbols at a rate of at most 2% are used for
synchronization and equalization, making it possible to receive input
constellations as large as 1024QAM. The system is evaluated experimentally on a
10 GBaud, 5-channel WDM setup. The maximum system reach is increased w.r.t.
standard 1024QAM by 20% at an input data rate of 4.65 bits/symbol and up to 75% at
5.46 bits/symbol. It is shown that rate adaptation does not require changing of
the modulation format. The performance of the proposed 1024QAM shaped system is
validated on all 5 channels of the WDM signal for selected distances and rates.
Finally, it was shown via EXIT charts and BER analysis that iterative
demapping, while generally beneficial to the system, is not a requirement for
achieving the shaping gain.
Comment: 10 pages, 12 figures, Journal of Lightwave Technology, 201
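The shaping idea can be made concrete with a small sketch. Note this is an illustration only: the paper derives its PMF with a modified Blahut-Arimoto algorithm for the optical channel, whereas the sketch below uses the simpler Maxwell-Boltzmann family, a common shaping baseline, on the 4-point amplitude alphabet of one 16QAM quadrature axis; all names and numbers here are assumptions.

```python
import math

def mb_pmf(amplitudes, nu):
    """Maxwell-Boltzmann PMF: p(a) proportional to exp(-nu * a^2)."""
    weights = [math.exp(-nu * a * a) for a in amplitudes]
    z = sum(weights)
    return [w / z for w in weights]

def entropy_bits(pmf):
    """Entropy of the PMF in bits, i.e. the shaped symbol rate."""
    return -sum(p * math.log2(p) for p in pmf if p > 0)

amps = [-3, -1, 1, 3]               # one I/Q axis of 16QAM
pmf = mb_pmf(amps, nu=0.1)

# Low-energy symbols (+-1) become more probable than (+-3): the PMF
# trades entropy (rate) for lower average symbol energy, which is the
# source of the shaping gain.
assert pmf[1] > pmf[0] and abs(sum(pmf) - 1) < 1e-12
assert entropy_bits(pmf) < 2.0      # below the uniform log2(4) = 2 bits
```

Sweeping the parameter `nu` changes the PMF, and hence the rate, without changing the underlying constellation, mirroring the paper's observation that rate adaptation does not require changing the modulation format.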
Shared-Environment Call-by-Need
Call-by-need semantics formalize the wisdom that work should be done at most once. This frees programmers to focus more on the correctness of their code, and less on the operational details. Because of this property, programmers of lazy functional languages rely heavily on their compiler to both preserve correctness and generate high-performance code for high-level abstractions. In this dissertation I present a novel technique for compiling call-by-need semantics by using shared environments to share the results of computation. I show how the approach enables a compiler that generates high-performance code, while staying simple enough to lend itself to formal reasoning. The dissertation is divided into three main contributions. First, I present an abstract machine, the \ce machine, which formalizes the approach. Second, I show that it can be implemented as a native code compiler with encouraging performance results. Finally, I present a verified compiler, implemented in the Coq proof assistant, demonstrating how the simplicity of the approach enables formal verification.
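The "work done at most once" property can be illustrated with memoized thunks. This is a minimal assumed example, not the dissertation's abstract machine: it shows only the sharing behaviour that a shared-environment implementation must preserve.

```python
class Thunk:
    """A suspended computation that is evaluated at most once."""
    def __init__(self, compute):
        self.compute = compute
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.compute()   # work happens here, once
            self.evaluated = True
            self.compute = None           # drop the closure for GC
        return self.value

count = 0
def expensive():
    global count
    count += 1
    return 42

# Call-by-need reading of: let x = expensive in x + x
t = Thunk(expensive)
assert t.force() + t.force() == 84
assert count == 1   # the body ran once; the second use saw the shared result
```

In a shared-environment machine, the update of the thunk's slot is visible to every binding that shares that environment, which is what makes the sharing automatic rather than per-variable.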
HERMIT: Mechanized Reasoning during Compilation in the Glasgow Haskell Compiler
It is difficult to write programs which are both correct and fast. A promising approach, functional programming, is based on the idea of using pure, mathematical functions to construct programs. With effort, it is possible to establish a connection between a specification written in a functional language, which has been proven correct, and a fast implementation, via program transformation. When practiced in the functional programming community, this style of reasoning is still typically performed by hand, by either modifying the source code or using pen-and-paper. Unfortunately, performing such semi-formal reasoning by directly modifying the source code often obfuscates the program, and pen-and-paper reasoning becomes outdated as the program changes over time. Even so, this semi-formal reasoning prevails because formal reasoning is time-consuming, and requires considerable expertise. Formal reasoning tools often only work for a subset of the target language, or require programs to be implemented in a custom language for reasoning. This dissertation investigates a solution, called HERMIT, which mechanizes reasoning during compilation. HERMIT can be used to prove properties about programs written in the Haskell functional programming language, or transform them to improve their performance. Reasoning in HERMIT proceeds in a style familiar to practitioners of pen-and-paper reasoning, and mechanization allows these techniques to be applied to real-world programs with greater confidence. HERMIT can also re-check recorded reasoning steps on subsequent compilations, enforcing a connection with the program as the program is developed. HERMIT is the first system capable of directly reasoning about the full Haskell language. The design and implementation of HERMIT, motivated both by typical reasoning tasks and HERMIT's place in the Haskell ecosystem, is presented in detail. Three case studies investigate HERMIT's capability to reason in practice. 
These case studies demonstrate that semi-formal reasoning with HERMIT lowers the barrier to writing programs which are both correct and fast.
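The flavour of mechanized pen-and-paper reasoning can be sketched as a rewrite applied to an expression tree. This toy is not HERMIT's GHC Core plugin API; it only shows the style of transformation (here the textbook equation map f . map g = map (f . g)) being applied mechanically rather than by hand-editing source.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Map:
    """Application of `map f` to an argument (another Map or a variable)."""
    f: str
    arg: object

def fuse_maps(expr):
    """Rewrite map f (map g xs) to map (f . g) xs, applied bottom-up."""
    if isinstance(expr, Map):
        arg = fuse_maps(expr.arg)
        if isinstance(arg, Map):
            # The rewrite step a pen-and-paper proof would justify once,
            # here applied mechanically wherever it matches.
            return Map(f"({expr.f} . {arg.f})", arg.arg)
        return Map(expr.f, arg)
    return expr

e = Map("inc", Map("double", "xs"))          # map inc (map double xs)
assert fuse_maps(e) == Map("(inc . double)", "xs")
```

Recording such steps and re-checking them on each compilation, as HERMIT does, is what keeps the reasoning connected to the program as it evolves.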
Assessing the role of EO in biodiversity monitoring: options for integrating in-situ observations with EO within the context of the EBONE concept
The European Biodiversity Observation Network (EBONE) is a European contribution on terrestrial monitoring to GEO BON, the Group on Earth Observations Biodiversity Observation Network. EBONE's aims are to develop a system of biodiversity observation at regional, national and European levels by assessing existing approaches in terms of their validity and applicability starting in Europe, then expanding to regions in Africa. The objective of EBONE is to deliver:
1. A sound scientific basis for the production of statistical estimates of stock and change of key indicators;
2. The development of a system for estimating past changes and forecasting and testing policy options and management strategies for threatened ecosystems and species;
3. A proposal for a cost-effective biodiversity monitoring system.
There is a consensus that Earth Observation (EO) has a role to play in monitoring biodiversity. With its capacity to observe detailed spatial patterns and variability across large areas at regular intervals, our instinct suggests that EO could deliver the type of spatial and temporal coverage that is beyond reach with in-situ efforts. Furthermore, when considering the emerging networks of in-situ observations, the prospect of enhancing the quality of the information whilst reducing cost through integration is compelling. This report gives a realistic assessment of the role of EO in biodiversity monitoring and the options for integrating in-situ observations with EO within the context of the EBONE concept (cfr. EBONE-ID1.4). The assessment is mainly based on a set of targeted pilot studies. Building on this assessment, the report then presents a series of recommendations on the best options for using EO in an effective, consistent and sustainable biodiversity monitoring scheme.
The issues that we faced were many:
1. Integration can be interpreted in different ways. One possible interpretation is: the combined use of independent data sets to deliver a different but improved data set; another is: the use of one data set to complement another dataset.
2. The targeted improvement will vary with stakeholder group: some will seek more efficiency, others more reliable estimates (accuracy and/or precision); others more detail in space and/or time, or more of everything.
3. Integration requires a link between the datasets (EO and in-situ). The strength of the link between reflected electromagnetic radiation and the habitats and their biodiversity observed in-situ is a function of many variables, for example: the spatial scale of the observations; timing of the observations; the adopted nomenclature for classification; the complexity of the landscape in terms of composition, spatial structure and the physical environment; the habitat and land cover types under consideration.
4. The type of the EO data available varies (function of e.g. budget, size and location of region, cloudiness, national and/or international investment in airborne campaigns or space technology) which determines its capability to deliver the required output.
EO and in-situ could be combined in different ways, depending on the type of integration we wanted to achieve and the targeted improvement. We aimed for an improvement in accuracy (i.e. the reduction in error of our indicator estimate calculated for an environmental zone). Furthermore, EO would also provide the spatial patterns for correlated in-situ data.
EBONE in its initial development, focused on three main indicators covering:
(i) the extent and change of habitats of European interest in the context of a general habitat assessment;
(ii) abundance and distribution of selected species (birds, butterflies and plants); and
(iii) fragmentation of natural and semi-natural areas.
For habitat extent, we decided that it did not matter how the in-situ data were integrated with EO as long as we could demonstrate that acceptable accuracies could be achieved and the precision could consistently be improved. The nomenclature used to map habitats in-situ was the General Habitat Classification. We considered the following options where the EO and in-situ play different roles:
using in-situ samples to re-calibrate a habitat map independently derived from EO; improving the accuracy of in-situ sampled habitat statistics, by post-stratification with correlated EO data; and using in-situ samples to train the classification of EO data into habitat types where the EO data delivers full coverage or a larger number of samples.
For some of the above cases we also considered the impact that the sampling strategy employed to deliver the samples would have on the accuracy and precision achieved.
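One of the options above, post-stratification of in-situ statistics with correlated EO data, can be illustrated with a small worked example. The strata, weights and plot values below are made-up numbers, not EBONE data; the sketch only shows the estimator's mechanics.

```python
# EO-derived land-cover map gives each stratum's share of the region.
eo_weights = {"forest": 0.6, "grassland": 0.4}

# In-situ plots: a 0/1 habitat indicator per plot, grouped by the EO
# stratum each plot falls in.
in_situ = {"forest": [1, 1, 0, 1], "grassland": [0, 1, 0, 0]}

def post_stratified_mean(weights, samples):
    """Post-stratified estimator: sum over strata h of W_h * ybar_h."""
    return sum(w * (sum(samples[h]) / len(samples[h]))
               for h, w in weights.items())

naive = sum(v for xs in in_situ.values() for v in xs) / 8   # 0.5
est = post_stratified_mean(eo_weights, in_situ)

# 0.6 * 0.75 + 0.4 * 0.25 = 0.55: the full-coverage EO weights correct
# for the sample's over/under-representation of the strata.
assert abs(est - 0.55) < 1e-12
```

Because the EO map provides the stratum weights wall-to-wall, the in-situ sample no longer needs to be proportionally allocated across strata, which is the source of the precision gain the report targets.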
Restricted access to Europe-wide species data prevented work on the indicator "abundance and distribution of species".
With respect to the indicator "fragmentation", we investigated ways of delivering EO-derived measures of habitat patterns that are meaningful to sampled in-situ observations.
Transforming specifications of observable behaviour into programs
A methodology for deriving programs from specifications of observable
behaviour is described. The class of processes to which this methodology
is applicable includes those whose state changes are fully definable by labelled
transition systems, for example communicating processes without
internal state changes. A logic program representation of such labelled
transition systems is proposed, interpreters based on path searching techniques
are defined, and the use of partial evaluation techniques to derive
the executable programs is described.
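The representation and the path-searching interpreter can be sketched together. This is a hypothetical example (the states and labels are made up): a labelled transition system encoded as a set of (state, label, state) facts, in the spirit of the logic-program representation described above, with a breadth-first interpreter that derives the sequence of actions reaching a goal state.

```python
from collections import deque

# Facts: (source state, observable label, target state).
transitions = [
    ("idle", "coin", "paid"),
    ("paid", "tea", "idle"),
    ("paid", "coin", "paid2"),
    ("paid2", "coffee", "idle"),
]

def derive(start, goal):
    """Path search: shortest label sequence taking `start` to `goal`."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, trace = queue.popleft()
        if state == goal:
            return trace
        for s, label, t in transitions:
            if s == state and t not in seen:
                seen.add(t)
                queue.append((t, trace + [label]))
    return None   # goal unreachable from start

assert derive("idle", "paid2") == ["coin", "coin"]
```

The derived label sequence is the observable behaviour; partial evaluation of such an interpreter with respect to a fixed transition relation is what specializes it into an executable program for that process.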
Hydrodynamic behaviour of gliding hydrofoil crafts
A new type of high-speed craft, called a Gliding-Hydrofoil Craft (GHC), has recently been developed at Jiangsu University of Science and Technology, China. This craft is similar to a planing hull but with a hydrofoil in the front part of its body. The fixed hydrofoil improves the seakeeping properties and stability of the craft compared with a conventional planing hull. In addition, the GHC has a simpler structure and higher stability when compared to a hydrofoil craft. Unlike conventional planing hulls and hydrofoil crafts, the study of the hydrodynamics of the GHC has been overlooked. The present work aims to advance our understanding of the hydrodynamics of the GHC; both model tests and numerical investigations are presented.
To study its hydrodynamic characteristics, model tests are carried out in a towing tank, and the total resistance, trim angle and wetted area of the craft are measured at different Froude numbers. For the purpose of comparison, model tests have also been carried out for the hull without the hydrofoil. This thesis presents an analysis of the experimental data and discusses the effects of the submerged depth and initial attack angle of the hydrofoil on the hydrodynamic features of the GHC.
On this basis, the FLUENT software is then adopted to numerically investigate the hydrodynamics of the GHC. The accuracy of FLUENT in addressing this problem is validated by comparing the numerical solutions with the experimental data. The validation cases include a 2D hydrofoil in a current and a Wigley hull with steady forward speed. Good agreement between numerical results and experimental data was obtained. Considering the significance of the turbulence involved in the problem, especially near the hydrofoil, a numerical investigation aiming to find a suitable turbulence model has been carried out. After this validation, 3D numerical simulations of both the planing craft and the GHC in steady flow are considered. The resistance coefficient, pressure coefficient and wave pattern at different Froude numbers are investigated. Some results are compared with experimental data obtained in the model tests. The wave pattern, velocity field and pressure distribution near the hulls are discussed in detail, as well as the influence of the hydrofoil. Finally, the hydrodynamic performance of the GHC in unsteady flow is investigated. Three cases were considered: ship berthing, leaving the harbour and turning navigation direction, which are common unsteady scenarios in practice. The preliminary results presented in this thesis confirm the significant effects of the unsteady procedure and imply the need to carry out unsteady simulations in the future.