Loop Optimizations in C and C++ Compilers: An Overview
The evolution of computer hardware in the past decades has truly been remarkable. From scalar instruction execution through superscalar and vector to parallel, processors are able to reach astonishing speeds – if programmed accordingly. However, writing programs that take all the hardware details into consideration for the sake of efficiency is extremely difficult and error-prone. Therefore we increasingly rely on compilers to do the heavy lifting for us. A significant share of the optimizations done by compilers are loop optimizations. Loops are inherently expensive parts of a program in terms of run time, and it is important that they exploit superscalar and vector instructions. In this paper, we give an overview of the scientific literature on loop optimization technology, and summarize the status of current implementations in the most widely used C and C++ compilers in the industry.
Survey of new vector computers: The CRAY 1S from CRAY research; the CYBER 205 from CDC and the parallel computer from ICL - architecture and programming
Problems which can arise with vector and parallel computers are discussed in a user-oriented context. Emphasis is placed on the algorithms used and the programming techniques adopted. Three recently developed supercomputers are examined and typical application examples are given in CRAY FORTRAN, CYBER 205 FORTRAN and DAP (distributed array processor) FORTRAN. The systems' performance is compared. The addition of parts of two N x N arrays is considered. The influence of the architecture on the algorithms and programming language is demonstrated. Numerical analysis of magnetohydrodynamic differential equations by an explicit difference method is illustrated, showing very good results for all three systems. The prognosis for supercomputer development is assessed.
Data-Driven Refactorings for Haskell
Agile software development allows software to evolve slowly over time. Decisions
made during the early stages of a program's lifecycle often come with a cost in the
form of technical debt. Technical debt is the concept that reworking a program that
is implemented in a naive or "easy" way is often more difficult than changing the
behaviour of a more robust solution. Refactoring is one of the primary ways to reduce
technical debt.
Refactoring is the process of changing the internal structure of a program without
changing its external behaviour. The goal of performing refactorings is to increase code
quality, maintainability, and extensibility of the source program. Performing refactorings
manually is time-consuming and error-prone. This makes automated refactoring
tools very useful.
Haskell is a strongly typed, pure functional programming language. Haskell's rich
type system allows for complex and powerful data models and abstractions. These
abstractions and data models are an important part of Haskell programs. This thesis
argues that these parts of a program accrue technical debt, and that refactoring is an
important technique to reduce this type of technical debt.
Refactorings exist that tackle issues with a program's data model; however, these
refactorings are specific to the object-oriented programming paradigm. This thesis reports
on work done to design and automate refactorings that help Haskell programmers
develop and evolve these abstractions.
This work also discusses the current design and implementation of HaRe (the Haskell Refactorer). HaRe now supports the Glasgow Haskell Compiler's implementation of
the Haskell 2010 standard and its extensions, and uses some of GHC's internal packages
in its implementation.