Energyware engineering: techniques and tools for green software development
PhD Thesis in Informatics (MAP-i)

Energy consumption is nowadays one of the most important concerns worldwide. While
hardware is generally seen as the main culprit for a computer’s energy usage, software
too has a tremendous impact on the energy spent, as it can cancel out the efficiency
introduced by the hardware. Green Computing is not a new field of study, but the focus
has, until recently, been on hardware. While there have been advances in Green Software
techniques, there is still not enough support for software developers to make their
code more energy-aware, with various studies arguing that there is both a lack of
knowledge and a lack of tools for energy-aware development.
This thesis intends to tackle these two problems and aims to push research on Green
Software further forward. The issue of software energy consumption is approached as a
software engineering question. By using systematic, disciplined, and quantifiable
approaches to the development, operation, and maintenance of software, we define in
this document several techniques, methodologies, and tools. These focus on providing
software developers with more knowledge and tools to help with energy-aware software
development, or Energyware Engineering.
Insights are provided on the energy influence of several stages performed during
a software’s development process. We look at the energy efficiency of various popular
programming languages, determining which are the most appropriate when a developer’s
concern is energy consumption. A detailed study on the energy profiles of different
Java data structures is also presented, along with a technique and tool, providing
further knowledge on which energy-efficient alternatives a developer can choose from.
To help developers with the lack of tools, we defined and implemented a technique to
detect energy-inefficient fragments within the source code of a software system. This
technique and tool have been shown to help developers improve the energy efficiency of
their programs, even outperforming a runtime profiler. Finally, answers are provided
to common questions and misconceptions within this field of research, such as the
relationship between time and energy, and how one can improve their software’s energy
consumption.
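The kind of data-structure trade-off the abstract alludes to can be illustrated with a minimal Java timing sketch. Note the assumptions: the class and method names are hypothetical, the collections compared are just common JDK ones (not necessarily those profiled in the thesis), and wall-clock time is used only as a rough proxy, since actual energy measurement would require hardware counters such as Intel RAPL.

```java
import java.util.ArrayList;
import java.util.LinkedList;
import java.util.List;

public class CollectionCostSketch {

    // Times how long it takes to insert n elements at the front of a list.
    // Front insertion is O(n) per call for ArrayList (elements shift right)
    // but O(1) for LinkedList, so their cost profiles diverge sharply.
    static long frontInsertNanos(List<Integer> list, int n) {
        long start = System.nanoTime();
        for (int i = 0; i < n; i++) {
            list.add(0, i);
        }
        return System.nanoTime() - start;
    }

    public static void main(String[] args) {
        int n = 20_000;
        long arrayListNs = frontInsertNanos(new ArrayList<>(), n);
        long linkedListNs = frontInsertNanos(new LinkedList<>(), n);
        System.out.printf("ArrayList:  %d ms%n", arrayListNs / 1_000_000);
        System.out.printf("LinkedList: %d ms%n", linkedListNs / 1_000_000);
        // Time is only a proxy: the thesis stresses that energy and time
        // are correlated but not interchangeable metrics.
    }
}
```

Under a different usage pattern, such as random access by index, the ranking reverses, which is precisely why per-operation energy profiles of data structures are useful to a developer.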
This thesis represents a substantial effort to support both research and education on
this topic, helps green software continue to grow out of its infancy, and contributes to
solving the lack of knowledge and tools which exist for Energyware Engineering.

This work is partially funded by FCT – Foundation for Science and Technology, the
Portuguese Ministry of Science, Technology and Higher Education, through national funds,
and co-financed by the European Social Fund (ESF) through the Operacional Programme for
Human Capital (POCH), with scholarship reference SFRH/BD/112733/2015. Additionally,
funding was also provided by the ERDF – European Regional Development Fund – through
the Operational Programmes for Competitiveness and Internationalisation COMPETE and
COMPETE 2020, and by the Portuguese Government through FCT project Green Software
Lab (ref. POCI-01-0145-FEDER-016718), by the project GreenSSCM - Green Software for
Space Missions Control, a project financed by the Innovation Agency, SA, Northern Regional
Operational Programme, Financial Incentive Grant Agreement under the Incentive Research
and Development System, Project No. 38973, and by the Luso-American Foundation in
collaboration with the National Science Foundation with grant FLAD/NSF ref. 300/2015 and
ref. 275/2016.
On the performance of WebAssembly
Integrated Master’s dissertation in Informatics Engineering

The worldwide Web has evolved dramatically in recent years. Web pages are dynamic,
expressed by programs written in common programming languages, giving rise to
sophisticated Web applications. Thus, Web browsers have almost become operating
systems, having to interpret/compile such programs and execute them. Although
JavaScript is widely used to express dynamic Web pages, it has several shortcomings
and performance inefficiencies. To overcome such limitations, major IT powerhouses are
developing a new portable and size/load-efficient language: WebAssembly.
In this dissertation, we conduct the first systematic study on the energy and run-time
performance of WebAssembly and JavaScript on the Web. We used both micro-benchmarks and
real applications to obtain more realistic results. The results show that WebAssembly,
while still in its infancy, is already starting to outperform JavaScript, with much
more room to grow. A statistical analysis indicates that WebAssembly produces
significant performance differences compared to JavaScript; however, these differences
vary between micro-benchmarks and real-world benchmarks. Our results also show that
WebAssembly improved energy efficiency by 30%, on average, and show how differently
WebAssembly behaves across three popular Web browsers: Google Chrome, Microsoft Edge,
and Mozilla Firefox. Our findings indicate that WebAssembly is faster than JavaScript
and even more energy-efficient. Our benchmarking framework is
also available to allow further research and replication.
Towards Implicit Parallel Programming for Systems
Multi-core processors require a program to be decomposable into independent parts that can execute in parallel in order to scale performance with the number of cores. But parallel programming is hard, especially when the program requires state, which many system programs use for optimization, such as a cache to reduce disk I/O. Most prevalent parallel programming models do not support a notion of state and require the programmer to synchronize state access manually, i.e., outside the realm of an associated optimizing compiler. This prevents the compiler from introducing parallelism automatically and requires the programmer to optimize the program manually.
In this dissertation, we propose a programming language/compiler co-design that provides a new programming model for implicit parallel programming with state, and a compiler that can optimize the program for parallel execution.
We define the notion of a stateful function along with its composition and control structures. An example implementation of a highly scalable server shows that stateful functions integrate smoothly into existing programming language concepts, such as object-oriented programming and programming with structs. Our programming model is also highly practical and allows existing code bases to be adapted gradually. As a case study, we implemented a new data processing core for the Hadoop Map/Reduce system to overcome existing performance bottlenecks. Our lambda-calculus-based compiler automatically extracts parallelism without changing the program's semantics. We added further domain-specific, semantics-preserving transformations that reduce I/O calls for microservice programs. The runtime format of a program is a dataflow graph that can be executed in parallel, performs concurrent I/O, and allows for non-blocking live updates.
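The core idea of a stateful function can be sketched as an analogy in plain Java. The dissertation targets its own language/compiler co-design, so this rendering, with a hypothetical `CachedLookup` class, is only an illustration of the concept: a function whose instance encapsulates private state (here, the cache from the disk I/O example), so that a compiler or runtime can route calls and schedule independent instances in parallel without manual locking by the programmer.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Function;

public class StatefulFunctionSketch {

    // A stateful function: a call signature that looks pure from the
    // outside, whose state (the map) lives inside the instance.
    static class CachedLookup implements Function<String, String> {
        private final Map<String, String> cache = new HashMap<>();
        private final Function<String, String> backend;

        CachedLookup(Function<String, String> backend) {
            this.backend = backend;
        }

        @Override
        public String apply(String key) {
            // State access is local to this instance; no external
            // synchronization is needed as long as the runtime routes all
            // calls for this instance through one logical thread.
            return cache.computeIfAbsent(key, backend);
        }
    }

    public static void main(String[] args) {
        int[] backendCalls = {0};
        CachedLookup lookup = new CachedLookup(k -> {
            backendCalls[0]++;            // simulated expensive disk I/O
            return k.toUpperCase();
        });
        System.out.println(lookup.apply("disk-block-7")); // backend hit
        System.out.println(lookup.apply("disk-block-7")); // served from state
        System.out.println("backend calls: " + backendCalls[0]); // prints 1
    }
}
```

Because the state never escapes the instance, a dataflow runtime can compose many such functions into a graph and run independent instances concurrently, which is the property the implicit-parallelization compiler exploits.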
Energy Consumption of Functional Programs in the Context of Lazy Evaluation
We have limited natural resources available to support our daily living, be they raw materials for
manufacturing or energy to generate work. The pace at which we consume those resources is
approaching the limits at which nature can replenish them, and at which we can extract them.
It is with those resources that we develop the most varied technology, on which our modern
way of life is increasingly more dependent, to provide every kind of service conceivable.
In particular, the Information and Communication Technologies are an essential part of today’s
living. With ever more devices, supporting different services, in utilization, their energy demand
grows daily.
Aware of these facts, hardware and software developers seek ways to optimize the energy
consumed by computing hardware/software artifacts.
Our work, focused on software, was driven by the need to know whether, and to what
extent, we can save energy by refactoring existing programs.
To that end, we implemented a benchmark that was used to analyze the energy consumption
of various implementations of common data-structure abstractions from the Edison
library for the Haskell programming language.
Our findings lead us to conclude that we can save energy, to a great extent, depending
on how software programs use the native operations available in Edison.