AutoBench: comparing the time performance of Haskell programs
Two fundamental goals in programming are correctness (producing the right results) and efficiency (using as few resources as possible). Property-based testing tools such as QuickCheck provide a lightweight means to check the correctness of Haskell programs, but what about their efficiency? In this article, we show how QuickCheck can be combined with the Criterion benchmarking library to give a lightweight means to compare the time performance of Haskell programs. We present the design and implementation of the AutoBench system, demonstrate its utility with a number of case studies, and find that many QuickCheck correctness properties are also efficiency improvements.
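AutoBench itself couples QuickCheck with Criterion in Haskell; as a language-neutral illustration of the idea, here is a minimal Python sketch. The function names and the reverse example are invented for illustration, not AutoBench's actual API: random tests first establish that two implementations agree, and a timer then compares their speed — the pattern in which a correctness property is also an efficiency improvement.

```python
import random
import timeit

# Hypothetical analogue of the QuickCheck + Criterion combination; the
# helper names and example programs are invented for illustration.

def slow_reverse(xs):
    # quadratic reverse: prepending to a list copies it each time
    out = []
    for x in xs:
        out = [x] + out
    return out

def fast_reverse(xs):
    # linear reverse, analogous to foldl (flip (:)) [] in Haskell
    out = list(xs)
    out.reverse()
    return out

def agree_on_random_inputs(f, g, trials=100):
    """QuickCheck-style property check: f and g coincide on random inputs."""
    for _ in range(trials):
        xs = [random.randint(-100, 100) for _ in range(random.randint(0, 50))]
        if f(xs) != g(xs):
            return False
    return True

# Criterion-style comparison: time both implementations on the same input.
big = list(range(2000))
t_slow = timeit.timeit(lambda: slow_reverse(big), number=5)
t_fast = timeit.timeit(lambda: fast_reverse(big), number=5)
```

Here the correctness property (the two reverses agree on all tested inputs) doubles as an efficiency improvement: replacing `slow_reverse` with `fast_reverse` preserves results while typically reducing run time.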
From natural language requirements to formal descriptions in Alloy through boilerplates
Master's dissertation in Informatics Engineering. Formal methods are usually applied by specialists in the final phases of software development.
They aim to identify programming errors and thereby reduce the probability of future failures. In practice, errors stem more often from misinterpretation of requirements than from bad programming. More than ever, requirements documents deal with complex terms that programmers are not familiar with, increasing both the misinterpretation of requirements and the cost of executing a software project. The use of formal methods could reduce these costs if they were used to verify requirements rather than source code. However, most companies avoid formal methods because of the high costs associated with their application. Programmers and requirements engineers cannot apply formal methods effectively without prior specific training, which means hiring expensive formal-methods specialists.
This dissertation presents methods that aim to bring formal methods closer to requirements descriptions. To that end, formal modelling is used to verify and validate the descriptions of requirements rather than source code. First, a standard for creating formal models is presented, which establishes a direct correspondence between each requirement and its model. This standard is supported by a tool which, among other things, automatically generates graphical representations of requirements from their models. Next, a connection between requirements boilerplates and Alloy models is presented. This connection makes it possible to generate formal models automatically, without the need for a specialist, drastically reducing the cost of using formal methods in software projects. The beginnings of an algebra for aggregating these templates are also presented. This aggregation allows one to write a requirements document through boilerplates and, at the end, obtain the complete model of all requirements for free.
When modelling a requirements document in Alloy, requirements with explicit temporal restrictions may appear at some point, making it necessary to recreate the whole model in a tool that supports that kind of specification (e.g. Uppaal). This process is highly error-prone, because it is a manual transformation that depends heavily on the interpretation of whoever is modelling. This dissertation presents a method for automatically generating an Uppaal model from an Alloy model. This transformation allows the requirements engineer, at any point in the requirements document, to generate the corresponding Uppaal model and specify the temporal properties there.
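The boilerplate-to-Alloy connection described above can be sketched as template instantiation: a fixed sentence pattern with slots is paired with an Alloy model skeleton, so filling the slots yields both the requirement text and its formal model. The boilerplate form and the Alloy skeleton below are invented for illustration; they are not the dissertation's actual templates.

```python
# Hypothetical sketch: one boilerplate of the form
#   "The <actor> shall <action> the <entity>."
# is paired with an Alloy model skeleton, so filling the slots produces a
# formal model automatically, without a formal-methods specialist.

ALLOY_TEMPLATE = """sig {actor} {{}}
sig {entity} {{}}
pred {action}[a: {actor}, e: {entity}] {{}}
fact {{ all a: {actor} | some e: {entity} | {action}[a, e] }}"""

def boilerplate_to_alloy(actor, action, entity):
    """Instantiate the Alloy skeleton from one filled-in boilerplate."""
    return ALLOY_TEMPLATE.format(actor=actor, action=action, entity=entity)

model = boilerplate_to_alloy("Operator", "configure", "Sensor")
```

Because each boilerplate maps to a self-contained fragment, an aggregation step (the algebra mentioned above) could concatenate the fragments of a whole requirements document into a single model.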
Hybrid eager and lazy evaluation for efficient compilation of Haskell
Thesis (Ph.D.), Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2002. Includes bibliographical references (p. 208-220). This electronic version was submitted by the student author. The certified thesis is available in the Institute Archives and Special Collections.
The advantage of a non-strict, purely functional language such as Haskell lies in its clean equational semantics. However, lazy implementations of Haskell fall short: they cannot express tail recursion gracefully without annotation. We describe resource-bounded hybrid evaluation, a mixture of strict and lazy evaluation, and its realization in Eager Haskell. From the programmer's perspective, Eager Haskell is simply another implementation of Haskell with the same clean equational semantics. Iteration can be expressed using tail recursion, without the need to resort to program annotations. Under hybrid evaluation, computations are ordinarily executed in program order, just as in a strict functional language. When particular stack, heap, or time bounds are exceeded, suspensions are generated for all outstanding computations. These suspensions are re-started in a demand-driven fashion from the root. The Eager Haskell compiler translates Ac, the compiler's intermediate representation, to efficient C code. We use an equational semantics for Ac to develop simple correctness proofs for program transformations, and connect actions in the run-time system to steps in the hybrid evaluation strategy. The focus of compilation is efficiency in the common case of straight-line execution; the handling of non-strictness and suspension is left to the run-time system. Several additional contributions have resulted from the implementation of hybrid evaluation. Eager Haskell is the first eager compiler to use a call stack. Our generational garbage collector uses this stack as an additional predictor of object lifetime. Objects above a stack watermark are assumed likely to die, so we avoid promoting them; those below are likely to remain untouched and are therefore good candidates for promotion. To avoid eagerly evaluating error checks, they are compiled into special bottom thunks, which are treated specially by the run-time system. The compiler identifies error-handling code using a mixture of strictness and type information. This information is also used to avoid inlining error handlers, and to enable aggressive program transformation in the presence of error handling.
by Jan-Willem Maessen. Ph.D.
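The core mechanism — run eagerly until a resource bound is exceeded, then suspend outstanding work as demand-driven thunks — can be sketched in miniature. The following toy Python model is not the Eager Haskell runtime; the class names and the single step budget are invented to illustrate the strategy only.

```python
# Toy sketch of resource-bounded hybrid evaluation: computations run
# eagerly while a step budget lasts; once it is exhausted, further work
# is suspended as thunks and re-started only when demanded.

class Thunk:
    """A suspended computation, forced only when its value is demanded."""
    def __init__(self, fn):
        self.fn = fn
        self.value = None
        self.forced = False

    def force(self):
        if not self.forced:
            self.value = self.fn()
            self.forced = True
        return self.value

class HybridEvaluator:
    def __init__(self, budget):
        self.budget = budget  # remaining eager steps (stand-in for stack/heap/time bounds)

    def eval(self, fn):
        if self.budget > 0:
            # Eager path: execute in program order, as a strict language would.
            self.budget -= 1
            return fn()
        # Bound exceeded: generate a suspension for the outstanding computation.
        return Thunk(fn)

ev = HybridEvaluator(budget=2)
a = ev.eval(lambda: 1 + 1)    # eager
b = ev.eval(lambda: 2 * 3)    # eager
c = ev.eval(lambda: 10 - 3)   # budget spent: suspended, forced on demand
```

In the real system the bounds are stack, heap, and time limits and the suspensions are resumed demand-driven from the root, but the eager-then-suspend shape is the same.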
Interaction-aware development environments: recording, mining, and leveraging IDE interactions to analyze and support the development flow
Nowadays, software development is largely carried out using Integrated Development Environments, or IDEs. An IDE is a collection of tools and facilities to support the most diverse software engineering activities, such as writing code, debugging, and program understanding. The fact that they are integrated enables developers to find all the tools needed for development in the same place. Each activity is composed of many basic events, such as clicking on a menu item in the IDE, opening a new user interface to browse the source code of a method, or adding a new statement in the body of a method. While working, developers generate thousands of these interactions, which we call fine-grained IDE interaction data. We believe this data is a valuable source of information that can be leveraged to enable better analyses and to offer novel support to developers. However, this data is largely neglected by modern IDEs. In this dissertation we propose the concept of "Interaction-Aware Development Environments": IDEs that collect, mine, and leverage the interactions of developers to support and simplify their workflow. We formulate our thesis as follows: Interaction-Aware Development Environments enable novel and in-depth analyses of the behavior of software developers and set the ground to provide developers with effective and actionable support for their activities inside the IDE. For example, by monitoring how developers navigate source code, the IDE could suggest the program entities that are potentially relevant for a particular task. Our research focuses on three main directions: 1. Modeling and Persisting Interaction Data. The first step to make IDEs aware of interaction data is to overcome its ephemeral nature. To do so we have to model this new source of data and persist it, making it available for further use. 2. Interpreting Interaction Data. One of the biggest challenges of our research is making sense of the millions of interactions generated by developers. We propose several models to interpret this data, for example, by reconstructing high-level development activities from interaction histories or measuring the navigation efficiency of developers. 3. Supporting Developers with Interaction Data. Novel IDEs can use the potential of interaction data to support software development. For example, they can identify the UI components that are potentially unnecessary in the future and suggest that developers close them, reducing the visual clutter of the IDE.
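The first two directions — modeling interaction events and mining them — can be illustrated with a minimal sketch. The event kinds, fields, and the "most-visited entities" metric below are hypothetical, not the dissertation's actual schema or models.

```python
# Hypothetical model of fine-grained IDE interaction data: each basic
# event is recorded with a timestamp, a kind, and the program entity it
# touches; a recorded session can then be mined, e.g. to rank the
# entities a developer visited most often.

from collections import Counter
from dataclasses import dataclass

@dataclass
class InteractionEvent:
    timestamp: float   # seconds since session start
    kind: str          # e.g. "open", "edit", "navigate"
    entity: str        # the program entity involved

def most_visited(events, n=3):
    """Rank entities by how often the developer touched them; such a
    ranking could back a 'relevant entities for this task' suggestion."""
    counts = Counter(e.entity for e in events)
    return [entity for entity, _ in counts.most_common(n)]

session = [
    InteractionEvent(0.0, "open", "Parser"),
    InteractionEvent(1.5, "edit", "Parser"),
    InteractionEvent(3.2, "navigate", "Lexer"),
    InteractionEvent(4.0, "edit", "Parser"),
]
```

Persisting such event streams (direction 1) is what makes the mining (direction 2) and the in-IDE suggestions (direction 3) possible at all.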
Reinhabiting Havana
Thesis (S.M.), Massachusetts Institute of Technology, Dept. of Architecture, 1998. Includes bibliographical references (leaves [92]-[93]). The project presented here is about the transformation of an extant fabric. Observation of the built environment in Havana revealed that people's interaction with their built environment has produced artifacts like the barbacoa. Their interaction is a creative attempt to resolve the need for more housing space. This makeshift construction may solve the problem temporarily. The thesis deals with the observation and documentation of this artifact in order to develop a new model for housing based on the barbacoa and its culture. This new model will increase the density of buildings, provide a prototype that can be adapted to different building types, and help preserve a way of life in Havana.
by Frank Javier Valdes. S.M.
Neural Radiance Fields: Past, Present, and Future
The various aspects of modeling and interpreting 3D environments and surroundings have enticed humans to advance their research in 3D Computer Vision, Computer Graphics, and Machine Learning. The paper by Mildenhall et al. on NeRFs (Neural Radiance Fields) led to a boom in Computer Graphics, Robotics, and Computer Vision, and the prospect of high-resolution, low-storage Augmented Reality and Virtual Reality 3D models has gained traction from researchers, with more than 1000 preprints related to NeRFs published. This paper serves as a bridge for people starting to study these fields, building from the basics of Mathematics, Geometry, Computer Vision, and Computer Graphics to the difficulties encountered in Implicit Representations at the intersection of all these disciplines. This survey provides the history of rendering, Implicit Learning, and NeRFs, the progression of research on NeRFs, and the potential applications and implications of NeRFs in today's world. In doing so, this survey categorizes all the NeRF-related research in terms of the datasets used, objective functions, applications solved, and evaluation criteria for these applications.
Comment: 413 pages, 9 figures, 277 citations
Aeronautical engineering: A special bibliography with indexes, supplement 82, April 1977
This bibliography lists 311 reports, articles, and other documents introduced into the NASA scientific and technical information system in March 1977.
Towards greener software engineering through software analysis
Mobile apps that do not drain the battery usually get good user ratings. To make mobile apps energy efficient, many refactoring guidelines and tools have been published that help optimize the app code.
However, these guidelines cannot be generalized with respect to energy efficiency, as there is not enough energy-related data for every context. Existing energy enhancement tools/profilers are mostly prototypes applicable to only a small subset of energy-related problems. In addition, the existing guidelines and tools mostly address energy issues a posteriori, i.e., once they have already been introduced into the code.
Android app code can be roughly divided into two parts: the custom code and the reusable code. Custom code is unique to each app. Reusable code includes third-party libraries that are included in apps to speed up the development process. We start by evaluating the energy consumption of various code smell refactorings in native Android apps. Then we conduct an empirical study on the energy impact of third-party network libraries used in Android apps. We provide generalized contextual guidelines that could be used during app development.
Further, we conduct a systematic literature review to identify and study the state-of-the-art support tools available to aid green Android development. Based on this study and the experiments we conducted before, we highlight the problems in capturing and reproducing hardware-based energy measurements. We develop the support tool ‘ARENA’, which can help gather energy data and analyze the energy consumption of Android apps. Last, we develop the support tool ‘REHAB’ to recommend energy-efficient third-party network libraries to developers.
https://www.ester.ee/record=b547174
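The recommendation idea behind a tool like REHAB can be sketched abstractly: given energy measurements for candidate libraries in a given context, suggest the one with the lowest measured consumption. The data, library names, and ranking rule below are invented for illustration and are not REHAB's actual method.

```python
# Toy sketch of energy-based library recommendation: rank third-party
# network libraries by the mean of their measured energy samples and
# recommend the cheapest one for this usage context.

def recommend_library(measurements):
    """measurements: {library_name: [energy samples in joules]}"""
    means = {lib: sum(samples) / len(samples)
             for lib, samples in measurements.items() if samples}
    return min(means, key=means.get)

# Illustrative (fabricated) per-request measurements for three libraries:
samples = {
    "libA": [2.1, 2.3, 2.2],
    "libB": [1.4, 1.6, 1.5],
    "libC": [3.0, 2.8, 3.1],
}
```

In practice, as the abstract notes, the hard part is capturing reproducible hardware-based measurements in the first place; the ranking itself is the easy step.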