Transfuse: A Compile-Time Metaprogramming Solution for Reducing Boilerplate on Google's Android
Modern Java application development makes use of metaprogramming to offset and reduce application boilerplate. Unfortunately, metaprogramming techniques typically incur a relatively high run-time cost, particularly at application startup. Environments with limited resources, or without the luxury of a warm-up period, therefore often rule out metaprogramming as an option. This is precisely the case with applications written for Google Android: Android applications run on low-resource mobile hardware and lack an offline startup period. As a result, Android applications often suffer from a large amount of boilerplate. Fortunately, there is an alternative to the traditional metaprogramming approach. In this thesis, we examine the approach of a metaprogramming tool named Transfuse. Transfuse targets boilerplate reduction within the constraints prescribed by the Android environment, accomplishing this through compile-time analysis and code generation. This approach is analyzed from both boilerplate-reduction and run-time-performance perspectives.
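The trade-off the abstract describes can be sketched generically. The snippet below is illustrative only (it is not Transfuse's API, and the registry and service names are invented): run-time metaprogramming pays a reflective lookup cost on every launch, whereas a compile-time generator can emit the equivalent straight-line wiring once, at build time.

```javascript
// Hypothetical service registry (invented for illustration).
const registry = {
  logService: () => ({ log: (m) => m }),
  netService: () => ({ fetch: (u) => u }),
};

// Run-time approach: walk the registry reflectively at every startup.
function injectAtRuntime(target) {
  for (const name of Object.keys(registry)) {
    target[name] = registry[name](); // dynamic lookup + call on each launch
  }
  return target;
}

// Compile-time approach: a code generator would emit this straight-line
// wiring at build time, so startup does no reflective scanning.
function injectGenerated(target) {
  target.logService = registry.logService();
  target.netService = registry.netService();
  return target;
}

console.log(Object.keys(injectAtRuntime({})));  // ["logService", "netService"]
console.log(Object.keys(injectGenerated({})));  // same wiring, no scan
```

Both functions produce the same wired object; the second simply moves the metaprogramming work from every application startup to a single build step.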
Bigraph Metaprogramming for Distributed Computation
Ubiquitous computing is a paradigm that emphasises integration of computing activities into the fabric of everyday life. With the increasing availability of small, cheap computing devices, the ubiquitous computing model seems more and more likely to supplant desktop computing as the dominant paradigm. Similarly, the presence of high-speed network connectivity between vast numbers of computers has already made distributed computing the preferred paradigm for many application domains. Unfortunately, traditional approaches to software development are not necessarily well-suited to developing software in a post-desktop world. We present an extension to the bigraphical reactive systems formalism that enables us to construct a programming language based upon it. We believe that this programming language provides programmers with an environment better suited to the challenges that arise when creating software within a distributed or ubiquitous computing paradigm. We detail our modification to the theory of bigraphical reactive systems that enables metaprogramming. Finally, we provide a description of our prototype implementation of a programming language that enables metaprogramming of bigraphical reactive systems.
On the Impact of Atoms of Confusion in JavaScript Code
Undergraduate thesis (graduação), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2019. One of the main aspects of engineering maintainable software is the concern with how understandable the code is from the human perspective. Since having to spend a lot of cognitive resources can be discouraging when familiarising with new blocks of code, it is important to write code that is as straightforward as possible to understand.
Moreover, developing tools to automate the process of rewriting code that is difficult to understand can save not only the time that would be spent understanding the code, but also the time that would be spent writing a simpler version by hand. In this paper, we present the methodology and the results of a survey conducted with over 200 programmers of different levels of experience and education, in which we sought to isolate some of the smallest possible snippets of confusing code in JavaScript, known as "atoms of confusion". Upon measuring the disparity in answer correctness between confusing and simplified pieces of code, as well as differences in the time taken to predict a program's output, we showed that certain constructs make code significantly harder to understand. To conclude the work, we propose the use of a metaprogramming tool to automatically detect confusing code.
Transformational maintenance by reuse of design histories
This thesis provides theory and procedures for modifying software artifacts implemented by a formal transformation process. Installing modifications requires knowing not only what transformations were applied (a derivation history) to construct the artifact, but also why the application sequence ensures that the artifact meets its specification. The derivation history and the justification are collectively called a design history. A Design Maintenance System (DMS), when provided with a formal change called a maintenance delta, revises a design history to guide construction of a new artifact. A DMS can be used to integrate a stream of deltas into a history, providing implementations as a side effect, leading to an incremental-evolution model for software construction. We provide a broadly applicable formal model of transformation systems in which specifications are performance predicates, subsuming the functional specifications which are traditional for transformation systems. Such performance predicates provide vocabulary used in the design history to describe the effect of applying sets of transformations. A nonprocedural, performance-goal-oriented Transformation Control Language (TCL) is defined to control navigation of the design space for a transformation system. Recording the execution of a TCL metaprogram directly provides a design history. A complete classification of, and representation for, the set of possible maintenance deltas is given in terms of the inputs defined by the transformation system model. Such deltas include not only specification changes, but also changes to implementation support technologies. Delta integration procedures for revising derivation histories given functional or support technology deltas are provided, based on rearranging the order of transformations in the design space. Building on these operations, integration procedures that revise the design history for each type of delta are described.
An agenda-oriented TCL execution process dovetails smoothly with the integration procedures. Our DMS is compared to a number of other maintenance systems. By using an explicit delta and verified commutativity, our DMS often reuses transformations correctly when others fail.
Ur/Web: A Simple Model for Programming the Web
The World Wide Web has evolved gradually from a document delivery platform to an architecture for distributed programming. This largely unplanned evolution is apparent in the set of interconnected languages and protocols that any Web application must manage. This paper presents Ur/Web, a domain-specific, statically typed functional programming language with a much simpler model for programming modern Web applications. Ur/Web's model is unified, where programs in a single programming language are compiled to other "Web standards" languages as needed; modular, supporting novel kinds of encapsulation of Web-specific state; and exposes simple concurrency, where programmers can reason about distributed, multithreaded applications via a mix of transactions and cooperative preemption. We give a tutorial introduction to the main features of Ur/Web, formalize the basic programming model with operational semantics, and discuss the language implementation and the production Web applications that use it. National Science Foundation (U.S.) (Grant CCF-1217501)
Unwoven Aspect Analysis
Various languages and tools supporting advanced separation of concerns (such as aspect-oriented programming) give a software developer the ability to separate functional and non-functional programmatic intentions. Once these separate pieces of the software have been specified, the tools automatically handle the interaction points between separate modules, relieving the developer of this chore and permitting more understandable, maintainable code. Many approaches have left traditional compiler analysis and optimization until after the composition has been performed; unfortunately, analyses performed after composition cannot make use of the logical separation present in the original program. Further, for modular systems that can be configured with different sets of features, testing under every possible combination of features may be necessary to avoid bugs in production software, and is correspondingly time-consuming. To address this testing problem, we investigate a feature-aware compiler analysis that runs during composition and discovers features that are strongly independent of each other. When their independence can be established, the number of feature combinations that must be separately tested can be reduced. We develop this approach and discuss our implementation. We also look forward to future programming languages in two ways: we implement solutions to problems that are conceptually aspect-oriented but for which current aspect languages and tools fail, and we study these cases to consider what language designs might provide even more information to a compiler. We describe some features that such a future language might have, based on our observations of current language deficiencies and our experience with compilers for these languages.
The Computational Intelligence of MoGo Revealed in Taiwan's Computer Go Tournaments
The authors are extremely grateful to Grid5000 for helping in designing and experimenting around Monte-Carlo Tree Search. In order to promote computer Go and stimulate further development and research in the field, the event activities "Computational Intelligence Forum" and "World 9x9 Computer Go Championship" were held in Taiwan. This study focuses on the invited games played in the tournament "Taiwanese Go players versus the computer program MoGo," held at National University of Tainan (NUTN). Several Taiwanese Go players, including one 9-Dan professional Go player and eight amateur Go players, were invited by NUTN to play against MoGo from August 26 to October 4, 2008. The MoGo program combines All Moves As First (AMAF)/Rapid Action Value Estimation (RAVE) values, online "UCT-like" values, offline values extracted from databases, and expert rules. Additionally, four properties of MoGo are analyzed: (1) the weakness in corners, (2) the scaling over time, (3) the behavior in handicap games, and (4) the main strength of MoGo in contact fights. The results reveal that MoGo can reach the level of 3 Dan, with (1) good skills for fights, (2) weaknesses in corners, in particular for "semeai" situations, and (3) weaknesses in favorable situations such as handicap games. It is hoped that advances in artificial intelligence and computational power will enable considerable progress in the field of computer Go, with the aim of achieving the same level as computer chess or Chinese chess in the future.
Origin Gaps and the Eternal Sunshine of the Second-Order Pendulum
The rich experiences of an intentional, goal-oriented life emerge, in an unpredictable fashion, from the basic laws of physics. Here I argue that this unpredictability is no mirage: there are true gaps between life and non-life, mind and mindlessness, and even between functional societies and groups of Hobbesian individuals. These gaps, I suggest, emerge from the mathematics of self-reference, and the logical barriers to prediction that self-referring systems present. Still, a mathematical truth does not imply a physical one: the universe need not have made self-reference possible. It did, and the question then is how. In the second half of this essay, I show how a basic move in physics, known as renormalization, transforms the "forgetful" second-order equations of fundamental physics into a rich, self-referential world that makes possible the major transitions we care so much about. While the universe runs in assembly code, the coarse-grained version runs in LISP, and it is from that level that the world of aim and intention grows.
Comment: FQXi Prize Essay 2017. 18 pages, including afterword on Ostrogradsky's Theorem and an exchange with John Bova, Dresden Craig, and Paul Livingston.
Program Transformations in Magnolia
We explore program transformations in the context of the Magnolia programming language. We discuss research on and implementations of transformation techniques, scenarios in which to put them to use in Magnolia, interfacing with transformations, and the potential workflows and tooling that this approach to programming enables. Master's thesis in informatics (INF39).