    Genetic Improvement of Software: a Comprehensive Survey

    Genetic improvement uses automated search to find improved versions of existing software. We present a comprehensive survey of this nascent field of research with a focus on the core papers in the area published between 1995 and 2015. We identified core publications including empirical studies, 96% of which use evolutionary algorithms (genetic programming in particular). Although we can trace the foundations of genetic improvement back to the origins of computer science itself, our analysis reveals a significant upsurge in activity since 2012. Genetic improvement has resulted in dramatic performance improvements for a diverse set of properties such as execution time, energy and memory consumption, as well as results for fixing and extending existing system functionality. Moreover, we present examples of research work that lies on the boundary between genetic improvement and other areas, such as program transformation, approximate computing, and software repair, with the intention of encouraging further exchange of ideas between researchers in these fields
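
    The survey does not prescribe a single algorithm, but the search it describes can be pictured as a small evolutionary loop. The sketch below is a hypothetical, heavily simplified illustration (not taken from any surveyed system): candidate patches to an existing program are mutated, scored by a stand-in fitness function, and selected across generations; a real genetic-improvement system would instead compile, test, and profile each patched program, and would typically add crossover between patches.

```cpp
// Hypothetical, heavily simplified genetic-improvement loop (illustrative only).
// A "patch" is a short list of edit IDs applied to an existing program; the
// fitness function below is a stand-in for "apply the patch, run the test
// suite, and profile the result", which a real GI system would do instead.
#include <algorithm>
#include <cstddef>
#include <cstdlib>
#include <iostream>
#include <random>
#include <vector>

using Patch = std::vector<int>;   // indices into a catalogue of candidate edits

std::mt19937 rng{42};

// Stand-in fitness: lower is better (think "runtime, penalised by failing tests").
double fitness(const Patch& p) {
    double score = 0.0;
    for (int e : p) score += std::abs(e - 7);   // pretend edit 7 is the useful one
    return score;
}

// Mutation: replace one edit in the patch with a random alternative.
Patch mutate(Patch p) {
    std::uniform_int_distribution<int> pos(0, static_cast<int>(p.size()) - 1);
    std::uniform_int_distribution<int> edit(0, 15);
    p[pos(rng)] = edit(rng);
    return p;
}

int main() {
    std::uniform_int_distribution<int> edit(0, 15);
    std::vector<Patch> population(20, Patch(4));
    for (Patch& p : population)
        for (int& e : p) e = edit(rng);         // random initial patches

    const std::size_t half = population.size() / 2;
    for (int gen = 0; gen < 50; ++gen) {
        // Selection: keep the fitter half, refill the rest by mutating survivors.
        std::sort(population.begin(), population.end(),
                  [](const Patch& a, const Patch& b) { return fitness(a) < fitness(b); });
        for (std::size_t i = half; i < population.size(); ++i)
            population[i] = mutate(population[i - half]);
    }
    std::cout << "best fitness found: " << fitness(population.front()) << "\n";
}
```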

    Make UNCOL cool again

    Undergraduate thesis (TCC) - Universidade Federal de Santa Catarina, Centro Tecnológico, Ciências da Computação. Developing and maintaining an optimizing compiler requires great amounts of effort. At the same time, the number of compilers needed in order to translate many high-level languages to every other target architecture grows multiplicatively. One possible approach to solve this problem is the adoption of a shared intermediate representation, also known as Universal Computer Oriented Language (UNCOL). While the UNCOL solution was first proposed in 1958, there have been many developments in compiler technology and programming language theory since then. This work aims to re-evaluate the idea of a universal intermediate representation in light of these advances, beginning by surveying the programming language literature and state of the art in order to identify requirements and design principles for a modern version of UNCOL. Then, these requirements are used to analyze program representations in existing compiler infrastructures. Furthermore, the set of principles extracted from the systematic review has motivated the design of a new intermediate representation, which combines compilation techniques from various sources in the literature. Multiple aspects of the new intermediate representation are described, encompassing a formal definition of its multigraph structure, an informal explanation of its semantics in terms of the join calculus, and a few custom optimization algorithms, including a formulation of global Dead Code Elimination as a garbage collection process.
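
    The abstract's framing of global Dead Code Elimination as a garbage collection process can be pictured roughly as follows. The sketch below is an illustrative reconstruction, not the thesis's actual algorithm or multigraph IR: instructions with observable effects (stores, calls, returns) act as GC roots, the mark phase follows data-dependence edges, and the sweep phase discards whatever was never marked.

```cpp
// Illustrative reconstruction: global dead code elimination phrased as
// mark-and-sweep garbage collection over a dependence-graph IR. The node
// layout here is invented for the example; it is not the thesis's multigraph IR.
#include <iostream>
#include <string>
#include <vector>

struct Node {
    std::string op;
    std::vector<int> uses;    // data dependences: indices of nodes this one reads
    bool has_effect = false;  // stores, calls, returns, ... act as GC roots
    bool marked = false;
};

// Mark phase: start from effectful roots and follow dependence edges.
void mark(std::vector<Node>& ir, int idx) {
    if (ir[idx].marked) return;
    ir[idx].marked = true;
    for (int use : ir[idx].uses) mark(ir, use);
}

int main() {
    // 0: x = load a   1: y = x + 1   2: z = x * 2 (z is never used)   3: store y
    std::vector<Node> ir = {
        {"load a", {},  false},
        {"add",    {0}, false},
        {"mul",    {0}, false},
        {"store",  {1}, true},
    };

    for (int i = 0; i < static_cast<int>(ir.size()); ++i)
        if (ir[i].has_effect) mark(ir, i);

    // Sweep phase: anything not reachable from an effect is dead and removed.
    for (const Node& n : ir)
        std::cout << n.op << (n.marked ? "  -> live\n" : "  -> dead, eliminated\n");
}
```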

    Technology Made Legible: A Cultural Study of Software as a Form of Writing in the Theories and Practices of Software Engineering

    My dissertation proposes an analytical framework for the cultural understanding of the group of technologies commonly referred to as 'new' or 'digital'. I aim at dispelling what the philosopher Bernard Stiegler calls the 'deep opacity' that still surrounds new technologies, and that constitutes one of the main obstacles in their conceptualization today. I argue that such a critical intervention is essential if we are to take new technologies seriously, and if we are to engage with them on both the cultural and the political level. I understand new technologies as technologies based on software. I therefore suggest that a complex understanding of technologies, and of their role in contemporary culture and society, requires, as a preliminary step, an investigation of how software works. This involves going beyond studying the intertwined processes of its production, reception and consumption - processes that typically constitute the focus of media and cultural studies. Instead, I propose a way of accessing the ever present but allegedly invisible codes and languages that constitute software. I thus reformulate the problem of understanding software-based technologies as a problem of making software legible. I build my analysis on the concept of software advanced by Software Engineering, a technical discipline born in the late 1960s that defines software development as an advanced writing technique and software as a text. This conception of software enables me to analyse it through a number of reading strategies. I draw on the philosophical framework of deconstruction as formulated by Jacques Derrida in order to identify the conceptual structures underlying software and hence 'demystify' the opacity of new technologies. Ultimately, I argue that a deconstructive reading of software enables us to recognize the constitutive, if unacknowledged, role of technology in the formation of both the human and academic knowledge. This reading leads to a self-reflexive interrogation of the media and cultural studies' approach to technology and enhances our capacity to engage with new technologies without separating our cultural understanding from our political practices

    Aesthetic Programming

    Aesthetic Programming explores the technical as well as cultural imaginaries of programming from its insides. It follows the principle that the growing importance of software requires a new kind of cultural thinking — and curriculum — that can account for, and with which to better understand the politics and aesthetics of algorithmic procedures, data processing and abstraction. It takes a particular interest in power relations that are relatively under-acknowledged in technical subjects, concerning class and capitalism, gender and sexuality, as well as race and the legacies of colonialism. This is not only related to the politics of representation but also nonrepresentation: how power differentials are implicit in code in terms of binary logic, hierarchies, naming of the attributes, and how particular worldviews are reinforced and perpetuated through computation. Using p5.js, it introduces and demonstrates the reflexive practice of aesthetic programming, engaging with learning to program as a way to understand and question existing technological objects and paradigms, and to explore the potential for reprogramming wider eco-socio-technical systems. The book itself follows this approach, and is offered as a computational object open to modification and reversioning

    Extempore: The design, implementation and application of a cyber-physical programming language

    There is a long history of experimental and exploratory programming supported by systems that expose interaction through a programming language interface. These live programming systems enable software developers to create, extend, and modify the behaviour of executing software by changing source code without perceptual breaks for recompilation. These live programming systems have taken many forms, but have generally been limited in their ability to express low-level programming concepts and the generation of efficient native machine code. These shortcomings have limited the effectiveness of live programming in domains that require highly efficient numerical processing and explicit memory management. The most general questions addressed by this thesis are what a systems language designed for live programming might look like and how such a language might influence the development of live programming in performance sensitive domains requiring real-time support, direct hardware control, or high performance computing. This thesis answers these questions by exploring the design, implementation and application of Extempore, a new systems programming language, designed specifically for live interactive programming

    Trustworthy AI Alone Is Not Enough

    The aim of this book is to make accessible to both a general audience and policymakers the intricacies involved in the concept of trustworthy AI. In this book, we address the issue from philosophical, technical, social, and practical points of view. To do so, we start with a summary definition of Trustworthy AI and its components, according to the HLEG for AI report. From there, we focus in detail on trustworthy AI in large language models, anthropomorphic robots (such as sex robots), and in the use of autonomous drones in warfare, which all pose specific challenges because of their close interaction with humans. To tie these ideas together, we include a brief presentation of the ethical validation scheme for proposals submitted under the Horizon Europe programme as a possible way to address the operationalisation of ethical regulation beyond rigid rules and partial ethical analyses. We conclude our work by advocating for the virtue ethics approach to AI, which we view as a humane and comprehensive approach to trustworthy AI that can accommodate the pace of technological change

    Decoding Digital Culture with Science Fiction: Hyper-Modernism, Hyperreality, and Posthumanism

    How do digital media technologies affect society and our lives? Through the cultural theory hypotheses of hyper-modernism, hyperreality, and posthumanism, Alan N. Shapiro investigates the social impact of Virtual/Augmented Reality, AI, social media platforms, robots, and the Brain-Computer Interface. His examination of concepts of Jean Baudrillard and Katherine Hayles, as well as films such as Blade Runner 2049, Ghost in the Shell, Ex Machina, and the TV series Black Mirror, suggests that the boundary between science fiction narratives and the »real world« has become indistinct. Science-fictional thinking should be advanced as a principal mode of knowledge for grasping the world and digitalization

    Online Modeling and Tuning of Parallel Stream Processing Systems

    Writing performant computer programs is hard. Code for high performance applications is profiled, tweaked, and re-factored for months, specifically for the hardware on which it is to run. Consumer application code doesn't get the benefit of the endless massaging that high performance code receives, even though heterogeneous processor environments are beginning to resemble those in more performance-oriented arenas. This thesis offers a path to performant, parallel code (through stream processing) which is tuned online and automatically adapts to the environment it is given. This approach has the potential to reduce the tuning costs associated with high performance code and brings the benefit of performance tuning to consumer applications where it would otherwise be cost prohibitive. This thesis introduces a stream processing library and multiple techniques to enable its online modeling and tuning.

    Stream processing (also termed data-flow programming) is a compute paradigm that views an application as a set of logical kernels connected via communications links or streams. Stream processing is increasingly used by computational-x and x-informatics fields (e.g., biology, astrophysics) where the focus is on safe and fast parallelization of specific big-data applications. A major advantage of stream processing is that it enables parallelization without necessitating manual end-user management of the non-deterministic behavior often characteristic of more traditional parallel processing methods. Many big-data and high performance applications involve high-throughput processing, necessitating the use of many parallel compute kernels on several compute cores. Optimizing the orchestration of kernels has been the focus of much theoretical and empirical modeling work. Purely theoretical parallel programming models can fail when the assumptions implicit within the model are mismatched with reality (i.e., the model is incorrectly applied), and it is often unclear whether the assumptions are actually being met, even when verified under controlled conditions. Full empirical optimization solves this problem by extensively searching the range of likely configurations under native operating conditions; this, however, is expensive in both time and energy. For large, massively parallel systems, even deciding which modeling paradigm to use is often prohibitively expensive and unfortunately transient (with workload and hardware). In an ideal world, a parallel run-time would re-optimize an application continuously to match its environment, with little additional overhead. This work presents methods aimed at doing just that, through low-overhead instrumentation, modeling, and optimization.

    Online optimization provides a good trade-off between static optimization and online heuristics. To enable online optimization, modeling decisions must be fast and relatively accurate. Online modeling and optimization of a stream processing system first requires the existence of a stream processing framework that is amenable to the intended type of dynamic manipulation. To fill this void, we developed the RaftLib C++ template library, which enables use of the stream processing paradigm for C++ applications (it is this run-time that is the basis of almost all the work within this dissertation). An application topology is specified by the user; however, almost everything else is optimizable by the run-time. RaftLib takes advantage of the knowledge gained during the design of several prior streaming languages (notably Auto-Pipe). The resultant framework enables online migration of tasks, auto-parallelization, online buffer reallocation, and other useful dynamic behaviors that were not available in many previous stream processing systems. Several benchmark applications have been designed to assess the performance gains of our approaches and to compare performance with other leading stream processing frameworks. Information is essential to any modeling task; to that end, a low-overhead instrumentation framework has been developed that is both dynamic and adaptive. Discovering a fast and relatively optimal configuration for a stream processing application often necessitates solving for buffer sizes within a finite-capacity queueing network. We show that a generalized gain/loss network flow model can bootstrap this process under certain conditions. Any modeling effort requires that a model be selected, often a highly manual task involving many expensive operations. This dissertation demonstrates that machine learning methods (such as a support vector machine) can successfully select models at run-time for a streaming application. The full set of approaches is incorporated into the open source RaftLib framework.
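
    To make the kernels-connected-by-streams model concrete, here is a minimal data-flow sketch in plain standard C++ (threads plus a blocking queue). It illustrates the paradigm only and deliberately does not use RaftLib's actual API; compile with something like -std=c++17 -pthread.

```cpp
// Minimal data-flow sketch: "kernels" run concurrently and communicate only
// through streams (blocking FIFO channels). Plain standard C++ threads; this
// illustrates the paradigm and is not RaftLib's API.
#include <condition_variable>
#include <iostream>
#include <mutex>
#include <optional>
#include <queue>
#include <thread>
#include <utility>

template <typename T>
class Stream {                        // a tiny thread-safe channel
public:
    void push(std::optional<T> v) {   // std::nullopt signals end-of-stream
        { std::lock_guard<std::mutex> g(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    std::optional<T> pop() {
        std::unique_lock<std::mutex> g(m_);
        cv_.wait(g, [this] { return !q_.empty(); });
        std::optional<T> v = std::move(q_.front());
        q_.pop();
        return v;
    }
private:
    std::queue<std::optional<T>> q_;
    std::mutex m_;
    std::condition_variable cv_;
};

int main() {
    Stream<int> a, b;

    std::thread source([&] {          // kernel 1: produce a finite stream
        for (int i = 1; i <= 5; ++i) a.push(i);
        a.push(std::nullopt);
    });
    std::thread square([&] {          // kernel 2: transform each element
        while (auto v = a.pop()) b.push(*v * *v);
        b.push(std::nullopt);
    });
    std::thread sink([&] {            // kernel 3: consume the results
        while (auto v = b.pop()) std::cout << *v << ' ';
        std::cout << '\n';
    });

    source.join(); square.join(); sink.join();
}
```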