575 research outputs found

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Guided rewriting and constraint satisfaction for parallel GPU code generation

    Get PDF
    Graphics Processing Units (GPUs) are notoriously hard to optimise for manually due to their scheduling and memory hierarchies. What is needed are good automatic code generators and optimisers for such parallel hardware. Functional approaches such as Accelerate, Futhark and LIFT leverage a high-level algorithmic Intermediate Representation (IR) to expose parallelism and abstract the implementation details away from the user. However, producing efficient code for a given accelerator remains challenging. Existing code generators either depend on user input to choose from a subset of hard-coded optimisations, or rely on automated exploration of the implementation search space. The former suffers from a lack of extensibility, while the latter is too costly due to the size of the search space. A hybrid approach is needed, where a space of valid implementations is built automatically and explored with the aid of human expertise. This thesis presents a solution combining user-guided rewriting and automatically generated constraints to produce high-performance code. The first contribution is an automatic tuning technique that finds a balance between performance and memory consumption. Leveraging its functional patterns, the LIFT compiler is empowered to infer tuning constraints and limit the search to valid tuning combinations only. Next, the thesis reframes parallelisation as a constraint satisfaction problem. Parallelisation constraints are extracted automatically from the input expression, and a solver is used to identify valid rewritings. The constraints truncate the search space to valid parallel mappings only by capturing the scheduling restrictions of the GPU in the context of a given program. A synchronisation barrier insertion technique is proposed to prevent data races and improve the efficiency of the generated parallel mappings. The final contribution of this thesis is the guided rewriting method, where the user encodes a design space of structural transformations using high-level IR nodes called rewrite points. These strongly typed pragmas express macro rewrites and expose design choices as explorable parameters. The thesis proposes a small set of reusable rewrite points to achieve tiling, cache locality, data reuse and memory optimisation. A comparison with the vendor-provided handwritten kernels of the ARM Compute Library and the TVM code generator demonstrates the effectiveness of this thesis' contributions. With convolution as a use case, LIFT-generated direct and GEMM-based convolution implementations are shown to perform on par with state-of-the-art solutions on a mobile GPU. Overall, this thesis demonstrates that a functional IR lends itself well to user-guided and automatic rewriting for high-performance code generation.
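
    The constraint-satisfaction framing can be illustrated with a minimal, self-contained Python sketch. This is not the LIFT implementation: the toy loop nest, the mapping levels and the three constraints below are illustrative assumptions, standing in for the constraints LIFT extracts automatically. Each loop is assigned a GPU mapping level, and invalid parallel mappings are pruned by the constraints.

        # A minimal sketch (NOT the LIFT implementation) of parallelisation as
        # constraint satisfaction: each loop in a toy nest is assigned a GPU
        # mapping level, and hand-written constraints prune invalid schedules.
        # The loop names, levels and constraints are illustrative assumptions.
        from itertools import product

        loops = ["i", "j", "k"]                 # a toy 3-deep loop nest
        levels = ["workgroup", "local", "seq"]  # candidate mapping levels

        def valid(assign):
            # Constraint 1: at most one loop per parallel level.
            for lvl in ("workgroup", "local"):
                if sum(1 for l in loops if assign[l] == lvl) > 1:
                    return False
            # Constraint 2: "workgroup" must enclose "local" in the nest,
            # mirroring the GPU's scheduling hierarchy.
            order = {name: pos for pos, name in enumerate(loops)}
            wg = [l for l in loops if assign[l] == "workgroup"]
            loc = [l for l in loops if assign[l] == "local"]
            if wg and loc and order[wg[0]] > order[loc[0]]:
                return False
            # Constraint 3: assume loop "k" carries a reduction dependence,
            # so it may only run sequentially.
            return assign["k"] == "seq"

        # Brute-force solver: enumerate all assignments, keep the valid ones.
        for combo in product(levels, repeat=len(loops)):
            assign = dict(zip(loops, combo))
            if valid(assign):
                print(assign)

    A real solver would additionally rank the surviving mappings, for instance with a performance model, before code generation.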

    Towards A Practical High-Assurance Systems Programming Language

    Full text link
    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties introduces yet another level of complexity to the task. It requires considerable expertise in both systems programming and formal verification. Development can be extremely costly due to the sheer complexity of the systems and the nuances in them, unless it is assisted by appropriate tools that provide abstraction and automation. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code. To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof with a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which provides users with a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for more expressiveness and better integration of systems programmers with the verification process.
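
    The property-based testing idea can be sketched in a few lines of Python, using the Hypothesis library as a stand-in for Cogent's actual framework; the specification and implementation below are assumed toy examples, not Cogent code. The test checks the refinement property that a low-level imperative implementation agrees with its pure functional specification on randomly generated inputs.

        # A hedged sketch of property-based refinement testing (Hypothesis
        # stands in for Cogent's framework; both functions are toy examples).
        from hypothesis import given, strategies as st

        def spec_sum_positives(xs):
            # Pure functional specification: the abstract semantics.
            return sum(x for x in xs if x > 0)

        def impl_sum_positives(xs):
            # Imperative implementation, playing the role of the low-level code.
            acc = 0
            for i in range(len(xs)):
                if xs[i] > 0:
                    acc += xs[i]
            return acc

        @given(st.lists(st.integers()))
        def test_refinement(xs):
            # Refinement property: the implementation agrees with the spec.
            assert impl_sum_positives(xs) == spec_sum_positives(xs)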

    Ethnographies of Collaborative Economies across Europe: Understanding Sharing and Caring

    Get PDF
    "Sharing economy" and "collaborative economy" refer to a proliferation of initiatives, business models, digital platforms and forms of work that characterise contemporary life: from community-led initiatives and activist campaigns, to the impact of global sharing platforms in contexts such as network hospitality, transportation, etc. Sharing the common lens of ethnographic methods, this book presents in-depth examinations of collaborative economy phenomena. The book combines qualitative research and ethnographic methodology with a range of different collaborative economy case studies and topics across Europe. It uniquely offers a truly interdisciplinary approach. It emerges from a unique, long-term, multinational, cross-European collaboration between researchers from various disciplines (e.g., sociology, anthropology, geography, business studies, law, computing, information systems), career stages, and epistemological backgrounds, brought together by a shared research interest in the collaborative economy. This book is a further contribution to the in-depth qualitative understanding of the complexities of the collaborative economy phenomenon. These rich accounts contribute to the painting of a complex landscape that spans several countries and regions, and diverse political, cultural, and organisational backdrops. This book also offers important reflections on the role of ethnographic researchers, and on their stance and outlook, that are of paramount interest across the disciplines involved in collaborative economy research

    Subjectivity, nature, existence: Foundational issues for enactive phenomenology

    Get PDF
    This thesis explores and discusses foundational issues concerning the relationship between phenomenological philosophy and the enactive approach to cognitive science, with the aim of clarifying, developing, and promoting the project of enactive phenomenology. This project is framed by three general ideas: 1) that the sciences of mind need a phenomenological grounding, 2) that the enactive approach is the currently most promising attempt to provide mind science with such a grounding, and 3) that this attempt involves both a naturalization of phenomenology and a phenomenologization of the concept of nature. More specifically, enactive phenomenology is the project of pursuing mutually illuminative exchanges between, on the one hand, phenomenological investigations of the structures of lived experience and embodied existence and, on the other, scientific accounts of mind and life – in particular those framed by theories of biological self-organization. The thesis consists of two parts. Part one is an introductory essay that seeks to clarify some of enactive phenomenology’s overarching philosophical commitments by tracing some of its historical roots. Part two is a compilation of four articles, each of which intervenes in a different contemporary debate relevant to the dissertation’s project.

    Discovering Causal Relations and Equations from Data

    Full text link
    Physics is a field of science that has traditionally used the scientific method to answer questions about why natural phenomena occur and to make testable models that explain the phenomena. Discovering equations, laws and principles that are invariant, robust and causal explanations of the world has been fundamental in physical sciences throughout the centuries. Discoveries emerge from observing the world and, when possible, performing interventional studies in the system under study. With the advent of big data and the use of data-driven methods, causal and equation discovery fields have grown and made progress in computer science, physics, statistics, philosophy, and many applied fields. All these domains are intertwined and can be used to discover causal relations, physical laws, and equations from observational data. This paper reviews the concepts, methods, and relevant works on causal and equation discovery in the broad field of Physics and outlines the most important challenges and promising future lines of research. We also provide a taxonomy for observational causal and equation discovery, point out connections, and showcase a complete set of case studies in Earth and climate sciences, fluid dynamics and mechanics, and the neurosciences. This review demonstrates that discovering fundamental laws and causal relations by observing natural phenomena is being revolutionised by the efficient exploitation of observational data, modern machine learning algorithms and the interaction with domain knowledge. Exciting times are ahead with many challenges and opportunities to improve our understanding of complex systems.
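
    One family of equation-discovery approaches in this space, sparse regression over a library of candidate terms (as in SINDy-style methods), can be sketched in a few lines of Python; the synthetic data, term library and pruning threshold below are illustrative assumptions, not taken from the paper.

        # A minimal sketch of equation discovery by sparse regression:
        # sequentially thresholded least squares over a candidate term library.
        # Data, library and threshold are toy assumptions, not from the paper.
        import numpy as np

        rng = np.random.default_rng(0)
        x = rng.uniform(-2, 2, 200)
        y = 3.0 * x**2 - 0.5 * x + rng.normal(0, 0.05, x.size)  # hidden law

        names = ["1", "x", "x^2", "x^3", "sin(x)"]
        library = np.column_stack([np.ones_like(x), x, x**2, x**3, np.sin(x)])

        coef, *_ = np.linalg.lstsq(library, y, rcond=None)
        for _ in range(5):
            small = np.abs(coef) < 0.1      # prune near-zero terms...
            coef[small] = 0.0
            keep = ~small                   # ...and refit the survivors
            coef[keep], *_ = np.linalg.lstsq(library[:, keep], y, rcond=None)

        print(" + ".join(f"{c:.2f}*{n}" for c, n in zip(coef, names) if c))
        # Should recover approximately: -0.50*x + 3.00*x^2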

    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    Get PDF
    (b2023 to 2014) The UNBELIEVABLE similarities between the ideas of some people (2006-2016) and my ideas (2002-2008) in physics (quantum mechanics, cosmology), cognitive neuroscience, philosophy of mind, and philosophy (this manuscript would require a REVOLUTION in international academy environment!)

    Vitalism and Its Legacy in Twentieth Century Life Sciences and Philosophy

    Get PDF
    This Open Access book combines philosophical and historical analysis of various forms of alternatives to mechanism and mechanistic explanation, focusing on the 19th century to the present. It addresses vitalism, organicism and responses to materialism, and their relevance to current biological science. In doing so, it promotes dialogue and discussion about the historical and philosophical importance of vitalism and other non-mechanistic conceptions of life. It points towards the integration of genomic science into the broader history of biology. It details a broad engagement with a variety of nineteenth, twentieth and twenty-first century vitalisms and conceptions of life. In addition, it discusses important threads in the history of concepts in the United States and Europe, including charting new reception histories in eastern and south-eastern Europe. While vitalism, organicism and similar epistemologies are often the concern of specialists in the history and philosophy of biology and of historians of ideas, the range of the contributions as well as the geographical and temporal scope of the volume allows it to appeal to the historian of science and the historian of biology generally.

    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    Get PDF
    LIPIcs, Volume 261, ICALP 2023, Complete Volume

    What does explainable AI explain?

    Get PDF
    Machine Learning (ML) models are increasingly used in industry, as well as in scientific research and social contexts. Unfortunately, ML models provide only partial solutions to real-world problems, focusing on predictive performance in static environments. Problem aspects beyond prediction, such as robustness in deployment, knowledge generation in science, or providing recourse recommendations to end-users, cannot be directly tackled with ML models. Explainable Artificial Intelligence (XAI) aims to solve, or at least highlight, problem aspects beyond predictive performance through explanations. However, the field is still in its infancy, as fundamental questions such as “What are explanations?”, “What constitutes a good explanation?”, or “How do explanation and understanding relate?” remain open. In this dissertation, I combine philosophical conceptual analysis and mathematical formalization to clarify a prerequisite of these difficult questions, namely what XAI explains: I point out that XAI explanations are either associative or causal and either aim to explain the ML model or the modeled phenomenon. The thesis is a collection of five individual research papers that all aim to clarify how different problems in XAI are related to these different “whats”. In Paper I, my co-authors and I illustrate how to construct XAI methods for inferring associational phenomenon relationships. Paper II directly relates to the first; we formally show how to quantify the uncertainty of such scientific inferences for two XAI methods – partial dependence plots (PDP) and permutation feature importance (PFI). Paper III discusses the relationship between counterfactual explanations and adversarial examples; I argue that adversarial examples can be described as counterfactual explanations that alter the prediction but not the underlying target variable. In Paper IV, my co-authors and I argue that algorithmic recourse recommendations should help data-subjects improve their qualification rather than game the predictor. In Paper V, we address general problems with model-agnostic XAI methods and identify possible solutions.
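
    As a concrete illustration of one of the methods named above, permutation feature importance (PFI) measures how much a model's score drops when a single feature's values are shuffled. The sketch below is a minimal, in-sample Python version with an assumed toy dataset and model choice; it is not Paper II's setup, which additionally quantifies the uncertainty of such estimates.

        # A minimal PFI sketch: shuffle one feature at a time and measure the
        # drop in model score. Dataset, model and scoring are toy assumptions.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.metrics import r2_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 3))
        y = 2.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.1, 500)  # X[:, 2] is noise

        model = RandomForestRegressor(random_state=0).fit(X, y)
        baseline = r2_score(y, model.predict(X))

        for j in range(X.shape[1]):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])   # break feature-target link
            drop = baseline - r2_score(y, model.predict(Xp))
            print(f"feature {j}: importance ~ {drop:.3f}")

    In practice, PFI is estimated on held-out data and averaged over repeated permutations; scikit-learn ships this as sklearn.inspection.permutation_importance.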