10,574 research outputs found

    Modular lifelong machine learning

    Get PDF
    Deep learning has drastically improved the state of the art in many important fields, including computer vision and natural language processing (LeCun et al., 2015). However, training a deep neural network on a machine learning problem is expensive, and the overall training cost grows further when one wants to solve additional problems. Lifelong machine learning (LML) develops algorithms that aim to efficiently learn to solve a sequence of problems which become available one at a time. New problems are solved with fewer resources by transferring previously learned knowledge. At the same time, an LML algorithm needs to retain good performance on all encountered problems, thus avoiding catastrophic forgetting.

Current approaches do not possess all the desired properties of an LML algorithm. First, they primarily focus on preventing catastrophic forgetting (Diaz-Rodriguez et al., 2018; Delange et al., 2021) and, as a result, neglect some knowledge-transfer properties. Furthermore, they assume that all problems in a sequence share the same input space. Finally, scaling these methods to a long sequence of problems remains a challenge.

Modular approaches to deep learning decompose a deep neural network into sub-networks, referred to as modules. Each module can then be trained to perform an atomic transformation, specialised in processing a distinct subset of inputs. This modular approach to storing knowledge makes it easy to reuse only the subset of modules which are useful for the task at hand. This thesis introduces a line of research which demonstrates the merits of a modular approach to lifelong machine learning, and its ability to address the aforementioned shortcomings of other methods. Compared to previous work, we show that a modular approach can be used to achieve more LML properties than previously demonstrated. Furthermore, we develop tools which allow modular LML algorithms to scale in order to retain said properties on longer sequences of problems.
First, we introduce HOUDINI, a neurosymbolic framework for modular LML. HOUDINI represents modular deep neural networks as functional programs and accumulates a library of pre-trained modules over a sequence of problems. Given a new problem, we use program synthesis to select a suitable neural architecture, as well as a high-performing combination of pre-trained and new modules. We show that our approach has most of the properties desired of an LML algorithm. Notably, it can perform forward transfer, avoid negative transfer and prevent catastrophic forgetting, even across problems with disparate input domains and problems which require different neural architectures.

Second, we produce a modular LML algorithm which retains the properties of HOUDINI but can also scale to longer sequences of problems. To this end, we fix the choice of neural architecture and introduce a probabilistic search framework, PICLE, for searching through different module combinations. To apply PICLE, we introduce two probabilistic models over neural modules which allow us to efficiently identify promising module combinations.

Third, we phrase the search over module combinations in modular LML as black-box optimisation, which allows one to make use of methods from the setting of hyperparameter optimisation (HPO). We then develop a new HPO method which marries a multi-fidelity approach with model-based optimisation. We demonstrate that this improves anytime performance in the HPO setting and discuss how it can in turn be used to augment modular LML methods. Overall, this thesis identifies a number of important LML properties, which have not all been attained in past methods, and presents an LML algorithm which can achieve all of them, apart from backward transfer.
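The module-reuse idea underlying this line of work can be illustrated with a toy sketch. All names, modules, and the scoring scheme below are hypothetical, not the thesis's actual code: modules are treated as composable functions, and solving a new problem means searching for the best-performing combination of library modules.

```python
import itertools

# Hypothetical library of "pre-trained modules" from earlier problems.
library = {
    "double": lambda x: 2 * x,
    "square": lambda x: x * x,
    "negate": lambda x: -x,
}

def compose(fns):
    """Chain modules into a single pipeline (a tiny 'modular network')."""
    def pipeline(x):
        for f in fns:
            x = f(x)
        return x
    return pipeline

def search(target_fn, inputs, depth=2):
    """Exhaustively score module combinations and keep the best pipeline."""
    best, best_err = None, float("inf")
    for names in itertools.product(library, repeat=depth):
        f = compose([library[n] for n in names])
        err = sum((f(x) - target_fn(x)) ** 2 for x in inputs)
        if err < best_err:
            best, best_err = names, err
    return best, best_err

# New problem: fit x -> 4x^2 by reusing existing modules.
combo, err = search(lambda x: 4 * x * x, inputs=range(-3, 4))
# combo == ("double", "square"): square(double(x)) = (2x)^2 = 4x^2, err == 0
```

Real modular LML replaces the exhaustive enumeration with program synthesis (HOUDINI) or probabilistic search (PICLE), since the space of combinations grows combinatorially with the library size.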

    Towards A Practical High-Assurance Systems Programming Language

    Full text link
    Writing correct and performant low-level systems code is a notoriously demanding job, even for experienced developers. To make matters worse, formally reasoning about its correctness properties adds yet another level of complexity, requiring considerable expertise in both systems programming and formal verification. Without appropriate tools that provide abstraction and automation, development can be extremely costly due to the sheer complexity of these systems and the nuances within them. Cogent is designed to alleviate the burden on developers when writing and verifying systems code. It is a high-level functional language with a certifying compiler, which automatically proves the correctness of the compiled code and also provides a purely functional abstraction of the low-level program to the developer. Equational reasoning techniques can then be used to prove functional correctness properties of the program on top of this abstract semantics, which is notably less laborious than directly verifying the C code.

To make Cogent a more approachable and effective tool for developing real-world systems, we further strengthen the framework by extending the core language and its ecosystem. Specifically, we enrich the language to allow users to control the memory representation of algebraic data types, while retaining the automatic proof via a data layout refinement calculus. We repurpose existing tools in a novel way and develop an intuitive foreign function interface, which gives users a seamless experience when using Cogent in conjunction with native C. We augment the Cogent ecosystem with a property-based testing framework, which helps developers better understand the impact formal verification has on their programs and enables a progressive approach to producing high-assurance systems. Finally, we explore refinement type systems, which we plan to incorporate into Cogent for greater expressiveness and to better integrate systems programmers into the verification process.
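The property-based testing idea mentioned above can be sketched in miniature. This is a generic illustration, not Cogent's actual framework: random inputs check that an imperative, in-place implementation agrees with a pure functional specification, which is the same refinement relation a full proof would establish.

```python
import random

def spec_reverse(xs):
    """Pure specification: easy to reason about equationally."""
    return list(reversed(xs))

def impl_reverse(buf):
    """Imperative 'systems-style' version: in-place swaps."""
    i, j = 0, len(buf) - 1
    while i < j:
        buf[i], buf[j] = buf[j], buf[i]
        i += 1
        j -= 1
    return buf

def refines(impl, spec, trials=200):
    """Property: on random inputs, the implementation matches the spec."""
    rng = random.Random(0)
    for _ in range(trials):
        xs = [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))]
        if impl(list(xs)) != spec(xs):
            return False
    return True

ok = refines(impl_reverse, spec_reverse)   # True: no counterexample found
```

A failing trial would yield a concrete counterexample long before any proof effort is spent, which is what enables the progressive path toward full verification.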

    Evaluation Methodologies in Software Protection Research

    Full text link
    Man-at-the-end (MATE) attackers have full control over the system on which the attacked software runs, and try to break the confidentiality or integrity of assets embedded in the software. Both companies and malware authors want to prevent such attacks. This has driven an arms race between attackers and defenders, resulting in a plethora of different protection and analysis methods. However, it remains difficult to measure the strength of protections, because MATE attackers can reach their goals in many different ways and no universally accepted evaluation methodology exists. This survey systematically reviews the evaluation methodologies of papers on obfuscation, a major class of protections against MATE attacks. For 572 papers, we collected 113 aspects of their evaluation methodologies, ranging from sample set types and sizes, through sample treatment, to the measurements performed. We provide detailed insights into how the academic state of the art evaluates both the protections and the analyses applied to them. In summary, there is a clear need for better evaluation methodologies. We identify nine challenges for software protection evaluations, which represent threats to the validity, reproducibility, and interpretation of research results in the context of MATE attacks.

    Approximate Computing Survey, Part I: Terminology and Software & Hardware Approximation Techniques

    Full text link
    The rapid growth of demanding applications in domains applying multimedia processing and machine learning has marked a new era for edge and cloud computing. These applications involve massive data and compute-intensive tasks, and thus, typical computing paradigms in embedded systems and data centers are stressed to meet the worldwide demand for high performance. Concurrently, the landscape of the semiconductor field over the last 15 years has established power as a first-class design concern. As a result, the computing systems community is forced to find alternative design approaches that facilitate high-performance and/or power-efficient computing. Among the examined solutions, Approximate Computing has attracted ever-increasing interest, with research works applying approximations across the entire traditional computing stack, i.e., at the software, hardware, and architectural levels. Over the last decade, a plethora of approximation techniques has emerged in software (programs, frameworks, compilers, runtimes, languages), hardware (circuits, accelerators), and architectures (processors, memories). The current article is Part I of our comprehensive survey on Approximate Computing: it reviews its motivation, terminology, and principles, and classifies and presents the technical details of state-of-the-art software and hardware approximation techniques.

Comment: Under review at ACM Computing Surveys
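One software-level technique of the kind this survey classifies is loop perforation, which trades accuracy for work by visiting only a fraction of loop iterations. The sketch below is a generic illustration with made-up data and stride, not an example from the survey itself.

```python
def exact_mean(xs):
    """Exact computation: visits every element."""
    return sum(xs) / len(xs)

def perforated_mean(xs, stride=4):
    """Approximate computation: visits every `stride`-th element,
    doing roughly 1/stride of the work."""
    sampled = xs[::stride]
    return sum(sampled) / len(sampled)

data = [float(i % 100) for i in range(10_000)]
exact = exact_mean(data)          # 49.5
approx = perforated_mean(data)    # 48.0 with stride 4 on this data
rel_err = abs(approx - exact) / exact   # ~3% error for ~4x less work
```

The defining trade-off of approximate computing is visible even here: a bounded, application-tolerable error in exchange for a large reduction in computation.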

    Raising Critical Consciousness in Engineering Education: A Critical Exploration of Transformative Possibilities in Engineering Education and Research

    Get PDF
    This thesis represents a critical exploration of the opportunities, challenges, and barriers to enacting social justice via the engineering curriculum. Through an ethnographic case study of a British engineering for sustainable development course, I illuminate tensions and contradictions of attempts to “do good” while “doing engineering” in a higher education setting. This work is couched within critical and anti-colonial theoretical frames. Through critical and reflexive analysis, I illustrate participants’ attempts to innovate in engineering education toward a counter-hegemonic engineering practice, and highlight transformative possibilities as well as barriers. This case illustrates how the structures that formed modern engineering continue to shape engineering higher education, restraining attempts to transform engineering training for social good.

A central question that has driven this work has been: is it possible to cultivate a more socially just form of engineering practice through engineering higher education? The function of asking this question has been to interrogate a core assumption in engineering education research – that with the right blend of educational interventions, we can make strides towards social justice. My intent in interrogating this assumption is not to be nihilistic per se. I believe it is entirely possible that engineering could be wielded for just cause and consequence. However, if we do not critically examine our core assumptions around this issue, we may miss the possibility that socially just engineering is not achievable, at least not in the way we are currently approaching it or in the current context within which it exists.

An examination of this topic is already underway in the US context; however, it remains under-explored in the British context. Given the different historical trajectories of engineering and of engineering higher education in these two contexts, a closer look at the British context is warranted.

    An American Knightmare: Joker, Fandom, and Malicious Movie Meaning-Making

    Get PDF
    This monograph concerns the long-standing communication problem of how individuals can identify and resist the influence of unethical public speakers. Scholarship on what Socrates and Plato called the “Evil Lover” – i.e., the ill-intended rhetor – began with the Greek philosophers but has carried into [post]Modern anxieties. For instance, the study of Nazi propaganda machines, and of the rhetoric of Hitler himself, rejuvenated interest in the study of speech and communication in the U.S. and Europe. Whereas unscrupulous sophists used lectures and legal forums, and Hitler used a microphone, contemporary Evil Lovers primarily draw on new, internet-related tools to spread their malicious influence. These new tools of influence are both more far-reaching and more subtle than the traditional practice of listening to a designated speaker at an overtly political event. Rhetorician Ashley Hinck has recently noted the ways that popular culture – communication about texts which are commonly accessible and shared – is now a significant site through which citizens learn moral and political values. Accordingly, the talk of internet influencers who interpret popular texts for other fans can constitute strong persuasive power regarding ethics and civic responsibility. The present work identifies and responds to a particular popular culture text that has recently, and frequently, been leveraged in moral and civic discourses: Todd Phillips’ Joker. Specifically, this study takes a hermeneutic approach to understanding responses to Joker, especially those explicitly invoking political ideology, as a method of examining civic meaning-making. A special emphasis is placed on the online film criticism of Joker from white nationalist movie fans, who clearly exemplify ways that media responses can be leveraged by unethical speakers (i.e., Evil Lovers) and subtly diffused.

The study shows that these racist movie fans can embed values related to “trolling,” incelism, and xenophobia into otherwise seemingly innocuous talk about film. While the sharing of such speech does not guarantee its positive reception, this kind of communication nonetheless constitutes a new and understudied attack on democratic values such as justice and equity. The case of white nationalist fan film criticism therefore reflects a particular communicative strategy by which contemporary Evil Lovers convey unethical messages under the covert guise of mundane movie talk.

    Strong Invariants Are Hard: On the Hardness of Strongest Polynomial Invariants for (Probabilistic) Programs

    Full text link
    We show that computing the strongest polynomial invariant for single-path loops with polynomial assignments is at least as hard as the Skolem problem, a famous problem whose decidability has been open for almost a century. While the strongest polynomial invariants are computable for affine loops, for polynomial loops the problem had remained wide open. As an intermediate result of independent interest, we prove that reachability for discrete polynomial dynamical systems is Skolem-hard as well. Furthermore, we generalize the notion of invariant ideals and introduce moment invariant ideals for probabilistic programs. With this tool, we further show that the strongest polynomial moment invariant is (i) uncomputable for probabilistic loops with branching statements, and (ii) Skolem-hard to compute for polynomial probabilistic loops without branching statements. Finally, we identify a class of probabilistic loops for which the strongest polynomial moment invariant is computable, and provide an algorithm for it.
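A small example illustrates the objects involved (this is a standard textbook instance, not the paper's hardness construction): for the single-path loop with polynomial assignments x := x + 1; y := y + 2x − 1, started at x = y = 0, the polynomial y − x² vanishes at every loop head, so y − x² = 0 is a polynomial invariant. Indeed, if y = x² before an iteration, then afterwards y' = x² + 2(x + 1) − 1 = (x + 1)² = x'².

```python
# Check the invariant y - x^2 == 0 along a finite prefix of the loop.
x, y = 0, 0
trace = []
for _ in range(10):
    assert y - x * x == 0        # invariant holds at every loop head
    trace.append((x, y))
    x = x + 1                    # polynomial assignment 1
    y = y + 2 * x - 1            # polynomial assignment 2 (uses updated x)
```

A finite check like this of course proves nothing in general; the paper's point is that computing the *strongest* such invariant (the ideal of all polynomial relations holding at the loop head) is Skolem-hard for polynomial loops.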

    Topological Aspects of Representations in Computable Analysis

    Get PDF
    Computable analysis provides a formalization of algorithmic computation over infinite mathematical objects. The central notion of this theory is the symbolic representation of objects, which determines the computational power of the machine and has a direct impact on the difficulty of solving any given problem. The friction between the discrete nature of computation and the continuous nature of mathematical objects is captured by topology, which expresses the idea of finite approximations of infinite objects. We thoroughly study the multiple interactions between computation and topology, analysing the information that can be algorithmically extracted from a representation. In particular, we focus on the comparison between two representations of a single family of objects, on the precise relationship between the algorithmic and topological complexity of problems, and on the relationship between finite and infinite representations.
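The role a representation plays can be made concrete with a small sketch. The convention below (the standard Cauchy-style representation of the reals) is illustrative and not the thesis's specific formalism: a "name" of a real x is a function that, given a precision n, returns a rational within 2⁻ⁿ of x; operations on reals then become programs acting on names.

```python
from fractions import Fraction
from math import isqrt

def name_of_fraction(q):
    """A rational names itself exactly at every precision."""
    return lambda n: Fraction(q)

# A name of sqrt(2): floor(sqrt(2) * 2^n) / 2^n is within 2^-n of sqrt(2).
sqrt2 = lambda n: Fraction(isqrt(2 * 4 ** n), 2 ** n)

def add(x_name, y_name):
    """Addition on names: query each argument one bit more precisely, since
    |(x_{n+1} + y_{n+1}) - (x + y)| <= 2^-(n+1) + 2^-(n+1) = 2^-n."""
    return lambda n: x_name(n + 1) + y_name(n + 1)

one = add(name_of_fraction(Fraction(1, 3)), name_of_fraction(Fraction(2, 3)))
s = add(sqrt2, sqrt2)
approx = s(10)                   # a rational within 2^-10 of 2*sqrt(2)
```

Note that the machine only ever handles finite data (a precision request in, a rational out); the topology of the represented space governs exactly which operations admit such algorithms.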

    Serving to secure "Global Korea": Gender, mobility, and flight attendant labor migrants

    Get PDF
    This dissertation is an ethnography of mobility and modernity in contemporary South Korea (the Republic of Korea) following the neoliberal restructuring precipitated by the Asian Financial Crisis (1997). It focuses on how comparative “service,” “security,” and “safety” fashioned “Global Korea”: an ongoing state-sponsored project aimed at promoting the economic, political, and cultural maturation of South Korea from a once notoriously inhospitable, “backward” country (hujin’guk) to a now welcoming, “advanced country” (sŏnjin’guk). Through physical embodiments of the culturally specific idiom of “superior” service (sŏbisŭ), I argue that aspiring, current, and former Korean flight attendants have driven the production and maintenance of this national project. More broadly, as a driver of this national project, the occupation has emerged out of the country’s own aspirational flights from an earlier history of authoritarian rule, labor violence, and xenophobia. Against the backdrop of the Korean state’s aggressive neoliberal restructuring, its globalization efforts, and the current “Hell Chosun” (Helchosŏn) economy, a group of largely academically and/or class-disadvantaged young women have been able to secure individualized modes of pleasure, self-fulfillment, and class advancement via what I term “service mobilities.” Service mobilities refers to the participation of mostly women in a traditionally devalued but growing sector of the global labor market: the “pink collar” economy centered on “feminine” care labor. Korean female flight attendants share labor skills resembling those of other foreign labor migrants (chiefly from the “Global South”) who perform care work deemed less desirable. Yet Korean female flight attendants elude the stigmatizing, classed, and racialized category of “labor migrant.” Moreover, within the context of South Korea’s unique history of rapid modernization, the flight attendant occupation also commands considerable social prestige.

Based on ethnographic and archival research on aspiring, current, and former Korean flight attendants, this dissertation asks how these unique care laborers negotiate a metaphorical and literal series of sustained border crossings and inspections between their contingent status as lowly care-laboring migrants, on the one hand, and ostensibly glamorous, globetrotting elites, on the other. This study contends the following: first, the flight attendant occupation in South Korea represents a new politics of pleasure and pain in contemporary East Asia. Second, Korean female flight attendants’ enactments of soft, sanitized, and glamorous (hwaryŏhada) service help to purify South Korea’s less savory past. In so doing, Korean flight attendants reconstitute the historical role of female laborers as burden bearers and caretakers of the Korean state.

    Waiting Nets: State Classes and Taxonomy

    Full text link
    In time Petri nets (TPNs), time and control are tightly connected: time measurement for a transition starts only when all resources needed to fire it are available. Further, upper bounds on the duration of enabledness can force transitions to fire (this is called urgency). For many systems, one wants to decouple control and time, i.e. start measuring time as soon as part of the preset of a transition is filled, and fire it after some delay and when all needed resources are available. This paper considers an extension of TPNs called waiting nets that dissociates time measurement and control. Their semantics allows time measurement to start with incomplete presets, and can ignore urgency when the upper bounds of intervals are reached but not all resources needed to fire are yet available. Firing of a transition is then allowed as soon as the missing resources become available. It is known that extending bounded TPNs with stopwatches leads to undecidability. Our extension is weaker, and we show how to compute a finite state class graph for bounded waiting nets, yielding decidability of reachability and coverability. We then compare the expressiveness of waiting nets with that of other models w.r.t. timed language equivalence, and show that they are strictly more expressive than TPNs.
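The difference in when a transition's clock starts can be sketched on a single hypothetical timeline (illustrative numbers only, not the paper's formal semantics): a transition with firing interval [2, 5] whose preset is partially filled at time 0 and only completed at time 7.

```python
INTERVAL = (2, 5)          # [earliest, latest] firing delay
PARTIAL_AT = 0             # part of the preset is filled at time 0
COMPLETE_AT = 7            # the last missing resource arrives at time 7

def tpn_earliest_firing():
    """TPN: the clock starts only once the whole preset is available,
    and urgency then forces firing by COMPLETE_AT + 5."""
    lo, hi = INTERVAL
    return COMPLETE_AT + lo

def waiting_net_earliest_firing():
    """Waiting net: the clock starts with the partial preset. The upper
    bound elapsed while a resource was missing (urgency is ignored), so
    the transition may fire as soon as the last resource arrives."""
    lo, hi = INTERVAL
    return max(PARTIAL_AT + lo, COMPLETE_AT)

tpn_t = tpn_earliest_firing()              # 7 + 2 = 9
waiting_t = waiting_net_earliest_firing()  # max(0 + 2, 7) = 7
```

The waiting net thus reacts two time units earlier on this scenario, which is exactly the decoupling of time measurement from control that the model is designed to express.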
