
    A Comprehensive Survey of Forgetting in Deep Learning Beyond Continual Learning

    Forgetting refers to the loss or deterioration of previously acquired information or knowledge. While existing surveys on forgetting have primarily focused on continual learning, forgetting is a prevalent phenomenon in various other research domains within deep learning. It manifests, for example, in generative models due to generator shifts, and in federated learning due to heterogeneous data distributions across clients. Addressing forgetting involves several challenges, including balancing the retention of old task knowledge with fast learning of new tasks, managing task interference under conflicting goals, and preventing privacy leakage. Moreover, most existing surveys on continual learning implicitly assume that forgetting is always harmful. In contrast, our survey argues that forgetting is a double-edged sword that can be beneficial and even desirable in certain cases, such as privacy-preserving scenarios. By exploring forgetting in a broader context, we aim to present a more nuanced understanding of the phenomenon and to highlight its potential advantages. Through this comprehensive survey, we aspire to uncover potential solutions by drawing on ideas and approaches from the various fields that have dealt with forgetting. By examining forgetting beyond its conventional boundaries, we hope to encourage the development of novel strategies for mitigating, harnessing, or even embracing forgetting in real applications. A comprehensive list of papers about forgetting in various research fields is available at \url{https://github.com/EnnengYang/Awesome-Forgetting-in-Deep-Learning}
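The forgetting phenomenon the survey discusses can be made concrete with a toy experiment. The sketch below is illustrative and not taken from the survey: a plain perceptron is trained on task A, then on a task B whose labels are the exact opposite, and the accuracy drop on task A is read off as a forgetting measure. The data-generation rules and all names are assumptions made for this example.

```python
import random

def train(w, data, epochs=1000, lr=0.1):
    """Plain perceptron updates on (x, y) pairs with y in {-1, +1}.

    The generous epoch budget lets training run to convergence on
    this tiny, linearly separable toy problem.
    """
    for _ in range(epochs):
        for x, y in data:
            if y * (w[0] * x[0] + w[1] * x[1]) <= 0:  # misclassified
                w[0] += lr * y * x[0]
                w[1] += lr * y * x[1]
    return w

def accuracy(w, data):
    return sum(y * (w[0] * x[0] + w[1] * x[1]) > 0 for x, y in data) / len(data)

random.seed(0)
# Task A labels each point by the sign of its first coordinate (kept away
# from zero for a clean margin); task B uses the exact opposite rule, so
# the two tasks interfere maximally.
points = [(random.choice([-1, 1]) * random.uniform(0.2, 1.0),
           random.uniform(-1.0, 1.0)) for _ in range(50)]
task_a = [(p, 1 if p[0] > 0 else -1) for p in points]
task_b = [(p, -y) for p, y in task_a]

w = train([0.0, 0.0], task_a)
acc_before = accuracy(w, task_a)   # high: task A has been learned
w = train(w, task_b)               # sequential training on the conflicting task
acc_after = accuracy(w, task_a)    # low: task B overwrote task A
forgetting = acc_before - acc_after
```

Because the two label rules are exact opposites, `forgetting` comes out at its maximum here; continual-learning methods of the kind the survey covers aim to keep `acc_after` close to `acc_before` under such sequential training.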

    Convolutional auto-encoded extreme learning machine for incremental learning of heterogeneous images

    In real-world scenarios, continually updating a system's learned knowledge becomes critical as data arrives ever faster and in vast volumes. Moreover, the learning process becomes complex when the feature set varies due to the addition or deletion of classes, and the resulting model should still learn effectively. Incremental learning refers to learning from data that arrives continually over time; it requires continuous model adaptation within limited memory resources and without sacrificing model accuracy. In this paper, we propose a straightforward knowledge transfer algorithm, the convolutional auto-encoded extreme learning machine (CAE-ELM), implemented through an incremental learning methodology for supervised classification with an extreme learning machine (ELM). Incremental learning is achieved by training an individual model for each set of homogeneous data and transferring knowledge among the models, without sacrificing accuracy and with minimal memory resources. In CAE-ELM, a convolutional neural network (CNN) extracts the features, a stacked autoencoder (SAE) reduces their dimensionality, and an ELM learns and classifies the images. The proposed algorithm is implemented and evaluated on several standard datasets: MNIST, ORL, JAFFE, FERET and Caltech. The results confirm the effectiveness of the proposed algorithm
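The final ELM stage of this pipeline can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's CAE-ELM implementation: the CNN and SAE stages are replaced by synthetic 4-dimensional features standing in for the reduced representation, and all sizes are arbitrary. An ELM draws a random hidden layer, freezes it, and solves for the output weights in closed form by least squares.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for CNN+SAE output: two well-separated 4-D feature clusters.
X = np.vstack([rng.normal(-2.0, 0.3, (20, 4)),
               rng.normal(+2.0, 0.3, (20, 4))])
Y = np.repeat(np.eye(2), 20, axis=0)      # one-hot labels, two classes

# Extreme learning machine: a random, frozen hidden layer...
n_hidden = 32
W = rng.normal(size=(4, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)

# ...with output weights solved in closed form by least squares.
beta = np.linalg.pinv(H) @ Y

pred = (H @ beta).argmax(axis=1)
train_acc = (pred == Y.argmax(axis=1)).mean()
```

In CAE-ELM, one such model is trained per homogeneous chunk of data, which is what keeps incremental updates cheap: fitting an ELM needs no iterative backpropagation, only this closed-form solve.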

    How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition

    A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world. Such an agent would require the ability to continually accumulate and build upon its knowledge as it encounters new experiences. Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters. If the agent is capable of accumulating knowledge in some form of compositional representation, it could then selectively reuse and combine relevant pieces of knowledge to construct novel solutions. Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately. In an effort to promote developments that bridge between the two fields, this article surveys their respective research landscapes and discusses existing and future connections between them

    Dynamic Mathematics for Automated Machine Learning Techniques

    Machine learning and neural networks have been gaining popularity and are widely considered the driving force of the Fourth Industrial Revolution. Yet the core techniques are not new: backpropagation training was firmly established in 1986, and computer vision was revolutionised in 2012 with the introduction of AlexNet. Given all these accomplishments, why are neural networks still not an integral part of our society? "Because they are difficult to implement in practice." "I'd like to use machine learning, but I can't invest much time." The concept of Automated Machine Learning (AutoML) was first proposed by Professor Frank Hutter of the University of Freiburg. Machine learning is not simple; it requires a practitioner to have a thorough understanding of the attributes of their data and the components their model entails. AutoML is the effort to automate all tedious aspects of machine learning to form a clean data-analysis pipeline. This thesis is our effort to develop and understand ways to automate machine learning. Specifically, we focused on Recurrent Neural Networks (RNNs), Meta-Learning, and Continual Learning. We studied continual learning to enable a network to sequentially acquire skills in a dynamic environment; we studied meta-learning to understand how a network can be configured efficiently; and we studied RNNs to understand the consequences of consecutive actions. Our RNN study focused on mathematical interpretability: we described a large variety of RNNs as one mathematical class to understand their core network mechanism. This enabled us to extend meta-learning beyond network configuration, to network pruning and continual learning. It also provided the insight to understand how a single network should be consecutively configured, and led us to the creation of a simple generic patch that is compatible with several existing continual learning archetypes.
This patch enhanced the robustness of continual learning techniques and allowed them to generalise better. By and large, this thesis presented a series of extensions to make AutoML simple, efficient, and robust. More importantly, all of our methods are motivated by mathematical understanding through the lens of dynamical systems; thus, we also increased the interpretability of AutoML concepts

    Prevention and the Pillars of a Dynamic Theory of Civil Liability: A Comparative Study on Preventive Remedies

    The purpose of this study is to draw the coordinates and identify the main vectors for the development of a comprehensive theory of prevention in the law of torts

    Reputational Privacy and the Internet: A Matter for Law?

    Reputation - we all have one. We do not completely comprehend its workings and are mostly unaware of its import until it is gone. When we lose it, our traditional laws of defamation, privacy, and breach of confidence rarely deliver the vindication and respite we seek due, primarily, to legal systems that cobble new media methods of personal injury onto pre-Internet laws. This dissertation conducts an exploratory study of the relevance of law to loss of individual reputation perpetuated on the Internet. It deals with three interrelated concepts: reputation, privacy, and memory. They are related in that the increasing lack of privacy involved in our online activities has had particularly powerful reputational effects, heightened by the Internet’s duplicative memory. The study is framed within three research questions: 1) how well do existing legal mechanisms address loss of reputation and informational privacy in the new media environment; 2) can new legal or extra-legal solutions fill any gaps; and 3) how is the role of law pertaining to reputation affected by the human-computer interoperability emerging as the Internet of Things? Through a review of international and domestic legislation, case law, and policy initiatives, this dissertation explores the extent of control held by the individual over her reputational privacy. Two emerging regulatory models are studied for improvements they offer over current legal responses: the European Union’s General Data Protection Regulation, and American Do Not Track policies. Underscoring this inquiry are the challenges posed by the Internet’s unique architecture and the fact that the trove of references to reputation in international treaties is not making its way into domestic jurisprudence or daily life. 
This dissertation examines whether online communications might be developing a new form of digital speech requiring new legal responses and new gradients of personal harm; it also proposes extra-legal solutions to the paradox that our reputational needs demand an overt sociality while our desire for privacy has us shunning the limelight. As we embark on the Web 3.0 era of human-machine interoperability and the Internet of Things, our expectations of the role of law become increasingly important

    Deep Neural Networks and Data for Automated Driving

    This open access book brings together the latest developments from industry and research on automated driving and artificial intelligence. Environment perception for highly automated driving heavily employs deep neural networks, facing many challenges. How much data do we need for training and testing? How to use synthetic data to save labeling costs for training? How do we increase robustness and decrease memory usage? For inevitably poor conditions: How do we know that the network is uncertain about its decisions? Can we understand a bit more about what actually happens inside neural networks? This leads to a very practical problem particularly for DNNs employed in automated driving: What are useful validation techniques and how about safety? This book unites the views from both academia and industry, where computer vision and machine learning meet environment perception for highly automated driving. Naturally, aspects of data, robustness, uncertainty quantification, and, last but not least, safety are at the core of it. This book is unique: In its first part, an extended survey of all the relevant aspects is provided. The second part contains the detailed technical elaboration of the various questions mentioned above

    Being serious about games: Whether, how, why, and when computer- and game-based interventions could facilitate in reducing burdens of chronic somatic symptoms

    Computer applications have the potential to support patients with a chronic condition in an accessible and cost-effective way. 'Serious games' are computer games intended not only to entertain their players but also to influence their knowledge, behaviour, or (mental) health. For patients with chronic pain or fatigue complaints, the game LAKA was developed to motivate them to practise self-awareness in dealing with uncertain social-emotional situations in daily life. An estimated one in five Europeans experiences pain lasting longer than six months. Chronic pain demands attention and challenges people to cope with it. In many cases psychosocial problems arise, such as depression and absenteeism, with substantial societal consequences. Much remains unclear about the feasibility and effectiveness of computer interventions, such as serious games, in reducing the individual burden of chronic somatic symptoms. This research used a variety of methods to address the questions: to what extent do computer interventions and serious games work, for which people with persistent pain, how do they work, and under what circumstances? First, a literature review was conducted of previously published experiments on the effectiveness of computer interventions for patients with chronic pain or unexplained chronic somatic complaints. To study the feasibility of LAKA, that is, patients' acceptance and use of the game during a multidisciplinary rehabilitation programme, data from patient records, supplementary questionnaires, automatic usage logs and patient interviews were used.
In a subsequent experiment, changes in experienced pain intensity, fatigue, attribution of negative meanings to pain, and psychological burden were compared between two groups of patients: (1) a group that followed a caregiver-supported intervention with LAKA during multidisciplinary rehabilitation, and (2) a group that followed the same rehabilitation programme without serious gaming. Patients and care staff were also asked about their experiences with the game. No previous experiments with serious games were found. The vast majority of the identified studies examined the effect of online cognitive behavioural therapy, for which positive and lasting effects on patient outcomes of physical and emotional functioning were found. However, the estimated effects were so small and variable that many patients would barely notice them, if at all. Our own experiment with serious gaming during multidisciplinary rehabilitation found a 'very small' effect. The use of LAKA proved feasible during multidisciplinary rehabilitation for people with persistent pain. Acceptance and use varied with habits, perceptions of enjoyment and ease of use, patients' active coping style, and adequate implementation processes. Previous experiments and the LAKA experiment suggested that learning and health effects arise through attentive use of behaviour-change strategies. The explanation is that people counteract rumination and the attribution of negative meaning to pain, and additionally learn to foster acceptance and self-awareness. This applies to patients who show symptoms of depression or experience little control over stress or pain. Expert guidance and learning from exposure to social environments, whether real or within a serious game, are important preconditions.
This research can be seen as a basis for more theory-driven and context-sensitive evaluations of serious games for people experiencing chronic pain. More information is needed for patients, caregivers and other decision-makers, so that they know better what to expect from serious games as part of treatment in personal and local circumstances

    Mnidoo-Worlding: Merleau-Ponty and Anishinaabe Philosophical Translations

    This dissertation develops a concept of mnidoo-worlding, whereby consciousness emerges as a kind of possession by what is outside of ‘self’ and simultaneously by what is internal as self-possession. Weaving together phenomenology, post-structural philosophy and Ojibwe Anishinaabe orally transmitted knowledges, I examine Ojibwe Anishinaabe mnidoo, or ‘other than human,’ ontologies. Mnidoo refers to energy, potency or processes that suffuse all of existence and includes humans, animals, plants, inanimate ‘objects’ and invisible and intangible forces (e.g., Thunder Beings). Such Anishinaabe philosophies engage with what I articulate as all-encompassing and interpenetrating mnidoo co-responsiveness. The result is a resistance to cooption that concedes to the heterogeneity of being. I define this murmuration, that is, this concurrent gathering of divergent and fluctuating actuations/signals, as mnidoo-worlding. Mnidoo-worlding entails a possession by one’s surroundings that subsumes and conditions the possibility of agency as entwined and plural co-presence. The introductory chapter defines the terms of mnidoo philosophy and my particular translations of it. The chapter further disentangles mnidoo philosophy from the ways it has been appropriated and misinterpreted by western interlocutors. It also situates the mnidoo ontology I am developing within broader conversations in phenomenology about the relational world. Chapter Two explores the complex implications of conducting Anishinaabe philosophy in colonial languages and institutions, framed in the context of settler colonialism and discourses of reconciliation and indigenizing the academy. In Chapter Three I engage with the ‘Indigenous Renaissance’ in Indigenous arts and scholarship, outlining epistemological-pedagogical methods including oral traditions, embodied knowing, land-based pedagogy and non-interference pedagogy.
The fourth chapter advances a critique of liberal humanism and posthumanism through an interrogation of Deleuze and Guattari’s concept of “becoming-animal.” The final, culminating chapter brings Anishinaabe ontologies, tacitly embedded in our everydayness, together with Indigenous ways of being attuned to what is there in the world. In dialogue with Merleau-Ponty’s phenomenology, I take up Anishinaabe mnidoo philosophies to consider everyday phenomena from the collective movement of birds to intuition and dreams. These are profoundly imbued in these philosophically lived practices as embodied ciphers—languages and knowledge hidden in our “encrustation” with the world—subtly revealed as a simultaneous presence-and-elsewhere paradox

    Trusted Artificial Intelligence in Manufacturing

    The successful deployment of AI solutions in manufacturing environments hinges on their security, safety and reliability, which becomes more challenging in settings where multiple AI systems (e.g., industrial robots, robotic cells, Deep Neural Networks (DNNs)) interact as atomic systems and with humans. To guarantee the safe and reliable operation of AI systems on the shop floor, many challenges must be addressed in the scope of complex, heterogeneous, dynamic and unpredictable environments. Specifically, data reliability, human-machine interaction, security, transparency and explainability challenges need to be addressed at the same time. Recent advances in AI research (e.g., in deep neural network security and explainable AI (XAI) systems), coupled with novel research outcomes in the formal specification and verification of AI systems, provide a sound basis for safe and reliable AI deployments in production lines. Moreover, the legal and regulatory dimension of safe and reliable AI solutions in production lines must be considered as well. To address some of the above-listed challenges, fifteen European organizations collaborate in the scope of the STAR project, a research initiative funded by the European Commission under its H2020 program (Grant Agreement Number: 956573). STAR researches, develops, and validates novel technologies that enable AI systems to acquire knowledge in order to take timely and safe decisions in dynamic and unpredictable environments. Moreover, the project researches and delivers approaches that enable AI systems to confront sophisticated adversaries and to remain robust against security attacks. This book is co-authored by the STAR consortium members and provides a review of technologies, techniques and systems for trusted, ethical, and secure AI in manufacturing.
The different chapters of the book cover systems and technologies for industrial data reliability, responsible and transparent artificial intelligence systems, human-centred manufacturing systems such as human-centred digital twins, cyber-defence in AI systems, simulated reality systems, human-robot collaboration systems, as well as automated mobile robots for manufacturing environments. A variety of cutting-edge AI technologies are employed by these systems, including deep neural networks, reinforcement learning systems, and explainable artificial intelligence systems. Furthermore, relevant standards and applicable regulations are discussed. Beyond reviewing state-of-the-art standards and technologies, the book illustrates how the STAR research goes beyond the state of the art, towards enabling and showcasing human-centred technologies in production lines. Emphasis is put on dynamic human-in-the-loop scenarios, where ethical, transparent, and trusted AI systems co-exist with human workers. The book is made available as an open access publication, making it broadly and freely available to the AI and smart manufacturing communities