
    Style Transfer with Generative Adversarial Networks

    This dissertation applies concepts from style transfer and image-to-image translation to the problem of defogging. Defogging (or dehazing) is the task of removing fog from an image, restoring it as if the photograph had been taken under optimal weather conditions. Defogging is of particular interest in many fields, such as surveillance or self-driving cars. In this thesis an unpaired approach to defogging is adopted: a foggy image is translated to the corresponding clear picture without having pairs of foggy and ground-truth haze-free images during training. This approach is particularly significant, given the difficulty of gathering an image collection of exactly the same scenes with and without fog. Many of the models and techniques used in this dissertation already existed in the literature, but they are extremely difficult to train, and it is often highly problematic to obtain the desired behavior. Our contribution is a systematic implementation and experimental activity, conducted with the aim of attaining a comprehensive understanding of how these models work, and of the role of datasets and training procedures in the final results. We also analyzed metrics and evaluation strategies, in order to assess the quality of the presented model in the most correct and appropriate manner. First, the feasibility of an unpaired approach to defogging was analyzed using the CycleGAN model. Then, the base model was enhanced with a cycle perceptual loss, inspired by style transfer techniques. Next, the role of the training set was investigated, showing that improving the quality of data is at least as important as the use of more powerful models. Finally, our approach is compared with state-of-the-art defogging methods, showing that the quality of our results is in line with preexisting approaches, even though our model was trained on unpaired data.
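    The loss composition sketched in the abstract (cycle consistency plus a cycle perceptual term) can be written down abstractly. The generators `G` (foggy to clear) and `F` (clear to foggy), the feature extractor `phi`, and the weights `lambda_cyc`, `lambda_perc` below are illustrative stand-ins, not the thesis's actual networks or hyperparameters:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error between two arrays."""
    return float(np.mean(np.abs(a - b)))

def cycle_losses(x_foggy, x_clear, G, F, phi, lambda_cyc=10.0, lambda_perc=1.0):
    """Toy CycleGAN-style objective for unpaired defogging: pixel-space
    cycle consistency plus a cycle perceptual term computed by comparing
    reconstructions in the feature space of a fixed extractor phi.
    (The adversarial terms of the full objective are omitted here.)"""
    rec_foggy = F(G(x_foggy))   # foggy -> clear -> foggy
    rec_clear = G(F(x_clear))   # clear -> foggy -> clear
    cyc = l1(rec_foggy, x_foggy) + l1(rec_clear, x_clear)
    perc = l1(phi(rec_foggy), phi(x_foggy)) + l1(phi(rec_clear), phi(x_clear))
    return lambda_cyc * cyc + lambda_perc * perc

# sanity check: identity generators reconstruct perfectly, so the loss is zero
x = np.random.rand(4, 4)
loss = cycle_losses(x, x, G=lambda t: t, F=lambda t: t, phi=lambda t: t * 2.0)
```

    The perceptual term penalizes reconstruction errors that survive in feature space, which is the sense in which it is "inspired by style transfer techniques".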

    Towards Artifacts-free Image Defogging

    In this paper we present a novel defogging technique, named CurL-Defog, aimed at minimizing the creation of unwanted artifacts during the defogging process. The majority of learning-based defogging approaches rely on paired data (i.e., the same images with and without fog), where fog is artificially added to clear images: this often provides good results on mildly fogged images but does not generalize well to real difficult cases. On the other hand, models trained with real unpaired data (e.g., CycleGAN) can provide visually impressive results, but they often produce unwanted artifacts. In this paper we propose a curriculum learning strategy coupled with an enhanced CycleGAN model in order to reduce the number of produced artifacts, while maintaining state-of-the-art performance in terms of contrast enhancement and image reconstruction. We also introduce a new metric, called HArD (Hazy Artifact Detector), to numerically quantify the amount of artifacts in the defogged images, thus avoiding the tedious and subjective manual inspection of the results. The proposed approach compares favorably with state-of-the-art techniques on both real and synthetic datasets.
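    Curriculum learning, the training strategy mentioned above, presents training data from easy to hard. The staged schedule below is a generic sketch of the idea; the difficulty scores and the three-stage pacing are illustrative assumptions, not CurL-Defog's actual curriculum:

```python
def curriculum_schedule(samples, difficulty, n_stages=3):
    """Sort samples by a precomputed difficulty score and release them in
    stages: stage k trains on the easiest k/n_stages fraction of the data."""
    ranked = [s for _, s in sorted(zip(difficulty, samples), key=lambda t: t[0])]
    stages = []
    for k in range(1, n_stages + 1):
        cutoff = max(1, round(len(ranked) * k / n_stages))
        stages.append(ranked[:cutoff])
    return stages

# e.g. mildly fogged images first, hard real cases only in the last stage
stages = curriculum_schedule(["a", "b", "c", "d", "e", "f"], [3, 1, 2, 6, 5, 4])
```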

    Design and implementation of a biochemical incarnation for the Alchemist simulator

    The purpose of this thesis is the modeling and implementation of an extension of the Alchemist simulator, named Biochemistry, that allows a multi-cellular environment to be simulated. In order to simulate as many biological processes as possible, the simulator must allow cellular heterogeneity to be modeled through several aspects of cellular systems, such as intracellular reactions, signaling between adjacent cells, cell junctions, and movement. It must also admit actions that are impossible in the real world, such as the destruction of chemical molecules or their creation from nothing. More specifically, the following biochemical processes were modeled and implemented: creation and destruction of chemical molecules, intracellular biochemical reactions, exchange of molecules between adjacent cells, and creation and destruction of cell junctions. Particular emphasis was placed on modeling reactions between neighboring cells, whose mechanism is similar to the one used in cell signaling. Every part of the system was modeled after phenomena actually present in multi-cellular systems and documented in the literature. For the specification of the chemical reactions given as input to the simulation, it was necessary to implement a Domain Specific Language (DSL) that allows reactions to be written in a way similar to natural language, making the simulator usable even by people without specific knowledge of biology. The correctness of the project was validated through tests performed with data from the literature concerning well-known and widely studied biological processes.
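    A reaction DSL of the kind described above maps a near-natural-language line to structured reactants and products. The toy grammar below ("A + B --> C") is purely illustrative and is not Alchemist's actual Biochemistry syntax:

```python
def parse_reaction(line):
    """Parse a toy reaction string such as "A + B --> C" into a
    (reactants, products) pair of molecule-name lists."""
    lhs, rhs = line.split("-->")
    to_molecules = lambda side: [m.strip() for m in side.split("+") if m.strip()]
    return to_molecules(lhs), to_molecules(rhs)
```

    A real implementation would also attach stoichiometry, rates, and the cellular compartment of each species, but the parsing principle is the same.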

    Latent Replay for Real-Time Continual Learning

    Training deep neural networks at the edge on lightweight computational devices, embedded systems, and robotic platforms is nowadays very challenging. Continual learning techniques, where complex models are incrementally trained on small batches of new data, can make the learning problem tractable even for CPU-only embedded devices, enabling remarkable levels of adaptiveness and autonomy. However, a number of practical problems need to be solved: catastrophic forgetting before anything else. In this paper we introduce an original technique named "Latent Replay" where, instead of storing a portion of past data in the input space, we store activation volumes at some intermediate layer. This can significantly reduce the computation and storage required by native rehearsal. To keep the representation stable and the stored activations valid, we propose to slow down learning at all the layers below the latent replay one, leaving the layers above free to learn at full pace. In our experiments we show that Latent Replay, combined with existing continual learning techniques, achieves state-of-the-art performance on complex video benchmarks such as CORe50 NICv2 (with nearly 400 small and highly non-i.i.d. batches) and OpenLORIS. Finally, we demonstrate the feasibility of nearly real-time continual learning on the edge through the deployment of the proposed technique on a smartphone device.
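    The mechanics of latent replay can be sketched with a toy two-stage network: the buffer stores activation volumes taken at the replay layer rather than raw inputs, and only the layers above that point are trained. The layer split, shapes, and buffer handling below are illustrative, not the paper's actual architecture (and the lower layers are frozen here, whereas the paper proposes slowing them down):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy split at the latent replay layer: W_low stands for the (frozen/slowed)
# layers below it, W_up for the layers trained freely above it.
W_low = rng.standard_normal((8, 4))
W_up = rng.standard_normal((4, 2))

def lower(x):
    """Forward pass of the layers below the latent replay layer."""
    return np.maximum(x @ W_low, 0.0)

replay_buffer = []  # stores latent activation volumes, not raw inputs

def train_step(x_new):
    """One sketched step: compute latent activations of the new batch, store
    them for future replay, and feed new + replayed activations through the
    upper layers only (the parameter update itself is omitted)."""
    z_new = lower(x_new)
    replay_buffer.append(z_new)
    z_batch = np.concatenate(replay_buffer)
    return z_batch @ W_up

out1 = train_step(np.ones((2, 8)))
out2 = train_step(np.ones((2, 8)))
```

    In this toy setup each stored pattern is a 4-dimensional activation instead of an 8-dimensional input, which illustrates the storage (and recomputation) savings over native rehearsal.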

    Is Class-Incremental Enough for Continual Learning?

    The ability of a model to learn continually can be empirically assessed in different continual learning scenarios. Each scenario defines the constraints and the opportunities of the learning environment. Here, we challenge the current trend in the continual learning literature to experiment mainly on class-incremental scenarios, where classes present in one experience are never revisited. We posit that an excessive focus on this setting may be limiting for future research on continual learning, since class-incremental scenarios artificially exacerbate catastrophic forgetting, at the expense of other important objectives like forward transfer and computational efficiency. In many real-world environments, in fact, repetition of previously encountered concepts occurs naturally and contributes to softening the disruption of previous knowledge. We advocate for a more in-depth study of alternative continual learning scenarios, in which repetition is integrated by design in the stream of incoming information. Starting from already existing proposals, we describe the advantages such class-incremental-with-repetition scenarios could offer for a more comprehensive assessment of continual learning models.
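    A class-incremental stream with repetition can be generated with a simple policy: each experience introduces an unseen class and, with some probability, revisits an earlier one. The one-new-class-per-experience policy and the repetition probability below are illustrative assumptions, not a specific benchmark's protocol:

```python
import random

def ci_with_repetition(classes, n_experiences, p_repeat=0.5, seed=0):
    """Build a toy class-incremental stream with repetition: each experience
    introduces one unseen class and, with probability p_repeat, also revisits
    a class from an earlier experience."""
    rng = random.Random(seed)
    unseen, seen, stream = list(classes), [], []
    for _ in range(n_experiences):
        experience = []
        if unseen:
            c = unseen.pop(0)          # introduce a new class
            seen.append(c)
            experience.append(c)
        if len(seen) > 1 and rng.random() < p_repeat:
            experience.append(rng.choice(seen[:-1]))  # natural repetition
        stream.append(experience)
    return stream

stream = ci_with_repetition(range(5), n_experiences=5)
```

    Setting `p_repeat=0` recovers the standard class-incremental scenario, which makes the two settings directly comparable in experiments.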

    Avalanche: An end-to-end library for continual learning

    Learning continually from non-stationary data streams is a long-standing goal and a challenging problem in machine learning. Recently, we have witnessed a renewed and fast-growing interest in continual learning, especially within the deep learning community. However, algorithmic solutions are often difficult to re-implement, evaluate, and port across different settings, where even results on standard benchmarks are hard to reproduce. In this work, we propose Avalanche, an open-source end-to-end library for continual learning research based on PyTorch. Avalanche is designed to provide a shared and collaborative codebase for fast prototyping, training, and reproducible evaluation of continual learning algorithms.

    Generative negative replay for continual learning

    Learning continually is a key aspect of intelligence and a necessary ability to solve many real-life problems. One of the most effective strategies to control catastrophic forgetting, the Achilles' heel of continual learning, is storing part of the old data and replaying them interleaved with new experiences (also known as the replay approach). Generative replay, which uses generative models to provide replay patterns on demand, is particularly intriguing; however, it was shown to be effective mainly under simplified assumptions, such as simple scenarios and low-dimensional data. In this paper, we show that, while the generated data are usually not able to improve the classification accuracy for the old classes, they can be effective as negative examples (or antagonists) to better learn the new classes, especially when the learning experiences are small and contain examples of just one or few classes. The proposed approach is validated on complex class-incremental and data-incremental continual learning scenarios (CORe50 and ImageNet-1000) composed of high-dimensional data and a large number of training experiences: a setup where existing generative replay approaches usually fail.
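    The core idea, using generated samples only as negatives for the new class rather than as positives for the old ones, can be sketched with a tiny logistic-regression step. The data, learning rate, and binary setup below are illustrative assumptions, not the paper's actual model:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def negative_replay_step(w, x_new, x_gen, lr=0.5):
    """One logistic-regression step for a newly arrived class: real samples
    of the new class are positives, while generator outputs act purely as
    negative examples (antagonists), never as positives for old classes."""
    X = np.vstack([x_new, x_gen])
    y = np.concatenate([np.ones(len(x_new)), np.zeros(len(x_gen))])
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / len(y)   # gradient of the binary cross-entropy
    return w - lr * grad

# toy data: the new class lies on the positive side of the first axis,
# generated (old-class) samples on the negative side
x_new = np.array([[2.0, 0.2], [1.5, -0.3]])
x_gen = np.array([[-2.0, 0.1], [-1.5, 0.4]])
w = np.zeros(2)
for _ in range(100):
    w = negative_replay_step(w, x_new, x_gen)
```

    Even imperfect generated samples can serve this purpose: they only need to occupy the region of old classes well enough to push the new-class decision boundary away from it.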

    AlchemistSimulator/Alchemist: 28.4.3

    <h2><a href="https://github.com/AlchemistSimulator/Alchemist/compare/28.4.2...28.4.3">28.4.3</a> (2023-10-26)</h2> <h3>Dependency updates</h3> <ul> <li><strong>deps:</strong> update dependency commons-io:commons-io to v2.15.0 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/a94377f2edcda77c614fc2bae5511b9bfd033fff">a94377f</a>)</li> <li><strong>deps:</strong> update dependency de.flapdoodle.embed:de.flapdoodle.embed.mongo to v4.9.3 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/1bec5f48d80aef1ac4f83b4fd5a338b2749b7c3f">1bec5f4</a>)</li> <li><strong>deps:</strong> update node.js to 20.9 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/fb5f7be0e568edb8f375df4de544fdeb26bb6c19">fb5f7be</a>)</li> <li><strong>deps:</strong> update plugin gitsemver to v2 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/925a8f5ed578aa4722c93cb08cb9ed139b496b3b">925a8f5</a>)</li> <li><strong>deps:</strong> update react to v18.2.0-pre.636 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/b9af8e0aa97cf8b5afcae5e40cc501f1c7398417">b9af8e0</a>)</li> <li><strong>deps:</strong> update site/themes/hugo-theme-relearn digest to 5a534d0 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/937efb925a4b5618b47ce9634ca5879e21299dbb">937efb9</a>)</li> <li><strong>deps:</strong> update site/themes/hugo-theme-relearn digest to d2583cf (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/00cafff57dcd387c5bfb36f944a3e5e1b883e2c2">00cafff</a>)</li> </ul> <h3>Documentation</h3> <ul> <li><strong>website:</strong> use https URIs over ssh ones for the tutorial (<a href="https://github.com/AlchemistSimulator/Alchemist/issues/2770">#2770</a>) (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/ee9b50663d656aa8dacffd6e83fae22d52895b9f">ee9b506</a>)</li> </ul>

    AlchemistSimulator/Alchemist: 29.0.1

    <h2><a href="https://github.com/AlchemistSimulator/Alchemist/compare/29.0.0...29.0.1">29.0.1</a> (2023-11-22)</h2> <h3>Dependency updates</h3> <ul> <li><strong>core-deps:</strong> update protelis to v17.3.0 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/58b44a52b9dfb603f3c7cfc2079624702f95afcd">58b44a5</a>)</li> <li><strong>deps:</strong> update dependency org.apache.commons:commons-lang3 to v3.14.0 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/478829e4ebfef75178f58066e2ed326f5a581ca8">478829e</a>)</li> <li><strong>deps:</strong> update node.js to 20.10 (<a href="https://github.com/AlchemistSimulator/Alchemist/commit/ac46c31f0fce4e9600496e7eead0e728dfc68494">ac46c31</a>)</li> </ul>