95 research outputs found

    SCHA-VAE: Hierarchical Context Aggregation for Few-Shot Generation

    A few-shot generative model should be able to generate data from a novel distribution after observing only a limited set of examples. In few-shot learning, the model is trained on data from many sets drawn from distributions that share some underlying properties, such as sets of characters from different alphabets or objects from different categories. We extend current latent variable models for sets to a fully hierarchical approach with an attention-based point-to-set-level aggregation and call our method SCHA-VAE, for Set-Context-Hierarchical-Aggregation Variational Autoencoder. We explore likelihood-based model comparison, iterative data sampling, and adaptation-free out-of-distribution generalization. Our results show that the hierarchical formulation better captures the intrinsic variability within the sets in the small-data regime. This work generalizes deep latent variable approaches to few-shot learning, taking a step toward large-scale few-shot generation with a formulation that readily works with current state-of-the-art deep generative models. Comment: ICML 2022
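    The abstract describes an attention-based aggregation from individual datapoints to a set-level context that conditions a hierarchical latent variable model. The sketch below is a minimal, hypothetical PyTorch illustration of that general idea; the module and variable names (e.g. SetContextAggregator) are our own assumptions, and it is not the SCHA-VAE implementation.

```python
# Minimal, hypothetical sketch of attention-based point-to-set aggregation.
# Names and dimensions are illustrative, not the authors' implementation.
import torch
import torch.nn as nn

class SetContextAggregator(nn.Module):
    """Aggregates per-datapoint embeddings into one set-level latent."""

    def __init__(self, dim: int = 64, num_heads: int = 4):
        super().__init__()
        # A single learned query attends over all elements of the set.
        self.query = nn.Parameter(torch.randn(1, 1, dim))
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # The aggregated context parameterizes a Gaussian set-level latent.
        self.to_stats = nn.Linear(dim, 2 * dim)

    def forward(self, point_embeddings: torch.Tensor):
        # point_embeddings: (num_sets, set_size, dim)
        b = point_embeddings.shape[0]
        q = self.query.expand(b, -1, -1)
        context, _ = self.attn(q, point_embeddings, point_embeddings)  # (b, 1, dim)
        mu, logvar = self.to_stats(context.squeeze(1)).chunk(2, dim=-1)
        # Reparameterized sample of the set-level latent.
        z_set = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        return z_set, mu, logvar

# Usage: eight five-shot sets of 64-dimensional point embeddings.
agg = SetContextAggregator(dim=64)
z, mu, logvar = agg(torch.randn(8, 5, 64))
print(z.shape)  # torch.Size([8, 64])
```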

    Digital Reconstruction at the Service of Memory: Messina 1780

    In recent years, the study of how the appearance and layout of cities have evolved over the centuries has found new forms of representation through digital modelling and related immersive techniques. These technologies, popularized by the gaming industry, are now finding more and more space in the world of archaeology and the rediscovery of cultural heritage, allowing us to step into scenarios that belonged to the past. These investigation methods lend themselves remarkably well to large urban places that no longer exist because of destructive events but for which enough documentation survives to reconstruct their appearance in excellent detail and with high reliability. This project aims to rebuild the city of Messina as it appeared in the eighteenth century, before it was razed to the ground by natural disasters. Giannone, L.; Verdiani, G. (2020). Digital reconstruction at the service of memory: Messina 1780. EGE Revista de Expresión Gráfica en la Edificación. 0(13):115-127. https://doi.org/10.4995/ege.2020.14800

    Aligning Optimization Trajectories with Diffusion Models for Constrained Design Generation

    Generative models have had a profound impact on vision and language, paving the way for a new era of multimodal generative applications. While these successes have inspired researchers to explore using generative models in science and engineering to accelerate the design process and reduce the reliance on iterative optimization, challenges remain. Specifically, engineering optimization methods based on physics still outperform generative models when dealing with constrained environments where data is scarce and precision is paramount. To address these challenges, we introduce Diffusion Optimization Models (DOM) and Trajectory Alignment (TA), a learning framework that demonstrates the efficacy of aligning the sampling trajectory of diffusion models with the optimization trajectory derived from traditional physics-based methods. This alignment ensures that the sampling process remains grounded in the underlying physical principles. Our method allows for generating feasible and high-performance designs in as few as two steps without the need for expensive preprocessing, external surrogate models, or additional labeled data. We apply our framework to structural topology optimization, a fundamental problem in mechanical design, evaluating its performance on in- and out-of-distribution configurations. Our results demonstrate that TA outperforms state-of-the-art deep generative models on in-distribution configurations and halves the inference computational cost. When coupled with a few steps of optimization, it also improves manufacturability for out-of-distribution conditions. By significantly improving performance and inference efficiency, DOM enables us to generate high-quality designs in just a few steps and guide them toward regions of high performance and manufacturability, paving the way for the widespread application of generative models in large-scale data-driven design. Comment: arXiv admin note: text overlap with arXiv:2303.0976
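    The central mechanism in this abstract is aligning intermediate states of the diffusion sampling trajectory with iterates of a physics-based optimizer. The snippet below sketches one plausible shape such a trajectory-alignment penalty could take; the function name, the checkpoint matching, and the tensor shapes are assumptions for illustration, not the paper's actual loss.

```python
# Hypothetical sketch of a trajectory-alignment penalty: intermediate denoised
# predictions of a diffusion model are pulled toward iterates of a physics-based
# optimizer (e.g. density fields from topology optimization). Illustrative only.
import torch
import torch.nn.functional as F

def trajectory_alignment_loss(denoised_preds, optimizer_iterates):
    """
    denoised_preds: list of x0-hat tensors at selected diffusion steps, each of
                    shape (batch, channels, H, W), ordered from noisy to clean.
    optimizer_iterates: list of same-shaped tensors, ordered from the optimizer's
                    first iterate to its converged design.
    """
    assert len(denoised_preds) == len(optimizer_iterates)
    loss = 0.0
    for x0_hat, x_opt in zip(denoised_preds, optimizer_iterates):
        # Early (noisy) diffusion steps are matched to early optimizer iterates,
        # late steps to near-converged designs; the iterates act as fixed targets.
        loss = loss + F.mse_loss(x0_hat, x_opt.detach())
    return loss / len(denoised_preds)

# Usage with dummy data: four matched checkpoints along both trajectories.
preds = [torch.randn(2, 1, 32, 32, requires_grad=True) for _ in range(4)]
iters = [torch.rand(2, 1, 32, 32) for _ in range(4)]
print(trajectory_alignment_loss(preds, iters))
```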

    Learning from Invalid Data: On Constraint Satisfaction in Generative Models

    Generative models have demonstrated impressive results in vision, language, and speech. However, even with massive datasets, they struggle with precision, generating physically invalid or factually incorrect data. This is particularly problematic when the generated data must satisfy constraints, for example, to meet product specifications in engineering design or to adhere to the laws of physics in a natural scene. To improve precision while preserving diversity and fidelity, we propose a novel training mechanism that leverages datasets of constraint-violating data points, which we consider invalid. Our approach minimizes the divergence between the generative distribution and the valid prior while maximizing the divergence with the invalid distribution. We demonstrate how generative models like GANs and DDPMs that we augment to train with invalid data vastly outperform their standard counterparts, which train solely on valid data points. For example, our training procedure generates up to 98% fewer invalid samples on 2D densities, improves connectivity and stability four-fold on a stacking-block problem, and improves constraint satisfaction by 15% on a structural topology optimization benchmark in engineering design. We also analyze how the quality of the invalid data affects the learning procedure and the generalization properties of models. Finally, we demonstrate significant improvements in sample efficiency, showing that a tenfold increase in valid samples leads to a negligible difference in constraint satisfaction, while adding fewer than 10% invalid samples leads to a tenfold improvement. Our proposed mechanism offers a promising solution for improving precision in generative models while preserving diversity and fidelity, particularly in domains where constraint satisfaction is critical and data is limited, such as engineering design, robotics, and medicine.
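    The abstract frames training as pulling the generative distribution toward valid data and pushing it away from invalid data. As a hedged illustration, the sketch below shows one simple way this could be instantiated for a GAN: the discriminator labels valid samples as real and both generated and invalid samples as fake, so the generator is steered away from the constraint-violating region. The exact loss form, the weighting, and all names are assumptions, not the paper's formulation.

```python
# Hypothetical sketch: using constraint-violating ("invalid") samples as extra
# negatives in GAN training. Illustrative only, not the paper's exact losses.
import torch
import torch.nn.functional as F

def discriminator_loss(d, valid_x, invalid_x, fake_x, invalid_weight=1.0):
    logits_valid = d(valid_x)
    logits_fake = d(fake_x.detach())
    logits_invalid = d(invalid_x)
    real = F.binary_cross_entropy_with_logits(logits_valid, torch.ones_like(logits_valid))
    fake = F.binary_cross_entropy_with_logits(logits_fake, torch.zeros_like(logits_fake))
    # Extra term: invalid (constraint-violating) samples are also labeled "fake".
    inv = F.binary_cross_entropy_with_logits(logits_invalid, torch.zeros_like(logits_invalid))
    return real + fake + invalid_weight * inv

def generator_loss(d, fake_x):
    # Standard non-saturating generator loss; the invalid-data signal reaches
    # the generator only through the discriminator trained above.
    logits = d(fake_x)
    return F.binary_cross_entropy_with_logits(logits, torch.ones_like(logits))

# Usage with a toy discriminator on 2D points; "invalid" points live in a shifted region.
d = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))
valid, invalid, fake = torch.randn(16, 2), torch.randn(16, 2) + 3.0, torch.randn(16, 2)
print(discriminator_loss(d, valid, invalid, fake), generator_loss(d, fake))
```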

    Copyright Collecting Societies, Monopolistic Positions and Competition in the Eu Single Market

    The paper will discuss the reform of the legal framework in light of the proposed EU directive on collecting societies. The focus will be specifically devoted to the Italian situation, where, as in Austria, there is a legal monopoly. The basis of this monopoly has recently been debated, and the Italian legislator has liberalized neighboring rights; according to some scholars, this should lead to a re-thinking of the legal system and to the liberalization of copyright as well. We will also take into account the relations between the CISAC decision and the EU Services Directive. The re-thinking of de facto or legal monopoly positions will also be analyzed from an economic perspective, discussing economies of scale with particular attention to the division of the relevant markets. The final view is that overcoming the legal monopoly is likely to lead to a partitioning of the markets which, on the one hand, will promote the creation of small collecting societies dedicated to specific sectors but, on the other hand, should facilitate the growing power of the collecting societies already dominant in Europe (e.g. GEMA, PRS).