
    Parametric amplification of optical phonons

    Amplification of light through stimulated emission or nonlinear optical interactions has had a transformative impact on modern science and technology. The amplification of other bosonic excitations, like phonons in solids, is likely to open up remarkable new physical phenomena. Here, we report an experimental demonstration of optical-phonon amplification. A coherent mid-infrared optical field is used to drive large-amplitude oscillations of the Si-C stretching mode in silicon carbide. Upon nonlinear phonon excitation, a second probe pulse experiences parametric optical gain at all wavelengths throughout the reststrahlen band, which reflects the amplification of optical-phonon fluctuations. Starting from first-principles calculations, we show that the high-frequency dielectric permittivity and the phonon oscillator strength depend quadratically on the lattice coordinate. In the experimental conditions explored here, these then oscillate at twice the frequency of the optical field and provide a parametric drive for lattice fluctuations. Parametric gain in phononic four-wave mixing is a generic mechanism that can be extended to all polar modes of solids, as a new means to control the kinetics of phase transitions, to amplify many-body interactions, or to control phonon-polariton waves.
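    The parametric drive described above, a stiffness modulated at twice the oscillator's natural frequency, has a classical analogue in the Mathieu equation. The toy integration below is a hedged illustration (not the paper's first-principles model; all parameters are arbitrary) showing how modulation at twice the natural frequency amplifies an oscillation relative to the unmodulated case:

```python
import math

def integrate_parametric(eps, omega0=1.0, dt=1e-3, steps=100_000):
    """Symplectic-Euler integration of the Mathieu-type oscillator
    x'' + omega0^2 * (1 + eps * cos(2 * omega0 * t)) * x = 0.
    Modulating the stiffness at twice the natural frequency is the
    classic parametric-resonance condition; returns the peak |x|."""
    x, v = 1.0, 0.0
    peak = abs(x)
    for n in range(steps):
        t = n * dt
        a = -omega0 ** 2 * (1.0 + eps * math.cos(2.0 * omega0 * t)) * x
        v += a * dt          # update velocity first (symplectic Euler)
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Modulated vs. unmodulated peak amplitude over the same integration time.
gain = integrate_parametric(eps=0.2) / integrate_parametric(eps=0.0)
```

    With the modulation on, the peak amplitude grows by orders of magnitude over the integration window, while the unmodulated oscillator stays near its initial amplitude; that exponential growth is the classical signature of parametric gain.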

    Future Perspectives in Acute Myocarditis Complicated by Cardiogenic Shock

    Acute myocarditis is an inflammatory disease of the myocardium with a highly variable clinical course. Fulminant myocarditis (FM) represents the most threatening scenario, with hemodynamic compromise and cardiogenic shock at presentation. Despite medical advances and the availability of promising mechanical circulatory support (MCS), FM carries a dismal prognosis. Early referral to tertiary hospitals with MCS facilities and prompt diagnosis with endomyocardial biopsy are critical steps toward optimal management. Moreover, beyond supportive care, the ability of immunomodulating therapies to prevent irreversible myocardial damage must still be proven in clinical trials. In this editorial, we briefly describe current evidence and future perspectives regarding the management of myocarditis complicated by cardiogenic shock.

    Adaptive mitigation of the Air-Time pressure in LoRa multi-gateway architectures

    LoRa is a promising technology in the current Internet of Things market, operating in unlicensed bands to achieve long-range communication with ultra-low-power devices. In this work we capitalize on the idea introduced in [1], i.e., balancing the Air-Time of the different modulation spreading factors (SFs), and adapt it to a typical metropolitan scenario comprising multiple gateways (GWs) interconnected to the same network server. Our proposed approach, named ADaptive Mitigation of the AIr-time pressure in lORa (AD MAIORA), relies on a suitable measure of the per-spreading-factor load at each GW, quantified by means of a so-called pressure table, and on a heuristic algorithm that attempts to balance the per-SF pressure. Especially in heavily loaded scenarios, where a large number of nodes are served by the same GWs, AD MAIORA achieves significant performance gains, up to a factor-of-5 improvement over the legacy LoRaWAN Adaptive Data Rate.
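    As a rough sketch of the pressure-table idea (the cost model, names, and greedy move rule below are illustrative assumptions, not the AD MAIORA specification), one can track aggregate air-time per SF and greedily move nodes off the most pressured SF:

```python
# Relative air-time cost per packet: each SF step roughly doubles it
# (illustrative; real LoRa air-time also depends on bandwidth and payload).
AIRTIME = {sf: 2 ** (sf - 7) for sf in range(7, 13)}

def pressure_table(assignment):
    """assignment: dict node -> SF.  Returns aggregate air-time per SF,
    i.e. a per-gateway 'pressure table' in the spirit of the paper."""
    table = {sf: 0.0 for sf in AIRTIME}
    for sf in assignment.values():
        table[sf] += AIRTIME[sf]
    return table

def rebalance_step(assignment):
    """Greedy heuristic (illustrative): move one node from the most
    pressured SF to the least pressured one if that lowers the maximum
    pressure; returns True if a move was made."""
    table = pressure_table(assignment)
    hi = max(table, key=table.get)
    lo = min(table, key=table.get)
    movable = [n for n, sf in assignment.items() if sf == hi]
    if hi == lo or not movable:
        return False
    candidate = dict(assignment)
    candidate[movable[0]] = lo
    if max(pressure_table(candidate).values()) < max(table.values()):
        assignment[movable[0]] = lo
        return True
    return False
```

    Iterating `rebalance_step` until no move helps spreads the load across SFs; a real deployment would also have to respect each node's link budget, which constrains its minimum usable SF.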

    OpenFashionCLIP: Vision-and-Language Contrastive Learning with Open-Source Fashion Data

    The inexorable growth of online shopping and e-commerce demands scalable and robust machine learning-based solutions to accommodate customer requirements. In the context of automatic tagging, classification, and multimodal retrieval, prior works either defined supervised learning approaches with limited generalizability or more reusable CLIP-based techniques that were, however, trained on closed-source data. In this work, we propose OpenFashionCLIP, a vision-and-language contrastive learning method that adopts only open-source fashion data stemming from diverse domains and characterized by varying degrees of specificity. Our approach is extensively validated across several tasks and benchmarks, and experimental results highlight a significant out-of-domain generalization capability and consistent improvements over state-of-the-art methods in terms of both accuracy and recall. Source code and trained models are publicly available at: https://github.com/aimagelab/open-fashion-clip. Comment: International Conference on Image Analysis and Processing (ICIAP) 202
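    OpenFashionCLIP builds on CLIP-style contrastive training; the standard symmetric InfoNCE objective behind such methods can be sketched as below (a generic illustration with an assumed temperature, not the paper's exact training code):

```python
import math

def normalize(v):
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def clip_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric InfoNCE loss used in CLIP-style contrastive training:
    matched image/text pairs (same batch index) are pulled together,
    while every other pair in the batch acts as a negative."""
    imgs = [normalize(v) for v in image_embs]
    txts = [normalize(v) for v in text_embs]
    n = len(imgs)
    # Cosine-similarity logits scaled by the temperature.
    logits = [[sum(a * b for a, b in zip(imgs[i], txts[j])) / temperature
               for j in range(n)] for i in range(n)]

    def xent(rows):
        # Mean of -log softmax probability at the matched index.
        loss = 0.0
        for i, row in enumerate(rows):
            m = max(row)
            logz = m + math.log(sum(math.exp(x - m) for x in row))
            loss += logz - row[i]
        return loss / len(rows)

    cols = [[logits[j][i] for j in range(n)] for i in range(n)]
    return 0.5 * (xent(logits) + xent(cols))  # image->text and text->image
```

    The loss is near zero when matched pairs are far more similar than mismatched ones, and large when the pairing is wrong, which is what drives the joint embedding space during training.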

    Multimodal Garment Designer: Human-Centric Latent Diffusion Models for Fashion Image Editing

    Fashion illustration is used by designers to communicate their vision and to bring a design idea from conceptualization to realization, showing how clothes interact with the human body. In this context, computer vision can be used to improve the fashion design process. Unlike previous works that mainly focused on the virtual try-on of garments, we propose the task of multimodal-conditioned fashion image editing, guiding the generation of human-centric fashion images by following multimodal prompts such as text, human body poses, and garment sketches. We tackle this problem by proposing a new architecture based on latent diffusion models, an approach that has not been used before in the fashion domain. Given the lack of existing datasets suitable for the task, we also extend two existing fashion datasets, namely Dress Code and VITON-HD, with multimodal annotations collected in a semi-automatic manner. Experimental results on these new datasets demonstrate the effectiveness of our proposal, both in terms of realism and of coherence with the given multimodal inputs. Source code and collected multimodal annotations will be publicly released at: https://github.com/aimagelab/multimodal-garment-designer
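    To make the multimodal conditioning concrete, the toy sketch below shows one plausible way to combine optional text, pose, and sketch inputs into a single conditioning vector for a denoiser, with absent modalities replaced by a null embedding (all names, dimensions, and the drop scheme are illustrative assumptions, not the paper's architecture):

```python
import random

EMB_DIM = 4  # illustrative embedding size per modality

def embed(modality, value):
    """Stand-in for a real encoder: a deterministic pseudo-embedding."""
    rng = random.Random(hash((modality, value)) % (2 ** 32))
    return [rng.uniform(-1.0, 1.0) for _ in range(EMB_DIM)]

def build_condition(text=None, pose=None, sketch=None,
                    drop_prob=0.0, rng=random):
    """Concatenate per-modality embeddings into one conditioning vector.
    Missing (or randomly dropped, classifier-free-guidance style)
    modalities contribute a null all-zero embedding instead."""
    cond = []
    for name, value in (("text", text), ("pose", pose), ("sketch", sketch)):
        if value is None or rng.random() < drop_prob:
            cond.extend([0.0] * EMB_DIM)   # null token: modality absent
        else:
            cond.extend(embed(name, value))
    return cond
```

    A fixed-length conditioning vector like this keeps the denoiser's input shape stable regardless of which prompts the user actually supplies; real latent-diffusion systems typically inject such conditions through cross-attention rather than plain concatenation.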

    LaDI-VTON: Latent Diffusion Textual-Inversion Enhanced Virtual Try-On

    The rapidly evolving fields of e-commerce and the metaverse continue to seek innovative approaches to enhance the consumer experience. At the same time, recent advancements in diffusion models have enabled generative networks to create remarkably realistic images. In this context, image-based virtual try-on, which consists of generating a novel image of a target model wearing a given in-shop garment, has yet to capitalize on the potential of these powerful generative solutions. This work introduces LaDI-VTON, the first Latent Diffusion textual Inversion-enhanced model for the Virtual Try-ON task. The proposed architecture relies on a latent diffusion model extended with a novel additional autoencoder module that exploits learnable skip connections to enhance the generation process while preserving the model's characteristics. To effectively maintain the texture and details of the in-shop garment, we propose a textual inversion component that maps the visual features of the garment to the CLIP token embedding space and thus generates a set of pseudo-word token embeddings capable of conditioning the generation process. Experimental results on the Dress Code and VITON-HD datasets demonstrate that our approach outperforms competitors by a consistent margin, achieving a significant milestone for the task. Source code and trained models will be publicly released at: https://github.com/miccunifi/ladi-vton
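    The textual-inversion component can be pictured with the minimal sketch below: a learned projection maps garment visual features into the token-embedding space, and the resulting pseudo-word embedding is spliced into the prompt sequence (shapes, names, and the single-token setup are illustrative assumptions, not the LaDI-VTON implementation):

```python
TOKEN_DIM = 3  # illustrative token-embedding size

def project_to_tokens(visual_features, weights):
    """Learned linear map from visual-feature space to a pseudo-word
    token embedding (one row of 'weights' per output dimension)."""
    return [sum(w * f for w, f in zip(row, visual_features))
            for row in weights]

def splice_prompt(prompt_embs, placeholder_idx, pseudo_token):
    """Replace the placeholder token's embedding with the pseudo-word,
    so the text encoder conditions generation on the garment's look."""
    out = list(prompt_embs)
    out[placeholder_idx] = pseudo_token
    return out
```

    The appeal of the pseudo-word route is that the downstream text encoder and cross-attention layers need no architectural change: the garment is described by an embedding that behaves like an ordinary word.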

    Hidden Semi-Markov Models for Predictive Maintenance

    Realistic modeling approaches are essential for condition monitoring and predictive maintenance of industrial machines. In this work, we propose Hidden Semi-Markov Models (HSMMs) with (i) no constraints on the state-duration density function and (ii) applicability to continuous or discrete observations. To deal with this type of HSMM, we also propose modifications to the learning, inference, and prediction algorithms. Finally, automatic model selection is made possible using the Akaike Information Criterion. This paper describes the theoretical formalization of the model as well as several experiments performed on simulated and real data with the aim of validating the methodology. In all experiments, the model correctly estimates the current state and effectively predicts the time to a predefined event with a low overall average absolute error. Its applicability to real-world settings can therefore be beneficial, especially where the Remaining Useful Lifetime (RUL) of the machine must be calculated in real time.
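    To illustrate how an explicit-duration model yields a time-to-event estimate, the sketch below computes an expected RUL for a left-to-right HSMM with known duration distributions (the topology, numbers, and point-estimate approach are illustrative assumptions, not the paper's algorithms):

```python
def expected_duration(duration_pmf):
    """Mean of a discrete duration distribution {duration: probability}."""
    return sum(d * p for d, p in duration_pmf.items())

def expected_residual(duration_pmf, elapsed):
    """Expected remaining sojourn time given 'elapsed' steps already
    spent in the state (mean of the truncated, renormalized tail)."""
    tail = {d: p for d, p in duration_pmf.items() if d > elapsed}
    z = sum(tail.values())
    if z == 0:
        return 0.0
    return sum((d - elapsed) * p for d, p in tail.items()) / z

def predict_rul(states, current, elapsed):
    """states: duration pmfs in left-to-right order; the event occurs on
    entering the final state.  RUL = expected residual sojourn in the
    current state + expected durations of the intermediate states."""
    rul = expected_residual(states[current], elapsed)
    for pmf in states[current + 1:-1]:
        rul += expected_duration(pmf)
    return rul
```

    This is what distinguishes an HSMM from a plain HMM for prognostics: because state durations are modeled explicitly rather than implicitly as geometric, the residual-sojourn term can reflect realistic degradation-stage lengths.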