40 research outputs found

    Let's speak more? How the ECB responds to public contestation

    Get PDF
    Although the post-crisis politicisation of the ECB is widely acknowledged, little empirical evidence exists about how this important non-majoritarian institution has responded to public contestation. This article starts filling this gap by investigating whether and how public opinion affects ECB communication. Based on automated text analysis of the speeches delivered by Executive Board members (2001–2017), the article shows that negative public opinion is associated with an expansion of the scope of ECB communication and a reduction in the salience attributed to monetary policy issues. These results challenge the view that the ECB conceives of its legitimation almost exclusively in terms of the achievement of its mandate. In particular, our findings suggest that increased politicisation leads the ECB to reorient its legitimation strategy from one based on output achievement towards one based on participation in broader policy debates.

    Shape Registration in the Time of Transformers

    Get PDF
    In this paper, we propose a transformer-based procedure for the efficient registration of non-rigid 3D point clouds. The proposed approach is data-driven and adopts the transformer architecture for the first time in the registration task. Our method is general and applies to different settings. Given a fixed template with some desired properties (e.g. skinning weights or other animation cues), we can register raw acquired data to it, thereby transferring all the template properties to the input geometry. Alternatively, given a pair of shapes, our method can register the first onto the second (or vice versa), obtaining a high-quality dense correspondence between the two. In both contexts, the quality of our results enables us to target real applications such as texture transfer and shape interpolation. Furthermore, we also show that including an estimation of the underlying density of the surface eases the learning process. By exploiting the potential of this architecture, we can train our model requiring only a sparse set of ground truth correspondences (10∼20% of the total points). The proposed model and the analysis that we perform pave the way for future exploration of transformer-based architectures for registration and matching applications. Qualitative and quantitative evaluations demonstrate that our pipeline outperforms state-of-the-art methods for deformable and unordered 3D data registration on different datasets and scenarios.

    ASIF: Coupled Data Turns Unimodal Models to Multimodal Without Training

    Full text link
    Aligning the visual and language spaces requires training deep neural networks from scratch on giant multimodal datasets; CLIP trains both an image and a text encoder, while LiT manages to train just the latter by taking advantage of a pretrained vision network. In this paper, we show that sparse relative representations are sufficient to align text and images without training any network. Our method relies on readily available single-domain encoders (trained with or without supervision) and a modest (in comparison) number of image-text pairs. ASIF redefines what constitutes a multimodal model by explicitly disentangling memory from processing: here the model is defined by the embedded pairs of all the entries in the multimodal dataset, in addition to the parameters of the two encoders. Experiments on standard zero-shot visual benchmarks demonstrate the typical transfer ability of image-text models. Overall, our method represents a simple yet surprisingly strong baseline for foundation multimodal models, raising important questions on their data efficiency and on the role of retrieval in machine learning.
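    The retrieval-style mechanism described in this abstract can be sketched in a few lines: a query image and some candidate captions are each re-expressed as sparsified similarity vectors against the corresponding half of the image-text memory, which makes the two modalities directly comparable without any training. Everything below (the random encoder stand-ins, the dimensions, and the aggressive top-1 sparsification chosen to keep the toy deterministic) is an illustrative assumption, not the paper's exact procedure.

```python
import numpy as np

def sparse_rel(z, anchors, k=1):
    """Cosine similarities to anchors, keeping only the top-k entries per row.

    ASIF-style sparsification; a real setup would keep a larger top-k,
    k=1 just keeps this toy example deterministic.
    """
    z = z / np.linalg.norm(z, axis=-1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=-1, keepdims=True)
    s = z @ a.T
    thresh = np.sort(s, axis=1)[:, [-k]]          # per-row k-th largest value
    return np.where(s >= thresh, s, 0.0)

rng = np.random.default_rng(0)
n_pairs, d_img, d_txt = 100, 32, 16

# stand-ins for two frozen unimodal encoders applied to the image-text pairs
img_memory = rng.normal(size=(n_pairs, d_img))
txt_memory = rng.normal(size=(n_pairs, d_txt))

query_img = img_memory[7] + 0.05 * rng.normal(size=d_img)  # close to pair 7
captions = txt_memory[[3, 7, 42]]                # candidate class captions

# both modalities land in the same relative space, indexed by the pairs
q = sparse_rel(query_img[None, :], img_memory)   # shape (1, n_pairs)
c = sparse_rel(captions, txt_memory)             # shape (3, n_pairs)
scores = (q @ c.T).ravel()
print(scores.argmax())                           # → 1: the caption of pair 7
```

    Note that the only learned components are the two single-domain encoders; swapping the memory (the embedded pairs) changes the model without any retraining.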

    Relative representations enable zero-shot latent space communication

    Full text link
    Neural networks embed the geometric structure of a data manifold lying in a high-dimensional space into latent representations. Ideally, the distribution of the data points in the latent space should depend only on the task, the data, the loss, and other architecture-specific constraints. However, factors such as the random weight initialization, training hyperparameters, or other sources of randomness in the training phase may induce incoherent latent spaces that hinder any form of reuse. Nevertheless, we empirically observe that, under the same data and modeling choices, distinct latent spaces typically differ by an unknown quasi-isometric transformation: that is, in each space, the distances between the encodings do not change. In this work, we propose to adopt pairwise similarities as an alternative data representation that can be used to enforce the desired invariance without any additional training. We show how neural architectures can leverage these relative representations to guarantee, in practice, latent isometry invariance, effectively enabling latent space communication: from zero-shot model stitching to latent space comparison between diverse settings. We extensively validate the generalization capability of our approach on different datasets, spanning various modalities (images, text, graphs), tasks (e.g., classification, reconstruction) and architectures (e.g., CNNs, GCNs, transformers).
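    The invariance claim in this abstract can be checked numerically: cosine similarities to a fixed set of anchor samples are unchanged by any orthogonal transformation of the latent space. In the minimal sketch below, a random rotation stands in for the difference between two differently-seeded trainings; real latent spaces differ only approximately by such a transformation, so the equality would hold only approximately there.

```python
import numpy as np

def relative_representation(embeddings, anchors):
    """Re-express each embedding as its cosine similarities to the anchors.

    embeddings: (n, d) latent vectors from some encoder.
    anchors:    (k, d) latent vectors of the anchor samples, same encoder.
    Returns an (n, k) array: row i holds cos(e_i, a_j) for each anchor j.
    """
    e = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return e @ a.T

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 8))               # stand-in latent space A
q, _ = np.linalg.qr(rng.normal(size=(8, 8)))
y = x @ q                                 # latent space B: rotated copy of A

rel_a = relative_representation(x[2:], x[:2])   # first 2 samples as anchors
rel_b = relative_representation(y[2:], y[:2])
print(np.allclose(rel_a, rel_b))          # → True: rotation leaves it unchanged
```

    Any downstream module trained on `rel_a` can therefore consume `rel_b` unchanged, which is the mechanism behind the zero-shot model stitching mentioned above.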

    Bootstrapping Parallel Anchors for Relative Representations

    Full text link
    The use of relative representations for latent embeddings has shown potential in enabling latent space communication and zero-shot model stitching across a wide range of applications. Nevertheless, relative representations rely on a certain number of parallel anchors being given as input, which can be impractical to obtain in certain scenarios. To overcome this limitation, we propose an optimization-based method to discover new parallel anchors from a limited known set (seed). Our approach can be used to find semantic correspondence between different domains, align their relative spaces, and achieve competitive results in several tasks.
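    As a rough illustration of the idea (not the paper's actual objective), one can fit an alignment between the two spaces on the seed anchors and then harvest new parallel anchors by nearest-neighbour matching. The sketch below makes two simplifying assumptions: the alignment is a plain orthogonal Procrustes fit, the second domain is a synthetic rotated copy of the first, and the seed is large enough (at least the latent dimension) for the fit to be well-posed.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 16
src = rng.normal(size=(n, d))
rot, _ = np.linalg.qr(rng.normal(size=(d, d)))
tgt = src @ rot                      # second domain: rotated copy of the first

seed = np.arange(20)                 # known parallel anchors (the seed set)

# fit an orthogonal map on the seed pairs (orthogonal Procrustes solution)
u, _, vt = np.linalg.svd(src[seed].T @ tgt[seed])
w = u @ vt

# discover new parallel anchors: nearest target neighbour of each mapped source
mapped = src @ w
dist = ((mapped[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)
new_anchors = dist.argmin(axis=1)
print((new_anchors == np.arange(n)).mean())  # → 1.0 on this synthetic example
```

    In the noisy, non-isometric settings the paper targets, the closed-form fit would be replaced by the proposed optimization, and only high-confidence matches would be promoted to anchors.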

    From Charts to Atlas: Merging Latent Spaces into One

    Full text link
    Models trained on semantically related datasets and tasks exhibit comparable inter-sample relations within their latent spaces. In this study, we investigate the aggregation of such latent spaces to create a unified space encompassing the combined information. To this end, we introduce Relative Latent Space Aggregation, a two-step approach that first renders the spaces comparable using relative representations, and then aggregates them via a simple mean. We carefully divide a classification problem into a series of learning tasks under three different settings: sharing samples, classes, or neither. We then train a model on each task and aggregate the resulting latent spaces. We compare the aggregated space with that derived from an end-to-end model trained over all tasks and show that the two spaces are similar. We then observe that the aggregated space is better suited for classification, and empirically demonstrate that this is due to the unique imprints left by task-specific embedders within the representations. We finally test our framework in scenarios where no shared region exists and show that it can still be used to merge the spaces, albeit with diminished benefits over naive merging.
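    The two-step recipe described here (relative projection, then a simple mean) reduces to very little code. In the sketch below, two random linear maps act as hypothetical stand-ins for the task-specific embedders, and a handful of shared samples serve as the anchors that make the spaces comparable.

```python
import numpy as np

def rel(z, anchors):
    """Relative representation: cosine similarities of each row to the anchors."""
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z @ a.T

rng = np.random.default_rng(0)
shared = rng.normal(size=(50, 16))        # samples seen by both tasks
anchors_idx = np.arange(10)               # shared samples used as anchors

# two task-specific embedders: hypothetical stand-ins, modeled as two
# different linear maps of a common ground-truth space
w1, w2 = rng.normal(size=(16, 16)), rng.normal(size=(16, 16))
space1, space2 = shared @ w1, shared @ w2

# step 1: make the spaces comparable via relative representations
r1 = rel(space1, space1[anchors_idx])
r2 = rel(space2, space2[anchors_idx])

# step 2: aggregate them with a simple mean
atlas = (r1 + r2) / 2
print(atlas.shape)                        # → (50, 10): samples × anchors
```

    The aggregated `atlas` lives in the shared anchor-indexed space, so a single classifier can be trained on it regardless of which embedder produced each original space.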

    Clinical correlates of "pure" essential tremor: the TITAN study

    Get PDF
    Background: To date, there are no large studies delineating the clinical correlates of "pure" essential tremor (ET) according to its new definition. Methods: From the ITAlian tremor Network (TITAN) database, we extracted data from patients with a diagnosis of "pure" ET and excluded those with other tremor classifications, including ET-plus, focal, and task-specific tremor, which were formerly considered parts of the ET spectrum. Results: Out of 653 subjects recruited in the TITAN study by January 2022, the data of 208 (31.8%) "pure" ET patients (86M/122F) were analyzed. The distribution of age at onset was found to be bimodal. The proportion of familial cases differed significantly across 20-year age-at-onset classes, with sporadic cases representing the large majority of the class with an age at onset above 60 years. Patients with a positive family history of tremor had a younger onset and were more likely to have leg involvement than sporadic patients despite a similar disease duration. Early-onset and late-onset cases differed in terms of tremor distribution at onset and tremor severity, the latter likely as a function of longer disease duration, yet without differences in quality of life, which suggests a relatively benign progression. Treatment patterns and outcomes revealed that up to 40% of the sample was unsatisfied with the current pharmacological options. Discussion: The findings reported in the study provide new insights, especially with regard to a possibly inverted sex distribution and to the genetic background of "pure" ET, given that familial cases were evenly distributed across 20-year age-at-onset classes. Deep clinical profiling of "pure" ET, for instance according to age at onset, might increase the clinical value of this syndrome in identifying pathogenetic hypotheses and therapeutic strategies.

    Outcomes of COVID-19 patients treated with continuous positive airway pressure outside ICU

    Get PDF
    Aim: We aimed to characterize a large population of Coronavirus disease 2019 (COVID-19) patients with moderate-to-severe hypoxemic acute respiratory failure (ARF) receiving CPAP outside the intensive care unit (ICU), and to ascertain whether the duration of CPAP application increased the risk of mortality for patients requiring intubation. Methods: In this retrospective, multicentre cohort study, we included adult COVID-19 patients treated with CPAP outside the ICU for hypoxemic ARF from March 1st to April 15th, 2020. We collected demographic and clinical data, including CPAP therapeutic goal, hospital length of stay (LOS), and 60-day in-hospital mortality. Results: The study includes 537 patients with a median age of 69 (IQR, 60-76) years; 391 (73%) were males. According to the predefined CPAP therapeutic goal, 397 (74%) patients were included in the full-treatment subgroup and 140 (26%) in the do-not-intubate (DNI) subgroup. Median CPAP duration was 4 (IQR, 1-8) days, while hospital LOS was 16 (IQR, 9-27) days. Sixty-day in-hospital mortality was 34% (95%CI, 0.304-0.384) overall, and 21% (95%CI, 0.169-0.249) and 73% (95%CI, 0.648-0.787) for the full-treatment and DNI subgroups, respectively. In the full-treatment subgroup, in-hospital mortality was 42% (95%CI, 0.345-0.488) for the 180 (45%) CPAP failures requiring intubation, and 2% (95%CI, 0.008-0.035) for the remaining 217 (55%) patients in whom CPAP succeeded. Delaying intubation was associated with increased mortality [HR, 1.093 (95%CI, 1.010-1.184)]. Conclusions: We described a large population of COVID-19 patients treated with CPAP outside the ICU. Intubation delay represents a risk factor for mortality. Further investigation is needed for early identification of CPAP failures.