
    Technical Trading, Predictability and Learning in Currency Markets

    This paper studies the predictability of currency returns over time and the extent to which it is captured by trading rules commonly used in currency markets. We consider the strategies that an investor endowed with rational expectations could have pursued to exploit out-of-sample currency predictability and generate abnormal returns. We find a close relation between these strategies and indices that track popular technical trading rules, namely moving-average crossover rules and the carry trade, implying that the technical rules represent heuristics by which professional market participants exploit currency mispricing. We find evidence that such mispricing reflects investors' initially wrong beliefs (wrong priors), but that information is processed efficiently as it becomes available. Predictability is highest in the mid-1990s, subsequently decreases sharply, but increases again in the final part of the sample period, especially for the euro and emerging currencies.
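    As an illustration of the two rule families named in the abstract, the sketch below implements a moving-average crossover signal and a carry-trade signal in Python. The function names, window lengths, and data layout are illustrative assumptions, not the paper's implementation.

```python
# Illustrative sketch of the two rule families discussed above; not the
# paper's implementation. Assumes `spot` is a pandas Series of daily exchange
# rates and `rate_diff` is the foreign-minus-domestic interest-rate differential.
import pandas as pd

def ma_crossover_signal(spot: pd.Series, short: int = 5, long: int = 200) -> pd.Series:
    """+1 (long the foreign currency) when the short moving average is above
    the long one, -1 otherwise."""
    fast = spot.rolling(short).mean()
    slow = spot.rolling(long).mean()
    return (fast > slow).astype(int) * 2 - 1

def carry_trade_signal(rate_diff: pd.Series) -> pd.Series:
    """+1 when the foreign interest rate exceeds the domestic rate, -1 otherwise."""
    return (rate_diff > 0).astype(int) * 2 - 1
```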

    A monolithic ASIC demonstrator for the Thin Time-of-Flight PET scanner

    Time-of-flight measurement is an important advancement in PET scanners, improving image reconstruction at a lower delivered radiation dose. This article describes the monolithic ASIC for the TT-PET project, a novel concept for a high-precision PET scanner for small animals. The chip uses a SiGe BiCMOS process for the timing measurements and integrates a fully depleted pixel matrix with a low-power BJT-based front-end per channel on the same 100 μm thick die. The target timing resolution is 30 ps RMS for electrons from the conversion of 511 keV photons. A novel synchronization scheme using a patent-pending TDC allows the synchronization of 1.6 million channels across almost 2000 different chips at the picosecond level. A full-featured demonstrator chip with a 3×10 matrix of 500×500 μm² pixels was produced to validate each block. Its design and experimental results are presented here.
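    A back-of-the-envelope calculation (not from the article) shows why the 30 ps target matters: combining two detectors in coincidence and converting the time difference into a position along the line of response gives the spatial uncertainty that time-of-flight information can constrain.

```python
# Illustrative arithmetic only: how a per-detector timing resolution maps to
# localisation uncertainty along the line of response in a ToF-PET coincidence.
import math

C = 299_792_458.0        # speed of light, m/s
SIGMA_SINGLE = 30e-12    # per-detector resolution quoted in the abstract, 30 ps RMS

# Two independent detectors in coincidence: resolutions add in quadrature.
sigma_coincidence = math.sqrt(2) * SIGMA_SINGLE        # ~42 ps RMS

# Position uncertainty along the line of response: dx = c * dt / 2.
sigma_position_mm = C * sigma_coincidence / 2 * 1e3    # ~6.4 mm RMS

print(f"coincidence resolution: {sigma_coincidence * 1e12:.1f} ps RMS")
print(f"position uncertainty:   {sigma_position_mm:.1f} mm RMS")
```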

    The Italian Emergency Regime at the Covid-19 “Stress Test”: Decline of Political Responsiveness, Output Legitimation and Politicization of Expertise

    During the Covid-19 pandemic, public trust necessarily shifted towards science and technical expertise worldwide. In some liberal democracies, the Constitution and Parliament were by-passed, with Executives using scientific and technical expertise to legitimate political choices within the crisis-management process. In Italy (March-August 2020), the Executive set up expert teams (such as the comitato tecnico-scientifico) and acted mostly through Decrees of the President of the Council of Ministers (DPCM); the Italian Parliament was not sufficiently consulted. After reviewing the current research literature on constitutional changes during emergency regimes within representative democracies, and using insights from Italy, we try to frame the discourse concerning the Executive's choices during emergency regimes in terms of (i) a decline of political responsiveness, (ii) the prevalence of output legitimation and (iii) the politicization of expertise (with the possibility for expertise, in turn, to influence policy making), in order to contribute to the overall debate on the reconfiguration of powers in times of crisis.

    Deep Tweets: from Entity Linking to Sentiment Analysis

    The huge amount of information streaming from online social networks is increasingly attracting the interest of researchers in sentiment analysis on micro-blogging platforms. We provide an overview of the open challenges of sentiment analysis on Italian tweets. We discuss methodological issues as well as new directions for investigation, with particular focus on sentiment analysis of tweets containing figurative language and on entity-based sentiment analysis of micro-posts.

    Argument Mining on Italian News Blogs

    The goal of argument mining is to extract structured information, namely the arguments and their relations, from unstructured text. In this paper, we propose an approach to argument relation prediction based on supervised learning over linguistic and semantic features of the text. We test our method on the CorEA corpus of user comments on online newspaper articles, evaluating the system's performance in assigning the correct relation, i.e., support or attack, to pairs of arguments. We obtain results consistently better than a sentiment-analysis-based baseline (over two out of three pairs are classified correctly), and we observe that sentiment and lexical semantics are the most informative features for the relation prediction task.
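    The sketch below illustrates the general setup described in the abstract, i.e. a supervised classifier labelling argument pairs as support or attack. The toy data, the lexical-only feature set, and the pipeline choices are assumptions for illustration and do not reproduce the authors' feature engineering.

```python
# Minimal sketch of a feature-based support/attack classifier for argument
# pairs; illustrative only, not the authors' system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy argument pairs (parent comment, reply) with their relation labels.
pairs = [
    ("Il vaccino è sicuro.", "Concordo, gli studi clinici lo confermano."),
    ("Il vaccino è sicuro.", "Falso, i dati dicono il contrario."),
    ("La legge va approvata.", "Giusto, risolve un problema reale."),
    ("La legge va approvata.", "No, peggiora solo la situazione."),
]
labels = ["support", "attack", "support", "attack"]

# Encode each pair as a single string; a real system would add sentiment and
# semantic features on top of these lexical ones.
texts = [f"{a} [SEP] {b}" for a, b in pairs]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)
print(model.predict([texts[1]]))
```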

    Altered spreading of neuronal avalanches in temporal lobe epilepsy relates to cognitive performance: A resting-state hdEEG study

    Objective: Large aperiodic bursts of activations named neuronal avalanches have been used to characterize whole-brain activity, as their presence typically relates to optimal dynamics. Epilepsy is characterized by alterations in large-scale brain network dynamics. Here we exploited neuronal avalanches to characterize differences in electroencephalography (EEG) basal activity, free from seizures and/or interictal spikes, between patients with temporal lobe epilepsy (TLE) and matched controls. Method: We defined neuronal avalanches as starting when the z-scored source-reconstructed EEG signals crossed a specific threshold in any region and ending when all regions returned to baseline. This technique avoids data manipulation or assumptions of signal stationarity, focusing on the aperiodic, scale-free components of the signals. We computed individual avalanche transition matrices to track the probability of an avalanche spreading between any two regions, compared them between patients and controls, and related them to memory performance in the patients. Results: We observed a robust topography of significant edges clustering in regions functionally and structurally relevant to TLE, such as the entorhinal cortex, the inferior parietal and fusiform areas, the inferior temporal gyrus, and the anterior cingulate cortex. We detected a significant correlation between the centrality of the entorhinal cortex in the transition matrix and long-term memory performance (delayed recall of the Rey-Osterrieth Complex Figure Test). Significance: Our results show that the propagation patterns of large-scale neuronal avalanches are altered in TLE during the resting state, suggesting a potential diagnostic application in epilepsy. Furthermore, the relationship between specific patterns of propagation and memory performance supports the neurophysiological relevance of neuronal avalanches.
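    A minimal sketch of the avalanche definition given in the Method section, assuming a (regions × samples) array of z-scored source-reconstructed signals. The threshold value and the exact transition-matrix estimator are illustrative and may differ from the paper's.

```python
# Sketch of avalanche detection and an avalanche transition matrix on z-scored
# source signals; a common formulation, not necessarily the paper's estimator.
import numpy as np

def avalanche_transition_matrix(z, threshold=3.0):
    """z: (n_regions, n_samples) z-scored signals. Returns P[i, j], the
    estimated probability that region j is active at t+1 given that region i
    is active at t, counting only transitions inside an avalanche."""
    active = np.abs(z) > threshold          # region-wise suprathreshold samples
    any_active = active.any(axis=0)         # avalanche = run of active samples
    n_regions, n_samples = z.shape
    counts = np.zeros((n_regions, n_regions))
    norm = np.zeros(n_regions)
    for t in range(n_samples - 1):
        if any_active[t] and any_active[t + 1]:
            for i in np.where(active[:, t])[0]:
                counts[i] += active[:, t + 1]
                norm[i] += 1
    out = np.zeros_like(counts)
    nz = norm > 0
    out[nz] = counts[nz] / norm[nz][:, None]
    return out

# Toy usage with random data standing in for source-reconstructed EEG.
rng = np.random.default_rng(0)
tm = avalanche_transition_matrix(rng.standard_normal((68, 20000)))
```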

    AlBERTo: Modeling Italian Social Media Language with BERT

    Natural Language Processing tasks have recently attracted considerable interest and seen significant progress following the development of numerous innovative artificial intelligence models released in recent years. The increase in available computing power has made it possible to apply machine learning approaches to considerable amounts of textual data, demonstrating that they can obtain very encouraging results on challenging NLP tasks by generalizing the properties of natural language directly from the data. Models such as ELMo, GPT/GPT-2, BERT, ERNIE, and RoBERTa have proved extremely useful in NLP tasks such as entailment, sentiment analysis, and question answering. The availability of these resources mainly in the English language motivated us towards the realization of AlBERTo, a natural language model based on BERT and trained on the Italian language. We decided to train AlBERTo from scratch on social-network language, Twitter in particular, because many of the classic content-analysis tasks are oriented towards data extracted from the digital sphere of users. The model was distributed to the community through a repository on GitHub and through the Transformers library (Wolf et al. 2019) released by the development group huggingface.co. We have evaluated the validity of the model on the classification tasks of sentiment polarity, irony, subjectivity, and hate speech. The specifications of the model, the code developed for training and fine-tuning, and the instructions for using it in a research project are freely available.
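    As a usage illustration, the sketch below loads a BERT-style Italian model through the Transformers library and runs one fine-tuning step for binary sentiment polarity. The model identifier is an assumption to be checked against the AlBERTo GitHub repository, and the two-tweet batch is toy data.

```python
# Minimal fine-tuning sketch via the Transformers library. The model id below
# is assumed; verify it on the Hugging Face Hub / AlBERTo repository before use.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_ID = "m-polignano-uniba/bert_uncased_L-12_H-768_A-12_italian_alb3rt0"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, num_labels=2)

# Toy batch of two Italian tweets with toy polarity labels (1 = positive).
batch = tokenizer(["che bella giornata!", "servizio pessimo, mai più"],
                  padding=True, truncation=True, return_tensors="pt")
labels = torch.tensor([1, 0])

# One illustrative training step; a real fine-tune would loop over a dataset
# with an optimizer and a validation split.
outputs = model(**batch, labels=labels)
outputs.loss.backward()
```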