
    Better Safe Than Sorry: An Adversarial Approach to Improve Social Bot Detection

    The arms race between spambots and spambot detectors consists of several cycles (or generations): a new wave of spambots is created (and new spam is spread), new spambot filters are derived, and old spambots mutate (or evolve) into new species. Recently, with the diffusion of the adversarial learning approach, a new practice is emerging: purposely manipulating target samples in order to build stronger detection models. Here, we manipulate generations of Twitter social bots to obtain - and study - their possible future evolutions, with the aim of eventually deriving more effective detection techniques. In detail, we propose and experiment with a novel genetic algorithm for the synthesis of online accounts. The algorithm makes it possible to create synthetic, evolved versions of current state-of-the-art social bots. Results demonstrate that the synthetic bots do evade current detection techniques. However, they provide all the elements needed to improve such techniques, making a proactive approach to the design of social bot detection systems possible.
    Comment: This is the pre-final version of a paper accepted at the 11th ACM Conference on Web Science, June 30-July 3, 2019, Boston, U
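    The genetic-algorithm idea described above can be illustrated with a minimal, generic sketch. This is not the paper's actual implementation: the account encoding, the toy detector, and all thresholds below are assumptions made purely for illustration. Accounts are encoded as small feature vectors, fitness rewards evading the detector, and elitist selection with crossover and mutation evolves ever-harder-to-detect variants.

```python
import random

random.seed(42)

# Toy "detector": scores an account by how bot-like its behaviour looks.
# Features (all hypothetical): tweets/day, retweet ratio, follower/friend ratio.
def detector_score(account):
    tweets_per_day, retweet_ratio, ff_ratio = account
    # Higher score = more bot-like; weights and thresholds are illustrative only.
    return (tweets_per_day / 200) + retweet_ratio + max(0.0, 1.0 - ff_ratio)

def fitness(account):
    # Evolved bots are fitter when the detector score is lower (better evasion).
    return -detector_score(account)

def mutate(account, rate=0.2):
    # Perturb each gene with probability `rate`; keep features non-negative.
    return tuple(
        max(0.0, g + random.gauss(0, 0.1 * (g + 1))) if random.random() < rate else g
        for g in account
    )

def crossover(a, b):
    # Uniform crossover: pick each gene from either parent.
    return tuple(random.choice(pair) for pair in zip(a, b))

def evolve(population, generations=50, elite=5):
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[:elite]  # elitism: best accounts survive unchanged
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(len(population) - elite)
        ]
        population = parents + children
    return max(population, key=fitness)

# Start from crude, easily detected bots (high activity, mostly retweets).
initial = [
    (random.uniform(100, 200), random.uniform(0.8, 1.0), random.uniform(0.0, 0.2))
    for _ in range(30)
]
best = evolve(list(initial))
```

Because the elite is carried over unchanged at every generation, the best detector score in the population can only improve, which mirrors the paper's observation that evolved bots become progressively harder to detect.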

    PPAR genomics and pharmacogenomics: Implications for cardiovascular disease

    The peroxisome proliferator-activated receptors (PPARs) consist of three related transcription factors that regulate a number of cellular processes central to cardiovascular health and disease. Numerous pharmacologic studies have assessed the effects of specific PPAR agonists in clinical trials and have provided insight into the clinical effects of these genes, while genetic studies have demonstrated clinical associations between PPAR polymorphisms and abnormal cardiovascular phenotypes. With the abundance of data available from these studies as a background, PPAR pharmacogenetics has become a promising and rapidly advancing field. This review focuses on summarizing the current state of understanding of PPAR genetics and pharmacogenetics and the important implications for the individualization of therapy for patients with cardiovascular diseases.

    Misurazioni

    Between 1977 and 1979, Mario Cresci undertook a meaningful and interesting experience that was interrupted by the withdrawal of institutional support and can therefore be regarded as a project that, although already fully worked out, remained unrealised or at least unfinished. In Basilicata, where the author had for years been developing Misurazioni, his photographic research with a clear social and anthropological orientation, Cresci began teaching in an unusual institution: a design school founded by Regione Basilicata under law n. 285 of June 1st 1977 and devoted to vocational training in craftsmanship techniques. A school, in short, that sought to exploit creativity as an economic resource. Unfortunately the project was interrupted almost immediately, remaining essentially unrealised, since the institutions withdrew their support for the school.

    The Eternal Detective : Poe’s Creative and Resolvent Duality in the Hardboiled Era

    This paper traces the continuity of Edgar Allan Poe’s archetypal “creative and resolvent” detective from the nineteenth century’s classical detective fiction into the twentieth century’s hardboiled detective fiction. Specifically, this paper asserts that the duality first suggested by Poe in “The Murders in the Rue Morgue” (1841) not only defined classical-era detectives but also persisted into the radically different hardboiled era of American detective fiction. First, this paper examines the cultural contexts of each era and establishes the shared links between the resolvent—or analytical—traits and creative—or abstract and Romantic—traits of classical-era detectives C. Auguste Dupin and Sherlock Holmes and hardboiled detectives Race Williams, the Continental Op, Sam Spade, and Philip Marlowe. This paper claims that the analytical skills of classical detectives are similarly present in hardboiled detectives, and that the creative eccentricity and melancholy of the classical detectives manifest as personal codes of Romantic honor in the hardboiled era. This complicates the traditional understanding of each era, as the two sub-genres share the same core character type yet tend to produce opposite messaging about the nature of liberal society. This paper contends that the creative and resolvent duality of both eras’ detectives made them perfectly suited to either address or expose the contradictions of the capitalist liberal democracies that produced them. Ultimately, this paper concludes with an examination of the socioeconomic motivations of each era’s detectives and the resultant societal critique enabled by creative and resolvent duality.

    Cashtag piggybacking: uncovering spam and bot activity in stock microblogs on Twitter

    Microblogs are increasingly exploited for predicting prices and traded volumes of stocks in financial markets. However, it has been demonstrated that much of the content shared on microblogging platforms is created and publicized by bots and spammers. Yet, the presence (or lack thereof) and the impact of fake stock microblogs have never been systematically investigated before. Here, we study 9M tweets related to stocks of the 5 main financial markets in the US. By comparing tweets with financial data from Google Finance, we highlight important characteristics of Twitter stock microblogs. More importantly, we uncover a malicious practice - referred to as cashtag piggybacking - perpetrated by coordinated groups of bots and likely aimed at promoting low-value stocks by exploiting the popularity of high-value ones. Among the findings of our study is that as much as 71% of the authors of suspicious financial tweets are classified as bots by a state-of-the-art spambot detection algorithm. Furthermore, 37% of them were suspended by Twitter a few months after our investigation. Our results call for the adoption of spam and bot detection techniques in all studies and applications that exploit user-generated content for predicting the stock market.
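    The piggybacking pattern described above can be illustrated with a minimal, hypothetical sketch: flag tweets whose cashtags mix a high-capitalisation ticker with low-capitalisation ones. The ticker set, regular expression, and flagging rule below are illustrative assumptions, not the paper's method; the actual study classified stocks using financial data from Google Finance.

```python
import re

# Hypothetical high-cap ticker set; a real analysis would derive market-cap
# classes from financial data rather than a hard-coded list.
HIGH_CAP = {"AAPL", "MSFT", "AMZN"}

# Cashtags are a "$" followed by a short ticker symbol, e.g. $AAPL.
CASHTAG = re.compile(r"\$([A-Za-z]{1,5})\b")

def cashtags(text):
    """Extract the set of cashtag symbols mentioned in a tweet."""
    return {m.upper() for m in CASHTAG.findall(text)}

def suspicious_piggybacking(tweets):
    """Flag tweets that pair a high-cap cashtag with low-cap ones."""
    flagged = []
    for t in tweets:
        tags = cashtags(t)
        high = tags & HIGH_CAP
        low = tags - HIGH_CAP
        if high and low:  # low-value tickers riding on a popular one
            flagged.append((t, sorted(low)))
    return flagged

tweets = [
    "Earnings beat! $AAPL to the moon",
    "$AAPL $MSFT $XYZQ huge breakout coming, don't miss it",
    "$XYZQ quiet accumulation phase",
]
flagged = suspicious_piggybacking(tweets)
```

Here only the second tweet is flagged, since it is the only one that mentions a low-cap cashtag ($XYZQ, a made-up ticker) alongside popular ones; in practice such co-occurrence signals would be combined with bot-detection scores rather than used alone.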

    A Decade of Social Bot Detection

    On the morning of November 9th 2016, the world woke up to the shocking outcome of the US Presidential elections: Donald Trump was the 45th President of the United States of America. An unexpected event that still has tremendous consequences all over the world. Today, we know that a minority of social bots, automated social media accounts mimicking humans, played a central role in spreading divisive messages and disinformation, possibly contributing to Trump's victory. In the aftermath of the 2016 US elections, the world started to realize the gravity of widespread deception in social media. Following Trump's victory, we witnessed the emergence of a strident dissonance between the multitude of efforts to detect and remove bots and the increasing effects that these malicious actors seem to have on our societies. This paradox opens a burning question: what strategies should we enforce in order to stop this social bot pandemic? In these times, during the run-up to the 2020 US elections, the question appears more crucial than ever. What struck social, political, and economic analysts after 2016 - deception and automation - has, however, been a matter of study for computer scientists since at least 2010. In this work, we briefly survey the first decade of research in social bot detection. Via a longitudinal analysis, we discuss the main trends of research in the fight against bots, the major results that were achieved, and the factors that make this never-ending battle so challenging. Capitalizing on lessons learned from our extensive analysis, we suggest possible innovations that could give us the upper hand against deception and manipulation. Studying a decade of endeavours at social bot detection can also inform strategies for detecting and mitigating the effects of other, more recent, forms of online deception, such as strategic information operations and political trolls.
    Comment: Forthcoming in Communications of the AC