84 research outputs found

    Adaptation Algorithm and Theory Based on Generalized Discrepancy

    We present a new algorithm for domain adaptation improving upon a discrepancy minimization algorithm previously shown to outperform a number of algorithms for this task. Unlike many previous algorithms for domain adaptation, our algorithm does not consist of a fixed reweighting of the losses over the training sample. We show that our algorithm benefits from a solid theoretical foundation and more favorable learning bounds than discrepancy minimization. We present a detailed description of our algorithm and give several efficient solutions for solving its optimization problem. We also report the results of several experiments showing that it outperforms discrepancy minimization.
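
    For context, the baseline family the paper improves on, a fixed reweighting of the losses over the labeled source sample, can be sketched as a weighted empirical-risk fit. The weights, model, and synthetic data below are illustrative assumptions, not the authors' generalized-discrepancy algorithm.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Illustrative sketch of the baseline idea: fit a hypothesis on the labeled
# source sample with a fixed, precomputed reweighting of per-example losses.
# The weights here are hypothetical; discrepancy minimization would choose
# them by solving a separate optimization over source/target data.
rng = np.random.default_rng(0)
X_src = rng.normal(size=(200, 5))                 # labeled source sample
y_src = X_src @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
w = rng.uniform(0.5, 1.5, size=200)               # fixed per-example weights (assumed)

model = Ridge(alpha=1.0)
model.fit(X_src, y_src, sample_weight=w)          # weighted squared-loss minimization
```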

    Revisiting the Rise of Electronic Nicotine Delivery Systems Using Search Query Surveillance

    Public perceptions of electronic nicotine delivery systems (ENDS) remain poorly understood because surveys are too costly to implement regularly, and when they are implemented there are large delays between data collection and dissemination. Search query surveillance has bridged some of these gaps. Herein, ENDS’ popularity in the U.S. is reassessed using Google searches.
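
    As a rough illustration of search query surveillance, one could pull weekly U.S. search interest for ENDS-related terms. The abstract only says Google searches were used, so the pytrends client, keyword list, and timeframe below are assumptions rather than the study's actual setup.

```python
from pytrends.request import TrendReq  # unofficial Google Trends client (assumed)

# Hypothetical sketch: weekly U.S. search interest for ENDS-related queries.
# The keywords and timeframe are illustrative, not those used in the paper.
pytrends = TrendReq(hl="en-US", tz=360)
pytrends.build_payload(
    kw_list=["electronic cigarette", "e-cig", "vaping"],
    timeframe="2009-01-01 2015-12-31",
    geo="US",
)
interest = pytrends.interest_over_time()  # pandas DataFrame indexed by week
print(interest.head())
```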

    EAIMS: Emergency Analysis Identification and Management System

    Social media has great potential as a means to enable civil protection and law enforcement agencies to more effectively tackle disasters and emergencies. However, there is currently a lack of tools that enable civil protection agencies to easily make use of social media. The Emergency Analysis Identification and Management System (EAIMS) is a prototype service that provides real-time detection of emergency events, related information finding, and credibility analysis tools for use over social media during emergencies. The system exploits machine learning over data gathered from past emergencies and disasters to build effective models for identifying new events as they occur, tracking developments within those events, and analyzing those developments to enhance the decision-making processes of emergency response agencies.
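
    A minimal sketch of the kind of model that could be trained on past-emergency data is a supervised tweet classifier. The pipeline, features, and tiny inline dataset below are placeholders, not the actual EAIMS models.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical sketch: train a binary "emergency-related vs. not" tweet
# classifier on data gathered from past emergencies, then score new tweets
# as they arrive. The tiny inline dataset is a placeholder only.
train_tweets = [
    "Flood waters rising fast near the river, roads closed",
    "Huge fire downtown, smoke everywhere, please avoid the area",
    "Great coffee this morning at my favourite cafe",
    "Looking forward to the football match tonight",
]
train_labels = [1, 1, 0, 0]  # 1 = emergency-related, 0 = unrelated

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(train_tweets, train_labels)

new_tweet = "Evacuations under way after the earthquake, bridges damaged"
print(clf.predict_proba([new_tweet])[0][1])  # probability it is emergency-related
```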

    Polarizing Tweets on Climate Change

    We introduce a framework to analyze the conversation between two competing groups of Twitter users: one that believes in the anthropogenic causes of climate change (Believers) and a second that is skeptical (Disbelievers). As a case study, we use climate change related tweets posted during the United Nations' (UN) Climate Change Conference COP24 (2018) in Katowice, Poland. We find that both Disbelievers and Believers talk within their group more than with the other group, and that this is more pronounced for Disbelievers than for Believers. Disbeliever messages focused more on attacking those personalities that believe in the anthropogenic causes of climate change, whereas Believer messages focused on calls to combat climate change. We also find that bot-like accounts were equally active among Disbelievers and Believers, and that, unlike Believers, Disbelievers get their news from a concentrated set of news sources.
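
    A sketch of the within-group versus cross-group measurement could look like the following; the group labels and interaction list are placeholders rather than the COP24 data or the authors' framework.

```python
from collections import Counter

# Hypothetical sketch: given group labels for users and a list of directed
# interactions (e.g., mentions or retweets), measure how often each group
# talks within itself versus to the other group. The data is a placeholder.
group = {"a": "Believer", "b": "Believer", "c": "Disbeliever", "d": "Disbeliever"}
interactions = [("a", "b"), ("a", "c"), ("c", "d"), ("d", "c"), ("d", "a")]

counts = Counter()
for src, dst in interactions:
    kind = "within" if group[src] == group[dst] else "cross"
    counts[(group[src], kind)] += 1

for g in ("Believer", "Disbeliever"):
    total = counts[(g, "within")] + counts[(g, "cross")]
    print(g, "within-group share:", counts[(g, "within")] / total)
```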

    Local Chatter or International Buzz? Language Differences on Posts about Zika Research on Twitter and Facebook

    Background: When the Zika virus outbreak became a global health emergency in early 2016, the scientific community responded with an increased output of Zika-related research. This upsurge in research naturally made its way into academic journals along with editorials, news, and reports. However, it is not yet known how or whether these scholarly communications were distributed to the populations most affected by Zika.
    Methodology/Principal findings: To understand how scientific outputs about Zika reached global and local audiences, we collected Tweets and Facebook posts that linked to Zika-related research in the first six months of 2016. Using a language detection algorithm, we found that up to 90% of Twitter and 76% of Facebook posts are in English. However, when none of the authors of the scholarly article are from English-speaking countries, posts on both social media platforms are less likely to be in English. The effect is most pronounced on Facebook, where the likelihood of posting in English is between 11 and 16% lower when none of the authors are from English-speaking countries, as compared to when some or all are. Similarly, posts about papers written with a Brazilian author are 13% more likely to be in Portuguese on Facebook than on Twitter.
    Conclusions/Significance: Our main conclusion is that scholarly communication of Zika-related research on Twitter and Facebook is dominated by English, despite Brazil being the epicenter of the Zika epidemic. This result suggests that scholarly findings about the Zika virus are unlikely to be distributed directly to relevant populations through these popular online mediums. Nevertheless, there are differences between platforms. Compared to Twitter, scholarly communication on Facebook is more likely to be in the language of an author's country. The Zika outbreak provides a useful case study for understanding how scientific outputs are communicated to relevant populations. Our results suggest that Facebook is a more effective channel than Twitter if communication is desired to be in the native language of the affected country. Further research should explore how local media, such as governmental websites, newspapers and magazines, as well as television and radio, disseminate scholarly publications.
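
    A minimal sketch of the language-tagging step might use an off-the-shelf detector. The paper does not name its algorithm, so the langdetect library and the example posts below are assumptions.

```python
from langdetect import detect, DetectorFactory  # assumed library, not named in the paper

DetectorFactory.seed = 0  # langdetect is non-deterministic without a fixed seed

# Hypothetical sketch: label each social-media post linking to Zika research
# with a detected language, then tally the share of English posts.
posts = [
    "New study maps the spread of Zika virus across the Americas",
    "Novo estudo mapeia a propagação do vírus Zika nas Américas",
    "Un nuevo estudio sobre el virus del Zika y la microcefalia",
]
langs = [detect(text) for text in posts]
english_share = langs.count("en") / len(langs)
print(langs, english_share)
```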

    Results from the Centers for Disease Control and Prevention's Predict the 2013-2014 Influenza Season Challenge

    Background: Early insights into the timing of the start, peak, and intensity of the influenza season could be useful in planning influenza prevention and control activities. To encourage development and innovation in influenza forecasting, the Centers for Disease Control and Prevention (CDC) organized a challenge to predict the 2013-14 United States influenza season.
    Methods: Challenge contestants were asked to forecast the start, peak, and intensity of the 2013-2014 influenza season at the national level and at any or all Health and Human Services (HHS) region level(s). The challenge ran from December 1, 2013 to March 27, 2014; contestants were required to submit 9 biweekly forecasts at the national level to be eligible. The winner was selected based on expert evaluation of the methodology used to make the prediction and the accuracy of the prediction as judged against the U.S. Outpatient Influenza-like Illness Surveillance Network (ILINet).
    Results: Nine teams submitted 13 forecasts covering all required milestones. The first forecast was due on December 2, 2013; of the 13 forecasts received, 3/13 correctly predicted the start of the influenza season within 1 week, 1/13 predicted the peak week within 1 week, 3/13 predicted the peak ILINet percentage within 1%, and 4/13 predicted the season duration within 1 week. For the prediction due on December 19, 2013, the number of forecasts that correctly predicted the peak week increased to 2/13, the peak percentage to 6/13, and the duration of the season to 6/13. As the season progressed, the forecasts became more stable and closer to the season milestones.
    Conclusion: Forecasting has become technically feasible, but further efforts are needed to improve forecast accuracy so that policy makers can reliably use these predictions. CDC and challenge contestants plan to build upon the methods developed during this contest to improve the accuracy of influenza forecasts.
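
    One way to sketch the milestone scoring described above is to check a forecast against observed ILINet values using the challenge tolerances (within 1 week of the peak week, within 1 percentage point of the peak ILI percentage). The forecast and observed numbers below are placeholders, not actual 2013-2014 season data.

```python
# Hypothetical sketch: score one forecast against observed ILINet milestones
# using the tolerances described in the challenge. All numbers are placeholders.
forecast = {"start_week": 48, "peak_week": 52, "peak_pct": 4.4, "duration_weeks": 13}
observed = {"start_week": 47, "peak_week": 52, "peak_pct": 4.6, "duration_weeks": 13}

scores = {
    "start_week_ok": abs(forecast["start_week"] - observed["start_week"]) <= 1,
    "peak_week_ok": abs(forecast["peak_week"] - observed["peak_week"]) <= 1,
    "peak_pct_ok": abs(forecast["peak_pct"] - observed["peak_pct"]) <= 1.0,
    "duration_ok": abs(forecast["duration_weeks"] - observed["duration_weeks"]) <= 1,
}
print(scores)
```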