    APIs and Your Privacy

    Application programming interfaces, or APIs, have been the topic of much recent discussion. Newsworthy events, including those involving Facebook’s API and Cambridge Analytica obtaining information about millions of Facebook users, have highlighted the technical capabilities of APIs for prominent websites and mobile applications. At the same time, media coverage of ways that APIs have been misused has sparked concern about potential privacy invasions and other issues of public policy. This paper seeks to educate consumers on how APIs work and how they are used within popular websites and mobile apps to gather, share, and utilize data. APIs are used in mobile games, search engines, social media platforms, news and shopping websites, video and music streaming services, dating apps, and mobile payment systems. If a third-party company, like an app developer or advertiser, would like to gain access to your information through a website you visit or a mobile app or online service you use, what data might it obtain about you through APIs, and how? This report analyzes 11 prominent online services to observe general trends and provide you with an overview of the role APIs play in collecting and distributing information about consumers. For example, how might your data be gathered and shared when using your Facebook account login to sign up for Venmo or to access the Tinder dating app? How might advertisers use Pandora’s API when you are streaming music? After explaining what APIs are and how they work, this report categorizes and characterizes different kinds of APIs that companies offer to web and app developers. Services may offer content-focused APIs, feature APIs, unofficial APIs, and analytics APIs that developers of other apps and websites may access and use in different ways. Likewise, advertisers can use APIs to target a desired subset of a service’s users and possibly extract user data. This report explains how websites and apps can create user profiles based on your online behavior and generate revenue from advertiser access to their APIs. The report concludes with observations on how various companies and platforms connecting through APIs may be able to learn information about you and aggregate it with your personal data from other sources when you are browsing the internet or using different apps on your smartphone or tablet. While the paper does not make policy recommendations, it demonstrates the importance of approaching consumer privacy from a broad perspective that includes first parties and third parties, and that considers the integral role of APIs in today's online ecosystem.
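
    As a hedged illustration of the kind of data flow described above, the sketch below shows how a hypothetical third-party app might request profile fields from a platform's content-focused API after a user signs in with that platform's account. The endpoint, field names, and token handling are assumptions for demonstration only, not any specific company's actual API.

        # Minimal sketch (hypothetical endpoint and field names) of a third-party
        # app requesting profile data after an OAuth-style "log in with platform X" flow.
        import requests

        ACCESS_TOKEN = "token-granted-during-login"  # issued when the user authorizes the app

        def fetch_profile(fields):
            """Request only the profile fields the user agreed to share."""
            resp = requests.get(
                "https://api.example-platform.com/v1/me",  # hypothetical content-focused API
                params={"fields": ",".join(fields)},
                headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                timeout=10,
            )
            resp.raise_for_status()
            return resp.json()

        if __name__ == "__main__":
            # A payment or dating app might request these fields to pre-fill a new account.
            print(fetch_profile(["id", "name", "email", "friends"]))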

    Taxonomy of P2P Applications

    Peer-to-peer (p2p) networks have gained immense popularity in recent years, and the number of services they provide continuously rises. Whereas p2p networks were formerly known as file-sharing networks, p2p is now also used for services such as VoIP and IPTV. With so many different p2p applications and services, the need for a taxonomy framework arises. This paper describes the available p2p applications grouped by the services they provide. A taxonomy framework is proposed to classify old and recent p2p applications based on their characteristics.
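
    To make the idea of a service-based taxonomy concrete, here is a small illustrative sketch of how such a classification could be encoded. The category names, characteristic fields, and example applications are assumptions for demonstration and are not the exact framework proposed in the paper.

        # Illustrative sketch only: one possible encoding of a service-based p2p taxonomy.
        from dataclasses import dataclass

        @dataclass(frozen=True)
        class P2PApplication:
            name: str
            service: str        # e.g. "file sharing", "VoIP", "IPTV"
            overlay: str        # "structured" (DHT-based) or "unstructured"
            architecture: str   # "pure", "hybrid", or "centralized index"

        CATALOG = [
            P2PApplication("BitTorrent", "file sharing", "unstructured", "centralized index"),
            P2PApplication("Skype (classic)", "VoIP", "unstructured", "hybrid"),
            P2PApplication("PPLive", "IPTV", "unstructured", "hybrid"),
        ]

        def by_service(catalog):
            """Group applications by the service they provide."""
            groups = {}
            for app in catalog:
                groups.setdefault(app.service, []).append(app.name)
            return groups

        if __name__ == "__main__":
            print(by_service(CATALOG))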

    Home Computer Security Can Be Improved Using Online Video Streaming Services

    Home computer users face many new computer security threats. The rise of the internet has enabled viruses to spread rapidly. Educating computer users on security issues is one way to combat these threats. Video streaming websites can provide a simple way to distribute educational videos to computer users. This project investigates the effectiveness of creating and distributing computer security educational videos on video streaming sites.

    Analyzing the Digital Traces of Political Manipulation: The 2016 Russian Interference Twitter Campaign

    Until recently, social media was seen to promote democratic discourse on social and political issues. However, this powerful communication platform has come under scrutiny for allowing hostile actors to exploit online discussions in an attempt to manipulate public opinion. A case in point is the ongoing U.S. Congress' investigation of Russian interference in the 2016 U.S. election campaign, with Russia accused of using trolls (malicious accounts created to manipulate) and bots to spread misinformation and politically biased information. In this study, we explore the effects of this manipulation campaign, taking a closer look at users who re-shared the posts produced on Twitter by the Russian troll accounts publicly disclosed by the U.S. Congress investigation. We collected a dataset with over 43 million election-related posts shared on Twitter between September 16 and October 21, 2016, by about 5.7 million distinct users. This dataset included accounts associated with the identified Russian trolls. We use label propagation to infer the ideology of all users based on the news sources they shared. This method enables us to classify a large number of users as liberal or conservative with precision and recall above 90%. Conservatives retweeted Russian trolls about 31 times more often than liberals and produced 36 times more tweets. Additionally, most retweets of troll content originated from two Southern states: Tennessee and Texas. Using state-of-the-art bot detection techniques, we estimated that about 4.9% and 6.2% of liberal and conservative users, respectively, were bots. Text analysis on the content shared by trolls reveals that they had a mostly conservative, pro-Trump agenda. Although an ideologically broad swath of Twitter users was exposed to Russian trolls in the period leading up to the 2016 U.S. Presidential election, it was mainly conservatives who helped amplify their message.
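
    The label-propagation step mentioned above can be illustrated with a toy sketch: seed a few news domains with known ideology scores, then alternately average scores between users and the domains they shared. The data, seed labels, and stopping rule below are assumptions for illustration; the paper's exact graph construction may differ.

        # Toy sketch of label propagation over a bipartite user/news-domain graph.
        from collections import defaultdict

        # user -> news domains they shared (toy data)
        SHARES = {
            "u1": ["liberal-news.example", "center-news.example"],
            "u2": ["conservative-news.example"],
            "u3": ["conservative-news.example", "center-news.example"],
        }
        # seed labels: -1.0 ~ liberal, +1.0 ~ conservative
        SEEDS = {"liberal-news.example": -1.0, "conservative-news.example": 1.0}

        def propagate(shares, seeds, iterations=20):
            domain_users = defaultdict(list)
            for user, domains in shares.items():
                for d in domains:
                    domain_users[d].append(user)
            domain_score = dict(seeds)
            user_score = {u: 0.0 for u in shares}
            for _ in range(iterations):
                # users take the mean score of the domains they shared
                for u, domains in shares.items():
                    known = [domain_score[d] for d in domains if d in domain_score]
                    if known:
                        user_score[u] = sum(known) / len(known)
                # unseeded domains take the mean score of the users who shared them
                for d, users in domain_users.items():
                    if d not in seeds:
                        domain_score[d] = sum(user_score[u] for u in users) / len(users)
            return user_score

        if __name__ == "__main__":
            print(propagate(SHARES, SEEDS))  # negative ~ liberal, positive ~ conservative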

    EveTAR: Building a Large-Scale Multi-Task Test Collection over Arabic Tweets

    This article introduces a new language-independent approach for creating a large-scale high-quality test collection of tweets that supports multiple information retrieval (IR) tasks without running a shared-task campaign. The adopted approach (demonstrated over Arabic tweets) designs the collection around significant (i.e., popular) events, which enables the development of topics that represent frequent information needs of Twitter users for which rich content exists. That inherently facilitates the support of multiple tasks that generally revolve around events, namely event detection, ad-hoc search, timeline generation, and real-time summarization. The key highlights of the approach include diversifying the judgment pool via interactive search and multiple manually-crafted queries per topic, collecting high-quality annotations via crowd-workers for relevancy and in-house annotators for novelty, filtering out low-agreement topics and inaccessible tweets, and providing multiple subsets of the collection for better availability. Applying our methodology to Arabic tweets resulted in EveTAR, the first freely-available tweet test collection for multiple IR tasks. EveTAR includes a crawl of 355M Arabic tweets and covers 50 significant events for which about 62K tweets were judged with substantial average inter-annotator agreement (Kappa value of 0.71). We demonstrate the usability of EveTAR by evaluating existing algorithms in the respective tasks. Results indicate that the new collection can support reliable ranking of IR systems that is comparable to similar TREC collections, while providing strong baseline results for future studies over Arabic tweets.
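
    The inter-annotator agreement figure quoted above (Kappa value of 0.71) is a standard chance-corrected agreement measure; a minimal sketch of Cohen's kappa for two annotators judging tweet relevance is shown below. The judgments are toy data and the computation is the generic formula, not the paper's evaluation code.

        # Minimal sketch of Cohen's kappa for two annotators (1 = relevant, 0 = not relevant).
        def cohens_kappa(labels_a, labels_b):
            assert len(labels_a) == len(labels_b) and labels_a
            n = len(labels_a)
            categories = set(labels_a) | set(labels_b)
            # observed agreement
            p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
            # expected agreement by chance, from each annotator's marginal distribution
            p_e = sum((labels_a.count(c) / n) * (labels_b.count(c) / n) for c in categories)
            return (p_o - p_e) / (1 - p_e)

        if __name__ == "__main__":
            a = [1, 1, 0, 1, 0, 0, 1, 0]
            b = [1, 1, 0, 0, 0, 0, 1, 1]
            print(round(cohens_kappa(a, b), 3))  # 0.5 for this toy example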