222 research outputs found

    Lethe: Conceal Content Deletion from Persistent Observers

    No full text

    Search Bias Quantification: Investigating Political Bias in Social Media and Web Search

    No full text
    Users frequently use search systems on the Web as well as online social media to learn about ongoing events and public opinion on personalities. Prior studies have shown that the top-ranked results returned by these search engines can shape user opinion about the topic (e.g., an event or a person) being searched. In the case of polarizing topics like politics, where multiple competing perspectives exist, political bias in the top search results can play a significant role in shaping public opinion towards (or away from) certain perspectives. Given the considerable impact that search bias can have on the user, we propose a generalizable search bias quantification framework that not only measures the political bias in the ranked list output by the search system but also decouples the bias introduced by the two sources: the input data and the ranking system. We apply our framework to study the political bias in searches related to the 2016 US Presidential primaries in Twitter social media search and find that both the input data and the ranking system matter in determining the final search output bias seen by the users. Finally, we use the framework to compare the relative bias of two popular search systems, Twitter social media search and Google web search, for queries related to politicians and political events. We end by discussing some potential solutions for signaling the bias in search results to make users more aware of it.
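    As a rough illustration of the decoupling idea, the sketch below assumes each item carries a political-lean score in [-1, 1] and splits the observed bias into a share already present in the input pool and a share added by the ranker; the scoring and aggregation here are illustrative assumptions, not the paper's actual metric.

```python
# Hypothetical sketch of decoupling input-data bias from ranking-system bias.
# Assumes every item carries a political-lean score in [-1, 1]; the paper's
# actual bias metric and aggregation differ, this only shows the split.

import math

def pool_bias(item_leans):
    """Bias of the unranked input pool: plain average political lean."""
    return sum(item_leans) / len(item_leans) if item_leans else 0.0

def ranked_bias(ranked_leans):
    """Bias of the ranked output: leans discounted by position, DCG-style."""
    weights = [1.0 / math.log2(pos + 2) for pos in range(len(ranked_leans))]
    return sum(w * x for w, x in zip(weights, ranked_leans)) / sum(weights)

def decompose(pool, ranked_top_k):
    input_bias = pool_bias(pool)             # bias already present in the data
    output_bias = ranked_bias(ranked_top_k)  # bias the searcher actually sees
    ranking_bias = output_bias - input_bias  # share attributable to the ranker
    return input_bias, ranking_bias, output_bias

# A roughly balanced pool whose ranking pushes one side into the top slots.
pool = [-0.8, -0.5, -0.1, 0.2, 0.6, 0.7]
top_k = [0.7, 0.6, 0.2, -0.1, -0.5]
print(decompose(pool, top_k))
```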

    Equality of Voice: Towards Fair Representation in Crowdsourced Top-K Recommendations

    Get PDF
    To help their users discover important items at a particular time, major websites like Twitter, Yelp, TripAdvisor or NYTimes provide Top-K recommendations (e.g., 10 Trending Topics, Top 5 Hotels in Paris or 10 Most Viewed News Stories), which rely on crowdsourced popularity signals to select the items. However, different sections of a crowd may have different preferences, and there is a large silent majority who do not explicitly express their opinion. Also, the crowd often consists of actors like bots, spammers, or people running orchestrated campaigns. Recommendation algorithms today largely do not consider such nuances and hence are vulnerable to strategic manipulation by small but hyper-active user groups. To fairly aggregate the preferences of all users while recommending top-K items, we borrow ideas from prior research on social choice theory, and identify a voting mechanism called Single Transferable Vote (STV) as having many of the fairness properties we desire in top-K item (s)elections. We develop an innovative mechanism to attribute preferences to the silent majority, which also makes STV completely operational. We show the generalizability of our approach by implementing it on two different real-world datasets. Through extensive experimentation and comparison with state-of-the-art techniques, we show that our proposed approach provides maximum user satisfaction, and cuts down drastically on items disliked by most but hyper-actively promoted by a few users. Comment: In the proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* '19). Please cite the conference version.
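    For readers unfamiliar with STV, the sketch below is a minimal, self-contained illustration of how a Single Transferable Vote election can select K items from ranked ballots. It simplifies surplus transfer (moving whole ballots rather than fractional weights) and omits the paper's mechanism for attributing preferences to silent users, so it is an assumption-laden sketch rather than the authors' implementation.

```python
# Simplified Single Transferable Vote (STV) for selecting K items from ranked
# ballots. Surplus transfer moves whole ballots (a common simplification).

from collections import defaultdict

def stv_top_k(ballots, k):
    candidates = {c for b in ballots for c in b}
    quota = len(ballots) // (k + 1) + 1            # Droop quota
    elected, active = [], list(ballots)

    while len(elected) < k and candidates:
        # Count first preferences among still-standing candidates.
        counts = defaultdict(list)
        for b in active:
            prefs = [c for c in b if c in candidates]
            if prefs:
                counts[prefs[0]].append(b)

        winners = [c for c, bs in counts.items() if len(bs) >= quota]
        if winners:
            for c in sorted(winners, key=lambda w: -len(counts[w])):
                if len(elected) < k:
                    elected.append(c)
                    candidates.discard(c)
                    # Keep only the surplus ballots beyond the quota for transfer.
                    active = [b for b in active if b not in counts[c]] + counts[c][quota:]
        elif counts:
            # Nobody reaches the quota: eliminate the weakest candidate.
            loser = min(counts, key=lambda w: len(counts[w]))
            candidates.discard(loser)
        else:
            break
    return elected

# Example: three user groups with different preference orders over items a, b, c.
ballots = [["a", "b", "c"]] * 4 + [["b", "c", "a"]] * 3 + [["c", "b", "a"]] * 2
print(stv_top_k(ballots, k=2))   # -> ['a', 'b']
```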

    EECLA: A Novel Clustering Model for Improvement of Localization and Energy Efficient Routing Protocols in Vehicle Tracking Using Wireless Sensor Networks

    Get PDF
    The growing use of wireless sensor networks (WSNs) for a wide range of purposes has made them an essential technology today. Many applications are now built on WSN concepts; among them, vehicle tracking has become prominent for security purposes. In our previous work we proposed an algorithm called EECAL (Energy Efficient Clustering Algorithm and Localization) that improved accuracy and performed well, but it did not focus on continuous tracking of a vehicle. In this paper we refine the same algorithm to meet that requirement. Detecting and tracking a vehicle over large areas is a challenge; we focus mainly on proximity graphs and spatial interpolation techniques to obtain exact boundaries. Another aspect of our work is reducing energy consumption, which increases the lifetime of the network. The performance of the system in the active state is a further issue, which we address by configuring peer nodes for communication. We compare our results with existing work and find that our approach performs considerably better. For localization, we use a genetic algorithm that accounts for residual energy and the fitness of the network in various respects. Finally, a simulation study shows that the proposed algorithms perform well, and the experimental analysis confirms a lower localization error.
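    Purely as an illustration of the kind of objective a genetic algorithm could optimize in this setting, the sketch below scores candidate cluster-head sets on residual energy and proximity to the tracked vehicle's estimated position; all names, weights, and the fitness form are hypothetical and not taken from the paper.

```python
# Hypothetical fitness function for choosing cluster heads during vehicle
# tracking: prefer energetic heads that sit close to the target position.
# Weights and structure are illustrative assumptions only.

import math
import random

def fitness(cluster_heads, nodes, vehicle_pos, w_energy=0.5, w_dist=0.5):
    """Higher is better: average residual energy minus average distance to target."""
    energy = sum(nodes[h]["energy"] for h in cluster_heads) / len(cluster_heads)
    dist = sum(math.dist(nodes[h]["pos"], vehicle_pos) for h in cluster_heads) / len(cluster_heads)
    return w_energy * energy - w_dist * dist

# Toy example: 20 random nodes, pick the best of 50 random 3-head candidate sets
# (a stand-in for the selection a genetic algorithm would evolve).
nodes = [{"pos": (random.uniform(0, 100), random.uniform(0, 100)),
          "energy": random.uniform(0, 1)} for _ in range(20)]
vehicle = (50.0, 50.0)
candidates = [random.sample(range(20), 3) for _ in range(50)]
best = max(candidates, key=lambda c: fitness(c, nodes, vehicle))
print(best, fitness(best, nodes, vehicle))
```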

    Detecting and Mitigating Test-time Failure Risks via Model-agnostic Uncertainty Learning

    Get PDF

    Glimmers: Resolving the Privacy/Trust Quagmire

    Full text link
    Many successful services rely on trustworthy contributions from users. To establish that trust, such services often require access to privacy-sensitive information from users, thus creating a conflict between privacy and trust. Although it is likely impractical to expect both absolute privacy and trustworthiness at the same time, we argue that the current state of things, where individual privacy is usually sacrificed at the altar of trustworthy services, can be improved with a pragmatic Glimmer of Trust, which allows services to validate user contributions in a trustworthy way without forfeiting user privacy. We describe how trustworthy hardware such as Intel's SGX can be used client-side -- in contrast to much recent work exploring SGX in cloud services -- to realize the Glimmer architecture, and demonstrate how this realization is able to resolve the tension between privacy and trust in a variety of cases.
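    The sketch below is a schematic, non-authoritative illustration of the privacy/trust split the abstract describes: private data stays on the client, a trusted component releases only a signed verdict, and the service verifies that verdict without seeing the data. The real design relies on an SGX enclave with remote attestation; here an HMAC key shared with the service stands in for attestation, purely for illustration.

```python
# Schematic stand-in for the Glimmer-style flow: the trusted component checks a
# contribution against private client-side data and signs only the verdict.

import hmac, hashlib, json

ATTESTATION_KEY = b"stand-in-for-enclave-attestation"   # illustrative only

def trusted_validate(private_events, contribution):
    """Runs 'inside' the trusted component on the client: checks the contribution
    against the user's private history, then signs only the verdict."""
    genuine = contribution in private_events            # hypothetical validity rule
    verdict = json.dumps({"contribution": contribution, "genuine": genuine})
    tag = hmac.new(ATTESTATION_KEY, verdict.encode(), hashlib.sha256).hexdigest()
    return verdict, tag                                  # private_events never leave the client

def service_accept(verdict, tag):
    """Runs on the service: trusts the verdict iff the signature checks out."""
    expected = hmac.new(ATTESTATION_KEY, verdict.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected) and json.loads(verdict)["genuine"]

# Example: the service learns whether the contribution is genuine, not the user's history.
history = ["visited:cafe-luna", "visited:main-library"]
print(service_accept(*trusted_validate(history, "visited:cafe-luna")))   # True
print(service_accept(*trusted_validate(history, "visited:someplace")))   # False
```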