Surge pricing on a service platform under spatial spillovers: evidence from Uber
Ride-sharing platforms employ surge pricing to match anticipated capacity spillover with demand. We develop an optimization model to characterize the relationship between surge price and spillover, and we test the predicted relationships using a spatial panel model on a dataset from Uber's operations. Results reveal that Uber's pricing accounts for both capacity and price spillover. There is an ongoing debate in the management community about the efficacy of labor-welfare mechanisms associated with shared capacity. We conduct a counterfactual analysis to provide guidance on this debate, showing how congestion on the platform can be managed while accounting for consumer and labor welfare.
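As a rough illustration of what a spatial panel specification of this kind can look like, the sketch below regresses a zone's surge price on its own demand and on spatially lagged neighbor prices and demand. The data files, variable names, and weight matrix are hypothetical, and pooled OLS with zone fixed effects stands in for the paper's actual spatial panel estimator.

```python
# Illustrative sketch only: a pooled spatial-lag regression of surge price on
# local demand and on spatially lagged neighbor prices/demand. File names,
# column names, and the weight matrix are hypothetical, not from the paper.
import pandas as pd
import statsmodels.formula.api as smf

# Panel: one row per (zone, hour) with columns surge_price and demand (hypothetical).
df = pd.read_csv("uber_zone_hour_panel.csv")
zones = sorted(df["zone"].unique())

# Row-normalized contiguity weights W: W[i, j] > 0 iff zones i and j are neighbors.
W = pd.read_csv("zone_adjacency.csv", index_col=0).loc[zones, zones].to_numpy(float)
W = W / W.sum(axis=1, keepdims=True)

def spatial_lag(wide: pd.DataFrame) -> pd.DataFrame:
    """For each hour, the weighted average of the variable in neighboring zones."""
    return pd.DataFrame(wide.to_numpy() @ W.T, index=wide.index, columns=wide.columns)

price = df.pivot(index="hour", columns="zone", values="surge_price")
demand = df.pivot(index="hour", columns="zone", values="demand")
df = df.merge(spatial_lag(price).stack().rename("neighbor_price").reset_index(), on=["hour", "zone"])
df = df.merge(spatial_lag(demand).stack().rename("neighbor_demand").reset_index(), on=["hour", "zone"])

# Pooled OLS with zone fixed effects and zone-clustered errors. The spatial lag of
# the dependent variable is endogenous, so a full spatial panel estimator (ML/GMM)
# would be needed for consistent estimates; this is only a sketch of the setup.
model = smf.ols(
    "surge_price ~ demand + neighbor_price + neighbor_demand + C(zone)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["zone"]})
print(model.summary())
```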
When Fair Classification Meets Noisy Protected Attributes
The operationalization of algorithmic fairness comes with several practical
challenges, not the least of which is the availability or reliability of
protected attributes in datasets. In real-world contexts, practical and legal
impediments may prevent the collection and use of demographic data, making it
difficult to ensure algorithmic fairness. While initial fairness algorithms did not consider these limitations, recent proposals aim to achieve fair classification either by tolerating noise in protected attributes or by not using protected attributes at all.
To the best of our knowledge, this is the first head-to-head study of fair
classification algorithms to compare attribute-reliant, noise-tolerant and
attribute-blind algorithms along the dual axes of predictivity and fairness. We
evaluated these algorithms via case studies on four real-world datasets and
synthetic perturbations. Our study reveals that attribute-blind and
noise-tolerant fair classifiers can achieve levels of performance similar to those of attribute-reliant algorithms, even when protected attributes are noisy; implementing them in practice, however, requires careful, nuanced choices. Our
study provides insights into the practical implications of using fair
classification algorithms in scenarios where protected attributes are noisy or
partially available.
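To make the setup concrete, here is a minimal, self-contained sketch of the style of experiment described above: inject synthetic flip noise into a protected attribute, train attribute-blind and attribute-reliant classifiers, and compare accuracy alongside a demographic-parity gap measured against the true attribute. The synthetic data, noise model, and plain logistic regression baselines are illustrative assumptions, not the algorithms evaluated in the paper.

```python
# Sketch of noise-injection evaluation for fair classification (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, d = 5000, 8
X = rng.normal(size=(n, d))
a_true = (X[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)            # protected attribute
y = (X[:, 1] + 0.5 * a_true + rng.normal(scale=0.5, size=n) > 0).astype(int)  # label correlated with it

def flip(attr, rate):
    """Return a copy of the attribute with each entry flipped with probability `rate`."""
    return np.where(rng.random(attr.shape) < rate, 1 - attr, attr)

def dp_gap(y_pred, attr):
    """Demographic-parity gap: |P(yhat=1 | a=1) - P(yhat=1 | a=0)|."""
    return abs(y_pred[attr == 1].mean() - y_pred[attr == 0].mean())

X_tr, X_te, y_tr, y_te, a_tr, a_te = train_test_split(X, y, a_true, test_size=0.3, random_state=0)

for rate in (0.0, 0.1, 0.3):
    # "Attribute-blind": never sees the protected attribute.
    blind = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    # "Attribute-reliant": sees a (possibly noisy) version of the protected attribute.
    reliant = LogisticRegression(max_iter=1000).fit(np.column_stack([X_tr, flip(a_tr, rate)]), y_tr)
    for name, y_hat in [("blind", blind.predict(X_te)),
                        ("reliant", reliant.predict(np.column_stack([X_te, flip(a_te, rate)])))]:
        # Fairness is always measured against the *true* attribute, as in the study's framing.
        print(f"noise={rate:.1f}  {name:8s}  acc={(y_hat == y_te).mean():.3f}  "
              f"dp_gap={dp_gap(y_hat, a_te):.3f}")
```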
Social Turing Tests: Crowdsourcing Sybil Detection
As popular tools for spreading spam and malware, Sybils (or fake accounts)
pose a serious threat to online communities such as Online Social Networks
(OSNs). Today, sophisticated attackers are creating realistic Sybils that
effectively befriend legitimate users, rendering most automated Sybil detection
techniques ineffective. In this paper, we explore the feasibility of a
crowdsourced Sybil detection system for OSNs. We conduct a large user study on
the ability of humans to detect today's Sybil accounts, using a large corpus of
ground-truth Sybil accounts from the Facebook and Renren networks. We analyze
detection accuracy by both "experts" and "turkers" under a variety of
conditions, and find that while turkers vary significantly in their
effectiveness, experts consistently produce near-optimal results. We use these
results to drive the design of a multi-tier crowdsourcing Sybil detection
system. Using our user study data, we show that this system is scalable, and
can be highly effective either as a standalone system or as a complementary
technique to current tools.
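A minimal sketch of how the multi-tier design described above could be wired together, assuming a pool of turker votes per suspicious profile and an expert tier for escalation; the threshold, data shapes, and stubbed expert are hypothetical, not the paper's system.

```python
# Minimal sketch of a multi-tier crowdsourced Sybil-detection pipeline: accept a
# strong turker majority, escalate ambiguous profiles to experts. Illustrative only.
from dataclasses import dataclass

@dataclass
class Verdict:
    profile_id: str
    is_sybil: bool
    tier: str          # "turker" or "expert"
    confidence: float  # fraction of votes agreeing with the verdict

def classify(profile_id: str,
             turker_votes: list[bool],
             ask_expert,                 # callable: profile_id -> bool (expert tier)
             threshold: float = 0.8) -> Verdict:
    """Accept the turker majority when it is strong enough; otherwise escalate."""
    sybil_frac = sum(turker_votes) / len(turker_votes)
    confidence = max(sybil_frac, 1 - sybil_frac)
    if confidence >= threshold:
        return Verdict(profile_id, sybil_frac >= 0.5, "turker", confidence)
    return Verdict(profile_id, ask_expert(profile_id), "expert", 1.0)

# Toy usage with a stubbed expert who always flags the profile.
votes = {"u1": [True] * 9 + [False], "u2": [True, False, True, False, True]}
print([classify(pid, v, ask_expert=lambda pid: True) for pid, v in votes.items()])
```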
The COVID-19 Pandemic and the Technology Trust Gap
Industry and government tried to use information technologies to respond to the COVID-19 pandemic, but using the internet as a tool for disease surveillance, public health messaging, and testing logistics turned out to be a disappointment. Why weren’t these efforts more effective? This Essay argues that industry and government efforts to leverage technology were doomed to fail because tech platforms have failed over the past few decades to make their tools trustworthy, and lawmakers have done little to hold these companies accountable. People cannot trust the interfaces they interact with, the devices they use, and the systems that power tech companies’ services. This Essay explores the pre-existing privacy ills that contributed to these failures, including manipulative user interfaces, consent regimes that burden people with all the risks of using technology, and devices that collect far more data than they should. A pandemic response is only as good as its adoption, but pre-existing privacy and technology concerns make it difficult for people seeking lifelines to have confidence in the technologies designed to protect them. We argue that a good way to help close the technology trust gap is through relational duties of loyalty and care, better frameworks regulating the design of information technologies, and substantive rules limiting data collection and use instead of procedural “consent and control” rules. We conclude that the pandemic could prove to be an opportunity to leverage motivated lawmakers to improve our privacy frameworks and make information technologies worthy of our trust.
Understanding the Role of Registrars in DNSSEC Deployment
The Domain Name System (DNS) provides a scalable, flexible name resolution service. Unfortunately, its unauthenticated architecture has become the basis for many security attacks. To address this, DNS Security Extensions (DNSSEC) were introduced in 1997. DNSSEC’s deployment requires support from the top-level domain (TLD) registries and registrars, as well as participation by the organization that serves as the DNS operator. Despite being proposed nearly two decades ago, however, DNSSEC has seen poor deployment thus far: only 1% of .com, .net, and .org domains are properly signed. In this paper, we investigate the underlying reasons why DNSSEC adoption has been remarkably slow. We focus on registrars, as most TLD registries already support DNSSEC and registrars often serve as DNS operators for their customers. Our study combines large-scale, longitudinal DNS measurements of DNSSEC adoption with the experience we gathered by deploying DNSSEC on domains purchased from leading domain name registrars and resellers. Overall, we find that a select few registrars are responsible for the (small) DNSSEC deployment that exists today, and that many leading registrars either do not support DNSSEC at all or require customers to take cumbersome steps to deploy it. Further frustrating deployment, many of the mechanisms for conveying DNSSEC information to registrars are error-prone or present security vulnerabilities. Finally, we find that using DNSSEC with third-party DNS operators such as Cloudflare requires the domain owner to take a number of steps that 40% of domain owners do not complete. Having identified several operational challenges for full DNSSEC deployment, we make recommendations to improve adoption.
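As a hedged illustration of the kind of per-domain check such a measurement involves, the sketch below uses the dnspython library to test whether a zone publishes DNSKEY records and whether the parent zone carries a matching DS record. The example domain, the SHA-256-only digest comparison, and the minimal error handling are simplifying assumptions, not the paper's measurement code.

```python
# Rough per-domain DNSSEC check: is there a DNSKEY, and does a DS record at the
# parent match one of the zone's keys? Requires the dnspython package.
import dns.dnssec
import dns.resolver

def dnssec_status(domain: str) -> str:
    try:
        dnskeys = dns.resolver.resolve(domain, "DNSKEY")
    except (dns.resolver.NoAnswer, dns.resolver.NXDOMAIN):
        return "no DNSKEY published (zone not signed)"
    try:
        ds_set = {ds.to_text() for ds in dns.resolver.resolve(domain, "DS")}
    except dns.resolver.NoAnswer:
        return "DNSKEY present but no DS at the parent (chain of trust broken)"
    # Only SHA-256 digests are recomputed here; a parent publishing other digest
    # types would need additional comparisons.
    computed = {dns.dnssec.make_ds(domain, key, "SHA256").to_text() for key in dnskeys}
    if ds_set & computed:
        return "DNSKEY and matching DS present (chain of trust intact)"
    return "DS present but does not match any published DNSKEY"

print(dnssec_status("ietf.org"))   # example domain, chosen only for illustration
```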
A Comparative Study of Dark Patterns Across Mobile and Web Modalities
Dark patterns are user interface elements that can influence a person's behavior against their intentions or best interests. Prior work identified these patterns in websites and mobile apps, but little is known about how the design of platforms might impact dark pattern manifestations and related human vulnerabilities. In this paper, we conduct a comparative study of the mobile application, mobile browser, and web browser versions of 105 popular services to investigate variations in dark patterns across modalities. We perform manual tests, identify dark patterns in each service, and examine how they persist or differ by modality. Our findings show that while services employ some dark patterns equally across modalities, many dark patterns vary between platforms, and these differences saddle people with inconsistent experiences of autonomy, privacy, and control. We conclude by discussing broader implications for policymakers and practitioners, and provide suggestions for furthering dark patterns research.
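A toy sketch of the kind of cross-modality comparison this involves: for each service, compare the sets of dark patterns annotated in its mobile app, mobile browser, and web browser versions and report their overlap. The annotations below are invented placeholders, not the study's data.

```python
# Toy cross-modality comparison of dark pattern annotations (invented examples).
from itertools import combinations

# service -> modality -> set of dark patterns observed by an annotator
observations = {
    "svc_a": {"app": {"nagging", "forced_account"}, "mweb": {"nagging"}, "web": {"nagging"}},
    "svc_b": {"app": {"preselection"}, "mweb": {"preselection", "nagging"}, "web": {"preselection"}},
}

def jaccard(x: set, y: set) -> float:
    """Overlap between two pattern sets; 1.0 means identical patterns in both modalities."""
    return len(x & y) / len(x | y) if (x | y) else 1.0

for service, by_modality in observations.items():
    for m1, m2 in combinations(sorted(by_modality), 2):
        print(f"{service}: {m1} vs {m2} overlap = {jaccard(by_modality[m1], by_modality[m2]):.2f}")
```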
Understanding Dark Patterns in Home IoT Devices
Internet-of-Things (IoT) devices are ubiquitous, but little attention has been paid to how they may incorporate dark patterns, despite the consumer protection and privacy concerns that arise from their unique access to intimate spaces and always-on capabilities. This paper conducts a systematic investigation of dark patterns in 57 popular, diverse smart home devices. We update manual interaction and annotation methods for the IoT context, then analyze dark pattern frequency across device types, manufacturers, and interaction modalities. We find that dark patterns are pervasive in IoT experiences but manifest in diverse ways across device traits. Speakers, doorbells, and camera devices contain the most dark patterns, and the manufacturers of such devices (Amazon and Google) exhibit more dark patterns than other vendors. We investigate how this distribution affects potential consumer exposure to dark patterns, discuss broader implications for key stakeholders such as designers and regulators, and identify opportunities for future dark patterns research.
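For illustration only, a small sketch of the frequency analysis described above: tally annotated dark patterns per device type and manufacturer, then normalize by how many devices of each type were tested. The annotation rows and device counts are invented placeholders rather than the study's dataset.

```python
# Toy frequency analysis of dark pattern annotations across device traits (invented data).
from collections import Counter

# (device, manufacturer, device_type, dark_pattern) annotation tuples
annotations = [
    ("smart_speaker_1", "Amazon", "speaker", "nagging"),
    ("smart_speaker_1", "Amazon", "speaker", "forced_account"),
    ("video_doorbell_1", "Google", "doorbell", "preselection"),
    ("smart_bulb_1", "OtherCo", "lighting", "nagging"),
]
devices_tested = {"speaker": 2, "doorbell": 1, "lighting": 3}   # devices examined per type

patterns_by_type = Counter(dtype for _, _, dtype, _ in annotations)
patterns_by_vendor = Counter(vendor for _, vendor, _, _ in annotations)

# Normalize by devices tested per type so heavily sampled categories do not dominate.
avg_per_device = {t: patterns_by_type[t] / n for t, n in devices_tested.items()}

print("dark patterns per manufacturer:", dict(patterns_by_vendor))
print("average dark patterns per device, by type:", avg_per_device)
```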