Limits To Certainty in QoS Pricing and Bandwidth
Advanced services require more reliable bandwidth than is currently provided by
the Internet Protocol, even with the reliability enhancements provided by TCP.
More reliable bandwidth will be provided through QoS (quality of service), as is
now widely discussed. Yet QoS has implications beyond providing ubiquitous
access to advanced Internet services, and these are of interest from a policy
perspective. In particular, what are the implications for the price of Internet
services? Further, how will these changes affect demand and universal service
for the Internet? This paper explores the relationship between certainty of
bandwidth and certainty of price for Internet services over a statistically
shared network and finds that these are mutually exclusive goals.
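To make the trade-off concrete, the sketch below simulates a statistically shared link in Python. It is an illustration only, not the paper's model: the user count N, activity probability p, capacity C, the guaranteed rate, and the congestion-pricing rule are all assumed values. Under flat pricing the delivered bandwidth varies with load, while guaranteeing bandwidth forces the price to vary with load.

```python
import random

# Illustrative only: a statistically shared link with N potential users,
# each independently active with probability p, sharing capacity C (Mb/s).
# Parameter values and the pricing rule are assumptions, not from the paper.
random.seed(1)
N, C, p = 50, 100.0, 0.3
GUARANTEED_RATE = 5.0          # Mb/s promised to each active user
BASE_PRICE = 1.0               # flat price per active user per interval

flat_price_bandwidths = []     # delivered bandwidth when price is certain
guaranteed_rate_prices = []    # charged price when bandwidth is certain

for _ in range(10_000):
    active = sum(random.random() < p for _ in range(N))
    if active == 0:
        continue
    # Certain price, uncertain bandwidth: capacity is split evenly.
    flat_price_bandwidths.append(C / active)
    # Certain bandwidth, uncertain price: the price scales with demand so
    # that revenue tracks the cost of reserving GUARANTEED_RATE per user.
    congestion_price = BASE_PRICE * (active * GUARANTEED_RATE) / C
    guaranteed_rate_prices.append(congestion_price)

def spread(xs):
    """Ratio of best to worst case, as a rough measure of uncertainty."""
    return max(xs) / min(xs)

print("bandwidth spread under flat pricing:     %.2fx" % spread(flat_price_bandwidths))
print("price spread under guaranteed bandwidth: %.2fx" % spread(guaranteed_rate_prices))
```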
Using Bursty Announcements for Detecting BGP Routing Anomalies
Despite the robust structure of the Internet, it is still susceptible to
disruptive routing updates that prevent network traffic from reaching its
destination. Our research shows that BGP announcements that are associated with
disruptive updates tend to occur in groups of relatively high frequency,
followed by periods of infrequent activity. We hypothesize that we may use
these bursty characteristics to detect anomalous routing incidents. In this
work, we use manually verified ground truth metadata and volume of
announcements as a baseline measure, and propose a burstiness measure that
detects prior anomalous incidents with high recall and better precision than
the volume baseline. We quantify the burstiness of inter-arrival times around
the date and times of four large-scale incidents: the Indosat hijacking event
in April 2014, the Telecom Malaysia leak in June 2015, the Bharti Airtel Ltd.
hijack in November 2015, and the MainOne leak in November 2018; and three
smaller-scale incidents that led to traffic interception: the Belarusian
traffic misdirection in February 2013, the Icelandic traffic misdirection in
July 2013, and the Russian telecom that hijacked financial services in April 2017.
Our method leverages the burstiness of disruptive update messages to detect
these incidents. We describe limitations, open challenges, and how this method
can be used for routing anomaly detection.
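The abstract does not give the exact burstiness measure, so the sketch below (Python) uses the standard coefficient-of-variation-based burstiness parameter B = (sigma - mu)/(sigma + mu) over announcement inter-arrival times as a stand-in; the window length and threshold are likewise assumptions for illustration.

```python
from statistics import mean, pstdev

def burstiness(inter_arrival_times):
    """Burstiness parameter B = (sigma - mu) / (sigma + mu).

    B is near -1 for regular (periodic) arrivals, near 0 for Poisson
    arrivals, and approaches +1 for highly bursty arrivals. This common
    measure is an assumed stand-in for the paper's own measure.
    """
    mu = mean(inter_arrival_times)
    sigma = pstdev(inter_arrival_times)
    if sigma + mu == 0:
        return 0.0
    return (sigma - mu) / (sigma + mu)

def flag_bursty_windows(timestamps, window=3600, threshold=0.6):
    """Slide a fixed window over announcement timestamps (in seconds) and
    flag windows whose inter-arrival burstiness exceeds the threshold."""
    flagged = []
    timestamps = sorted(timestamps)
    t, end = timestamps[0], timestamps[-1]
    while t <= end:
        in_window = [ts for ts in timestamps if t <= ts < t + window]
        if len(in_window) >= 3:
            gaps = [b - a for a, b in zip(in_window, in_window[1:])]
            if burstiness(gaps) > threshold:
                flagged.append(t)
        t += window
    return flagged
```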
An Atomicity-Generating Layer for Anonymous Currencies
Atomicity is a necessary element for reliable transactions (Financial Services
Technology Consortium, 1995; Camp, Sirbu and Tygar, 1995; Tygar, 1996).
Anonymity is also an issue of great importance, not only to designers of
commerce systems (Chaum, 1982; Chaum, 1989; Chaum, Fiat and Naor, 1988;
Medvinski, 1993), but also to those concerned with the societal effects of
information technologies (Branscomb, 1994; Compaine, 1985; National Research
Council, 1996; Neumann, 1993; Poole, 1983). Yet there has been a trade-off
between these two elements in commerce system design. Reliable systems, which
provide highly atomic transactions, offer limited anonymity (Visa, 1995; Sirbu
and Tygar, 1995; Mastercard, 1995; Low, Maxemchuk and Paul, 1993). Anonymous
systems (Chaum, 1985; Chaum, 1989; Medvinski, 1993) do not offer reliable
transactions, as shown in Yee, 1994; Camp, 1999; and Tygar, 1996. This work
illustrates that any electronic token currency can be made reliable with the
addition of an atomicity-generating layer.
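One way to picture such a layer is sketched below in Python, under assumptions of our own: the TokenCurrency interface and its method names are invented for illustration and are not the paper's protocol. The idea is a wrapper that logs a transfer and either completes the deposit or refunds the withdrawal, so a token is never left half-spent.

```python
# Illustrative sketch of an atomicity-generating layer over an arbitrary
# token currency. The TokenCurrency interface is assumed for illustration.

class TokenCurrency:
    """Any token scheme that can withdraw, deposit, and refund tokens."""
    def withdraw(self, payer, amount):   # returns an opaque token
        raise NotImplementedError
    def deposit(self, payee, token):
        raise NotImplementedError
    def refund(self, payer, token):
        raise NotImplementedError

class AtomicityLayer:
    """Wrap a token transfer so it either completes or is rolled back."""
    def __init__(self, currency):
        self.currency = currency
        self.log = []  # would be a durable log in a real system

    def transfer(self, payer, payee, amount):
        token = self.currency.withdraw(payer, amount)
        self.log.append(("prepared", payer, payee, amount))
        try:
            self.currency.deposit(payee, token)
        except Exception:
            # Deposit failed: undo the withdrawal so no value is lost.
            self.currency.refund(payer, token)
            self.log.append(("aborted", payer, payee, amount))
            raise
        self.log.append(("committed", payer, payee, amount))
```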
Towards Coherent Regulation of Law Enforcement Surveillance in the Network Society
In this paper, we study the evolution of telecommunications technology and its impact on law
enforcement surveillance. Privacy and the need for law enforcement to conduct investigations
have not been at the center of the recent public policy debate. Yet, policy environments have
approved law enforcement surveillance that can be and is intrusive. Law enforcement
surveillance therefore deserves particular attention when discussing the basic human right to
privacy. We illustrate that despite the gradual acceptance of the basic human right to privacy, in
the digital age the United States (US) government continues its historical pattern of using
technology to enhance its power of search. The most recent example is the
installation of the Digital Collection System 1000 (DCS1000), formerly known as
Carnivore, a classified packet sniffer, on American networks by US federal law
enforcement.
Conceptualizing human resilience in the face of the global epidemiology of cyber attacks
Computer security is a complex global phenomenon where different populations interact, and the infection of one person creates risk for another. Given the dynamics and scope of cyber campaigns, studies of local resilience without reference to global populations are inadequate. In this paper we describe a set of minimal requirements for implementing a global epidemiological infrastructure to understand and respond to large-scale computer security outbreaks. We enumerate the relevant dimensions and the applicable measurement tools, and define a systematic approach to evaluating cyber security resilience. From the experience of conceptualizing and designing a cross-national, coordinated phishing resilience evaluation, we describe the cultural, logistic, and regulatory challenges to this proposed public health approach to global computer assault resilience. We conclude that mechanisms for systematic evaluations of global attacks, and of resilience against those attacks, exist. Coordinated global science is needed to address organised global e-crime.
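The epidemiological framing, in which one population's infections create risk for another, can be illustrated with a toy coupled compartmental model. The sketch below is only an illustration of that framing; every rate and the cross-population coupling are invented values, not measurements from the paper.

```python
# Toy two-population SIR-style model of malware spread, illustrating why
# local resilience cannot be evaluated without reference to other
# populations. All rates and the coupling term are assumed values.

def step(state, beta, cross, gamma, dt=1.0):
    """Advance (S, I, R) for two coupled populations by one time step."""
    infections = [state[0][1], state[1][1]]
    new_state = []
    for k, (S, I, R) in enumerate(state):
        other = infections[1 - k]
        force = beta * I + cross * other        # local + imported risk
        n = S + I + R
        dS = -force * S / n * dt
        dI = (force * S / n - gamma * I) * dt
        dR = gamma * I * dt
        new_state.append((S + dS, I + dI, R + dR))
    return new_state

state = [(9990.0, 10.0, 0.0), (10000.0, 0.0, 0.0)]  # population B starts clean
for day in range(60):
    state = step(state, beta=0.3, cross=0.05, gamma=0.1)
print("infected in 'clean' population after 60 days: %.0f" % state[1][1])
```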
Criteria and Analysis for Human-Centered Browser Fingerprinting Countermeasures
Browser fingerprinting is a surveillance technique that uses browser and device attributes to track visitors across the web. Defeating fingerprinting requires blocking attribute information or spoofing attributes, which can result in loss of functionality. To address the challenge of escaping surveillance while retaining functionality, we identify six design criteria for an ideal spoofing system. We present three fingerprint generation algorithms as well as a baseline algorithm that simply samples a dataset of fingerprints. For each algorithm, we identify trade-offs among the criteria: distinguishability from a non-spoofed fingerprint, uniqueness, size of the anonymity set, efficient generation, loss of web functionality, and whether or not the algorithm protects the confidentiality of the underlying dataset. We report on a series of experiments illustrating that the use of our partially-dependent algorithm for spoofing fingerprints avoids detection by machine learning approaches to surveillance.
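The baseline algorithm that simply samples a dataset of fingerprints might be sketched as follows (Python); the attribute names and dataset format are assumptions for illustration, and the three proposed generation algorithms are not reproduced here.

```python
import random
from collections import Counter

# Minimal sketch of the sampling baseline described in the abstract:
# spoof by replaying a fingerprint drawn from an existing dataset.
# Attribute names and dataset format are assumptions for illustration.
FINGERPRINT_ATTRS = ("user_agent", "screen", "timezone", "fonts", "canvas_hash")

def sample_baseline(dataset, rng=random):
    """Return a spoofed fingerprint sampled uniformly from the dataset.

    The trade-off noted in the abstract: sampled fingerprints are hard
    to distinguish from real ones, but the spoofing party must hold
    (and so risks disclosing) the underlying dataset.
    """
    return dict(rng.choice(dataset))

def anonymity_set_size(dataset, fingerprint):
    """Count how many dataset entries share this exact fingerprint."""
    key = tuple(fingerprint[a] for a in FINGERPRINT_ATTRS)
    counts = Counter(tuple(fp[a] for a in FINGERPRINT_ATTRS) for fp in dataset)
    return counts[key]
```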
When Proof of Work Works
Proof of work (POW) is a set of cryptographic mechanisms that increase
the cost of initiating a connection. Currently, recipients bear as much
cost per connection as initiators, or more. The design goal of POW is to
reverse the economics of connection initiation on the Internet. In the
case of spam, the first economic examination of POW argued that POW
would not, in fact, work. This result was based on the difference in
production cost between legitimate and criminal enterprises. We
illustrate that the difference in production costs enabled by zombies
does not remove the efficacy of POW when work requirements are weighted.
We show that POW will work with a reputation system modeled on the
systems currently used by commercial anti-spam companies. We also
discuss how this variation on POW changes the nature of the
corresponding proofs from a token currency to a notational currency.
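A hashcash-style sketch of weighted proof of work is shown below: the number of leading zero bits demanded scales with the sender's reputation, so low-reputation (for example, zombie-hosted) senders pay far more computation per connection. The reputation tiers and bit counts are assumptions for illustration, not parameters from the paper.

```python
import hashlib
from itertools import count

# Hashcash-style proof of work with the work requirement weighted by a
# sender reputation score in [0, 1]. Tier boundaries and bit counts are
# assumptions for illustration.

def required_bits(reputation):
    """More leading zero bits are demanded of less reputable senders."""
    if reputation >= 0.9:
        return 8
    if reputation >= 0.5:
        return 14
    return 20

def leading_zero_bits(digest):
    """Count leading zero bits in a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def mint(message, reputation):
    """Find a nonce whose hash with the message meets the work target."""
    target = required_bits(reputation)
    for nonce in count():
        digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
        if leading_zero_bits(digest) >= target:
            return nonce

def verify(message, nonce, reputation):
    """Check a submitted proof against the sender's work requirement."""
    digest = hashlib.sha256(f"{message}:{nonce}".encode()).digest()
    return leading_zero_bits(digest) >= required_bits(reputation)
```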
MFA is A Necessary Chore!: Exploring User Mental Models of Multi-Factor Authentication Technologies
With technological advancements, traditional single-factor authentication methods, such as passwords, have become more vulnerable to cyber threats. One potential solution, multi-factor authentication (MFA), enhances security with additional steps of verification. Yet MFA has a slow adoption rate among users, and frequent data breaches continue to impact online and real-world services. Little research has investigated users' understanding and usage of MFA while specifically focusing on their mental models and social behaviors in a work setting. We conducted semi-structured interviews with 28 individuals (11 experts, 17 non-experts), focusing on their risk perceptions, MFA usage, and understanding of the required technologies. We found that experts treated MFA as a useful added layer of authentication, while non-experts did not perceive any additional benefit from using MFA. Both non-experts and experts expressed frustration with MFA usage, often referring to it as a 'chore.' Based on these findings, we make several actionable recommendations for improving the adoption, acceptability, and usability of MFA tools.