
    Sub-Operating Systems: A New Approach to Application Security

    In the current highly interconnected computing environments, users regularly use insecure software. Many popular applications, such as Netscape Navigator and Microsoft Word, are targeted by hostile applets or malicious documents, and might therefore compromise the integrity of the system. Current operating systems are unable to protect their users from these kinds of attacks, since the hostile software is running with the user's privileges and permissions. We introduce the notion of SubOS, a process-specific protection mechanism. Under SubOS, any application that might deal with incoming, possibly malicious objects behaves like an operating system. It views those objects the same way an operating system views users - it assigns sub-user IDs - and restricts their access to system resources.
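
    The abstract's core idea, that an application treats incoming objects the way an operating system treats users, can be illustrated with a minimal sketch. The sketch below is purely illustrative: the class and field names are hypothetical, and the real SubOS is a kernel-level mechanism rather than application-level bookkeeping.

```python
# Minimal illustration of the SubOS idea: an application assigns each
# incoming (potentially hostile) object its own sub-user id and mediates
# that object's access to resources, much as an OS mediates access for
# its users. All names here are hypothetical.

from itertools import count

_next_subuid = count(1000)          # sub-user ids handed out per object

class SubOSObject:
    """An incoming object (applet, document) wrapped with its own sub-uid."""
    def __init__(self, name, allowed_paths):
        self.name = name
        self.subuid = next(_next_subuid)
        self.allowed_paths = set(allowed_paths)   # per-object ACL

    def open(self, path, mode="r"):
        # Access is checked against the object's ACL, not the user's.
        if path not in self.allowed_paths:
            raise PermissionError(
                f"sub-uid {self.subuid} ({self.name}) may not open {path}")
        return open(path, mode)

# A downloaded document is confined to a scratch directory even though
# the user running the application could read far more.
doc = SubOSObject("invoice.doc", allowed_paths={"/tmp/scratch/invoice.doc"})
try:
    doc.open("/home/alice/.ssh/id_rsa")   # denied by the sub-uid's ACL
except PermissionError as e:
    print(e)
```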

    Preventing Intimate Image Abuse Via Privacy-Preserving Credentials

    The problem of non-consensual pornography (“NCP”), sometimes known as intimate image abuse or revenge porn, is well known. Despite its distribution being illegal in most states, it remains a serious problem, if only because it is often difficult to prove who uploaded the pictures. Furthermore, the Federal statute commonly known as Section 230 generally protects Internet sites, such as PornHub, from liability for content created by their users; only the users are liable, not the sites. One obvious countermeasure would be to require Internet sites to strongly authenticate their users, but this is not an easy problem to solve. Furthermore, while strong authentication would provide accountability for the immediate upload, such a policy would threaten the ability to speak anonymously, a vital constitutional right. Also, it often would not help identify the original offender—many people download images from one site and upload them to another, which adds another layer of complexity. We instead propose a more complex scheme, based on a privacy-preserving cryptographic credential scheme originally devised by researcher Jan Camenisch and Professor Anna Lysyanskaya. While the details (and the underlying mathematics) are daunting, the essential properties of their scheme are straightforward. Users first obtain a primary credential from a trusted identity provider; this provider verifies the person’s identity, generally via the usual types of government-issued ID documents, and hence knows a user’s real identity. To protect privacy, this primary credential can be used to generate arbitrarily many anonymous but provably valid sub-credentials, perhaps one per website; these sub-credentials cannot be linked either to each other or to the primary credential. For technical reasons, sub-credentials cannot be used directly to digitally sign images. Instead, they are used to obtain industry-standard cryptographic “certificates,” which can be used to verify digital signatures on images. The certificate-issuing authority also receives and retains an encrypted, random pseudonym known by the identity provider, which is used to identify the website user. If NCP is alleged to be present in an image, information extracted from the image’s metadata—plus the encrypted pseudonym—can be sent to a deanonymization agent, the only party who can decrypt it. The final step to reveal the uploader’s identity is to send the decrypted pseudonym to the identity provider, which knows the linkage between the pseudonym and the real person. In other words, three separate parties must cooperate to identify someone. The scheme is thus privacy-preserving, accountable, and abuse-resistant. It is privacy-preserving because sub-credentials are anonymous and not linkable to anything. It provides accountability, because all images are signed before upload and the identity of the original uploader can be determined if necessary. It is abuse-resistant, because it requires the cooperation of those three parties—the certificate issuer, the deanonymization agent, and the identity provider—to identify an image uploader. The paper contains a reasonably detailed description of how the scheme works technically, albeit without the mathematics. Our paper describes the necessary legal framework for this scheme. We start with a First Amendment analysis, to show that this potential violation of the constitutional right to anonymity is acceptable.
We conclude that exacting scrutiny (as opposed to the generally higher standard of strict scrutiny), which balances different rights, is the proper standard to use. Exacting scrutiny is what the Supreme Court has used in, e.g., Citizens United, to justify violations of anonymity. Here, the balance is the right to anonymous publication of images versus the right to intimate privacy, a concept that we show has also been endorsed by the Supreme Court. We go on to discuss the requirements for the different parties—e.g., their trustworthiness and whether they are in a jurisdiction where aggrieved parties would have effective recourse—and the legal and procedural requirements, including standing, for opposing deanonymization. We suggest that all three parties should have the right to challenge deanonymization requests, to ensure that they are valid. We also discuss how to change Section 230 in a way that would be constitutional (it is unclear if use of this scheme can be mandated), to induce Internet sites to adopt it. Finally, we discuss other barriers to adoption of this scheme and how to work around them: not everyone will have a suitable government-issued ID, and some sites, especially news and whistleblower sites, may wish to eschew strongly authenticated images to protect the identities of their sources.
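
    As a rough illustration of the data flow described above (not of the underlying Camenisch-Lysyanskaya mathematics), the sketch below models which party holds which piece of information and why all three must cooperate to deanonymize an uploader. All names are hypothetical, and the "encryption" is a stand-in lookup rather than real cryptography.

```python
# Toy sketch of the three-party deanonymization flow. It models only who
# holds which piece of information; the actual scheme uses anonymous
# credentials and real encryption, neither of which is implemented here.

import secrets

class IdentityProvider:
    """Verifies real identities and remembers pseudonym -> person."""
    def __init__(self):
        self._pseudonym_to_person = {}
    def register(self, real_name):
        pseudonym = secrets.token_hex(8)
        self._pseudonym_to_person[pseudonym] = real_name
        return pseudonym                      # user presents this downstream
    def resolve(self, pseudonym):
        return self._pseudonym_to_person[pseudonym]

class DeanonymizationAgent:
    """The only party that can turn the stored blob back into a pseudonym."""
    def __init__(self):
        self._blob_to_pseudonym = {}          # stand-in for decryption
    def encrypt(self, pseudonym):
        blob = secrets.token_hex(16)
        self._blob_to_pseudonym[blob] = pseudonym
        return blob
    def decrypt(self, blob):
        return self._blob_to_pseudonym[blob]

class CertificateIssuer:
    """Issues signing certificates and retains only the encrypted pseudonym."""
    def __init__(self):
        self._image_records = {}              # image id -> encrypted blob
    def issue_certificate(self, image_id, encrypted_pseudonym):
        self._image_records[image_id] = encrypted_pseudonym
    def lookup(self, image_id):
        return self._image_records[image_id]

# Upload path: the identity provider knows the person, the agent encrypts
# the pseudonym, and the certificate issuer retains only the encrypted blob.
idp, agent, issuer = IdentityProvider(), DeanonymizationAgent(), CertificateIssuer()
pseudonym = idp.register("Alice Example")
issuer.issue_certificate("image-123", agent.encrypt(pseudonym))

# Deanonymization: all three parties must cooperate.
blob = issuer.lookup("image-123")             # extracted from image metadata
print(idp.resolve(agent.decrypt(blob)))       # -> "Alice Example"
```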

    Transient Addressing for Related Processes: Improved Firewalling by Using IPv6 and Multiple Addresses per Host

    Traditionally, hosts have tended to assign relatively few network addresses to an interface for extended periods. Encouraged by the new abundance of addressing possibilities provided by IPv6, we propose a new method, called Transient Addressing for Related Processes (TARP), whereby hosts temporarily employ and subsequently discard IPv6 addresses in servicing a client host's network requests. The method provides certain security advantages and neatly finesses some well-known firewall problems caused by dynamic port negotiation used in a variety of application protocols. A prototype implementation exists as a small set of KAME/BSD kernel enhancements and allows socket programmers and applications nearly transparent access to TARP addressing's advantages.
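
    The following sketch, using an assumed documentation prefix and a hypothetical helper name, shows the flavor of TARP addressing: derive a fresh address inside the host's /64 for each exchange and discard it afterward. Actually binding to such an address requires kernel and interface support of the kind the paper's KAME/BSD prototype provides, which this snippet does not attempt.

```python
# Rough sketch of the TARP idea: generate a fresh, transient IPv6 address
# from a /64 prefix for a single exchange, then discard it. The prefix is
# a documentation prefix (RFC 3849) used purely for illustration.

import ipaddress
import secrets

PREFIX = ipaddress.IPv6Network("2001:db8:1234:5678::/64")   # example /64

def transient_address(prefix: ipaddress.IPv6Network) -> ipaddress.IPv6Address:
    """Pick a random interface identifier inside the given /64 prefix."""
    host_bits = 128 - prefix.prefixlen
    suffix = secrets.randbits(host_bits)
    return ipaddress.IPv6Address(int(prefix.network_address) + suffix)

# One address per outgoing request; a firewall can then key its state on
# the (transient source address, peer) pair rather than tracking the
# dynamically negotiated ports that cause the classic firewall problems.
for _ in range(3):
    print(transient_address(PREFIX))
```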

    Who Coined the Phrase "Data Shadow"?


    When Enough is Enough: Location Tracking, Mosaic Theory, and Machine Learning

    Since 1967, when it decided Katz v. United States, the Supreme Court has tied the right to be free of unwanted government scrutiny to the concept of reasonable expectations of privacy.[1] An evaluation of reasonable expectations depends, among other factors, upon an assessment of the intrusiveness of government action. Historically, when making such an assessment, the Court has considered police conduct with clear temporal, geographic, or substantive limits. However, in an era where new technologies permit the storage and compilation of vast amounts of personal data, things are becoming more complicated. A school of thought known as “mosaic theory” has stepped into the void, sounding the alarm that our old tools for assessing the intrusiveness of government conduct potentially undervalue privacy rights. Mosaic theorists advocate a cumulative approach to the evaluation of data collection. Under the theory, searches are “analyzed as a collective sequence of steps rather than as individual steps.”[2] The approach is based on the recognition that comprehensive aggregation of even seemingly innocuous data reveals greater insight than consideration of each piece of information in isolation. Over time, discrete units of surveillance data can be processed to create a mosaic of habits, relationships, and much more. Consequently, a Fourth Amendment analysis that focuses only on the government’s collection of discrete units of trivial data fails to appreciate the true harm of long-term surveillance—the composite. In the context of location tracking, the Court has previously suggested that the Fourth Amendment may (at some theoretical threshold) be concerned with the accumulated information revealed by surveillance.[3] Similarly, in the Court’s recent decision in United States v. Jones, a majority of concurring justices indicated willingness to explore such an approach.[4] However, in general, the Court has rejected any notion that technological enhancement matters to the constitutional treatment of location tracking.[5] Rather, it has found that such surveillance in public spaces, which does not require physical trespass, is equivalent to a human tail and thus not regulated by the Fourth Amendment. In this way, the Court has avoided quantitative analysis of the amendment’s protections. The Court’s reticence is built on the enticingly direct assertion that objectivity under the mosaic theory is impossible. This is true in large part because there has been no rationale yet offered to objectively distinguish relatively short-term monitoring from its counterpart of greater duration. As Justice Scalia recently observed in Jones: “it remains unexplained why a 4-week investigation is ‘surely’ too long.”[6] This article suggests that by combining the lessons of machine learning with the mosaic theory and applying the pairing to the Fourth Amendment we can see the contours of a response. Machine learning makes clear that mosaics can be created. Moreover, there are also important lessons to be learned about when that is the case. Machine learning is the branch of computer science that studies systems that can draw inferences from collections of data, generally by means of mathematical algorithms. In a recent competition called “The Nokia Mobile Data Challenge,”[7] researchers evaluated machine learning’s applicability to GPS and cell phone tower data.
From a user’s location history alone, the researchers were able to estimate the user’s gender, marital status, occupation and age.[8] Algorithms developed for the competition were also able to predict a user’s likely future location by observing past location history. The prediction of a user’s future location could be even further improved by using the location data of friends and social contacts.[9] Machine learning of the sort on display during the Nokia competition seeks to harness the data deluge of today’s information society by efficiently organizing data, finding statistical regularities and other patterns in it, and making predictions therefrom. Machine learning algorithms are able to deduce information—including information that has no obvious linkage to the input data—that may otherwise have remained private due to the natural limitations of manual and human-driven investigation. Analysts can “train” machine learning programs using one dataset to find similar characteristics in new datasets. When applied to the digital “bread crumbs” of data generated by people, machine learning algorithms can make targeted personal predictions. The greater the number of data points evaluated, the greater the accuracy of the algorithm’s results. In five parts, this article advances the conclusion that the duration of investigations is relevant to their substantive Fourth Amendment treatment because duration affects the accuracy of the predictions. Though it was previously difficult to explain why an investigation of four weeks was substantively different from an investigation of four hours, we now have a better understanding of the value of aggregated data when viewed through a machine learning lens. In some situations, predictions of startling accuracy can be generated with remarkably few data points. Furthermore, in other situations accuracy can increase dramatically above certain thresholds. For example, a 2012 study found the ability to deduce ethnicity moved sideways through five weeks of phone data monitoring, jumped sharply to a new plateau at that point, and then increased sharply again after twenty-eight weeks.[10] More remarkably, the accuracy of identification of a target’s significant other improved dramatically after five days’ worth of data inputs.[11] Experiments like these support the notion of a threshold, a point at which it makes sense to draw a Fourth Amendment line. In order to provide an objective basis for distinguishing between law enforcement activities of differing duration the results of machine learning algorithms can be combined with notions of privacy metrics, such as k-anonymity or l-diversity. While reasonable minds may dispute the most suitable minimum accuracy threshold, this article makes the case that the collection of data points allowing predictions that exceed selected thresholds should be deemed unreasonable searches in the absence of a warrant.[12] Moreover, any new rules should take into account not only the data being collected but also the foreseeable improvements in the machine learning technology that will ultimately be brought to bear on it; this includes using future algorithms on older data. In 2001, the Supreme Court asked “what limits there are upon the power of technology to shrink the realm of guaranteed privacy.”[13] In this piece, we explore an answer and investigate what lessons there are in the power of technology to protect the realm of guaranteed privacy. After all, as technology takes away, it also gives. 
The objective understanding of data compilation and analysis that is revealed by machine learning provides important Fourth Amendment insights. We should begin to consider these insights more closely.
    [1] Katz v. United States, 389 U.S. 347, 361 (1967) (Harlan, J., concurring).
    [2] Orin Kerr, The Mosaic Theory of the Fourth Amendment, 111 Mich. L. Rev. 311, 312 (2012).
    [3] United States v. Knotts, 460 U.S. 276, 284 (1983).
    [4] Justice Scalia writing for the majority left the question open. United States v. Jones, 132 S. Ct. 945, 954 (2012) (“It may be that achieving the same result [as in traditional surveillance] through electronic means, without an accompanying trespass, is an unconstitutional invasion of privacy, but the present case does not require us to answer that question.”).
    [5] Compare Knotts, 460 U.S. at 276 (rejecting the contention that an electronic beeper should be treated differently than a human tail) and Smith v. Maryland, 442 U.S. 735, 744 (1979) (approving the warrantless use of a pen register in part because the justices were “not inclined to hold that a different constitutional result is required because the telephone company has decided to automate”) with Kyllo v. United States, 533 U.S. 27, 33 (2001) (recognizing that advances in technology affect the degree of privacy secured by the Fourth Amendment).
    [6] United States v. Jones, 132 S. Ct. 945 (2012); see also Kerr, 111 Mich. L. Rev. at 329-330.
    [7] See Nokia Research Center, Mobile Data Challenge 2012 Workshop, http://research.nokia.com/page/12340.
    [8] Sanja Brdar, Dubravko Culibrk & Vladimir Crnojevic, Demographic Attributes Prediction on the Real-World Mobile Data, Nokia Mobile Data Challenge Workshop 2012.
    [9] Manlio de Domenico, Antonio Lima & Mirco Musolesi, Interdependence and Predictability of Human Mobility and Social Interactions, Nokia Mobile Data Challenge Workshop 2012.
    [10] See Yaniv Altshuler, Nadav Aharony, Michael Fire, Yuval Elovici, Alex Pentland, Incremental Learning with Accuracy Prediction of Social and Individual Properties from Mobile-Phone Data, WS3P, IEEE Social Computing (2012), Figure 10.
    [11] Id., Figure 9.
    [12] Admittedly, there are differing views on sources of authority beyond the Constitution that might justify location tracking. See, e.g., Stephanie K. Pell & Christopher Soghoian, Can You See Me Now? Toward Reasonable Standards for Law Enforcement Access to Location Data That Congress Could Enact, 27 Berkeley Tech. L.J. 117 (2012).
    [13] Kyllo, 533 U.S. at 34.
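
    The article's central technical claim, that more location data points yield more accurate inferences, can be seen even in a toy simulation. The sketch below uses synthetic data and a trivial majority-vote predictor; it is not the Nokia challenge data or any of the cited algorithms, only an illustration of the accuracy-versus-duration curve.

```python
# Toy illustration of accuracy growing with the number of location data
# points collected per person. The hidden "attribute" and the location
# data are synthetic; the predictor is a simple majority vote.

import random

random.seed(0)

def simulate_user(attribute):
    """Users with attribute=1 visit location 'A' 60% of the time, others 40%."""
    p = 0.6 if attribute else 0.4
    return lambda: "A" if random.random() < p else "B"

def predict(observations):
    """Guess the attribute from the majority location in the observations."""
    return 1 if observations.count("A") > len(observations) / 2 else 0

def accuracy(n_points, n_users=2000):
    correct = 0
    for _ in range(n_users):
        attr = random.randint(0, 1)
        sample = simulate_user(attr)
        obs = [sample() for _ in range(n_points)]
        correct += predict(obs) == attr
    return correct / n_users

# More data points per user -> more accurate inference of the hidden attribute.
for n in (1, 5, 25, 125, 625):
    print(f"{n:4d} points: {accuracy(n):.2%}")
```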

    Lawful Hacking: Using Existing Vulnerabilities for Wiretapping on the Internet

    For years, legal wiretapping was straightforward: the officer doing the intercept connected a tape recorder or the like to a single pair of wires. By the 1990s, however, the changing structure of telecommunications—there was no longer just “Ma Bell” to talk to—and new technologies such as ISDN and cellular telephony made executing a wiretap more complicated for law enforcement. Simple technologies would no longer suffice. In response, Congress passed the Communications Assistance for Law Enforcement Act (CALEA), which mandated a standardized lawful intercept interface on all local phone switches. Since its passage, technology has continued to progress, and in the face of new forms of communication—Skype, voice chat during multiplayer online games, instant messaging, etc.—law enforcement is again experiencing problems. The FBI has called this “Going Dark”: their loss of access to suspects’ communication. According to news reports, law enforcement wants changes to the wiretap laws to require a CALEA-like interface in Internet software. CALEA, though, has its own issues: it is complex software specifically intended to create a security hole—eavesdropping capability—in the already-complex environment of a phone switch. It has unfortunately made wiretapping easier for everyone, not just law enforcement. Congress failed to heed experts’ warnings of the danger posed by this mandated vulnerability, and time has proven the experts right. The so-called “Athens Affair,” where someone used the built-in lawful intercept mechanism to listen to the cell phone calls of high Greek officials, including the Prime Minister, is but one example. In an earlier work, we showed why extending CALEA to the Internet would create very serious problems, including the security problems it has visited on the phone system. In this paper, we explore the viability and implications of an alternative method for addressing law enforcement’s need to access communications: legalized hacking of target devices through existing vulnerabilities in end-user software and platforms. The FBI already uses this approach on a small scale; we expect that its use will increase, especially as centralized wiretapping capabilities become less viable. Relying on vulnerabilities and hacking poses a large set of legal and policy questions, some practical and some normative. Among these are: (1) Will it create disincentives to patching? (2) Will there be a negative effect on innovation? (Lessons from the so-called “Crypto Wars” of the 1990s, and in particular the debate over export controls on cryptography, are instructive here.) (3) Will law enforcement’s participation in vulnerability purchasing skew the market? (4) Do local and even state law enforcement agencies have the technical sophistication to develop and use exploits? If not, how should this be handled? A larger FBI role? (5) Should law enforcement even be participating in a market where many of the sellers and other buyers are themselves criminals? (6) What happens if these tools are captured and repurposed by miscreants? (7) Should we sanction otherwise illegal network activity to aid law enforcement? (8) Is the probability of success from such an approach too low for it to be useful? As we will show, these issues are indeed challenging. We regard the issues raised by using vulnerabilities as, on balance, preferable to adding more complexity and insecurity to online systems.

    That Was Close! Reward Reporting of Cybersecurity “Near Misses”

    Building, deploying, and maintaining systems with sufficient cybersecurity is challenging. Faster improvement would be valuable to society as a whole. Are we doing as much as we can to improve? We examine robust and long-standing systems for learning from near misses in aviation, and propose the creation of a Cyber Safety Reporting System (CSRS). To support this argument, we examine the liability concerns which inhibit learning, including both civil and regulatory liability. We look to the way in which cybersecurity engineering and science are done today, and propose that a small amount of ‘policy entrepreneurship’ could have a substantial positive impact. We close by considering how a CSRS should be organized and housed.
