39 research outputs found

    Pre-litigation Mediation as a Privacy Policy: Exploring the Interaction of Economics and Privacy

    Pre-litigation mediation is a clear example of the economic trade-offs that exist in privacy policy. In pre-litigation mediation, costs and confidentiality work independently, yet they rest on a precarious balance: if either confidentiality or cost becomes less effective, the entire mediation process might be damaged.
    Keywords: mediation; confidentiality; privacy; pre-litigation; pre-litigation mediation

    Determining What Characteristics Constitute a Darknet

    Privacy on the Internet has always been a concern, but monitoring of content by both private corporations and government departments has pushed people to search for more secure ways to communicate over the Internet. This has given rise to the creation of Darknets: networks that operate “inside” the Internet and allow anonymous participation via a decentralised, encrypted, peer-to-peer network topology. This research investigates some sources of known Internet content monitoring and how they provided the template for the creation of systems that avoid such surveillance. It then highlights how communication on the Clearnet differs fundamentally from communication on a Darknet, and examines the characteristics that determine whether a network can be classified as a Darknet. These characteristics are selected based on how the network was developed, what its intended goals were, and how it implemented technical measures to meet those goals. Five characteristics were found that can be used to determine whether a network should be classified as a Darknet.

    Cyber-crime Science = Crime Science + Information Security

    Cyber-crime Science is an emerging area of study that aims to prevent cyber-crime by combining security protection techniques from Information Security with the empirical research methods used in Crime Science. Information Security research has developed techniques for protecting the confidentiality, integrity, and availability of information assets, but is less strong on the empirical study of the effectiveness of these techniques. Crime Science studies the effect of crime prevention techniques empirically in the real world and proposes improvements to these techniques on that basis. Combining both approaches, Cyber-crime Science transfers and further develops Information Security techniques to prevent cyber-crime, and empirically studies the effectiveness of these techniques in the real world. In this paper we review the main contributions of Crime Science to date, illustrate its application to a typical Information Security problem, namely phishing, explore the interdisciplinary structure of Cyber-crime Science, and present an agenda for research in Cyber-crime Science in the form of a set of suggested research questions.

    An Analysis of Tools for Online Anonymity

    Purpose: The purpose of this paper is to examine possible explanations for the slow adoption and development of online anonymity technology. The ability to remain anonymous while engaging in different activities online is increasingly sought after by consumers with privacy concerns. Currently, the only way to maintain online anonymity is through the use of technology. This paper reviews and analyzes the tools currently available to consumers for maintaining online anonymity. Only four such tools are available: anonymous remailers, rewebbers, The Onion Router (Tor) and the Invisible Internet Project (I2P). These tools provide the protection needed for an Internet user to remain anonymous but suffer from a lack of usability and adoption.
    Design/methodology/approach: The authors selected specific online anonymity technologies based on the following criteria: the technology satisfies the authors' definition of full anonymity, the technology is currently available for public use, and the technology has been academically researched.
    Findings: Few anonymity technologies available for public use offer full online anonymity, and these technologies are difficult for the average computer user to operate. Further research is needed to determine what the average user wants from an anonymity technology and how to help users integrate it into commodity software such as Web browsers. Future online anonymity technologies should enable the user to decide, with ease and simplicity, when, how and with whom their information is shared, if it is shared at all.
    Originality/value: The authors identify, explain and analyze publicly available online anonymity technologies in terms of their usability, identify ways in which online anonymity technology can be improved to increase public adoption, and make recommendations on how the design and development of online anonymity technology can be improved in the future.

    Privacy Please: A Privacy Curriculum Taxonomy (PCT) For The Era Of Personal Intelligence

    This paper extends forward thinking by information ethics and business education scholars by introducing a Privacy Curriculum Taxonomy (PCT) that repurposes business curricula around the emerging personal information privacy paradigm. The central challenge confronting business education leaders is to respond to the ontological paradigm shift from a physical society driven by material and monetary processes towards a digital society driven by information supply and the growing demand for information privacy. The PCT is advanced as an initial framework for engaging business curriculum planners in the considerations required to repurpose existing disciplines around digital-society information and privacy processes. After a review of the current literature, the PCT is developed using a foundational set of information assurance principles. The PCT is business-discipline specific, to catalyze incubation and further development within and across functional areas.

    Distributed Performance Measurement and Usability Assessment of the Tor Anonymization Network

    While the Internet increasingly permeates the everyday life of individuals around the world, it becomes crucial to prevent unauthorized collection and abuse of personal information. Internet anonymization software such as Tor is an important instrument for protecting online privacy. However, due to the performance overhead caused by Tor, many Internet users refrain from using it. This has a negative impact on the overall privacy provided by Tor, since that privacy depends on the size of the user community and the availability of shared resources. Detailed measurements of Tor's performance are crucial for addressing this issue. This paper presents comparative experiments on Tor latency and throughput when surfing to 500 popular websites from several locations around the world over a period of 28 days. Furthermore, we compare these measurements to critical latency thresholds gathered from web usability research, including our own user studies. Our results indicate that without massive future optimizations of Tor performance, it is unlikely that a larger part of Internet users would adopt it for everyday usage. This leaves the Tor community with fewer resources than theoretically possible and increases the exposure of privacy-concerned individuals. Furthermore, it could create an adoption barrier for similar privacy-enhancing technologies in a Future Internet.
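    As a rough illustration of the kind of latency comparison reported above, the following sketch times page fetches directly and through a local Tor SOCKS proxy (assumed to listen on 127.0.0.1:9050, Tor's default). It relies on the Python requests and PySocks packages; the URL list and the 2-second usability threshold are illustrative placeholders rather than values taken from the paper.

# Minimal sketch: compare direct vs. Tor page-fetch latency for a few sites.
# Assumes a local Tor client exposing a SOCKS proxy on 127.0.0.1:9050 and the
# 'requests' + 'pysocks' packages. URLs and the threshold are illustrative.
import time
import requests

TOR_PROXIES = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}
URLS = ["https://example.com", "https://www.wikipedia.org"]  # placeholder sample
USABILITY_THRESHOLD_S = 2.0  # hypothetical usability threshold, in seconds

def fetch_latency(url, proxies=None):
    """Return wall-clock seconds needed to fetch a URL, or None on failure."""
    start = time.monotonic()
    try:
        requests.get(url, proxies=proxies, timeout=60)
    except requests.RequestException:
        return None
    return time.monotonic() - start

for url in URLS:
    direct = fetch_latency(url)
    via_tor = fetch_latency(url, proxies=TOR_PROXIES)
    if direct is None or via_tor is None:
        print(f"{url}: fetch failed")
        continue
    flag = "over threshold" if via_tor > USABILITY_THRESHOLD_S else "ok"
    print(f"{url}: direct={direct:.2f}s tor={via_tor:.2f}s "
          f"overhead={via_tor - direct:.2f}s ({flag})")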

    Stochastic Tools for Network Security: Anonymity Protocol Analysis and Network Intrusion Detection

    With the rapid development of the Internet and the sharp increase in network crime, network security has become very important and has received a lot of attention. In this dissertation, we model security issues as stochastic systems. This allows us to find weaknesses in existing security systems and to propose new solutions. Exploring the vulnerabilities of existing security tools can prevent cyber-attacks from taking advantage of system weaknesses. We consider The Onion Router (Tor), one of the most popular anonymity systems in use today, and show how to detect a protocol tunnelled through Tor. A hidden Markov model (HMM) is used to represent the protocol. Hidden Markov models are statistical models of sequential data such as network traffic, and are an effective tool for pattern analysis.
    New, flexible and adaptive security schemes are needed to cope with emerging security threats. We propose a hybrid network security scheme that combines intrusion detection systems (IDSs) and honeypots scattered throughout the network, uniting the advantages of the two security technologies. A honeypot is an activity-based network security system, which can serve as a logical supplement to the passive detection policies used by IDSs. This integration requires balancing security performance against cost by scheduling device activities for the proposed system. By formulating the scheduling problem as a decentralized partially observable Markov decision process (DEC-POMDP), decisions are made in a distributed manner at each device without requiring centralized control.
    When using an HMM, it is important to ensure that it accurately represents both the data used to train the model and the underlying process. Current methods assume that the observations used to construct an HMM completely represent the underlying process. In practice, the training data set is often not large enough to capture all statistical dependencies in the system. It is therefore important to know the level of statistical significance with which the constructed model represents the underlying process, not only the training set. We present a method to determine whether the observation data and the constructed model fully express the underlying process at a given level of statistical significance, and we apply this approach to detecting the existence of protocols tunnelled through Tor.
    While HMMs are a powerful tool for representing patterns in the presence of uncertainty, they cannot be used for system control. The partially observable Markov decision process (POMDP) is a useful choice for controlling stochastic systems. As a combination of two Markov models, POMDPs combine the strength of the HMM (capturing dynamics that depend on unobserved states) with that of the Markov decision process (MDP) (taking the decision aspect into account). Decision making under uncertainty is used in many parts of business and science; we use it here for security tools. We propose three approximation methods for discrete-time infinite-horizon POMDPs. One of the main contributions of this work is a high-quality approximate solution for finite-space POMDPs with the average cost criterion, and its extension to DEC-POMDPs. The solution of the first algorithm is built from the observable portion of the system when the underlying MDP operates optimally. The other two methods can be classified as policy-based approximation schemes, in which we formulate POMDP planning as a quadratically constrained linear program (QCLP) that defines an optimal controller of a desired size. This representation allows a wide range of powerful nonlinear programming (NLP) algorithms to be used to solve POMDPs. Simulation results for a set of benchmark problems illustrate the effectiveness of the proposed methods. We show how this tool can be used to design a network security framework.
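    To make the HMM-based detection step concrete, the sketch below scores a symbolized traffic trace against a discrete hidden Markov model using the scaled forward algorithm, one standard way to measure how well observed traffic fits a protocol model. The two-state model, the three delay symbols and the example trace are invented placeholders, not parameters from the dissertation.

# Minimal sketch: score a symbolized traffic trace against a discrete HMM via
# the scaled forward algorithm. All parameters below are invented toy values,
# not the models used in the dissertation.
import numpy as np

def log_likelihood(obs, start_p, trans_p, emit_p):
    """Log P(obs | HMM) using the scaled forward algorithm.
    obs: symbol indices; start_p: (S,); trans_p: (S, S); emit_p: (S, V)."""
    alpha = start_p * emit_p[:, obs[0]]
    log_prob = np.log(alpha.sum())
    alpha = alpha / alpha.sum()
    for o in obs[1:]:
        alpha = (alpha @ trans_p) * emit_p[:, o]
        log_prob += np.log(alpha.sum())
        alpha = alpha / alpha.sum()
    return log_prob

# Toy 2-state model over 3 delay symbols (short / medium / long inter-packet gaps).
start = np.array([0.6, 0.4])
trans = np.array([[0.7, 0.3],
                  [0.2, 0.8]])
emit = np.array([[0.5, 0.4, 0.1],
                 [0.1, 0.3, 0.6]])

trace = [0, 0, 1, 2, 2, 1, 0]  # symbolized inter-packet delays of a captured flow
score = log_likelihood(trace, start, trans, emit)
print(f"per-symbol log-likelihood: {score / len(trace):.3f}")
# A detector would compare this score against a threshold calibrated on traffic
# known to carry (or not carry) the target protocol tunnelled through Tor.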

    Traffic Analysis of Anonymity Systems

    This research applies statistical pattern-recognition methods to test the privacy capabilities of Tor, a very popular anonymity tool used on the Internet. Using a recently developed algorithm known as Causal State Splitting and Reconstruction (CSSR), we can create hidden Markov models of network processes proxied through Tor. In contrast to other techniques, our CSSR extensions create a minimum-entropy model without any prior knowledge of the underlying state structure. The inter-packet time delays of the network process, which are preserved by Tor, can be symbolized into ranges and used to construct the models. After the training models are constructed, detection is performed using confidence intervals: new test data can be fed through a model to determine the intervals and estimate how well the data matches the model. If a match is found, the state sequence, or path, can be used to uniquely describe the data with respect to the model. It is by comparing these paths that Tor users can be identified. Packet data from any two computers using the Tor network can be matched to a model, and their state sequences can be compared to give a statistical likelihood that the two systems are actually communicating with each other over Tor. We perform experiments on a private Tor network to validate this approach. Results show that communicating systems could be identified with 95% accuracy in our test scenario. This attack differs from previous maximum-likelihood-based approaches in that it can be performed between just two computers using Tor; the adversary does not need to be a global observer. The attack can also be performed in real time, provided that a matching model has already been constructed.
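    As a rough sketch of the matching step described above (not of CSSR model construction itself), the example below symbolizes inter-packet delays into ranges, decodes each trace's most likely state path through a given discrete HMM with the Viterbi algorithm, and compares the two paths. The bin edges, model parameters and delay values are made-up placeholders.

# Rough sketch of the matching step only: symbolize inter-packet delays into
# ranges, decode each trace's most likely state path through a given discrete
# HMM (Viterbi), and measure how closely the two paths agree. Building the
# model with CSSR is not shown; all numbers below are made-up placeholders.
import numpy as np

BIN_EDGES_S = [0.01, 0.05, 0.2]  # delay ranges (seconds) -> symbols 0..3

def symbolize(delays):
    """Map raw inter-packet delays to discrete range symbols."""
    return np.digitize(delays, BIN_EDGES_S)

def viterbi(obs, start_p, trans_p, emit_p):
    """Most likely hidden-state path for a discrete HMM (log-space Viterbi)."""
    logd = np.log(start_p) + np.log(emit_p[:, obs[0]])
    backptrs = []
    for o in obs[1:]:
        scores = logd[:, None] + np.log(trans_p)  # scores[i, j]: prev i -> next j
        backptrs.append(scores.argmax(axis=0))
        logd = scores.max(axis=0) + np.log(emit_p[:, o])
    path = [int(logd.argmax())]
    for bp in reversed(backptrs):
        path.append(int(bp[path[-1]]))
    return path[::-1]

# Toy 2-state model over the 4 delay symbols.
start = np.array([0.5, 0.5])
trans = np.array([[0.8, 0.2],
                  [0.3, 0.7]])
emit = np.array([[0.4, 0.3, 0.2, 0.1],
                 [0.1, 0.2, 0.3, 0.4]])

delays_a = [0.004, 0.03, 0.15, 0.40, 0.02, 0.30]  # capture near host A (seconds)
delays_b = [0.005, 0.04, 0.18, 0.50, 0.03, 0.25]  # capture near host B
path_a = viterbi(symbolize(delays_a), start, trans, emit)
path_b = viterbi(symbolize(delays_b), start, trans, emit)
agreement = np.mean(np.array(path_a) == np.array(path_b))
print(f"state-path agreement: {agreement:.0%}")
# High agreement between the two decoded paths suggests both captures belong to
# the same flow, which is the intuition behind matching Tor users end to end.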

    Technical Privacy Metrics: a Systematic Survey

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method for choosing privacy metrics based on nine questions that help identify the right privacy metrics for a given scenario, and we highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.
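    As a small example of the kind of metric such a survey covers, the sketch below computes the classic entropy-based degree of anonymity: the attacker's uncertainty over which of N candidate users is the sender, normalized by log2(N). The probability distribution is an invented example.

# Small example of one classic metric from this literature: the entropy-based
# degree of anonymity, where an attacker assigns each of N candidate users a
# probability of being the sender. The distribution below is invented.
import math

def degree_of_anonymity(probs):
    """Return (entropy in bits, entropy normalized by log2(N), in [0, 1])."""
    entropy = -sum(p * math.log2(p) for p in probs if p > 0)
    max_entropy = math.log2(len(probs))
    return entropy, (entropy / max_entropy if max_entropy > 0 else 0.0)

# Attacker's posterior over four candidate senders after observing the system.
h, d = degree_of_anonymity([0.55, 0.25, 0.15, 0.05])
print(f"entropy = {h:.2f} bits, degree of anonymity = {d:.2f}")
# d close to 1 means the attacker learned little; d close to 0 means the sender
# is almost fully identified.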