
    Situational Prevention of Child Abuse in the New Technologies

    Situational prevention shifts attention from the psychological characteristics of the individual performing the behaviour to the facilitating role played by the immediate environment in which the behaviour occurs. Applied to the problem of Internet child pornography, the situational approach emphasises the role of opportunity in driving consumption. It is argued that under the right environmental conditions the potential to view children as sexual objects is more widespread than sexual deviance models suggest. The Internet allows individuals to satisfy their secret desires conveniently, cheaply and relatively risk-free. Situational prevention of Internet child pornography therefore requires strategies that reduce the opportunities for accessing child abuse images by making the activity less rewarding, more difficult and riskier.

    Mitigating Distributed Denial of Service Attacks in an Anonymous Routing Environment: Client Puzzles and Tor

    Online intelligence operations use the Internet to gather information on the activities of U.S. adversaries. The security of these operations is paramount, and one way to avoid being linked to the Department of Defense (DoD) is to use anonymous communication systems. One such system, Tor, makes interactive TCP services anonymous. Tor uses the Transport Layer Security (TLS) protocol and is thus vulnerable to a distributed denial-of-service (DDoS) attack that can significantly delay data traversing the Tor network. This research uses client puzzles to mitigate TLS DDoS attacks. A novel puzzle protocol, the Memoryless Puzzle Protocol (MPP), is conceived, implemented, and analyzed for anonymity and DDoS vulnerabilities. Consequently, four new secondary DDoS and anonymity attacks are identified and defenses are proposed. Furthermore, analysis of the MPP identified and resolved two important shortcomings of the generalized client-puzzle technique. Attacks that normally induce victim CPU utilization rates of 80-100% are reduced to below 70%, and the puzzle implementation allows user-data latency to be reduced by close to 50% during a large-scale attack. Finally, experimental results show that successful mitigation can occur without sending a puzzle to every requesting client: by adjusting the maximum puzzle strength, CPU utilization can be capped at 70% even when an arbitrary client has only a 30% chance of receiving a puzzle.
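
    To make the client-puzzle idea concrete, the sketch below shows the generic hash-reversal scheme that protocols such as the MPP build on; it is not the MPP itself. The server derives each puzzle statelessly from a secret key, the client address and a coarse timestamp, so it stores no per-client state, while the client pays an exponential brute-force cost. All names and parameters here (SERVER_SECRET, PUZZLE_BITS, the 60-second validity window) are illustrative assumptions.

        import hashlib
        import hmac
        import os
        import time

        SERVER_SECRET = os.urandom(32)  # illustrative; a real server would rotate this
        PUZZLE_BITS = 20                # difficulty: ~2**20 hashes expected per client

        def make_puzzle(client_addr: str) -> bytes:
            # Derive the nonce statelessly (no per-client storage): an HMAC over
            # the client address and a coarse timestamp. Puzzles expire when the
            # one-minute epoch rolls over.
            epoch = int(time.time()) // 60
            msg = f"{client_addr}|{epoch}".encode()
            return hmac.new(SERVER_SECRET, msg, hashlib.sha256).digest()

        def solve_puzzle(nonce: bytes, bits: int = PUZZLE_BITS) -> int:
            # Client side: brute-force x until SHA-256(nonce || x) has `bits`
            # leading zero bits. Expected cost is about 2**bits hash evaluations.
            target = 1 << (256 - bits)
            x = 0
            while True:
                digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
                if int.from_bytes(digest, "big") < target:
                    return x
                x += 1

        def verify(client_addr: str, x: int, bits: int = PUZZLE_BITS) -> bool:
            # Server side: one HMAC plus one hash, independent of the difficulty
            # imposed on the client (fails if the epoch rolled over meanwhile).
            nonce = make_puzzle(client_addr)
            digest = hashlib.sha256(nonce + x.to_bytes(8, "big")).digest()
            return int.from_bytes(digest, "big") < (1 << (256 - bits))

        # Toy round trip: costs the client ~2**20 hashes, the server two.
        nonce = make_puzzle("198.51.100.7")
        assert verify("198.51.100.7", solve_puzzle(nonce))

    Raising the puzzle strength (here PUZZLE_BITS) grows the client's work exponentially while verification stays constant-cost, which is the lever behind capping CPU utilization even when only a fraction of clients receive puzzles.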

    Privacy In The Age Of Technology

    The rapid development of technology has brought increased convenience, but at the price of lost privacy. Consumers should be aware of the potential threats to their personal information, learn the facts and take steps to protect themselves.

    Conducting Research on the Internet: Online Survey Design, Development and Implementation Guidelines

    Using the Internet to conduct quantitative research presents challenges not found in conventional research. Some of our knowledge concerning the effective design and use of paper-based surveys does translate into electronic formats. However, electronic surveys have distinctive technological, demographic and response characteristics that affect how they should be designed, when they can be used and how they can be implemented. Survey design, subject privacy and confidentiality, sampling and subject solicitation, distribution methods, response rates and survey piloting are critical methodological components that must be addressed in order to conduct sound online research. This paper focuses on those distinctive characteristics. It reviews the current literature on electronic surveys and presents guidelines for designing, developing and implementing them, particularly Web-based surveys. The paper argues that Web-based surveys are superior to email surveys in many respects, but that email, perhaps combined with offline media, is an excellent vehicle for inviting individuals to participate in Web-based surveys. The application of these guidelines is demonstrated through the authors’ current research on defining the nature of “non-public participation” (commonly referred to as lurking) in online discussion groups. Guidelines do not, however, eliminate the many trade-off decisions required in the use of online surveys.

    Compromising Anonymous Communication Systems Using Blind Source Separation

    We propose a class of anonymity attacks on both wired and wireless anonymity networks. These attacks are based on the blind source separation algorithms widely used in statistical signal processing to recover individual signals from mixtures of signals. Since the philosophy behind the design of current anonymity networks is to mix traffic or to hide in crowds, the proposed anonymity attacks are very effective. The flow separation attack proposed for wired anonymity networks can separate the traffic in a mix network. Our experiments show that this attack is effective and scalable. By combining the flow separation method with frequency spectrum matching, a passive attacker can derive the traffic map of the mix network. We use a nontrivial network to show that the combined attack works. The proposed anonymity attacks for wireless networks can identify nodes in fully anonymized wireless networks using collections of very simple sensors. Based on a time series of counts of anonymous packets provided by the sensors, we estimate the number of nodes using principal component analysis. We then separate the collected packet data into traffic flows that, with the help of the spatial diversity in the available sensors, can be used to estimate the locations of the wireless nodes. Our simulation experiments indicate that the estimators show high accuracy and high confidence for anonymized TCP traffic. Additional experiments indicate that the estimators perform very well in anonymous wireless networks that use traffic padding.
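
    As a rough illustration of the node-counting step, the sketch below applies a standard PCA recipe to a matrix of per-sensor packet counts, keeping the number of principal components needed to explain most of the observed variance. The 95% energy threshold, the Poisson toy flows and the random mixing matrix are assumptions made for illustration, not the estimator evaluated in the paper.

        import numpy as np

        def estimate_source_count(counts: np.ndarray, energy: float = 0.95) -> int:
            # `counts` is (num_sensors x num_windows): anonymous packet counts
            # per sensor per time window. Return the smallest number of principal
            # components whose eigenvalues capture `energy` of the total variance.
            centered = counts - counts.mean(axis=1, keepdims=True)
            eigvals = np.linalg.eigvalsh(np.cov(centered))[::-1]  # descending
            eigvals = np.clip(eigvals, 0.0, None)                 # drop numerical noise
            cumulative = np.cumsum(eigvals) / eigvals.sum()
            return int(np.searchsorted(cumulative, energy) + 1)

        # Toy usage: 3 hidden flows observed by 6 sensors through random mixing.
        rng = np.random.default_rng(0)
        flows = rng.poisson(lam=[5.0, 20.0, 50.0], size=(400, 3)).T  # 3 x 400
        mixing = rng.uniform(0.0, 1.0, size=(6, 3))                  # 6 sensors
        observed = mixing @ flows + rng.normal(0.0, 0.5, size=(6, 400))
        print(estimate_source_count(observed))  # expected: about 3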

    Neyman-Pearson Decision in Traffic Analysis

    The increase of encrypted traffic on the Internet may become a problem for network-security applications such as intrusion-detection systems, and may interfere with forensic investigations. This has increased awareness of traffic analysis, i.e., inferring information from communication patterns instead of content. Deciding correctly that a known network flow is the same as, or part of, an observed one can be extremely useful for several network-security applications such as intrusion detection and tracing anonymous connections. In many cases the flows of interest are relayed through many nodes that re-encrypt the flow, making traffic analysis the only possible solution. Two well-known techniques exist for this problem: passive traffic analysis and flow watermarking. The former is undetectable but in general performs much worse than watermarking, whereas the latter can be detected and modified in such a way that the watermark is destroyed. In the first part of this dissertation we design techniques where the traffic analyst (TA) is one end of an anonymous communication and wants to deanonymize the other host, under the premise that the arrival times of the TA's packets/requests can be predicted with high confidence. This, together with an optimal detector based on the Neyman-Pearson lemma, allows the TA to deanonymize the other host with high confidence even with short flows. We start by studying the forensic problem of leaving identifiable traces in the log of a Tor hidden service; in this case the predictor comes from the HTTP header. Afterwards, we propose two different methods for locating Tor hidden services: the first is based on the arrival time of the request cell, and the second uses the number of cells in certain time intervals. In both methods the predictor is based on the round-trip time and, in some cases, on the position inside the burst, so the TA does not need access to the decrypted flow. The second part of this dissertation deals with scenarios where an accurate predictor is not feasible for the TA. Here the traffic-analysis technique is based on correlating the inter-packet delays (IPDs) using a Neyman-Pearson detector, and it can be used either as a passive analysis or as a watermarking technique. The algorithm is first made robust against adversary models that add chaff traffic, split the flows or add random delays. Afterwards, we study this scenario from a game-theoretic point of view, analyzing two different games: the first deals with the identification of independent flows, while the second decides whether a flow has been watermarked/fingerprinted or not.
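
    As a minimal sketch of a Neyman-Pearson IPD correlator in the spirit of the second part, the code below compares the log-likelihood ratio of inter-packet-delay differences against a threshold chosen empirically for a target false-positive rate. The Laplacian jitter models and every parameter (b1, b0, alpha, the toy traffic) are illustrative assumptions; the dissertation's actual flow models, watermarking variant and game-theoretic analysis are not reproduced here.

        import numpy as np

        def np_llr(template_ipd, observed_ipd, b1=0.005, b0=0.5):
            # Log-likelihood ratio for H1 ("same flow": IPD differences are
            # narrow Laplace(0, b1) jitter) against H0 ("independent flows":
            # differences follow a much wider Laplace(0, b0)).
            d = np.asarray(observed_ipd) - np.asarray(template_ipd)
            ll1 = -np.abs(d) / b1 - np.log(2 * b1)
            ll0 = -np.abs(d) / b0 - np.log(2 * b0)
            return float(np.sum(ll1 - ll0))

        def np_threshold(template_ipd, null_flows, alpha=0.01):
            # Neyman-Pearson step: fix the threshold so that at most a fraction
            # `alpha` of known-independent flows score above it.
            scores = sorted(np_llr(template_ipd, f) for f in null_flows)
            return scores[int((1 - alpha) * len(scores))]

        # Toy usage: the linked flow is the template plus 2 ms Laplacian jitter.
        rng = np.random.default_rng(1)
        template = rng.exponential(0.05, size=200)   # 200 IPDs, mean 50 ms
        linked = template + rng.laplace(0.0, 0.002, size=200)
        nulls = [rng.exponential(0.05, size=200) for _ in range(500)]
        tau = np_threshold(template, nulls)
        print(np_llr(template, linked) > tau)  # True: the flow is linked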