    Storytelling Security: User-Intention Based Traffic Sanitization

    Malicious software (malware) with a decentralized communication infrastructure, such as peer-to-peer botnets, is difficult to detect. In this paper, we describe a traffic-sanitization method for identifying malware-triggered outbound connections from a personal computer. Our solution correlates user activities with the content of outbound traffic. Our key observation is that user-initiated outbound traffic typically has corresponding human inputs, i.e., keystrokes or mouse clicks. Our analysis of the causal relations between user inputs and packet payloads enables the efficient enforcement of inter-packet dependencies at the application level. We formalize our approach within the framework of protocol state machines and define new application-level traffic-sanitization policies that enforce the inter-packet dependencies. The dependencies are derived from the transitions among protocol states that involve both user actions and network events. We refer to our methodology as storytelling security. We demonstrate a concrete realization of our methodology in the context of a peer-to-peer file-sharing application and describe its use in blocking the traffic of P2P bots on a host. We implement and evaluate our prototype on the Windows operating system in both online and offline deployment settings. Our experimental evaluation, along with case studies of real-world P2P applications, demonstrates the feasibility of verifying inter-packet dependencies. Our deep packet inspection incurs overhead on the outbound network flow. Our solution can also be used as an offline collect-and-analyze tool.
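
    The gating idea can be sketched in a few lines. The snippet below is a minimal illustration in Python, not the authors' implementation: a hypothetical TrafficGate class releases an outbound request only if a keystroke or mouse click was observed within a short causal window (the two-second threshold is an assumption).

```python
# Minimal sketch (not the paper's implementation): a per-connection state
# machine that releases an outbound request only if a human input event
# (keystroke or mouse click) was observed within a short causal window.
import time
from collections import deque

CAUSAL_WINDOW = 2.0  # seconds; hypothetical threshold

class TrafficGate:
    def __init__(self):
        self.input_events = deque()  # timestamps of recent user inputs

    def record_user_input(self, ts=None):
        """Called by a keyboard/mouse hook when the user acts."""
        self.input_events.append(ts if ts is not None else time.time())

    def allow_outbound(self, ts=None):
        """Permit the packet only if a user input precedes it closely enough."""
        now = ts if ts is not None else time.time()
        # Drop stale input events that fall outside the causal window.
        while self.input_events and now - self.input_events[0] > CAUSAL_WINDOW:
            self.input_events.popleft()
        return bool(self.input_events)

gate = TrafficGate()
gate.record_user_input()
assert gate.allow_outbound()                       # user-triggered: allowed
assert not gate.allow_outbound(time.time() + 10)   # no recent input: blocked
```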

    Catch, Clean, and Release: A Survey of Obstacles and Opportunities for Network Trace Sanitization

    Network researchers benefit tremendously from access to traces of production networks, and several repositories of such network traces exist. By their very nature, these traces capture sensitive business and personal activity. Furthermore, network traces contain significant operational information about the target network, such as its structure, the identity of the network provider, and the addresses of important servers. To protect private or proprietary information, researchers must “sanitize” a trace before sharing it.

    In this chapter, we survey the growing body of research that addresses the risks, methods, and evaluation of network trace sanitization. Research on the risks of network trace sanitization attempts to extract information from published network traces, while research on sanitization methods investigates approaches that may protect against such attacks. Although researchers have recently proposed both quantitative and qualitative methods to evaluate the effectiveness of sanitization methods, such work has several shortcomings, some of which we highlight in a discussion of open problems. Sanitizing a network trace, however challenging, remains an important method for advancing network-based research.
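
    As one concrete illustration of what "sanitizing" a trace can involve (the chapter surveys many methods; this particular scheme, key, and the record fields are assumptions), a minimal sketch replaces IP addresses with stable keyed pseudonyms before a trace is shared:

```python
# Minimal sketch of one common sanitization step: replacing IP addresses
# with keyed pseudonyms before sharing a trace. Illustrative only;
# production tools (e.g., prefix-preserving schemes such as Crypto-PAn)
# are more sophisticated. The key and field names are hypothetical.
import hmac, hashlib

SANITIZATION_KEY = b"replace-with-a-secret-key"

def pseudonymize_ip(ip: str) -> str:
    """Map an IP to a stable pseudonym; same input -> same output."""
    digest = hmac.new(SANITIZATION_KEY, ip.encode(), hashlib.sha256).hexdigest()
    return "ip-" + digest[:12]

record = {"src": "10.1.2.3", "dst": "192.0.2.7", "bytes": 1420}
sanitized = {**record,
             "src": pseudonymize_ip(record["src"]),
             "dst": pseudonymize_ip(record["dst"])}
print(sanitized)
```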

    The Role of Signal Processing in Meeting Privacy Challenges: An Overview

    With the increasing growth and sophistication of information technology, personal information is easily accessible electronically. This flood of released personal data raises important privacy concerns. However, electronic data sources exist to be used and have tremendous value (utility) to their users and collectors, leading to a tension between privacy and utility. This article aims to quantify that tension by means of an information-theoretic framework and to motivate signal processing approaches to privacy problems. The framework is applied to a number of case studies to illustrate concretely how signal processing can be harnessed to provide data privacy.
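
    One standard way to formalize such a privacy-utility tension (the article's exact framework may differ) is to choose the randomized release channel that minimizes leakage about the private data subject to a distortion bound:

```latex
% A common information-theoretic formulation (illustrative; the article's
% exact framework may differ): given private data $X$ and a released
% version $Y$, pick the release channel $p(y \mid x)$ that minimizes the
% leakage $I(X;Y)$ while keeping the utility loss (distortion) below $D$.
\min_{p(y \mid x)} \; I(X;Y)
\quad \text{subject to} \quad
\mathbb{E}\!\left[ d(X, Y) \right] \le D
```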

    Large-scale Wireless Local-area Network Measurement and Privacy Analysis

    The edge of the Internet is increasingly becoming wireless. Understanding the wireless edge is therefore important for understanding the performance and security aspects of the Internet experience. This understanding is especially necessary for enterprise-wide wireless local-area networks (WLANs), as organizations increasingly depend on WLANs for mission-critical tasks. Studying a live production WLAN, especially a large-scale network, is a difficult undertaking. Two fundamental difficulties are (1) building a scalable network-measurement infrastructure to collect traces from a large-scale production WLAN, and (2) preserving user privacy while sharing the collected traces with the network research community. In this dissertation, we present our experience in designing and implementing one of the largest distributed WLAN measurement systems in the United States, the Dartmouth Internet Security Testbed (DIST), with a particular focus on our solutions to the challenges of efficiency, scalability, and security. We also present an extensive evaluation of the DIST system. To understand the severity of some potential trace-sharing risks for an enterprise-wide large-scale wireless network, we conduct a privacy analysis on one kind of wireless network trace, a user-association log, collected from a large-scale WLAN. We introduce a machine-learning based approach that can extract and quantify sensitive information from a user-association log, even though it is sanitized. Finally, we present a case study that evaluates the tradeoff between utility and privacy in WLAN trace sanitization.
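
    A minimal sketch of the intuition behind such a privacy analysis (hypothetical, not the dissertation's actual machine-learning method; the names and data are invented): each user's set of visited access points acts as a behavioral fingerprint, so a pseudonymized association log can sometimes be matched back to labeled history.

```python
# Hypothetical sketch of re-identification from a sanitized association
# log: a user's set of visited access points forms a fingerprint that a
# pseudonymized trace can be matched against.

def fingerprint(associations):
    """Reduce a user's association records to the set of APs visited."""
    return frozenset(ap for ap, _ts in associations)

def jaccard(a, b):
    """Set similarity between two fingerprints, in [0, 1]."""
    return len(a & b) / len(a | b) if a | b else 0.0

# Labeled history from an earlier, unsanitized period (invented data).
known = {
    "alice": fingerprint([("ap-lib-1", 1), ("ap-dorm-3", 2), ("ap-gym", 3)]),
    "bob":   fingerprint([("ap-cs-2", 1), ("ap-cafe", 2)]),
}

# A pseudonymized trace from the sanitized log.
pseudonym_trace = fingerprint([("ap-lib-1", 9), ("ap-dorm-3", 11)])

best = max(known, key=lambda u: jaccard(known[u], pseudonym_trace))
print("pseudonym most resembles:", best)  # -> alice
```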

    Participation Cost Estimation: Private Versus Non-Private Study

    In our study, we seek to learn the real-time crowd levels at popular points of interest based on users continually sharing their location data. We evaluate the benefits of users sharing their location data privately and non-privately, and show that suitable privacy-preserving mechanisms provide incentives for user participation in a private study as compared to a non-private study.
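
    One privacy-preserving mechanism a participant could apply before sharing a location (an assumption; the study does not specify this exact mechanism here) is local noise addition in the style of geo-indistinguishability, sketched below with a simplified radial draw.

```python
# Hypothetical sketch: perturb a coordinate with polar noise before
# sharing it. The radius here decays exponentially with distance, a
# simplification of the true planar Laplace mechanism (whose radial
# draw involves a Lambert-W inverse).
import math, random

def noisy_location(lat, lon, epsilon=0.01):
    """Smaller epsilon means more noise and hence more privacy."""
    theta = random.uniform(0, 2 * math.pi)       # random direction
    r = -math.log(1 - random.random()) / epsilon  # exponential radius (m)
    # Crude meters-to-degrees conversion, for illustration only.
    return (lat + (r * math.cos(theta)) / 111_111,
            lon + (r * math.sin(theta)) / 111_111)

print(noisy_location(43.7044, -72.2887))  # a perturbed coordinate
```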

    Privacy-preserving data outsourcing in the cloud via semantic data splitting

    Even though cloud computing provides many intrinsic benefits, privacy concerns related to the lack of control over the storage and management of outsourced data still prevent many customers from migrating to the cloud. Several privacy-protection mechanisms based on a prior encryption of the data to be outsourced have been proposed. Data encryption offers robust security, but at the cost of hampering the efficiency of the service and limiting the functionalities that can be applied over the (encrypted) data stored on cloud premises. Because both efficiency and functionality are crucial advantages of cloud computing, in this paper we aim to retain them by proposing a privacy-protection mechanism that relies on splitting (clear) data and on the distributed storage offered by the increasingly popular notion of multi-clouds. We propose a semantically grounded data-splitting mechanism that automatically detects pieces of data that may cause privacy risks and splits them on local premises, so that each chunk does not incur those risks; chunks of clear data are then independently stored in the separate locations of a multi-cloud, so that external entities cannot access the whole of the confidential data. Because partial data are stored in the clear on cloud premises, outsourced functionalities are seamlessly and efficiently supported by simply broadcasting queries to the different cloud locations. To enforce a robust privacy notion, our proposal relies on a privacy model that offers a priori privacy guarantees; to ensure its feasibility, we have designed heuristic algorithms that minimize the number of cloud storage locations needed; and to show its potential and generality, we have applied it to the least structured and most challenging data type: plain textual documents.
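
    A toy sketch of the splitting idea (far simpler than the paper's semantically grounded method; the risky-term policy and function names are assumptions): keep jointly risky fragments in different storage locations, and answer queries by broadcasting to all of them.

```python
# Hypothetical sketch: split a text so that terms flagged as jointly
# risky are spread across storage locations, then answer queries by
# broadcasting to every location and merging partial results.
RISKY = {"diagnosis", "patient"}  # toy privacy policy: must not co-occur

def split(document, n_clouds=2):
    chunks = [[] for _ in range(n_clouds)]
    slot = 0
    for sentence in document.split(". "):
        if any(term in sentence.lower() for term in RISKY):
            chunks[slot % n_clouds].append(sentence)  # round-robin risky parts
            slot += 1
        else:
            chunks[0].append(sentence)                # safe parts anywhere
    return chunks

def query(chunks, keyword):
    """Broadcast the query to all clouds and merge the partial results."""
    return [s for chunk in chunks for s in chunk if keyword in s.lower()]

clouds = split("Patient record opened. The diagnosis is pending. Weather is fine.")
print(query(clouds, "diagnosis"))
```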

    Voluntary Disclosure and Earnings Management

    Discretion pervades the accounting rules. Proponents argue that allowing discretion enables managers to incorporate more information into their disclosures, while opponents believe that managers can abuse discretion and engage in earnings management at the expense of shareholders. We explicitly model accounting discretion and earnings management in a disclosure setting motivated by Shin (1994). We use this setting to study the interaction between management’s voluntary disclosure and the subsequent mandatory disclosure of value-relevant information. We show that, in equilibrium, allowing the manager some discretion over the mandatory financial reports may enhance the informativeness of the more timely voluntary disclosure. However, allowing too much discretion for earnings management may result in less informative voluntary disclosure. Thus, there may be a hidden benefit to granting some (but not too much) discretion in firms’ mandatory financial statements.

    From MAP to DIST: The Evolution of a Large-Scale WLAN Monitoring System

    The edge of the Internet is increasingly becoming wireless. Monitoring the wireless edge is therefore important to understanding the security and performance aspects of the Internet experience. We have designed and implemented a large-scale WLAN monitoring system, the Dartmouth Internet Security Testbed (DIST), at Dartmouth College. It is equipped with distributed arrays of “sniffers” that cover 210 diverse campus locations and more than 5,000 users. In this paper, we describe our approach, designs, and solutions to the technical challenges arising from the perspectives of efficiency, scalability, security, and management. We also present extensive evaluation results on a production network and summarize the lessons learned.
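
    A minimal sketch of what a single sniffer node in such a system might do (hypothetical; the interface name and anonymization step are assumptions, and the real DIST pipeline is far more elaborate): capture 802.11 frames in monitor mode and emit only anonymized summaries for a central collector.

```python
# Hypothetical sketch of one sniffer node: capture 802.11 frames and
# forward only anonymized summaries. Requires scapy, root privileges,
# and a monitor-mode interface; "mon0" is an assumed interface name.
import hashlib
from scapy.all import sniff, Dot11

def anonymize(mac):
    """Replace a MAC address with a short, stable hash."""
    return hashlib.sha256(mac.encode()).hexdigest()[:10] if mac else "none"

def handle(pkt):
    if pkt.haslayer(Dot11):
        summary = {
            "src": anonymize(pkt.addr2),
            "dst": anonymize(pkt.addr1),
            "len": len(pkt),
        }
        print(summary)  # a real node would ship this to the collector

sniff(iface="mon0", prn=handle, store=False)
```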