
    Lime: Data Lineage in the Malicious Environment

    Intentional or unintentional leakage of confidential data is undoubtedly one of the most severe security threats that organizations face in the digital era. The threat now extends to our personal lives: a plethora of personal information is available to social networks and smartphone providers and is indirectly transferred to untrustworthy third-party and fourth-party applications. In this work, we present LIME, a generic data lineage framework for data flow across multiple entities that take two characteristic, principal roles (i.e., owner and consumer). We define the exact security guarantees required by such a data lineage mechanism toward identification of a guilty entity, and identify the simplifying non-repudiation and honesty assumptions. We then develop and analyze a novel accountable data transfer protocol between two entities within a malicious environment by building upon oblivious transfer, robust watermarking, and signature primitives. Finally, we perform an experimental evaluation to demonstrate the practicality of our protocol.
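
    The abstract leaves the protocol details to the paper, but the underlying lineage idea — each consumer's copy carries a distinct robust watermark, so a leaked copy can be traced back to its recipient — can be sketched roughly. The sketch below is a minimal illustration of that tracing step only, under our own assumptions; it ignores the oblivious-transfer and signature components, and the pattern/embed/trace functions, the epsilon scale, and the recipient names are placeholders, not LIME's actual scheme.

    ```python
    import hashlib
    import random

    # Hypothetical stand-in for a robust watermarking scheme: each recipient's copy
    # gets a pseudorandom +/-1 pattern (keyed by the recipient id), scaled by eps,
    # added to the numeric attributes of the released records.  Detection correlates
    # a suspected leaked copy against every registered recipient's pattern.

    def pattern(recipient_id, n):
        seed = int.from_bytes(hashlib.sha256(recipient_id.encode()).digest()[:4], "big")
        rng = random.Random(seed)
        return [rng.choice((-1.0, 1.0)) for _ in range(n)]

    def embed(values, recipient_id, eps=0.01):
        pat = pattern(recipient_id, len(values))
        return [v + eps * p for v, p in zip(values, pat)]

    def trace(leaked, original, recipients, eps=0.01, threshold=0.5):
        """Return the recipient whose pattern best explains the leaked copy, if any."""
        residual = [l - o for l, o in zip(leaked, original)]
        best, best_score = None, threshold
        for rid in recipients:
            pat = pattern(rid, len(residual))
            score = sum(r * p for r, p in zip(residual, pat)) / (eps * len(residual))
            if score > best_score:
                best, best_score = rid, score
        return best

    # Usage: the owner releases watermarked copies, then traces a leaked one.
    original = [float(10 + (i % 7)) for i in range(64)]
    copies = {rid: embed(original, rid) for rid in ("consumer-A", "consumer-B")}
    leaked = copies["consumer-B"]
    print(trace(leaked, original, ["consumer-A", "consumer-B"]))  # -> consumer-B
    ```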

    Data Leakage Detection


    Data Leakage Detection by Using Fake Objects

    Modern business activities rely on extensive email exchange. Email leakage has become widespread throughout the world, and the severe damage caused by these leakages is a serious problem for organizations. We study the following problem: a data distributor has given sensitive data to a set of supposedly trusted agents (third parties). If the data distributed to the third parties is later found in a public or private domain, identifying the guilty party is a nontrivial task for the distributor. Traditionally, this kind of leakage has been handled by watermarking, which requires modification of the data: if a watermarked copy is later found at an unauthorized site, the distributor can claim ownership. To overcome the disadvantages of watermarking, data allocation strategies are used to improve the probability of identifying guilty third parties. The distributor must assess the likelihood that the leaked data came from one or more agents, as opposed to having been gathered by other means. In this project, we implement and analyze a guilt model that detects such agents using allocation strategies, without modifying the original data; a guilty agent is one who leaks a portion of the distributed data. We also propose injecting "realistic but fake" data records to further improve the chances of detecting leakage and identifying the guilty party; allocation algorithms that use fake objects improve the distributor's chance of detecting a guilty agent. It is observed that minimizing the sum objective increases the chance of detecting guilty agents. We also develop a framework for generating fake objects.
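
    The abstract does not give the guilt model's formula, but the standard formulation in this line of work estimates the probability that an agent is guilty from the overlap between the leaked set and that agent's allocation, discounting objects an attacker could plausibly have obtained independently (with some probability p). The sketch below is a rough illustration under those assumptions; the variable names, the independence assumption, and the value of p are ours, not necessarily the exact model implemented in this project.

    ```python
    def guilt_probability(leaked, allocation, sources_per_object, p=0.2):
        """Rough guilt estimate for one agent.

        leaked             -- set of leaked objects S
        allocation         -- set of objects R_i given to this agent
        sources_per_object -- dict: object -> number of agents that received it
        p                  -- assumed probability an object was obtained independently

        For each leaked object the agent holds, the chance the agent leaked that
        object is (1 - p) divided by the number of agents holding it; the agent is
        guilty if it leaked at least one object, assuming independence across objects.
        """
        prob_innocent = 1.0
        for obj in leaked & allocation:
            prob_innocent *= 1.0 - (1.0 - p) / sources_per_object[obj]
        return 1.0 - prob_innocent

    # Usage: two agents, one fake record ("f1") handed only to agent A.
    R_A = {"t1", "t2", "f1"}
    R_B = {"t2", "t3"}
    sources = {"t1": 1, "t2": 2, "t3": 1, "f1": 1}
    S = {"t2", "f1"}  # objects found leaked

    print(round(guilt_probability(S, R_A, sources), 3))  # high: A held the fake object
    print(round(guilt_probability(S, R_B, sources), 3))  # lower: only the shared object t2
    ```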

    Instructive of Ooze Information

    We study the following problem: a data distributor has given sensitive data to a set of supposedly trusted agents (third parties). Some of the data are leaked and found in an unauthorized place (e.g., on the web or on somebody's laptop). The distributor must evaluate the likelihood that the leaked data came from one or more agents, as opposed to having been independently gathered by other means. We propose data distribution strategies (across the agents) that improve the likelihood of identifying leakages. These methods do not rely on alterations of the released data (e.g., watermarks). In some cases, we can also inject "realistic but fake" data records to further improve our chances of detecting leakage and identifying the guilty party. In the course of doing business, sensitive data must sometimes be handed over to supposedly trusted third parties. For example, a hospital may give patient records to researchers who will devise new treatments. Similarly, a company may have partnerships with other companies that require sharing customer data, or an enterprise may outsource its data processing, so data must be given to various other companies. There always remains a risk of data being leaked by an agent. Perturbation is a very valuable technique in which the data are modified and made "less sensitive" before being handed to agents; for example, one can add random noise to certain attributes, or replace exact values with ranges. But this technique requires modification of the data. Leakage detection is traditionally handled by watermarking, e.g., a unique code is embedded in each distributed copy; if that copy is later discovered in the hands of an unauthorized party, the leaker can be identified. But this again requires modifying the data, and watermarks can sometimes be destroyed if the data recipient is malicious.
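
    The two perturbation techniques mentioned above (adding random noise to numeric attributes, and replacing exact values with ranges) can be illustrated with a short sketch. This is a generic illustration of those transformations, not an implementation from the paper; the record fields, noise scale, and bucket width are invented for the example.

    ```python
    import random

    def add_noise(value, scale=0.05):
        """Perturb a numeric attribute with small multiplicative random noise."""
        return value * (1.0 + random.uniform(-scale, scale))

    def to_range(value, width=10):
        """Replace an exact value with the range (bucket) it falls into."""
        low = (int(value) // width) * width
        return f"{low}-{low + width - 1}"

    # Hypothetical patient records handed to an agent in "less sensitive" form.
    records = [{"age": 34, "weight_kg": 71.2}, {"age": 58, "weight_kg": 83.5}]
    released = [
        {"age": to_range(r["age"]), "weight_kg": round(add_noise(r["weight_kg"]), 1)}
        for r in records
    ]
    print(released)  # e.g. [{'age': '30-39', 'weight_kg': 72.8}, {'age': '50-59', 'weight_kg': 81.9}]
    ```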

    Review on Securing Data by Using Data Leakage Prevention and Detection

    In today's digital economy, data enters and leaves cyberspace at record rates. A typical enterprise sends and receives millions of email messages and downloads, saves, and transfers thousands of files via various channels on a daily basis. Enterprises also hold sensitive data that customers, business partners, regulators, and shareholders expect them to protect; if confidential data is leaked from the organization, it can harm the organization's health. To prevent this, many vendors currently offer data leak prevention and detection products, and this paper reviews such data leak prevention and detection methods. A data leak is the release of sensitive information to an unauthorized third party, often intentionally. Data leakage is the unauthorized transmission of data or information within an organization, or from an organization to an external destination. Data stored on any device can be leaked in two ways: if the system is hacked, or if insiders intentionally or unintentionally make the data public. Therefore, organizations should take measures to understand the sensitive data they hold, how it is controlled, and how to prevent it from being leaked or compromised. For that purpose, this review surveys different techniques for data leak prevention and detection.
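
    Content inspection is one common building block of the detection products such a review covers: outbound messages and files are scanned for patterns that look like sensitive data before they leave the organization. The sketch below is a generic, simplified illustration of that idea only; the patterns and the blocking policy are invented for the example and are not taken from the paper.

    ```python
    import re

    # Hypothetical patterns for sensitive content (illustrative, not exhaustive).
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){15}\d\b"),
        "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "confidential_marking": re.compile(r"\bCONFIDENTIAL\b", re.IGNORECASE),
    }

    def scan_outbound(text):
        """Return the list of pattern names found in an outbound message or file."""
        return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

    def allow_transfer(text):
        """Simple policy: block the transfer if any sensitive pattern matches."""
        hits = scan_outbound(text)
        return (len(hits) == 0, hits)

    ok, hits = allow_transfer("Attached invoice, card 4111 1111 1111 1111, marked CONFIDENTIAL")
    print(ok, hits)  # False ['credit_card', 'confidential_marking']
    ```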

    Sound and Precise Malware Analysis for Android via Pushdown Reachability and Entry-Point Saturation

    We present Anadroid, a static malware analysis framework for Android apps. Anadroid exploits two techniques to soundly raise precision: (1) it uses a pushdown system to precisely model dynamically dispatched interprocedural and exception-driven control flow; (2) it uses Entry-Point Saturation (EPS) to soundly approximate all possible interleavings of asynchronous entry points in Android applications. (It also integrates static taint-flow analysis and least-permissions analysis to expand the class of malicious behaviors that it can catch.) Anadroid provides rich user-interface support for human analysts, who must ultimately rule on the "maliciousness" of a behavior. To demonstrate the effectiveness of Anadroid's malware analysis, we had teams of analysts analyze a challenge suite of 52 Android applications released as part of the Automated Program Analysis for Cybersecurity (APAC) DARPA program. The first team analyzed the apps using a version of Anadroid that uses the traditional (finite-state-machine-based) control-flow analysis found in existing malware analysis tools; the second team analyzed the apps using a version of Anadroid that uses our enhanced pushdown-based control-flow analysis. We measured machine analysis time, human analyst time, and accuracy in flagging malicious applications. With pushdown analysis, we found statistically significant (p < 0.05) decreases in time, from 85 minutes per app to 35 minutes per app in combined human plus machine analysis time, and statistically significant (p < 0.05) increases in accuracy with the pushdown-driven analyzer, from 71% to 95% correct identification. Comment: Appears in the 3rd Annual ACM CCS Workshop on Security and Privacy in Smartphones and Mobile Devices (SPSM'13), Berlin, Germany, 2013
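
    Entry-Point Saturation, as described above, repeatedly injects every asynchronous entry point (lifecycle callbacks, event handlers) into the analysis and joins the resulting abstract states until nothing changes, so that all interleavings are over-approximated. The sketch below is only a schematic fixpoint loop illustrating that idea; the analyze_from function, the abstract-state join, and the toy taint example are placeholders, not Anadroid's actual implementation (which also relies on pushdown control-flow analysis).

    ```python
    def entry_point_saturation(entry_points, analyze_from, join, initial_state):
        """Schematic EPS loop: re-run every entry point on the joined state until fixpoint.

        entry_points  -- analyzable entry points (e.g., onCreate, onClick handlers)
        analyze_from  -- abstract interpretation of one entry point from a given state
        join          -- least upper bound of two abstract states
        initial_state -- abstract state before any entry point has run
        """
        state = initial_state
        changed = True
        while changed:
            changed = False
            for ep in entry_points:
                new_state = join(state, analyze_from(ep, state))
                if new_state != state:
                    state = new_state
                    changed = True
        return state

    # Toy usage: the abstract state is a set of tainted variables; each "entry point"
    # adds taint depending on what is already tainted when it runs.
    def on_create(state): return state | {"imei"}
    def on_click(state):  return state | ({"msg"} if "imei" in state else set())

    result = entry_point_saturation(
        [on_create, on_click],
        analyze_from=lambda ep, s: ep(s),
        join=lambda a, b: a | b,
        initial_state=set(),
    )
    print(result)  # {'imei', 'msg'} regardless of the order entry points fire
    ```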