
    Technical Privacy Metrics: a Systematic Survey

    The goal of privacy metrics is to measure the degree of privacy enjoyed by users in a system and the amount of protection offered by privacy-enhancing technologies. In this way, privacy metrics contribute to improving user privacy in the digital world. The diversity and complexity of privacy metrics in the literature make an informed choice of metrics challenging. As a result, instead of using existing metrics, new metrics are proposed frequently, and privacy studies are often incomparable. In this survey we alleviate these problems by structuring the landscape of privacy metrics. To this end, we explain and discuss a selection of over eighty privacy metrics and introduce categorizations based on the aspect of privacy they measure, their required inputs, and the type of data that needs protection. In addition, we present a method for choosing privacy metrics based on nine questions that help identify the right metrics for a given scenario, and we highlight topics where additional work on privacy metrics is needed. Our survey spans multiple privacy domains and can be understood as a general framework for privacy measurement.
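
    As a concrete illustration of the kind of metric such a survey covers (not an example taken from the paper itself), the sketch below computes an entropy-based anonymity metric in Python: the adversary's probability distribution over candidate users is summarized by its Shannon entropy, and 2^H gives the effective anonymity set size. The function names and the example distribution are illustrative assumptions.

        # Minimal sketch of an entropy-based anonymity metric (illustrative only).
        # Given the adversary's posterior probabilities over candidate users,
        # the Shannon entropy measures the effective anonymity set size.
        import math

        def shannon_entropy(probabilities):
            """Entropy in bits of the adversary's distribution over candidates."""
            return -sum(p * math.log2(p) for p in probabilities if p > 0)

        def effective_anonymity_set_size(probabilities):
            """2^H: number of equally likely users that would yield the same entropy."""
            return 2 ** shannon_entropy(probabilities)

        # Example: the adversary narrows 4 users down to a skewed distribution.
        posterior = [0.7, 0.1, 0.1, 0.1]
        print(shannon_entropy(posterior))               # ~1.36 bits
        print(effective_anonymity_set_size(posterior))  # ~2.56 "effective" users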

    Weiterentwicklung analytischer Datenbanksysteme

    This thesis contributes to the state of the art in analytical database systems. First, we identify and explore extensions to better support analytics on event streams. Second, we propose a novel polygon index to enable efficient geospatial data processing in main memory. Third, we contribute a new deep learning approach to cardinality estimation, which is the core problem in cost-based query optimization.
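
    To give a rough flavour of the third contribution, the following is a hypothetical Python sketch of a learned cardinality estimator: a small regression model is trained to map range-predicate bounds to the logarithm of the result size. The synthetic data, the featurization, and the use of scikit-learn's MLPRegressor are illustrative assumptions, not the approach developed in the thesis.

        # Hypothetical sketch of a learned cardinality estimator (not the thesis' model):
        # train a small regressor to map range-predicate bounds to log(cardinality).
        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)
        column = rng.normal(loc=50.0, scale=15.0, size=100_000)  # synthetic table column

        def true_cardinality(lo, hi):
            """Ground-truth result size of SELECT ... WHERE lo <= col AND col <= hi."""
            return np.count_nonzero((column >= lo) & (column <= hi))

        # Build a training set of random range queries and their true cardinalities.
        lows = rng.uniform(0, 100, size=5_000)
        highs = lows + rng.uniform(0, 50, size=5_000)
        X = np.column_stack([lows, highs])
        y = np.log1p([true_cardinality(lo, hi) for lo, hi in X])  # predict log-cardinality

        model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500, random_state=0)
        model.fit(X, y)

        # Estimate the cardinality of an unseen range query.
        estimate = np.expm1(model.predict([[40.0, 60.0]]))[0]
        print(f"estimated rows: {estimate:.0f}, actual rows: {true_cardinality(40.0, 60.0)}")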

    Veer: Verifying Equivalence of Workflow Versions in Iterative Data Analytics

    Data analytics using GUI-based workflows is an iterative process in which an analyst makes many rounds of changes to refine a workflow, generating a different version at each iteration. In many cases, the result of executing a workflow version is equivalent to the result of a previously executed version. Identifying such equivalence between the execution results of different workflow versions is important for optimizing the performance of a workflow by reusing results from a previous run. The size of the workflows and the complexity of their operators often leave existing equivalence verifiers (EVs) unable to solve the problem. In this paper, we present "Veer," which leverages the fact that two workflow versions are often very similar except for a few changes. The solution divides the workflow version pair into small parts, called windows, and verifies the equivalence within each window by using an existing EV as a black box. We develop solutions to efficiently generate windows and verify the equivalence within each window. Our thorough experiments on real workflows show that Veer not only verifies the equivalence of workflows that existing EVs cannot support, but also does the verification efficiently.
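
    The core idea described above, splitting the pair of versions into small windows around the changes and handing each window to an off-the-shelf EV used as a black box, can be sketched roughly as follows. The linear-pipeline representation, the window construction, and the base_verifier interface are simplifying assumptions for illustration, not Veer's actual algorithm.

        # Rough sketch of windowed equivalence checking in the spirit of Veer.
        # Workflows are simplified to linear pipelines (lists of operator specs);
        # `base_verifier` stands in for any existing black-box equivalence verifier (EV).

        def changed_positions(v1, v2):
            """Indices where the two workflow versions differ."""
            return [i for i, (a, b) in enumerate(zip(v1, v2)) if a != b]

        def build_windows(positions, radius, length):
            """Group nearby changes into windows, padded by `radius` operators."""
            windows = []
            for pos in positions:
                lo, hi = max(0, pos - radius), min(length, pos + radius + 1)
                if windows and lo <= windows[-1][1]:
                    windows[-1] = (windows[-1][0], hi)  # merge overlapping windows
                else:
                    windows.append((lo, hi))
            return windows

        def verify_equivalence(v1, v2, base_verifier, radius=1):
            """Equivalent iff every window around a change is verified by the black-box EV."""
            if len(v1) != len(v2):
                return False  # this simplified sketch only handles same-length versions
            positions = changed_positions(v1, v2)
            if not positions:
                return True  # identical versions are trivially equivalent
            for lo, hi in build_windows(positions, radius, len(v1)):
                if not base_verifier(v1[lo:hi], v2[lo:hi]):
                    return False
            return True

        # Toy usage: a verifier that treats reordered filter operators as equivalent.
        def toy_verifier(w1, w2):
            return sorted(w1) == sorted(w2)

        v_old = ["scan", "filter a>0", "filter b>0", "aggregate"]
        v_new = ["scan", "filter b>0", "filter a>0", "aggregate"]
        print(verify_equivalence(v_old, v_new, toy_verifier))  # True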

    How Useful are Hand-crafted Data? Making Cases for Anomaly Detection Methods

    While the importance of small data has been acknowledged in principle, it has not been widely adopted as a necessity in current machine learning or data mining research. Predominantly, machine learning methods are evaluated under a “bigger is better” presumption: the more (and the more complex) data we can pour into a method, the better we think we are at estimating its performance. We deem this mindset detrimental to interpretability, explainability, and the sustained development of the field. For example, although new outlier detection methods are often inspired by small, low-dimensional samples, their performance has been evaluated exclusively on large, high-dimensional datasets resembling real-world use cases. With such “big data” we miss the chance to gain insights from close looks at how exactly the algorithms perform, as we mere humans cannot really comprehend the samples. In this work, we explore the exact opposite direction: we run several classical anomaly detection methods against small, mindfully crafted cases on which the results can be examined in detail. In addition to a better understanding of these classical algorithms, our exploration has, to our surprise, led to the discovery of some novel uses for them.
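
    In the spirit of this approach (but using a made-up example rather than one of the paper's hand-crafted cases), the sketch below runs two classical detectors, IsolationForest and LocalOutlierFactor from scikit-learn, on a tiny 2-D sample that is small enough to inspect point by point. The choice of these two detectors and of the data is an illustrative assumption.

        # Illustrative example (not from the paper): run classical anomaly detectors
        # on a tiny, hand-crafted 2-D dataset whose every point can be inspected.
        import numpy as np
        from sklearn.ensemble import IsolationForest
        from sklearn.neighbors import LocalOutlierFactor

        # Nine points forming a tight cluster plus one obvious outlier at (10, 10).
        X = np.array([
            [0.0, 0.0], [0.1, 0.2], [0.2, 0.1], [-0.1, 0.1], [0.0, -0.2],
            [0.2, -0.1], [-0.2, 0.0], [0.1, -0.1], [-0.1, -0.2],
            [10.0, 10.0],
        ])

        iso = IsolationForest(random_state=0).fit(X)
        lof = LocalOutlierFactor(n_neighbors=3)

        # Both methods label inliers as +1 and outliers as -1.
        print("IsolationForest:", iso.predict(X))
        print("LOF:            ", lof.fit_predict(X))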