Dynamically Adjusting the Mining Capacity in Cryptocurrency with Binary Blockchain
Many cryptocurrencies rely on a blockchain for their operation. The blockchain serves as a public ledger where all completed transactions can be looked up. To place transactions in the blockchain, a mining operation must be performed. However, due to limited mining capacity, transaction confirmation times are increasing. Many ideas have been proposed to mitigate this problem, but each comes with its own challenges. We propose a novel parallel mining method that can adjust the mining capacity dynamically depending on the congestion level. It requires neither an increase in the block size nor a reduction of the block confirmation time. The proposed scheme can increase the number of parallel blockchains when mining congestion is experienced, which is especially effective under DDoS attack situations. We describe how and when the blockchain is split or merged, how to solve the imbalanced mining problem, and how to adjust the difficulty levels and rewards. We then show simulation results comparing the performance of the binary blockchain and the traditional single blockchain.
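The split/merge idea above can be sketched as a simple feedback controller. This is a hypothetical illustration, not the paper's algorithm: the thresholds, the per-chain capacity figure, and the doubling/halving policy are all assumptions, chosen only to show how the number of parallel chains could track congestion.

```python
# Hypothetical sketch of congestion-driven chain splitting and merging:
# double the number of parallel chains when the pending-transaction backlog
# exceeds a high-water mark, halve it when the backlog falls below a low one.
# All numbers are illustrative, not taken from the paper.

def adjust_chains(num_chains, pending_txs, per_chain_capacity=2000,
                  split_factor=2.0, merge_factor=0.5):
    """Return the new number of parallel chains (kept a power of two,
    matching the binary split/merge structure named in the abstract)."""
    capacity = num_chains * per_chain_capacity
    if pending_txs > split_factor * capacity:
        return num_chains * 2          # split: congestion detected
    if num_chains > 1 and pending_txs < merge_factor * capacity:
        return num_chains // 2         # merge: demand has dropped
    return num_chains                  # steady state

chains = 1
for backlog in [1000, 5000, 9000, 9000, 3000, 500]:
    chains = adjust_chains(chains, backlog)
    print(backlog, "->", chains, "chain(s)")
```

A rising backlog drives the chain count up (and with it the aggregate mining capacity), and a falling backlog drives it back down, which is the dynamic-capacity behavior the abstract describes.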
Control What You Include! Server-Side Protection against Third Party Web Tracking
Third party tracking is the practice by which third parties recognize users
across different websites as they browse the web. Recent studies show that 90%
of websites contain third party content that tracks their users across the
web. Website developers often need to include third party content in order to
provide basic functionality. However, when a developer includes third party
content, she cannot know whether the third party contains tracking mechanisms.
If a website developer wants to protect her users from being tracked, the only
solution is to exclude any third-party content, thus trading functionality for
privacy. We describe and implement a privacy-preserving web architecture that
gives website developers control over third party tracking: developers are
able to include functionally useful third party content while at the same time
ensuring that end users are not tracked by the third parties.
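One common building block for this kind of server-side control is URL rewriting: third-party resources are served through a first-party endpoint so the browser never contacts the tracker directly, and the server can strip cookies and other identifying headers. The sketch below is an assumption-laden toy, not the paper's architecture; the `/proxy` endpoint name and the attribute-rewriting rule are invented for illustration.

```python
# Toy sketch of one server-side tracking defense: rewrite third-party
# src attributes so the content is fetched through a first-party proxy
# endpoint. The /proxy path and the regex rule are illustrative only.

import re
from urllib.parse import quote, urlparse

def rewrite_third_party(html, first_party_host):
    """Rewrite src attributes pointing at other hosts to a first-party proxy."""
    def repl(match):
        url = match.group(2)
        host = urlparse(url).netloc
        if host and host != first_party_host:
            # Third-party URL: route it through the same-origin proxy.
            return f'{match.group(1)}="/proxy?url={quote(url, safe="")}"'
        return match.group(0)          # first-party or relative: leave as-is
    return re.sub(r'(src)="([^"]+)"', repl, html)

page = '<script src="https://tracker.example/lib.js"></script>'
print(rewrite_third_party(page, "mysite.example"))
```

After rewriting, the third-party request is same-origin from the browser's point of view, so the third party no longer receives its own cookies or sees the user's IP address directly.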
Hypothesis Testing Interpretations and Renyi Differential Privacy
Differential privacy is a de facto standard in data privacy, with
applications in the public and private sectors. A way to explain differential
privacy, which is particularly appealing to statisticians and social
scientists, is by means of its statistical hypothesis testing interpretation.
Informally,
one cannot effectively test whether a specific individual has contributed her
data by observing the output of a private mechanism---any test cannot have both
high significance and high power.
In this paper, we identify some conditions under which a privacy definition
given in terms of a statistical divergence satisfies a similar interpretation.
These conditions are useful to analyze the distinguishability power of
divergences and we use them to study the hypothesis testing interpretation of
some relaxations of differential privacy based on Renyi divergence. This
analysis also results in an improved conversion rule between these definitions
and differential privacy.
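For concreteness, the baseline conversion the paper improves on can be computed directly. The standard rule (due to Mironov's Renyi DP work) states that an (alpha, eps)-RDP guarantee implies (eps + log(1/delta)/(alpha - 1), delta)-DP, and for the Gaussian mechanism with sensitivity 1 and noise scale sigma the RDP curve is alpha / (2 sigma^2). The sketch below applies that standard rule, not the improved conversion derived in this paper.

```python
# Baseline RDP-to-DP conversion for the Gaussian mechanism:
# (alpha, eps)-RDP implies (eps + log(1/delta)/(alpha - 1), delta)-DP.
# We minimize the resulting epsilon over a grid of alpha values.

import math

def gaussian_rdp(alpha, sigma):
    # RDP curve of the Gaussian mechanism with sensitivity 1.
    return alpha / (2 * sigma ** 2)

def rdp_to_dp(sigma, delta):
    # Grid over alpha in (1, 100.9]; finer grids change the answer negligibly.
    alphas = [1 + k / 10 for k in range(1, 1000)]
    return min(gaussian_rdp(a, sigma) + math.log(1 / delta) / (a - 1)
               for a in alphas)

print(round(rdp_to_dp(sigma=4.0, delta=1e-5), 3))
```

The improved conversion rule mentioned in the abstract yields a smaller epsilon for the same delta than this baseline computation.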
Privacy-Preserving Collaborative Learning through Feature Extraction
We propose a framework in which multiple entities collaborate to build a
machine learning model while preserving the privacy of their data. The
approach uses feature embeddings from shared or per-entity feature extractors
that transform the data into a feature space for cooperation. We
propose two specific methods and compare them with a baseline method. In Shared
Feature Extractor (SFE) Learning, the entities use a shared feature extractor
to compute feature embeddings of samples. In Locally Trained Feature Extractor
(LTFE) Learning, each entity uses a separate feature extractor and models are
trained using concatenated features from all entities. As a baseline, in
Cooperatively Trained Feature Extractor (CTFE) Learning, the entities train
models by sharing raw data. Secure multi-party algorithms are utilized to train
models without revealing data or features in plain text. We investigate the
trade-offs among SFE, LTFE, and CTFE in regard to performance, privacy leakage
(using an off-the-shelf membership inference attack), and computational cost.
LTFE provides the most privacy, followed by SFE, and then CTFE. Computational
cost is lowest for SFE and the relative speed of CTFE and LTFE depends on
network architecture. CTFE and LTFE provide the best accuracy. We use MNIST, a
synthetic dataset, and a credit card fraud detection dataset for evaluation.
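The information each scheme exposes can be illustrated with a toy example. This is not the paper's implementation: the "feature extractors" below are fixed linear maps chosen for illustration, whereas in the paper they are learned models trained under secure multi-party computation.

```python
# Toy contrast of what the entities pool under the three schemes:
# CTFE pools raw data, SFE pools embeddings from one shared extractor,
# LTFE pools embeddings from separately chosen per-entity extractors.
# Extractors here are fixed linear maps, purely for illustration.

def extract(weights, x):
    """Linear feature extractor: one output per weight row."""
    return [sum(w * v for w, v in zip(row, x)) for row in weights]

# Two entities hold different raw feature columns of the same sample.
raw_a, raw_b = [1.0, 2.0], [3.0, 4.0]
shared_w = [[0.5, -0.5]]       # SFE: one extractor used by every entity
local_w_a = [[1.0, 0.0]]       # LTFE: each entity picks its own extractor
local_w_b = [[0.0, 1.0]]

ctfe = raw_a + raw_b                                           # raw data pooled
sfe = extract(shared_w, raw_a) + extract(shared_w, raw_b)      # shared embeddings
ltfe = extract(local_w_a, raw_a) + extract(local_w_b, raw_b)   # local embeddings

print("CTFE:", ctfe)   # most information revealed
print("SFE: ", sfe)
print("LTFE:", ltfe)
```

The downstream model is then trained on the pooled vector in each case; the privacy ordering reported in the abstract (LTFE, then SFE, then CTFE) reflects how much of the raw data survives in what is pooled.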
- …