Making Code Voting Secure against Insider Threats using Unconditionally Secure MIX Schemes and Human PSMT Protocols
Code voting was introduced by Chaum as a solution for using a possibly
infected-by-malware device to cast a vote in an electronic voting application.
Chaum's work on code voting assumed that voting codes are physically delivered to
voters through the mail system, implicitly requiring voters to trust the mail
system. This assumption does not always hold: a mail system that conspires with
the recipient of the cast ballots breaks voter privacy.
It is clear to the public that, when it comes to privacy, computers and
"secure" communication over the Internet cannot fully be trusted. This
emphasizes the importance of (1) unconditionally secure network communication
and (2) reduced reliance on untrusted computers.
In this paper we explore how to remove the mail system trust assumption in
code voting. We use PSMT protocols (SCN 2012) in which, with the help of visual
aids, humans can carry out addition correctly with 99\% accuracy. We introduce
an unconditionally secure MIX based on the combinatorics of set systems.
Given that the end users of our proposed voting scheme construction are humans,
we \emph{cannot use} classical Secure Multi-Party Computation protocols.
Our solutions apply to both single- and multi-seat elections, achieving:
\begin{enumerate}[i)]
\item an anonymous and perfectly secure communication network, secure against
a $t$-bounded passive adversary, used to deliver voting codes;
\item an end step of the protocol that can be handled by a human to evade the
threat of malware.
\end{enumerate}
We do not focus on active adversaries.
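To make the human-computable step concrete, here is a minimal Python sketch of digit-wise additive secret sharing, one plausible instantiation of PSMT-style code delivery in which reconstruction is plain addition mod 10; all names and parameters are illustrative assumptions, not the paper's construction:

import secrets

def share_code(code: str, n: int) -> list[str]:
    # Split a numeric voting code into n digit-wise additive shares mod 10.
    # Any n-1 shares are uniformly random and reveal nothing about the code
    # (perfect secrecy); summing all n shares digit by digit recovers it.
    rand = [[secrets.randbelow(10) for _ in code] for _ in range(n - 1)]
    last = [(int(d) - sum(col)) % 10 for d, col in zip(code, zip(*rand))]
    return ["".join(map(str, s)) for s in rand + [last]]

def reconstruct(shares: list[str]) -> str:
    # Digit-wise addition mod 10: simple enough for a human with a visual aid.
    return "".join(str(sum(int(s[i]) for s in shares) % 10)
                   for i in range(len(shares[0])))

shares = share_code("4711", 3)    # deliver each share over a disjoint channel
assert reconstruct(shares) == "4711"

If each share travels over a different channel, a passive adversary observing all but one channel learns nothing about the code, which is the kind of unconditional guarantee the abstract refers to.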
New Program Abstractions for Privacy
Static program analysis, once seen primarily as a tool for optimising programs, is now increasingly important as a means to provide quality guarantees about programs. One measure of quality is the extent to which programs respect the privacy of user data. Differential privacy is a rigorous quantified definition of privacy which guarantees a bound on the loss of privacy due to the release of statistical queries. Among the benefits enjoyed by the definition of differential privacy are compositionality properties that allow differentially private analyses to be built from pieces and combined in various ways. This has led to the development of frameworks for the construction of differentially private program analyses which are private-by-construction. Past frameworks assume that the sensitive data is collected centrally and processed by a trusted curator. However, the main examples of differential privacy applied in practice - for example in Google Chrome’s collection of browsing statistics, or Apple’s training of predictive messaging in iOS 10 - use a purely local mechanism applied at the data source, thus avoiding the collection of sensitive data altogether. While this is a benefit of the local approach, with systems like Apple’s, users are required to completely trust that the analysis running on their system has the claimed privacy properties.
In this position paper we outline some key challenges in developing static analyses for analysing differential privacy, and propose novel abstractions, not previously used in static analyses, for describing the behaviour of probabilistic programs.
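The canonical local mechanism of the kind used for Chrome's statistics collection is randomized response; the following Python sketch (a simplified one-bit illustration, not any vendor's actual implementation) shows the mechanism and the debiased aggregate estimate:

import math, random

def randomized_response(bit: int, epsilon: float) -> int:
    # Report a sensitive bit under epsilon-local differential privacy:
    # tell the truth with probability e^eps / (e^eps + 1), otherwise lie.
    # The two report distributions differ by a factor of at most e^eps.
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1)
    return bit if random.random() < p_true else 1 - bit

def estimate_mean(reports: list[int], epsilon: float) -> float:
    # Unbiased estimate of the true population mean from the noisy reports.
    p = math.exp(epsilon) / (math.exp(epsilon) + 1)
    observed = sum(reports) / len(reports)
    return (observed - (1 - p)) / (2 * p - 1)

# 100,000 users, 30% of whom hold the sensitive attribute:
reports = [randomized_response(int(random.random() < 0.3), 1.0)
           for _ in range(100_000)]
print(estimate_mean(reports, 1.0))   # close to 0.3, with no trusted curator

The analyst only ever sees the noisy bits, which is exactly the "purely local mechanism applied at the data source" the abstract describes.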
HardIDX: Practical and Secure Index with SGX
Software-based approaches for search over encrypted data are still either
challenged by lack of proper, low-leakage encryption or slow performance.
Existing hardware-based approaches do not scale well due to hardware
limitations and software designs that are not specifically tailored to the
hardware architecture, and are rarely well analyzed for their security (e.g.,
the impact of side channels). Additionally, existing hardware-based solutions
often have a large code footprint in the trusted environment susceptible to
software compromises. In this paper we present HardIDX: a hardware-based
approach, leveraging Intel's SGX, for search over encrypted data. It implements
only the security critical core, i.e., the search functionality, in the trusted
environment and resorts to untrusted software for the remainder. HardIDX is
deployable as a highly performant encrypted database index: it is logarithmic
in the size of the index and searches are performed within a few milliseconds
rather than seconds. We formally model and prove the security of our scheme
showing that its leakage is equivalent to the best known searchable encryption
schemes. Our implementation has a very small code and memory footprint, yet
still scales to virtually unlimited search index sizes, i.e., size is limited
only by the general, non-secure hardware resources.
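The architectural idea - a small trusted core that only performs comparisons and traversal decisions, with everything else kept in untrusted storage - can be sketched in Python as follows. This is a structural illustration under our own assumptions (a toy keystream cipher stands in for SGX sealing and authenticated encryption; none of this is the HardIDX code):

import hashlib, json

KEY = b"sealed-inside-the-enclave"   # with SGX, this key never leaves the enclave

def _xor_cipher(blob: bytes, node_id: int) -> bytes:
    # Toy keystream cipher standing in for AES-GCM; NOT secure, illustration only.
    stream = b""
    counter = 0
    while len(stream) < len(blob):
        stream += hashlib.sha256(
            KEY + node_id.to_bytes(8, "big") + counter.to_bytes(4, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(blob, stream))

def seal(node: dict, node_id: int) -> bytes:
    return _xor_cipher(json.dumps(node).encode(), node_id)

def enclave_search(fetch, root_id: int, key: int):
    # Trusted core: walks a B+-tree, asking the untrusted host (fetch) for
    # encrypted nodes by id. The host sees access patterns, never keys or values.
    node_id = root_id
    while True:
        node = json.loads(_xor_cipher(fetch(node_id), node_id))
        if node["leaf"]:
            for k, v in zip(node["keys"], node["values"]):
                if k == key:
                    return v
            return None
        # descend into the child selected by the separator keys
        node_id = node["children"][sum(1 for k in node["keys"] if key >= k)]

# Untrusted side: an opaque blob store addressed by node id.
store = {
    0: seal({"leaf": False, "keys": [10], "children": [1, 2]}, 0),
    1: seal({"leaf": True, "keys": [3, 7], "values": ["a", "b"]}, 1),
    2: seal({"leaf": True, "keys": [10, 42], "values": ["c", "d"]}, 2),
}
assert enclave_search(store.__getitem__, 0, 42) == "d"

The traversal touches one node per tree level, matching the abstract's claim of search time logarithmic in the index size.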
REALISTIC CORRECT SYSTEMS IMPLEMENTATION
The present article and the forthcoming second part on Trusted Compiler Implementation
address the correct construction and functioning of large computer-based systems. In view
of so many annoying and dangerous system misbehaviors we ask: can informaticians
rightly be held accountable for the incorrectness of systems, and will they be able to
justify that systems work correctly as intended? We understand the word justification in
the following sense: the design of computer-based systems, the formulation of mathematical
models of information flows, and the construction of controlling software are to be such
that the expected system effects, the absence of internal failures, and the robustness
towards misuse and malicious external attacks are foreseeable as logical consequences
of the models.
For more than 40 years, theoretical informatics, software engineering and compiler
construction have made important contributions to correct specification and to correct
high-level implementation of compilers. But the third step, the translation - bootstrapping - of
high-level compiler programs to host machine code by existing host compilers, is just as
important. So far there are no realistic recipes to close this correctness gap, although it
has been known for some years that trust in executable code can be dangerously compromised
by Trojan Horses in compiler executables, even if they pass the strongest tests.
In this first article we give a comprehensive motivation and develop a mathematical
theory in order to conscientiously prove the correctness of an initial fully trusted
compiler executable. The task is modularized in three steps. The third step,
machine-level compiler implementation verification, is the topic of the forthcoming second
part on Trusted Compiler Implementation. It closes the implementation gap, not only for
compilers but also for correct software-based systems in general. Thus the two articles
together give a rather confident answer to the question raised in the title.
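Why testing cannot close the bootstrap gap can be made concrete with a toy model in Python (our own illustration, not the article's formal theory): binaries are modelled as strings, and a Thompson-style Trojan reproduces itself whenever it compiles the compiler's own source, so the usual self-compilation fixed-point test passes either way:

# "Binaries" are strings; running a binary on source text yields another binary.
CLEAN = "clean-compiler-binary"
TROJAN = "trojaned-compiler-binary"
COMPILER_SRC = "verified-compiler-source"

def run_binary(binary: str, source: str) -> str:
    # Thompson's trick: the Trojan recognises the compiler's own source and
    # propagates itself into the output; on all other inputs it behaves cleanly.
    if binary == TROJAN and source == COMPILER_SRC:
        return TROJAN
    return CLEAN if source == COMPILER_SRC else "object-code(" + source + ")"

def bootstrap(host_binary: str) -> str:
    stage1 = run_binary(host_binary, COMPILER_SRC)   # built by the untrusted host
    stage2 = run_binary(stage1, COMPILER_SRC)        # compiler compiled by itself
    assert run_binary(stage2, COMPILER_SRC) == stage2  # fixed point reached
    return stage2

# The fixed-point test passes for both hosts, yet only one result is trustworthy:
assert bootstrap(CLEAN) == CLEAN
assert bootstrap(TROJAN) == TROJAN   # no amount of testing exposes the Trojan

This is why the article insists on a proof of correctness for an initial fully trusted compiler executable rather than on testing the bootstrapped result.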
Integrity Constraints in Trust Management
We introduce the use, monitoring, and enforcement of integrity constraints in
trust management-style authorization systems. We consider what portions of the
policy state must be monitored to detect violations of integrity constraints.
Then we address the fact that not all participants in a trust management system
can be trusted to assist in such monitoring, and show how many integrity
constraints can be monitored in a conservative manner so that trusted
participants detect and report if the system enters a policy state from which
evolution in unmonitored portions of the policy could lead to a constraint
violation.
Comment: An extended abstract appears in the proceedings of the 10th ACM Symposium on
Access Control Models and Technologies (SACMAT), 2005.
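As a hypothetical illustration of conservative monitoring (our own toy Python example with invented names, not the paper's formalism), consider the constraint "no principal may hold both role A and role B". A trusted monitor that sees only part of the policy state must report any principal whose unmonitored credentials could complete a violation:

def could_violate(monitored_roles: dict, unmonitored_grantable: set) -> set:
    # monitored_roles: principal -> roles granted by credentials the trusted
    # participants can see. unmonitored_grantable: roles that some unmonitored
    # participant is authorized to grant. Reports conservatively: every
    # principal for whom unmonitored evolution could reach a violation.
    suspects = set()
    for principal, roles in monitored_roles.items():
        reachable = roles | unmonitored_grantable   # roles the principal could acquire
        if {"A", "B"} <= reachable:
            suspects.add(principal)
    return suspects

# alice already holds A via a monitored credential; an unmonitored authority
# could grant B, so the monitors must report her even before any violation:
print(could_violate({"alice": {"A"}, "bob": {"C"}}, {"B"}))   # {'alice'}

The over-approximation is the price of not trusting all participants: a report means a violation is reachable through unmonitored policy evolution, not that one has necessarily occurred.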
Exploring Privacy Preservation in Outsourced K-Nearest Neighbors with Multiple Data Owners
The k-nearest neighbors (k-NN) algorithm is a popular and effective
classification algorithm. Due to its large storage and computational
requirements, it is suitable for cloud outsourcing. However, k-NN is often run
on sensitive data such as medical records, user images, or personal
information. It is important to protect the privacy of data in an outsourced
k-NN system.
Prior works have all assumed the data owners (who submit data to the
outsourced k-NN system) are a single trusted party. However, we observe that in
many practical scenarios, there may be multiple mutually distrusting data
owners. In this work, we present the first framing and exploration of privacy
preservation in an outsourced k-NN system with multiple data owners. We
consider the various threat models introduced by this modification. We discover
that under a particularly practical threat model that covers numerous
scenarios, there exists a set of adaptive attacks that breach the data privacy
of any exact k-NN system. The vulnerability is a result of the mathematical
properties of k-NN and its output. Thus, we propose a privacy-preserving
alternative system supporting kernel density estimation using a Gaussian
kernel, a classification algorithm from the same family as k-NN. In many
applications, this similar algorithm serves as a good substitute for k-NN. We
additionally investigate solutions for other threat models, often through
extensions of prior single-data-owner systems.
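The substitute classifier can be sketched in a few lines of Python (a minimal illustration with hypothetical names, not the authors' privacy-preserving protocol): instead of returning exact nearest neighbours, each class is scored by the summed Gaussian kernel density of its training points at the query:

import math

def gaussian_kernel(x, xi, bandwidth):
    # Unnormalised Gaussian weight of training point xi at query x.
    d2 = sum((a - b) ** 2 for a, b in zip(x, xi))
    return math.exp(-d2 / (2 * bandwidth ** 2))

def kde_classify(x, labeled_points, bandwidth=1.0):
    # labeled_points: list of (point, label). Returns the label whose points
    # contribute the highest total kernel density at x.
    scores = {}
    for xi, label in labeled_points:
        scores[label] = scores.get(label, 0.0) + gaussian_kernel(x, xi, bandwidth)
    return max(scores, key=scores.get)

data = [((0.0, 0.0), "benign"), ((0.2, 0.1), "benign"), ((3.0, 3.1), "malignant")]
print(kde_classify((0.1, 0.0), data))   # benign

Because the output is a smooth aggregate over many points rather than the identity of exact neighbours, it resists the kind of adaptive attacks that the exact output of k-NN enables.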