1,214 research outputs found
Panini -- Anonymous Anycast and an Instantiation
Anycast messaging (i.e., sending a message to an unspecified receiver) has
long been neglected by the anonymous communication community. An anonymous
anycast prevents senders from learning who the receiver of their message is,
allowing for greater privacy in areas such as political activism and
whistleblowing. While some protocol ideas have been proposed, a formal
treatment of the problem is absent. Formal definitions of what constitutes
anonymous anycast and privacy in this context are, however, a prerequisite
for constructing protocols with provable guarantees. In this work, we define the
anycast functionality and use a game-based approach to formalize its privacy
and security goals. We further propose Panini, the first anonymous anycast
protocol that only requires readily available infrastructure. We show that
Panini allows the actual receiver of the anycast message to remain anonymous,
even in the presence of an honest-but-curious sender. In an empirical
evaluation, we find that Panini adds only minimal overhead over regular
unicast: sending a message anonymously to one of eight possible receivers
results in an end-to-end latency of 0.76 s.
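The anycast functionality the paper formalizes can be illustrated with a toy ideal-functionality sketch (all names below are illustrative assumptions, not from the paper): the sender submits a message for a set of receivers, the functionality delivers it to one of them, and the sender's view is independent of which receiver was chosen.

```python
import secrets

class Receiver:
    """Toy receiver with a local inbox."""
    def __init__(self, name):
        self.name = name
        self.inbox = []

def anycast(message, receivers):
    # Ideal anycast: deliver `message` to one receiver chosen
    # uniformly at random; the sender only observes success,
    # never the identity of the chosen receiver.
    chosen = secrets.choice(receivers)
    chosen.inbox.append(message)
    return "delivered"

# Eight possible receivers, as in the paper's evaluation setting.
group = [Receiver(f"r{i}") for i in range(8)]
assert anycast(b"tip", group) == "delivered"
assert sum(len(r.inbox) for r in group) == 1  # exactly one delivery
```

A real protocol such as Panini must realize this behavior over actual infrastructure while also defending against an honest-but-curious sender; the sketch only captures the intended input/output behavior.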
On Privacy Notions in Anonymous Communication
Many anonymous communication networks (ACNs) with different privacy goals
have been developed. However, there are no accepted formal definitions of
privacy, and ACNs often define their goals and adversary models ad hoc. Yet
for the understanding and comparison of different flavors of privacy, a common
foundation is needed. In this paper, we introduce an analysis framework for
ACNs that captures the notions and assumptions known from different analysis
frameworks. Therefore, we formalize privacy goals as notions and identify their
building blocks. For any pair of notions we prove whether one is strictly
stronger, and, if so, which. Hence, we are able to present a complete
hierarchy. Further, we show how to add practical assumptions (e.g., regarding
the protocol model or user corruption) as options to our notions. This way, we
capture the notions and assumptions of, to the best of our knowledge, all
existing analytical frameworks for ACNs and are able to revise inconsistencies
between them. Thus, our new framework builds a common ground and allows for
sharper analysis, since new combinations of assumptions are possible and the
relations between the notions are known.
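The "strictly stronger" relation between notions can be pictured with a toy model (the notion names loosely echo the literature, but the protected-property sets below are simplified assumptions, not the paper's formal game-based definitions): model each notion as the set of observables an adversary must not learn; one notion is at least as strong as another if it protects a superset of its properties.

```python
# Toy model: each notion protects a set of observable properties.
# These sets are illustrative only.
NOTIONS = {
    "sender_unobservability": {"sender", "activity"},
    "sender_anonymity": {"sender"},
    "relationship_anonymity": {"link"},
}

def at_least_as_strong(n1, n2):
    # n1 subsumes n2 if everything n2 hides, n1 hides too.
    return NOTIONS[n1] >= NOTIONS[n2]

def strictly_stronger(n1, n2):
    # Strict superset: n1 hides strictly more than n2.
    return NOTIONS[n1] > NOTIONS[n2]

assert strictly_stronger("sender_unobservability", "sender_anonymity")
assert not at_least_as_strong("sender_anonymity", "relationship_anonymity")
```

In this toy ordering, some pairs are incomparable (neither set contains the other), which is exactly why a proven hierarchy over all pairs, as the paper provides, is informative.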
Onion Routing with Replies
Onion routing (OR) protocols are a crucial tool for providing anonymous internet communication. An OR protocol enables a user to anonymously send requests to a server. A fundamental problem of OR protocols is how to deal with replies: ideally, we would want the server to be able to send a reply back to the anonymous user without knowing or disclosing the user's identity.
Existing OR protocols do allow for such replies, but do not provably protect the payload (i.e., message) of replies against manipulation. Kuhn et al. (IEEE S&P 2020) show that such manipulations can in fact be leveraged to break anonymity of the whole protocol.
In this work, we close this gap and provide the first framework and protocols for OR with protected replies. We define security in the sense of an ideal functionality in the universal composability model, and provide corresponding (less complex) game-based security notions for the individual properties.
We also provide two secure instantiations of our framework: one based on updatable encryption, and one based on succinct non-interactive arguments (SNARGs) to authenticate payloads both in requests and replies. In both cases, our central technical handle is an implicit authentication of the transmitted payload data, as opposed to an explicit, but insufficient authentication (with MACs) in previous solutions. Our results exhibit a new and surprising application of updatable encryption outside of long-term data storage
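The layered structure that OR protocols build on can be sketched as below. The "encryption" here is a toy XOR keystream for illustration only, and the reply protection via updatable encryption or SNARGs described above is deliberately out of scope; the sketch shows only how each relay peels one layer to learn the next hop.

```python
import base64
import hashlib
import json

def toy_enc(key, data):
    # NOT real encryption: XOR with a hash-derived keystream,
    # used only to make the layering visible.
    stream = hashlib.sha256(key).digest()
    ks = (stream * (len(data) // len(stream) + 1))[:len(data)]
    return bytes(a ^ b for a, b in zip(data, ks))

toy_dec = toy_enc  # XOR is its own inverse

def wrap(payload, route):
    # Build the onion innermost-first: each relay's layer names
    # the next hop and carries the still-encrypted inner onion.
    onion = payload
    for hop in reversed(route):
        inner = json.dumps({
            "next": hop["next"],
            "body": base64.b64encode(onion).decode(),
        }).encode()
        onion = toy_enc(hop["key"], inner)
    return onion

def peel(key, onion):
    # One relay's step: decrypt its layer, learn the next hop,
    # and forward the remaining (still opaque) inner onion.
    layer = json.loads(toy_dec(key, onion))
    return layer["next"], base64.b64decode(layer["body"])

route = [{"next": "B", "key": b"kA"},
         {"next": "C", "key": b"kB"},
         {"next": "exit", "key": b"kC"}]
onion = wrap(b"request", route)
for key in (b"kA", b"kB", b"kC"):
    nxt, onion = peel(key, onion)
assert onion == b"request"
```

The manipulation attacks the paper addresses arise precisely because a toy layering like this does not authenticate the payload: a relay could flip bits in `body` undetected, which is what the framework's implicit authentication rules out.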
On privacy notions in anonymous communication
Many anonymous communication networks (ACNs) with different privacy goals have been developed. Still, there are no accepted formal definitions of privacy goals, and ACNs often define their goals ad hoc. However, the formal definition of privacy goals benefits the understanding and comparison of different flavors of privacy and, as a result, the improvement of ACNs. In this paper, we work towards defining and comparing privacy goals by formalizing them as privacy notions and identifying their building blocks. For any pair of notions we prove whether one is strictly stronger, and, if so, which. Hence, we are able to present a complete hierarchy. Using this rigorous comparison between notions, we revise inconsistencies between the existing works and improve the understanding of privacy goals.