Computer-aided verification in mechanism design
In mechanism design, the gold standard solution concepts are dominant
strategy incentive compatibility and Bayesian incentive compatibility. These
solution concepts relieve the (possibly unsophisticated) bidders from the need
to engage in complicated strategizing. While incentive properties are simple to
state, their proofs are specific to the mechanism and can be quite complex.
This raises two concerns. From a practical perspective, checking a complex
proof can be a tedious process, often requiring experts knowledgeable in
mechanism design. Furthermore, from a modeling perspective, if unsophisticated
agents are unconvinced of incentive properties, they may strategize in
unpredictable ways.
To address both concerns, we explore techniques from computer-aided
verification to construct formal proofs of incentive properties. Because formal
proofs can be automatically checked, agents do not need to manually check the
properties, or even understand the proof. To demonstrate, we present the
verification of a sophisticated mechanism: the generic reduction from Bayesian
incentive compatible mechanism design to algorithm design given by Hartline,
Kleinberg, and Malekian. This mechanism presents new challenges for formal
verification, including essential use of randomness from both the execution of
the mechanism and from the prior type distributions. As an immediate
consequence, our work also formalizes Bayesian incentive compatibility for the
entire family of mechanisms derived via this reduction. Finally, as an
intermediate step in our formalization, we provide the first formal
verification of incentive compatibility for the celebrated
Vickrey-Clarke-Groves mechanism.
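To make the verified property concrete: the single-item Vickrey (second-price) auction is the simplest instance of the Vickrey-Clarke-Groves mechanism. The toy sketch below, in which the bidder names and values are invented for illustration, checks that truthful bidding weakly dominates every deviation:

```python
def second_price_auction(bids):
    """Single-item VCG (Vickrey) auction: the highest bidder wins
    and pays the second-highest bid."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else 0
    return winner, price

def utility(value, bids, bidder):
    """Quasi-linear utility: value minus payment if the bidder wins, else 0."""
    winner, price = second_price_auction(bids)
    return value - price if winner == bidder else 0

# Truthfulness check: no deviating bid ever beats bidding one's true value.
others = {"b": 7, "c": 5}
value_a = 10
truthful = utility(value_a, {**others, "a": value_a}, "a")
for deviation in range(0, 21):
    assert utility(value_a, {**others, "a": deviation}, "a") <= truthful
print(truthful)  # 3
```

This brute-force check over one bid grid is of course exactly what a formal proof replaces: the paper's formalization establishes the inequality for all valuations and all deviations at once.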
On the Security of Cryptographic Protocols Using the Little Theorem of Witness Functions
In this paper, we show how effective the little theorem of witness functions
is at detecting security flaws in a certain category of cryptographic
protocols. We carry out a formal analysis of the Needham-Schroeder
symmetric-key protocol in the theory of witness functions, and show how it
reveals a security vulnerability at a given step of this protocol, where the
security value of a particular sensitive ticket in a sent message unexpectedly
drops compared with its value when received. This vulnerability may be
exploited by an intruder to mount a replay attack, as described by Denning and
Sacco.
Comment: Accepted at the 2019 IEEE Canadian Conference on Electrical &
Computer Engineering (CCECE) on March 1, 201
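The flaw in question can be caricatured in a few lines: in the symmetric-key Needham-Schroeder protocol, the ticket {K_ab, A}_Kbs forwarded to B carries no freshness information, so B cannot distinguish a current ticket from a replayed one. The sketch below is an illustrative toy model only (the tuple-based "encryption" and all names are invented), not the witness-function analysis itself:

```python
# Toy model of the Needham-Schroeder symmetric-key flaw (Denning-Sacco
# replay): message {K_ab, A}_Kbs has no timestamp or nonce, so B will
# accept a recorded old ticket carrying a compromised session key.

def encrypt(key, payload):
    return ("enc", key, payload)          # stand-in for symmetric encryption

def decrypt(key, blob):
    tag, k, payload = blob
    assert tag == "enc" and k == key      # decryption needs the right key
    return payload

K_bs = "long-term-key-B"                  # B's key shared with the server

# Session 1: the server issues a ticket containing session key K1.
old_ticket = encrypt(K_bs, {"session_key": "K1", "peer": "A"})

# K1 is later compromised; the intruder replays the recorded ticket.
def b_accepts(ticket):
    payload = decrypt(K_bs, ticket)       # B checks only that it decrypts...
    return payload["session_key"]         # ...there is no freshness to check

replayed_key = b_accepts(old_ticket)
print(replayed_key)  # K1 -- B adopts the stale, compromised key
```

Denning and Sacco's fix adds a timestamp inside the ticket, which is precisely the freshness information the toy `b_accepts` above has no way to demand.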
Web Service Trust: Towards A Dynamic Assessment Framework
Trust in software services is a key prerequisite for the success and wide adoption of service-oriented computing (SOC) in an open Internet world. However, trust is poorly assessed by existing methods and technologies, especially in dynamically composed and deployed SOC systems. In this paper, we discuss current methods for assessing trust in service-oriented computing and identify gaps in current platforms, in particular with regard to runtime trust assessment. To address these gaps, we propose a model of runtime trust assessment of software services and introduce a framework for realizing the model. A key characteristic of our approach is the support it offers for customizable assessment of trust based on evidence collected during the operation of software services, and its ability to combine this evidence with subjective assessments coming from service clients.
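In its simplest instantiation, a runtime trust score of the kind such a framework envisages could blend monitored operational evidence with subjective client ratings. The metrics, weights, and function below are illustrative assumptions, not the paper's model:

```python
def trust_score(evidence, client_ratings, w_evidence=0.7):
    """Blend objective runtime evidence (each metric normalized to [0, 1])
    with the mean subjective client rating (also in [0, 1]).
    The 70/30 weighting is an arbitrary illustrative choice."""
    objective = sum(evidence.values()) / len(evidence)
    subjective = sum(client_ratings) / len(client_ratings)
    return w_evidence * objective + (1 - w_evidence) * subjective

# Hypothetical monitored metrics for one service, plus three client ratings.
evidence = {"availability": 0.99, "sla_compliance": 0.95, "response_time": 0.90}
ratings = [0.8, 0.9, 1.0]
score = trust_score(evidence, ratings)
print(round(score, 3))  # 0.933
```

The interesting design question, which a real framework must answer and this sketch sidesteps, is how the evidence is collected and normalized at runtime and how the weights are customized per client.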
Verification of Shared-Reading Synchronisers
Synchronisation classes are an important building block for shared memory
concurrent programs. Thus, to reason about such programs, it is important to be
able to verify the implementation of these synchronisation classes, considering
atomic operations as the synchronisation primitives on which the
implementations are built. For synchronisation classes controlling exclusive
access to a shared resource, such as locks, a technique has been proposed to
reason about their behaviour. This paper proposes a technique to verify
implementations of both exclusive access and shared-reading synchronisers. We
use permission-based Separation Logic to describe the behaviour of the main
atomic operations, and the basis for our technique is formed by a specification
for class AtomicInteger, which is commonly used to implement synchronisation
classes in java.util.concurrent. To demonstrate the applicability of our
approach, we mechanically verify the implementation of various synchronisation
classes such as Semaphore, CountDownLatch and Lock.
Comment: In Proceedings MeTRiD 2018, arXiv:1806.0933
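To illustrate the pattern the paper verifies, a counting semaphore can be built on nothing but atomic get and compare-and-set operations. The Python sketch below emulates AtomicInteger with a lock; the class names and retry loops are illustrative, not the paper's verified Java code or its Separation Logic specification:

```python
import threading

class AtomicInteger:
    """Toy stand-in for java.util.concurrent.atomic.AtomicInteger;
    a lock emulates the hardware compare-and-set primitive."""
    def __init__(self, value=0):
        self._value = value
        self._lock = threading.Lock()

    def get(self):
        with self._lock:
            return self._value

    def compare_and_set(self, expect, update):
        with self._lock:
            if self._value == expect:
                self._value = update
                return True
            return False

class Semaphore:
    """Shared-reading synchroniser: up to `permits` concurrent holders,
    built only on atomic get/compare_and_set (the usual CAS retry loop)."""
    def __init__(self, permits):
        self._permits = AtomicInteger(permits)

    def try_acquire(self):
        while True:
            available = self._permits.get()
            if available == 0:
                return False                       # no permit left
            if self._permits.compare_and_set(available, available - 1):
                return True                        # CAS won the race

    def release(self):
        while True:
            available = self._permits.get()
            if self._permits.compare_and_set(available, available + 1):
                return

sem = Semaphore(2)
print(sem.try_acquire(), sem.try_acquire(), sem.try_acquire())  # True True False
sem.release()
print(sem.try_acquire())  # True
```

The verification challenge the paper addresses is exactly the shared-reading aspect visible here: several threads may hold permits simultaneously, so the specification must split the resource permission rather than transfer it exclusively.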
Computer-aided proofs for multiparty computation with active security
Secure multi-party computation (MPC) is a general cryptographic technique
that allows distrusting parties to compute a function of their individual
inputs, while only revealing the output of the function. It has found
applications in areas such as auctioning, email filtering, and secure
teleconferencing. Given its importance, it is crucial that the protocols are
specified and implemented correctly. In the programming language community it
has become good practice to use computer proof assistants to verify correctness
proofs. In the field of cryptography, EasyCrypt is the state of the art proof
assistant. It provides an embedded language for probabilistic programming,
together with a specialized logic, embedded into an ambient general purpose
higher-order logic. It allows us to conveniently express cryptographic
properties. EasyCrypt has been used successfully on many applications,
including public-key encryption, signatures, garbled circuits and differential
privacy. Here we show for the first time that it can also be used to prove
security of MPC against a malicious adversary. We formalize additive and
replicated secret sharing schemes and apply them to Maurer's MPC protocol for
secure addition and multiplication. Our method extends to general polynomial
functions. We follow the insight from EasyCrypt that security proofs can often
be reduced to proofs about program equivalence, a topic that is well
understood in the verification of programming languages. In particular, we show
that in the passive case the non-interference-based definition is equivalent to
a standard game-based security definition. For the active case we provide a new
NI definition, which we call input independence.
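Additive secret sharing, the first of the two schemes formalized here, is short enough to sketch concretely. The Python fragment below (the modulus and party count are arbitrary choices, not taken from the paper) shows why secure addition needs no interaction: parties just add their shares locally.

```python
import random

P = 2_147_483_647  # a prime modulus; all arithmetic is in Z_P

def share(secret, n=3):
    """Additive secret sharing: n-1 uniformly random shares,
    with the last share chosen so that all shares sum to the secret."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares):
    """Recover the secret by summing all shares modulo P."""
    return sum(shares) % P

# Secure addition is local: each party adds its shares of x and y.
x, y = 1234, 5678
sx, sy = share(x), share(y)
sz = [(a + b) % P for a, b in zip(sx, sy)]   # no communication needed

assert reconstruct(sx) == x
print(reconstruct(sz))  # 6912
```

Any proper subset of n-1 shares is uniformly distributed and so reveals nothing about the secret; it is this kind of distribution-equality statement that EasyCrypt's program-equivalence reasoning is well suited to discharge. Secure multiplication, unlike addition, does require interaction, which is where replicated sharing and Maurer's protocol come in.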
Unified Model for Data Security -- A Position Paper
One of the most crucial components of modern Information Technology (IT) systems is data. It can be argued that the majority of IT systems are built to collect, store, modify, communicate and use data, enabling different data stakeholders to access and use it to achieve different business objectives. The confidentiality, integrity, availability, auditability, privacy, and quality of the data are of paramount concern for end-users ranging from ordinary consumers to multi-national companies. Over the course of time, different frameworks have been proposed and deployed to provide data security. Many of these previous paradigms were specific to particular domains, such as military or media content providers, while in other cases they were generic to different verticals within an industry. There is a much-needed push for a holistic approach to data security instead of the current bespoke approaches. The age of the Internet has witnessed an increased ease of sharing data with or without authorisation. These scenarios have created new challenges for traditional data security. In this paper, we study the evolution of data security from the perspective of past proposed frameworks, and present a novel Unified Model for Data Security (UMDS). The discussed UMDS reduces the friction from several cross-domain challenges, and has the functionality to possibly provide comprehensive data security to data owners and privileged users.
Privacy Harm and Non-Compliance from a Legal Perspective
In today's data-sharing paradigm, personal data has become a valuable resource that intensifies the risk of unauthorized access and data breach. The increased use of data mining techniques to analyze big data has posed significant risks to data security and privacy. Consequently, data breaches are a significant threat to individual privacy. Privacy is a multifaceted concept covering many areas, including the right to access, erase, and rectify personal data. This paper explores the legal aspects of privacy harm and how they transform into legal action. Privacy harm is the negative impact on an individual as a result of the unauthorized release, gathering, distillation, or expropriation of personal information. Privacy Enhancing Technologies (PETs) emerged as a solution to address data privacy issues and minimize the risk of privacy harm. It is essential to implement privacy enhancement mechanisms to protect Personally Identifiable Information (PII) from unlawful use or access. The Fair Information Practice Principles (FIPPs), based on the 1973 Code of Fair Information Practice (CFIP) and on guidelines of the Organization for Economic Cooperation and Development (OECD), are a collection of widely accepted, influential US codes that agencies use when evaluating information systems, processes, programs, and activities affecting individual privacy. Regulatory compliance places a responsibility on organizations to follow best practices to ensure the protection of individual data privacy rights. This paper will focus on the FIPPs, their relevance to US state privacy laws, their influence on the OECD guidelines, and their relation to the EU General Data Protection Regulation (GDPR).
Keywords — Privacy harm, Privacy Enhancing Technologies (PETs), Fair Information Practice Principles (FIPPs)