Automated Cryptographic Analysis of the Pedersen Commitment Scheme
Aiming for strong security assurance, recently there has been an increasing
interest in formal verification of cryptographic constructions. This paper
presents a mechanised formal verification of the popular Pedersen commitment
protocol, proving its security properties of correctness, perfect hiding, and
computational binding. To formally verify the protocol, we extended the theory
of EasyCrypt, a framework which allows for reasoning in the computational
model, to support the discrete logarithm and an abstraction of commitment
protocols. Commitments are building blocks of many cryptographic constructions,
for example, verifiable secret sharing, zero-knowledge proofs, and e-voting.
Our work paves the way for the verification of those more complex
constructions.
Comment: 12 pages, conference MMM-ACNS 201
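The Pedersen commitment scheme whose verification the abstract describes can be sketched concretely. The following is a minimal toy example, with tiny, deliberately insecure parameters chosen only for illustration; it shows commitment, opening, and why the scheme is perfectly hiding (distinct message/randomness pairs can yield the same commitment):

```python
# Toy Pedersen commitment: C = g^m * h^r mod p, with m, r taken mod q.
# Parameters are illustrative only -- real deployments use large primes
# and an h whose discrete log relative to g is unknown to everyone.
p, q = 23, 11   # q divides p - 1; g generates the order-q subgroup
g, h = 4, 9     # h = g^x for some secret x (here x = 8, known only for the demo)

def commit(m, r):
    """Commit to message m with randomness r."""
    return (pow(g, m, p) * pow(h, r, p)) % p

def open_check(c, m, r):
    """Verify an opening (m, r) against commitment c (correctness)."""
    return c == commit(m, r)
```

For example, `commit(3, 5)` and `commit(5, 2)` produce the same value, so a commitment alone reveals nothing about the message (perfect hiding); binding, by contrast, holds only computationally, since finding a second opening requires the discrete log of h.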
Semantic and logical foundations of global computing: Papers from the EU-FET global computing initiative (2001–2005)
Overview of the contents of the volume "Semantic and logical foundations of global computing".
Proceedings of International Workshop "Global Computing: Programming Environments, Languages, Security and Analysis of Systems"
According to the IST/FET proactive initiative on GLOBAL COMPUTING, the goal is to obtain techniques (models, frameworks, methods, algorithms) for constructing systems that are flexible, dependable, secure, robust and efficient.
The dominant concerns are not those of representing and manipulating data efficiently but rather those of handling the co-ordination and interaction, security, reliability, robustness, failure modes, and control of risk of the entities in the system and the overall design, description and performance of the system itself.
Completely different paradigms of computer science may have to be developed to tackle these issues effectively. The research should concentrate on systems having the following characteristics:
• The systems are composed of autonomous computational entities where activity is not centrally controlled, either because global control is impossible or impractical, or because the entities are created or controlled by different owners.
• The computational entities are mobile, due to the movement of the physical platforms or by movement of the entity from one platform to another.
• The configuration varies over time. For instance, the system is open to the introduction of new computational entities and likewise their deletion. The behaviour of the entities may vary over time.
• The systems operate with incomplete information about the environment. For instance, information becomes rapidly out of date and mobility requires information about the environment to be discovered.
The ultimate goal of the research action is to provide a solid scientific foundation for the design of such systems, and to lay the groundwork for achieving effective principles for building and analysing such systems.
This workshop covers the aspects related to languages and programming environments as well as analysis of systems and resources, involving 9 projects (AGILE, DART, DEGAS, MIKADO, MRG, MYTHS, PEPITO, PROFUNDIS, SECURE) out of the 13 funded under the initiative. One year after the start of the projects, the goal of the workshop is to fix the state of the art on the topics covered by the two clusters related to programming environments and analysis of systems, as well as to devise strategies and new ideas to profitably continue the research effort towards the overall objective of the initiative.
We acknowledge the Dipartimento di Informatica and Tlc of the University of Trento, the Comune di Rovereto and the project DEGAS for partially funding the event, and the Events and Meetings Office of the University of Trento for the valuable collaboration.
Privacy, security, and trust issues in smart environments
Recent advances in networking, handheld computing and sensor technologies have driven forward research towards the realisation of Mark Weiser's dream of calm and ubiquitous computing (variously called pervasive computing, ambient computing, active spaces, the disappearing computer or context-aware computing). In turn, this has led to the emergence of smart environments as one significant facet of research in this domain. A smart environment, or space, is a region of the real world that is extensively equipped with sensors, actuators and computing components [1]. In effect the smart space becomes part of a larger information system, with all actions within the space potentially affecting the underlying computer applications, which may themselves affect the space through the actuators. Such smart environments have tremendous potential within many application areas to improve the utility of a space. Consider the potential offered by a smart environment that prolongs the time an elderly or infirm person can live an independent life, or the potential offered by a smart environment that supports vicarious learning.
Trust models in ubiquitous computing
We recapture some of the arguments for trust-based technologies in ubiquitous computing, followed by a brief survey of some of the models of trust that have been introduced in this respect. Based on this, we argue for the need for more formal and foundational trust models.
Proving Cryptographic C Programs Secure with General-Purpose Verification Tools
Security protocols, such as TLS or Kerberos, and security devices such as the Trusted Platform Module (TPM), Hardware Security Modules (HSMs) or PKCS#11 tokens, are central to many computer interactions.
Yet, such security critical components are still often found vulnerable to attack after their deployment, either because the specification is insecure, or because of implementation errors.
Techniques exist to construct machine-checked proofs of security properties for abstract specifications.
However, this may leave the final executable code, often written in lower level languages such as C, vulnerable both to logical errors, and low-level flaws.
Recent work on verifying security properties of C code is often based on soundly extracting, from C programs, protocol models on which security properties can be proved.
However, in such methods, any change in the C code, however trivial, may require one to perform a new and complex security proof.
Our goal is therefore to develop or identify a framework in which security properties of cryptographic systems can be formally proved, and that can also be used to soundly verify, using existing general-purpose tools, that a C program shares the same security properties.
We argue that the current state of general-purpose verification tools for the C language, as well as for functional languages, is sufficient to achieve this goal, and illustrate our argument by developing two verification frameworks around the VCC verifier.
In the symbolic model, we illustrate our method by proving authentication and weak secrecy for implementations of several network security protocols.
In the computational model, we illustrate our method by proving authentication and strong secrecy properties for an exemplary key management API, inspired by the TPM.
Inductive analysis of security protocols in Isabelle/HOL with applications to electronic voting
Security protocols are predefined sequences of message exchanges. Their uses over computer networks aim to provide certain guarantees to protocol participants. The sensitive nature of many applications resting on protocols encourages the use of formal methods to provide rigorous correctness proofs. This dissertation presents extensions to the Inductive Method for protocol verification in the Isabelle/HOL interactive theorem prover. The current state of the Inductive Method and of other protocol analysis techniques is reviewed. Protocol composition modelling in the Inductive Method is introduced and put in practice by holistically verifying the composition of a certification protocol with an authentication protocol. Unlike some existing approaches, we are not constrained by independence requirements or search space limitations. A special kind of identity-based signatures, auditable ones, are specified in the Inductive Method and integrated in an analysis of a recent ISO/IEC 9798-3 protocol. A side-by-side verification features both a version of the protocol with auditable identity-based signatures and a version with plain ones. The largest part of the thesis presents extensions for the verification of electronic voting protocols. Innovative specification and verification strategies are described. The crucial property of voter privacy, being the impossibility of knowing how a specific voter voted, is modelled as an unlinkability property between pieces of information. Unlinkability is then specified in the Inductive Method using novel message operators. An electronic voting protocol by Fujioka, Okamoto and Ohta is modelled in the Inductive Method. Its classic confidentiality properties are verified, followed by voter privacy. The approach is shown to be generic enough to be re-usable on other protocols while maintaining a coherent line of reasoning. We compare our work with the widespread process equivalence model and examine respective strengths.
Surveillance and identity: conceptual framework and formal models
Surveillance is recognised as a social phenomenon that is commonplace, employed by governments, companies and communities for a wide variety of reasons. Surveillance is fundamental in cybersecurity as it provides tools for prevention and detection; it is also a source of controversies related to privacy and freedom. Building on general studies of surveillance, we identify and analyse certain concepts that are central to surveillance. To do this we employ formal methods based on elementary algebra. First, we show that disparate forms of surveillance have a common structure and can be unified by abstract mathematical concepts. The model shows that (i) finding identities and (ii) sorting identities into categories are fundamental in conceptualising surveillance. Secondly, we develop a formal model that theorizes identity as abstract data that we call identifiers. The model views identity through the computational lens of the theory of abstract data types. We examine the ways identifiers depend upon each other, and show that the provenance of identifiers depends upon translations between systems of identifiers.
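The idea of identifiers as abstract data linked by translations can be illustrated with a small sketch. This is a hypothetical reading of the abstract, not the paper's actual formalism: the identifier systems (email addresses, user IDs, badge numbers) and the helper names below are invented for illustration, and translations are modelled simply as partial maps whose composition records an identifier's provenance:

```python
# Hypothetical sketch: identifier systems as partial maps, with provenance
# recovered by following a chain of translations between systems.
email_to_userid = {"alice@example.org": "u42"}   # illustrative data only
userid_to_badge = {"u42": "badge-7"}

def translate(identifier, mapping):
    """Partial translation: returns None where no translation exists."""
    return mapping.get(identifier)

def provenance(identifier, chain):
    """Follow a chain of translations, recording each intermediate identifier."""
    trail = [identifier]
    for mapping in chain:
        identifier = translate(identifier, mapping)
        if identifier is None:
            break  # the chain ends where a translation is undefined
        trail.append(identifier)
    return trail
```

Under this reading, `provenance("alice@example.org", [email_to_userid, userid_to_badge])` traces one identity across three identifier systems, while an email with no registered user ID yields a trail of length one.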