Gaming security by obscurity
Shannon sought security against the attacker with unlimited computational
powers: *if an information source conveys some information, then Shannon's
attacker will surely extract that information*. Diffie and Hellman refined
Shannon's attacker model by taking into account the fact that the real
attackers are computationally limited. This idea became one of the greatest new
paradigms in computer science, and led to modern cryptography.
Shannon also sought security against the attacker with unlimited logical and
observational powers, expressed through the maxim that "the enemy knows the
system". This view is still endorsed in cryptography. The popular formulation,
going back to Kerckhoffs, is that "there is no security by obscurity", meaning
that the algorithms cannot be kept obscured from the attacker, and that
security should only rely upon the secret keys. In fact, modern cryptography
goes even further than Shannon or Kerckhoffs in tacitly assuming that *if there
is an algorithm that can break the system, then the attacker will surely find
that algorithm*. The attacker is not viewed as an omnipotent computer any more,
but he is still construed as an omnipotent programmer.
So the Diffie-Hellman step from unlimited to limited computational powers has
not been extended into a step from unlimited to limited logical or programming
powers. Is the assumption that all feasible algorithms will eventually be
discovered and implemented really different from the assumption that everything
that is computable will eventually be computed? The present paper explores some
ways to refine the current models of the attacker, and of the defender, by
taking into account their limited logical and programming powers. If the
adaptive attacker actively queries the system to seek out its vulnerabilities,
can the system gain some security by actively learning the attacker's methods
and adapting to them?
Comment: 15 pages, 9 figures, 2 tables; final version appeared in the Proceedings of New Security Paradigms Workshop 2011 (ACM 2011); typos corrected
Complex systems science: expert consultation report
Executive Summary

A new programme of research in Complex Systems Science must be initiated by FET

The science of complex systems (CS) is essential to establish rigorous scientific principles on which to develop the future ICT systems that are critical to the well-being, safety and prosperity of Europe and its citizens. As the "ICT incubator and pathfinder for new ideas and themes for long-term research in the area of information and communication technologies", FET must initiate a significant new programme of research in complex systems science to underpin research and development in ICT. Complex Systems Science is a "blue sky" research laboratory for R&D in ICT and their applications. In July 2009, ASSYST was given a set of probing questions concerning FET funding for ICT-related complex systems research. This document is based on the CS community's response.

Complex systems research has made considerable progress and is delivering new science

Since FET began supporting CS research, considerable progress has been made. Building on previous understanding of concepts such as emergence from interactions, far-from-equilibrium systems, the border of chaos, and self-organised criticality, recent CS research is now delivering rigorous theory through methods of statistical physics, network theory, and computer simulation. CS research increasingly demands high-throughput data streams and new ICT-based methods of observing and reconstructing, i.e. modelling, the dynamics from those data in areas as diverse as embryogenesis, neuroscience, transport, epidemics, linguistics, meteorology, and robotics. CS research is also beginning to address the problem of engineering robust systems of systems of systems that can adapt to changing environments, including the perplexing problem that ICT systems are too often fragile and non-adaptive.

Recommendation: A Programme of Research in Complex Systems Science to Support ICT

Fundamental theory in Complex Systems Science is needed, but this can only be achieved through real-world applications involving large, heterogeneous, and messy data sets, including people and organisations. A long-term vision is needed. Realistic targets can be set. Fundamental research can be ensured by requiring that teams include mathematicians, computer scientists, physicists, and computational social scientists. One research priority is to develop a formalism for multilevel systems of systems of systems, applicable to all areas including biology, economics, security, transportation, robotics, health, agriculture, ecology, and climate change. Another, related, research priority is a scientific perspective on the integration of the new science with policy and its implementation, including ethical problems related to privacy and equality. A further priority is the need for education in complex systems science. Conventional education continues to be domain-dominated, producing scientists who are for the most part still lacking fundamental knowledge in core areas of mathematics, computation, statistical physics, and social systems. Therefore:

1. We recommend that FET fund a new programme of work in complex systems science as essential research for progress in the development of new kinds of ICT systems.
2. We have identified the dynamics of multilevel systems as the area in complex systems science requiring a major paradigm shift, beyond which significant scientific progress cannot be made.
3. We propose a call requiring: fundamental research in complex systems science; new mathematical and computational formalisms to be developed; involvement of a large 'guinea pig' organisation; research into policy and its meta-level information dynamics; and that all research staff have interdisciplinary knowledge through an education programme.

Tangible outcomes, potential users of the new science, its impact and measures of success

Users include (i) the private and public sectors using ICT to manage complex systems and (ii) researchers in ICT, CSS, and all complex domains. The tangible output of a call will be new knowledge on the nature of complex systems in general, new knowledge of the particular complex system(s) studied, and new knowledge of the fundamental role played by ICT in the research and implementation to create real systems addressing real-world problems. The impact of the call will be seen through new high added-value opportunities in the public and private sectors, new high added-value ICT technologies, and new high added-value science to support innovation in ICT research and development. The measure of success will be the delivery of these high added-value outcomes, and new science to better understand failures.
An Elementary Completeness Proof for Secure Two-Party Computation Primitives
In the secure two-party computation problem, two parties wish to compute a
(possibly randomized) function of their inputs via an interactive protocol,
while ensuring that neither party learns more than what can be inferred from
only their own input and output. For semi-honest parties and
information-theoretic security guarantees, it is well-known that, if only
noiseless communication is available, only a limited set of functions can be
securely computed; however, if interaction is also allowed over general
communication primitives (multi-input/output channels), there are "complete"
primitives that enable any function to be securely computed. The general set of
complete primitives was characterized recently by Maji, Prabhakaran, and
Rosulek, leveraging an earlier specialized characterization by Kilian. Our
contribution in this paper is a simple, self-contained, alternative derivation
using elementary information-theoretic tools.
Comment: 6 pages, extended version of ITW 2014 paper
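To make the notion of a "complete" primitive concrete: 1-out-of-2 oblivious transfer (OT) is the classic example from Kilian's characterization cited above. The toy sketch below (not from the paper; the function names are ours, and the OT is modeled as an ideal black box rather than a real protocol) shows how semi-honest parties can compute AND, a function that is not securely computable over a noiseless channel alone, given ideal OT: Bob offers the messages 0 and b, and Alice selects with her bit a.

```python
# Hypothetical illustration, not the paper's construction: securely computing
# AND from an ideal 1-out-of-2 oblivious transfer (OT). The ideal OT hides the
# unchosen message from the receiver and the receiver's choice from the sender.

def ideal_ot(m0, m1, choice):
    """Ideal 1-out-of-2 OT: the receiver learns m[choice] and nothing else;
    the sender learns nothing about `choice`."""
    return m1 if choice else m0

def secure_and(a, b):
    """Alice holds bit a, Bob holds bit b.
    Bob (sender) offers m0 = 0 and m1 = b; Alice (receiver) selects with a.
    When a = 0 Alice learns 0 (b stays hidden); when a = 1 she learns b,
    which the output a AND b would reveal anyway."""
    return ideal_ot(0, b, a)

# The construction computes AND correctly on all four input pairs.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, secure_and(a, b))
```

Realizing the ideal OT box itself requires a noisy or otherwise non-trivial communication primitive, which is exactly the gap between noiseless channels and complete primitives that the abstract describes.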
Benchmarking the Privacy-Preserving People Search
People search is an important topic in information retrieval. Many previous
studies on this topic employed social networks to boost search performance by
incorporating either local network features (e.g. the common connections
between the querying user and candidates in social networks), or global network
features (e.g. the PageRank), or both. However, the available social network
information can be restricted because of the privacy settings of involved
users, which in turn would affect the performance of people search. Therefore,
in this paper, we focus on the privacy issues in people search. Since networks
with real privacy settings are unavailable, we propose simulating different
privacy settings on a public social network. Our study examines the influences
of privacy concerns on the local and global network features, and their impacts
on the performance of people search. Our results show that: 1) the privacy
concerns of different people in the networks have different influences. People
with higher association (i.e. higher degree in a network) have much greater
impacts on the performance of people search; 2) local network features are more
sensitive to the privacy concerns, especially when such concerns come from
high-association people in the network who are also related to the querying
user. As the first study on this topic, we hope to generate further discussion
of these issues.
Comment: 4 pages, 5 figures
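The simulation idea can be sketched in a few lines. The code below is a hypothetical toy version (not the authors' code; the network, function names, and hiding rule are our assumptions): hide every edge incident to a privacy-conscious user, then recompute a local feature (common connections with the querying user) and the degree that proxies "association".

```python
# Hypothetical sketch, not the paper's implementation: simulate a privacy
# setting on a toy social network by hiding all edges of a private user,
# then recompute a local and a global network feature.

def hide_edges(edges, private_users):
    """Drop every edge incident to a user who enabled privacy."""
    return [(u, v) for (u, v) in edges
            if u not in private_users and v not in private_users]

def degree(edges, node):
    """Global feature proxy for 'association': number of visible edges."""
    return sum(1 for e in edges if node in e)

def neighbors(edges, node):
    return ({v for (u, v) in edges if u == node}
            | {u for (u, v) in edges if v == node})

def common_connections(edges, a, b):
    """Local feature: shared neighbors of querying user a and candidate b."""
    return len(neighbors(edges, a) & neighbors(edges, b))

edges = [("alice", "bob"), ("alice", "carol"), ("bob", "carol"),
         ("carol", "dave"), ("dave", "erin")]

# carol is the high-association user: degree 3, and the only common
# connection between alice and bob.
print(degree(edges, "carol"), common_connections(edges, "alice", "bob"))      # 3 1

# Once carol enables privacy, both features collapse.
visible = hide_edges(edges, {"carol"})
print(degree(visible, "carol"), common_connections(visible, "alice", "bob"))  # 0 0
```

This mirrors the abstract's finding in miniature: hiding a high-degree user's edges degrades the local feature for every pair that routed through that user, while low-degree users' privacy choices perturb far fewer features.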