
    Reve{a,i}ling the risks: a phenomenology of information security

    In information security research, perceived security usually has a negative meaning, when it is used in contrast to actual security. From a phenomenological perspective, however, perceived security is all we have. In this paper, we develop a phenomenological account of information security, where we distinguish between revealed and reveiled security instead. Linking these notions with the concepts of confidence and trust, we are able to give a phenomenological explanation of the electronic voting controversy in the Netherlands.

    Converging technologies and de-perimeterisation: towards risky active insulation

    In converging technologies (Roco and Bainbridge, 2003), boundaries between previously separated technologies become permeable. A similar process is also taking place within information technology. In what is called de-perimeterisation (Jericho Forum, 2005), the boundaries of the information infrastructures of organisations dissolve. Where previously a firewall was used to separate the untrusted outside from the trusted inside, outsourcing of information management and mobility of employees make it impossible to rely on such a clearly located security perimeter. In this paper, we ask to what extent these developments represent a similar underlying shift in design assumptions, and how this relates to risk management (cf. Perrow, 1999). We investigate this question from the perspective of the system theory of Niklas Luhmann (1979, 1988, 2005 [1993]).

    Informational precaution

    In environmental ethics, the precautionary principle states that parties should refrain from actions in the face of scientific uncertainties about serious or irreversible harm to public health or the environment. A similar principle is lacking when judging effects of information technology. Such a principle would be helpful in guiding discussions, and that is why we try to develop a precautionary principle for information technology in this paper.

    As the effects of information technology are primarily social, social sustainability would be a key concept in developing the principle, where environmental sustainability fulfils this role in the traditional one. However, present definitions of social sustainability often consider it as an additional condition for environmental sustainability, rather than as an end in itself. Social sustainability, as meant in this paper, is the property of a development that it safeguards the continuity and stability of a social system. This may include maintaining trust and power relations in society. Based on this definition of social sustainability, we establish a precautionary principle with respect to the social sustainability of information technology.

    The principle of informational precaution, as we call it, aims at protecting the social environment of technology by providing information security, just as the traditional precautionary principle aims at protecting the natural environment of technology by providing physical, chemical and biological safety. By providing causal insulation in the infosphere, i.e. separation of pieces of information, information technology may be able to protect the social environment. The principle of informational precaution then states that people should refrain from changing causal insulations in the infosphere, if there is uncertainty about possible serious or irreversible harm to society.

    What proof do we prefer? Variants of verifiability in voting

    In this paper, we discuss one particular feature of Internet voting, verifiability, against the background of scientific literature and experiments in the Netherlands. In order to conceptually clarify what verifiability is about, we distinguish classical verifiability from constructive verifiability in both individual and universal verification. In classical individual verifiability, a proof that a vote has been counted can be given without revealing the vote. In constructive individual verifiability, a proof is only accepted if the witness (i.e. the vote) can be reconstructed. Analogous concepts are defined for universal verifiability of the tally. The RIES system used in the Netherlands establishes constructive individual verifiability and constructive universal verifiability, whereas many advanced cryptographic systems described in the scientific literature establish classical individual verifiability and classical universal verifiability. If systems with a particular kind of verifiability continue to be used successfully in practice, this may influence the way in which people are involved in elections, and their image of democracy. Thus, the choice for a particular kind of verifiability in an experiment may have political consequences. We recommend making a well-informed democratic choice for the way in which both individual and universal verifiability should be realised in Internet voting, in order to avoid these unconscious political side-effects of the technology used. The safest choice in this respect, which maintains most properties of current elections, is classical individual verifiability combined with constructive universal verifiability. We would like to encourage discussion about the feasibility of this direction in scientific research.
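    The distinction between classical and constructive individual verifiability can be illustrated with a toy commitment scheme. This is only an illustrative sketch, not the RIES protocol or any scheme from the paper: the function names and the use of a salted hash as the "proof" are assumptions made for the example. In the classical check, the voter only confirms that an opaque proof appears on a public bulletin board, without the vote being revealed; in the constructive check, the proof is accepted only if reopening it reconstructs the witness, i.e. the vote itself.

    ```python
    import hashlib
    import secrets

    def commit(vote: str) -> tuple[str, str]:
        # Hypothetical commitment: a salted SHA-256 digest hides the vote.
        nonce = secrets.token_hex(16)
        digest = hashlib.sha256((nonce + vote).encode()).hexdigest()
        return digest, nonce

    def classical_individual_check(proof: str, bulletin_board: set[str]) -> bool:
        # Classical: the voter sees that their proof was counted,
        # but the vote itself is never revealed by the check.
        return proof in bulletin_board

    def constructive_individual_check(vote: str, nonce: str, proof: str) -> bool:
        # Constructive: the proof is accepted only if the witness (the vote)
        # can be reconstructed, i.e. reopening the commitment yields the vote.
        return hashlib.sha256((nonce + vote).encode()).hexdigest() == proof

    # Toy run: cast one vote and verify it both ways.
    proof, nonce = commit("candidate-A")
    bulletin_board = {proof}
    print(classical_individual_check(proof, bulletin_board))          # True
    print(constructive_individual_check("candidate-A", nonce, proof)) # True
    print(constructive_individual_check("candidate-B", nonce, proof)) # False
    ```

    The same contrast carries over to universal verifiability of the tally: a classical tally proof convinces observers the count is correct without exposing individual votes, whereas a constructive one recomputes the tally from reconstructable witnesses.
    
    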

    Explanation and trust: what to tell the user in security and AI?

    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, the goal of explanation is to acquire or maintain the users' trust. In this paper, we investigate the relation between explanation and trust in the context of computing science. This analysis draws on literature study and concept analysis, using elements from system theory as well as actor-network theory. We apply the conceptual framework to both AI and information security, and show the benefit of the framework for both fields by means of examples. The main focus is on expert systems (AI) and electronic voting systems (security). Finally, we discuss consequences of our analysis for ethics in terms of (un)informed consent and dissent, and the associated division of responsibilities.