
    Towards Enhanced Usability of IT Security Mechanisms - How to Design Usable IT Security Mechanisms Using the Example of Email Encryption

    Nowadays, advanced security mechanisms exist to protect data, systems, and networks. Most of these mechanisms are effective, and security experts can handle them to achieve a sufficient level of security for any given system. However, most of these systems have not been designed with a focus on good usability for the average end user, who today often struggles to understand and use security mechanisms; other security mechanisms are simply annoying for end users. As the overall security of any system is only as strong as its weakest link, poor usability of IT security mechanisms can lead to operating errors and thus to insecure systems. The usability of security mechanisms can also affect end users' buying decisions, so software providers may conclude they are better off shipping no security mechanism than one with poor usability. Usability of IT security mechanisms is one of the most underestimated properties of applications and systems. IT security itself is often only an afterthought, which makes the usability of security mechanisms the afterthought of an afterthought. This paper presents guidelines that should help software developers improve the end-user usability of security-related mechanisms, and analyzes common applications against these guidelines. Based on these guidelines, the usability of email encryption is analyzed and an email encryption solution with improved usability is presented. The approach is based on automated key and trust management. The compliance of the proposed email encryption solution with the presented guidelines for usable security mechanisms is evaluated.
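
    The abstract does not spell out the automated key and trust management, but one common way to remove manual key handling is a trust-on-first-use (TOFU) key store that pins the first public key seen for each correspondent. The sketch below is a minimal illustration under that assumption; the class name and storage format are hypothetical, not the paper's implementation.

```python
# Hypothetical sketch of trust-on-first-use (TOFU) key management for email
# encryption: pin the first public key seen per address and flag later changes,
# so the user never handles keys manually. Illustrative only, not the paper's design.
import json
from pathlib import Path


class TofuKeyStore:
    """Remembers the first public key seen for each email address."""

    def __init__(self, path="keystore.json"):
        self.path = Path(path)
        self.keys = json.loads(self.path.read_text()) if self.path.exists() else {}

    def key_for(self, address, offered_key):
        known = self.keys.get(address)
        if known is None:
            # First contact: trust and pin the offered key automatically.
            self.keys[address] = offered_key
            self.path.write_text(json.dumps(self.keys))
            return offered_key
        if known != offered_key:
            # Key changed since first contact: warn instead of asking the user
            # to verify fingerprints up front.
            raise ValueError(f"public key for {address} changed; verify out of band")
        return known


# Usage: store = TofuKeyStore(); key = store.key_for("alice@example.org", key_from_header)
```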

    Security and Online learning: to protect or prohibit

    The rapid development of online learning is opening up many new learning opportunities. Yet, with this increased potential comes a myriad of risks. Usable security systems are essential, as poor usability in security can result in excluding intended users while allowing sensitive data to be released to unacceptable recipients. This chapter presents findings concerned with usability for two security issues: authentication mechanisms and privacy. Usability issues such as memorability, feedback, guidance, context of use, and concepts of information ownership are reviewed within various environments. This chapter also traces the roots of these usability difficulties to the culture clash between the non-user-oriented perspective of security and the information-exchange culture of the education domain. Finally, an account is provided of how future systems can be developed that maintain security and yet remain usable.

    Public Evidence from Secret Ballots

    Elections seem simple: aren't they just counting? But they have a unique, challenging combination of security and privacy requirements. The stakes are high; the context is adversarial; the electorate needs to be convinced that the results are correct; and the secrecy of the ballot must be ensured. They also have practical constraints: time is of the essence, and voting systems need to be affordable, maintainable, and usable by voters, election officials, and pollworkers. It is thus not surprising that voting is a rich research area spanning theory, applied cryptography, practical systems analysis, usable security, and statistics. Election integrity involves two key concepts: convincing evidence that outcomes are correct, and privacy, which amounts to convincing assurance that there is no evidence about how any given person voted. These are obviously in tension. We examine how current systems walk this tightrope.
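
    One classic building block behind publishing convincing evidence without revealing votes is a cryptographic commitment. The sketch below illustrates only that idea, under the assumption of a simple salted hash commitment; it is not the design of any voting system examined in the paper.

```python
# Minimal sketch of a salted hash commitment, one building block that end-to-end
# verifiable voting schemes use to publish evidence without revealing votes.
# Illustrative only; not a complete protocol and not any specific system.
import hashlib
import secrets


def commit(vote: bytes) -> tuple[bytes, bytes]:
    """Return (commitment, opening). The commitment can be published as evidence;
    the vote stays hidden until the opening (salt) is revealed."""
    salt = secrets.token_bytes(32)
    return hashlib.sha256(salt + vote).digest(), salt


def verify(commitment: bytes, vote: bytes, salt: bytes) -> bool:
    return hashlib.sha256(salt + vote).digest() == commitment


c, opening = commit(b"candidate-42")
assert verify(c, b"candidate-42", opening)      # the published evidence checks out
assert not verify(c, b"candidate-7", opening)   # it cannot be reinterpreted as another vote
```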

    Obstacles to the Adoption of Secure Communication Tools

    The computer security community has advocated widespread adoption of secure communication tools to counter mass surveillance. Several popular personal communication tools (e.g., WhatsApp, iMessage) have adopted end-to-end encryption, and many new tools (e.g., Signal, Telegram) have been launched with security as a key selling point. However, it remains unclear whether users understand what protection these tools offer, and whether they value that protection. In this study, we interviewed 60 participants about their experience with different communication tools and their perceptions of the tools' security properties. We found that the adoption of secure communication tools is hindered by fragmented user bases and incompatible tools. Furthermore, the vast majority of participants did not understand the essential concept of end-to-end encryption, limiting their motivation to adopt secure tools. We identified a number of incorrect mental models that underpinned participants' beliefs.
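
    The essential concept many participants missed can be stated compactly: with end-to-end encryption, only the two endpoints can derive the message key, so the relaying service sees nothing but ciphertext. The sketch below (X25519 key agreement plus AES-GCM via the Python cryptography package) is a simplified illustration of that idea, not the actual protocol of WhatsApp, iMessage, Signal, or Telegram.

```python
# Simplified illustration of end-to-end encryption: the two endpoints derive a
# shared key from their own private key and the peer's public key, so a relaying
# server only ever sees ciphertext. Not the protocol of any tool named above.
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF


def derive_key(own_private, peer_public):
    shared = own_private.exchange(peer_public)
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"demo e2e message key").derive(shared)


alice, bob = X25519PrivateKey.generate(), X25519PrivateKey.generate()

# Alice encrypts; the server can only forward (nonce, ciphertext) unread.
nonce = os.urandom(12)
ciphertext = AESGCM(derive_key(alice, bob.public_key())).encrypt(nonce, b"meet at 6", None)

# Bob derives the same key on his end and decrypts.
assert AESGCM(derive_key(bob, alice.public_key())).decrypt(nonce, ciphertext, None) == b"meet at 6"
```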

    Human Factors in Secure Software Development

    While security research has made significant progress in the development of theoretically secure methods, software, and algorithms, software still ships with many possible exploits, many of which target the human factor. The human factor is often called "the weakest link" in software security. To address this, human factors research in security and privacy focuses on the users of technology and their security needs. The research asks how technology can serve users while minimizing risks and empowering them to retain control over their own data. However, these concepts have to be implemented by developers, whose security errors may proliferate to all of their software's users. For example, software that stores data insecurely, does not secure network traffic correctly, or otherwise fails to adhere to secure programming best practices puts all of its users at risk. It is therefore critical that software developers implement security correctly. However, in addition to security rarely being a primary concern while producing software, developers may also lack awareness, knowledge, training, or experience in secure development. A lack of focus on usability in the libraries, documentation, and tools they have to use for security-critical components can exacerbate the problem by inflating the time and effort needed to "get security right". This dissertation focuses on how to support developers throughout the process of implementing software securely. The research aims to understand developers' use of resources, their mindsets as they develop, and how their background affects code security outcomes. Qualitative, quantitative, and mixed methods were employed online and in the laboratory, and large-scale datasets were analyzed. This research found that the information sources developers use can contribute to code (in)security: copying and pasting code from online forums produces functional code more quickly than using official documentation, but may introduce vulnerable code. We also compared the usability of cryptographic APIs, finding that poor usability, unsafe (possibly obsolete) defaults, and unhelpful documentation also lead to insecure code. On the flip side, well-thought-out documentation and abstraction levels can improve an API's usability and may contribute to secure API usage. We found that developer experience can contribute to better security outcomes, and that studying students in lieu of professional developers can produce meaningful insights into developers' experiences with secure programming. We found that there is a multitude of online secure development advice, but that these advice sources are incomplete and may not give developers the help they need, which may push them toward unvetted and potentially insecure resources. This dissertation supports the view that (a) secure development is subject to human-factors challenges and (b) security can be improved by addressing these challenges and supporting developers. The work presented in this dissertation has been seminal in establishing human factors in secure development research within the security and privacy community and has advanced the dialogue about the rigorous use of empirical methods in security and privacy research.
In these research projects, we repeatedly found that usability issues in security and privacy mechanisms, development practices, and operational routines lead to the majority of the security and privacy failures that affect millions of end users.
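
    The API-usability finding can be made concrete: a high-level, misuse-resistant interface bundles the security-relevant decisions that low-level APIs leave to the developer. The sketch below uses the Python cryptography package's Fernet recipe as one example of such an abstraction; it illustrates the usability argument and is not a claim about the specific APIs studied in the dissertation.

```python
# Sketch of the abstraction-level argument: a misuse-resistant, high-level API
# bundles algorithm choice, IV handling, padding, and authentication, leaving the
# developer few security-relevant decisions to get wrong. Example uses the Python
# "cryptography" package's Fernet recipe; it illustrates the usability point only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()   # one opaque token; no key-size or mode decisions
box = Fernet(key)

token = box.encrypt(b"patient record #1234")   # IV and authentication handled internally
assert box.decrypt(token) == b"patient record #1234"

# A low-level block-cipher API would instead ask the developer to pick a mode,
# generate an IV, apply padding, and add authentication -- each a place where an
# unsafe default or a copied forum snippet can silently go wrong.
```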

    Framework for examination of software quality characteristics in conflict: A security and usability exemplar

    © 2020 The Author(s). This open access article is distributed under a Creative Commons Attribution (CC-BY) 4.0 license. Standards and best practices for software quality give guidance on handling each quality characteristic individually, but not on what to do when two or more characteristics, such as security and usability, come into conflict. The objectives of this paper are twofold: (a) to argue for the importance of handling conflicts between quality characteristics in general; and (b) to formulate a framework for examining conflicts between software quality characteristics, which we do while considering the specific case of security and usability. In line with these objectives, a framework called the Pattern-oriented Design Framework (PoDF) was formulated. The PoDF provides a mechanism for identifying conflicts, modeling them to illuminate why they occur, and eliciting suitable trade-offs between the conflicting characteristics. The resulting trade-offs are documented as design patterns, which can assist developers and designers in handling the conflicts in other, similar contexts of use. To validate and instantiate the PoDF, two studies were conducted. Usable security patterns discovered as a result of the studies are also presented in the paper.
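
    The paper documents its trade-offs as design patterns in prose, but the kind of record the PoDF yields can be pictured as a small structured entry. The field names and the example below are illustrative assumptions, not the paper's schema or one of its discovered patterns.

```python
# Hypothetical record for documenting a usable-security design pattern in the
# spirit of the PoDF (identify a conflict, model why it occurs, record the chosen
# trade-off). Field names and the example are assumptions, not the paper's schema
# or one of its discovered patterns.
from dataclasses import dataclass


@dataclass
class UsableSecurityPattern:
    name: str                               # short handle for the pattern
    conflicting_qualities: tuple[str, str]  # e.g. ("security", "usability")
    conflict_cause: str                     # why the qualities clash here
    context_of_use: str                     # where the pattern applies
    trade_off: str                          # the balance the pattern recommends


example = UsableSecurityPattern(
    name="Step-up authentication",
    conflicting_qualities=("security", "usability"),
    conflict_cause="Strong up-front authentication interrupts the user's task",
    context_of_use="Applications mixing low- and high-sensitivity actions",
    trade_off="Light authentication by default, stronger checks before sensitive actions",
)
```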

    Proceedings of the Designing interactive secure systems workshop (DISS 2012).

    In recent years, the field of usable security has attracted researchers from HCI and Information Security, and led to a better understanding of the interplay between human factors and security mechanisms. Despite these advances, designing systems which are both secure in, and appropriate for, their contexts of use continues to frustrate both researchers and practitioners. One reason is a misunderstanding of the role that HCI can play in the design of secure systems. A number of eminent security researchers and practitioners continue to espouse the need to treat people as the weakest link, and encourage designers to build systems that Homer Simpson can use. Unfortunately, treating users as a problem can limit the opportunities for innovation when people are engaged as part of a solution. Similarly, while extreme characters (such as Homer) can be useful for envisaging different modes of interaction, when taken out of context they risk disenfranchising the very people the design is meant to support. Better understanding the relationship between human factors and the design of secure systems is an important step forward, but many design research challenges remain. There is growing evidence that HCI design artefacts can be effective at supporting secure system design, and that some alignment exists between HCI, security, and software engineering activities. However, more is needed to understand how broader insights from the interactive system design and user experience communities might also find traction in secure design practice. For these insights to lead to design practice innovation, we also need usability and security evaluation activities that better support interaction design, together with software tools that augment, rather than hinder, these design processes. Last but not least, we need to share experiences and anecdotes about designing usable and secure systems, and reflect on the different ways of performing and evaluating secure interaction design research. The objective of this workshop is to act as a forum for those interested in the design of interactive secure systems. By bringing together a like-minded community of researchers and practitioners, we hope to share knowledge gleaned from recent research, as well as experiences designing secure and usable systems in practice.