
    Usable security.

    Traditionally, security is considered only as strong as its weakest link, and people have been regarded as the weak links (Schneier, 2003). This thinking triggers a vicious circle. Adams and Sasse (1999) observed that users are told as little as possible about the security mechanisms put in place by IT departments, precisely because they are seen as inherently untrustworthy. Their work showed that users are not sufficiently aware of security issues and tend to build their own (often inaccurate) models of possible security threats. Users have a low perception of threats because they lack the information needed to understand their importance. According to Sasse et al. (2001), blaming users for a security breach amounts to blaming human error rather than bad design. Security therefore has a human dimension that must be neither ignored nor neglected. The increase in the number of breaches may be attributed to designers who fail to give sufficient weight to the human factor in their design techniques. Thus, to undo the Gordian knot of security, we must give security a human dimension.

    Ignore These At Your Peril: Ten principles for trust design

    Online trust has been discussed for more than 10 years, yet little practical guidance has emerged that has proven applicable across contexts or useful in the long run. 'Trustworthy UI design guidelines', created in the late 1990s to address the then-pressing question of online trust (how to get shoppers online), are now happily employed by people preparing phishing scams. In this paper we summarize, in practical terms, a conceptual framework for online trust that we established in 2005. Because of its abstract nature, it is still useful as a lens through which to view the current big questions of the online trust debate, which are largely focused on usable security and phishing attacks. We then derive ten practical rules for providing effective trust support to help practitioners and researchers of usable security.

    Pastures: Towards usable security policy engineering

    Whether a particular computing installation meets its security goals depends on whether the administrators can create a policy that expresses these goals—security in practice requires effective policy engineering. We have found that the reigning SELinux model fares poorly in this regard, partly because typical isolation goals are not directly stated but instead are properties derivable from the type definitions by complicated analysis tools. Instead, we are experimenting with a security-policy approach based on copy-on-write “pastures”, in which the sharing of resources between pastures is the fundamental security policy primitive. We argue that it has a number of properties that are better from the usability point of view. We implemented this approach as a patch for the 2.6 Linux kernel.
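    A minimal sketch, assuming a toy user-space model rather than the actual 2.6 kernel patch: each pasture is an isolated, copy-on-write view of the system's resources, and the only policy statement is an explicit share between two pastures. The class and method names below (Pasture, share, can_read) are hypothetical illustrations of the primitive, not the interface of the patch.

        # Hypothetical Python model of the "pasture" primitive; illustrative only.
        class Pasture:
            def __init__(self, name):
                self.name = name
                self.shared_from = {}  # path -> pasture that granted read access to it

            def share(self, other, path):
                """Grant `other` read access to `path`; writes remain private (copy-on-write)."""
                other.shared_from[path] = self

            def can_read(self, path, owner):
                # A pasture always reads its own resources; a foreign resource is
                # readable only if the owning pasture explicitly shared that path.
                return owner is self or self.shared_from.get(path) is owner

        browser = Pasture("browser")
        banking = Pasture("banking")
        banking.share(browser, "/etc/ssl/certs")  # the single, auditable policy statement

        print(browser.can_read("/etc/ssl/certs", banking))       # True: explicitly shared
        print(browser.can_read("/home/alice/tan.txt", banking))  # False: isolated by default

    The point of the sketch is that the isolation goal is stated directly as data (which pasture shared which path with which other pasture), rather than being a property that must be derived from type definitions by separate analysis tools.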

    Usable Security. A Systematic Literature Review

    Usable security involves designing security measures that accommodate users’ needs and behaviors. Balancing usability and security poses challenges: the more secure a system is, the less usable it tends to be, and conversely, more usable systems tend to be less secure. Numerous studies have addressed this balance. These studies, spanning psychology and computer science/engineering, contribute diverse perspectives, necessitating a systematic review to understand the strategies and findings in this area. This systematic literature review examined articles on usable security from 2005 to 2022. A total of 55 research studies were selected after evaluation. The studies have been broadly categorized into four main clusters, each addressing different aspects: (1) usability of authentication methods, (2) helping security developers improve usability, (3) design strategies for influencing user security behavior, and (4) formal models for usable security evaluation. Based on this review, we report that the field’s current state reveals a certain immaturity, with studies tending toward system comparisons rather than establishing robust design guidelines based on a thorough analysis of user behavior. A common theoretical and methodological background is one of the main areas for improvement in this area of research. Moreover, the absence of usable security requirements in almost all development contexts greatly discourages the adoption of good practices from the earliest stages of development.

    Usable Security: Why Do We Need It? How Do We Get It?

    Security experts frequently refer to people as “the weakest link in the chain” of system security. Famed hacker Kevin Mitnick revealed that he hardly ever cracked a password, because it “was easier to dupe people into revealing it” by employing a range of social engineering techniques. Often, such failures are attributed to users’ carelessness and ignorance. However, more enlightened researchers have pointed out that current security tools are simply too complex for many users, and they have made efforts to improve user interfaces to security tools. In this chapter, we aim to broaden the current perspective, focusing on the usability of security tools (or products) and the process of designing secure systems for the real-world context (the panorama) in which they have to operate. Here we demonstrate how current human factors knowledge and user-centered design principles can help security designers produce security solutions that are effective in practice.

    Barriers to Usable Security? Three Organizational Case Studies

    Usable security assumes that when security functions are more usable, people are more likely to use them, leading to an improvement in overall security. Existing software design and engineering processes provide little guidance for leveraging this in the development of applications. Three case studies explore organizational attempts to provide usable security products.

    Usable Security for Wireless Body-Area Networks

    We expect wireless body-area networks of pervasive wearable devices to enable in situ health monitoring, personal assistance, entertainment personalization, and home automation. As these devices become ubiquitous, we also expect them to interoperate. That is, instead of closed, end-to-end body-worn sensing systems, we envision standardized sensors that wirelessly communicate their data to a device many people already carry today: the smartphone. However, this ubiquity of wireless sensors, combined with the characteristics they sense, presents many security and privacy problems. In this thesis we describe solutions to two of these problems. First, we evaluate the use of bioimpedance for recognizing who is wearing these wireless sensors and show that bioimpedance is a feasible biometric. Second, we investigate the use of accelerometers for verifying whether two of these wireless sensors are on the same person and show that our method is successful at distinguishing between sensors on the same body and on different bodies. We stress that any solution to these problems must be usable, meaning the user should not have to do anything but attach the sensors to their body and have them just work. These methods solve interesting problems in their own right, but it is their combination that shows their true power. Together they allow a network of wireless sensors to cooperate and determine whom they are sensing, even though only one of the wireless sensors might be able to determine this fact on its own. If all the wireless sensors know they are on the same body as each other and one of them knows which person it is on, then each can exploit this transitive relationship to know that they must all be on that person’s body. We show how these methods can work together in a prototype system. This ability to operate unobtrusively, collecting in situ data and labeling it properly without interrupting the wearer’s activities of daily life, will be vital to the success of these wireless sensors.
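    A minimal sketch, under assumptions, of the two checks the abstract describes: correlating acceleration magnitudes from two sensors to judge whether they share a body, then propagating a known identity transitively across same-body pairs. The correlation threshold and the function names (same_body, propagate_identity) are illustrative assumptions, not the thesis's actual features or classifier.

        # Illustrative sketch, not the thesis's actual pipeline.
        import numpy as np

        def same_body(acc_a, acc_b, threshold=0.8):
            """acc_a, acc_b: (n, 3) accelerometer samples aligned in time.
            Returns True if the motion of the two sensors is strongly correlated."""
            mag_a = np.linalg.norm(acc_a, axis=1)
            mag_b = np.linalg.norm(acc_b, axis=1)
            r = np.corrcoef(mag_a, mag_b)[0, 1]
            return r >= threshold

        def propagate_identity(same_body_pairs, known_identity):
            """same_body_pairs: (sensor, sensor) pairs judged to share a body.
            known_identity: dict sensor -> person for the few sensors that can tell
            (e.g. via bioimpedance). Returns identity labels for all reachable sensors."""
            neighbours = {}
            for a, b in same_body_pairs:
                neighbours.setdefault(a, set()).add(b)
                neighbours.setdefault(b, set()).add(a)
            labels = dict(known_identity)
            for start, person in known_identity.items():
                stack = [start]
                while stack:                      # flood-fill the same-body component
                    node = stack.pop()
                    for nxt in neighbours.get(node, ()):
                        if nxt not in labels:
                            labels[nxt] = person  # the transitive step
                            stack.append(nxt)
            return labels

    In this sketch, one sensor labelled by the bioimpedance biometric is enough to label every sensor connected to it through same-body links.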

    Stakeholder involvement, motivation, responsibility, communication: How to design usable security in e-Science

    e-Science projects face a difficult challenge in providing access to valuable computational resources, data and software to large communities of distributed users. On the one hand, the raison d'etre of the projects is to encourage members of their research communities to use the resources provided. On the other hand, the threats to these resources from online attacks require robust and effective security to mitigate the risks faced. This raises two issues: ensuring that (1) the security mechanisms put in place are usable by the different users of the system, and (2) the security of the overall system satisfies the security needs of all its different stakeholders. A failure to address either of these issues can seriously jeopardise the success of e-Science projects. The aim of this paper is firstly to provide a detailed understanding of how these challenges can present themselves in practice in the development of e-Science applications. Secondly, this paper examines the steps that projects can undertake to ensure that security requirements are correctly identified, and that security measures are usable by the intended research community. The research presented in this paper is based on four case studies of e-Science projects. Security design traditionally uses expert analysis of risks to the technology and deploys appropriate countermeasures to deal with them. However, these case studies highlight the importance of involving all stakeholders in the process of identifying security needs and designing secure and usable systems. For each case study, transcripts of the security analysis and design sessions were analysed to gain insight into the issues and factors that surround the design of usable security. The analysis concludes with a model explaining the relationships between the most important factors identified. This includes a detailed examination of the roles of responsibility, motivation and communication of stakeholders in the ongoing process of designing usable, secure socio-technical systems such as e-Science.

    Eliciting usable security requirements with misusability cases.

    Although scenarios are widely used for both security and usability concerns, scenarios used in security design do not necessarily inform the design for usability, and vice versa. One way of using scenarios to bridge security and usability involves explicitly describing how design decisions can lead to users inadvertently exploiting vulnerabilities in order to carry out their production tasks. We present misusability cases: scenarios that describe how design decisions may lead to usability problems which subsequently lead to system misuse. We describe the steps carried out to develop and apply misusability cases to elicit requirements, and report preliminary results from applying this technique in a recent case study.
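    As a rough illustration, a misusability case could be captured as a small structured record during requirements elicitation. The field names (design_decision, usability_problem, resulting_misuse, elicited_requirements) and the example content are assumptions inferred from this abstract, not the authors' actual template.

        # Hypothetical record for a misusability case; fields are illustrative.
        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class MisusabilityCase:
            title: str
            design_decision: str           # the design choice under scrutiny
            usability_problem: str         # how it obstructs the user's production task
            resulting_misuse: str          # how users inadvertently open a vulnerability
            elicited_requirements: List[str] = field(default_factory=list)

        case = MisusabilityCase(
            title="Forced password rotation",
            design_decision="Passwords expire every 30 days with strict complexity rules",
            usability_problem="Users cannot recall frequently changing passwords",
            resulting_misuse="Passwords end up written on notes near shared workstations",
            elicited_requirements=[
                "Support single sign-on so fewer credentials must be memorised",
                "Trigger rotation on evidence of compromise rather than on a fixed interval",
            ],
        )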

    Eight Lightweight Usable Security Principles for Developers

    We propose eight usable security principles that give software developers a lightweight framework for integrating security in a user-friendly way. These principles are intended to help developers who must weigh usability against security tradeoffs, and thereby to facilitate adoption.