
    Eliciting usable security requirements with misusability cases.

    Although widely used for both security and usability concerns, scenarios used in security design do not necessarily inform the design of usability, and vice versa. One way of using scenarios to bridge security and usability involves explicitly describing how design decisions can lead to users inadvertently exploiting vulnerabilities to carry out their production tasks. We present Misusability Cases: scenarios which describe how design decisions may lead to usability problems that subsequently lead to system misuse. We describe the steps carried out to develop and apply misusability cases to elicit requirements, and report preliminary results from applying this technique in a recent case study.

    Seeking the philosopher's stone.

    This article describes the unique challenges facing usable security research and design, and introduces three proposals for addressing them. For all intents and purposes, security design is currently a craft, where quality depends on individuals and their ability rather than on principles and engineering. However, the wide variety of skills necessary to design secure and usable systems is unlikely to be mastered by many individuals, since it requires a rare combination of insight and education. Psychology, economics and cryptography have very little in common, and yet all have a role to play in the field of usable security. To address these concerns, three proposals are presented here: 1) to adopt a principled design framework for usable security and privacy; 2) to support a research environment where skills and knowledge can be pooled and shared; and 3) to guide and inform the principles that underpin the educational curriculum of future security engineers and researchers.

    Security through usability: a user-centered approach for balanced security policy requirements.

    Security policy authors face a dilemma. On one hand, policies need to respond to a constantly evolving, well-reported threat landscape, the consequences of which have heightened the security awareness of senior managers. On the other hand, the impact of policies extends beyond constraints on desktop computers and laptops; an overly constrained policy may compromise operations or stifle the freedom staff need to innovate. Because few people are fired for making a policy too secure, as long as usability continues to be treated as a quality to be traded off against functionality, policies will err on the side of constraint over freedom of action. Existing work argues that balanced security can be achieved using Requirements Engineering best practice. Such approaches, however, treat usability as just another class of quality requirement, and the prescribed techniques fail to elicit or analyse empirical data with the same richness as those used by usability professionals. There is, therefore, a need to incorporate techniques from HCI into the task of specifying security, but without compromising Requirements Engineering practice. Recent work demonstrated how user-centered design and security requirements engineering techniques can be aligned; this approach was validated using a general system design project, where ample time was available to collect empirical data and run participatory requirements and risk workshops. The question remains whether such an approach scales to eliciting policy requirements where time is an imperative rather than a luxury.

    Evaluating the implications of attack and security patterns with premortems.

    Security patterns are a useful way of describing, packaging and applying security knowledge which might otherwise be unavailable. However, because patterns represent partial knowledge of a problem and solution space, there is little certainty that addressing the consequences of one problem won't introduce or exacerbate another. Rather than using patterns exclusively to explore possible solutions to security problems, we can use them to better understand the security problem space. To this end, we present a framework for evaluating the implications of security and attack patterns using premortems: scenarios that describe a failed system and invite reasons for its failure. We illustrate our approach using an example from the EU FP7 webinos project.

    An approach in defining Information Assurance Patterns based on security ontology and meta-modeling

    Information is not only one of an enterprise's most valuable assets; its volume is also increasing at a rapid rate. Modern enterprises therefore work hard to ensure the proper storage, availability and integrity of this information. The information assurance discipline focuses on preserving the confidentiality, integrity and availability of information in all of its various states. Information security patterns already exist, but there is a need for Information Assurance patterns: reusable solutions for addressing information assurance in enterprise-level information engineering. In this paper we propose five Information Assurance patterns based on meta-modelling, and then integrate an existing security ontology with the derived patterns. In this way we gain the benefits of using patterns while also applying the concept of a security ontology within our Information Assurance Patterns.

    Engaging stakeholders in security design: an assumption-driven approach.

    System stakeholders fail to engage with security until comparatively late in the design and development process. User Experience artefacts like personas and scenarios create this engagement, but creating and contextualising them is difficult without real-world, empirical data; such data cannot be easily elicited from disengaged stakeholders. This paper presents an approach for engaging stakeholders in the elicitation and specification of security requirements at a late stage of a system's design; this approach relies on assumption-based personas and scenarios, which are aligned with security and requirements analysis activities. We demonstrate this approach by describing how it was used to elicit security requirements for a medical research portal.

    Engaging Stakeholders during Late Stage Security Design with Assumption Personas

    Purpose – This paper aims to present an approach where assumption personas are used to engage stakeholders in the elicitation and specification of security requirements at a late stage of a system's design.
    Design/methodology/approach – The author has devised an approach for developing assumption personas for use in participatory design sessions during the later stages of a system's design. The author validates this approach using a case study in the e-Science domain.
    Findings – Engagement follows from focusing on the indirect, rather than direct, implications of security. More design approaches are needed for treating security at a comparatively late stage. Security design techniques should scale to working with sub-optimal input data.
    Originality/value – This paper contributes an approach where assumption personas engage project team members when eliciting and specifying security requirements in the late stages of a project.

    Design as Code: Facilitating Collaboration between Usability and Security Engineers using CAIRIS

    Designing usable and secure software is hard without tool-support. Given the importance of requirements, CAIRIS was designed to illustrate the form that tool-support for specifying usable and secure systems might take. While CAIRIS supports a broad range of security and usability engineering activities, its architecture needs to evolve to meet the workflows of these stakeholders. To this end, this paper illustrates how CAIRIS and its models act as a vehicle for collaboration between usability and security engineers. We describe how the modified architecture of CAIRIS facilitates this collaboration, and illustrate the tool using three usage scenarios.
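    The abstract gives no implementation detail, but the general "design as code" idea it alludes to can be sketched. The snippet below is a hypothetical illustration only, not the CAIRIS API or data model: personas and security requirements are captured as structured, versionable objects, and a simple automated check flags requirements that are not grounded in any persona, the kind of consistency check a shared, code-like model lets usability and security engineers run together. The class names, fields and check are assumptions made for illustration.

    # Illustrative sketch only: NOT the CAIRIS API, just a toy example of
    # treating design artefacts as versionable, machine-checkable data.
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Persona:
        name: str
        role: str
        goals: List[str] = field(default_factory=list)

    @dataclass
    class SecurityRequirement:
        identifier: str
        description: str
        affected_personas: List[str] = field(default_factory=list)

    def check_coverage(personas: List[Persona],
                       reqs: List[SecurityRequirement]) -> List[str]:
        """Warn about requirements that reference no known persona."""
        known = {p.name for p in personas}
        warnings = []
        for req in reqs:
            if not any(name in known for name in req.affected_personas):
                warnings.append(f"{req.identifier} is not grounded in any persona")
        return warnings

    if __name__ == "__main__":
        personas = [Persona("Clara", "Clinical researcher",
                            ["Share anonymised datasets"])]
        reqs = [SecurityRequirement("SR-1", "Portal access requires two-factor authentication", ["Clara"]),
                SecurityRequirement("SR-2", "Audit logs retained for one year", [])]
        for warning in check_coverage(personas, reqs):
            print(warning)

    Because the model is plain data under version control, both usability and security engineers can review changes to it the way developers review source code, which is the collaboration pattern the paper's title points to.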

    Balancing Risk Appetite and Risk Attitude in Requirements: a Framework for User Liberation

    The tendency to throw controls at perceived and real system vulnerabilities, coupled with the likelihood of these controls being technical in nature, has the propensity to favour security over usability. However, there is little evidence that this increases assurance, and it can encourage work stoppages or deviations that keep honest users from engaging with the system. The conflicting balance of trust and controls, and the challenge of turning that balance into clear requirements, creates an environment that alienates users and feeds the paranoia of actors who assume more ownership of the system than necessary. Security therefore becomes an inhibitor rather than an enabler for the community. This paper looks at measuring the balance between an organisation's or a community's risk appetite and the risk attitudes of its members in the early stages of IS development. It suggests how the dials of assurance can be influenced by the levers of good systems practice to create a cultural shift towards trusting users.

    Learning from “Shadow Security”: Why understanding non-compliance provides the basis for effective security

    Over the past decade, security researchers and practitioners have tried to understand why employees do not comply with organizational security policies and mechanisms. Past research has treated compliance as a binary decision: people comply, or they do not. From our analysis of 118 in-depth interviews with individuals (employees in a large multinational organization) about security non-compliance, a third response emerges: shadow security. This describes instances where security-conscious employees who think they cannot comply with the prescribed security policy create a more fitting alternative to the policies and mechanisms created by the organization's official security staff. These workarounds are usually not visible to official security staff and higher management – hence 'shadow security'. They may not be as secure as the 'official' policy would be in theory, but they reflect the best compromise staff can find between getting the job done and managing the risks to the assets they understand. We conclude that rather than trying to 'stamp out' shadow security practices, organizations should learn from them: they provide a starting point for 'workable' security: solutions that offer effective security and fit with the organization's business, rather than impeding it.