13 research outputs found

    Stakeholder involvement, motivation, responsibility, communication: How to design usable security in e-Science

    e-Science projects face a difficult challenge in providing access to valuable computational resources, data and software to large communities of distributed users. On the one hand, the raison d'etre of the projects is to encourage members of their research communities to use the resources provided. On the other hand, the threats to these resources from online attacks require robust and effective security to mitigate the risks faced. This raises two issues: ensuring that (1) the security mechanisms put in place are usable by the different users of the system, and (2) the security of the overall system satisfies the security needs of all its different stakeholders. A failure to address either of these issues can seriously jeopardise the success of e-Science projects. The aim of this paper is to firstly provide a detailed understanding of how these challenges can present themselves in practice in the development of e-Science applications. Secondly, this paper examines the steps that projects can undertake to ensure that security requirements are correctly identified, and security measures are usable by the intended research community. The research presented in this paper is based on four case studies of e-Science projects. Security design traditionally uses expert analysis of risks to the technology and deploys appropriate countermeasures to deal with them. However, these case studies highlight the importance of involving all stakeholders in the process of identifying security needs and designing secure and usable systems. For each case study, transcripts of the security analysis and design sessions were analysed to gain insight into the issues and factors that surround the design of usable security. The analysis concludes with a model explaining the relationships between the most important factors identified. 
This includes a detailed examination of the roles of responsibility, motivation and communication of stakeholders in the ongoing process of designing usable secure socio-technical systems such as e-Science. (C) 2007 Elsevier Ltd. All rights reserved

    Privacy is a process, not a PET: a theory for effective privacy practice

    Privacy research has not helped practitioners -- who struggle to reconcile users' demands for information privacy with information security, legislation, information management and use -- to improve privacy practice. Beginning with the principle that information security is necessary but not sufficient for privacy, we present an innovative layered framework - the Privacy Security Trust (PST) Framework - which integrates, in one model, the different activities practitioners must undertake for effective privacy practice. The PST Framework considers information security, information management and data protection legislation as privacy hygiene factors, representing the minimum processes for effective privacy practice. The framework also includes privacy influencers - developed from previous research in information security culture, information ethics and information culture - and privacy by design principles. The framework helps to deliver good privacy practice by providing: 1) a clear hierarchy of the activities needed for effective privacy practice; 2) delineation of information security and privacy; and 3) justification for placing data protection at the heart of those activities involved in maintaining information privacy. We present a proof-of-concept application of the PST Framework to an example technology -- electricity smart meters
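    The layering described above can be sketched as a simple data structure. This is an illustration of the abstract's hierarchy only, not the paper's formalisation; the layer contents follow the abstract, while the check logic is an assumption:

```python
# Illustrative sketch of the PST Framework's layering as described in
# the abstract: hygiene factors form the mandatory base layer, with
# privacy influencers and privacy-by-design principles above them.
PST_LAYERS = [
    # Base: privacy hygiene factors -- the minimum for effective practice.
    {"information security", "information management",
     "data protection legislation"},
    # Middle: privacy influencers.
    {"information security culture", "information ethics",
     "information culture"},
    # Top: privacy by design principles.
    {"privacy by design"},
]

def hygiene_satisfied(practices):
    """Hygiene factors are necessary but not sufficient: every
    base-layer activity must be in place before the higher layers
    can deliver effective privacy practice."""
    return PST_LAYERS[0] <= set(practices)

hygiene_satisfied({"information security"})  # False: base layer incomplete
```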

    Security and Online learning: to protect or prohibit

    The rapid development of online learning is opening up many new learning opportunities. Yet, with this increased potential comes a myriad of risks. Usable security systems are essential, as poor usability in security can result in excluding intended users while allowing sensitive data to be released to unacceptable recipients. This chapter presents findings concerned with usability for two security issues: authentication mechanisms and privacy. Usability issues such as memorability, feedback, guidance, context of use and concepts of information ownership are reviewed within various environments. This chapter also reviews the roots of these usability difficulties in the culture clash between the non-user-oriented perspective of security and the information-exchange culture of the education domain. Finally, an account is provided of how future systems can be developed which maintain security and yet are still usable

    Gulfs of Expectation: Eliciting and Verifying Differences in Trust Expectations using Personas

    Personas are a common tool used in Human Computer Interaction to represent the needs and expectations of a system’s stakeholders, but they are also grounded in large amounts of qualitative data. Our aim is to make use of this data to anticipate the differences between a user persona’s expectations of a system, and the expectations held by its developers. This paper introduces the idea of gulfs of expectation – the gap between the expectations held by a user about a system and its developers, and the expectations held by a developer about the system and its users. By evaluating these differences in expectation against a formal representation of a system, we demonstrate how differences between the anticipated user and developer mental models of the system can be verified. We illustrate this using a case study where persona characteristics were analysed to identify divergent behaviour and potential security breaches as a result of differing trust expectations
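    One way to picture the comparison described above is to represent each stakeholder's trust expectations as a set of permitted behaviours and check them against a formal model of the system. The triple representation and the names below are hypothetical illustrations, not the paper's formalisation:

```python
# Hypothetical sketch: trust expectations as sets of
# (actor, action, resource) triples checked against a formal
# system model. A "gulf of expectation" is the mismatch between
# what a stakeholder anticipates and what the system permits.

SYSTEM_MODEL = {            # behaviours the system actually permits
    ("user", "upload", "dataset"),
    ("user", "delete", "dataset"),
    ("developer", "read", "dataset"),
}

user_expectations = {       # what the user persona believes happens
    ("user", "upload", "dataset"),
    ("user", "delete", "dataset"),
}

def gulf_of_expectation(expected, actual):
    """Return behaviours the stakeholder expected but the model
    lacks, and model behaviours the stakeholder did not anticipate."""
    unmet = expected - actual
    unanticipated = actual - expected
    return unmet, unanticipated

unmet, surprise = gulf_of_expectation(user_expectations, SYSTEM_MODEL)
# `surprise` contains ("developer", "read", "dataset"): a permitted
# behaviour the user persona never anticipated -- the kind of
# divergent trust expectation the paper aims to surface.
```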

    Photographic fingerprint matching using a filter-bank-based representation (original title: "Casamento de impressões digitais fotográficas utilizando uma representação baseada em banco de filtros")

    Undergraduate final-year project (graduação), Universidade de Brasília, Instituto de Ciências Exatas, Departamento de Ciência da Computação, 2016. Although matching of fingerprints obtained through touch-based acquisition has already been extensively studied, methods for matching fingerprints obtained through touchless acquisition remain largely unexplored. To address this problem, a methodology for touchless fingerprint matching is proposed, based on the FingerCode approach: a Gabor filter bank is applied after pre-processing operations on images obtained through multiview acquisition mechanisms. The methodology was successful, achieving an equal error rate (EER) of 10.74%
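    A FingerCode-style descriptor of the kind the abstract refers to can be sketched as follows. This is an illustrative simplification (rectangular blocks instead of FingerCode's circular sectors, synthetic data instead of fingerprint images), and all parameter values are assumptions for demonstration, not the thesis's implementation:

```python
# Illustrative sketch: a filter-bank descriptor in the spirit of
# FingerCode, with Euclidean-distance matching.
import numpy as np

def gabor_kernel(size, theta, freq=0.1, sigma=4.0):
    """Even-symmetric Gabor kernel tuned to orientation `theta`."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * freq * xr)

def fingercode(image, orientations=8, block=8):
    """Filter with a bank of oriented Gabor kernels, then take the
    mean absolute response per block as the feature vector."""
    features = []
    for k in range(orientations):
        kern = gabor_kernel(15, np.pi * k / orientations)
        # FFT-based (circular) convolution keeps this numpy-only.
        resp = np.real(np.fft.ifft2(
            np.fft.fft2(image) * np.fft.fft2(kern, image.shape)))
        h, w = image.shape
        for i in range(0, h - block + 1, block):
            for j in range(0, w - block + 1, block):
                features.append(np.abs(resp[i:i + block, j:j + block]).mean())
    return np.asarray(features)

def match_score(code_a, code_b):
    """Lower Euclidean distance means a better match; thresholding
    this score is what yields error rates such as the EER."""
    return float(np.linalg.norm(code_a - code_b))

rng = np.random.default_rng(0)
probe = rng.random((64, 64))      # stand-in for a pre-processed image
code = fingercode(probe)          # 8 orientations x 64 blocks = 512 features
```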

    Balancing privacy needs with location sharing in mobile computing

    Mobile phones are increasingly becoming tools for social interaction. As more phones come equipped with location tracking capabilities, capable of collecting and distributing personal information (including location) about their users, user control of location information, and of privacy more generally, has become an important research issue. This research first explores various techniques of user control of location in location-based systems, and proposes the re-conceptualisation of deception (defined here as the deliberate withholding of location information) from information systems security to the field of location privacy. Previous work in this area considers techniques such as anonymisation, encryption, cloaking and blurring, among others. Since mobile devices have become social tools, this thesis takes a different approach by empirically investigating first the likelihood of the use of the proposed technique (deception) in protecting location privacy. We present empirical results (based on an online study) which show that people are willing to deliberately withhold their location information to protect their location privacy. However, our study shows that people feel uneasy engaging in this type of deception if they believe it will be detected by their intended recipients. The results also suggest that the technique is popular in situations where it is very difficult to detect that there has been a deliberate withholding of location information during a location disclosure. Our findings are then presented in the form of initial design guidelines for the design of deception to control location privacy. Based on these initial guidelines, we propose and build a deception-based privacy control model. Two different evaluation approaches are employed in investigating the suitability of the model. 
    These include: a field-based study of the techniques employed in the model, and a laboratory-based usability study, with HCI (Human-Computer Interaction) professionals, of the Mobile Client application upon which the DPC model is based. Finally, we present guidelines for the design of deception in location disclosure, and lessons learned from the two evaluation approaches. We also propose a unified privacy preference framework implemented at the application layer of the mobile platform as a future direction of this thesis
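    The core idea of deception as deliberate, hard-to-detect withholding can be sketched as a small disclosure policy. The policy structure and field names below are hypothetical illustrations, not the thesis's DPC model implementation:

```python
# Hypothetical sketch of deception-based privacy control: silently
# withholding location from selected recipients or in selected
# contexts, rather than blurring or anonymising it.
from dataclasses import dataclass

@dataclass
class DisclosurePolicy:
    withhold_from: set      # recipients who never receive location
    withhold_contexts: set  # contexts (e.g. "at_clinic") kept hidden

    def disclose(self, recipient, context, location):
        """Return the location, or None to withhold it.

        Withholding silently (no error, no placeholder) keeps the
        deception hard for the recipient to detect -- the property
        the study found users cared about."""
        if recipient in self.withhold_from:
            return None
        if context in self.withhold_contexts:
            return None
        return location

policy = DisclosurePolicy(withhold_from={"boss"},
                          withhold_contexts={"at_clinic"})
policy.disclose("friend", "at_work", (51.5, -0.1))  # location shared
policy.disclose("boss", "at_work", (51.5, -0.1))    # silently withheld
```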

    The development of secure and usable systems.

    "People are the weakest link in the security chain"---Bruce Schneier. The aim of the thesis is to investigate the process of designing secure systems, and how designers can ensure that security mechanisms are usable and effective in practice. The research perspective is one of security as a socio-technical system. A review of the literature of security design and Human Computer Interactions in Security (HCISec) reveals that most security design methods adopt either an organisational approach, or a technical focus. And whilst HCISec has identified the need to improve usability in computer security, most of the current research in this area is addressing the issue by improving user interfaces to security tools. Whilst this should help to reduce users' errors and workload, this approach does not address problems which arise from the difficulty of reconciling technical requirements and human factors. To date, little research has been applied to socio-technical approaches to secure system design methods. Both identifying successful socio-technical design approaches and gaining a better understanding of the issues surrounding their application is required to address this gap. Appropriate and Effective Guidance for Information Security (AEGIS) is a socio-technical secure system development methodology developed for this purpose. It takes a risk-based approach to security design and focuses on recreating the contextual information surrounding the system in order to better inform security decisions, with the aim of making these decisions better suited to users' needs. AEGIS uses a graphical notation defined in the UML Meta-Object Facility to provide designers with a familiar and well- supported means of building models. Grid applications were selected as the area in which to apply and validate AEGIS. Using the research methodology Action Research, AEGIS was applied to a total of four Grid case studies. 
    This allowed in the first instance the evaluation and refinement of AEGIS on real-world systems. Through the use of the qualitative data analysis methodology Grounded Theory, the design session transcripts gathered from the Action Research application of AEGIS were then further analysed. The resulting analysis identified important factors affecting the design process, separated into the categories of responsibility, motivation, stakeholders and communication. These categories were then assembled into a model describing the factors and issues that affect socio-technical secure system design. This model therefore provides a key theoretical insight into real-world issues and is a useful foundation for improving current practice and future socio-technical secure system design methodologies

    The Great Scrape: The Clash Between Scraping and Privacy

    Artificial intelligence (AI) systems depend on massive quantities of data, often gathered by “scraping” – the automated extraction of large amounts of data from the internet. A great deal of scraped data is about people. This personal data provides the grist for AI tools such as facial recognition, deep fakes, and generative AI. Although scraping enables web searching, archival, and meaningful scientific research, scraping for AI can also be objectionable or even harmful to individuals and society. Organizations are scraping at an escalating pace and scale, even though many privacy laws are seemingly incongruous with the practice. In this Article, we contend that scraping must undergo a serious reckoning with privacy law. Scraping violates nearly all of the key principles in privacy laws, including fairness; individual rights and control; transparency; consent; purpose specification and secondary use restrictions; data minimization; onward transfer; and data security. With scraping, data protection laws built around these requirements are ignored. Scraping has evaded a reckoning with privacy law largely because scrapers act as if all publicly available data were free for the taking. But the public availability of scraped data shouldn’t give scrapers a free pass. Privacy law regularly protects publicly available data, and privacy principles are implicated even when personal data is accessible to others. This Article explores the fundamental tension between scraping and privacy law. With the zealous pursuit and astronomical growth of AI, we are in the midst of what we call the “great scrape.” There must now be a great reconciliation