11 research outputs found

    Proceedings of the Designing Interactive Secure Systems workshop (DISS 2012).

    In recent years, the field of usable security has attracted researchers from HCI and Information Security, and led to a better understanding of the interplay between human factors and security mechanisms. Despite these advances, designing systems which are both secure in, and appropriate for, their contexts of use continues to frustrate both researchers and practitioners. One reason is a misunderstanding of the role that HCI can play in the design of secure systems. A number of eminent security researchers and practitioners continue to espouse the need to treat people as the weakest link, and encourage designers to build systems that Homer Simpson can use. Unfortunately, treating users as a problem can limit the opportunities for innovation that arise when people are engaged as part of the solution. Similarly, while extreme characters (such as Homer) can be useful for envisaging different modes of interaction, when taken out of context they risk disenfranchising the very people the design is meant to support. Better understanding the relationship between human factors and the design of secure systems is an important step forward, but many design research challenges remain. There is growing evidence that HCI design artefacts can be effective at supporting secure system design, and that some alignment exists between HCI, security, and software engineering activities. However, more work is needed to understand how broader insights from the interactive system design and user experience communities might also find traction in secure design practice. For these insights to lead to innovation in design practice, we also need usability and security evaluation activities that better support interaction design, together with software tools that augment, rather than hinder, these design processes. Last, but not least, we need to share experiences and anecdotes about designing usable and secure systems, and reflect on the different ways of performing and evaluating secure interaction design research. The objective of this workshop is to act as a forum for those interested in the design of interactive secure systems. By bringing together a like-minded community of researchers and practitioners, we hope to share knowledge gleaned from recent research, as well as experience of designing secure and usable systems in practice.

    Closed-loop feedback computation model of dynamical reputation based on the local trust evaluation in business-to-consumer e-commerce

    Trust and reputation are important factors that influence the success of both traditional transactions in physical social networks and modern e-commerce in virtual Internet environments. Trust is difficult to define and quantify because it has both subjective and objective characteristics. A well-reported issue with reputation management systems in business-to-consumer (BtoC) e-commerce is the “all good reputation” problem. To address this problem, a new computational model of reputation is proposed in this paper. The ratings given by each customer are treated as basic trust score events, and the time series of massive ratings is aggregated into the seller’s local temporal trust scores using a Beta distribution. A logical model of trust and reputation is established based on an analysis of the dynamic relationship between trust and reputation. For a single good with repeat transactions, an iterative mathematical model of trust and reputation is established with a closed-loop feedback mechanism. Numerical experiments on repeated transactions recorded over a period of 24 months are performed. The experimental results show that the proposed method provides guidance both for theoretical research into trust and reputation and for the practical design of reputation systems in BtoC e-commerce.
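
    The abstract does not reproduce the paper's formulas; the following minimal Python sketch only illustrates the general idea of aggregating per-customer ratings into a Beta-distribution local trust score and averaging such scores into a reputation value. The class name, the decay-based update rule, and the aggregation function are illustrative assumptions, not the paper's actual model.

        from dataclasses import dataclass

        @dataclass
        class LocalTrust:
            """Beta-distribution trust state for one seller (illustrative)."""
            alpha: float = 1.0  # prior pseudo-count of positive ratings
            beta: float = 1.0   # prior pseudo-count of negative ratings

            def update(self, rating_positive: bool, decay: float = 0.95) -> None:
                # Decay old evidence so the score tracks recent behaviour
                # (an assumed temporal weighting, not the paper's).
                self.alpha *= decay
                self.beta *= decay
                if rating_positive:
                    self.alpha += 1.0
                else:
                    self.beta += 1.0

            @property
            def score(self) -> float:
                # Expected value of Beta(alpha, beta) as the local trust score.
                return self.alpha / (self.alpha + self.beta)

        def reputation(local_scores: list[float]) -> float:
            """Illustrative aggregation of local trust scores into a reputation."""
            return sum(local_scores) / len(local_scores) if local_scores else 0.5

        if __name__ == "__main__":
            t = LocalTrust()
            for positive in [True, True, False, True]:
                t.update(positive)
            print(round(t.score, 3), round(reputation([t.score, 0.9, 0.4]), 3))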

    Trust Management for Public Key Infrastructures: Implementing the X.509 Trust Broker

    A Public Key Infrastructure (PKI) is considered one of the most important techniques used to propagate trust in authentication over the Internet. This technology is based on the trust model defined by the original X.509 (1988) standard, which is composed of three entities: the Certification Authority (CA), the certificate holder (or subject), and the Relying Party (RP). The CA plays the role of a trusted third party between the certificate holder and the RP. In many use cases, this trust model has worked successfully. However, on the Internet, PKI technology currently faces many obstacles that slow down its global adoption. In this paper, we argue that most of these obstacles boil down to one problem, the trust issue: how can an RP trust an unknown CA over the Internet? We demonstrate that the original X.509 trust model is not appropriate for the Internet and must be extended to include a new entity, called the Trust Broker, which helps RPs make trust decisions about CAs. We present an approach to assessing the quality of a certificate, based on the quality of the CA’s policy and the CA’s commitment to it. The Trust Broker, which is proposed for inclusion in the 2016 edition of X.509, could follow this approach to give RPs trust information about CAs. Finally, we present a prototype Trust Broker that demonstrates how RPs can use its services to make informed decisions about certificates in the context of the Web.
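
    The broker's concrete interface is not given in the abstract; the sketch below only illustrates the interaction pattern described, in which a relying party consults a broker about the CA that issued a presented certificate before trusting it. The TrustBroker and CAAssessment types, their fields, and the acceptance threshold are hypothetical.

        from dataclasses import dataclass

        @dataclass
        class CAAssessment:
            """Hypothetical trust information a broker might hold about a CA."""
            ca_name: str
            policy_quality: float     # 0..1, quality of the CA's certificate policy
            policy_commitment: float  # 0..1, evidence that the CA adheres to that policy

            @property
            def trust_level(self) -> float:
                return min(self.policy_quality, self.policy_commitment)

        class TrustBroker:
            """Toy in-memory broker; a real broker would be a remote service."""
            def __init__(self) -> None:
                self._assessments: dict[str, CAAssessment] = {}

            def register(self, assessment: CAAssessment) -> None:
                self._assessments[assessment.ca_name] = assessment

            def assess(self, ca_name: str) -> CAAssessment | None:
                return self._assessments.get(ca_name)

        def relying_party_accepts(broker: TrustBroker, issuer: str, threshold: float = 0.7) -> bool:
            """The RP accepts a certificate only if the broker vouches for its issuer."""
            assessment = broker.assess(issuer)
            return assessment is not None and assessment.trust_level >= threshold

        if __name__ == "__main__":
            broker = TrustBroker()
            broker.register(CAAssessment("Example CA", policy_quality=0.9, policy_commitment=0.8))
            print(relying_party_accepts(broker, "Example CA"))  # True
            print(relying_party_accepts(broker, "Unknown CA"))  # False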

    The Alchemy of Trust: The Creative Act of Designing Trustworthy Socio-Technical Systems

    Trust is recognised as a significant and valuable component of socio-technical systems, facilitating numerous important benefits. Many trust models have been created across various streams of literature, describing trust for different stakeholders in different contexts. However, when designing a system with multiple stakeholders in their multiple contexts, how does one decide which trust model(s) to apply? And furthermore, how does one go from selecting a model or models to translating those into a design? We review and analyse two prominent trust models and apply them to the design of a trustworthy socio-technical system, namely virtual research environments. We show that a singular model cannot easily be imported and directly implemented in the design of such a system. We introduce the concept of alchemy as the most apt characterisation of a successful design process, illustrating the need for designers to engage with the richness of the trust landscape and creatively experiment with components from multiple models to create the perfect blend for their context. We provide a demonstrative case study illustrating the process through which designers of socio-technical systems can become alchemists of trust.

    Functionality-based application confinement: A parameterised and hierarchical approach to policy abstraction for rule-based application-oriented access controls

    Access controls are traditionally designed to protect resources from users, and consequently make access decisions based on the identity of the user, treating all processes as if they are acting on behalf of the user that runs them. However, this user-oriented approach is insufficient for protecting against contemporary threats, where security compromises are often due to applications running malicious code, whether because of software vulnerabilities or malware. Application-oriented access controls can mitigate this threat by managing the authority of individual applications. Rule-based application-oriented access controls can restrict applications to only the specific finely grained resources required to carry out their tasks, and thus can significantly limit the damage that malicious code can cause. Unfortunately, existing application-oriented access controls have policy complexity and usability problems that have limited their use. This thesis proposes a new access control model, known as functionality-based application confinement (FBAC). The FBAC model has a number of unique features designed to overcome problems with previous approaches. Policy abstractions, known as functionalities, are used to assign authority to applications based on the features they provide. Functionalities authorise elaborate sets of finely grained privileges based on high-level security goals, and adapt to the needs of specific applications through parameterisation. FBAC is hierarchical, which enables it to provide layers of abstraction and encapsulation in policy. It also simultaneously enforces the security goals of both users and administrators by providing discretionary and mandatory controls. An LSM-based (Linux Security Module) prototype implementation, known as FBAC-LSM, was developed as a proof of concept and used to evaluate the new model and associated techniques. The policy requirements of over one hundred applications were analysed, and policy abstractions and application policies were developed. Analysis showed that the FBAC model is capable of representing the privilege needs of applications. The model is also well suited to automation techniques that can in many cases create complete application policies a priori, that is, without first running the applications. This is an improvement over previous approaches, which typically rely on learning modes to generate policies. A usability study was conducted, which showed that compared to two widely deployed alternatives (SELinux and AppArmor), FBAC-LSM had significantly higher perceived usability and resulted in significantly more protective policies. Qualitative analysis gave further insight into the issues surrounding the usability of application-oriented access controls and confirmed the success of the FBAC model.
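
    FBAC-LSM's actual policy language is not shown in the abstract; the sketch below only illustrates the idea of parameterised, hierarchical functionalities that expand into sets of fine-grained privileges. The Functionality class, the privilege string syntax, and the example functionalities are assumptions made for illustration.

        from dataclasses import dataclass, field

        @dataclass
        class Functionality:
            """A parameterised policy abstraction that expands into privileges.

            Privileges are template strings; parameters (such as a download
            directory) are substituted when the functionality is assigned to an
            application. Nested functionalities model the hierarchy.
            """
            name: str
            privilege_templates: list[str] = field(default_factory=list)
            children: list["Functionality"] = field(default_factory=list)

            def expand(self, **params: str) -> set[str]:
                privileges = {t.format(**params) for t in self.privilege_templates}
                for child in self.children:
                    privileges |= child.expand(**params)
                return privileges

        # Hypothetical example: a web_browser functionality built from lower-level ones.
        network_client = Functionality("network_client", ["net:connect:tcp:80", "net:connect:tcp:443"])
        downloader = Functionality("downloader", ["fs:write:{download_dir}/*"])
        web_browser = Functionality("web_browser", ["fs:read:/etc/ssl/certs/*"],
                                    children=[network_client, downloader])

        if __name__ == "__main__":
            # Assign the functionality to an application, parameterised per user.
            print(sorted(web_browser.expand(download_dir="/home/alice/Downloads")))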

    Usable privacy and security in smart homes

    Ubiquitous computing devices increasingly dominate our everyday lives, including our most private places: our homes. Homes equipped with interconnected, context-aware computing devices are considered “smart” homes. To provide their functionality and features, these devices are typically equipped with sensors and are thus capable of collecting, storing, and processing sensitive user data, such as presence in the home. At the same time, these devices are prone to novel threats, making our homes vulnerable by opening them to attackers from outside as well as from within the home. For instance, remote attackers who digitally gain access to presence data can plan a physical burglary. Attackers who are physically present with access to devices could access associated (sensitive) user data and exploit it for further cyberattacks. As such, users’ privacy and security are at risk in their own homes. Even worse, many users are unaware of this and/or have limited means to take action. This raises the need for usable mechanisms that can support users in protecting their smart home setups. The design of such mechanisms, however, is challenging due to the variety and heterogeneity of devices available on the consumer market and the complex interplay of user roles within this context. This thesis contributes to usable privacy and security research in the context of smart homes by a) understanding users’ privacy perceptions and requirements for usable mechanisms and b) investigating concepts and prototypes for privacy and security mechanisms. The focus is on two specific target groups: inhabitants and guests of smart homes. In particular, this thesis targets their awareness of potential privacy and security risks, enables them to take control over their personal privacy and security, and illustrates considerations for usable authentication mechanisms. This thesis provides valuable insights to help researchers and practitioners design and evaluate privacy and security mechanisms for future smart devices and homes, particularly targeting awareness, control, and authentication, as well as various user roles.

    Kooperative Angriffserkennung in drahtlosen Ad-hoc- und Infrastrukturnetzen: Anforderungsanalyse, Systementwurf und Umsetzung (Cooperative Intrusion Detection in Wireless Ad Hoc and Infrastructure Networks: Requirements Analysis, System Design, and Implementation)

    The increasing deployment of mobile devices and accompanying services leads to new security challenges. Due to the changed premises caused by the particular features of mobile systems, these challenges cannot be met solely by traditional security paradigms and mechanisms. Using the example of wireless LANs, this thesis examines the development of security mechanisms for wireless ad hoc and infrastructure networks. It places special emphasis on the comprehensive protection of each individual device, with devices cooperating to compensate for missing infrastructural security measures. As a starting point, the thesis analyses the characteristics of mobile environments to identify basic requirements for a security solution. Based on these requirements, existing preventive, reactive, and intrusion-tolerant approaches are evaluated and compared. This leads to the conception of a hybrid and universal framework architecture for integrating arbitrary security mechanisms within cooperative formations. The resulting system design is validated by a two-part prototype implementation. The first part consists of a distributed network intrusion detection system as an example of a security mechanism. After describing a methodology for applying anomaly- and misuse-based detection strategies to arbitrary network protocols, the feasibility of this approach is demonstrated for infrastructure wireless LAN according to IEEE 802.11. The second part of the validation is the prototype of a peer-to-peer-based cooperation middleware for collaborative intrusion detection by loosely coupled devices. It compensates for previously missing mechanisms for mapping the overlay network onto the physical structure of wireless networks by additionally taking the spatial position of mobile nodes into account when choosing a cooperation partner. Furthermore, an interface to an external trust management system enables the establishment of trust relationships at the cooperation level as a prerequisite for deployment in real-world scenarios. Reputation systems serve as an example of such a trust management system and can be used to estimate the reliability of a mobile node. After a brief outline of the state of the art in this area, two proposals for the design of such a system for mobile ad hoc networks are presented.
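
    The middleware's protocol is not detailed in the abstract; the sketch below only illustrates the idea of preferring a spatially close, sufficiently trusted peer as cooperation partner. The node model, the Euclidean distance metric, and the trust threshold are illustrative assumptions.

        import math
        from dataclasses import dataclass

        @dataclass
        class PeerNode:
            """A mobile node in the cooperation overlay (fields are illustrative)."""
            node_id: str
            x: float
            y: float
            trust: float  # 0..1, e.g. supplied by an external reputation system

        def distance(a: PeerNode, b: PeerNode) -> float:
            return math.hypot(a.x - b.x, a.y - b.y)

        def choose_partner(me: PeerNode, candidates: list[PeerNode],
                           min_trust: float = 0.5) -> PeerNode | None:
            """Prefer the nearest sufficiently trusted peer for cooperative detection."""
            trusted = [c for c in candidates if c.trust >= min_trust and c.node_id != me.node_id]
            return min(trusted, key=lambda c: distance(me, c), default=None)

        if __name__ == "__main__":
            me = PeerNode("n0", 0.0, 0.0, 1.0)
            peers = [PeerNode("n1", 5.0, 1.0, 0.9), PeerNode("n2", 1.0, 1.0, 0.3),
                     PeerNode("n3", 2.0, 0.5, 0.8)]
            partner = choose_partner(me, peers)
            print(partner.node_id if partner else None)  # n3: nearest peer above the trust threshold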

    Human decision-making in computer security incident response

    Background: Cybersecurity has risen to international importance. Almost every organization will fall victim to a successful cyberattack. Yet, guidance for computer security incident response analysts is inadequate. Research Questions: What heuristics should an incident analyst use to construct general knowledge and analyse attacks? Can we construct formal tools to enable automated decision support for the analyst with such heuristics and knowledge? Method: We take an interdisciplinary approach. To answer the first question, we use the research tradition of philosophy of science, specifically the study of mechanisms. To answer the question on formal tools, we use the research tradition of program verification and logic, specifically Separation Logic. Results: We identify several heuristics from the biological sciences that cybersecurity researchers have re-invented to varying degrees. We consolidate the new mechanisms literature to yield heuristics related to the fact that knowledge consists of clusters of multi-field mechanism schemas along four dimensions. General knowledge structures such as the intrusion kill chain provide context and supply hypotheses for filling in details. The philosophical analysis answers this research question and also provides constraints on building the logic. Finally, we succeed in defining an incident analysis logic resembling Separation Logic and translating the kill chain into it as a proof of concept. Conclusion: These results benefit incident analysis, enabling it to expand from a tradecraft or art to also integrate science. Future research might turn our logic into automated decision support. Additionally, we have opened the field of cybersecurity to collaboration with philosophers of science and logicians.
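
    The thesis's logic is not reproduced in the abstract; the toy sketch below only illustrates how a general knowledge structure such as the intrusion kill chain can organise incident evidence and expose the gaps an analyst must still hypothesise about. The evidence model and the analyse function are assumptions, not the proposed Separation-Logic-style formalism.

        from enum import Enum

        class KillChainStage(Enum):
            RECONNAISSANCE = 1
            WEAPONIZATION = 2
            DELIVERY = 3
            EXPLOITATION = 4
            INSTALLATION = 5
            COMMAND_AND_CONTROL = 6
            ACTIONS_ON_OBJECTIVES = 7

        def analyse(evidence: dict[KillChainStage, list[str]]) -> dict[str, list[KillChainStage]]:
            """Split stages into those backed by evidence and earlier stages still unobserved."""
            observed = [s for s in KillChainStage if evidence.get(s)]
            latest = max((s.value for s in observed), default=0)
            gaps = [s for s in KillChainStage if not evidence.get(s) and s.value < latest]
            return {"observed": observed, "hypothesised_gaps": gaps}

        if __name__ == "__main__":
            evidence = {
                KillChainStage.DELIVERY: ["phishing email with macro attachment"],
                KillChainStage.COMMAND_AND_CONTROL: ["beaconing to a known C2 domain"],
            }
            result = analyse(evidence)
            print([s.name for s in result["observed"]])
            print([s.name for s in result["hypothesised_gaps"]])  # stages the analyst must still account for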

    From Resilience-Building to Resilience-Scaling Technologies: Directions -- ReSIST NoE Deliverable D13

    This document is the second product of work package WP2, "Resilience-building and -scaling technologies", in the programme of jointly executed research (JER) of the ReSIST Network of Excellence. The problem that ReSIST addresses is achieving sufficient resilience in the immense systems of ever-evolving networks of computers and mobile devices, tightly integrated with human organisations and other technology, that are increasingly becoming a critical part of the information infrastructure of our society. This second deliverable, D13, provides a detailed list of research gaps identified by experts from the four working groups related to assessability, evolvability, usability, and diversity.

    A framework to evaluate user experience of end user application security features

    The use of technology in society has moved from satisfying the technical needs of users to providing a lasting user experience while interacting with the technology. Continuous technological advancements have led to a diversity of emerging security concerns, and it is necessary to balance security issues with user interaction. Designers have adapted to this reality by practising user-centred design during product development to cater for the experiential needs of user-product interaction. These user-centred design best practices and standards ensure that security features are incorporated within end user programs (EUP). The primary function of an EUP is not security, and interacting with security features while performing a program-related task presents the end user with an extra burden. Evaluation mechanisms exist to assess the performance of the EUP and the user’s experience of the product interaction. Security evaluation standards focus on program code security as well as on the security functionality of programs designed for security. However, little attention has been paid to evaluating the user experience of the functionality offered by embedded security features. A qualitative case study using problem-based and design science research approaches was conducted to address the lack of criteria for evaluating user experience with embedded security features. User study findings reflect poor user experience with EUP security features, mainly as a result of low awareness of their existence, their location, and sometimes even their importance. From a literature review of the information security and user experience domains and the user study survey findings, four components of the framework were identified, namely: end user characteristics, information security, user experience, and end user program security feature characteristics. This thesis focuses on developing a framework that can be used to evaluate the user experience of interacting with end user program security features. The framework was designed following the design science research method and was reviewed by peers and experts for its suitability to address the problem. Subject experts in the fields of information security and human-computer interaction were engaged, as the research is multidisciplinary. This thesis contributes to the body of knowledge on information security and on the user experience elements of human-computer interaction security, specifically on how to evaluate the user experience of embedded InfoSec features. The research adds uniquely to the literature in the area of human-computer interaction security evaluation and measurement in general, and of end user program security features in particular. The proposed metrics for evaluating the UX of interacting with EUP security features were used to propose interventions to influence UX in an academic setting. Besides presenting UX evaluation strategies for EUP security features, the framework also provides a platform for further academic research on the human factors of information security. Its impact can be evaluated by assessing security behaviour and successful security breaches, as well as the user experience of interaction with end user programs.
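
    The framework's metrics are not reproduced in the abstract; as a purely illustrative sketch, the four identified components could be tied together in a simple evaluation record such as the one below, where the field names and the scoring scale are assumptions.

        from dataclasses import dataclass

        @dataclass
        class SecurityFeatureUXEvaluation:
            """Illustrative record covering the framework's four components."""
            user_security_awareness: int   # end user characteristics (assumed 1-5 scale)
            feature_visibility: int        # EUP security feature characteristics
            perceived_protection: int      # information security
            interaction_satisfaction: int  # user experience

            def overall_score(self) -> float:
                parts = (self.user_security_awareness, self.feature_visibility,
                         self.perceived_protection, self.interaction_satisfaction)
                return sum(parts) / len(parts)

        if __name__ == "__main__":
            evaluation = SecurityFeatureUXEvaluation(2, 1, 4, 2)
            print(evaluation.overall_score())  # low visibility and satisfaction drag the score down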