
    Chip and Skim: cloning EMV cards with the pre-play attack

    EMV, also known as "Chip and PIN", is the leading system for card payments worldwide. It is used throughout Europe and much of Asia, and is starting to be introduced in North America too. Payment cards contain a chip so they can execute an authentication protocol. This protocol requires point-of-sale (POS) terminals or ATMs to generate a nonce, called the unpredictable number, for each transaction to ensure it is fresh. We have discovered that some EMV implementers have merely used counters, timestamps or home-grown algorithms to supply this number. This exposes them to a "pre-play" attack which is indistinguishable from card cloning from the standpoint of the logs available to the card-issuing bank, and can be carried out even if it is impossible to clone a card physically (in the sense of extracting the key material and loading it into another card). Card cloning is the very type of fraud that EMV was supposed to prevent. We describe how we detected the vulnerability, a survey methodology we developed to chart the scope of the weakness, evidence from ATM and terminal experiments in the field, and our implementation of proof-of-concept attacks. We found flaws in widely-used ATMs from the largest manufacturers. We can now explain at least some of the increasing number of frauds in which victims are refused refunds by banks which claim that EMV cards cannot be cloned and that a customer involved in a dispute must therefore be mistaken or complicit. Pre-play attacks may also be carried out by malware in an ATM or POS terminal, or by a man-in-the-middle between the terminal and the acquirer. We explore the design and implementation mistakes that enabled the flaw to evade detection until now: shortcomings of the EMV specification, of the EMV kernel certification process, of implementation testing, formal analysis, and monitoring of customer complaints. Finally, we discuss countermeasures.
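
    The weakness is easy to illustrate in code. The following Python sketch, purely hypothetical and not taken from the paper, contrasts the counter- and timestamp-based "unpredictable number" generators described above with a nonce drawn from a cryptographically secure source; the function names and the 4-byte length are illustrative assumptions.

```python
import secrets
import time

# Hypothetical illustration of the flaw described above: a terminal or ATM must
# supply a fresh "unpredictable number" (UN) for each transaction. If the UN is
# predictable, an attacker with brief access to a card can obtain cryptograms
# for future transactions in advance and replay them later (a pre-play attack).

def weak_un_from_counter(counter: int) -> bytes:
    """Counter-based UN: trivially predictable after observing a few transactions."""
    return counter.to_bytes(4, "big")

def weak_un_from_timestamp() -> bytes:
    """Timestamp-based UN: guessable when the attacker can estimate the time of
    the victim transaction."""
    return int(time.time()).to_bytes(4, "big")

def random_un() -> bytes:
    """UN drawn from a cryptographically secure source, as the protocol intends."""
    return secrets.token_bytes(4)

if __name__ == "__main__":
    print("counter UN:  ", weak_un_from_counter(1042).hex())
    print("timestamp UN:", weak_un_from_timestamp().hex())
    print("random UN:   ", random_un().hex())
```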

    Common ELIXIR Service for Researcher Authentication and Authorisation

    Linden M, Prochazka M, Lappalainen I, et al. Common ELIXIR Service for Researcher Authentication and Authorisation. F1000Research. 2018;7:1199. A common Authentication and Authorisation Infrastructure (AAI) that would allow single sign-on to services has been identified as a key enabler for European bioinformatics. ELIXIR AAI is an ELIXIR service portfolio for authenticating researchers to ELIXIR services and assisting these services with user privileges during research usage. It relieves the scientific service providers from managing user identities and authorisation themselves, enables the researcher to have a single set of credentials for all ELIXIR services, and supports meeting the requirements imposed by data protection laws. ELIXIR AAI was launched in late 2016 and is part of the ELIXIR Compute platform portfolio. By the end of 2017 the number of users had reached 1000, while the number of relying scientific services was 36. This paper presents the requirements and design of the ELIXIR AAI and the policies related to its use, and how it can be used to serve some example services, such as document management, social media, data discovery, human data access, cloud compute and training services.
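
    As a rough illustration of how a relying service can defer user management to such an AAI, the hypothetical Python sketch below checks an entitlement asserted by the identity provider; the claim names and entitlement URN are invented for illustration and are not the actual ELIXIR AAI schema.

```python
# Hypothetical sketch of how a relying scientific service might consume
# entitlement information asserted by an AAI instead of managing user
# accounts itself. The claim names and entitlement URN are illustrative
# assumptions, not the actual ELIXIR AAI schema.

REQUIRED_ENTITLEMENT = "urn:example:group:dataset-42:read"

def is_authorised(claims: dict) -> bool:
    """Grant access if the AAI-asserted claims carry the required entitlement."""
    return REQUIRED_ENTITLEMENT in claims.get("entitlements", [])

# Claims as they might arrive after single sign-on, once the token carrying
# them has already been validated by lower layers.
claims = {
    "sub": "researcher@example.org",
    "entitlements": [
        "urn:example:group:dataset-42:read",
        "urn:example:group:training:member",
    ],
}

print(is_authorised(claims))  # True
```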

    Guidelines for Secure Operation of Attribute Authorities and issuers of statements for entities (G071)

    These guidelines describe the minimum requirements and recommendations for the secure operation of attribute authorities and similar services that make statements about an entity based on well-defined attributes. Adherence to these guidelines may help to establish trust between communities, operators of attribute authorities and issuers, and Relying Parties, infrastructures, and service providers. This document does not define an accreditation process.

    Future Open Mobile Services

    The major barriers to the success of mobile data services are the lack of comprehensible mobile service architectures, their confusing business models, and the complexity and inconsistency of the technology enablers. This paper attempts to present a more structured and comprehensive analysis of the current mobile service architectures and their technology enablers. The paper starts with a thorough study of the evolution of mobile services and their business models, and a collection of the expectations of the different actors, including the end user. Next, starting from the original mobile service architecture and environment, the different technology enablers are placed in relation to each other and to their position in the mobile system. Each technology enabler, together with its contribution to the enhancement of mobile services, is then summarised in a complete and comprehensive way. The paper concludes with a recapitulation of the achievements of the state-of-the-art technology enablers and an identification of future improvements.

    Guidelines for Secure Operation of Attribute Authorities and other issuers of access-granting statements

    These guidelines describe the minimum requirements and recommendations for the secure operation of Attribute Authorities and similar services providing statements for the purpose of obtaining access to infrastructure services. Stated compliance with these guidelines may help to establish trust between issuers and Relying Parties. This document does not define an accreditation process.
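
    To make the trust relationship concrete, the following hypothetical Python sketch shows the kind of checks a Relying Party might apply before acting on an access-granting statement (trusted issuer, valid signature, not expired); the field names and issuer list are assumptions for illustration, since the guidelines define operational requirements rather than a message format.

```python
from datetime import datetime, timezone

# Hypothetical checks a Relying Party might perform on an attribute statement
# before using it in an access decision. Field names, the trusted-issuer list
# and the externally supplied signature result are illustrative assumptions.

TRUSTED_ISSUERS = {"https://aa.example.org"}

def statement_acceptable(stmt: dict, signature_valid: bool) -> bool:
    """Accept a statement only from a trusted issuer, with a valid signature,
    that has not yet expired."""
    if stmt.get("issuer") not in TRUSTED_ISSUERS:
        return False
    if not signature_valid:
        return False
    expires = datetime.fromisoformat(stmt["expires"])
    return expires > datetime.now(timezone.utc)

stmt = {
    "issuer": "https://aa.example.org",
    "subject": "user@example.org",
    "attributes": {"group": "vo-members"},
    "expires": "2031-01-01T00:00:00+00:00",
}
print(statement_acceptable(stmt, signature_valid=True))  # True
```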

    Streamlining governmental processes by putting citizens in control of their personal data

    Governments typically store large amounts of personal information on their citizens, such as a home address, marital status, and occupation, to offer public services. Because governments consist of various governmental agencies, multiple copies of this data often exist. This raises concerns regarding data consistency, privacy, and access control, especially under recent legal frameworks such as the GDPR. To solve these problems, and to give citizens true control over their data, we explore an approach using the decentralised Solid ecosystem, which enables citizens to maintain their data in personal data pods. We have applied this approach to two high-impact use cases, where citizen information is stored in personal data pods, and both public and private organisations are selectively granted access. Our findings indicate that Solid allows reshaping the relationship between citizens, their personal data, and the applications they use in the public and private sector. We strongly believe that the insights from this Flemish Solid Pilot can speed up the process for public administrations and private organisations that want to put users in control of their data.
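
    The access pattern described above can be sketched in a simplified, purely illustrative way as a personal data pod whose owner selectively grants and revokes read access to individual organisations; the Python model below is a toy under stated assumptions, not the Solid ecosystem's actual API or its access control format.

```python
# Hypothetical, simplified model of the idea described above: a citizen keeps
# personal data in a pod and selectively grants organisations read access.
# In-memory illustration only; not Solid's real protocols or data formats.

class PersonalDataPod:
    def __init__(self, owner: str):
        self.owner = owner
        self._data: dict[str, str] = {}
        self._grants: dict[str, set[str]] = {}  # resource -> agents with read access

    def store(self, resource: str, value: str) -> None:
        self._data[resource] = value

    def grant_read(self, resource: str, agent: str) -> None:
        self._grants.setdefault(resource, set()).add(agent)

    def revoke_read(self, resource: str, agent: str) -> None:
        self._grants.get(resource, set()).discard(agent)

    def read(self, resource: str, agent: str) -> str:
        if agent != self.owner and agent not in self._grants.get(resource, set()):
            raise PermissionError(f"{agent} may not read {resource}")
        return self._data[resource]

pod = PersonalDataPod(owner="citizen:alice")
pod.store("address", "Example Street 1, Ghent")
pod.grant_read("address", "org:city-services")
print(pod.read("address", "org:city-services"))   # allowed while the grant stands
pod.revoke_read("address", "org:city-services")   # the citizen stays in control
```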

    Efficient Enforcement of Security Policies in Distributed Systems

    Policy-based management (PBM) is an adaptable security policy mechanism for information systems (IS) that ensures only authorised users can access resources. A few decades ago, traditional PBM focused on closed systems, where enforcement mechanisms are trusted by the system administrators who define access control policies. Most current work on PBM systems focuses on designing a centralised policy decision point (PDP), the component that evaluates an access request against a policy and reports the decision back, which can have performance and resilience drawbacks. Performance and resilience are a major concern for applications in the military, health and national security domains, where performance is desirable to increase situational awareness through collaboration and to decrease the length of the decision-making cycle. The centralised PDP also represents a single point of failure: if it fails, all resources in the system may cease to function. The efficient distribution of enforcement mechanisms is therefore key to building large-scale policy-managed distributed systems. Moving from traditional PBM systems to dynamic PBM systems supports dynamic adaptability of behaviour by changing policy without recoding or stopping the system. The SANTA history-based dynamic PBM system has a formal underpinning in Interval Temporal Logic (ITL), allowing formal analysis and verification to take place. The main aim of this research is to automatically distribute enforcement mechanisms in the distributed system in order to provide resilience against network failure whilst preserving the efficiency of policy decision making. The policy formalisation is based on the SANTA policy model to provide a high level of assurance. The contribution of this work addresses the challenges of performance, manageability and security by designing a decentralised PBM framework and a corresponding Distributed Enforcements Architecture (DENAR). The ability to enforce static and dynamic security policies in DENAR is the prime research issue, balancing the desire to distribute systems for flexibility whilst maintaining sufficient security over operations. Our research developed mechanisms to improve the efficiency of enforcing security policies and their resilience against network failures in distributed information systems.
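
    The resilience argument can be illustrated with a small, hypothetical Python sketch in which an enforcement point fails over between replicated policy decision points, so that the loss of a single PDP does not halt decision making; this is a conceptual toy under default-deny assumptions, not the SANTA or DENAR design.

```python
# Hypothetical sketch of decision-making resilience through replicated policy
# decision points (PDPs). An enforcement point asks each replica in turn and
# fails over on errors, so one PDP failure does not stop the system.
# Conceptual illustration only; not the SANTA or DENAR design.

POLICY = {
    ("alice", "read", "report"): True,
    ("bob", "write", "report"): False,
}

class PDP:
    def __init__(self, name: str, available: bool = True):
        self.name = name
        self.available = available

    def decide(self, subject: str, action: str, resource: str) -> bool:
        if not self.available:
            raise ConnectionError(f"PDP {self.name} unreachable")
        return POLICY.get((subject, action, resource), False)  # default deny

def enforce(pdps: list, subject: str, action: str, resource: str) -> bool:
    """Enforcement point: try each PDP replica, tolerating unreachable ones."""
    for pdp in pdps:
        try:
            return pdp.decide(subject, action, resource)
        except ConnectionError:
            continue  # fail over to the next replica
    return False  # no PDP reachable: deny by default

pdps = [PDP("pdp-1", available=False), PDP("pdp-2"), PDP("pdp-3")]
print(enforce(pdps, "alice", "read", "report"))  # True, despite pdp-1 being down
```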