
    LiS: Lightweight Signature Schemes for continuous message authentication in cyber-physical systems

    Agency for Science, Technology and Research (A*STAR) RIE 202

    Learning Representations from Persian Handwriting for Offline Signature Verification, a Deep Transfer Learning Approach

    Offline Signature Verification (OSV) is a challenging pattern recognition task, especially when it is expected to generalize well to skilled forgeries that are not available during training. Its challenges also include small training sets and large intra-class variations. Considering these limitations, we propose a novel transfer learning approach from the Persian handwriting domain to the multi-language OSV domain. We train two Residual CNNs on the source domain separately, based on two different tasks: word classification and writer identification. Since identifying a person's signature resembles identifying their handwriting, handwriting is a natural choice for the feature learning phase. The representation learned on the more varied and plentiful handwriting dataset can compensate for the lack of training data in the original task, i.e. OSV, without sacrificing generalizability. Our proposed OSV system includes two steps: learning a representation and verifying the input signature. In the first step, the signature images are fed into the trained Residual CNNs. The output representations are then used to train SVMs for verification. We test our OSV system on three different signature datasets: MCYT (a Spanish signature dataset), UTSig (a Persian one) and GPDS-Synthetic (an artificial dataset). On UTSig, we achieved a 9.80% Equal Error Rate (EER), a substantial improvement over the best EER in the literature, 17.45%. Our proposed method surpassed the state of the art on GPDS-Synthetic by 6%, achieving an EER of 6.81%. On MCYT, an EER of 3.98% was obtained, which is comparable to the best previously reported results.
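    As a concrete illustration of the two-step pipeline above, the following minimal Python sketch extracts deep features from signature images and trains a writer-dependent SVM on them. It is not the authors' system: their Residual CNNs trained on Persian handwriting are replaced here by a generic ImageNet-pretrained ResNet-18, and all file names are hypothetical placeholders.

        import numpy as np
        import torch
        from PIL import Image
        from sklearn.svm import SVC
        from torchvision import models, transforms

        # Stand-in backbone; the paper instead trains Residual CNNs on Persian handwriting.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        backbone.fc = torch.nn.Identity()   # keep the penultimate representation
        backbone.eval()

        preprocess = transforms.Compose([
            transforms.Grayscale(num_output_channels=3),  # signature scans are grayscale
            transforms.Resize((224, 224)),
            transforms.ToTensor(),
            transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
        ])

        def embed(path):
            """Map one signature image to a fixed-length feature vector."""
            img = preprocess(Image.open(path)).unsqueeze(0)
            with torch.no_grad():
                return backbone(img).squeeze(0).numpy()

        # Hypothetical training and query samples for one writer.
        genuine_paths = ["genuine_01.png", "genuine_02.png"]
        forgery_paths = ["forgery_01.png", "forgery_02.png"]
        query_path = "questioned.png"

        # Verification step: a writer-dependent SVM separates genuine samples from forgeries.
        X = np.stack([embed(p) for p in genuine_paths + forgery_paths])
        y = [1] * len(genuine_paths) + [0] * len(forgery_paths)
        svm = SVC(kernel="rbf", gamma="scale", probability=True).fit(X, y)
        score = svm.predict_proba(embed(query_path).reshape(1, -1))[0, 1]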

    Multiple generation of Bengali static signatures

    Handwritten signature datasets are essential for developing and training automatic signature verification systems. Ideally, all samples in a signature dataset should exhibit both inter-personal and intra-personal variability. One way to model this reality is through the synthesis of signatures. In this paper we propose a method based on motor equivalence model theory to generate static Bengali signatures. This theory divides the human act of writing mainly into a cognitive level and a motor level. Due to differences between scripts, we have redesigned our previous synthesizer [1,2], which generates static Western signatures. The experiments assess, through a performance-based validation, whether this method can approach the intra- and inter-personal variability of the Bengali-100 Static Signature DB. The similarities reported in the experimental results prove the ability of the synthesizer to generate signature images in this script.
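    The cognitive/motor split can be pictured with a toy sketch: an action plan of virtual target points stands in for the cognitive level, and smooth strokes with lognormal speed profiles stand in for the motor level. This is only an illustration of the idea, not the authors' synthesizer; all parameters below are invented.

        import numpy as np

        rng = np.random.default_rng(0)

        # Cognitive level: an action plan, modeled here as a sequence of virtual targets.
        targets = np.cumsum(rng.normal(scale=[5.0, 2.0], size=(8, 2)), axis=0)

        def lognormal_progress(n, mu=-1.5, sigma=0.4):
            """Normalized progress along a stroke under a lognormal speed profile."""
            t = np.linspace(1e-3, 1.0, n)
            v = np.exp(-(np.log(t) - mu) ** 2 / (2 * sigma ** 2)) / (t * sigma * np.sqrt(2 * np.pi))
            return np.cumsum(v) / v.sum()

        # Motor level: execute the plan as smooth strokes between successive targets.
        strokes = [p0 + lognormal_progress(50)[:, None] * (p1 - p0)
                   for p0, p1 in zip(targets[:-1], targets[1:])]
        trajectory = np.concatenate(strokes)   # rasterizing this yields a static signature image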

    Mitigator: Privacy policy compliance using Intel SGX

    Privacy policies are known to be hard for internet users to read and understand, yet users are obliged to accept these one-sided terms for the use of their data before they can effectively use websites. Although research has been conducted into alternative representations of privacy policies, it does not consider whether the website provider actually adheres to the data handling practices outlined in the privacy policy. There has, however, been significant research towards achieving compliance of internal processing systems with access control policies that capture some aspects of privacy policies, such as those related to the confidentiality of collected information, the time period of its retention, and its disclosure to third parties. Apart from the fact that these access control policies may not be designed to be translatable to machine-readable or simplified text policies, such systems suffer from two related drawbacks: first, they assume a large trusted computing base (TCB); in particular, the operating system is included within their TCB. Second, as they are only aimed at achieving compliance of different internal data processing systems with these access control policies, they do not seek to provide users with any proof of a compliant system.
    Trusted hardware, on the other hand, seeks to reduce the TCB on a remote machine that a user needs to trust in order to run a program and obtain its results. Trusted hardware platforms provide two novel security properties: first, they prevent a malicious operating system from learning secrets from the program state; second, they allow the user to verify that the OS has not modified the program before or while running it, as long as the user trusts the hardware platform. Our goal is to design an architecture that uses an underlying trusted hardware platform to run a program, named the decryptor, that only hands users' data to a target program that has been determined to be compliant with a privacy policy model. As both of these programs run on a trusted hardware platform, users can verify that the decryptor is indeed the correct, unmodified program. Most importantly, our architecture provides a client program with trustworthy information about the verifier program used on the server side, so that the client can ensure that the target program has been checked for compliance with a privacy policy model by a valid verifier program. Such a verifier program should be made open source so that it can be checked by experts.
    Our second contribution lies in implementing this architecture on the Intel SGX hardware platform, using a shim layer, namely the Graphene-SGX library. Finally, we evaluate our system for efficiency and find that it has a very small overhead in comparison with a setup that does not provide such guarantees.
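    The trust chain described above can be sketched in a few lines: the auditable verifier signs the hash of a target program it has judged compliant, and the decryptor releases user data only after checking that signature. This is a conceptual sketch, not the Mitigator implementation; SGX attestation of the decryptor itself is elided, and the program bytes are a hypothetical stand-in.

        import hashlib
        from cryptography.hazmat.primitives.asymmetric import ed25519

        target_program_bytes = b"<target program binary>"   # hypothetical stand-in

        # The open-source verifier signs the hash of each program found compliant
        # with the privacy policy model.
        verifier_key = ed25519.Ed25519PrivateKey.generate()
        target_hash = hashlib.sha256(target_program_bytes).digest()
        compliance_token = verifier_key.sign(target_hash)

        # The decryptor (which the client has remotely attested) hands users' data
        # to the target program only if the compliance token verifies.
        def decryptor_release(user_data, program_bytes, token, verifier_pub):
            digest = hashlib.sha256(program_bytes).digest()
            verifier_pub.verify(token, digest)   # raises InvalidSignature on failure
            return user_data                     # data flows only to the checked program

        plaintext = decryptor_release(b"user form fields", target_program_bytes,
                                      compliance_token, verifier_key.public_key())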

    Security in signalling and digital signatures


    A method for creating digital signature policies

    Increased political pressure towards a more efficient public sector has resulted in the proliferation of electronic documents and associated technologies such as Digital Signatures. Whilst Digital Signatures provide electronic document security functions, they do not confer the legal meaning of a signature, which captures the conditions under which a signature can be deemed legally valid. Whilst in the paper world this information is often communicated implicitly, verbally, or through notes within the document itself, in the electronic world a technological tool is required to communicate this meaning; one such technological aid is the Digital Signature Policy. In a transaction where the legality of a signature must be established, a Digital Signature Policy can convey the contextual information required to make such a judgment. The Digital Signature Policy captures information such as the terms to which a signatory wishes to bind himself, the actual legal clauses and acts being invoked by the process of signing, and the conditions under which a signatory's signature is deemed legally valid. As this is a relatively new technology, little literature exists on the topic. This research was conducted as an Action Research collaboration with a Spanish public sector organisation that sought to introduce Digital Signature Policy technology; its specific research problem was that the production of Digital Signature Policies was time-consuming, resource-intensive, arduous, and suffered from a lack of quality. The research therefore sought to develop a new and improved method for creating Digital Signature Policies. The researcher collaborated with the problem owner, as is typical of Participative Action Research. The research resulted in the development of a number of Information Systems artefacts and of a method for creating Digital Signature Policies, and finally reached a stage where the problem owner could successfully develop the research further without the researcher's input.
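    To make the notion concrete, the following hypothetical, heavily simplified policy object shows the kind of contextual information the abstract describes. Real Digital Signature Policies follow standardized formats; every field name and value here is invented for illustration.

        import json

        policy = {
            "policy_id": "urn:example:sig-policy:procurement:v1",
            # terms to which the signatory wishes to bind himself
            "commitments": ["I approve the attached contract"],
            # legal clauses and acts invoked by the process of signing
            "legal_basis": ["Example eSignature Act, art. 3"],
            # conditions under which the signature is deemed legally valid
            "validity_conditions": {
                "required_certificate_level": "qualified",
                "timestamp_required": True,
                "allowed_algorithms": ["RSA-2048", "ECDSA-P256"],
            },
        }
        print(json.dumps(policy, indent=2))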

    Data Protection for the Internet of Things

    The Internet of Things (abbreviated “IoT”) is acknowledged as one of the most important disruptive technologies, with more than 16 billion devices forecast to interact autonomously by 2020. The idea is simple: devices help to measure the status of physical objects. The devices, containing sensors and actuators, are so small that they can be integrated into or attached to any object in order to measure that object and possibly change its status accordingly. A process or workflow is then able to interact with those devices and to control the objects physically. The result is the collection of massive data in ubiquitous form. This data can be analysed to gain new insights, a benefit propagated by the “Big Data” and “Smart Data” paradigms. While governments, cities and industries are heavily involved in the Internet of Things, society's privacy awareness and concerns over data protection in IoT increase steadily. The scale of the collection, processing and dissemination of possibly private information in the Internet of Things has long begun to raise privacy concerns. The problem is a fundamental one: the massive data collection that justifies investment in IoT contradicts the interest in data minimization coming from privacy advocates. The challenges go even further: while privacy is an actively researched topic with a mature variety of privacy-preserving mechanisms, legal studies and surveillance studies in specific contexts, investigations of how to apply these concepts in the constrained environment of IoT have only just begun. The objective of this thesis is therefore threefold; it tackles several topics, looking at them in a differentiated way and later bringing them together for one of the first (more) complete pictures of privacy in IoT.
    The first starting point is a thorough study of stakeholders, impact areas and proposals for an architectural reference model for IoT. At the time of writing, IoT was advertised heavily by several companies, products and even governments, creating a blurred picture of what IoT really is. This thesis surveys stakeholders, scenarios, architecture paradigms and definitions to find a working definition for IoT which adequately describes the intersection of all the aforementioned topics. In a further step, the definition is applied, by way of example, to two scenarios to identify the common building blocks of those scenarios and of IoT in general. The building blocks are then verified against a similar approach by the IoT-A and Rerum projects and unified into an IoT domain model. This approach purposefully uses notions and paradigms provided in related scientific work and European projects in order to benefit from existing efforts and to achieve a common understanding. In this thesis, the observation of so-called cyber-physical properties of IoT leads to the conclusion that IoT proposals miss a core concept of physical interaction in the “real world”. Accordingly, this thesis takes a detour into jurisdiction and identifies ownership and possession as a main concept of “human-to-object” relationships. The analysis of IoT building blocks ends with an enhanced IoT domain model.
    The next step breaks down “privacy by design”. Notably, privacy by design has been well integrated into the new European General Data Protection Regulation (GDPR). This regulation heavily affects IoT and thus serves as the main source of privacy requirements. Gürses et al.'s privacy paradigm (privacy as confidentiality, privacy as control and privacy as practice) is used for the breakdown, preceded by a survey of relevant privacy proposals, where relevance was measured against the previously identified IoT impact areas and stakeholders. Independently of IoT, this thesis shows that privacy engineering is a task that still needs to be well understood; a privacy development lifecycle is therefore sketched as a first step in this direction. Existing privacy technologies are part of the survey. Current research is summed up to show that, while many schemes exist, few are adequate for actual application in IoT due to their high energy or computational consumption and high implementation costs (most notably caused by the implementation of special arithmetics). In an effort to give a first direction towards possible new privacy-enhancing technologies for IoT, new technical schemes are presented, formally verified and evaluated. The proposals comprise schemes on, among other things, relaxed integrity protection, privacy-friendly authentication and authorization, and geo-location privacy. The schemes were presented to industry partners with positive results and have been published in academia and as intellectual property items.
    This thesis concludes by bringing privacy and IoT together. The final result is a privacy-enhanced IoT domain model accompanied by a set of assumptions regarding stakeholders, economic impacts, economic and technical constraints, as well as formally verified and evaluated proof-of-concept technologies for privacy in IoT. There is justifiable interest in IoT, as it helps to tackle many future challenges found in several impact areas. At the same time, IoT affects the stakeholders that participate in those areas, creating the need for a unification of IoT and privacy. This thesis shows that technical and economic constraints do not impede such a process, although the process has only just begun.

    On-line signature recognition through the combination of real dynamic data and synthetically generated static data

    This is the author's version of a work that was accepted for publication in Pattern Recognition. Changes resulting from the publishing process, such as peer review, editing, corrections, structural formatting, and other quality control mechanisms, may not be reflected in this document, and changes may have been made to this work since it was submitted for publication. A definitive version was subsequently published in Pattern Recognition, 48, 9 (2015), DOI: 10.1016/j.patcog.2015.03.019.
    On-line signature verification still remains a challenging task within biometrics. Due to their behavioral nature (as opposed to anatomical biometric traits), signatures present notable variability even between successive realizations. This leads to higher error rates than other widely used modalities such as iris or fingerprints and is one of the main reasons for the relatively slow deployment of this technology. As a step towards improving signature recognition accuracy, the present paper explores and evaluates a novel approach that takes advantage of the performance boost that can be reached through the fusion of on-line and off-line signatures. In order to exploit the complementarity of the two modalities, we propose a method for the generation of enhanced synthetic static samples from on-line data. Such synthetic off-line signatures are used in a new on-line signature recognition architecture based on the combination of both types of data: real on-line samples and artificial off-line signatures synthesized from the real data. The new on-line recognition approach is evaluated on a public benchmark containing both real versions (on-line and off-line) of the exact same signatures. Different findings and conclusions are drawn regarding the discriminative power of on-line and off-line signatures and their potential combination in both the random and skilled impostor scenarios.
    M. D.-C. is supported by a PhD fellowship from the ULPGC and M. G.-B. is supported by an FPU fellowship from the Spanish MECD. This work has been partially supported by projects MCINN TEC2012-38630-C04-02 and Bio-Shield (TEC2012-34881) from the Spanish MINECO, BEAT (FP7-SEC-284989) from the EU, CECABANK and Cátedra UAM-Telefónica.
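    A minimal sketch of the on-line-to-off-line conversion described above, assuming an on-line signature given as x, y and pressure samples: the trajectory is rasterized into a grayscale image, with pressure modulating stroke width. The paper's ink-deposition model is more elaborate; this stand-in only illustrates the direction of the synthesis.

        import numpy as np
        from PIL import Image, ImageDraw

        def render_static(x, y, pressure, size=(400, 200), max_width=4):
            """Rasterize an on-line signature into a synthetic off-line image."""
            img = Image.new("L", size, color=255)
            draw = ImageDraw.Draw(img)
            # fit the trajectory onto the canvas with a small margin
            x = (x - x.min()) / (np.ptp(x) + 1e-9) * (size[0] - 20) + 10
            y = (y - y.min()) / (np.ptp(y) + 1e-9) * (size[1] - 20) + 10
            for i in range(1, len(x)):
                if pressure[i] <= 0:   # pen-up movement deposits no ink
                    continue
                w = max(1, round(pressure[i] * max_width))  # pressure drives stroke width
                draw.line([x[i - 1], y[i - 1], x[i], y[i]], fill=0, width=w)
            return img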

    ICFHR 2020 Competition on Short answer ASsessment and Thai student SIGnature and Name COMponents Recognition and Verification (SASIGCOM 2020)

    This paper describes the results of the competition on Short answer ASsessment and Thai student SIGnature and Name COMponents Recognition and Verification (SASIGCOM 2020), held in conjunction with the 17th International Conference on Frontiers in Handwriting Recognition (ICFHR 2020). The competition aimed to automate the evaluation process for short-answer-based examinations, to record the state of development of such systems, and to draw attention to them. The competition contains three elements: short answer assessment (recognition and marking of the answers to short-answer questions derived from examination papers), recognition and verification of student name components (first and last names), and signature recognition and verification. Signature and name component data were collected from 100 volunteers. For the Thai signature dataset, there are 30 genuine signatures, 12 skilled forgeries and 12 simple forgeries per writer. For the Thai name components dataset, there are 30 genuine and 12 skilfully forged name components per writer. The short answer assessment dataset contains 104 exam papers, 52 of which were written in cursive handwriting; the remaining 52 were written in printed handwriting. The exam papers contain ten questions, and the answers were designed to be a few words per question. Three teams from distinguished labs submitted their systems. For short answer assessment, a word spotting task was also performed. This paper analyses the results produced by the submitted algorithms using a performance measure and defines a way forward for this subject of research. Both datasets, along with some of the accompanying ground truth/baseline masks, will be made freely available for research purposes via the TC10/TC11.
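    Verification tracks in such competitions are commonly ranked by the Equal Error Rate; the abstract does not state the exact performance measure used, so the following is a generic sketch of EER computation from a system's similarity scores, with made-up example values.

        import numpy as np
        from sklearn.metrics import roc_curve

        def equal_error_rate(labels, scores):
            """EER: the operating point where false accept and false reject rates meet."""
            far, tar, _ = roc_curve(labels, scores)  # labels: 1 = genuine, 0 = forgery
            frr = 1 - tar
            i = np.nanargmin(np.abs(frr - far))
            return (far[i] + frr[i]) / 2

        labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])
        scores = np.array([0.91, 0.84, 0.77, 0.45, 0.52, 0.31, 0.22, 0.10])
        print(f"EER = {equal_error_rate(labels, scores):.2%}")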

    Selected Papers from the First International Symposium on Future ICT (Future-ICT 2019) in Conjunction with 4th International Symposium on Mobile Internet Security (MobiSec 2019)

    The International Symposium on Future ICT (Future-ICT 2019), in conjunction with the 4th International Symposium on Mobile Internet Security (MobiSec 2019), was held on 17–19 October 2019 in Taichung, Taiwan. The symposium provided academic and industry professionals with an opportunity to discuss the latest issues and progress in advancing smart applications based on future ICT and their related security. The symposium aimed to publish high-quality papers strictly related to the various theories and practical applications concerning advanced smart applications, future ICT, and related communications and networks. It was expected that the symposium and its publications would trigger further related research and technology improvements in this field.