Cluster Dependent Classifiers for Online Signature Verification
In this paper, the applicability of the notion of a cluster-dependent classifier for online signature verification is investigated. For every writer, using a number of training samples, a representative is selected based on a minimum-average-distance (centroid) criterion across all the samples of that writer. The k-means clustering algorithm is then employed to cluster the writers based on the chosen representatives. To select a suitable classifier for a writer, the equal error rate (EER) is estimated using each of the classifiers for every writer in a cluster. The classifier which gives the lowest EER for a writer is selected as the suitable classifier for that writer. Once the classifier for each writer in a cluster is decided, the classifier selected for the maximum number of writers in that cluster is adopted as the classifier for all writers of that cluster. During verification, the authenticity of a query signature is decided using the classifier selected for the cluster to which the claimed writer belongs. In contrast to existing works on online signature verification, which use a common classifier for all writers during verification, our work is based on a classifier which is cluster dependent. Our intuition is to use the same classifier for all and only those writers who share common characteristics, and to use different classifiers for writers with different characteristics. To demonstrate the efficacy of our model, extensive experiments are carried out on the MCYT online signature dataset (DB1), consisting of signatures of 100 individuals. The outcome of the experiments, indicative of increased performance with the adoption of a cluster-dependent classifier, seems to open up a new avenue for further investigation on a reasonably large dataset.
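The selection procedure the abstract describes can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: `choose_representative` and `cluster_classifier` are hypothetical names, the feature vectors and EER table are assumed inputs, and the k-means clustering step between them is omitted.

```python
import numpy as np

def choose_representative(samples):
    """Pick the sample with minimum average distance to the writer's
    other samples -- the centroid criterion described in the abstract.
    samples: (n, d) array of feature vectors for one writer (assumed)."""
    dists = np.linalg.norm(samples[:, None, :] - samples[None, :, :], axis=-1)
    return samples[dists.mean(axis=1).argmin()]

def cluster_classifier(eer_table):
    """eer_table: one dict per writer in a cluster, mapping classifier
    name -> that writer's EER (hypothetical format). Each writer prefers
    the classifier with the lowest EER; the cluster adopts the classifier
    preferred by the most writers (majority vote)."""
    prefs = [min(row, key=row.get) for row in eer_table]
    return max(set(prefs), key=prefs.count)
```

The writers would first be clustered on their representatives (e.g. with k-means), and `cluster_classifier` would then be applied once per cluster.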
Neural Machine Translation Inspired Binary Code Similarity Comparison beyond Function Pairs
Binary code analysis allows analyzing binary code without having access to
the corresponding source code. A binary, after disassembly, is expressed in an
assembly language. This inspires us to approach binary analysis by leveraging
ideas and techniques from Natural Language Processing (NLP), a rich area
focused on processing text of various natural languages. We notice that binary
code analysis and NLP share a lot of analogical topics, such as semantics
extraction, summarization, and classification. This work utilizes these ideas
to address two important code similarity comparison problems. (I) Given a pair
of basic blocks for different instruction set architectures (ISAs), determining
whether their semantics is similar or not; and (II) given a piece of code of
interest, determining if it is contained in another piece of assembly code for
a different ISA. The solutions to these two problems have many applications,
such as cross-architecture vulnerability discovery and code plagiarism
detection. We implement a prototype system INNEREYE and perform a comprehensive
evaluation. A comparison between our approach and existing approaches to
Problem I shows that our system outperforms them in terms of accuracy,
efficiency and scalability. And the case studies utilizing the system
demonstrate that our solution to Problem II is effective. Moreover, this
research showcases how to apply ideas and techniques from NLP to large-scale
binary code analysis.
Comment: Accepted by Network and Distributed Systems Security (NDSS) Symposium 201
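To make Problem I concrete: treating assembly as text means a basic block can be represented as a vector and compared numerically. The sketch below uses a deliberately crude bag-of-opcodes representation with cosine similarity; it only illustrates the framing, not INNEREYE's actual pipeline, which learns instruction embeddings rather than counting opcodes. All function names here are hypothetical.

```python
import math
from collections import Counter

def opcode_counts(block):
    """Toy normalization (assumed): represent a basic block, given as a
    list of assembly lines, by a histogram of its opcodes."""
    return Counter(line.split()[0] for line in block if line.strip())

def block_similarity(block_a, block_b):
    """Cosine similarity between two opcode histograms; 1.0 means
    identical opcode distributions, 0.0 means none shared."""
    a, b = opcode_counts(block_a), opcode_counts(block_b)
    dot = sum(a[op] * b[op] for op in a)       # Counter defaults missing keys to 0
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0
```

Cross-ISA comparison, as in the paper, additionally requires mapping instructions from different architectures into a shared space, which a learned embedding provides and this opcode histogram does not.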
Higher-Order Process Modeling: Product-Lining, Variability Modeling and Beyond
We present a graphical and dynamic framework for binding and execution of
(business) process models. It is tailored to integrate 1) ad hoc processes
modeled graphically, 2) third party services discovered in the (Inter)net, and
3) (dynamically) synthesized process chains that solve situation-specific
tasks, with the synthesis taking place not only at design time, but also at
runtime. Key to our approach is the introduction of type-safe stacked
second-order execution contexts that allow for higher-order process modeling.
Tamed by our underlying strict service-oriented notion of abstraction, this
approach is tailored also to be used by application experts with little
technical knowledge: users can select, modify, construct and then pass
(component) processes during process execution as if they were data. We
illustrate the impact and essence of our framework along a concrete, realistic
(business) process modeling scenario: the development of Springer's
browser-based Online Conference Service (OCS). The most advanced feature of our
new framework allows one to combine online synthesis with the integration of
the synthesized process into the running application. This ability leads to a
particularly flexible way of implementing self-adaption, and to a particularly
concise and powerful way of achieving variability not only at design time, but
also at runtime.
Comment: In Proceedings Festschrift for Dave Schmidt, arXiv:1309.455
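The core idea of passing (component) processes during execution as if they were data has a loose analogy in first-class functions. The sketch below is only that analogy in Python, not the framework's graphical modeling language or its type-safe execution contexts; all names are hypothetical.

```python
from typing import Any, Callable, Dict

# A process is modeled as a first-class value: a function from an
# execution context to an updated context (assumed representation).
Process = Callable[[Dict[str, Any]], Dict[str, Any]]

def sequence(*steps: Process) -> Process:
    """Compose processes into a process chain. The chain is itself a
    Process, so it can be selected, modified, or synthesized at runtime
    and passed on like any other value."""
    def chain(ctx: Dict[str, Any]) -> Dict[str, Any]:
        for step in steps:
            ctx = step(ctx)
        return ctx
    return chain

def run(step: Process, ctx: Dict[str, Any]) -> Dict[str, Any]:
    # Second-order execution: the process to run arrives as a parameter.
    return step(ctx)
```

In the framework itself this pattern is tamed by type-safe stacked second-order execution contexts, which the untyped dictionary here does not capture.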