
    Handgrip pattern recognition

    There are numerous tragic gun deaths each year. Making handguns safer by personalizing them could prevent most such tragedies. Personalized handguns, also called smart guns, are handguns that can only be fired by the authorized user. Handgrip pattern recognition holds great promise for the development of the smart gun. Two algorithms, a static analysis algorithm and a dynamic analysis algorithm, were developed to identify the patterns in how a person grasps a handgun. The static analysis algorithm measured 160 subjects' fingertip placements on a replica gun handle. Cluster analysis and discriminant analysis were applied to these fingertip placements, and a classification tree was built to find the fingertip pattern for each subject. The dynamic analysis algorithm collected and measured 24 subjects' handgrip pressure waveforms during the trigger-pulling stage. A handgrip recognition algorithm was developed to find the correct pattern. A DSP box was built so that handgrip pattern recognition could be performed in real time. A real gun was used to evaluate the handgrip recognition algorithm. The results show that such a handgrip recognition system works well as a prototype.
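
    The static-analysis pipeline (per-subject grip features fed to a classification tree) can be pictured with a short sketch. The 8-feature grip encoding, the synthetic data, and the scikit-learn tree below are hypothetical stand-ins, not the paper's actual preprocessing:

        # Illustrative sketch only: subjects classified from fingertip
        # placements with a decision tree. Feature layout and data are
        # assumptions made for the example.
        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        rng = np.random.default_rng(0)
        n_subjects, grips_per_subject = 10, 20

        # Each subject has a characteristic grip (cluster center) plus noise;
        # 8 features = assumed (x, y) placements of four fingertips.
        centers = rng.normal(size=(n_subjects, 8))
        X = np.vstack([c + 0.3 * rng.normal(size=(grips_per_subject, 8))
                       for c in centers])
        y = np.repeat(np.arange(n_subjects), grips_per_subject)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y,
                                                  random_state=0)
        tree = DecisionTreeClassifier(max_depth=6, random_state=0)
        tree.fit(X_tr, y_tr)
        print(f"held-out accuracy: {tree.score(X_te, y_te):.2f}")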

    Interval valued symbolic representation of writer dependent features for online signature verification

    This work focuses on exploiting the notion of writer-dependent parameters for online signature verification. Writer-dependent parameters, namely features, decision threshold, and feature dimension, have been well exploited for effective verification. For each writer, a subset of the original set of features is selected using different filter-based feature selection criteria. This is in contrast to writer-independent approaches, which work on a common set of features for all writers. Once features for each writer are selected, they are represented in the form of an interval-valued symbolic feature vector. The number of features and the decision threshold to be used for each writer during verification are decided based on the equal error rate (EER) estimated with only the signatures considered for training the system. To demonstrate the effectiveness of the proposed approach, extensive experiments are conducted on both the MCYT (DB1) and MCYT (DB2) benchmarking online signature datasets, consisting of signatures of 100 and 330 individuals respectively, using the available 100 global parametric features. © 2017 Elsevier Ltd.
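
    The interval-valued representation can be sketched as follows: each feature of a writer is summarized by an interval built from that writer's training signatures, and a test signature is scored by how many of its features fall inside those intervals. The mean-plus-or-minus-std interval width and the fraction-based score are assumptions for illustration, not the paper's exact construction; per the abstract, the acceptance threshold itself would be tuned per writer from the training EER:

        # Hedged sketch of an interval-valued symbolic feature vector.
        import numpy as np

        def train_intervals(train_feats):        # (n_signatures, n_features)
            mu = train_feats.mean(axis=0)
            sigma = train_feats.std(axis=0)
            return mu - sigma, mu + sigma        # per-feature intervals

        def score(intervals, feats):             # fraction of in-interval features
            lo, hi = intervals
            return float(np.mean((feats >= lo) & (feats <= hi)))

        rng = np.random.default_rng(1)
        genuine = rng.normal(0.0, 1.0, size=(15, 100))   # 100 global features
        forgery = rng.normal(0.8, 1.2, size=100)

        iv = train_intervals(genuine[:10])       # build intervals from training set
        print("genuine score:", score(iv, genuine[12]))
        print("forgery score:", score(iv, forgery))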

    MLCapsule: Guarded Offline Deployment of Machine Learning as a Service

    With the widespread use of machine learning (ML) techniques, ML as a service has become increasingly popular. In this setting, an ML model resides on a server and users can query it with their data via an API. However, if the user's input is sensitive, sending it to the server is undesirable and sometimes not even legally possible. Equally, the service provider does not want to share the model by sending it to the client, in order to protect its intellectual property and pay-per-query business model. In this paper, we propose MLCapsule, a guarded offline deployment of machine learning as a service. MLCapsule executes the model locally on the user's side, so the data never leaves the client. Meanwhile, MLCapsule offers the service provider the same level of control and security over its model as the commonly used server-side execution. In addition, MLCapsule is applicable to offline applications that require local execution. Beyond protecting against direct model access, we couple the secure offline deployment with defenses against advanced attacks on machine learning models such as model stealing, reverse engineering, and membership inference.
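
    As a conceptual sketch of this deployment model only (the abstract does not detail MLCapsule's actual isolation mechanism), a client-side capsule can serve predictions locally while metering queries for the provider's pay-per-query model; the class and names below are hypothetical stand-ins:

        # Conceptual sketch only: the model runs on the client, so inputs
        # never leave it, while the provider retains control via a query
        # budget. Real protection would be hardware-backed; this plain
        # class is a hypothetical illustration.
        class LocalCapsule:
            def __init__(self, model, query_budget):
                self._model = model              # provisioned in sealed form
                self._remaining = query_budget

            def predict(self, x):
                if self._remaining <= 0:
                    raise RuntimeError("query budget exhausted")
                self._remaining -= 1
                return self._model(x)            # input x stays on the client

        capsule = LocalCapsule(model=lambda x: x > 0, query_budget=2)
        print(capsule.predict(3.5))              # True; nothing sent to a server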

    Automatic Signature Verification (ASV) in e-commerce

    In the offline world, payments are often made over the counter with some level of human inspection. However, such inspection does not exist in the online world. Online transactions are carried out virtually on a remote application server. Though other technical security measures have been introduced for online transactions, such as the use of encryption, digital signatures, and digital certificates, the issue of trust is still largely a major problem in e-commerce since actual authentication of users is not often established. A solution to this is to use biometric Automatic Signature Verification (ASV) systems, where human identification is carried out automatically based on signatures. The main advantage of ASV over other biometric technologies is that its applications are widely accepted and generally acknowledged by the public, because signatures have long been used as proof of identity in legal documents and financial transactions. Additionally, an ASV system allows the extraction of dynamic information that describes the way a signature is actually executed in terms of velocity, acceleration, pen pressure, pen inclination, etc. Many signature experts believe the dynamic information of the signing operation is generally consistent and stable throughout one's lifetime. This in turn is more secure simply because it is harder to imitate a human signing operation than to reproduce signature images of another person. Since ASV allows for remote networked authentication, it appears promising for most e-commerce applications. This paper describes ASV's potential, its current applications, and its impediments in e-commerce related activities. It also addresses areas for ASV improvement.
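
    The dynamic information mentioned above (velocity, acceleration, and similar channels) can be derived from sampled pen coordinates. A minimal sketch, with an assumed 100 Hz sampling rate and a synthetic trajectory; a full ASV system would also use pressure and inclination channels plus a matcher such as DTW:

        # Minimal sketch: velocity and acceleration from sampled pen
        # coordinates. Sampling rate and trajectory are assumptions.
        import numpy as np

        fs = 100.0                               # assumed sampling rate (Hz)
        t = np.arange(0, 1, 1 / fs)
        x = np.cos(2 * np.pi * t)                # synthetic pen trajectory
        y = np.sin(4 * np.pi * t)

        vx, vy = np.gradient(x, 1 / fs), np.gradient(y, 1 / fs)
        speed = np.hypot(vx, vy)                 # pen-tip velocity magnitude
        accel = np.gradient(speed, 1 / fs)       # tangential acceleration

        print(f"mean speed {speed.mean():.2f}, "
              f"peak |accel| {np.abs(accel).max():.2f}")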

    GuardNN: Secure DNN Accelerator for Privacy-Preserving Deep Learning

    This paper proposes GuardNN, a secure deep neural network (DNN) accelerator, which provides strong hardware-based protection for user data and model parameters even in an untrusted environment. GuardNN shows that the architecture and protection can be customized for a specific application to provide strong confidentiality and integrity protection with negligible overhead. The design of the GuardNN instruction set reduces the TCB to just the accelerator and enables confidentiality protection without the overhead of integrity protection. GuardNN also introduces a new application-specific memory protection scheme to minimize the overhead of memory encryption and integrity verification. The scheme shows that most of the off-chip metadata in today's state-of-the-art memory protection can be removed by exploiting the known memory access patterns of a DNN accelerator. GuardNN is implemented as an FPGA prototype, which demonstrates effective protection with less than 2% performance overhead for inference over a variety of modern DNN models.
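
    The core idea behind removing off-chip metadata can be illustrated abstractly: because a DNN accelerator's memory access pattern is known in advance, encryption counters can be derived from indices such as (layer, tile, pass) rather than stored in DRAM. The sketch below uses SHA-256 as a stand-in keystream purely for illustration; it is not GuardNN's actual construction:

        # Illustration only: deterministic counters derived from a known
        # access pattern, so no per-block counter table lives off-chip.
        import hashlib

        KEY = b"\x00" * 16                       # placeholder device key

        def keystream(layer, tile, pass_idx, n):
            # Counter is recomputed from indices instead of fetched from DRAM.
            ctr = f"{layer}:{tile}:{pass_idx}".encode()
            out, i = b"", 0
            while len(out) < n:
                out += hashlib.sha256(KEY + ctr + i.to_bytes(4, "big")).digest()
                i += 1
            return out[:n]

        def xor(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        weights = b"model-weights-tile"
        ct = xor(weights, keystream(layer=3, tile=7, pass_idx=0, n=len(weights)))
        assert xor(ct, keystream(3, 7, 0, len(ct))) == weights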

    Applications Of Machine Learning In Biology And Medicine

    Machine learning as a field is defined to be the set of computational algorithms that improve their performance by assimilating data. As such, the field as a whole has found applications in many diverse disciplines, from robotics and communication in engineering to economics and finance, as well as biology and medicine. It should not come as a surprise that many popular methods in use today have completely different origins. Despite this heterogeneity, different methods can be divided into standard tasks, such as supervised, unsupervised, semi-supervised, and reinforcement learning. Although machine learning as a field can be formalized as methods trying to solve certain standard tasks, applying these tasks to datasets from different fields comes with certain caveats, and sometimes is fraught with challenges. In this thesis, we develop general procedures and novel solutions dealing with practical problems that arise when modeling biological and medical data.

    Cost-sensitive learning is an important area of research in machine learning which addresses the widespread and practical problem of dealing with different costs during the learning and deployment of classification algorithms. In many applications, such as credit fraud detection, network intrusion, and specifically medical diagnosis domains, prior class distributions are highly skewed, which makes the training examples highly unbalanced. Combining this with uneven misclassification costs renders standard machine learning approaches useless for learning an acceptable decision function. We experimentally show the benefits and shortcomings of various methods that convert cost-blind learning algorithms to cost-sensitive ones. Using the results and best practices found for cost-sensitive learning, we design and develop a machine learning approach to ontology mapping.

    Next, we present a novel approach to deal with uncertainty in classification when costs are unknown or otherwise hard to assign. Support Vector Machines (SVMs) are considered to be among the most successful approaches for classification. However, prediction of instances near the decision boundary depends more on the specific parameter selection or noise in the data than on a clear difference in features. In many applications, such as medical diagnosis, these regions should be labeled as uncertain rather than assigned to any particular class. Furthermore, instances may belong to novel disease subtypes that are not from any previously known class. In such applications, declining to make a prediction can be beneficial when more powerful but expensive tests are available. We develop a novel approach for optimal selection of the threshold and show its successful application on three biological and medical datasets.

    The last part of this thesis provides novel solutions for handling high-dimensional data. Although high-dimensional data is ubiquitous in many disciplines, current life science research almost always involves high-dimensional genomics/proteomics data. The "omics" data provide a wealth of information and have changed the research landscape in biology and medicine. However, these data are plagued with noise, redundancy, and collinearity, which makes the discovery process very difficult and costly. Any method that can accurately detect irrelevant and noisy variables in omics data would be highly valuable. We present Robust Feature Selection (RFS), a randomized feature selection approach dedicated to low-sample high-dimensional data. RFS combines an embedded feature selection method with a randomization procedure for stability. Recent advances in sparse recovery and estimation methods have provided efficient and asymptotically consistent feature selection algorithms. However, these methods lack finite-sample error control due to instability. Furthermore, the chances of correct recovery diminish with more collinearity among features. To overcome these difficulties, RFS uses a randomization procedure to provide an accurate and stable feature selection method. We thoroughly evaluate RFS by comparing it to a number of popular univariate and multivariate feature selection methods, and show marked prediction accuracy improvement of a diagnostic signature while preserving good stability.
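
    A generic stability-selection sketch in the spirit of RFS (though not the thesis's exact algorithm): fit an embedded, L1-penalized selector on many random subsamples and keep the features that are selected in a large fraction of runs. The data, penalty, and 0.6 frequency cutoff below are illustrative choices:

        # Randomized feature selection for low-sample high-dimensional data.
        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        n, p, informative = 60, 500, 5           # low-sample, high-dimensional
        X = rng.normal(size=(n, p))
        y = (X[:, :informative].sum(axis=1)
             + 0.5 * rng.normal(size=n) > 0).astype(int)

        runs, counts = 50, np.zeros(p)
        for _ in range(runs):
            idx = rng.choice(n, size=n // 2, replace=False)   # random subsample
            clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.3)
            clf.fit(X[idx], y[idx])
            counts += (clf.coef_.ravel() != 0)   # which features were selected

        stable = np.flatnonzero(counts / runs >= 0.6)   # selection frequency
        print("stable features:", stable)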

    Trajectory-Based Spatiotemporal Entity Linking

    Trajectory-based spatiotemporal entity linking matches the same moving object across different datasets based on its movement traces. It is a fundamental step in supporting spatiotemporal data integration and analysis. In this paper, we study the problem of spatiotemporal entity linking using effective and concise signatures extracted from trajectories. This linking problem is formalized as a k-nearest neighbor (k-NN) query on the signatures. Four representation strategies (sequential, temporal, spatial, and spatiotemporal) and two quantitative criteria (commonality and unicity) are investigated for signature construction. A simple yet effective dimension reduction strategy is developed, together with a novel indexing structure called the WR-tree, to speed up the search. A number of optimization methods are proposed to improve the accuracy and robustness of the linking. Our extensive experiments on real-world datasets verify the superiority of our approach over the state-of-the-art solutions in terms of both accuracy and efficiency.
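
    One way to picture the spatial representation strategy: map each trajectory to a histogram over grid cells and link entities with a k-NN search over those signatures. The grid-histogram signature, cosine distance, and brute-force k-NN below are illustrative choices, not the paper's WR-tree method:

        # Illustrative spatial signature plus k-NN linking across two datasets.
        import numpy as np
        from sklearn.neighbors import NearestNeighbors

        def signature(traj, grid=8):             # traj: (n, 2) in [0, 1)
            cells = np.clip((traj * grid).astype(int), 0, grid - 1)
            hist = np.zeros(grid * grid)
            np.add.at(hist, cells[:, 0] * grid + cells[:, 1], 1)
            return hist / max(hist.sum(), 1)     # normalized visit histogram

        rng = np.random.default_rng(2)
        base = [rng.random((200, 2)) * 0.4 + rng.random(2) * 0.5
                for _ in range(20)]              # 20 moving objects
        ds_a = [t + rng.normal(0, 0.01, t.shape) for t in base]  # dataset A
        ds_b = [t + rng.normal(0, 0.01, t.shape) for t in base]  # dataset B

        sig_a = np.array([signature(t) for t in ds_a])
        sig_b = np.array([signature(t) for t in ds_b])

        nn = NearestNeighbors(n_neighbors=1, metric="cosine").fit(sig_b)
        _, match = nn.kneighbors(sig_a)
        print("correct links:", (match.ravel() == np.arange(len(ds_a))).mean())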

    Self-Reliance for the Internet of Things: Blockchains and Deep Learning on Low-Power IoT Devices

    The rise of the Internet of Things (IoT) has transformed common embedded devices from isolated objects into interconnected devices, enabling applications in smart cities, smart logistics, and digital health, to name but a few. These Internet-enabled embedded devices have sensors and actuators that interact with the real world. IoT interactions produce an enormous amount of data, typically stored on cloud services due to the resource limitations of IoT devices. These limitations have made IoT applications highly dependent on cloud services. However, cloud services face several challenges, especially in terms of communication, energy, scalability, and transparency regarding their information storage. In this thesis, we study how to enable the next generation of IoT systems with transaction automation and machine learning capabilities while reducing reliance on cloud communication. To achieve this, we look into architectures and algorithms for data provenance, automation, and machine learning that conventionally run on powerful high-end devices. We redesign and tailor these architectures and algorithms to low-power IoT, balancing the computational, energy, and memory requirements. The thesis is divided into three parts. Part I presents an overview of the thesis and states four research questions addressed in later chapters. Part II investigates and demonstrates the feasibility of data provenance and transaction automation with blockchains and smart contracts on IoT devices. Part III investigates and demonstrates the feasibility of deep learning on low-power IoT devices. We provide experimental results for all of the proposed high-level architectures and methods. Our results show that algorithms from high-end cloud nodes can be tailored to IoT devices, and we quantify the main trade-offs in terms of memory, computation, and energy consumption.
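
    As one generic example of the memory trade-offs involved in tailoring deep learning to low-power devices (not a method taken from the thesis), 8-bit weight quantization cuts storage fourfold relative to float32 at a small reconstruction error:

        # Generic illustration: symmetric linear quantization of weights.
        import numpy as np

        w = np.random.default_rng(3).normal(size=1000).astype(np.float32)

        scale = np.abs(w).max() / 127.0          # map weight range onto int8
        q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
        w_hat = q.astype(np.float32) * scale     # dequantized approximation

        print(f"bytes: {w.nbytes} -> {q.nbytes}")
        print(f"max abs error: {np.abs(w - w_hat).max():.4f}")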