2,438 research outputs found

    Process of Fingerprint Authentication using Cancelable Biohashed Template

    Get PDF
    Template protection using cancelable biometrics prevents data loss and the hacking of stored templates by providing considerable privacy and security. Hashing and salting techniques are used to build resilient systems. The salted-password method protects passwords against several types of attack, namely brute-force, dictionary, and rainbow-table attacks. Salting means that random data are added to the input of a hash function to ensure a unique output; hashing salts are speed bumps in an attacker's road to breaching a user's data. This research proposes a contemporary two-factor authenticator called Biohashing. The biohashing procedure is implemented as a repeated inner product between a key drawn from a pseudo-random number generator and fingerprint features that form a network of minutiae. Cancelable-template authentication at a fingerprint-based sales counter accelerates the payment process. The fingerhash is the code produced by applying biohashing to a fingerprint: a binary string obtained by choosing each bit from the sign of the projection relative to a preset threshold. Experiments are carried out on the benchmark FVC2002 DB1 dataset. Authentication accuracy is found to be nearly 97%, and a comparison with state-of-the-art approaches finds the results promising
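    The abstract does not give the exact biohashing parameters, but the procedure it describes (repeated inner products against a token-seeded pseudo-random basis, followed by sign thresholding) can be sketched in a few lines. A minimal illustration, assuming a fixed-length feature vector already extracted from the minutiae network; the function names, the 64-bit length, and the zero threshold are illustrative:

```python
import numpy as np

def biohash(features: np.ndarray, user_seed: int, n_bits: int = 64, tau: float = 0.0) -> np.ndarray:
    """BioHash sketch: project features onto user-specific pseudo-random
    directions and binarize each inner product against a preset threshold."""
    rng = np.random.default_rng(user_seed)  # token-derived PRNG key
    # Pseudo-random projection basis, orthonormalized for stability.
    basis, _ = np.linalg.qr(rng.standard_normal((features.size, n_bits)))
    projections = features @ basis                 # repeated inner products
    return (projections > tau).astype(np.uint8)    # the "fingerhash" bit string

# Hypothetical usage: a minutiae-derived feature vector and a user token.
x = np.random.rand(128)
fingerhash = biohash(x, user_seed=42)
print(fingerhash)
```

    Because the projection basis depends on the user's token, a compromised fingerhash can be revoked by reissuing the token, which is what makes the template cancelable.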

    Representing Input Transformations by Low-Dimensional Parameter Subspaces

    Full text link
    Deep models lack robustness to simple input transformations such as rotation, scaling, and translation, unless they feature a particular invariant architecture or undergo specific training, e.g., learning the desired robustness from data augmentations. Alternatively, input transformations can be treated as a domain shift problem and solved by post-deployment model adaptation. Although a large number of methods deal with transformed inputs, the fundamental relation between input transformations and optimal model weights is unknown. In this paper, we put forward the configuration subspace hypothesis: model weights optimal for parameterized continuous transformations can reside in low-dimensional linear subspaces. We introduce subspace-configurable networks to learn these subspaces and observe their structure and surprisingly low dimensionality on all tested transformations, datasets, and architectures from the computer vision and audio signal processing domains. Our findings enable efficient model reconfiguration, especially when storage and computing resources are limited.
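    The paper's exact construction is not spelled out in the abstract; the following is a hypothetical sketch of the core idea under the stated hypothesis: a small configuration network maps the transformation parameter (e.g., a rotation angle) to coefficients over a learned low-dimensional basis of weight matrices. The class name `SubspaceLinear`, the layer sizes, and the subspace dimension are all assumptions:

```python
import torch
import torch.nn as nn

class SubspaceLinear(nn.Module):
    """Linear layer whose weights are a linear combination of D basis weight
    matrices; the coefficients come from a small configuration network
    conditioned on the transformation parameter (one parameter at a time)."""
    def __init__(self, in_f: int, out_f: int, dim: int = 3):
        super().__init__()
        self.bases = nn.Parameter(torch.randn(dim, out_f, in_f) * 0.01)
        self.config = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, dim))

    def forward(self, x, alpha):
        beta = self.config(alpha).squeeze(0)            # (dim,) coefficients
        w = torch.einsum('d,doi->oi', beta, self.bases)  # reconfigured weights
        return x @ w.t()

layer = SubspaceLinear(8, 4, dim=3)
x = torch.randn(5, 8)
alpha = torch.tensor([[0.5]])   # e.g., a normalized rotation angle
y = layer(x, alpha)             # weights reconfigured for this transformation
```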

    RADIC Voice Authentication: Replay Attack Detection using Image Classification for Voice Authentication Systems

    Get PDF
    Systems like Google Home, Alexa, and Siri that use voice-based authentication to verify their users’ identities are vulnerable to voice replay attacks: attackers gain unauthorized access to voice-controlled devices or systems by replaying recordings of passphrases and voice commands. This shows the need for more resilient voice-based authentication systems that can detect such attacks. This thesis implements a system that detects voice-based replay attacks by using deep learning and image classification of voice spectrograms to differentiate between live and recorded speech. Tests of this system indicate that the approach is a promising direction for detecting voice-based replay attacks.
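    As a rough illustration of the pipeline the thesis describes (spectrogram extraction followed by image classification), here is a minimal, untrained sketch; the network shape, window sizes, and two-class head are assumptions, not the thesis's actual model:

```python
import numpy as np
from scipy import signal
import torch
import torch.nn as nn

def to_spectrogram(audio: np.ndarray, fs: int = 16000) -> torch.Tensor:
    """Turn a waveform into a log-power spectrogram 'image' for the CNN."""
    _, _, sxx = signal.spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
    return torch.log1p(torch.from_numpy(sxx).float()).unsqueeze(0)  # (1, F, T)

# Deliberately small binary classifier: live (0) vs. replayed (1) speech.
classifier = nn.Sequential(
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d((8, 8)),
    nn.Flatten(), nn.Linear(8 * 8 * 8, 2),
)

wave = np.random.randn(16000)                            # stand-in for 1 s of audio
logits = classifier(to_spectrogram(wave).unsqueeze(0))   # add batch dimension
```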

    Tiny Machine Learning Environment: Enabling Intelligence on Constrained Devices

    Get PDF
    Running machine learning (ML) algorithms on constrained devices at the extreme edge of the network is problematic due to the computational overhead of ML algorithms, the resources available on the embedded platform, and the application budget (e.g., real-time requirements and power constraints). This has required the development of specific solutions and development tools for what is now referred to as TinyML. In this dissertation, we focus on improving the deployment and performance of TinyML applications, taking into consideration the aforementioned challenges, especially memory requirements. This dissertation contributed to the construction of the Edge Learning Machine (ELM) environment, a platform-independent open-source framework that provides three main TinyML services, namely shallow ML, self-supervised ML, and binary deep learning on constrained devices. In this context, this work includes the following steps, which are reflected in the thesis structure. First, we present a performance analysis of state-of-the-art shallow ML algorithms, including dense neural networks, implemented on mainstream microcontrollers. The comprehensive analysis in terms of algorithms, hardware platforms, datasets, preprocessing techniques, and configurations shows performance similar to that of a desktop machine and highlights the impact of these factors on overall performance. Second, despite the assumption that the scarcity of resources limits TinyML to model inference, we have gone a step further and enabled self-supervised on-device training on microcontrollers and tiny IoT devices by developing the Autonomous Edge Pipeline (AEP) system. AEP achieves accuracy comparable to the typical TinyML paradigm, i.e., models trained on resource-abundant devices and then deployed on microcontrollers. Next, we present the development of a memory allocation strategy for convolutional neural network (CNN) layers that optimizes memory requirements. This approach reduces the memory footprint without affecting accuracy or latency. Moreover, e-skin systems share the main requirements of the TinyML field: enabling intelligence with low memory, low power consumption, and low latency. Therefore, we designed an efficient tiny CNN architecture for e-skin applications. The architecture leverages the memory allocation strategy presented earlier and provides better performance than existing solutions. A major contribution of the thesis is CBin-NN, a library of functions for implementing extremely efficient binary neural networks on constrained devices. The library outperforms state-of-the-art NN deployment solutions by drastically reducing memory footprint and inference latency. All the solutions proposed in this thesis have been implemented on representative devices and tested in relevant applications, whose results are reported and discussed. The ELM framework is open source, and this work is becoming a useful, versatile toolkit for the IoT and TinyML research and development community
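    CBin-NN's kernels are not reproduced here, but the arithmetic that makes binary deep learning so cheap on constrained devices can be illustrated briefly: with weights and activations restricted to {-1, +1}, a dot product collapses to XNOR plus popcount on packed bit words. A minimal NumPy sketch of one binary dense layer, where an integer matrix product stands in for the bitwise version:

```python
import numpy as np

def binarize(x: np.ndarray) -> np.ndarray:
    """Map real values to {-1, +1}; on a microcontroller these pack into bits."""
    return np.where(x >= 0, 1, -1).astype(np.int8)

def binary_dense(x_bin: np.ndarray, w_bin: np.ndarray) -> np.ndarray:
    """Binary dense layer: with {-1,+1} operands the dot product reduces to
    XNOR + popcount on bit words; here a plain integer matmul emulates it."""
    return x_bin.astype(np.int32) @ w_bin.T.astype(np.int32)

rng = np.random.default_rng(0)
x = binarize(rng.standard_normal(64))
w = binarize(rng.standard_normal((10, 64)))
logits = binary_dense(x, w)  # 10-class scores from a 1-bit layer
```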

    Contributions to improve the technologies supporting unmanned aircraft operations

    Get PDF
    Unmanned Aerial Vehicles (UAVs), in their smaller versions known as drones, are becoming increasingly important in today's societies. The systems that make them up present a multitude of challenges, of which error can be considered the common denominator. The perception of the environment is measured by sensors that have errors, and the models that interpret the information and/or define behaviors are approximations of the world and therefore also have errors. Explaining error allows extending the limits of deterministic models to address real-world problems. The performance of the technologies embedded in drones depends on our ability to understand, model, and control the error of the systems that integrate them, as well as of new technologies that may emerge. Flight controllers integrate various subsystems that are generally dependent on other systems. One example is the guidance system. These systems provide the engine's propulsion controller with the information necessary to accomplish a desired mission. For this purpose, the flight controller contains a guidance control law that reacts to the information perceived by the perception and navigation systems. The error of any of the subsystems propagates through the ecosystem of the controller, so the study of each of them is essential. Among the strategies for error control are state-space estimators, where the Kalman filter has been a great ally of engineers since its appearance in the 1960s. Kalman filters are at the heart of information fusion systems, minimizing the error covariance of the system and allowing the measured states to be filtered and estimated in the absence of observations. State Space Models (SSMs) are developed based on a set of hypotheses for modeling the world: the models must be linear and Markovian, and the error of the models must be Gaussian. In general, systems are not linear, so linearizations are performed on models that are already approximations of the world. In other cases, the noise to be controlled is not Gaussian but is approximated by that distribution in order to deal with it. Moreover, many systems are not Markovian; that is, their states do not depend only on the previous state, and there are other dependencies that state-space models cannot handle. This thesis presents a collection of studies in which error is formulated and reduced. First, the error in a computer-vision-based precision landing system is studied; then, estimation and filtering problems are addressed from the deep learning approach; finally, classification concepts with deep learning over trajectories are studied. The first case in the collection studies the consequences of error propagation in a machine-vision-based precision landing system and proposes a set of strategies to reduce the impact on the guidance system and, ultimately, the error. The next two studies approach the estimation and filtering problem from the deep learning perspective, where the error is a function to be minimized by learning. The last case in the collection deals with a trajectory classification problem with real data. This work thus covers the two main fields in deep learning, regression and classification, where the error is treated as a probability function of class membership. I would like to thank the Ministry of Science and Innovation for granting me the funding with reference PRE2018-086793, associated to the project TEC2017-88048-C2-2-R, which provided me the opportunity to carry out all my PhD activities, including completing an international research internship.
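    For readers unfamiliar with the estimator the thesis builds on, a minimal linear Kalman filter under exactly the assumptions discussed above (linear, Markovian, Gaussian) looks as follows; the constant-velocity model and the noise covariances are illustrative choices, not values from the thesis:

```python
import numpy as np

# Minimal linear Kalman filter (constant-velocity model): the kind of
# state-space estimator the thesis contrasts with learned filtering.
F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we only observe position
Q = 1e-4 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.zeros((2, 1))                     # state estimate
P = np.eye(2)                            # error covariance

def kalman_step(z: float) -> None:
    """One predict/update cycle; minimizes the error covariance under the
    linear-Gaussian-Markov hypotheses discussed above."""
    global x, P
    x = F @ x                             # predict state
    P = F @ P @ F.T + Q                   # predict covariance
    y = np.array([[z]]) - H @ x           # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
    x = x + K @ y                         # update state
    P = (np.eye(2) - K @ H) @ P           # update covariance

for z in [0.9, 2.1, 2.9, 4.2]:            # noisy position measurements
    kalman_step(z)
print(x.ravel())                          # filtered position and velocity
```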

    An Analytical Performance Evaluation on Multiview Clustering Approaches

    Get PDF
    Machine learning encompasses a wide variety of approaches, one of which is clustering. Given a collection of data points, a clustering algorithm assigns each data point to a group: points in the same group should have similar attributes and characteristics, while points in different groups should have markedly different ones. Recent developments in information-collection technologies have made it possible to generate multiview data, i.e., data collected from a variety of sources and analysed from a variety of perspectives. Conventional clustering algorithms operate on a single view, yet real-world data are messy and complicated and can be clustered in various ways depending on how they are interpreted. In recent years, Multiview Clustering (MVC) has therefore attracted increasing attention, as it aims to exploit the complementary and consensus information available across views. However, the vast majority of existing methods support only the single-clustering scenario, in which a single clustering is used to partition the data. It is thus necessary to investigate the multiview data format. This work centres on multiview clustering and analytically evaluates how well its approaches perform relative to one another
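    None of the surveyed MVC algorithms are specified in the abstract; as a point of reference, the simplest multiview baseline standardizes each view, concatenates them, and clusters once, so that every view contributes to a consensus partition. A sketch with scikit-learn, where the two synthetic views are placeholders:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

def multiview_kmeans(views, k):
    """Naive multiview baseline: standardize each view, concatenate the
    feature matrices, and run a single k-means over the joined space."""
    joined = np.hstack([StandardScaler().fit_transform(v) for v in views])
    return KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(joined)

rng = np.random.default_rng(0)
view_a = rng.standard_normal((100, 5))   # e.g., image features
view_b = rng.standard_normal((100, 3))   # e.g., text features
labels = multiview_kmeans([view_a, view_b], k=4)
```

    Proper MVC methods go further than this baseline by co-regularizing per-view clusterings or learning a shared latent representation, which is exactly the complementary/consensus trade-off the study evaluates.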

    Cybersecurity: Past, Present and Future

    Full text link
    The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of individuals. With these improvements come new challenges, and one of the main challenges is security. The security of this new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, IoT, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present, and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human-AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity.
    Comment: Author's copy of the book published under ISBN: 978-620-4-74421-

    Changes in surface electromyography characteristics and foot-tapping rate of force development as measures of spasticity in patients with multiple sclerosis

    Get PDF
    Spasticity is a common symptom experienced by individuals with upper motor neuron lesions, such as those with stroke, spinal cord injury, traumatic brain injury, cerebral palsy, amyotrophic lateral sclerosis, and multiple sclerosis. Although the etiology and progression of spasticity differ between these clinical populations, it shares many of the same consequences: muscle pain, weakness, fatigue, increased disability, depression, medication side effects, and a reduced quality of life. For this reason, there has been increased interest in the measurement and treatment of spasticity symptoms. Subjective measures of spasticity like the Modified Ashworth Scale (MAS) and Tardieu Scale have shown questionable validity and reliability and correlate poorly with functional outcome measures, but they continue to be used in clinical and research settings. Objective measures like myotonometry, electrogoniometry, and inertial sensors, on the other hand, provide much more reliable measurements, but at the expense of increased cost, time, and equipment. Therefore, to properly assess and treat spasticity symptoms, a timelier and more cost-effective objective measure of spasticity is needed. PURPOSE: To reexamine a previously collected dataset from a sample of patients with multiple sclerosis before and after dry-needling and functional electrically stimulated walking spasticity treatments. Specifically, we wished to know whether there were: 1) acute (within-visit) and chronic (between-visit) changes in sEMG and foot-tapping rate-of-force-development measures after treatment, 2) between-leg differences before and after treatments, and 3) significant correlations between sEMG, foot-tapping, and functional outcome measures. METHODS: 16 MS patients (10 relapsing-remitting and 6 progressive MS) participated in the original study. The study consisted of 14 visits: 2 pre/post visits, 4 visits of dry-needling + functional electrically stimulated walking (FESW), and 8 visits with FESW only. The more spastic leg (involved leg) was given the treatment, making the other leg the control. Dry-needling was performed on the medial and lateral heads of the involved leg's gastrocnemius by inserting monofilament needles and electrically stimulating the muscles until visible twitches occurred. Dry-needling was done 30 seconds on and 30 seconds off for a total of 90 seconds of treatment. FESW was performed on the involved leg by attaching electrodes to the tibialis anterior and gastrocnemius muscles. Patients walked 20 minutes at a self-selected pace while the involved leg was stimulated. sEMG was collected before and after each treatment by having the patient perform a single maximal heel raise. Foot-tapping ability was assessed using the 10-second foot-tapping test (FTT) and a small force plate. Functional measures also included the 25-foot walk test (25FWT), 6-minute walk test (6MWT), modified fatigue impact score (MFIS), and number of heel raises performed. RESULTS: No significant between-leg differences were noted for any of the sEMG measures (p > 0.05), and no significant chronic changes occurred in any of the sEMG measures. For the dry-needling + FESW visits, sEMG sample entropy was significantly increased in the involved leg at post-needling (p = 0.035) and post-FESW (p = 0.027). The non-involved leg's sample entropy was significantly higher at post-FESW only (p = 0.017). The non-involved leg's mean frequency was significantly higher at post-FESW compared to pre-needling (p = 0.033) and post-needling (p = 0.032).
    For the FESW-only visits, there were no significant changes in the involved leg. The non-involved leg's mean frequency was significantly higher at post-FESW (p = 0.006), and its median frequency was significantly higher at post-FESW (p = 0.009). The number of foot-taps increased significantly from pre- to post-intervention in both the involved (p = 0.006) and non-involved legs (p = 0.002). There was a significantly higher number of foot-taps in the non-involved leg compared to the involved leg at both the pre (p = 0.008) and post (p = 0.015) timepoints. AUC was significantly higher in the involved leg at post-treatment (p = 0.030). Time to peak was higher in the involved leg compared to the non-involved leg at both pre (p = 0.037) and post-intervention (p = 0.019). Time to base was higher in the involved leg compared to the non-involved leg at both pre (p = 0.031) and post-intervention (p = 0.004). Total tap time was higher in the involved leg at both pre (p = 0.010) and post-intervention (p = 0.007). Percent time to peak was significantly lower in the involved limb at pre-intervention (p = 0.026) and post-intervention (p = 0.037), while percent time to base was significantly higher in the involved leg at pre-intervention (p = 0.026) and post-intervention (p = 0.037). The sEMG measures tended to correlate poorly or non-significantly with the functional outcome measures; the foot-tapping measures, especially in the involved leg, tended to exhibit stronger correlations. CONCLUSION: sEMG sample entropy and foot-tapping ability are significantly improved by dry-needling treatments and walking. The sEMG measures did not tend to correlate well with functional outcome measures, but the foot-tapping measures did. This suggests that foot-tapping rate of force development and related measures may provide a useful measure of spasticity and treatment effects
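    Since sEMG sample entropy is the measure that responded to treatment, a brief sketch may help: sample entropy compares how often length-m templates of the signal match (within tolerance r, Chebyshev distance) against how often length-(m+1) templates match, and takes the negative log of the ratio. This is a common simplified formulation, not the study's analysis code; m = 2 and r = 0.2·SD are conventional defaults:

```python
import numpy as np

def sample_entropy(x: np.ndarray, m: int = 2, r_factor: float = 0.2) -> float:
    """Sample entropy of a 1-D signal: lower values mean more regularity."""
    x = np.asarray(x, dtype=float)
    r = r_factor * np.std(x)  # tolerance as a fraction of the signal's SD

    def count_matches(length: int) -> int:
        # Count template pairs whose max absolute difference is within r.
        templates = np.lib.stride_tricks.sliding_window_view(x, length)
        count = 0
        for i in range(len(templates) - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += int(np.sum(dist <= r))
        return count

    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

emg = np.random.randn(1000)  # stand-in for a rectified sEMG burst
print(sample_entropy(emg))
```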

    Design and Implementation of HD Wireless Video Transmission System Based on Millimeter Wave

    Get PDF
    With the build-out of optical fiber communication networks and improvements in camera technology, the video a terminal can receive has become clearer, with resolutions up to 4K. Although optical fiber communication offers high bandwidth and fast transmission, it is not the best solution for indoor short-distance video transmission in terms of cost, installation difficulty, and speed. In this context, this thesis designs and implements a multi-channel wireless HD video transmission system with high transmission performance using 60 GHz millimeter-wave technology, aiming to improve the bandwidth from optical nodes to wireless terminals and the quality of video transmission. The thesis covers the following parts. (1) It implements the wireless video transmission algorithms, divided into wireless transmission algorithms and video transmission algorithms, such as the 64QAM modulation and demodulation algorithm, the H.264 video codec, and YUV420P pixel-format handling. (2) It designs the hardware of the wireless HD video transmission system, including the network processing unit (NPU) and the millimeter-wave module. The millimeter-wave module uses the RWM6050 baseband chip and the TRX-BF01 RF chip, and the corresponding hardware circuits, such as the 10 Gb/s network port and PCIe, are designed around these chips. (3) It realizes the software design of the system: FFmpeg and Nginx are selected to build the sending platform on the NPU, and Docker is used to realize multiplexed video transmission. On the receiving platform, FFmpeg and Qt are selected to realize video decoding, combined with OpenGL for video playback. (4) Finally, the thesis completes testing of the wireless HD video transmission system, including pressure tests, Web tests, and application scenario tests. The system is verified to transmit HD VR video with a three-channel bit rate of 1.2 GB/s, reaching up to 3.7 GB/s, which meets the research goal
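    Of the algorithms listed in part (1), 64QAM modulation is compact enough to sketch: each group of 6 bits selects one of 64 complex constellation points, 3 bits for the in-phase level and 3 for the quadrature level. The sketch below uses a natural binary mapping for clarity, whereas a real transmitter such as this one would normally use Gray coding:

```python
import numpy as np

def qam64_modulate(bits: np.ndarray) -> np.ndarray:
    """Map groups of 6 bits to 64-QAM symbols: 3 bits pick the in-phase
    level and 3 bits the quadrature level from {-7,-5,-3,-1,1,3,5,7}."""
    assert bits.size % 6 == 0
    groups = bits.reshape(-1, 6)

    def levels(b3: np.ndarray) -> np.ndarray:
        idx = b3 @ np.array([4, 2, 1])   # 3 bits -> index 0..7
        return 2 * idx - 7               # index -> odd amplitude level

    i, q = levels(groups[:, :3]), levels(groups[:, 3:])
    return (i + 1j * q) / np.sqrt(42.0)  # normalize to unit average power

bits = np.random.randint(0, 2, 600)
symbols = qam64_modulate(bits)           # 100 complex baseband symbols
```

    The sqrt(42) factor follows from the mean squared amplitude of the eight levels per axis: (1 + 9 + 25 + 49)/4 = 21 per axis, 42 in total.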

    Learning-Based Ubiquitous Sensing For Solving Real-World Problems

    Get PDF
    Recently, as Internet of Things (IoT) technology has become smaller and cheaper, the ubiquitous sensing ability of these devices has become increasingly accessible. Learning methods in computer science have accordingly become more complex. However, there remains a gap between these learning approaches and many problems in other disciplinary fields. In this dissertation, I investigate four learning-based studies that use ubiquitous sensing to solve real-world problems in IoT security, athletics, and healthcare. First, I designed an online intrusion detection system for IoT devices via power auditing. To realize the real-time system, I created a lightweight power auditing device. With this device, I developed a distributed Convolutional Neural Network (CNN) for online inference and demonstrated that the distributed system design is secure, lightweight, accurate, real-time, and scalable. Furthermore, I characterized potential information-stealer attacks via power auditing. To defend against this potential exfiltration attack, a prototype system was built on top of the botnet detection system: in a testbed environment, I defined and deployed an IoT information-stealer attack and then designed a detection classifier. Altogether, the proposed system is able to identify malicious behavior on endpoint IoT devices via power auditing. Next, I enhanced athletic performance via ubiquitous sensing and machine learning techniques. I first designed a metric called LAX-Score to quantify a collegiate lacrosse team’s athletic performance, deriving it via feature selection and weighted regression. The proposed metric was then statistically validated on over 700 games from the last three seasons of NCAA Division I women’s lacrosse. I also examined the biometric sensing dataset obtained from a collegiate team’s athletes over the course of a season and identified the practice features most correlated with high-performance games. Experimental results indicate that LAX-Score provides insight into athletic performance quality beyond wins and losses. Finally, I studied data from patients with Parkinson’s Disease. I collected the Inertial Measurement Unit (IMU) sensing data of 30 patients while they conducted pre-defined activities, and used this dataset to measure tremor events during drawing activities for more convenient tremor screening. Our preliminary analysis demonstrates that IMU sensing data can identify potential tremor events in daily drawing or writing activities. In future work, deep learning-based techniques will be used to extract tremor features in real time. Overall, I designed and applied learning-based methods across different fields to solve real-world problems. The results show that combining learning methods with domain knowledge enables effective solutions
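    The dissertation's distributed CNN design is not detailed in the abstract; purely as an illustration of classifying power-consumption traces, here is a deliberately small 1-D CNN. The architecture, trace length, and two-class output (benign vs. botnet) are assumptions:

```python
import torch
import torch.nn as nn

class PowerTraceCNN(nn.Module):
    """Hypothetical 1-D CNN over power-consumption traces for intrusion
    detection; not the dissertation's distributed architecture."""
    def __init__(self, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(32, n_classes)  # benign vs. botnet activity

    def forward(self, trace):
        return self.head(self.features(trace).squeeze(-1))

model = PowerTraceCNN()
trace = torch.randn(8, 1, 2048)   # batch of 8 one-channel power traces
scores = model(trace)             # (8, 2) classification logits
```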