
    High-Performance VLSI Architectures for Lattice-Based Cryptography

    Lattice-based cryptography is a family of cryptographic primitives built upon hard problems on point lattices. Cryptosystems relying on lattice-based cryptography have attracted considerable attention over the last decade because they offer post-quantum security and elegant algorithmic constructions. In particular, homomorphic encryption (HE) and post-quantum cryptography (PQC) are the two main applications of lattice-based cryptography, and efficient hardware implementations of these advanced cryptographic schemes are in high demand. This dissertation investigates novel, high-performance very large-scale integration (VLSI) architectures for lattice-based cryptography, covering both HE and PQC schemes. It first presents different architectures for number-theoretic transform (NTT)-based polynomial multiplication, one of the crucial arithmetic operations underlying lattice-based HE and PQC schemes. A high-speed modular integer multiplier tailored to lattice-based cryptography is then proposed. In addition, a novel modular polynomial multiplier is presented that exploits fast finite impulse response (FIR) filter architectures to reduce the computational complexity of schoolbook modular polynomial multiplication for lattice-based PQC schemes. Finally, an NTT and Chinese remainder theorem (CRT)-based high-speed modular polynomial multiplier is presented for HE schemes whose moduli are large integers.
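
    As a concrete illustration of the NTT-based polynomial multiplication this abstract refers to, the sketch below multiplies two polynomials in Z_q[x]/(x^n - 1) with a deliberately naive, software-only number-theoretic transform. The toy modulus q = 7681 and length n = 8 are illustrative choices, not parameters or architectures taken from the dissertation.

```python
# A minimal sketch (not the dissertation's VLSI architecture): cyclic polynomial
# multiplication in Z_q[x]/(x^n - 1) via a number-theoretic transform (NTT).
# Works for any prime q with n | (q - 1) and n a power of two.

def find_nth_root(n, q):
    """Find a primitive n-th root of unity modulo prime q (brute force, small q only)."""
    for g in range(2, q):
        w = pow(g, (q - 1) // n, q)
        # w has order dividing n; since n is a power of two, the order is exactly n
        # iff w^(n/2) != 1
        if pow(w, n // 2, q) != 1:
            return w
    raise ValueError("no primitive n-th root of unity found")

def ntt(a, w, q):
    """Naive O(n^2) transform: evaluate the polynomial a(x) at the powers of w."""
    n = len(a)
    return [sum(a[j] * pow(w, i * j, q) for j in range(n)) % q for i in range(n)]

def poly_mul_ntt(a, b, q, n):
    """Multiply a and b in Z_q[x]/(x^n - 1): forward NTT, pointwise product, inverse NTT."""
    w = find_nth_root(n, q)
    A, B = ntt(a, w, q), ntt(b, w, q)
    C = [(x * y) % q for x, y in zip(A, B)]
    w_inv, n_inv = pow(w, -1, q), pow(n, -1, q)
    return [(c * n_inv) % q for c in ntt(C, w_inv, q)]

if __name__ == "__main__":
    q, n = 7681, 8                   # toy parameters: 8 divides 7681 - 1
    a = [1, 2, 3, 4, 0, 0, 0, 0]     # low-degree operands, so the cyclic wrap-around
    b = [5, 6, 7, 0, 0, 0, 0, 0]     # does not kick in and the result is easy to check
    print(poly_mul_ntt(a, b, q, n))  # matches schoolbook multiplication mod 7681
```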

    Segmentation and detection of moving objects in hazy and foggy weather conditions

    Segmentation and detection of moving objects are very important in navigation applications to improve the visibility achievable with computer vision technology. The main challenge is handling hazy and foggy weather, which degrades the video data used to detect moving objects: light is scattered by fog and haze pixels, preventing it from penetrating the scene and resulting in over-segmentation. Various methods have been used to improve accuracy and sensitivity under over-segmentation, but further enhancement is needed to improve the performance of moving-object detection. In this research, a new method is proposed to overcome over-segmentation by combining a Gaussian Mixture Model with other filters chosen for their individual strengths. The combined filters comprise a Median Filter and an Average Filter for over-segmentation, a Morphology Filter and a Gaussian Filter to rebuild the structuring element of the object pixels, and a combination of Blob Analysis, Bounding Box and Kalman Filter to reduce false positive detections. The combination of these filters is known as Object of Interest Movement (OIM). Qualitative and quantitative methods were used for comparison with previous methods. The data comprised haze recordings obtained from YouTube and an open dataset from Karlsruhe. Comparative analysis of the images and calculation of object detections were carried out. Results showed that the combined filters improve the accuracy and sensitivity of segmentation and detection, reaching 72.24% for foggy videos and 76.73% in hazy weather. Based on these findings, the OIM method has proven its capability to improve the accuracy of segmentation and object detection without the need to enhance image contrast.
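
    The sketch below assembles a pipeline in the spirit of the OIM combination described above (GMM background subtraction, median and average filtering, morphology, and blob analysis with bounding boxes) from standard OpenCV calls. The kernel sizes, blob-area threshold, and the input file name "hazy.mp4" are illustrative assumptions rather than the thesis settings, and the Kalman-filter tracking stage is only indicated by a comment.

```python
# Rough moving-object detection sketch in the spirit of OIM, not the exact thesis method.
import cv2

cap = cv2.VideoCapture("hazy.mp4")                             # hypothetical input video
bg = cv2.createBackgroundSubtractorMOG2(detectShadows=False)   # Gaussian Mixture Model
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    mask = bg.apply(frame)                                # GMM foreground mask
    mask = cv2.medianBlur(mask, 5)                        # median filter: suppress speckle
    mask = cv2.blur(mask, (3, 3))                         # average filter: smooth edges
    _, mask = cv2.threshold(mask, 127, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)   # rebuild object structure
    # blob analysis + bounding boxes; a Kalman filter would track these across frames
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    for c in contours:
        if cv2.contourArea(c) > 300:                      # drop tiny blobs (false positives)
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("detections", frame)
    if cv2.waitKey(30) & 0xFF == 27:                      # press Esc to quit
        break

cap.release()
cv2.destroyAllWindows()
```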

    Neural replay in representation, learning and planning

    Spontaneous neural activity is rarely the subject of investigation in cognitive neuroscience. This may be due to a dominant metaphor of cognition as an information processing unit, in which internally generated thoughts are treated as noise. Adopting a reinforcement learning (RL) framework, I consider cognition in terms of an agent trying to attain its internal goals. This framework motivated me to address in my thesis the role of spontaneous neural activity in human cognition. First, I developed a general method, called temporally delayed linear modelling (TDLM), to analyse this spontaneous activity. TDLM can be thought of as a domain-general sequence detection method. It combines nonlinear classification and linear temporal modelling, enabling tests for statistical regularities in sequences of neural representations of a decoded state space. Although developed for use with human non-invasive neuroimaging data, the method can be extended to analyse rodent electrophysiological recordings. Next, I applied TDLM to study spontaneous neural activity during rest in humans. As in rodents, I found that spontaneously generated neural events tended to occur in structured sequences. These sequences are accelerated in time compared with actual experience (30-50 ms state-to-state time lag), and these sequences, termed replay, reverse their direction after reward receipt. Notably, this human replay is not a recapitulation of prior experience, but follows a sequence implied by learnt abstract structural knowledge, suggesting a factorized representation of structure and sensory information. Finally, I test the role of neural replay in model-based learning and planning in humans. Following reward receipt, I found significant backward replay of non-local experience with a 160 ms lag; this replay prioritises and facilitates the learning of action values. In a separate sequential planning task, I show that these neural sequences run forward in direction, depicting the trajectory subjects are about to take. The research presented in this thesis reveals a rich role for spontaneous neural activity in supporting the internal computations that underpin planning and inference in human cognition.
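
    A simplified numpy sketch of the core TDLM idea follows: decoded state time courses are regressed onto time-lagged copies of themselves, and the resulting empirical transition matrices are projected onto a hypothesised forward and backward sequence structure. The toy data, variable names, and the exact two-level regression details are illustrative, not the precise formulation used in the thesis.

```python
# Simplified "sequenceness" computation in the spirit of TDLM; not the exact method.
import numpy as np

def sequenceness(X, T, max_lag=60):
    """X: (time, n_states) decoded state reactivation probabilities.
    T: (n_states, n_states) 0/1 transition matrix of the hypothesised sequence.
    Returns forward-minus-backward sequenceness for each time lag (in samples)."""
    n_t, n_s = X.shape
    scores = np.zeros(max_lag + 1)
    for lag in range(1, max_lag + 1):
        past, present = X[:-lag], X[lag:]
        # first level: empirical state-to-state influence at this lag
        beta = np.linalg.lstsq(past, present, rcond=None)[0]      # (n_states, n_states)
        # second level: project onto forward and backward transition structure,
        # controlling for self-transitions and a constant
        design = np.column_stack([T.ravel(), T.T.ravel(),
                                  np.eye(n_s).ravel(), np.ones(n_s * n_s)])
        b = np.linalg.lstsq(design, beta.ravel(), rcond=None)[0]
        scores[lag] = b[0] - b[1]                                 # forward minus backward
    return scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.random((1000, 4))              # toy "decoded" reactivations, 4 states
    T = np.roll(np.eye(4), 1, axis=1)      # hypothesised sequence 1 -> 2 -> 3 -> 4 -> 1
    print(sequenceness(X, T, max_lag=10))
```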

    Application of shifted delta cepstral features for GMM language identification

    Spoken language identification (LID) in telephone speech signals is an important and difficult classification task. Language identification modules can be used as front-end signal routers for multilanguage speech recognition or transcription devices. Gaussian Mixture Models (GMMs) can be used to effectively model the distribution of feature vectors present in speech signals for classification. Common feature vectors used for speech processing include Linear Prediction (LP-CC), Mel-Frequency (MF-CC), and Perceptual Linear Prediction derived cepstral coefficients (PLP-CC). This thesis examines the recently proposed Shifted Delta Cepstral (SDC) coefficients and compares them with these features; utilization of Shifted Delta Cepstral coefficients has been shown to improve language identification performance. The thesis explores the use of different types of shifted delta cepstral feature vectors for spoken language identification of telephone speech, using a simple Gaussian Mixture Model classifier on a 3-language task. The OGI Multi-language Telephone Speech Corpus is used to evaluate the system.
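
    The following sketch shows one common way to construct shifted delta cepstral features with the usual N-d-P-k parameterisation (e.g. 7-1-3-7) from a matrix of cepstral coefficients computed elsewhere; the edge-padding strategy and the default parameters are assumptions for illustration, not necessarily the configuration used in the thesis.

```python
# Minimal SDC feature construction; `cep` is a (frames, N) matrix of cepstral
# coefficients (MFCC, LP-CC, or PLP-CC) computed by some front end.
import numpy as np

def sdc(cep, d=1, P=3, k=7):
    """Stack k shifted delta blocks per frame: block i is c[t + i*P + d] - c[t + i*P - d]."""
    n_frames, N = cep.shape
    # pad at both ends so every frame gets a full k-block feature vector
    padded = np.pad(cep, ((d, d + (k - 1) * P), (0, 0)), mode="edge")
    feats = np.empty((n_frames, N * k))
    for t in range(n_frames):
        blocks = [padded[t + i * P + 2 * d] - padded[t + i * P]   # = c[t+iP+d] - c[t+iP-d]
                  for i in range(k)]
        feats[t] = np.concatenate(blocks)
    return feats

# usage: 7-1-3-7 SDC features from 7-dimensional cepstra, then fed to a GMM classifier
cep = np.random.default_rng(0).random((200, 7))    # stand-in for real cepstral frames
print(sdc(cep).shape)                               # (200, 49)
```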

    Visual Similarity Using Limited Supervision

    The visual world is a conglomeration of objects, scenes, motion, and much more. As humans, we look at the world through our eyes, but we understand it by using our brains. From a young age, humans learn to recognize objects by association, meaning that we link an object or action to the most similar one in our memory to make sense of it. Within the field of Artificial Intelligence, Computer Vision gives machines the ability to see. While digital cameras provide eyes to the machine, Computer Vision develops its brain. To that purpose, Deep Learning has emerged as a very successful tool. This method allows machines to learn solutions to problems directly from the data. On the basis of Deep Learning, computers nowadays can also learn to interpret the visual world. However, the process of learning in machines is very different from ours. In Deep Learning, images and videos are grouped into predefined, artificial categories. Describing a group of objects, or actions, with a single integer (category) disregards most of their characteristics and pairwise relationships. To circumvent this, we propose to expand the categorical model by using visual similarity, which better mirrors the human approach. Deep Learning requires a large set of manually annotated samples that form the training set. Retrieving training samples is easy given the endless amount of images and videos available on the internet. However, this also requires manual annotations, which are very costly and laborious to obtain and thus a major bottleneck in modern computer vision. In this thesis, we investigate visual similarity methods to solve image and video classification. In particular, we search for a solution where human supervision is marginal. We focus on Zero-Shot Learning (ZSL), where only a subset of categories are manually annotated. After studying existing methods in the field, we identify common limitations and propose methods to tackle them. In particular, ZSL image classification is trained using only discriminative supervision, i.e. predefined categories, while ignoring other descriptive characteristics. To tackle this, we propose a new approach to learn shared features, i.e. non-discriminative, thus descriptive, characteristics, which improves existing methods by a large margin. However, while ZSL has shown great potential for the task of image classification, for example in the case of face recognition, it has performed poorly for video classification. We identify the reasons for the lack of growth in the field and provide a new, powerful baseline. Unfortunately, even if ZSL requires only partially labeled data, it still needs supervision during training. For that reason, we also investigate purely unsupervised methods. A successful paradigm is self-supervision: the model is trained using a surrogate task where supervision is provided automatically. The key to self-supervision is the ability of deep learning to transfer the knowledge learned from one task to a new task. The more similar the two tasks are, the more effective the transfer is. Similar to our work on ZSL, we also studied the common limitations of existing self-supervision approaches and proposed a method to overcome them. To improve self-supervised learning, we propose a policy network which controls the parameters of the surrogate task and is trained through reinforcement learning.
    Finally, we present a real-life application where utilizing visual similarity with limited supervision provides a better solution compared to existing parametric approaches. We analyze the behavior of motor-impaired rodents during a single repeating action, for which our method provides an objective similarity of behavior, facilitating comparisons across animal subjects and time during recovery.
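
    For readers unfamiliar with similarity-based zero-shot classification, the generic sketch below assigns a test image to the class whose semantic embedding is most similar to the image's projected feature vector. This is the standard ZSL formulation the thesis builds on, not the specific shared-feature method it proposes; all arrays and the projection matrix here are random stand-ins.

```python
# Generic nearest-class-embedding zero-shot classification; illustrative only.
import numpy as np

def cosine(a, b):
    """Pairwise cosine similarity between rows of a and rows of b."""
    return a @ b.T / (np.linalg.norm(a, axis=-1, keepdims=True) * np.linalg.norm(b, axis=-1))

def zero_shot_predict(img_emb, W, class_emb):
    """img_emb: (n, d_img) image features; W: (d_img, d_sem) projection learned on seen classes;
    class_emb: (n_classes, d_sem) semantic vectors, including classes unseen at training time."""
    projected = img_emb @ W                 # map images into the semantic space
    sims = cosine(projected, class_emb)     # (n, n_classes) similarity scores
    return sims.argmax(axis=1)              # nearest class embedding wins

rng = np.random.default_rng(0)
img_emb = rng.random((5, 512))              # stand-in CNN features
W = rng.random((512, 300))                  # stand-in learned projection
class_emb = rng.random((10, 300))           # 10 class embeddings, some never seen in training
print(zero_shot_predict(img_emb, W, class_emb))
```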

    Deep Learning Methods for Register Classification

    This project uses the data collected by Biber and Egbert (2018), consisting of various language articles from the internet. I use the BERT model (Bidirectional Encoder Representations from Transformers), a deep neural network, and FastText, a shallow neural network, as baselines for text classification, and I also use deep learning models such as XLNet to see whether classification accuracy improves. Biber and Egbert (2018) also describe what a register is; a register can be thought of as a genre. According to Biber (1988), registers are varieties defined in terms of general situational parameters, so there is a close relation between language and the context of the situation in which it is used. This work attempts register classification using deep learning methods that employ an attention mechanism. Throughout the work, I dealt with the models themselves, with the imbalanced datasets that arise in real-life problems, and with tuning the hyperparameters for training the models, and I determined appropriate evaluation metrics for various kinds of data. The background study shows how cumbersome the classical machine learning approach used to be; deep learning, on the other hand, can accomplish the task with ease. Selecting the right metric for the classification task on different types of datasets (balanced vs. imbalanced) and dealing with overfitting were also accomplished.
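
    A minimal fine-tuning sketch for register classification with BERT via the Hugging Face transformers library follows. The label set, the tiny in-line dataset, and the class weights (one common way of handling the imbalanced registers mentioned above) are placeholders rather than the project's actual data or settings.

```python
# Minimal BERT fine-tuning loop for register classification; illustrative placeholders only.
import torch
from torch.optim import AdamW
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = ["narrative", "opinion", "how-to"]            # hypothetical registers
texts = ["Once upon a time ...", "I firmly believe ...", "First, preheat the oven ..."]
y = torch.tensor([0, 1, 2])

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased",
                                                           num_labels=len(labels))
batch = tok(texts, padding=True, truncation=True, return_tensors="pt")

# weight rare registers more heavily to counter class imbalance (weights are illustrative)
class_weights = torch.tensor([0.5, 1.0, 2.0])
loss_fn = torch.nn.CrossEntropyLoss(weight=class_weights)
optim = AdamW(model.parameters(), lr=2e-5)

model.train()
for _ in range(3):                                     # a few toy epochs
    optim.zero_grad()
    logits = model(**batch).logits
    loss = loss_fn(logits, y)
    loss.backward()
    optim.step()
    print(f"loss: {loss.item():.3f}")
```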