48 research outputs found

    Efficient and Low-complexity Hardware Architecture of Gaussian Normal Basis Multiplication over GF(2^m) for Elliptic Curve Cryptosystems

    In this paper, an efficient high-speed architecture for a Gaussian normal basis multiplier over the binary finite field GF(2^m) is presented. The structure is built from regular modules that compute exponentiation by powers of 2 and from low-cost blocks that multiply by normal elements of the binary field. Since the exponents are powers of 2, these modules reduce to simple cyclic shifts in the normal basis representation. As a result, the multiplier has a simple structure with a low critical path delay. The efficiency of the proposed structure is evaluated in terms of area and time complexity through an implementation on the Virtex-4 FPGA family and an ASIC design in 180 nm CMOS technology. Comparisons with other Gaussian normal basis multiplier structures verify that the proposed architecture performs better in terms of speed and hardware utilization.
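
    Since squaring in a normal basis is just a coordinate rotation, the exponentiation-by-powers-of-2 modules mentioned above amount to wiring. A minimal Python sketch of this property (my illustration of the underlying field arithmetic, not the paper's hardware description):

        # In the normal basis {b, b^2, b^4, ..., b^(2^(m-1))} of GF(2^m),
        # squaring cyclically shifts the coordinate vector, so x^(2^k) is
        # a k-position rotation: free wiring in hardware, no gates needed.
        def square(coords):
            return coords[-1:] + coords[:-1]

        def exp_pow2(coords, k):
            m = len(coords)
            k %= m
            return coords[-k:] + coords[:-k] if k else list(coords)

        x = [1, 0, 1, 1, 0]                          # an element of GF(2^5)
        assert exp_pow2(x, 2) == square(square(x))   # x^4 via two shifts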

    High-speed VLSI implementation of Digit-serial Gaussian normal basis Multiplication over GF(2^m)

    In this paper, an efficient, high-speed VLSI implementation of the digit-serial Gaussian normal basis multiplier is presented, obtained by employing the logical effort technique. The multiplier is constructed from AND, XOR, and XOR-tree components. To obtain a low-cost implementation with a low transistor count, the block of AND gates is implemented with NAND gates, based on a property of the XOR gates in the XOR tree. To minimize delay and increase the drive capability of the circuit, the logical effort method is employed as an efficient transistor-sizing technique. Using this method together with a 4-input XOR gate structure, the circuit is designed for minimum delay. The digit-serial Gaussian normal basis multiplier is implemented over the two binary finite fields GF(2^163) and GF(2^233) in 0.18 µm CMOS technology for three different digit sizes. The results show that the proposed structures improve on previous structures in terms of delay and area.
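
    The NAND-for-AND substitution mentioned above rests on a simple XOR identity: inverting both inputs of an XOR gate leaves its output unchanged, so pairs of NAND outputs feeding the XOR tree compute the same bits as the original AND gates would. A small exhaustive check (my illustration, not the paper's transistor-level circuit):

        from itertools import product

        nand = lambda a, b: 1 - (a & b)

        # (not u) xor (not v) == u xor v, so replacing a pair of AND gates
        # by NAND gates under one XOR gate preserves the XOR-tree output.
        for a, b, c, d in product((0, 1), repeat=4):
            assert ((a & b) ^ (c & d)) == (nand(a, b) ^ nand(c, d))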

    High-speed Hardware Implementations of Point Multiplication for Binary Edwards and Generalized Hessian Curves

    In this paper, high-speed hardware architectures for point multiplication based on the Montgomery ladder algorithm are presented for binary Edwards and generalized Hessian curves in Gaussian normal basis. In the proposed architecture, the point addition and point doubling computations are performed concurrently by pipelined digit-serial finite field multipliers, and the parallel multipliers are scheduled to lower the number of clock cycles. The proposed digit-serial Gaussian normal basis multiplier is constructed from regular, low-cost modules for exponentiation by powers of two and multiplication by normal elements; the resulting structures are area efficient and have a low critical path delay. Implementation results of the proposed architectures on a Virtex-5 XC5VLX110 FPGA show that the execution times of point multiplication for binary Edwards and generalized Hessian curves over GF(2^163) and GF(2^233) are 8.62 µs and 11.03 µs, respectively. The proposed architectures achieve high performance and high speed compared to other works.
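
    The Montgomery ladder schedule that these architectures parallelize performs one point addition and one point doubling per scalar bit, regardless of the bit's value, which suits a fixed-latency pipeline. A minimal sketch of the ladder (demonstrated on plain integers rather than the paper's binary Edwards or generalized Hessian formulas; `add`, `double`, and `identity` abstract the curve operations):

        def montgomery_ladder(k, P, add, double, identity):
            # One add and one double per bit, MSB first; in the paper's
            # design the two operations run concurrently on pipelined
            # digit-serial multipliers.
            R0, R1 = identity, P
            for bit in bin(k)[2:]:
                if bit == '1':
                    R0, R1 = add(R0, R1), double(R1)
                else:
                    R0, R1 = double(R0), add(R0, R1)
            return R0

        # Integer model: k*P reduces to ordinary multiplication.
        assert montgomery_ladder(163, 7, lambda a, b: a + b,
                                 lambda a: 2 * a, 0) == 163 * 7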

    Design of elliptic curve cryptoprocessors over GF(2^163) using Gaussian normal bases

    This paper presents the efficient hardware implementation of cryptoprocessors that carry out the scalar multiplication kP over the finite field GF(2^163) using two digit-level multipliers. The finite field arithmetic operations were implemented using the Gaussian normal basis (GNB) representation, and the scalar multiplication kP was implemented using the Lopez-Dahab algorithm, the 2-NAF halve-and-add algorithm, and the w-tNAF method for Koblitz curves. The processors were designed in VHDL, synthesized on a Stratix-IV FPGA using Quartus II 12.0, and verified using SignalTap II and Matlab. The simulation results show that the cryptoprocessors perform the scalar multiplication kP very efficiently: the computation times using Lopez-Dahab, 2-NAF halve-and-add, and 16-tNAF for Koblitz curves were 13.37 µs, 16.90 µs, and 5.05 µs, respectively.
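
    The signed-digit recodings mentioned above (2-NAF, w-tNAF) reduce the number of point additions by making nonzero scalar digits sparse. A minimal sketch of the standard non-adjacent form (NAF) recoding (a textbook routine, not the cryptoprocessors' RTL):

        def naf(k):
            # Digits in {-1, 0, 1}, least significant first; choosing
            # d = 2 - (k mod 4) on odd k forces the next digit to 0,
            # so no two adjacent digits are nonzero.
            digits = []
            while k > 0:
                if k & 1:
                    d = 2 - (k % 4)
                    k -= d
                else:
                    d = 0
                digits.append(d)
                k //= 2
            return digits

        # 7 = 8 - 1: its NAF is [-1, 0, 0, 1], sparser than binary 111.
        assert sum(d * 2**i for i, d in enumerate(naf(7))) == 7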

    Robust Low-Dimensional Space Learning and Classification: Sparse and Low-Rank Representations

    Doctoral dissertation -- Seoul National University Graduate School: Department of Electrical and Computer Engineering, February 2017. Advisor: Songhwai Oh. Learning a subspace structure based on sparse or low-rank representation has gained much attention and has been widely used over the past decade in the machine learning, signal processing, computer vision, and robotics literature to model a wide range of natural phenomena. Sparse representation is a powerful tool for high-dimensional data such as images, where the goal is to represent or compress cumbersome data using a few representative samples. Low-rank representation is a generalization of sparse representation to 2D space. Behind these successful outcomes, much effort has gone into learning sparse or low-rank representations efficiently. However, existing methods are still inefficient for complex data structures and lack robustness in the presence of various noises, including outliers and missing data, because many existing algorithms relax the ideal optimization problem to a tractable one without considering computational and memory complexity. It is therefore important to use a representation algorithm that is efficiently solvable and robust against unwanted corruptions. In this dissertation, our main goal is to develop algorithms that are both robust and efficient in noisy environments. As for sparse representation, most optimization problems are relaxed to convex ones based on surrogate measures, such as the l1-norm, to resolve the computational intractability and high noise sensitivity of the original l0-norm sparse representation problem. However, if the system of interest, apart from the sparsity measure, is inherently nonconvex, then a convex sparsity measure may not be the best choice. From this perspective, we propose desirable criteria for a good nonconvex sparsity measure and suggest a corresponding family of measures. The proposed family admits a simple measure that enables efficient computation and embraces the benefits of both the l0- and l1-norms; most importantly, its gradient vanishes slowly, unlike that of the l0-norm, which is suitable from an optimization perspective. For low-rank representation, we first present an efficient l1-norm based low-rank matrix approximation algorithm using the proposed alternating rectified gradient methods to solve an l1-norm minimization problem, since conventional algorithms are very slow at solving the l1-norm based alternating minimization problem. The proposed methods seek an optimal direction under a constraint that limits the search domain, avoiding the difficulty that arises from the ambiguity in representing the two optimization variables. This is extended to an algorithm with an explicit smoothness regularizer and an orthogonality constraint for better efficiency, solved under the augmented Lagrangian framework. To give a more stable solution with flexible rank estimation in the presence of heavy corruptions, we present a new solution based on the elastic-net regularization of singular values, which allows a faster algorithm than existing rank minimization methods, avoids heavy operations, and is more stable than state-of-the-art low-rank approximation algorithms owing to its strong convexity. As a result, the proposed method leads to a holistic approach that enables both rank minimization and bilinear factorization.
Moreover, as an extension of the previous methods, which operate on an unstructured matrix, we apply recent advances in rank minimization to a structured matrix for robust kernel subspace estimation under noisy scenarios. Last but not least, we extend the low-rank approximation problem, which assumes a single subspace, to a problem over a union of multiple subspaces, which is closely related to subspace clustering. While many recent studies are based on sparse or low-rank representation, the grouping effect among similar samples has seldom been considered together with sparse or low-rank representation. We therefore propose robust group subspace clustering algorithms based on sparse and low-rank representation with explicit subspace grouping. To resolve the fundamental issue of the computational complexity of existing subspace clustering algorithms, we suggest a fully scalable low-rank subspace clustering approach that achieves linear complexity in the number of samples. Extensive experimental results on various applications, including computer vision and robotics, using benchmark and real-world data sets verify that our solutions to the existing issues of sparse and low-rank representations are considerably robust, effective, and practically applicable.
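
    The elastic-net regularization of singular values mentioned above admits a simple proximal step: soft-threshold each singular value by the l1 weight, then shrink by the l2 weight. A minimal sketch of this operator (the standard prox; the dissertation's FactEN algorithm builds a factorization method around it):

        import numpy as np

        def elastic_net_svt(X, lam1, lam2):
            # prox of lam1*||s||_1 + (lam2/2)*||s||_2^2 over the singular
            # values s of X: soft-threshold by lam1, scale by 1/(1+lam2).
            # The lam2 term adds strong convexity, stabilizing the solution
            # under heavy corruptions, while lam1 drives small singular
            # values to zero (rank minimization).
            U, s, Vt = np.linalg.svd(X, full_matrices=False)
            s = np.maximum(s - lam1, 0.0) / (1.0 + lam2)
            return (U * s) @ Vt

        X = np.random.randn(20, 10)
        print(np.linalg.matrix_rank(elastic_net_svt(X, 1.0, 0.5)))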

    Learning optimal control policies from data: a partially model-based actor-only approach

    This dissertation presents new algorithms for learning optimal feedback controllers directly from experimental data, treating the plant to be controlled as a black-box source of streaming input and output data. The presented methods belong to the “actor-only” family of Reinforcement Learning algorithms, employing a representation (policy parameterization) of the controller as a function of the feedback values and of a set of parameters to be tuned. Optimizing a policy parameterization corresponds to searching for the set of parameters associated with the best value of a chosen performance index. This search is carried out via numerical optimization techniques, such as the Stochastic Gradient Descent algorithm and related methods. The proposed methods combine the data-driven policy search framework with elements of the model-based scenario, in order to mitigate some drawbacks of the purely data-driven approach while retaining a low modeling effort compared to the typical identification and model-based control design workflow. In particular, we first introduce an algorithm for the search of smooth control policies, considering both the online scenario (where new data are collected from the plant during the iterative policy synthesis, while the plant is also under closed-loop control) and the offline one (i.e. from open-loop data previously collected from the plant). The proposed method is then extended to learn non-smooth control policies, in particular hybrid control laws, optimizing both the local controllers and the switching law directly from data. The described methods are finally extended for use in a collaborative learning setup, considering multi-agent systems characterized by strong similarities and exploiting a cloud-aided scenario to enhance the learning process by sharing information.
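
    A minimal sketch of the actor-only idea described above: tune the parameters of a feedback policy by stochastic gradient descent on a performance index estimated from rollouts. The toy plant, the linear policy u = -theta*y, and all constants below are hypothetical illustrations, not the dissertation's algorithms:

        import numpy as np

        def rollout_cost(theta, y0=1.0, steps=30, a=1.1):
            # Performance index: accumulated quadratic cost of the
            # closed loop y_{t+1} = a*y_t + u_t under u = -theta*y.
            y, J = y0, 0.0
            for _ in range(steps):
                u = -theta * y
                J += y**2 + 0.1 * u**2
                y = a * y + u
            return J

        rng = np.random.default_rng(0)
        theta, lr, eps = 0.5, 0.05, 1e-2
        for _ in range(200):
            d = rng.choice([-1.0, 1.0])   # random perturbation direction
            g = (rollout_cost(theta + eps * d)
                 - rollout_cost(theta - eps * d)) / (2 * eps) * d
            theta -= lr * g               # SGD step on the policy parameter
        print(f"learned feedback gain: {theta:.3f}")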

    Structure-Preserving Model Reduction of Physical Network Systems

    This paper considers physical network systems where the energy storage is naturally associated with the nodes of the graph, while the edges of the graph correspond to static couplings. The first sections deal with the linear case, covering examples such as mass-damper and hydraulic systems, which have a structure similar to symmetric consensus dynamics. The last section is concerned with a specific class of nonlinear physical network systems, namely detailed-balanced chemical reaction networks governed by mass action kinetics. In both the linear and nonlinear cases, the structure of the dynamics is similar and is based on a weighted Laplacian matrix together with an energy function capturing the energy storage at the nodes. We discuss two methods for structure-preserving model reduction. The first is clustering: aggregating the nodes of the underlying graph to obtain a reduced graph. The second approach is based on neglecting the energy storage at some of the nodes and subsequently eliminating those nodes (called Kron reduction).
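
    Kron reduction, as described above, eliminates nodes by taking the Schur complement of the weighted Laplacian with respect to the eliminated index set; the result is again the Laplacian of a reduced graph. A minimal numerical sketch (the example graph and node indices are illustrative):

        import numpy as np

        def kron_reduce(L, keep):
            # Schur complement of L onto the kept nodes.
            elim = [i for i in range(L.shape[0]) if i not in keep]
            A = L[np.ix_(keep, keep)]
            B = L[np.ix_(keep, elim)]
            C = L[np.ix_(elim, elim)]
            return A - B @ np.linalg.solve(C, B.T)

        # Path graph 0-1-2 with unit weights: eliminating the middle node
        # leaves one effective edge of weight 1/2 between nodes 0 and 2.
        L = np.array([[ 1., -1.,  0.],
                      [-1.,  2., -1.],
                      [ 0., -1.,  1.]])
        print(kron_reduce(L, keep=[0, 2]))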