2 research outputs found

    Edge Face Recognition System Based on One-Shot Augmented Learning

    There is growing concern among users of computer systems about how their data is handled. IT (Information Technology) professionals are well aware of this problem and are looking for solutions that meet the requirements and concerns of their users. In recent years, several techniques and technologies have emerged that help address it: technologies such as edge computing and techniques such as one-shot learning and data augmentation enable progress in this regard. In this article, we propose a system that combines these techniques and technologies to perform face recognition and form a low-cost security system. The results show that this combination is effective with most of the face detection algorithms evaluated and provides an effective solution to the problem posed.
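    The abstract does not give implementation details, but the combination it describes (one identity enrolled from a single reference image, expanded by data augmentation, then matched by embedding similarity) can be sketched roughly as follows. This is a minimal illustration with assumed names; embed() is only a placeholder for the face-embedding model that an edge deployment would run locally.

    import numpy as np

    def embed(img):
        # Placeholder embedding: flatten and L2-normalize the pixels.
        # A real system would use a face-embedding network running on the edge device.
        v = np.asarray(img, dtype=np.float32).ravel()
        return v / (np.linalg.norm(v) + 1e-8)

    def augment(img):
        # One-shot setting: expand the single enrollment image with simple
        # augmentations (horizontal flip, brightness shifts).
        img = np.asarray(img, dtype=np.float32)
        return [img, img[:, ::-1], np.clip(img * 1.1, 0, 255), np.clip(img * 0.9, 0, 255)]

    def enroll(reference_img):
        # Average the embeddings of the augmented copies into one template.
        return np.mean([embed(a) for a in augment(reference_img)], axis=0)

    def recognize(probe_img, templates, threshold=0.8):
        # Cosine similarity between the probe and each enrolled template.
        e = embed(probe_img)
        scores = {name: float(e @ t / (np.linalg.norm(t) + 1e-8)) for name, t in templates.items()}
        best = max(scores, key=scores.get)
        return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

    Usage would be, for example, templates = {"alice": enroll(reference_image)} followed by recognize(probe_image, templates). Swapping the placeholder embedding for a trained face-embedding model changes only embed() and the threshold; the one-shot enrollment-plus-augmentation structure stays the same.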

    A Unified Scheme of ResNet and Softmax

    Large language models (LLMs) have brought significant changes to human society. Softmax regression and residual neural networks (ResNet) are two important techniques in deep learning: they not only serve as significant theoretical components supporting the functionality of LLMs but are also related to many other areas of machine learning and theoretical computer science, including but not limited to image classification, object detection, semantic segmentation, and tensors. Previous research studied these two concepts separately. In this paper, we provide a theoretical analysis of the regression problem $\| \langle \exp(Ax) + Ax, \mathbf{1}_n \rangle^{-1} ( \exp(Ax) + Ax ) - b \|_2^2$, where $A \in \mathbb{R}^{n \times d}$, $b \in \mathbb{R}^n$, and $\mathbf{1}_n$ is the $n$-dimensional vector whose entries are all $1$. This regression problem is a unified scheme combining softmax regression and ResNet, which has not been done before. We derive the gradient, Hessian, and Lipschitz properties of the loss function. The Hessian is shown to be positive semidefinite, and its structure is characterized as the sum of a low-rank matrix and a diagonal matrix, which enables an efficient approximate Newton method. As a result, this unified scheme connects two fields previously thought to be unrelated and provides novel insight into the loss landscape and optimization of emerging over-parameterized neural networks, which is meaningful for future research on deep learning models.
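    The objective can be written down directly and differentiated automatically, which makes it concrete even without the paper's closed-form derivations. The sketch below, assuming JAX and toy random data (not from the paper), evaluates the loss, gradient, and Hessian at a point; the paper's efficient approximate Newton method, which exploits the low-rank-plus-diagonal Hessian structure, is not reproduced here.

    import jax
    import jax.numpy as jnp

    def loss(x, A, b):
        # u(x) = exp(Ax) + Ax; the loss normalizes u by <u, 1_n> and
        # measures the squared l2 distance to b.
        u = jnp.exp(A @ x) + A @ x
        alpha = jnp.sum(u)                  # <exp(Ax) + Ax, 1_n>
        return jnp.sum((u / alpha - b) ** 2)

    n, d = 8, 3                              # toy sizes for illustration only
    A = jax.random.normal(jax.random.PRNGKey(0), (n, d))
    b = jax.random.normal(jax.random.PRNGKey(1), (n,))
    x = jnp.zeros(d)

    g = jax.grad(loss)(x, A, b)              # gradient in R^d
    H = jax.hessian(loss)(x, A, b)           # d x d Hessian
    print(g.shape, H.shape)                  # (3,) (3, 3)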