A Unified Scheme of ResNet and Softmax
Authors
Zhao Song
Weixin Wang
Junze Yin
Publication date
23 September 2023
Publisher
arXiv
Abstract
Large language models (LLMs) have brought significant changes to human society. Softmax regression and residual neural networks (ResNet) are two important techniques in deep learning: they not only serve as significant theoretical components supporting the functionality of LLMs but also are related to many other machine learning and theoretical computer science fields, including but not limited to image classification, object detection, semantic segmentation, and tensors. Previous research works studied these two concepts separately. In this paper, we provide a theoretical analysis of the regression problem:
$\| \langle \exp(Ax) + Ax, {\bf 1}_n \rangle^{-1} ( \exp(Ax) + Ax ) - b \|_2^2$, where $A$ is a matrix in $\mathbb{R}^{n \times d}$, $b$ is a vector in $\mathbb{R}^n$, and ${\bf 1}_n$ is the $n$-dimensional vector whose entries are all $1$. This regression problem is a unified scheme that combines softmax regression and ResNet, which has never been done before. We derive the gradient, Hessian, and Lipschitz properties of the loss function. The Hessian is shown to be positive semidefinite, and its structure is characterized as the sum of a low-rank matrix and a diagonal matrix. This enables an efficient approximate Newton method. As a result, this unified scheme helps to connect two fields previously thought to be unrelated, and provides novel insight into the loss landscape and optimization of emerging over-parameterized neural networks, which is meaningful for future research in deep learning models.
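To make the objective concrete, here is a minimal NumPy sketch that evaluates the unified loss exactly as written in the abstract. The function name, problem sizes, and random data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def unified_loss(A, x, b):
    """Evaluate || <exp(Ax) + Ax, 1_n>^{-1} (exp(Ax) + Ax) - b ||_2^2.

    The exp term plays the role of softmax regression and the additional Ax term
    plays the role of a ResNet-style skip connection, as described in the abstract.
    """
    u = np.exp(A @ x) + A @ x      # exp(Ax) + Ax
    f = u / u.sum()                # divide by <u, 1_n> to normalise, softmax-style
    r = f - b                      # residual against the target vector b
    return float(r @ r)            # squared l2 norm

# Illustrative sizes only (not from the paper)
rng = np.random.default_rng(0)
n, d = 8, 3
A = rng.standard_normal((n, d))
x = rng.standard_normal(d)
b = rng.standard_normal(n)
print(unified_loss(A, x, b))
```

The abstract's point that a diagonal-plus-low-rank Hessian enables an efficient approximate Newton method can be illustrated, in the simplest rank-one case, by the Sherman-Morrison identity: the Newton system is solvable in linear time without forming or factorising the matrix. This is a generic illustration of that structural observation, not the paper's own algorithm.

```python
import numpy as np

def newton_step_diag_plus_rank1(diag, u, grad):
    """Solve (diag(diag) + u u^T) s = -grad in O(n) via Sherman-Morrison.

    Generic illustration of why 'diagonal + low-rank' curvature is cheap to
    invert; the paper's actual approximate Newton method is not reproduced here.
    """
    Dinv_g = grad / diag
    Dinv_u = u / diag
    s = Dinv_g - Dinv_u * (u @ Dinv_g) / (1.0 + u @ Dinv_u)
    return -s

# Sanity check against a dense solve (illustrative sizes)
rng = np.random.default_rng(1)
n = 6
dvec = rng.uniform(1.0, 2.0, n)   # positive diagonal keeps the matrix positive definite
u = rng.standard_normal(n)
g = rng.standard_normal(n)
H = np.diag(dvec) + np.outer(u, u)
assert np.allclose(newton_step_diag_plus_rank1(dvec, u, g), np.linalg.solve(H, -g))
```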
Available Versions
arXiv.org e-Print Archive
OAI identifier
oai:arXiv.org:2309.13482
Last updated on 12/10/2023