Aggregate subgradient method for nonsmooth DC optimization
Authors
Adil Bagirov
Kaisa Joki
Napsu Karmitsa
Marko Mäkelä
Sona Taheri
Publication date
1 January 2021
Publisher
Springer Science and Business Media LLC
DOI
Abstract
The aggregate subgradient method is developed for solving unconstrained nonsmooth difference-of-convex (DC) optimization problems. The proposed method shares similarities with both the subgradient and the bundle methods. Aggregate subgradients are defined as a convex combination of subgradients computed at null steps between two serious steps. At each iteration, search directions are found using only two subgradients: the aggregate subgradient and a subgradient computed at the current null step. It is proved that the proposed method converges to a critical point of the DC optimization problem and that the number of null steps between two serious steps is finite. The new method is tested on academic test problems and compared with several other nonsmooth DC optimization solvers. © 2020, Springer-Verlag GmbH Germany, part of Springer Nature.
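For intuition only, the sketch below illustrates the serious-step/null-step pattern the abstract describes: an aggregate built as a convex combination of subgradients collected at null steps, with the search direction taken from that aggregate. It is not the authors' algorithm; the toy DC function f(x) = ||x||_1 - ||x||_inf, the descent test, and the parameters t0, m, and lam are all assumptions chosen for illustration, and the bundle machinery of the actual method is omitted.

```python
import numpy as np

# Toy DC decomposition f = f1 - f2 with f1(x) = ||x||_1 and f2(x) = ||x||_inf.
def f1(x):
    return np.sum(np.abs(x))

def f2(x):
    return np.max(np.abs(x))

def subgrad_f1(x):
    # sign(x) is a valid subgradient of the l1 norm (0 at kinks is allowed)
    return np.sign(x)

def subgrad_f2(x):
    # +/- e_j for a coordinate j attaining the maximum absolute value
    g = np.zeros_like(x)
    j = int(np.argmax(np.abs(x)))
    g[j] = 1.0 if x[j] >= 0 else -1.0
    return g

def dc_subgrad(x):
    return subgrad_f1(x) - subgrad_f2(x)

def aggregate_subgradient_sketch(x0, t0=1.0, m=0.2, lam=0.25,
                                 max_iter=200, tol=1e-8):
    """Illustrative serious/null-step loop with an aggregate subgradient.

    The aggregate is a convex combination of DC subgradients collected at
    null steps; a serious step is taken when the trial point achieves a
    sufficient decrease. All parameter values here are assumptions.
    """
    x = np.asarray(x0, dtype=float)
    agg = dc_subgrad(x)          # aggregate starts as the current subgradient
    t = t0
    for _ in range(max_iter):
        if np.dot(agg, agg) < tol:
            break                # aggregate ~ 0: stop (approximate criticality)
        x_trial = x - t * agg    # search direction is minus the aggregate
        decrease = (f1(x) - f2(x)) - (f1(x_trial) - f2(x_trial))
        if decrease >= m * t * np.dot(agg, agg):
            # serious step: move and restart the aggregate at the new point
            x, agg, t = x_trial, dc_subgrad(x_trial), t0
        else:
            # null step: blend the trial-point subgradient into the aggregate
            # (a convex combination with weight lam) and shorten the step
            agg = (1.0 - lam) * agg + lam * dc_subgrad(x_trial)
            t *= 0.5
    return x

x_star = aggregate_subgradient_sketch([2.0, -1.5, 0.5])
print(x_star, f1(x_star) - f2(x_star))   # e.g. [2. 0. 0.] 0.0
```

In this sketch each direction combines only two pieces of information, the current aggregate and one new subgradient, mirroring the two-subgradient structure the abstract highlights; the finiteness of null steps between serious steps and convergence to a critical point are properties proved for the paper's method, not for this toy loop.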
Available versions
Federation ResearchOnline
vital:15031
Last updated on 02/12/2022