
Finding Approximate Solutions to NP-Hard Problems by Neural Networks Is Hard

By Xin Yao

Abstract

Finding approximate solutions to hard combinatorial optimization problems by neural networks is a very attractive prospect, and many empirical studies have been done in the area. However, recent research on a neural network model indicates that, for any NP-hard problem, the existence of a polynomial size network that solves it implies NP = co-NP, contrary to the widely believed conjecture that NP ≠ co-NP. This paper shows that even finding approximate solutions with guaranteed performance to some NP-hard problems by a polynomial size network is impossible unless NP = co-NP.

Keywords --- Neural Networks, Combinatorial Optimization, Computational Complexity.

1 Introduction
The interest in mapping combinatorial optimization problems onto neural networks has been growing rapidly since Hopfield and Tank first used them to solve the TSP [1]. It has been demonstrated that neural network optimization algorithms can give good near-optimal solutions to rather large NP-hard problems [...
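The introduction excerpt above refers to the Hopfield–Tank approach of mapping the TSP onto a neural network. As a rough illustration (a standard textbook formulation, not taken from this paper), a tour of n cities is encoded by binary activations v_{xi} indicating that city x occupies tour position i, and the network minimizes a quadratic energy whose penalty weights A, B, C, D are illustrative parameters:

% A, B, C terms: penalties enforcing a valid permutation (each city in one position, each position holding one city, exactly n active units)
% D term: length of the tour encoded by the permutation
E = \frac{A}{2}\sum_{x}\sum_{i}\sum_{j \neq i} v_{xi}\, v_{xj}
  + \frac{B}{2}\sum_{i}\sum_{x}\sum_{y \neq x} v_{xi}\, v_{yi}
  + \frac{C}{2}\Bigl(\sum_{x}\sum_{i} v_{xi} - n\Bigr)^{2}
  + \frac{D}{2}\sum_{x}\sum_{y \neq x}\sum_{i} d_{xy}\, v_{xi}\,(v_{y,i+1} + v_{y,i-1})

Here d_{xy} is the distance between cities x and y, and position indices i ± 1 are taken modulo n. The paper's result concerns networks of this general kind: no polynomial size network can guarantee near-optimal solutions for some NP-hard problems unless NP = co-NP.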

Topics: Combinatorial Optimization, Computational Complexity
Year: 1992
DOI identifier: 10.1016/0020-0190(92)90261-s
OAI identifier: oai:CiteSeerX.psu:10.1.1.12.9839
Provided by: CiteSeerX
Download PDF:
Sorry, we are unable to provide the full text but you may find it at the following location(s):
  • http://citeseerx.ist.psu.edu/v... (external link)
  • ftp://www.cs.adfa.edu.au/pub/x... (external link)

