Global convergence of decomposition learning methods for support vector machines

Norikazu Takahashi, Tetsuo Nishi

Research output: Contribution to journal › Article › peer-review

31 Citations (Scopus)


Decomposition methods are well-known techniques for solving the quadratic programming (QP) problems that arise in training support vector machines (SVMs). In each iteration of a decomposition method, a small number of variables are selected and a QP subproblem in only those variables is solved. Since no large matrix computations are required, decomposition methods are applicable to large QP problems. In this paper, we present a rigorous analysis of the global convergence of general decomposition methods for SVMs. We first introduce a relaxed version of the optimality condition for the QP problems, and then prove that a decomposition method reaches a solution satisfying this relaxed optimality condition within a finite number of iterations, under a very mild condition on how the variables are selected.
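To make the working-set idea concrete, the following is a minimal sketch of a decomposition method with working sets of size two (in the style of SMO), for the linear-kernel SVM dual. It is an illustrative assumption-laden toy, not the paper's algorithm: the function name, the tolerance `tol`, and the most-violating-partner heuristic are all choices made here for the example. The loop selects two variables, solves the two-variable sub-QP in closed form while keeping the constraints sum(alpha*y) = 0 and 0 <= alpha <= C, and terminates once no variable violates the KKT conditions beyond the tolerance, mirroring the relaxed optimality condition described above.

```python
import numpy as np

def smo_decomposition(X, y, C=1.0, tol=1e-4, max_passes=50):
    """Toy working-set-of-two decomposition for the SVM dual QP
    (linear kernel). Each iteration fixes all variables except two
    and solves the resulting 2-variable sub-QP analytically."""
    n = len(y)
    K = X @ X.T                      # Gram matrix (linear kernel)
    alpha = np.zeros(n)
    b = 0.0
    for _ in range(max_passes):
        changed = 0
        for i in range(n):
            E = (alpha * y) @ K + b - y          # prediction errors
            # KKT violation check for variable i (within tolerance tol)
            if not ((y[i]*E[i] < -tol and alpha[i] < C) or
                    (y[i]*E[i] >  tol and alpha[i] > 0)):
                continue
            j = int(np.argmax(np.abs(E - E[i])))  # most-violating partner
            if j == i:
                continue
            ai, aj = alpha[i], alpha[j]
            # Box bounds that keep sum(alpha*y) = 0 and 0 <= alpha <= C
            if y[i] != y[j]:
                L, H = max(0.0, aj - ai), min(C, C + aj - ai)
            else:
                L, H = max(0.0, ai + aj - C), min(C, ai + aj)
            eta = 2*K[i, j] - K[i, i] - K[j, j]   # sub-QP curvature (< 0)
            if L >= H or eta >= 0:
                continue
            aj_new = float(np.clip(aj - y[j]*(E[i] - E[j])/eta, L, H))
            if abs(aj_new - aj) < 1e-12:
                continue
            ai_new = ai + y[i]*y[j]*(aj - aj_new)
            # Update the bias so KKT holds for an unbounded variable
            b1 = b - E[i] - y[i]*(ai_new - ai)*K[i, i] - y[j]*(aj_new - aj)*K[i, j]
            b2 = b - E[j] - y[i]*(ai_new - ai)*K[i, j] - y[j]*(aj_new - aj)*K[j, j]
            b = b1 if 0 < ai_new < C else (b2 if 0 < aj_new < C else 0.5*(b1 + b2))
            alpha[i], alpha[j] = ai_new, aj_new
            changed += 1
        if changed == 0:              # relaxed KKT satisfied: terminate
            break
    return alpha, b

# Toy separable data; the learned separator is w . x + b with w = sum(alpha*y*x)
X = np.array([[2., 2.], [3., 3.], [-2., -2.], [-3., -3.]])
y = np.array([1., 1., -1., -1.])
alpha, b = smo_decomposition(X, y)
w = (alpha * y) @ X
pred = np.sign(X @ w + b)
```

Note that the iterate stays feasible by construction: each two-variable update moves along the equality constraint sum(alpha*y) = 0 and clips to the box [0, C], which is exactly why decomposition methods never need to touch the full Gram matrix at once.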

Original language: English
Pages (from-to): 1362-1369
Number of pages: 8
Journal: IEEE Transactions on Neural Networks
Issue number: 6
Publication status: Published - Nov 2006
Externally published: Yes


Keywords

  • Decomposition method
  • Global convergence
  • Quadratic programming (QP)
  • Support vector machines (SVMs)
  • Termination

ASJC Scopus subject areas

  • Software
  • Computer Science Applications
  • Computer Networks and Communications
  • Artificial Intelligence


