Global convergence of SMO algorithm for support vector regression

Norikazu Takahashi, Jun Guo, Tetsuo Nishi

Research output: Contribution to journal › Article

32 Citations (Scopus)

Abstract

Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.
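
For readers skimming the record, the setting described in the abstract can be made concrete. The SVR dual is the convex QP

    minimize   (1/2) Σ_{i,j} (α_i − α_i*)(α_j − α_j*) K(x_i, x_j) + ε Σ_i (α_i + α_i*) − Σ_i y_i (α_i − α_i*)
    subject to Σ_i (α_i − α_i*) = 0,   0 ≤ α_i, α_i* ≤ C,

with one pair (α_i, α_i*) per training sample, hence ℓ pairs of variables. The sketch below is a minimal NumPy illustration of an SMO loop on this QP, written as a generic LIBSVM-style maximal-violating-pair scheme over the stacked vector z = (α, α*). It uses our own notation and function names (smo_svr, etc.) and is not the specific two-pair update rule and finite-termination argument analyzed in the paper.

    import numpy as np

    def smo_svr(K, y, C=1.0, eps=0.1, tol=1e-3, max_iter=10_000):
        """Toy SMO solver for the SVR dual, posed as the box-constrained QP
        min 1/2 z^T P z + q^T z  s.t.  a^T z = 0,  0 <= z <= C,
        where z = (alpha, alpha*), P = [[K, -K], [-K, K]],
        q = (eps - y, eps + y), and a = (+1, ..., +1, -1, ..., -1)."""
        l = len(y)
        P = np.block([[K, -K], [-K, K]])
        q = np.concatenate([eps - y, eps + y])
        a = np.concatenate([np.ones(l), -np.ones(l)])
        z = np.zeros(2 * l)
        g = q.copy()                      # gradient of the dual objective at z = 0

        for _ in range(max_iter):
            # Index sets that can move up/down without leaving the box [0, C].
            up = ((a > 0) & (z < C)) | ((a < 0) & (z > 0))
            low = ((a > 0) & (z > 0)) | ((a < 0) & (z < C))
            v = -a * g
            i = np.flatnonzero(up)[np.argmax(v[up])]     # maximal violating pair
            j = np.flatnonzero(low)[np.argmin(v[low])]
            if v[i] - v[j] < tol:         # KKT conditions hold up to tol: stop
                break
            # Analytic 1-D subproblem along z_i += a_i*d, z_j -= a_j*d,
            # which preserves the equality constraint a^T z = 0.
            eta = P[i, i] + P[j, j] - 2 * a[i] * a[j] * P[i, j]
            d = (v[i] - v[j]) / max(eta, 1e-12)
            d = min(d,
                    C - z[i] if a[i] > 0 else z[i],   # keep z_i in [0, C]
                    z[j] if a[j] > 0 else C - z[j])   # keep z_j in [0, C]
            z[i] += a[i] * d
            z[j] -= a[j] * d
            g += d * (a[i] * P[:, i] - a[j] * P[:, j])  # incremental gradient update

        return z[:l] - z[l:]              # expansion coefficients alpha - alpha*

Given a kernel matrix K and targets y, beta = smo_svr(K, y) yields the coefficients of the regressor f(x) = Σ_i beta_i K(x_i, x) + b, with the bias b recoverable from any index strictly inside the box. The stopping test v[i] − v[j] < tol is the usual violating-pair criterion; finite termination of loops of exactly this kind is what the paper establishes for its update scheme.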

Original language: English
Pages (from-to): 971-982
Number of pages: 12
Journal: IEEE Transactions on Neural Networks
Volume: 19
Issue number: 6
DOI: 10.1109/TNN.2007.915116
Publication status: Published - 2008
Externally published: Yes

Keywords

  • Convergence
  • Quadratic programming (QP)
  • Sequential minimal optimization (SMO)
  • Support vector regression (SVR)

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Networks and Communications
  • Computer Science Applications
  • Software
  • Control and Systems Engineering
  • Theoretical Computer Science
  • Electrical and Electronic Engineering
  • Computational Theory and Mathematics
  • Hardware and Architecture

Cite this

Global convergence of SMO algorithm for support vector regression. / Takahashi, Norikazu; Guo, Jun; Nishi, Tetsuo.

In: IEEE Transactions on Neural Networks, Vol. 19, No. 6, 2008, p. 971-982.

Research output: Contribution to journal › Article

@article{fde9f541694941409772a8766c315742,
  title = "Global convergence of SMO algorithm for support vector regression",
  abstract = "Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.",
  keywords = "Convergence, Quadratic programming (QP), Sequential minimal optimization (SMO), Support vector regression (SVR)",
  author = "Norikazu Takahashi and Jun Guo and Tetsuo Nishi",
  year = "2008",
  doi = "10.1109/TNN.2007.915116",
  language = "English",
  volume = "19",
  pages = "971--982",
  journal = "IEEE Transactions on Neural Networks",
  issn = "1045-9227",
  publisher = "IEEE",
  number = "6",
}

TY  - JOUR
T1  - Global convergence of SMO algorithm for support vector regression
AU  - Takahashi, Norikazu
AU  - Guo, Jun
AU  - Nishi, Tetsuo
PY  - 2008
Y1  - 2008
N2  - Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.
AB  - Global convergence of the sequential minimal optimization (SMO) algorithm for support vector regression (SVR) is studied in this paper. Given ℓ training samples, SVR is formulated as a convex quadratic programming (QP) problem with ℓ pairs of variables. We prove that if two pairs of variables violating the optimality condition are chosen for update in each step and subproblems are solved in a certain way, then the SMO algorithm always stops within a finite number of iterations after finding an optimal solution. Also, efficient implementation techniques for the SMO algorithm are presented and compared experimentally with other SMO algorithms.
KW  - Convergence
KW  - Quadratic programming (QP)
KW  - Sequential minimal optimization (SMO)
KW  - Support vector regression (SVR)
UR  - http://www.scopus.com/inward/record.url?scp=49149114060&partnerID=8YFLogxK
UR  - http://www.scopus.com/inward/citedby.url?scp=49149114060&partnerID=8YFLogxK
U2  - 10.1109/TNN.2007.915116
DO  - 10.1109/TNN.2007.915116
M3  - Article
VL  - 19
SP  - 971
EP  - 982
JO  - IEEE Transactions on Neural Networks
JF  - IEEE Transactions on Neural Networks
SN  - 1045-9227
IS  - 6
ER  -