Convergence proof of a sequential minimal optimization algorithm for support vector regression

Jun Guo, Norikazu Takahashi, Tetsuo Nishi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

A sequential minimal optimization (SMO) algorithm for support vector regression (SVR) was recently proposed by Flake and Lawrence, but the convergence of their algorithm has not yet been proved. In this paper, we consider an SMO algorithm that deals with the same optimization problem as Flake and Lawrence's SMO, and we give a rigorous proof that it always stops within a finite number of iterations.
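To make concrete what a two-variable SMO step for SVR looks like, the sketch below is an illustrative implementation, not the paper's algorithm. It assumes the single-coefficient dual formulation beta_i = alpha_i - alpha_i* (maximize y.beta - eps*||beta||_1 - 0.5*beta.K.beta subject to sum(beta) = 0 and -C <= beta_i <= C) that Flake and Lawrence's SMO targets; all names (rbf_kernel, pair_step, smo_svr) are hypothetical, and the working-set selection rule, on which the paper's finite-termination proof hinges, is replaced by a naive sweep over all index pairs.

import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian kernel matrix for an (n, d) sample array X.
    sq = np.sum(X * X, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))

def pair_step(beta, f, K, y, C, eps, i, j):
    # Exactly maximize the dual over (beta[i], beta[j]) while keeping their
    # sum fixed (this preserves the equality constraint) and both in [-C, C].
    # With t = beta[i], the restriction is piecewise quadratic with kinks
    # where beta[i] or beta[j] crosses 0, so the maximizer is a kink, a box
    # end, or the stationary point of one smooth piece.
    t0 = beta[i]
    s = beta[i] + beta[j]
    lo, hi = max(-C, s - C), min(C, s + C)
    eta = K[i, i] + K[j, j] - 2.0 * K[i, j]   # curvature; >= 0 for a PSD kernel
    g = (y[i] - y[j]) - (f[i] - f[j])         # smooth part of dW/dt at t = t0
    cands = {lo, hi}
    for kink in (0.0, s):                     # kinks of the eps*|.| terms
        if lo < kink < hi:
            cands.add(kink)
    if eta > 0.0:
        for si in (-1.0, 1.0):                # tentative sign of beta[i]
            for sj in (-1.0, 1.0):            # tentative sign of beta[j]
                t = t0 + (g - eps * (si - sj)) / eta
                cands.add(min(hi, max(lo, t)))

    def gain(t):                              # W(t) - W(t0), computed in O(1)
        d = t - t0
        return (g * d - 0.5 * eta * d * d
                - eps * (abs(t) + abs(s - t) - abs(t0) - abs(s - t0)))

    t_new = max(cands, key=gain)
    improve = gain(t_new)
    if improve <= 0.0:
        return 0.0
    d = t_new - t0
    beta[i] += d
    beta[j] -= d
    f += d * (K[:, i] - K[:, j])              # keep f = K @ beta up to date
    return improve

def smo_svr(K, y, C=1.0, eps=0.1, tol=1e-8, max_sweeps=200):
    # Naive SMO-style ascent on the SVR dual
    #   max  y.beta - eps*||beta||_1 - 0.5*beta.K.beta
    #   s.t. sum(beta) = 0,  -C <= beta_i <= C.
    # Pairs are swept exhaustively; the paper's working-set selection rule,
    # on which its finite-termination proof depends, is NOT reproduced here.
    n = len(y)
    beta = np.zeros(n)                        # feasible start: sum(beta) = 0
    f = np.zeros(n)                           # caches K @ beta
    for _ in range(max_sweeps):
        best = 0.0
        for i in range(n):
            for j in range(i + 1, n):
                best = max(best, pair_step(beta, f, K, y, C, eps, i, j))
        if best < tol:                        # no pair improves: stop
            break
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = np.linspace(0.0, 6.0, 40)[:, None]
    y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
    K = rbf_kernel(X, gamma=0.5)
    C, eps = 10.0, 0.1
    beta = smo_svr(K, y, C=C, eps=eps)
    f = K @ beta
    free = (np.abs(beta) > 1e-6) & (np.abs(beta) < C - 1e-6)
    b = np.mean(y[free] - f[free] - eps * np.sign(beta[free])) if free.any() else 0.0
    print("support vectors:", int(np.sum(np.abs(beta) > 1e-6)))
    print("train RMSE:", float(np.sqrt(np.mean((f + b - y) ** 2))))

With a positive semidefinite kernel, each pair step never decreases the dual objective; this monotonicity is the property that finite-termination arguments for SMO-type methods, such as the one in this paper, typically build on.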

Original language: English
Title of host publication: IEEE International Conference on Neural Networks - Conference Proceedings
Pages: 355-362
Number of pages: 8
ISBN (Print): 0780394909, 9780780394902
Publication status: Published - 2006
Externally published: Yes
Event: International Joint Conference on Neural Networks 2006, IJCNN '06 - Vancouver, BC, Canada
Duration: Jul 16, 2006 - Jul 21, 2006

Other

Other: International Joint Conference on Neural Networks 2006, IJCNN '06
Country: Canada
City: Vancouver, BC
Period: 7/16/06 - 7/21/06

ASJC Scopus subject areas

  • Software

Cite this

Guo, J., Takahashi, N., & Nishi, T. (2006). Convergence proof of a sequential minimal optimization algorithm for support vector regression. In IEEE International Conference on Neural Networks - Conference Proceedings (pp. 355-362). [1716114]

@inproceedings{23997c4549a6483f9a586ff1e5f659b9,
  title     = "Convergence proof of a sequential minimal optimization algorithm for support vector regression",
  author    = "Jun Guo and Norikazu Takahashi and Tetsuo Nishi",
  year      = "2006",
  language  = "English",
  isbn      = "0780394909",
  pages     = "355--362",
  booktitle = "IEEE International Conference on Neural Networks - Conference Proceedings",
}

Link to publication in Scopus: http://www.scopus.com/inward/record.url?scp=40649090457&partnerID=8YFLogxK

Link to the citations in Scopus: http://www.scopus.com/inward/citedby.url?scp=40649090457&partnerID=8YFLogxK