A parallel neural network computing for the maximum clique problem

Kuo Chun Lee, Nobuo Funabiki, Y. B. Cho, Yoshiyasu Takefuji

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

6 Citations (Scopus)

Abstract

A novel computational model for large-scale maximum clique problems is proposed and tested. The maximum clique problem is first formulated as an unconstrained quadratic zero-one programming problem, which is solved by minimizing the weight summation over the same partition in a newly constructed graph. The proposed maximum neural network has the following advantages: (1) coefficient-parameter tuning in the motion equation is not required in the maximum neural network, whereas conventional neural networks suffer from it; (2) the equilibrium state of the maximum neural network is clearly defined, giving the algorithm a well-defined termination condition, while existing neural networks lack such a definition; and (3) the maximum neural network always allows the state of the system to converge to a feasible solution, which existing neural networks cannot guarantee. For large problem instances, where the conventional branch-and-bound method cannot be used because of its exponentially increasing computation time, the proposed parallel algorithm outperforms the best known algorithms in computation time with much the same solution quality.
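
To make the formulation referred to in the abstract concrete, the sketch below implements a standard unconstrained quadratic zero-one objective for the maximum clique problem, f(x) = sum_i x_i - sum over non-edges (i,j) of x_i * x_j, whose maximum over binary vectors equals the clique number, together with a brute-force maximiser for tiny graphs. This is a generic illustration only, under the assumption that the paper's formulation is of this family: the paper's own construction (the "newly constructed graph" and the maximum neural network dynamics) may differ in detail, and the function names used here are hypothetical.

import itertools

def clique_qzop_objective(x, non_edges):
    # f(x) = sum_i x_i - sum_{(i,j) in non-edges} x_i * x_j
    # The maximum of f over x in {0,1}^n equals the clique number of the graph
    # (a classical unconstrained quadratic zero-one formulation of maximum clique).
    return sum(x) - sum(x[i] * x[j] for (i, j) in non_edges)

def non_edges_of(n, edges):
    # All unordered vertex pairs that are NOT edges of the graph (complement edges).
    edge_set = {frozenset(e) for e in edges}
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if frozenset((i, j)) not in edge_set]

def max_clique_brute_force(n, edges):
    # Exhaustive maximisation of the objective; exponential in n, so usable only
    # for tiny graphs -- exactly the regime the paper's parallel neural network
    # is intended to escape.
    pairs = non_edges_of(n, edges)
    best_val, best_x = float("-inf"), None
    for x in itertools.product((0, 1), repeat=n):
        val = clique_qzop_objective(x, pairs)
        if val > best_val:
            best_val, best_x = val, x
    clique = [v for v, bit in enumerate(best_x) if bit]
    return best_val, clique

if __name__ == "__main__":
    # Toy graph: a triangle {0, 1, 2} with a path 2-3-4 attached.
    edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4)]
    print(max_clique_brute_force(5, edges))   # -> (3, [0, 1, 2])

The brute-force loop is only a correctness reference for small instances; in the paper, an equivalent objective is instead minimised by a maximum neural network, which, according to the abstract, requires no coefficient tuning and always converges to a feasible (binary) state.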

Original language: English
Title of host publication: 91 IEEE Int Jt Conf Neural Networks IJCNN 91
Publisher: Publ by IEEE
Pages: 905-910
Number of pages: 6
ISBN (Print): 0780302273
Publication status: Published - 1991
Externally published: Yes
Event: 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91 - Singapore, Singapore
Duration: Nov 18, 1991 - Nov 21, 1991

Other

Other: 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91
City: Singapore, Singapore
Period: 11/18/91 - 11/21/91

Fingerprint

Neural networks
Branch and bound method
Parallel algorithms
Equations of motion
Tuning

ASJC Scopus subject areas

  • Engineering (all)

Cite this

Lee, K. C., Funabiki, N., Cho, Y. B., & Takefuji, Y. (1991). A parallel neural network computing for the maximum clique problem. In 91 IEEE Int Jt Conf Neural Networks IJCNN 91 (pp. 905-910). Publ by IEEE.

@inproceedings{c9dd5e1b23c04049916ba53779d807a6,
title = "A parallel neural network computing for the maximum clique problem",
abstract = "A novel computational model for large-scale maximum clique problems is proposed and tested. The maximum clique problem is first formulated as an unconstrained quadratic zero-one programming and it is solved by minimizing the weight summation over the same partition in a newly constructed graph. The proposed maximum neural network has the following advantages: (1) coefficient-parameter tuning in the motion equation is not required in the maximum neural network while the conventional neural networks suffer from it; (2) the equilibrium state of the maximum neural network is clearly defined in order to terminate the algorithm, while the existing neural networks do not have the clear definition; and (3) the maximum neural network always allows the state of the system to converge to the feasible solution, while the existing neural networks cannot guarantee it. The proposed parallel algorithm for large-size problems outperforms the best known algorithms in terms of computation time with much the same solution quality where the conventional branch-and-bound method cannot be used due to the exponentially increasing computation time.",
author = "Lee, {Kuo Chun} and Nobuo Funabiki and Cho, {Y. B.} and Yoshiyasu Takefuji",
year = "1991",
language = "English",
isbn = "0780302273",
pages = "905--910",
booktitle = "91 IEEE Int Jt Conf Neural Networks IJCNN 91",
publisher = "Publ by IEEE",

}
