### Abstract

A novel computational model for large-scale maximum clique problems is proposed and tested. The maximum clique problem is first formulated as an unconstrained quadratic zero-one programming problem, which is solved by minimizing the weight summation over the same partition in a newly constructed graph. The proposed maximum neural network has three advantages: (1) it requires no coefficient-parameter tuning in the motion equation, from which conventional neural networks suffer; (2) its equilibrium state is clearly defined, providing a termination condition for the algorithm, whereas existing neural networks lack such a definition; and (3) it always allows the state of the system to converge to a feasible solution, which existing neural networks cannot guarantee. For large problem instances, where the conventional branch-and-bound method cannot be used because of its exponentially increasing computation time, the proposed parallel algorithm outperforms the best known algorithms in computation time while achieving much the same solution quality.
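As context for the formulation above: a maximum clique in a graph G = (V, E) can be expressed as an unconstrained quadratic zero-one program that rewards selected vertices and penalizes selecting pairs that are not adjacent (edges of the complement graph). The sketch below illustrates that objective with a simple single-flip descent on a toy graph. It is not the paper's parallel maximum neural network; the penalty weight `A` and the function names are illustrative assumptions.

```python
import itertools

def clique_energy(x, n, edges, A=2.0):
    """Quadratic zero-one objective for maximum clique:
    E(x) = -sum_i x_i + A * sum over selected non-adjacent pairs.
    Lower energy = larger selected set with fewer constraint violations."""
    adj = set(edges) | {(j, i) for i, j in edges}
    selected = [i for i in range(n) if x[i]]
    penalty = sum(1 for i, j in itertools.combinations(selected, 2)
                  if (i, j) not in adj)
    return -len(selected) + A * penalty

def greedy_clique(n, edges):
    """Single-bit-flip descent on the energy; a sequential toy
    stand-in for parallel network dynamics. With A > 1, any flip
    that violates adjacency raises the energy, so the final
    selected set is always a clique (a feasible solution)."""
    x = [0] * n
    improved = True
    while improved:
        improved = False
        for i in range(n):
            before = clique_energy(x, n, edges)
            x[i] ^= 1  # tentatively flip vertex i in/out
            if clique_energy(x, n, edges) < before:
                improved = True
            else:
                x[i] ^= 1  # revert: no improvement
    return [i for i in range(n) if x[i]]
```

On a triangle {0, 1, 2} with a pendant vertex 3 attached only to 0, the descent selects the triangle and rejects vertex 3, since adding it would pay the non-adjacency penalty twice.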

| Original language | English |
|---|---|
| Title of host publication | 91 IEEE Int Jt Conf Neural Networks IJCNN 91 |
| Publisher | Publ by IEEE |
| Pages | 905-910 |
| Number of pages | 6 |
| ISBN (Print) | 0780302273 |
| Publication status | Published - 1991 |
| Externally published | Yes |
| Event | 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91, Singapore, Singapore. Duration: Nov 18, 1991 → Nov 21, 1991 |

### Other

| Other | 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91 |
|---|---|
| City | Singapore, Singapore |
| Period | 11/18/91 → 11/21/91 |

### ASJC Scopus subject areas

- Engineering(all)

### Cite this

Lee, K. C., Funabiki, N., Cho, Y. B., & Takefuji, Y. (1991). **A parallel neural network computing for the maximum clique problem.** In *91 IEEE Int Jt Conf Neural Networks IJCNN 91* (pp. 905-910). Publ by IEEE. Presented at the 1991 IEEE International Joint Conference on Neural Networks - IJCNN '91, Singapore, Singapore, Nov 18-21, 1991.

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


RIS export:

```
TY - GEN
T1 - A parallel neural network computing for the maximum clique problem
AU - Lee, Kuo Chun
AU - Funabiki, Nobuo
AU - Cho, Y. B.
AU - Takefuji, Yoshiyasu
PY - 1991
Y1 - 1991
AB - A novel computational model for large-scale maximum clique problems is proposed and tested. The maximum clique problem is first formulated as an unconstrained quadratic zero-one programming and it is solved by minimizing the weight summation over the same partition in a newly constructed graph. The proposed maximum neural network has the following advantages: (1) coefficient-parameter tuning in the motion equation is not required in the maximum neural network while the conventional neural networks suffer from it; (2) the equilibrium state of the maximum neural network is clearly defined in order to terminate the algorithm, while the existing neural networks do not have the clear definition; and (3) the maximum neural network always allows the state of the system to converge to the feasible solution, while the existing neural networks cannot guarantee it. The proposed parallel algorithm for large-size problems outperforms the best known algorithms in terms of computation time with much the same solution quality where the conventional branch-and-bound method cannot be used due to the exponentially increasing computation time.
UR - http://www.scopus.com/inward/record.url?scp=0026308868&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=0026308868&partnerID=8YFLogxK
M3 - Conference contribution
AN - SCOPUS:0026308868
SN - 0780302273
SP - 905
EP - 910
BT - 91 IEEE Int Jt Conf Neural Networks IJCNN 91
PB - Publ by IEEE
ER -
```