A Sequential Dual Method for Large Scale Multi-Class Linear SVMs

S. Sathiya Keerthi, S. Sundararajan, Kai-Wei Chang, Cho-Jui Hsieh, and Chih-Jen Lin, in KDD, 2008.

Code

Download the full text


Abstract

Efficient training of direct multi-class formulations of linear Support Vector Machines is very useful in applications such as text classification with a huge number of examples as well as features. This paper presents a fast dual method for this training. The main idea is to sequentially traverse the training set and optimize the dual variables associated with one example at a time. The speed of training is enhanced further by shrinking and cooling heuristics. Experiments indicate that our method is much faster than state-of-the-art solvers such as bundle, cutting plane, and exponentiated gradient methods.
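The sequential traversal described above can be sketched in NumPy for the Crammer-Singer multi-class formulation. This is a simplified illustration, not the paper's algorithm: the paper solves the full per-example dual sub-problem (with shrinking and cooling heuristics), whereas the sketch below takes a single pairwise step per visit, moving dual mass between the true class and the currently most violating class. All function and variable names are hypothetical.

```python
import numpy as np

def train_multiclass_svm(X, y, n_classes, C=1.0, epochs=50, seed=0):
    """Simplified sequential dual sketch for the Crammer-Singer
    multi-class linear SVM. Dual variables alpha_i^m satisfy
    sum_m alpha_i^m = 0, alpha_i^{y_i} <= C, alpha_i^m <= 0 (m != y_i),
    and w_m = sum_i alpha_i^m x_i."""
    n, d = X.shape
    rng = np.random.default_rng(seed)
    alpha = np.zeros((n, n_classes))   # dual variables alpha_i^m
    W = np.zeros((n_classes, d))       # class weight vectors w_m
    sq = np.einsum('ij,ij->i', X, X)   # precomputed ||x_i||^2
    for _ in range(epochs):
        for i in rng.permutation(n):   # sequentially visit examples
            if sq[i] == 0.0:
                continue
            # gradient of the dual objective: g_m = w_m . x_i + [m != y_i]
            g = W @ X[i] + (np.arange(n_classes) != y[i])
            # most violating class r != y_i
            r = int(np.argmax(np.where(np.arange(n_classes) == y[i],
                                       -np.inf, g)))
            # unconstrained optimal step along (+1 at y_i, -1 at r)
            d_star = -(g[y[i]] - g[r]) / (2.0 * sq[i])
            # clip so alpha_i^{y_i} <= C and alpha_i^r <= 0 stay feasible
            step = max(alpha[i, r], min(C - alpha[i, y[i]], d_star))
            if step != 0.0:
                alpha[i, y[i]] += step
                alpha[i, r] -= step
                W[y[i]] += step * X[i]
                W[r] -= step * X[i]
    return W

def predict(W, X):
    """Predict the class with the largest score w_m . x."""
    return np.argmax(X @ W.T, axis=1)
```

A toy usage: on a small linearly separable 3-class set, a few sequential passes suffice to fit the training data, e.g. `W = train_multiclass_svm(X, y, n_classes=3)` followed by `predict(W, X)`. The per-example update touches only one dual block at a time, which is what makes the sequential scheme cheap per pass on large sparse data.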


Bib Entry

@inproceedings{KSCHL08,
  author = {Keerthi, S. Sathiya and Sundararajan, S. and Chang, Kai-Wei and Hsieh, Cho-Jui and Lin, Chih-Jen},
  title = {A Sequential Dual Method for Large Scale Multi-Class Linear SVMs},
  booktitle = {KDD},
  year = {2008}
}

Related Publications

  1. Large Linear Classification When Data Cannot Fit In Memory, TKDD, 2012
  2. Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models, KDD, 2011
  3. Iterative Scaling and Coordinate Descent Methods for Maximum Entropy Models, JMLR, 2010
  4. A Comparison of Optimization Methods and Software for Large-scale L1-regularized Linear Classification, JMLR, 2010
  5. Training and Testing Low-degree Polynomial Data Mappings via Linear SVM, JMLR, 2010
  6. A Dual Coordinate Descent Method for Large-Scale Linear SVM, ICML, 2008
  7. Coordinate Descent Method for Large-scale L2-loss Linear SVM, JMLR, 2008
  8. LIBLINEAR: A Library for Large Linear Classification, JMLR, 2008