Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models

Kai-Wei Chang and Dan Roth, in KDD, 2011.


[Full Text]

Abstract

As the size of data sets used to build classifiers steadily increases, training a linear model efficiently with limited memory becomes essential. Several techniques deal with this problem by loading blocks of data from disk one at a time, but usually take a considerable number of iterations to converge to a reasonable model. Even the best block minimization techniques [1] require many block loads since they treat all training examples uniformly. As disk I/O is expensive, reducing the amount of disk access can dramatically decrease the training time.
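To make the block-minimization idea concrete, below is a minimal sketch (not the authors' implementation) of block-wise training with a selectively cached set of informative examples kept in memory across block loads. The load_block helper, the cache_size parameter, and the hinge-loss subgradient updates (standing in for the dual coordinate descent solver typically used in this setting) are all assumptions for illustration.

import numpy as np

def load_block(block_id, n_features, rng):
    # Stand-in for reading one block of (X, y) from disk; synthetic data
    # is generated here only to keep the sketch self-contained.
    X = rng.standard_normal((1000, n_features))
    y = rng.choice([-1.0, 1.0], size=1000)
    return X, y

def train_selective_blocks(n_blocks=10, n_features=50, epochs=5,
                           lr=0.01, cache_size=500, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(n_features)
    cache_X = np.empty((0, n_features))
    cache_y = np.empty(0)
    for _ in range(epochs):
        for b in range(n_blocks):
            Xb, yb = load_block(b, n_features, rng)
            # Train on the current block together with the cached examples.
            X = np.vstack([Xb, cache_X]) if cache_X.size else Xb
            y = np.concatenate([yb, cache_y]) if cache_y.size else yb
            for i in rng.permutation(len(y)):
                margin = y[i] * (X[i] @ w)
                if margin < 1.0:              # hinge-loss subgradient step
                    w += lr * y[i] * X[i]
            # Keep the lowest-margin (most informative) examples in memory,
            # so they are revisited without another disk access.
            margins = y * (X @ w)
            keep = np.argsort(margins)[:cache_size]
            cache_X, cache_y = X[keep], y[keep]
    return w

if __name__ == "__main__":
    w = train_selective_blocks()
    print("trained weight norm:", np.linalg.norm(w))

The point of the cache is that examples the current model still misclassifies or barely classifies correctly are retained in memory, so the solver can keep refining on them between expensive block loads rather than treating all examples uniformly.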

Bib Entry

@inproceedings{ChangRo11,
  author = {Chang, Kai-Wei and Roth, Dan},
  title = {Selective Block Minimization for Faster Convergence of Limited Memory Large-scale Linear Models},
  booktitle = {KDD},
  slides_url = {http://cogcomp.cs.illinois.edu/files/presentations/kdd_slide.pdf},
  year = {2011}
}
