Machine Learning

Learning rates for classification with Gaussian kernels



  • arXiv


    This paper presents a refined error analysis for binary classification using support vector machines (SVMs) with Gaussian kernels and convex losses. Our first result shows that for some loss functions, such as the truncated quadratic loss and the quadratic loss, the SVM with Gaussian kernel can achieve an almost optimal learning rate, provided the regression function is smooth. Our second result shows that, for a large class of loss functions, under a Tsybakov noise assumption, if the regression function is infinitely smooth, then the SVM with Gaussian kernel can achieve a learning rate of order $m^{-1}$, where $m$ is the number of samples.
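    As a concrete illustration of the setting the abstract describes: with the quadratic loss, the Gaussian-kernel SVM reduces to kernel regularized least squares, and the classifier is the sign of the regularized kernel estimate. The sketch below is not from the paper; it is a minimal NumPy illustration on hypothetical toy data (all function names, the bandwidth `sigma`, and the regularization parameter `lam` are choices made here for the example).

    ```python
    import numpy as np

    def gaussian_kernel(X, Z, sigma=1.0):
        # Pairwise Gaussian (RBF) kernel: K[i, j] = exp(-||x_i - z_j||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))

    def fit_krls(X, y, lam=1e-2, sigma=1.0):
        # Kernel regularized least squares (SVM with quadratic loss):
        # solve (K + lam * m * I) alpha = y
        m = len(X)
        K = gaussian_kernel(X, X, sigma)
        return np.linalg.solve(K + lam * m * np.eye(m), y)

    def predict(X_train, alpha, X_test, sigma=1.0):
        # Plug-in classifier: sign of the estimated regression function
        return np.sign(gaussian_kernel(X_test, X_train, sigma) @ alpha)

    # Hypothetical toy data: two well-separated Gaussian blobs, labels +-1
    rng = np.random.default_rng(0)
    X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
    y = np.concatenate([-np.ones(50), np.ones(50)])
    alpha = fit_krls(X, y)
    acc = (predict(X, alpha, X) == y).mean()
    ```

    The paper's rates concern how the excess misclassification risk of such estimators decays as $m$ grows, given smoothness of the regression function and the noise condition; the sketch only shows the estimator itself, not the rate analysis.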

    Learning rates for classification with Gaussian kernels
    by Shao-Bo Lin, Jinshan Zeng, Xiangyu Chang
    https://arxiv.org/pdf/1702.08701v3.pdf
