Algorithmic Stability and Uniform Generalization
Alabdulmohsin, Ibrahim M.
Neural Information Processing Systems Conference - 2015 via Local Bibsonomy
Keywords: dblp


Summary by NIPS Conference Reviews

The paper seeks to establish a connection between algorithmic stability and generalization performance. Several notions of algorithmic stability have been proposed and linked to the generalization performance of learning algorithms [conf/uai/KutinN02] [journals/neco/KearnsR99], and stability has also been shown to be crucial for learnability [journals/jmlr/Shalev-ShwartzSSS10].

[PoggioETAL:04] proved that for bounded loss functions, the generalization of ERM is equivalent to probabilistic leave-one-out stability of the learning algorithm. [journals/jmlr/Shalev-ShwartzSSS10] then showed that a problem is learnable in Vapnik's general setting of learning iff there exists an asymptotically stable ERM procedure.
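To make the ERM statement concrete, here is a minimal sketch of the quantities involved (notation is assumed here, not quoted from the papers): given a sample $S = (Z_1, \dots, Z_m)$ drawn i.i.d. from a distribution $\mathcal{D}$ and a bounded loss $\ell(h, z) \in [0, 1]$, empirical risk minimization returns

$$
\hat{R}_S(h) = \frac{1}{m} \sum_{i=1}^{m} \ell(h, Z_i), \qquad h_S = \arg\min_{h \in \mathcal{H}} \hat{R}_S(h),
$$

and probabilistic leave-one-out stability asks, roughly, that $\ell(h_{S^{\setminus i}}, Z_i) - \ell(h_S, Z_i) \to 0$ in probability as $m \to \infty$, where $S^{\setminus i}$ denotes the sample with the $i$-th example removed.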

This paper first establishes that, in Vapnik's general setting of learning, a probabilistic notion of stability is necessary and sufficient for training losses to converge to test losses uniformly over all distributions. The paper then discusses how this notion of stability can be interpreted to yield results in terms of the capacity of the function class or the size of the population.
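As a rough formalization of this equivalence (again, notation assumed here rather than taken from the paper): let $H$ be the hypothesis learned from the sample $S$ and $\hat{Z}$ a training example drawn uniformly from $S$. Uniform generalization requires

$$
\sup_{\mathcal{D},\, \ell} \Big|\, \mathbb{E}\big[ \hat{R}_S(H) \big] - \mathbb{E}\big[ R(H) \big] \,\Big| \to 0, \qquad R(h) = \mathbb{E}_{Z \sim \mathcal{D}}\big[ \ell(h, Z) \big],
$$

with the supremum taken over all distributions and bounded losses, while the probabilistic stability notion amounts to asymptotic independence of $H$ from a single training example, e.g. a vanishing total variation distance $\big\| \mathbb{P}(H, \hat{Z}) - \mathbb{P}(H)\, \mathbb{P}(\hat{Z}) \big\|_{TV} \to 0$; the theorem states that each condition implies the other.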
