Generalization bounds via convex analysis
Speaker: Gabor Lugosi, Pompeu Fabra University, Barcelona
Time: Friday, Oct 7, 2022, 10:00AM - 11:00AM, Eastern Time
Zoom Link: contact tml.online.seminars@gmail.com
Abstract:
Recent results bound the generalization error of supervised learning
algorithms in terms of the mutual information between their input and
the output.
In this talk we present a framework that generalizes these results beyond
the standard choice of Shannon's mutual information. We show that it is
indeed possible to replace the mutual information by any strongly convex
function of the joint input-output distribution, combined with a bound
on an appropriately chosen norm capturing the geometry of the dependence
measure. This
allows us to derive a variety of generalization bounds that either are
new or
strengthen previously known ones. This talk is based on joint work
with Gergely Neu.
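For context, the mutual-information bound the abstract refers to is, in its standard form (due to Xu and Raginsky, 2017; exact constants vary by source), a bound on the expected generalization gap for a sigma-subgaussian loss:

```latex
% Expected generalization gap of a learning algorithm that maps a
% training sample S of n i.i.d. points to an output hypothesis W,
% assuming the loss is \sigma-subgaussian:
\left| \mathbb{E}\!\left[ \mathrm{gen}(S, W) \right] \right|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)},
% where I(S;W) denotes the Shannon mutual information between the
% input sample S and the algorithm's output W.
```

The framework described in the talk replaces $I(S;W)$ by other strongly convex functions of the joint distribution of $(S,W)$, paired with a matching norm.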
Speaker's Bio:
Gabor Lugosi received his Ph.D. from the Hungarian Academy of Sciences in
1991.
He is an ICREA research professor at the Department of Economics, Pompeu
Fabra University, Barcelona. His research interests include the theory of
machine learning,
combinatorial statistics,
random structures, and information theory.