Theoretically Inclined Machine Learning Seminar
Welcome!
The Theoretically Inclined Machine Learning (TML) Seminar is an online
seminar series organized by Professors Yongyi Mao and Maia Fraser at
uOttawa. Our objective is to provide a forum for machine learning
researchers in Ottawa and beyond to come together, discuss theoretical
advances in machine learning, and exchange insights with the broader
research community.
Each seminar features one speaker giving a talk of about 40 minutes,
followed by questions and discussion.
Speakers are primarily invited from among the authors of selected
papers at recent machine learning conferences. Occasionally we may also
invite speakers to present topics of their own choosing, for example,
work in progress.
Everyone with a theoretical interest in machine learning is welcome to
participate in our seminars! To contact the seminar organizers or to
join our mailing list, please email ymao@uottawa.ca.
Upcoming Talk
Feb 20, 2024, 10AM, Pierre-Luc Bacon, University of Montreal.
Bridging the Gap: Task Design and Data Efficiency for Broader RL Adoption.
Past Talks
- Jan 16, 2024, 10AM, Ruiqi Zhang, UC Berkeley
"Trained Transformers Learn Linear Models in-Context"
- April 24, 2023, 11AM, Max Hopkins, UCSD
"Realizable Learning is All You Need"
- March 20, 2023, 1PM, Andre Wibisono, Yale University
"Sampling with Langevin Algorithms in Continuous and Discrete Times"
- Oct 7, 2022, 10AM, Gábor Lugosi, Pompeu Fabra University, Barcelona
"Generalization bounds via convex analysis"
- Feb 2, 2022, 11AM, Chao Ma, Stanford University
"On Linear Stability of SGD and Input-Smoothness of Neural Networks"
- Dec 1, 2021, 10AM, Yiding Jiang, Carnegie Mellon University
"Assessing Generalization of SGD via Disagreement"
- Nov 17, 2021, 10AM, Hangfeng He, University of Pennsylvania
"Local Elasticity: A Phenomenological Approach Toward Understanding Deep Learning"
- Sept 16, 2021, 10AM, Pratik Chaudhari, University of Pennsylvania
(Note: This seminar is on Thursday)
"Foundations of Small Data"
- Aug 17, 2021, 11AM, Yi Ma, UC Berkeley
"White-Box Deep (Convolution) Networks from First Principles"
- July 6, 2021, 10AM, Ian Gemp, DeepMind
"EigenGame: PCA as a Nash Equilibrium"
- June 15, 2021, 10AM, James Lucas, University of Toronto
"Towards understanding theoretical limitations of meta learners
"
- May 4, 2021, 10AM, Guillaume Rabusseau, University of Montreal
"Learning and planning in partially observable Markov decision processes
with weighted automata and tensor networks"
- April 20, 2021, 10AM, Linjun Zhang, Rutgers University
"How Does Mixup Help with Robustness,
Generalization, and
Calibration?"