Foundations of AI.
Machine Learning.
Statistical Learning Theory.
Probability and Statistics.

I work with John Shawe-Taylor at UCL
and with Csaba Szepesvári at DeepMind.
Actually, these days I work from home.
People I work with at UCL include María Pérez-Ortiz and Benjamin Guedj.
People I work with at DeepMind include Laurent Orseau, Marcus Hutter, Claire Vernade, Ilja Kuzborskij, András György and Tor Lattimore from my own team (Foundations); and Razvan Pascanu, Amal Rannen-Triki, Soham De and Sam Smith from friend teams.

In the Fall term of 2016 I joined the Department of Computer Science at the U of A,
to work in Statistical Machine Learning. This is fascinating!
Besides statistical learning I am also interested in other learning frameworks
such as online learning and reinforcement learning,
and of course deep learning, which is quite popular these days.
It seems that optimization is a pervasive theme in machine learning,
though it comes up in such a variety of flavours and colours that it isn't boring.
It reminds me of the
least action principle
of
Maupertuis,
which says that
"everything happens as if some quantity was to be made as small as possible."
(This principle has led the optimists to believe that we live in
the best possible world.)
But optimization alone doesn't quite do it for machine learning...
to really be talking about learning, one has to pay attention to generalization!
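To make the contrast concrete, here is a toy sketch of my own (not tied to any paper listed below): a flexible model can drive the training loss to essentially zero, while the test error tells a rather different story.

```python
# Toy illustration: solving the optimization problem perfectly
# (zero training loss) does not by itself give generalization.
import numpy as np

rng = np.random.default_rng(0)
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(8)
x_test = np.linspace(0.0, 1.0, 200)
y_test = np.sin(2 * np.pi * x_test)

# A degree-7 polynomial has 8 coefficients, so it interpolates the
# 8 noisy training points: the optimizer "wins" completely.
coeffs = np.polyfit(x_train, y_train, deg=7)

train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
test_err = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)

print(train_err)  # essentially zero: training loss is minimized
print(test_err)   # noticeably larger: generalization is a separate question
```

The fitted polynomial has memorized the noise in the training sample, which is exactly the gap that generalization bounds are meant to quantify.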

I spent some time with
Mauricio's
group looking at topics related to signal analysis.
Before that I was working with
Sasha and
Nicole
using geometric functional analysis and probability
for estimating the smallest singular value of a sparse random matrix.
Even before that I worked with
Byron
on reversibility of a Brownian motion with drift.
As an undergrad,
I worked with Loretta
on a fun project about repeated games with incomplete information.
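As a small aside, the central object of the random matrix project is easy to look at numerically. This is a quick sketch of my own (an illustration of the object of study, not of the proofs), using a sparse Gaussian model for the matrix.

```python
# Numerical peek at the smallest singular value of a sparse random matrix.
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 0.1  # matrix dimension; each entry is kept with probability p

# Sparse model: i.i.d. Gaussian entries, each independently zeroed out.
mask = rng.random((n, n)) < p
A = rng.standard_normal((n, n)) * mask

# np.linalg.svd returns singular values sorted in descending order.
singular_values = np.linalg.svd(A, compute_uv=False)
s_min = singular_values[-1]
print(s_min)  # small, but typically bounded away from zero
```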

Talks

 Tighter risk certificates for (probabilistic) neural networks.
UCL Centre for AI.
Slides
Video
 Statistical Learning Theory: A Hitchhiker's Guide.
NeurIPS 2018 Tutorial. (with J. Shawe-Taylor)
Slides
Video
Preprints

 M. Haddouche, B. Guedj, O. Rivasplata, J. Shawe-Taylor,
Upper and Lower Bounds on the Performance of Kernel PCA.
Submitted (2020).
PDF
 M. Pérez-Ortiz, O. Rivasplata, J. Shawe-Taylor, Cs. Szepesvári,
Tighter risk certificates for neural networks.
Submitted (2020).
PDF
 M. Haddouche, B. Guedj, O. Rivasplata, J. Shawe-Taylor,
PAC-Bayes unleashed: Generalisation bounds with unbounded losses.
(2020).
PDF
Conference & Journal Papers

 L. Orseau, M. Hutter, O. Rivasplata,
Logarithmic pruning is all you need.
NeurIPS 2020.
PDF
 O. Rivasplata, I. Kuzborskij, Cs. Szepesvári, J. Shawe-Taylor,
PAC-Bayes Analysis Beyond the Usual Bounds.
(Upgrade of the 2019 workshop paper with the same title.)
NeurIPS 2020.
PDF
 O. Rivasplata, E. Parrado-Hernández, J. Shawe-Taylor, S. Sun, Cs. Szepesvári,
PAC-Bayes bounds for stable algorithms with instance-dependent priors.
NeurIPS 2018.
PDF
 A.E. Litvak, O. Rivasplata,
Smallest singular value of sparse random matrices.
Studia Math., 212, 3 (2012), 195-218.
PDF
 O. Rivasplata, J. Rychtar, B. Schmuland,
Reversibility for diffusions via quasi-invariance.
Acta Univ. Carolin. Math. Phys., 48, 1 (2007), 3-10.
PDF
 O. Rivasplata, J. Rychtar, C. Sykes,
Evolutionary games in finite populations.
Pro Mathematica, 20, 39/40 (2006), 147-164.
PDF
 O. Rivasplata, B. Schmuland,
Invariant and reversible measures for random walks on Z.
Pro Mathematica, 19, 37/38 (2005), 117-124.
PDF
Workshop Papers

 M. Pérez-Ortiz, O. Rivasplata, J. Shawe-Taylor, Cs. Szepesvári,
Towards self-certified learning: Probabilistic neural networks trained by PAC-Bayes with Backprop.
NeurIPS 2020 Workshop - Beyond Backpropagation.
PDF
 O. Rivasplata, I. Kuzborskij, Cs. Szepesvári, J. Shawe-Taylor,
PAC-Bayes Analysis Beyond the Usual Bounds.
NeurIPS 2019 Workshop - Machine Learning with Guarantees.
PDF
Notes

 O. Rivasplata,
A note on a confidence bound of Kuzborskij and Szepesvári.
(2021).
PDF
 O. Rivasplata,
Subgaussian random variables: An expository note.
(2012).
PDF
Probability Links (accessible with high probability)

Peruvian Links


