
Omar Rivasplata
(No longer at the)
Department of Computing Science
University of Alberta
(Former) Office: 147 Athabasca Hall
Email: try … or …
Here is my Curriculum Vitae
Last updated: not so long ago
News:
here

Foundations of AI.
Machine Learning.
Statistical Learning Theory.
Probability and Statistics.

I work with John Shawe-Taylor at UCL
and with Csaba Szepesvári at DeepMind.
Actually, these days I work from home.
People I work with at UCL include María Pérez-Ortiz and Benjamin Guedj.
People I work with at DeepMind include Tor Lattimore, Laurent Orseau, Marcus Hutter, Claire Vernade, Ilja Kuzborskij and András György from the Foundations team, and Razvan Pascanu, Amal Rannen-Triki, Soham De and Sam Smith from the Deep Learning team.

In the Fall term of 2016 I joined
Csaba's group at the U of A to work
on Statistical Machine Learning. This is fascinating!
Besides statistical learning I am also interested in other learning frameworks
such as online learning and reinforcement learning,
and of course deep learning, which is quite popular these days.
It looks like optimization is a pervasive theme in machine learning,
though it comes up in such a variety of flavours and colours that it never gets boring.
A bit like the 'least action principle' of Maupertuis, who said that
"everything happens as if some quantity was to be made as small as possible."
But optimization alone doesn't quite do it... to really talk about learning
one has to pay attention to generalization.
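To make that last remark concrete, here is a toy sketch of my own (an illustration, not taken from any of the papers below): a high-degree polynomial can optimize the training error almost to zero, yet the error on fresh data tells a different story.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: noisy observations of a simple underlying function.
x_train = rng.uniform(-1, 1, 15)
y_train = np.sin(np.pi * x_train) + 0.3 * rng.normal(size=15)
x_test = rng.uniform(-1, 1, 200)
y_test = np.sin(np.pi * x_test) + 0.3 * rng.normal(size=200)

def fit_and_errors(degree):
    # Least-squares polynomial fit: pure optimization of the training error.
    coeffs = np.polyfit(x_train, y_train, degree)
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    return train_mse, test_mse

for degree in (3, 12):
    train_mse, test_mse = fit_and_errors(degree)
    print(f"degree {degree:2d}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")
```

Driving the training error down is the easy part; whether the fitted model predicts well on unseen data is precisely the generalization question.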

I spent some time with
Mauricio's
group looking at signal-analysis-related things.
Before that I was working with
Sasha and
Nicole
using geometric functional analysis and probability
for estimating the smallest singular value of a sparse random matrix.
Even before that I worked with
Byron
on reversibility of a Brownian motion with drift.
As an undergrad,
with Loretta
I worked on a fun project about repeated games with incomplete information.

 Tighter risk certificates for (probabilistic) neural networks.
UCL Centre for AI.
Slides
Video
 Statistical Learning Theory: A Hitchhiker's Guide.
NeurIPS 2018 Tutorial. (with J. Shawe-Taylor)
Slides
Video
 M. Pérez-Ortiz, O. Rivasplata, J. Shawe-Taylor, Cs. Szepesvári,
Tighter risk certificates for neural networks.
Submitted (2020).
PDF
 L. Orseau, M. Hutter, O. Rivasplata,
Logarithmic pruning is all you need.
Accepted in NeurIPS (2020).
PDF
 O. Rivasplata, I. Kuzborskij, Cs. Szepesvári, J. Shawe-Taylor,
PAC-Bayes Analysis Beyond the Usual Bounds.
(Upgrade of the 2019 workshop paper with the same title.)
Accepted in NeurIPS (2020).
PDF
 O. Rivasplata, I. Kuzborskij, Cs. Szepesvári, J. Shawe-Taylor,
PAC-Bayes Analysis Beyond the Usual Bounds.
NeurIPS 2019 Workshop on Machine Learning with Guarantees.
PDF
 O. Rivasplata, E. Parrado-Hernández, J. Shawe-Taylor, S. Sun, Cs. Szepesvári,
PAC-Bayes bounds for stable algorithms with instance-dependent priors.
NeurIPS 2018.
PDF
 A.E. Litvak, O. Rivasplata,
Smallest singular value of sparse random matrices.
Studia Math., 212, 3 (2012), 195–218.
PDF
 O. Rivasplata,
Subgaussian random variables: An expository note.
Unpublished note.
PDF
 O. Rivasplata, J. Rychtar, B. Schmuland,
Reversibility for diffusions via quasi-invariance.
Acta Univ. Carolin. Math. Phys., 48, 1 (2007), 3–10.
PDF
 O. Rivasplata, J. Rychtar, C. Sykes,
Evolutionary games in finite populations.
Pro Mathematica, 20, 39/40 (2006), 147–164.
PDF
 O. Rivasplata, B. Schmuland,
Invariant and reversible measures for random walks on Z.
Pro Mathematica, 19, 37/38 (2005), 117–124.
PDF
Probability Links (accessible with high probability)

Peruvian Links


