Research Associate and Adjunct Professor
PI of the Reinforcement Learning and Artificial Intelligence Lab (RLAI)
Department of Computing Science
University of Alberta
I received my Ph.D. in Computing Science from the University of Alberta in 2015, where I was advised by Richard Sutton in the RLAI lab. I then worked as a postdoctoral researcher and Research Scientist in the Department of Computer Science at Indiana University Bloomington.
Cam Linke (MSc)
Han Wang (MSc)
Eugene Chen (MSc)
Andrew Jacobsen (NSERC USRA)
CMPUT 366: Intelligent Systems - Fall 2017
CMPUT 609: Reinforcement Learning - Fall 2017
CSCI-B 659: Reinforcement Learning for Artificial Intelligence - Spring 2017
CSCI-B 659: Reinforcement Learning for Artificial Intelligence - Spring 2016
Reinforcement Learning, Robotics, and Knowledge Representation
My research focuses on the problem of Artificial Intelligence,
specifically how to replicate or simulate human-level intelligence in
real and simulated agents. My research program explores how the
problem of intelligence can be modelled as a reinforcement learning
agent interacting with some unknown environment, learning from a scalar
reward signal rather than explicit feedback. My contributions include
new algorithms for reinforcement learning and large-scale
demonstrations of learning on mobile robots.
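
As a concrete (if toy) illustration of this framing, the sketch below shows the agent-environment loop with tabular Q-learning on a small chain world. It is a minimal, generic example, not code from any of the systems or papers listed here; the environment and all names in it are hypothetical.

# Minimal sketch of the reinforcement learning loop described above:
# an agent interacts with an unknown environment and learns only from
# a scalar reward signal. All names here are illustrative.
import random
from collections import defaultdict

class ChainEnv:
    """Toy environment: walk right along a chain to reach a reward."""
    def __init__(self, length=5):
        self.length = length
        self.state = 0
    def reset(self):
        self.state = 0
        return self.state
    def step(self, action):  # action 0 = left, 1 = right
        move = 1 if action == 1 else -1
        self.state = max(0, min(self.length, self.state + move))
        done = self.state == self.length
        reward = 1.0 if done else 0.0  # scalar reward, no other feedback
        return self.state, reward, done

def q_learning(env, episodes=200, alpha=0.1, gamma=0.9, epsilon=0.1):
    q = defaultdict(float)  # q[(state, action)] -> value estimate
    for _ in range(episodes):
        s, done = env.reset(), False
        while not done:
            if random.random() < epsilon:
                a = random.randrange(2)  # explore
            else:
                a = max((0, 1), key=lambda act: q[(s, act)])  # exploit
            s2, r, done = env.step(a)
            # one-step temporal-difference update toward r + gamma * max_a' Q(s', a')
            target = r + (0.0 if done else gamma * max(q[(s2, 0)], q[(s2, 1)]))
            q[(s, a)] += alpha * (target - q[(s, a)])
            s = s2
    return q

if __name__ == "__main__":
    env = ChainEnv()
    q = q_learning(env)
    print({s: round(max(q[(s, 0)], q[(s, 1)]), 2) for s in range(env.length + 1)})

Run as written, the learned values increase toward the rewarding end of the chain, which is the behaviour the reward-driven formulation above predicts.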
My current CV can be found here.
Modayil, J., White, A., Sutton, R. S. (2014). Multi-timescale Nexting
in a Reinforcement Learning Robot. Adaptive Behavior, 22(2):146--160.
Whiteson, S., Tanner, B., & White, A. (2010). The reinforcement
learning competitions. AI Magazine, 31(2): 81--94.
Tanner, B., & White, A. (2009). RL-Glue: Language-independent software for reinforcement-learning experiments. The Journal of Machine Learning Research, 10: 2133--2136.
Sherstan, C., Bennett, B., Young, K., Ashley, D., White, A., White, M., Sutton, R. S. (2018). Directly Estimating the Variance of the λ-Return Using Temporal-Difference Methods. Conference on Uncertainty in Artificial Intelligence (UAI).
Pan, Y., Zaheer, M., White, A., Patterson, A., White, M. (2018). Organizing experience: a deeper look at replay mechanisms for sample-based planning in continuous state domains. International Joint Conference on Artificial Intelligence (IJCAI).
Pan, Y., White, A., White, M. (2017). Accelerated Gradient Temporal Difference Learning. AAAI Conference on Artificial Intelligence (AAAI).
Sherstan, C., Machado, M., White, A., Pilarski, P. M. (2016). Introspective Agents: Confidence Measures for General Value Functions. Artificial General Intelligence (AGI).
White, A., White, M. (2016). Investigating practical linear temporal difference learning. In International Conference on Autonomous Agents and MultiAgent Systems (AAMAS). [CODE]
White, M., White, A. (2016). Adapting the trace parameter in reinforcement learning. In International Conference on Autonomous Agents and MultiAgent Systems (AAMAS).
White, A., Modayil, J., & Sutton, R. S. (2012). Scaling
life-long off-policy learning. In the IEEE International Conference on Development and Learning and
Epigenetic Robotics, 1--6.
[paper of distinction award]
Modayil, J., White, A., Pilarski, P. M., & Sutton, R. S. (2012). Acquiring a broad
range of empirical knowledge in real time by temporal-difference
learning. In the IEEE International Conference on Systems,
Man, and Cybernetics, 1903--1910.
Modayil, J., White, A., Sutton, R. S. (2012). Multi-timescale Nexting
in a Reinforcement Learning Robot. Presented at the 2012 International
Conference on Adaptive Behaviour, Odense, Denmark. To appear in: SAB
12, LNAI 7426, pp. 299--309, T. Ziemke, C. Balkenius, and J. Hallam,
Eds., Springer Heidelberg.
Sutton, R. S., Modayil, J., Delp, M., Degris, T., Pilarski, P. M.,
White, A., & Precup, D. (2011). Horde: A
scalable real-time architecture for learning knowledge from
unsupervised sensorimotor interaction. In The 10th
International Conference on Autonomous Agents and Multiagent
Systems: 2, 761--768.
White, M., & White, A. (2010). Interval
estimation for reinforcement-learning algorithms in continuous-state
domains. In Advances in Neural Information Processing Systems, 2433--2441.
Sturtevant, N. R., & White, A. M. (2007). Feature
construction for reinforcement learning in hearts. In
Computers and Games. Springer Berlin Heidelberg, 122--134.
Other published works
Pan, Y., White, A., White, M. (2017). Accelerated Gradient Temporal Difference Learning. European Workshop on Reinforcement Learning (EWRL).
Schlegel, M., White, A., White, M. (2017). Stable predictive representations with general value functions for continual learning. Continual Learning and Deep Networks Workshop at the Neural Information Processing Systems Conference.
White, A., & Sutton, R. S. (2014). GQ(λ) Quick Reference Guide.
White, A., Modayil, J., & Sutton, R. S. (2014). Surprise and
curiosity for big data robotics. In Workshops at the
Twenty-Eighth AAAI Conference on Artificial Intelligence.
Modayil, J., White, A., Pilarski, P. M., Sutton, R. S. (2012). Acquiring
Diverse Predictive Knowledge in Real Time by Temporal-difference
Learning. International Workshop on Evolutionary and
Reinforcement Learning for Autonomous Robot Systems, Montpellier, France.
[Best paper award]
Modayil, J., Pilarski, P., White, A., Degris, T., & Sutton,
R. (2010). Off-policy knowledge maintenance for robots. In Proceedings
of Robotics Science and Systems Workshop (Towards Closing the Loop:
Active Learning for Robotics): 55.
White, A. (2015). Developing a predictive approach to knowledge. Doctoral thesis, University of Alberta.
White, A. (2006). A standard system for benchmarking in reinforcement
learning. Master's thesis, University of Alberta.
See my Google Scholar page for a list of my
publications that Google knows about.
Office: 307 Athabasca Hall
Department of Computing Science
University of Alberta