eric nalisnick

Postdoctoral Research Associate
Machine Learning Group
University of Cambridge

Room BE4-33
Baker Building
Cambridge, UK CB2 1PZ

first initial [dot] nalisnick [at] eng.cam.ac.uk



I am a postdoctoral researcher at the Cambridge Machine Learning Group. My research interests cover a swath of probabilistic machine learning and statistics. In particular, I am interested in both the theory and application of models that can quantify their uncertainty while being robust and computationally efficient.

I completed my PhD under the supervision of Padhraic Smyth at the University of California, Irvine. Previously, I did research internships at DeepMind, Microsoft, Amazon, and Twitter. My undergraduate studies were in computer science and English literature at Lehigh University (Bethlehem, PA).

PUBLICATIONS


preprints / working papers
conference publications
Disi Ji, Eric Nalisnick, Yu Qian, Richard Scheuermann, and Padhraic Smyth. Bayesian Trees for Automated Cytometry Data Analysis. In Proceedings of Machine Learning for Healthcare (MLHC), Stanford, California, August 16-18, 2018.

Eric Nalisnick and Padhraic Smyth. Learning Priors for Invariance. In Proceedings of the 21st International Conference on Artificial Intelligence and Statistics (AISTATS), Playa Blanca, Canary Islands, April 9-11, 2018.

Eric Nalisnick and Padhraic Smyth. Learning Approximately Objective Priors. In Proceedings of the 33rd Conference on Uncertainty in Artificial Intelligence (UAI), Sydney, Australia, August 11-15, 2017.

Eric Nalisnick and Padhraic Smyth. Stick-Breaking Variational Autoencoders. In Proceedings of the 5th International Conference on Learning Representations (ICLR), Toulon, France, April 24-26, 2017. [Code] [Supplemental Materials]

Eric Nalisnick, Bhaskar Mitra, Nick Craswell, and Rich Caruana. Improving Document Ranking with Dual Word Embeddings. In Proceedings of the 25th World Wide Web Conference (WWW), Short Paper, Montreal, Canada, April 11-15, 2016.

Eric T. Nalisnick and Henry S. Baird. Character-to-Character Sentiment Analysis in Shakespeare's Plays. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (ACL), Short Paper, pages 479-483, Sofia, Bulgaria, August 4-9, 2013.

Eric T. Nalisnick and Henry S. Baird. Extracting Sentiment Networks from Shakespeare's Plays. In Proceedings of the 12th International Conference on Document Analysis and Recognition (ICDAR), pages 758-762, Washington, USA, August 25-28, 2013.

workshop papers
Eric Nalisnick and José Miguel Hernández-Lobato. Automatic Depth Determination for Bayesian ResNets. Bayesian Deep Learning, Workshop at NeurIPS 2018, Montreal, Canada, December 7, 2018.

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Do Deep Generative Models Know What They Don't Know? Bayesian Deep Learning, Workshop at NeurIPS 2018, Montreal, Canada, December 7, 2018.

Eric Nalisnick*, Akihiro Matsukawa*, Yee Whye Teh, Dilan Gorur, and Balaji Lakshminarayanan. Hybrid Models with Deep and Invertible Features. Bayesian Deep Learning, Workshop at NeurIPS 2018, Montreal, Canada, December 7, 2018.

Oleg Rybakov et al. The Effectiveness of a Two-Layer Neural Network for Recommendations. Workshop Track, ICLR 2018, Vancouver, Canada, April 30 - May 3, 2018.

Disi Ji, Eric Nalisnick, and Padhraic Smyth. Mondrian Processes for Flow Cytometry Analysis. Machine Learning for Health, Workshop at NIPS 2017, Long Beach, USA, December 8, 2017.

Eric Nalisnick and Padhraic Smyth. Variational Inference with Stein Mixtures. Advances in Approximate Bayesian Inference, Workshop at NIPS 2017, Long Beach, USA, December 8, 2017.

Eric Nalisnick and Padhraic Smyth. The Amortized Bootstrap. Implicit Models, Workshop at ICML 2017, Sydney, Australia, August 10, 2017. [Oral Presentation]

Eric Nalisnick and Padhraic Smyth. Variational Reference Priors. Workshop Track, ICLR 2017, Toulon, France, April 24-26, 2017.

Eric Nalisnick, Lars Hertel, and Padhraic Smyth. Approximate Inference for Deep Latent Gaussian Mixtures. Bayesian Deep Learning, Workshop at NIPS 2016, Barcelona, Spain, December 19, 2016.

Eric Nalisnick and Padhraic Smyth. Nonparametric Deep Generative Models with Stick-Breaking Priors. Data-Efficient Machine Learning, Workshop at ICML 2016, New York, USA, June 24, 2016. [Oral Presentation]

Jihyun Park, Meg Blume-Kohout, Ralf Krestel, Eric Nalisnick, and Padhraic Smyth. Analyzing NIH Funding Patterns over Time with Statistical Text Analysis. Scholarly Big Data: AI Perspectives, Challenges, and Ideas, Workshop at AAAI 2016, Phoenix, USA, February 12-13, 2016.


TALKS


Structured Shrinkage Priors for Neural Networks. Imperial College Statistics Seminar, November 2, 2018
On Priors for Bayesian Neural Networks. Doctoral Dissertation Defense, May 16, 2018
Approximate Inference for Frequentist Uncertainty Estimation. SoCal Machine Learning Symposium, October 6, 2017
The Amortized Bootstrap. ICML Workshop on Implicit Models, August 10, 2017
Deep Generative Models with Stick-Breaking Priors. UCI AI/ML Seminar, February 27, 2017
Alternative Priors for Deep Generative Models. OpenAI, February 20, 2017
Nonparametric Deep Generative Models with Stick-Breaking Priors. ICML Workshop on Data-Efficient ML, June 24, 2016



TEACHING


uci data science workshops