Andrew Bennett

Machine Learning Researcher

Morgan Stanley

Biography

I am a machine learning researcher in the Machine Learning Research group at Morgan Stanley. My research interests span machine learning broadly, with particular focus on causal machine learning, data-driven decision making, reinforcement learning, econometrics, and probabilistic machine learning. Before joining Morgan Stanley, I was a PhD student in the Computer Science department at Cornell University, advised by Nathan Kallus.

Interests

  • Causal Machine Learning
  • Data-driven Decision Making
  • Reinforcement Learning
  • Econometrics
  • Probabilistic Machine Learning

Education

  • Doctor of Philosophy in Computer Science, 2023

    Cornell University

  • Master of Science in Computer Science, 2016

    The University of Melbourne

  • Bachelor of Science, 2013

    The University of Melbourne

Publications

(2024). Efficient and Sharp Off-Policy Evaluation in Robust Markov Decision Processes. Accepted at NeurIPS.

(2024). VQ-TR: Vector Quantized Attention for Time Series Forecasting. International Conference on Learning Representations (ICLR).

(2024). Low-rank MDPs with Continuous Action Spaces. International Conference on Artificial Intelligence and Statistics (AISTATS).

(2024). Inference on strongly identified functionals of weakly identified functions. Major revision in JRSS:B.

(2023). Future-Dependent Value-Based Off-Policy Evaluation in POMDPs. Advances in Neural Information Processing Systems (NeurIPS).

(2023). Source Condition Double Robust Inference on Functionals of Inverse Problems. arXiv preprint.

(2023). Minimax Instrumental Variable Regression and L2 Convergence Guarantees without Identification or Closedness. Conference on Learning Theory (COLT).

(2023). The Variational Method of Moments. Journal of the Royal Statistical Society Series B: Statistical Methodology (JRSS:B).

(2023). Provable Safe Reinforcement Learning with Binary Feedback. International Conference on Artificial Intelligence and Statistics (AISTATS).

(2021). Have you tried Neural Topic Models? Comparative Analysis of Neural and Non-Neural Topic Models with Application to COVID-19 Twitter Data. The 3rd Workshop on Data Science for Social Good (DSSG-21).

(2021). Off-policy Evaluation in Infinite-Horizon Reinforcement Learning with Latent Confounders. International Conference on Artificial Intelligence and Statistics (AISTATS).

(2020). Efficient Policy Learning from Surrogate-Loss Classification Reductions. International Conference on Machine Learning (ICML).

(2019). Deep Generalized Method of Moments for Instrumental Variable Analysis. Advances in Neural Information Processing Systems (NeurIPS).

(2019). Policy Evaluation with Latent Confounders via Optimal Balance. Advances in Neural Information Processing Systems (NeurIPS).

(2018). Mapping Instructions to Actions in 3D Environments with Visual Goal Prediction. Conference on Empirical Methods in Natural Language Processing (EMNLP).

(2018). Preferred answer selection in stack overflow: Better text representations... and metadata, metadata, metadata. The 4th Workshop on Noisy User-generated Text (W-NUT), at EMNLP.

(2018). Following High-level Navigation Instructions on a Simulated Quadcopter with Imitation Learning. Robotics: Science and Systems (RSS).

(2018). Detecting Misflagged Duplicate Questions in Community Question-Answering Archives. The International AAAI Conference on Web and Social Media (ICWSM).

(2018). CHALET: Cornell House Agent Learning Environment. Technical Report.

(2016). LexSemTM: A Semantic Dataset Based on All-words Unsupervised Sense Distribution Learning. The Annual Meeting of the Association for Computational Linguistics (ACL).

(2016). Unsupervised All-words Sense Distribution Learning. Master's Thesis.

Contact