I am a fourth-year PhD student in Carnegie Mellon University's Machine Learning Department, advised by Ameet Talwalkar. My research focuses on the theory and applications of explainable machine learning, particularly on understanding powerful black-box models such as neural networks and random forests.

Recent Updates

  • November 2020: A paper on the connections between interpretable machine learning and learning theory, through the lens of local approximation explanation fidelity, is under review.
  • October 2020: ExpO will be presented at NeurIPS 2020.
  • July 2020: ELDR was presented at ICML 2020.
  • February 2020: ELDR, a method for identifying the key differences between groups of points in low dimensional representations, is under review.
  • February 2020: An improved version of ExpO, which adds a user study demonstrating its practical usefulness, is under review.
  • June 2019: ExpO will be presented as an oral at the Workshop on Human in the Loop Learning and as a poster at the Midwest Machine Learning Symposium.
  • May 2019: An improved version of ExpO, which explores a computationally efficient version of the regularizer and adds more experiments, is under review.
  • January 2019: ExpO, a paper on regularizing black-box models to be more amenable to local explanation, is under review.
  • December 2018: Published MAPLE, a paper on producing better local explanations for black-box models using tree ensembles, at NeurIPS 2018.