Aneesh Muppidi

I am a Rhodes Scholar and a Harvard undergraduate in the AB/SM program. At Harvard's Kempner Institute, I am a KURE fellow advised by Prof. Samuel Gershman. At the Harvard Computational Robotics Lab, I am advised by Prof. Hank Yang. At MIT, I am advised by Prof. Ila Fiete. I was the President of the Harvard Computational Neuroscience Undergraduate Society and Harvard Dharma. I write for the Harvard Crimson, and I also enjoy film photography.

How can agents learn in unknown worlds? (read my SOP).

Email  /  Scholar  /  Follow on X  /  LinkedIn  /  Github

profile photo

Recent News

  • I received the US Rhodes Scholarship!
  • My first-author work was accepted to the NeurIPS main conference (poster) and to the RLC (spotlight), RSS (spotlight), and TTIC workshops! Check out the website and code.
  • Our preprint on particle filters for continual DL/RL is out!

Research

Permutation Invariant Learning with High-Dimensional Particle Filters
Akhilan Boopathy*, Aneesh Muppidi*, Peggy Yang, Abhiram Iyer, William Yue, Ila Fiete
* equal contribution
arXiv, 2024
project page / code / arXiv

What is the optimal order of training data? Particle filters can be invariant to the permutation of training data, mitigating plasticity loss and catastrophic forgetting.

Fast TRAC: A Parameter-free Optimizer for Lifelong Reinforcement Learning
Aneesh Muppidi, Zhiyu Zhang, Hank Yang
NeurIPS, 2024
project page / code / arXiv

Mitigate plasticity loss, accelerate forward transfer, and avoid policy collapse with just one line of code.

Resampling-free Particle Filters in High-dimensions
Akhilan Boopathy, Aneesh Muppidi, Peggy Yang, Abhiram Iyer, William Yue, Ila Fiete
ICRA, 2024
arXiv

Resampling-free particle filters for pose estimation.

Speech Emotion Recognition using Quaternion Convolutional Neural Networks
Aneesh Muppidi, Martin Radfar
ICASSP, 2021
arXiv

Quaternion CNNs outperform state-of-the-art speech emotion recognition (SER) models.

RL Projects

Let's Learn Agency: Learning Emergent Agent and Non-Agent Trajectory Representations
MIT 6.8200 with Pulkit Agrawal
Project Report, Video Presentation

Generating Suboptimal Expert Demonstrations with Large Language Models
MIT 6.4212 with Russ Tedrake, project advised by Lirui Wang
Project Report, Video Presentation

Rapid Learning Mechanisms and Neural Representations in Reinforcement Learning
Harvard PSY 2350R with Sam Gershman, project advised by Jay Henning
Project Report, Code, Research Notebook

Diffusion Policy for Classical Control Problems
Harvard ES158 with Heng Yang
Project Report, Code, Slides

Visualizing Collaborative Multi-Agent Reinforcement Learning
Harvard CS271 with Johanna Beyer and Hanspeter Pfister
Project Report, Slides

A big thanks to the kind Jon Barron.