About Me:

UPDATES

27-04-2026 I have joined Microsoft’s M365 Research Efficient AI team as a Research SDE.

19-12-2025 Our work on Kascade, a practical sparse attention technique for LLMs, is out: Arxiv Code.

14-12-2024 I will be presenting a tutorial on distributed GNN training at CODS-COMAD’24 on 18th December. Tutorial Website.

01-07-2024 I have joined Microsoft Research India as a Research Fellow.

04-11-2023 My first paper was accepted at ICDM’23: Arxiv IEEE.

I am a young researcher and engineer with a keen interest in developing ML systems. Recently, I joined the Microsoft M365 Research team, where I continue my work on making LLM inference efficient. Before this, I was a Research Fellow at Microsoft Research India, where I worked on developing Kascade, a practical sparse attention technique.

My previous work was on distributed training of graph neural networks, developing partitioning schemes for graph datasets that yield better accuracy under distributed training. Class imbalance is a prevalent problem in graph datasets and must be handled to achieve good accuracy. This work investigated loss functions and sampling techniques that mitigate class imbalance and improve macro-F1 scores. We achieved some speedup in training and improved both macro- and micro-F1 scores. The work was accepted at ICDM’23.
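To illustrate the kind of technique involved (a minimal sketch with hypothetical helper names, not the paper's actual code): one common way to mitigate class imbalance is to weight each class inversely to its frequency in the cross-entropy loss, so rare classes contribute more to the gradient, which tends to lift macro-F1.

```python
import numpy as np

def inverse_frequency_weights(labels, num_classes):
    """Weight each class inversely to its frequency, normalized so the
    average weight over examples is 1. Rare classes get larger weights."""
    counts = np.bincount(labels, minlength=num_classes).astype(float)
    return counts.sum() / (num_classes * np.maximum(counts, 1.0))

def weighted_cross_entropy(probs, labels, weights):
    """Mean over examples of -w[y] * log p(y), with per-class weights."""
    picked = probs[np.arange(len(labels)), labels]
    return float((-weights[labels] * np.log(picked)).mean())

# Imbalanced toy labels: class 0 dominates, class 2 is rare.
labels = np.array([0, 0, 0, 0, 0, 0, 1, 1, 2])
w = inverse_frequency_weights(labels, num_classes=3)
# w == [0.5, 1.5, 3.0]: the rare class 2 is weighted 6x the majority class.
```

Sampling-based alternatives (e.g. oversampling minority-class nodes when building training mini-batches) attack the same problem from the data side rather than the loss side.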

Before joining MSR, I worked as an SDE at Publicis Sapient. My work there focused on building efficient and accessible frontends. I won an award for Learning Mindset at our annual client townhall.