About Me
Hi, I’m Harsha – currently an MSc Visual Computing student at Saarland University. I study systems and theory for developing compute- and data-efficient deep learning models.
Topics I am currently interested in [in no particular order]:
- Systems-aware efficient training [or fine-tuning] and inference techniques for deep learning models [I like working with ResNets and Transformers equally :D].
- Model Compression methods (Sparsity, Pruning, Quantization, Knowledge Distillation): exploring their effectiveness and trade-offs.
- Enhancing communication efficiency in distributed and federated learning settings.
- Large-scale models (Transformers): unraveling the secrets of their success and the mysteries of over-parameterization.
- The interplay between interpretability, fairness, privacy, and model compression/efficiency techniques.
Please feel free to go through my projects and publications, and if you’re interested in some music/book/movie recommendations, check out media!
News
September 2024 | Practical on Federated Learning, presented at the Deep Learning Indaba (Dakar, Senegal). Joint work with Andrej Jovanovic and Luca Powell.
June 2024 | Pre-print Cyclic Sparse Training: Is It Enough?, joint work led by Advait Gadhikar and advised by Dr. Rebekka Burkholz, is now out! :D
May 2024 | On The Fairness Impacts of Hardware Selection in Machine Learning, a collaboration between Cohere For AI and the RAISE Lab (University of Virginia), accepted as a poster at ICML 2024 :)
December 2023 | Pre-print On The Fairness Impacts of Hardware Selection in Machine Learning, a collaboration between Cohere For AI and the RAISE Lab (University of Virginia), is out! Advised by Ferdinando Fioretto and Sara Hooker.
July 2023 | Started as a Research Assistant (HiWi) at the Relational Machine Learning Lab (RML), advised by Dr. Rebekka Burkholz, working on topics related to sparsity and lottery tickets.