Research

I am interested in challenging the “scale-first” paradigm in AI. My research focuses on three interconnected areas:

  • Neuro-symbolic methods: Combining neural networks with logical constraints to build systems that are accurate, consistent, and explainable, especially when data is limited.

  • Evaluation beyond accuracy: Developing rigorous methods to reveal systematic failures in AI systems, particularly biases and robustness issues that standard metrics miss.

  • Low-resource languages and domains: Creating data and software for languages, phenomena, and domains that are not adequately represented in large-scale AI systems.

I work with mental health applications, multilingual systems, and other domains where constraints (e.g., limited data, restricted API access, high stakes, or infrastructure limitations) require rethinking standard approaches.


Publications

Here are some of my recent papers. Google Scholar has the most up-to-date list of publications.

  • Found in Translation: Measuring Multilingual LLM Consistency as Simple as Translate Then Evaluate. Ashim Gupta, Maitrey Mehta, Zhichao Xu and Vivek Srikumar. Findings of the Association for Computational Linguistics: AACL-IJCNLP, 2025.
  • Distillation versus Contrastive Learning: How to Train Your Rerankers. Zhichao Xu, Zhiqi Huang, Shengyao Zhuang and Vivek Srikumar. Findings of the Association for Computational Linguistics: AACL-IJCNLP, 2025.
  • A Framework for Automation in Psychotherapy. Zac Imel, Torrey Creed, Brent Kious, Tim Althoff, Dana Atzil-Slonim and Vivek Srikumar. Current Directions in Psychological Science, 2025.
  • LLM-Symbolic Integration for Robust Temporal Tabular Reasoning. Atharv Kulkarni, Kushagra Dixit, Vivek Srikumar, Dan Roth and Vivek Gupta. Findings of the Association for Computational Linguistics: ACL, 2025.
  • Understanding the Logic of Direct Preference Alignment through Logic. Kyle Richardson, Vivek Srikumar and Ashish Sabharwal. Forty-Second International Conference on Machine Learning, 2025.

All papers →


Code for recent papers

The code for my recent papers is available in the Utah NLP GitHub repository. Please follow the individual GitHub projects and let us know if something doesn't work.