Trang Vu

Modus omnibus in rebus (moderation in all things)


257 Woodside Building

20 Exhibition Walk

Monash University, Australia

E: trang.vu1@monash.edu

Trang Vu is a Lecturer in the Department of Data Science and Artificial Intelligence, Faculty of Information Technology, Monash University. Her research interests lie at the intersection of natural language processing and machine learning. Her current research focuses on efficient and trustworthy methods that make NLP technologies safe and accessible.

Her current research interests include:

  • Safe and trustworthy NLP: alignment and hallucination mitigation for LLMs
  • Culture-aware machine translation
  • ML methods for efficient NLP, such as active learning, transfer learning, and semi-supervised learning

news

Nov 13, 2025 Several papers accepted in 2025: two at ACL (incl. Findings), two at EMNLP (incl. Findings), and papers at ICLR, ICML, INTERSPEECH, IWSLT, CoNLL, CoLM, PKDD, and COLING.
Nov 25, 2024 Excited to present our tutorial on Continual Learning for LLMs at AJCAI 2024. The tutorial has also been accepted for presentation at EMNLP 2025. See you in Suzhou, China!
Sep 21, 2024 Two papers accepted to EMNLP 2024! Congratulations to Minghan and Minghao.
Aug 13, 2024 Invited talk at the Lee Lab, Ontario Tech University, on “Multi-Domain Multilingual NMT”.

selected publications

  1. Discourse Graph Guided Document Translation with Large Language Models
    Viet Thanh Pham, Minghan Wang, Hao-Han Liao, and Thuy-Trang Vu
    In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 2026
  2. CONGRAD: Conflicting Gradient Filtering for Multilingual Preference Alignment
    Jiangnan Li, Thuy-Trang Vu, Christian Herold, Amirhossein Tebbifakhr, Shahram Khadivi, and 1 more author
    In Proceedings of the 19th Conference of the European Chapter of the Association for Computational Linguistics (Volume 1: Long Papers), 2026
  3. Discrete Minds in a Continuous World: Do Language Models Know Time Passes?
    Minghan Wang, Ye Bai, Thuy-Trang Vu, Ehsan Shareghi, and Gholamreza Haffari
    In Findings of the Association for Computational Linguistics: EMNLP 2025, 2025
  4. The Best of Both Worlds: Bridging Quality and Diversity in Data Selection with Bipartite Graph
    Minghao Wu, Thuy-Trang Vu, Lizhen Qu, and Gholamreza Haffari
    In ICML, 2025
  5. Mixture-of-Skills: Learning to Optimize Data Usage for Fine-Tuning Large Language Models
    Minghao Wu, Thuy-Trang Vu, Lizhen Qu, and Reza Haf
    In Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing, 2024
  6. Koala: An Index for Quantifying Overlaps with Pre-training Corpora
    Thuy-Trang Vu, Xuanli He, Gholamreza Haffari, and Ehsan Shareghi
    In EMNLP Demos, 2023
  7. Systematic Assessment of Factual Knowledge in Large Language Models
    Linhao Luo, Trang Vu, Dinh Phung, and Reza Haf
    In Findings of the Association for Computational Linguistics: EMNLP 2023, 2023