
Jinlan Fu

  • Research Fellow @ NUS-CS
  • jinlanjonna@gmail.com
  • NUS Innovation 4.0 Rm 406
  • NLP/ML/Multimodal

About Me

I am a postdoctoral researcher at the National University of Singapore, working with Prof. See-Kiong Ng. I earned my Ph.D. in Computer Science from Fudan University (Sep 2016 – Jul 2021) under the supervision of Prof. Xuanjing Huang. My research interests include:

  • Trustworthiness: How can we make MLLMs/LLMs more reliable and factual? Key aspects include mitigating hallucinations, biases, and unsafe or malicious behavior, ensuring alignment with human values and expectations, and enhancing interpretability.

  • Multimodal Foundation Models: Encompassing both generation and understanding of text, images, video, and audio.

My research has been published in top-tier NLP, ML, and CV conferences, including ICLR, CVPR, ACL, EMNLP, NAACL, SIGIR, WWW, WSDM, AAAI, and IJCAI. I have received the ACL Best Demo Paper Award and the ACL Outstanding Demo Paper Award. I regularly serve as an Area Chair for NLP conferences (ACL, EMNLP, NAACL, etc.) and as a Senior Program Committee member for AI conferences. My Ph.D. thesis received the Excellent Doctoral Thesis Award from the Chinese Information Processing Society (CIPS).


Collaboration Opportunities

I am constantly seeking collaborators on the above topics, both locally and remotely. For highly self-motivated and promising students, I can provide GPU support. GPU support is available through academic internships at Chinese companies (on-site), academic internships at Singaporean universities, CSC joint Ph.D. programs (on-site), and other potential online collaboration arrangements. If you are interested in my research topics, please email me your resume and a brief description of your current research.


Selected Papers

  1. CHiP: Cross-modal Hierarchical Direct Preference Optimization for Multimodal LLMs, Jinlan Fu, Shenzhen Huangfu, Hao Fei, Xiaoyu Shen, Bryan Hooi, Xipeng Qiu, See-Kiong Ng, in ICLR, 2025.
  2. Safe Inputs but Unsafe Output: Benchmarking Cross-modality Safety Alignment of Large Vision-Language Models, Siyin Wang, Xingsong Ye, Qinyuan Cheng, Junwen Duan, Shimin Li, Jinlan Fu*, Xipeng Qiu*, Xuanjing Huang, in NAACL, 2025.
  3. Multi-layer Visual Feature Fusion in Multimodal LLMs: Methods, Analysis, and Best Practices, Junyan Lin, Haoran Chen, Yue Fan, Yingqi Fan, Xin Jin, Hui Su, Jinlan Fu*, Xiaoyu Shen*, in CVPR, 2025.
  4. GPTScore: Evaluate as You Desire, Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, Pengfei Liu, in NAACL, 2024. (Highly Cited: 475)
  5. Polyglot Prompt: Multilingual Multitask Prompt Training, Jinlan Fu, See-Kiong Ng, Pengfei Liu, in EMNLP, 2023.

Recent News

  16 May 2025: Three papers (2 Main, 1 Findings) have been accepted to ACL 2025: one focuses on Embodied Task Planning (D2PO), another explores the Robustness of Large Video Models (LVMs), and the third discusses LLMs as Effective Streaming Processes. Congratulations to Siyin Wang, Jiafeng Liang, and all my collaborators!

  01 May 2025: Two papers have been accepted to ICML 2025: one focuses on Video Hallucination (VistaDPO), and the other explores Jailbreaking LLMs. Congratulations to my collaborators.

  03 Mar 2025: I will be attending ICLR (Singapore, April 24–28, 2025) and CVPR (Nashville, TN, USA, June 11–15, 2025).

  27 Feb 2025: Two papers have been accepted to CVPR 2025: one focuses on a Safety Preference Alignment Dataset for MLLMs, and the other explores Visual Feature Fusion in MLLMs. Congratulations to Junyan Lin (co-supervised with Professor Xiaoyu Shen).

  16 Feb 2025: Serving as an Area Chair for ACL 2025 (ARR February Cycle).

  07 Feb 2025: One paper about Omnipotent Dialogue Systems was accepted to TASLP (journal). Congratulations to Mingtao Yang.

  23 Jan 2025: One paper about Cross-modal Safety Alignment was accepted to NAACL 2025. Congratulations to Siyin Wang (co-supervised with Professor Xipeng Qiu).

  22 Jan 2025: One paper about MLLM Alignment via DPO was accepted to ICLR 2025.