About me
Ting-En (Tony) Lin is a research scientist at Tongyi Laboratory, Alibaba Group. His research focuses on Natural Language Processing, Conversational AI, and Multimodal Understanding. He publishes papers at and serves on the program committees of leading conferences such as ACL, AAAI, ICLR, and NeurIPS. Tony received his MPhil degree in Computer Science and Technology from Tsinghua University (THU), working with Prof. Hua Xu, and his bachelor's degree in Electrical and Computer Engineering from National Chiao Tung University (NCTU).
News
2025
- Jan. 2025: We are excited to release OpenOmni, a state-of-the-art open-source model that achieves zero-shot omnimodal alignment across languages with self-aware emotional speech synthesis. Explore the code on GitHub, and access the data and model on Hugging Face.
2024
- Oct. 2024: Thrilled to share that our work, DAIF, which leverages diverse AI feedback for efficient LLM alignment, has been accepted for publication in TACL 2025.
- Sep. 2024: Our work, MMEvol: Empowering Multimodal Large Language Models with Evol-Instruct, has released its code, data, and models on Hugging Face. The paper is available on arXiv.
- Aug. 2024: Our two papers, Attention Bucket and Masked Thought, on LLM tool use and planning have been accepted to ACL 2024! @Bangkok
- May 2024: Self-Explanation, a simple yet effective LLM prompting method to improve task-oriented dialogue understanding, is accepted to LREC-COLING 2024! @Torino
- Apr. 2024: We are pleased to present our latest survey, A Survey on Self-Evolution of Large Language Models. This work introduces a conceptual framework for the self-evolution of LLMs, including models like WizardLM, LLaMA, and Phi. Explore the full survey on arXiv.
2023
- Dec. 2023: SpokenWoz, a new bi-modal benchmark for spoken dialogue agents, is accepted to NeurIPS 2023! @New Orleans
- Oct. 2023: UniSA, a unified multimodal generative framework for sentiment analysis, is accepted to ACM MM 2023! @Ottawa
- Jul. 2023: SPECTRA, the first-ever speech-text dialogue pre-training model, is accepted to ACL 2023! @Toronto
- Jun. 2023: Tongyi Tingwu, Alibaba's state-of-the-art AI assistant for work and study, is unveiled as China's first LLM application product to enter public beta, with powerful summarization capabilities powered by our team.
- Apr. 2023: Dial-Start, a method for unsupervised dialogue topic segmentation, is accepted to SIGIR 2023! @Taipei
- Feb. 2023: ECTG, an empathetic response generation method, is accepted to ICASSP 2023! @Rhodes
2022 and earlier
- Oct. 2022: UniMSE, our work unifying multimodal sentiment analysis and emotion recognition in conversation, is accepted to EMNLP 2022! @Abu Dhabi
- May 2022: Duplex Conversation, our multimodal spoken dialogue system that enables human-like interactions, is accepted to KDD 2022! @Washington DC
- Feb. 2021: Two papers (open intent detection and discovery) are accepted to AAAI 2021!
- Feb. 2020: CDAC+, a semi-supervised clustering method for open intent discovery, is presented at AAAI 2020! @New York
- Jul. 2019: DeepUnk (LMCL), a simple yet effective method for open intent detection, is presented at ACL 2019! @Florence
- Dec. 2015: Visited Stanford Research Institute (SRI) for a two-week innovation workshop! @California
Awards
- Outstanding Dissertation and Graduate Award (Top 5%), Tsinghua University, 2020.
- Special Prize Scholarship for Overseas Students (Top 2%), Ministry of Education, 2019.
- Special Prize Scholarship for Overseas Students (Top 2%), Ministry of Education, 2018.
- Winner, Booking.com Hackathon, Taipei, 2017.
- Outstanding Contribution Award (Top 1%), National Chiao Tung University, 2016.
Professional Services
Reviewer / Program Committee Member in:
- Natural Language Processing (NLP): ARR/ACL/EMNLP/NAACL 21 ~ now
- Artificial Intelligence (AI): AAAI 21 ~ now, NeurIPS/ICLR/COLM 24 ~ now
- Data Mining (DM): KDD 23, WSDM 22 ~ now
- Journals and Others: TKDE / TMM / KBS / ICASSP / IJCAI / …
We Are Hiring! (2024.12)
We are actively hiring research scientists and interns. Please feel free to contact me for further information.