LoGU: Long-form Generation with Uncertainty Expressions
This paper studies how to reduce hallucinations when large language models generate long-form answers containing multiple claims. We propose Long-form Generation with Uncertainty (LoGU), in which models explicitly mark the uncertain parts of their responses. Using newly constructed training data with supervised fine-tuning and direct preference optimization, we improve factual accuracy while keeping responses detailed, readable, and explicit about knowledge gaps.