LoGU: Long-form Generation with Uncertainty Expressions

This paper studies how to reduce hallucinations when large language models generate long answers containing multiple claims. We propose LoGU (Long-form Generation with Uncertainty), in which models explicitly mark the uncertain parts of their responses. Using newly constructed training data, supervised fine-tuning, and direct preference optimization, we improve factual accuracy while keeping responses detailed, readable, and transparent about knowledge gaps.
