All Roads Lead to Rome: Graph-Based Confidence Estimation for Large Language Model Reasoning

How can we be confident that large language models are confident for the right reasons? Our EMNLP 2025 paper introduces training-free, graph-based confidence estimation for reasoning tasks: we model LLM thought paths as directed graphs and use centrality and convergence signals to improve reliability, interpretability, and downstream performance.
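To make the idea concrete, here is a minimal sketch (not the paper's exact method) of the graph-based intuition: sample several reasoning chains, collapse identical steps into shared nodes of a directed graph, and score each candidate answer by how much path mass converges on it. The weighted in-degree used below is a stand-in for the centrality measures studied in the paper; the example chains are invented for illustration.

```python
from collections import defaultdict

def build_reasoning_graph(chains):
    """Aggregate sampled reasoning chains into edge counts of a directed graph.

    Each chain is a list of step strings ending in a final answer.
    Identical steps across chains collapse into one node, so paths
    that converge on shared intermediate steps reinforce each other.
    """
    edges = defaultdict(int)  # (src, dst) -> traversal count
    for chain in chains:
        for src, dst in zip(chain, chain[1:]):
            edges[(src, dst)] += 1
    return edges

def answer_confidence(chains):
    """Score each candidate answer by the path mass converging on it.

    Toy convergence score: the fraction of edge traversals flowing
    into each terminal answer node (weighted in-degree), normalized
    over all observed answers.
    """
    edges = build_reasoning_graph(chains)
    answers = {chain[-1] for chain in chains}
    in_weight = defaultdict(int)
    for (src, dst), w in edges.items():
        in_weight[dst] += w
    total = sum(in_weight[a] for a in answers) or 1
    return {a: in_weight[a] / total for a in answers}

# Hypothetical sampled chains for 17 * 3 + 9:
chains = [
    ["17*3", "51", "51+9", "60"],
    ["17*3", "51", "add 9", "60"],
    ["17*3 is 41", "41+9", "50"],
]
scores = answer_confidence(chains)
# The two chains converging on "60" give it a higher score than "50".
```

Because the two correct chains share intermediate nodes, their paths merge in the graph and concentrate confidence on "60", whereas self-consistency voting alone would only count final answers.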

Read the full paper on the ACL Anthology.
