[Events] Advancing LLM’s capabilities to aggregate its own reasoning (10:00 - 11:00, Nov 18th, 2025)
- College of Software Convergence
- Hit376
- 2025-11-17
Title: Advancing LLM’s capabilities to aggregate its own reasoning
Speaker: Dr. Ilia Kulikov @ Meta FAIR
Time: 10:00 - 11:00, Nov 18th, 2025
Location: Online
https://hli.skku.edu/InvitedTalk251118
Language: English speech & English slides
Abstract:
As large language models tackle increasingly complex reasoning tasks, scaling test-time computation through generating and aggregating multiple solution candidates has emerged as a key paradigm for improvement. However, traditional aggregation methods like majority voting often fail to fully exploit the information contained in diverse reasoning traces. In this talk, I will present our recent work on AggLM, where we reframe aggregation as an explicit reasoning skill that can be learned through reinforcement learning. Our approach trains models to review, reconcile, and synthesize correct answers from candidate solutions, effectively recovering minority-correct answers while maintaining strong performance on majority-correct cases. I will demonstrate how this learned aggregation outperforms both rule-based and reward-model baselines across multiple benchmarks, generalizes to solutions from stronger models, and achieves better efficiency than majority voting. Finally, I will discuss our ongoing work that extends beyond single-step aggregation, exploring new directions for enhancing LLMs' meta-reasoning capabilities.
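For context on the rule-based baseline the abstract mentions, majority voting simply selects the most frequent final answer among sampled reasoning traces. A minimal illustrative sketch (function and variable names are my own, not from the AggLM work):

```python
from collections import Counter

def majority_vote(candidates):
    """Return the most common final answer among candidate solutions.

    `candidates` is a list of final answers extracted from sampled
    reasoning traces; ties break by first occurrence.
    """
    answer, _ = Counter(candidates).most_common(1)[0]
    return answer

# Illustrative only: if the correct answer "42" appears in a minority
# of traces, majority voting discards it -- the failure mode a learned
# aggregator is meant to recover from.
samples = ["41", "41", "42", "40", "41"]
print(majority_vote(samples))  # -> "41"
```

This shows why majority voting cannot exploit the content of the reasoning traces themselves: it sees only the final answers, whereas a learned aggregator can review and reconcile the candidate solutions before synthesizing an answer.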
Bio:
Dr. Ilia Kulikov is a research scientist at Meta FAIR. He obtained his PhD from New York University in 2022, where he was advised by Kyunghyun Cho and Jason Weston. He is a member of the RAM team, which works on advancing the algorithms, data, and models that enable self-improving capabilities in LLMs.
