Post-training of large language models has long been divided into two paradigms: supervised fine-tuning (SFT), centered on imitation, and reinforcement learning (RL), driven by exploration.