CodeContests Finetuning: Details for Multi-Token LLMs

11 Jun 2025

Abstract and 1. Introduction

2. Method

3. Experiments on real data

3.1. Benefits scale with model size and 3.2. Faster inference

3.3. Learning global patterns with multi-byte prediction and 3.4. Searching for the optimal n

3.5. Training for multiple epochs and 3.6. Finetuning multi-token predictors

3.7. Multi-token prediction on natural language

4. Ablations on synthetic data and 4.1. Induction capability

4.2. Algorithmic reasoning

5. Why does it work? Some speculation and 5.1. Lookahead reinforces choice points

5.2. Information-theoretic argument

6. Related work

7. Conclusion, Impact statement, Environmental impact, Acknowledgements, and References

A. Additional results on self-speculative decoding

B. Alternative architectures

C. Training speeds

D. Finetuning

E. Additional results on model scaling behavior

F. Details on CodeContests finetuning

G. Additional results on natural language benchmarks

H. Additional results on abstractive text summarization

I. Additional results on mathematical reasoning in natural language

J. Additional results on induction learning

K. Additional results on algorithmic reasoning

L. Additional intuitions on multi-token prediction

M. Training hyperparameters

F. Details on CodeContests finetuning

We use the Python subset of the CodeContests (Li et al., 2022) train split with reward annotations (“correct” / “incorrect”) and condition on correct solutions at evaluation time. For evaluation, we generate 1000 samples per problem from the test split at each temperature T ∈ {0.5, 0.6, 0.7, 0.8, 0.9}, and compute the unbiased pass@k estimator from Chen et al. (2021) for each value of k and T. Since models pretrained with different losses may have different optimal sampling temperatures for pass@k, we report k ↦ max_T pass@k(T) in Figure 4; in other words, we grant pass@k access to a temperature oracle. For small values of k, pass@k measures the ability to understand and solve a task, while for large k it additionally rewards diversity in the outputs. According to the results in Figure 4, multi-token prediction pretraining leads to finetuned models that are better on both axes.
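As a concrete illustration, here is a minimal sketch of this evaluation. The pass_at_k function implements the unbiased estimator from Chen et al. (2021); the bookkeeping of per-problem results as a dictionary keyed by temperature, and the helper names pass_at_k and pass_at_k_with_temperature_oracle, are assumptions for illustration rather than the authors' actual evaluation code.

```python
import numpy as np


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator from Chen et al. (2021).

    n: total number of samples generated for a problem
    c: number of those samples that pass all tests
    k: number of samples the metric is allowed to draw
    """
    if n - c < k:
        # Every possible draw of k samples contains at least one correct one.
        return 1.0
    # 1 - C(n - c, k) / C(n, k), evaluated as a numerically stable product.
    return 1.0 - float(np.prod(1.0 - k / np.arange(n - c + 1, n + 1)))


def pass_at_k_with_temperature_oracle(results: dict, k: int) -> float:
    """Average pass@k over problems, granting access to a temperature oracle.

    `results` maps a temperature T to a list of (n, c) pairs, one per test
    problem. For a given k, this returns max_T of the mean pass@k at T,
    i.e. the quantity k -> max_T pass@k(T) reported in Figure 4.
    """
    return max(
        float(np.mean([pass_at_k(n, c, k) for n, c in per_problem]))
        for per_problem in results.values()
    )
```

With 1000 samples per problem, one would call pass_at_k_with_temperature_oracle once per value of k, after counting for each problem and temperature how many of the 1000 generations pass the problem's tests.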

This paper is available on arXiv under a CC BY 4.0 DEED license.

Authors:

(1) Fabian Gloeckle, FAIR at Meta, CERMICS Ecole des Ponts ParisTech, and contributed equally;

(2) Badr Youbi Idrissi, FAIR at Meta, LISN Université Paris-Saclay, and contributed equally;

(3) Baptiste Rozière, FAIR at Meta;

(4) David Lopez-Paz, FAIR at Meta and a last author;

(5) Gabriel Synnaeve, FAIR at Meta and a last author.