ProFit: Leveraging High-Value Signals in SFT via Probability-Guided Token Selection
Abstract
ProFit mitigates single-reference overfitting in SFT by masking low-probability tokens, which mostly carry replaceable surface expressions rather than core semantics.
Supervised fine-tuning (SFT) is a fundamental post-training strategy to align Large Language Models (LLMs) with human intent. However, traditional SFT often ignores the one-to-many nature of language by forcing alignment with a single reference answer, leading to the model overfitting to non-core expressions. Although our empirical analysis suggests that introducing multiple reference answers can mitigate this issue, the prohibitive data and computational costs necessitate a strategic shift: prioritizing the mitigation of single-reference overfitting over the costly pursuit of answer diversity. To achieve this, we reveal the intrinsic connection between token probability and semantic importance: high-probability tokens carry the core logical framework, while low-probability tokens are mostly replaceable expressions. Based on this insight, we propose ProFit, which selectively masks low-probability tokens to prevent surface-level overfitting. Extensive experiments confirm that ProFit consistently outperforms traditional SFT baselines on general reasoning and mathematical benchmarks.
Community
Code is available at https://github.com/Utaotao/ProFit
Quick Takeaway:
- We need to loosen the SFT target to the subset of semantically crucial tokens.
- We find an intrinsic connection between predicted token probability and semantic importance: high-probability tokens carry the core logical framework, while low-probability tokens are mostly replaceable expressions.
- The proposed ProFit method selectively masks low-probability tokens to prevent surface-level overfitting.
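The selection rule described above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the `keep_ratio` hyperparameter, and the top-fraction selection rule are assumptions made for the example; the paper's actual criterion for "low-probability" may differ.

```python
import numpy as np

def profit_masked_loss(logits, targets, keep_ratio=0.6):
    """Sketch of probability-guided token selection for the SFT loss.

    logits:  (seq_len, vocab) model scores for each position.
    targets: (seq_len,) reference-answer token ids.
    Tokens whose reference-token probability falls in the lowest
    (1 - keep_ratio) fraction are masked out of the loss, so training
    focuses on the high-probability "core" tokens.
    """
    # Softmax over the vocabulary to get per-position probabilities.
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    # Probability the model assigns to each reference token.
    p_ref = probs[np.arange(len(targets)), targets]
    # Keep the highest-probability tokens; mask the rest.
    k = max(1, int(np.ceil(keep_ratio * len(targets))))
    mask = np.zeros(len(targets), dtype=bool)
    mask[np.argsort(-p_ref)[:k]] = True
    # Cross-entropy computed only on the kept tokens.
    loss = -np.log(p_ref[mask] + 1e-12).mean()
    return loss, mask
```

In a real training loop the same idea would be applied per batch before backpropagation, e.g. by zeroing the per-token loss where `mask` is false.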
I created a podcast to explain the key concepts:
https://researchpod-share.vercel.app/episode/ace13947-7c31-4ec2-b1d3-3cfe4115da3f
Wow, many thanks!
arXivlens breakdown of this paper: https://arxivlens.com/PaperView/Details/profit-leveraging-high-value-signals-in-sft-via-probability-guided-token-selection-5856-2d8ab01c
- Executive Summary
- Detailed Breakdown
- Practical Applications
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Entropy-Adaptive Fine-Tuning: Resolving Confident Conflicts to Mitigate Forgetting (2026)
- DaGRPO: Rectifying Gradient Conflict in Reasoning via Distinctiveness-Aware Group Relative Policy Optimization (2025)
- AIR: Post-training Data Selection for Reasoning via Attention Head Influence (2025)
- Rethinking Supervised Fine-Tuning: Emphasizing Key Answer Tokens for Improved LLM Accuracy (2025)
- GIFT: Unlocking Global Optimality in Post-Training via Finite-Temperature Gibbs Initialization (2026)
- Learning from Mistakes: Negative Reasoning Samples Enhance Out-of-Domain Generalization (2026)
- Diversity or Precision? A Deep Dive into Next Token Prediction (2025)
Please give a thumbs up to this comment if you found it helpful!
If you want recommendations for any paper on Hugging Face, check out this Space.
You can directly ask Librarian Bot for paper recommendations by tagging it in a comment: @librarian-bot recommend