OpenAI introduces reinforcement fine-tuning for o4-mini model




OpenAI today announced on its developer-focused account on the social network X that third-party software developers can now access reinforcement fine-tuning (RFT) for its new o4-mini language reasoning model. This lets them create a customized, private version of the model tuned to their enterprise’s unique products, internal terminology, goals, employees, processes and more.

Essentially, this capability lets developers take the model available to the general public and tweak it to better fit their needs using OpenAI’s platform dashboard.

Then, they can deploy it through OpenAI’s application programming interface (API), another part of its developer platform, and connect it to their internal employee computers, databases, and applications.
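For instance, once the fine-tuning job finishes, the resulting model can be called like any other model through OpenAI’s API. Below is a minimal sketch using the official Python SDK; the `ft:`-prefixed model identifier and the company name inside it are placeholders, not real values.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The model name below is a placeholder; use the fine-tuned model ID
# returned by your RFT job (it typically starts with "ft:").
response = client.chat.completions.create(
    model="ft:o4-mini-2025-04-16:acme-corp::example123",
    messages=[
        {"role": "system", "content": "Answer using Acme Corp's internal terminology."},
        {"role": "user", "content": "Summarize our returns policy for the support team."},
    ],
)

print(response.choices[0].message.content)
```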

Once deployed, employees and leaders at the company can use the RFT version of the model through a custom internal chatbot or custom OpenAI GPT to pull up private, proprietary company knowledge, answer specific questions about company products and policies, or generate new communications and collateral in the company’s voice.

However, one cautionary note: research has shown that fine-tuned models may be more prone to jailbreaks and hallucinations, so proceed cautiously!

This launch expands the company’s model optimization tools beyond supervised fine-tuning (SFT) and introduces more flexible control for complex, domain-specific tasks.

Additionally, OpenAI announced that supervised fine-tuning is now supported for its GPT-4.1 nano model, the company’s most affordable and fastest offering to date.

How does Reinforcement Fine-Tuning (RFT) help organizations and enterprises?

RFT creates a new version of OpenAI’s o4-mini reasoning model that is automatically adapted to the user’s or their enterprise/organization’s goals.

It does so by applying a feedback loop during training, which developers at large enterprises (or independent developers) can now initiate relatively simply and affordably through OpenAI’s online developer platform.

Instead of training on a set of questions with fixed correct answers — which is what traditional supervised learning does — RFT uses a grader model to score multiple candidate responses per prompt.

The training algorithm then adjusts model weights to make high-scoring outputs more likely.

This structure allows customers to align models with nuanced objectives such as an enterprise’s “house style” of communication and terminology, safety rules, factual accuracy, or internal policy compliance.
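To make the grading idea concrete, here is a toy Python grader that scores a candidate answer between 0 and 1 by its token overlap with a reference answer. This is purely illustrative: actual RFT graders are defined in OpenAI’s grader configuration format or as model-based graders, not as arbitrary local functions.

```python
def grade(candidate: str, reference: str) -> float:
    """Toy grader: fraction of reference tokens present in the candidate (0.0-1.0)."""
    ref_tokens = set(reference.lower().split())
    cand_tokens = set(candidate.lower().split())
    if not ref_tokens:
        return 0.0
    return len(ref_tokens & cand_tokens) / len(ref_tokens)

# During RFT, several candidate responses per prompt are scored like this,
# and training nudges the model toward the higher-scoring ones.
scores = [grade(c, "net revenue grew 12 percent year over year")
          for c in ["Revenue grew 12 percent year over year.",
                    "Sales were flat."]]
print(scores)  # e.g. [0.857..., 0.0] - higher score means closer to the reference
```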

To perform RFT, users need to:

  1. Define a grading function or use OpenAI model-based graders.
  2. Upload a dataset with prompts and validation splits.
  3. Configure a training job via API or the fine-tuning dashboard (a code sketch follows below).
  4. Monitor progress, review checkpoints and iterate on data or grading logic.

RFT currently supports only o-series reasoning models and is available for the o4-mini model.
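Concretely, step 3 can be done in a few lines with the OpenAI Python SDK’s fine-tuning endpoint. The sketch below assumes a reinforcement `method` with a simple string-check grader; the file IDs are placeholders, and the exact grader and hyperparameter field names are illustrative and should be verified against OpenAI’s RFT documentation before use.

```python
from openai import OpenAI

client = OpenAI()

# File IDs are placeholders for previously uploaded JSONL prompt/validation splits.
# The method/grader schema below is assumed; confirm field names in the RFT docs.
job = client.fine_tuning.jobs.create(
    model="o4-mini-2025-04-16",
    training_file="file-TRAINING_PLACEHOLDER",
    validation_file="file-VALIDATION_PLACEHOLDER",
    method={
        "type": "reinforcement",
        "reinforcement": {
            "grader": {
                "type": "string_check",
                "name": "exact_match_grader",
                "input": "{{sample.output_text}}",
                "reference": "{{item.reference_answer}}",
                "operation": "eq",
            },
            "hyperparameters": {"n_epochs": 1},
        },
    },
)
print(job.id, job.status)  # keep the job ID to monitor progress later
```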

Early enterprise use cases

On its platform, OpenAI highlighted several early customers who have adopted RFT across diverse industries:

  • Accordance AI used RFT to fine-tune a model for complex tax analysis tasks, achieving a 39% improvement in accuracy and outperforming all leading models on tax reasoning benchmarks.
  • Ambience Healthcare applied RFT to ICD-10 medical code assignment, raising model performance by 12 points over physician baselines on a gold-panel dataset.
  • Harvey used RFT for legal document analysis, improving citation extraction F1 scores by 20% and matching GPT-4o in accuracy while achieving faster inference.
  • Runloop fine-tuned models for generating Stripe API code snippets, using syntax-aware graders and AST validation logic, achieving a 12% improvement.
  • Milo applied RFT to scheduling tasks, boosting correctness in high-complexity situations by 25 points.
  • SafetyKit used RFT to enforce nuanced content moderation policies and increased model F1 from 86% to 90% in production.
  • ChipStack, Thomson Reuters, and other partners also demonstrated performance gains in structured data generation, legal comparison tasks and verification workflows.

These cases often shared characteristics: clear task definitions, structured output formats and reliable evaluation criteria—all essential for effective reinforcement fine-tuning.

RFT is available now to verified organizations. To help improve future models, OpenAI offers teams that share their training datasets with OpenAI a 50% discount. Interested developers can get started using OpenAI’s RFT documentation and dashboard.

Pricing and billing structure

Unlike supervised or preference fine-tuning, which is billed per token, RFT is billed based on time spent actively training. Specifically:

  • $100 per hour of core training time (wall-clock time during model rollouts, grading, updates and validation).
  • Time is prorated by the second, rounded to two decimal places (so 1.8 hours of training would cost the customer $180).
  • Charges apply only to work that modifies the model. Queues, safety checks, and idle setup phases are not billed.
  • If the user employs OpenAI models as graders (e.g., GPT-4.1), the inference tokens consumed during grading are billed separately at OpenAI’s standard API rates. Otherwise, the company can use outside models, including open source ones, as graders.

Here is an example cost breakdown:

Scenario                                        | Billable time | Cost
4 hours training                                | 4 hours       | $400
1.75 hours (prorated)                           | 1.75 hours    | $175
2 hours training + 1 hour lost (due to failure) | 2 hours       | $200
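A quick way to sanity-check the proration is a few lines of Python that reproduce the figures above; note that rounding is applied here to the dollar amount, which matches these examples but is only an approximation of OpenAI’s exact billing rules.

```python
RATE_PER_HOUR = 100  # USD per hour of core training time

def rft_cost(billable_hours: float) -> float:
    """Prorated cost for the billable training time, rounded to the cent."""
    return round(billable_hours * RATE_PER_HOUR, 2)

print(rft_cost(4.0))   # 400.0
print(rft_cost(1.75))  # 175.0
print(rft_cost(2.0))   # 200.0 - the hour lost to a failure is not billed
```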

This pricing model provides transparency and rewards efficient job design. To control costs, OpenAI encourages teams to:

  • Use lightweight or efficient graders where possible.
  • Avoid overly frequent validation unless necessary.
  • Start with smaller datasets or shorter runs to calibrate expectations.
  • Monitor training with API or dashboard tools and pause as needed (see the sketch below).
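For that last point, the standard fine-tuning job endpoints can be used to check on a run and, if needed, stop it. A minimal sketch with the Python SDK, using a placeholder job ID:

```python
from openai import OpenAI

client = OpenAI()

job_id = "ftjob-PLACEHOLDER"  # returned when the RFT job was created

# Check the current status and recent events for the run.
job = client.fine_tuning.jobs.retrieve(job_id)
print(job.status)

for event in client.fine_tuning.jobs.list_events(job_id, limit=5):
    print(event.message)

# Stop the run if costs or metrics are heading the wrong way.
# client.fine_tuning.jobs.cancel(job_id)
```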

OpenAI uses a billing method called “captured forward progress,” meaning users are only billed for model training steps that were successfully completed and retained.

So, should your organization invest in fine-tuning a custom version of OpenAI’s o4-mini with RFT?

Reinforcement fine-tuning introduces a more expressive and controllable method for adapting language models to real-world use cases.

With support for structured outputs, code-based and model-based graders, and full API control, RFT enables a new level of customization in model deployment. OpenAI’s rollout emphasizes thoughtful task design and robust evaluation as keys to success.

Developers interested in exploring this method can access documentation and examples via OpenAI’s fine-tuning dashboard.

For organizations with clearly defined problems and verifiable answers, RFT offers a compelling way to align models with operational or compliance goals — without building RL infrastructure from scratch.


