Model Overview

Description:

Qwen3-Nemotron-235B-A22B-GenRM is a Generative Reward Model (GenRM) that builds on Qwen3-235B-A22B-Thinking-2507 and is fine-tuned to evaluate the quality of assistant responses.

Given a conversation history, a new user request, and two candidate assistant responses, it produces an individual helpfulness score for each response and a ranking score.

This GenRM is used in the Reinforcement Learning from Human Feedback training of NVIDIA-Nemotron-3-Nano-30B-A3B-BF16.

For training details, see the Nemotron 3 Nano technical report.

This model is ready for commercial/non-commercial use.

License/Terms of Use:

The model is licensed under Apache 2.0.

Deployment Geography

Global

Release Date:

HuggingFace 2025-12-15 via https://huggingface.co/nvidia/Qwen3-Nemotron-235B-A22B-GenRM

Evaluation Results:

RM-Bench

| Chat | Math | Code | Safety | Easy | Normal | Hard | Overall |
|------|------|------|--------|------|--------|------|---------|
| 76.5 | 96.9 | 81.4 | 94.4   | 94.0 | 90.5   | 77.4 | 87.3    |

JudgeBench

| Knowledge | Reasoning | Math | Code | Overall |
|-----------|-----------|------|------|---------|
| 78.6      | 95.9      | 91.1 | 95.2 | 87.4    |

Model Architecture:

Architecture Type: Transformer
Network Architecture: Qwen3

We developed this model using Qwen/Qwen3-235B-A22B-Thinking-2507 as its foundation. It is a mixture-of-experts model with 235 billion total parameters, of which roughly 22 billion are active per token.

Input:

Input Type(s): Text
Input Format: String
Input Parameters: One Dimensional (1D)
Other Properties Related to Input: Max of 128k tokens
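Since the context window is capped at 128k tokens, long conversations may need to be checked before submission. Below is a minimal sketch of such a guard; it uses a rough characters-per-token heuristic (an assumption for illustration — for exact counts, use the model's actual tokenizer), and the function names are ours:

```python
# Rough guard against exceeding the model's 128k-token context window.
# The ~4 characters-per-token ratio is a heuristic, not the real tokenizer.
MAX_TOKENS = 128_000
CHARS_PER_TOKEN = 4  # assumption; replace with a real tokenizer for accuracy

def estimate_tokens(messages):
    """Crude token estimate summed over all message contents."""
    return sum(len(m["content"]) for m in messages) // CHARS_PER_TOKEN

def fits_context(messages, reserve=16_384):
    """True if the estimated prompt leaves `reserve` tokens for generation."""
    return estimate_tokens(messages) + reserve <= MAX_TOKENS

msgs = [{"role": "user", "content": "What is 1+1?"},
        {"role": "assistant", "content": "1+1=2"}]
print(fits_context(msgs))  # → True
```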

Output:

Output Type(s): Text
Output Format: String
Output Parameters: One-Dimensional (1D)

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • vLLM

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Hopper

Supported Operating System(s): Linux

Quick Start

The model shares the same architecture as Qwen3-235B-A22B-Thinking-2507. It can be served with vLLM.

python3 -m vllm.entrypoints.openai.api_server \
  --model "nvidia/Qwen3-Nemotron-235B-A22B-GenRM" \
  --trust-remote-code \
  --seed=1 \
  --host="0.0.0.0" \
  --port=5000 \
  --served-model-name "nvidia/Qwen3-Nemotron-235B-A22B-GenRM" \
  --tensor-parallel-size=8 \
  --max-model-len=40000 \
  --gpu-memory-utilization=0.95

You can now query the model; here is an example:

from openai import OpenAI
client = OpenAI(base_url="http://127.0.0.1:5000/v1", api_key="dummy")

msg = [
  {"role": "user", "content": "What is 1+1?"}, 
  {"role": "assistant", "content": "1+1=2"}, 
  {"role": "user", "content": "What about 1+2?"},
  {"role": "response_1", "content": "1+2=4"},
  {"role": "response_2", "content": "1+2=3"}
]

completion = client.chat.completions.create(
    model="nvidia/Qwen3-Nemotron-235B-A22B-GenRM",
    messages=msg,
    temperature=0.6,
    top_p=0.95,
    max_tokens=16384,
    stream=False
)
output = completion.choices[0].message.content
print(output.split("</think>")[-1].strip())

Note that the conversation history should be presented using the "user" and "assistant" roles, with the last turn being a user turn. The two responses to be judged should use the "response_1" and "response_2" roles.
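The expected message layout above can be checked programmatically before sending a request. Below is a minimal sketch that encodes those rules; the helper name is ours, not part of any API:

```python
def validate_genrm_messages(messages):
    """Check that a message list matches the expected GenRM layout:
    a user/assistant conversation ending on a user turn, followed by
    exactly two candidates in the response_1 and response_2 roles."""
    roles = [m["role"] for m in messages]
    if roles[-2:] != ["response_1", "response_2"]:
        raise ValueError("last two messages must be response_1 and response_2")
    history = roles[:-2]
    if not history or history[-1] != "user":
        raise ValueError("conversation history must end with a user turn")
    if any(r not in ("user", "assistant") for r in history):
        raise ValueError("history may only contain user/assistant roles")
    return True

msg = [
    {"role": "user", "content": "What is 1+1?"},
    {"role": "assistant", "content": "1+1=2"},
    {"role": "user", "content": "What about 1+2?"},
    {"role": "response_1", "content": "1+2=4"},
    {"role": "response_2", "content": "1+2=3"},
]
validate_genrm_messages(msg)  # passes without raising
```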

Interpretation of Scores

The individual helpfulness score ranges from 1 to 5, where higher is better.

The ranking score ranges from 1 to 6, where:

  • 1 = Response 1 is much better than Response 2
  • 2 = Response 1 is better than Response 2
  • 3 = Response 1 is slightly better than Response 2
  • 4 = Response 2 is slightly better than Response 1
  • 5 = Response 2 is better than Response 1
  • 6 = Response 2 is much better than Response 1
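The scale above can be mapped back to a preference programmatically. Below is a small sketch implementing that mapping; the function names are ours, not part of any API:

```python
def preferred_response(ranking_score):
    """Map a 1-6 ranking score to the preferred response index (1 or 2).
    Per the scale above, scores 1-3 favor Response 1; 4-6 favor Response 2."""
    if not 1 <= ranking_score <= 6:
        raise ValueError("ranking score must be in [1, 6]")
    return 1 if ranking_score <= 3 else 2

def preference_strength(ranking_score):
    """Map a 1-6 ranking score to a strength label per the scale above."""
    strength = {1: "much better", 2: "better", 3: "slightly better",
                4: "slightly better", 5: "better", 6: "much better"}
    return strength[ranking_score]

print(preferred_response(2), preference_strength(2))  # → 1 better
print(preferred_response(4), preference_strength(4))  # → 2 slightly better
```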

Model Version:

v1.0

Training, Testing and Evaluation Datasets:

Training Datasets:

Dataset Name: Subset of Nemotron dataset-3 containing samples from HelpSteer3, lmarena-ai/arena-human-preference-140k (commercial-friendly models only) and additional safety preference data.

Datasets Links: To be released (Nemotron dataset-3)

Data Collection Method

  • [Hybrid: Human, Synthetic]

Labeling Method

  • [Hybrid: Human, Synthetic]

Evaluation Datasets

Dataset Name: RM-Bench
Dataset Link: https://huggingface.co/datasets/THU-KEG/RM-Bench

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Hybrid: Human, Synthetic]

Properties:

  • 1,327 prompts, each with three pairs of responses and a preference label for each pair.

Dataset Name: JudgeBench
Dataset Link: https://huggingface.co/datasets/ScalerLab/JudgeBench

Data Collection Method by dataset

  • [Hybrid: Human, Synthetic]

Labeling Method by dataset

  • [Hybrid: Human, Synthetic]

Properties:

  • 350 prompts, each with a pair of responses and a preference label for the pair.

Inference:

Engine: PyTorch
Test Hardware: H100

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their supporting model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. For more detailed information on ethical considerations for this model, please see the Model Card++ Explainability, Bias, Safety and Security, and Privacy Subcards.

Please report security vulnerabilities or NVIDIA AI Concerns here.

Citation

If you find this model useful, please cite the following work:

@misc{wang2025helpsteer3preferenceopenhumanannotatedpreference,
      title={Help{S}teer3-{P}reference: Open Human-Annotated Preference Data across Diverse Tasks and Languages},
      author={Zhilin Wang and Jiaqi Zeng and Olivier Delalleau and Hoo-Chang Shin and Felipe Soares and Alexander Bukharin and Ellie Evans and Yi Dong and Oleksii Kuchaiev},
      year={2025},
      eprint={2505.11475},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2505.11475}, 
}