Tom Aarsen committed on
Commit 69c2139 · 1 Parent(s): f16fc5d

Integrate with Sentence Transformers v5.4.0
1_CausalScoreHead/config.json ADDED
@@ -0,0 +1,4 @@
+ {
+   "true_token_id": 9693,
+   "false_token_id": 2152
+ }
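These two IDs are the vocabulary positions of the "yes" and "no" answer tokens that the reranker's chat template asks for. A minimal sketch of how a causal score head typically turns the last-token logits into one relevance score — the helper name and the fake logits below are illustrative, not the actual `CausalScoreHead` implementation:

```python
import json

# Token IDs as stored in 1_CausalScoreHead/config.json.
config = json.loads('{"true_token_id": 9693, "false_token_id": 2152}')

def score_from_logits(last_token_logits, cfg):
    """Illustrative scoring (hypothetical helper, not the library's API):
    the relevance score is the "yes" logit minus the "no" logit, so
    positive values mean the document answers the query."""
    return last_token_logits[cfg["true_token_id"]] - last_token_logits[cfg["false_token_id"]]

# Tiny fake logits vector just to exercise the arithmetic.
fake_logits = [0.0] * 10_000
fake_logits[9693] = 4.5   # "yes" logit
fake_logits[2152] = -1.9  # "no" logit
print(score_from_logits(fake_logits, config))
```

This logit difference is what `CrossEncoder.predict` returns by default; applying a sigmoid on top squashes it into a 0-1 probability.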
README.md CHANGED
@@ -3,6 +3,8 @@ license: apache-2.0
  base_model:
  - Qwen/Qwen3-4B-Base
  library_name: transformers
+ tags:
+ - sentence-transformers
  pipeline_tag: text-ranking
  ---
  # Qwen3-Reranker-4B
@@ -49,13 +51,55 @@ For more details, including benchmark evaluation, hardware requirements, and inf
 
  ## Usage
 
+ ### Using Sentence Transformers
+
+ Install Sentence Transformers:
+ ```bash
+ pip install sentence_transformers
+ ```
+
+ ```python
+ from sentence_transformers import CrossEncoder
+
+ model = CrossEncoder("Qwen/Qwen3-Reranker-4B")
+
+ query = "What is the capital of China?"
+ documents = [
+     "The capital of China is Beijing.",
+     "Gravity is a force that attracts two bodies towards each other. It gives weight to physical objects and is responsible for the movement of planets around the sun.",
+ ]
+
+ pairs = [(query, doc) for doc in documents]
+ scores = model.predict(pairs)
+ print(scores)
+ # [ 6.4375 -14.375 ]
+
+ rankings = model.rank(query, documents)
+ print(rankings)
+ # [{'corpus_id': 0, 'score': 6.4375}, {'corpus_id': 1, 'score': -14.375}]
+ ```
+
+ By default, scores are raw logit differences. To get 0-1 probability scores, pass a Sigmoid activation function:
+ ```python
+ import torch
+
+ scores = model.predict([(query, doc) for doc in documents], activation_fn=torch.nn.Sigmoid())
+ ```
+
+ The model uses a default prompt `"query"`, which injects the instruction `"Given a web search query, retrieve relevant passages that answer the query"` into the chat template. You can provide a custom instruction via the `prompts` parameter:
+ ```python
+ model = CrossEncoder(
+     "Qwen/Qwen3-Reranker-4B",
+     prompts={"classification": "Classify whether the document matches the query topic"},
+     default_prompt_name="classification",
+ )
+ ```
+
+ ### Using Transformers
+
  With Transformers versions earlier than 4.51.0, you may encounter the following error:
  ```
  KeyError: 'qwen3'
  ```
 
- ### Transformers Usage
-
  ```python
  # Requires transformers>=4.51.0
  import torch
chat_template.jinja ADDED
@@ -0,0 +1,15 @@
+ {%- set instruction = messages | selectattr("role", "eq", "system") | map(attribute="content") | first | default("Given a web search query, retrieve relevant passages that answer the query") -%}
+ {%- set query_text = messages | selectattr("role", "eq", "query") | map(attribute="content") | first -%}
+ {%- set document_text = messages | selectattr("role", "eq", "document") | map(attribute="content") | first -%}
+ <|im_start|>system
+ Judge whether the Document meets the requirements based on the Query and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>
+ <|im_start|>user
+ <Instruct>: {{ instruction }}
+ <Query>: {{ query_text }}
+ <Document>: {{ document_text }}<|im_end|>
+ <|im_start|>assistant
+ <think>
+
+ </think>
+
+
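For readers unfamiliar with this template style: it pulls the instruction, query, and document out of a pseudo-chat message list (one message per role) and lays them into the fixed Qwen3-Reranker prompt scaffold. A plain-Python sketch of the same assembly logic — an illustrative re-implementation, not the Jinja rendering that Sentence Transformers actually runs:

```python
DEFAULT_INSTRUCTION = "Given a web search query, retrieve relevant passages that answer the query"

def render_prompt(messages):
    """Mimics chat_template.jinja: take the first message per role and
    fill the fixed system/user/assistant scaffold. Illustrative only."""
    def first(role, default=None):
        # Mirrors: messages | selectattr("role", "eq", role) | map(attribute="content") | first
        return next((m["content"] for m in messages if m["role"] == role), default)

    instruction = first("system", DEFAULT_INSTRUCTION)
    query = first("query")
    document = first("document")
    return (
        "<|im_start|>system\n"
        "Judge whether the Document meets the requirements based on the Query "
        'and the Instruct provided. Note that the answer can only be "yes" or "no".<|im_end|>\n'
        "<|im_start|>user\n"
        f"<Instruct>: {instruction}\n"
        f"<Query>: {query}\n"
        f"<Document>: {document}<|im_end|>\n"
        "<|im_start|>assistant\n"
        "<think>\n\n</think>\n\n"
    )

print(render_prompt([
    {"role": "query", "content": "What is the capital of China?"},
    {"role": "document", "content": "The capital of China is Beijing."},
]))
```

Note the empty `<think>…</think>` block at the end: the prompt pre-fills an empty reasoning section so the model's next token is directly the "yes"/"no" answer that the score head reads.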
config_sentence_transformers.json ADDED
@@ -0,0 +1,12 @@
+ {
+   "__version__": {
+     "pytorch": "2.10.0+cu128",
+     "sentence_transformers": "5.4.0"
+   },
+   "activation_fn": "torch.nn.modules.linear.Identity",
+   "default_prompt_name": "query",
+   "model_type": "CrossEncoder",
+   "prompts": {
+     "query": "Given a web search query, retrieve relevant passages that answer the query"
+   }
+ }
modules.json ADDED
@@ -0,0 +1,14 @@
+ [
+   {
+     "idx": 0,
+     "name": "0",
+     "path": "",
+     "type": "sentence_transformers.base.modules.transformer.Transformer"
+   },
+   {
+     "idx": 1,
+     "name": "1",
+     "path": "1_CausalScoreHead",
+     "type": "sentence_transformers.cross_encoder.modules.causal_score_head.CausalScoreHead"
+   }
+ ]
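This file declares the two-stage pipeline: the base Transformer (loaded from the repository root) feeds its logits into the `CausalScoreHead` stored under `1_CausalScoreHead/`. A stdlib-only sketch of how such a module list can be walked in order — an illustrative loader, not sentence-transformers internals, which import and instantiate each `type`:

```python
import json

# modules.json content, inlined for a self-contained example.
MODULES_JSON = """
[
  {"idx": 0, "name": "0", "path": "", "type": "sentence_transformers.base.modules.transformer.Transformer"},
  {"idx": 1, "name": "1", "path": "1_CausalScoreHead", "type": "sentence_transformers.cross_encoder.modules.causal_score_head.CausalScoreHead"}
]
"""

def module_plan(modules_json):
    """Return (class name, subfolder) pairs in execution order.
    Illustrative helper: sorts by "idx" and strips the dotted module path."""
    modules = sorted(json.loads(modules_json), key=lambda m: m["idx"])
    return [(m["type"].rsplit(".", 1)[-1], m["path"] or "<repo root>") for m in modules]

for cls, path in module_plan(MODULES_JSON):
    print(f"{cls} <- {path}")
# Transformer <- <repo root>
# CausalScoreHead <- 1_CausalScoreHead
```

An empty `"path"` means the module's weights and config live at the repository root, which is why the base model files need no subfolder.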
sentence_bert_config.json ADDED
@@ -0,0 +1,15 @@
+ {
+   "transformer_task": "text-generation",
+   "modality_config": {
+     "text": {
+       "method": "forward",
+       "method_output_name": "logits"
+     },
+     "message": {
+       "method": "forward",
+       "method_output_name": "logits"
+     }
+   },
+   "module_output_name": "causal_logits",
+   "message_format": "flat"
+ }