---
license: apache-2.0
datasets:
- sentence-transformers/natural-questions
language:
- en
base_model: cross-encoder/ms-marco-MiniLM-L12-v2
pipeline_tag: text-ranking
library_name: rk-transformers
tags:
- transformers
- rknn
- rockchip
- npu
- rk-transformers
- rk3588
model_name: ms-marco-MiniLM-L12-v2
---

# ms-marco-MiniLM-L12-v2 (RKNN2)

> This is an RKNN-compatible version of the [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2) model. It has been optimized for Rockchip NPUs using the [rk-transformers](https://github.com/emapco/rk-transformers) library.
## Model Details

- **Original Model:** [cross-encoder/ms-marco-MiniLM-L12-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L12-v2)
- **Target Platform:** rk3588
- **rknn-toolkit2 Version:** 2.3.2
- **rk-transformers Version:** 0.3.0

### Available Model Files

| Model File | Optimization Level | Quantization | File Size |
| :--------- | :----------------- | :----------- | :-------- |
| [model.rknn](./model.rknn) | 0 | float16 | 68.8 MB |
| [model_b1_s256.rknn](./model_b1_s256.rknn) | 0 | float16 | 67.0 MB |
| [model_b4_s256.rknn](./model_b4_s256.rknn) | 0 | float16 | 75.1 MB |
| [model_b4_s512.rknn](./model_b4_s512.rknn) | 0 | float16 | 81.7 MB |
| [rknn/model_o1.rknn](./rknn/model_o1.rknn) | 1 | float16 | 68.8 MB |
| [rknn/model_o2.rknn](./rknn/model_o2.rknn) | 2 | float16 | 68.8 MB |
| [rknn/model_o3.rknn](./rknn/model_o3.rknn) | 3 | float16 | 68.8 MB |
| [rknn/model_w8a8.rknn](./rknn/model_w8a8.rknn) | 0 | w8a8 | 36.5 MB |

## Usage

### Installation

Install `rk-transformers` with inference dependencies to use this model:

```bash
pip install rk-transformers[inference]
```

#### RK-Transformers API

```python
from rktransformers import RKModelForSequenceClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rk-transformers/ms-marco-MiniLM-L12-v2")
model = RKModelForSequenceClassification.from_pretrained(
    "rk-transformers/ms-marco-MiniLM-L12-v2",
    platform="rk3588",
    core_mask="auto",
)

inputs = tokenizer("Hello, my dog is cute", return_tensors="np")
outputs = model(**inputs)
logits = outputs.logits
print(logits.shape)

# Load a specific optimized/quantized model file
model = RKModelForSequenceClassification.from_pretrained(
    "rk-transformers/ms-marco-MiniLM-L12-v2",
    platform="rk3588",
    file_name="rknn/model_w8a8.rknn",
)
```

## Configuration

The full configuration for all exported RKNN models is available in the [config.json](./config.json) file.
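Because this model is a cross-encoder re-ranker, the RKNN model scores (query, passage) pairs directly. Below is a minimal re-ranking sketch, assuming the same `RKModelForSequenceClassification` API shown above; the choice of `model_b1_s256.rknn` as a fixed batch-1, 256-token variant and the `max_length` padding are assumptions based on the file names in the table, and the query and passages are illustrative.

```python
import numpy as np
from rktransformers import RKModelForSequenceClassification
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("rk-transformers/ms-marco-MiniLM-L12-v2")
model = RKModelForSequenceClassification.from_pretrained(
    "rk-transformers/ms-marco-MiniLM-L12-v2",
    platform="rk3588",
    file_name="model_b1_s256.rknn",  # assumed fixed-shape variant: batch 1, sequence length 256
)

query = "How many people live in Berlin?"
passages = [
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "Berlin is well known for its museums.",
]

scores = []
for passage in passages:
    # Pad each (query, passage) pair to the exported sequence length so the
    # input matches the fixed RKNN graph shape.
    features = tokenizer(
        query,
        passage,
        padding="max_length",
        truncation=True,
        max_length=256,
        return_tensors="np",
    )
    scores.append(float(model(**features).logits.reshape(-1)[0]))

# Higher score = more relevant passage.
for idx in np.argsort(scores)[::-1]:
    print(f"{scores[idx]:.2f}  {passages[idx]}")
```

For batched scoring, the `model_b4_s256.rknn` and `model_b4_s512.rknn` variants can likely be loaded the same way via `file_name`, with inputs padded to four pairs per call.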
---

# Cross-Encoder for MS Marco

This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.

The model can be used for Information Retrieval: given a query, encode the query together with each candidate passage (e.g. retrieved with ElasticSearch), then sort the passages in decreasing order of score (see the re-ranking sketch at the end of this card). See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/cross_encoder/training/ms_marco)

## Usage with SentenceTransformers

Usage is easy when you have [SentenceTransformers](https://www.sbert.net/) installed; you can then use the pre-trained model like this:

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L12-v2')
scores = model.predict([
    ("How many people live in Berlin?", "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers."),
    ("How many people live in Berlin?", "Berlin is well known for its museums."),
])
print(scores)
# [ 9.218911 -4.0780287]
```

## Usage with Transformers

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/ms-marco-MiniLM-L12-v2')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/ms-marco-MiniLM-L12-v2')

features = tokenizer(
    ['How many people live in Berlin?', 'How many people live in Berlin?'],
    ['Berlin has a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.',
     'New York City is famous for the Metropolitan Museum of Art.'],
    padding=True, truncation=True, return_tensors="pt",
)

model.eval()
with torch.no_grad():
    scores = model(**features).logits
    print(scores)
```

## Performance

The following table lists various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) datasets.

| Model Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| :--------- | :------------------- | :-------------------- | :--------- |
| **Version 2 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2-v2 | 69.84 | 32.56 | 9000 |
| cross-encoder/ms-marco-MiniLM-L2-v2 | 71.01 | 34.85 | 4100 |
| cross-encoder/ms-marco-MiniLM-L4-v2 | 73.04 | 37.70 | 2500 |
| cross-encoder/ms-marco-MiniLM-L6-v2 | 74.30 | 39.01 | 1800 |
| cross-encoder/ms-marco-MiniLM-L12-v2 | 74.31 | 39.02 | 960 |
| **Version 1 models** | | | |
| cross-encoder/ms-marco-TinyBERT-L2 | 67.43 | 30.15 | 9000 |
| cross-encoder/ms-marco-TinyBERT-L4 | 68.09 | 34.50 | 2900 |
| cross-encoder/ms-marco-TinyBERT-L6 | 69.57 | 36.13 | 680 |
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340 |
| **Other models** | | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900 |
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340 |
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100 |
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340 |
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330 |
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720 |

Note: Runtime was computed on a V100 GPU.
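As a worked example of the retrieve-and-re-rank workflow described at the top of this card, the sketch below scores a handful of candidate passages and sorts them by decreasing relevance. The candidate passages are illustrative stand-ins for results returned by a first-stage retriever such as ElasticSearch/BM25.

```python
from sentence_transformers import CrossEncoder

model = CrossEncoder('cross-encoder/ms-marco-MiniLM-L12-v2')

query = "How many people live in Berlin?"
# In practice these would come from a first-stage retriever (e.g. BM25 / ElasticSearch).
candidate_passages = [
    "Berlin is well known for its museums.",
    "Berlin had a population of 3,520,031 registered inhabitants in an area of 891.82 square kilometers.",
    "New York City is famous for the Metropolitan Museum of Art.",
]

# Score every (query, passage) pair, then sort passages by decreasing score.
scores = model.predict([(query, passage) for passage in candidate_passages])
reranked = sorted(zip(scores, candidate_passages), key=lambda pair: pair[0], reverse=True)
for score, passage in reranked:
    print(f"{score:.2f}  {passage}")
```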