DACTYL Classifiers
Collection
Trained AI-generated text classifiers. "Pretrained" refers to models trained with a binary cross-entropy loss; "finetuned" refers to classifiers further optimized with deep X-risk (AUC-maximizing) objectives.
10 items
Training configuration (BCE pretraining of ModernBERT-base):

```json
{
  "training_split": "training",
  "evaluation_split": "testing",
  "results_path": "bce-pretraining-modernbert.csv",
  "num_epochs": 5,
  "model_path": "answerdotai/ModernBERT-base",
  "tokenizer": "answerdotai/ModernBERT-base",
  "optimizer": "AdamW",
  "optimizer_type": "torch",
  "optimizer_args": {
    "lr": 2e-05,
    "weight_decay": 0.01
  },
  "loss_fn": "BCEWithLogitsLoss",
  "reset_classification_head": false,
  "loss_type": "torch",
  "loss_fn_args": {},
  "needs_loss_fn_as_parameter": false,
  "save_path": "ShantanuT01/dactyl-modernbert-base-pretrained",
  "training_args": {
    "batch_size": 64,
    "needs_sampler": false,
    "needs_index": false,
    "shuffle": true,
    "sampling_rate": null,
    "apply_sigmoid": false
  }
}
```
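The `loss_fn` in the config is PyTorch's `BCEWithLogitsLoss`, which applies binary cross-entropy directly to raw logits in a numerically stable form (this is also why `apply_sigmoid` is `false` during training). A minimal stdlib sketch of the per-example computation; the function name is illustrative, not part of the pipeline:

```python
import math

def bce_with_logits(logit, target):
    # Numerically stable binary cross-entropy on a raw logit,
    # equivalent per-example to torch.nn.BCEWithLogitsLoss:
    # max(x, 0) - x*y + log(1 + exp(-|x|))
    return max(logit, 0.0) - logit * target + math.log1p(math.exp(-abs(logit)))

# A text scored with logit 2.0 (classifier leans "AI-generated"):
print(bce_with_logits(2.0, 1.0))  # ≈ 0.1269 — confident and correct, small loss
print(bce_with_logits(2.0, 0.0))  # ≈ 2.1269 — confident and wrong, large loss
```

Working on logits rather than sigmoid outputs avoids `log(0)` underflow for strongly confident predictions.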
Per-generator evaluation results. AP is average precision; OPAUC and TPAUC denote one-way and two-way partial AUC, respectively (the ranges restricted to the low-false-positive region).

| Generator | AP | AUC | OPAUC | TPAUC |
|---|---|---|---|---|
| DeepSeek-V3 | 0.999385 | 0.999967 | 0.999666 | 0.996747 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-RedditWritingPrompts-testing | 0.873078 | 0.993627 | 0.967028 | 0.681125 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-abstracts-testing | 0.939616 | 0.99424 | 0.98274 | 0.842789 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-news-testing | 0.713771 | 0.986672 | 0.938502 | 0.459902 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-reviews-testing | 0.540008 | 0.967417 | 0.880643 | 0.0987242 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-student_essays-testing | 0.170411 | 0.909164 | 0.745688 | 0 |
| ShantanuT01/fine-tuned-Llama-3.2-1B-Instruct-apollo-mini-tweets-testing | 0.875598 | 0.992623 | 0.963157 | 0.655177 |
| claude-3-5-haiku-20241022 | 0.987401 | 0.998385 | 0.991357 | 0.915794 |
| claude-3-5-sonnet-20241022 | 0.997566 | 0.999781 | 0.998251 | 0.982997 |
| gemini-1.5-flash | 0.983599 | 0.997912 | 0.989025 | 0.893064 |
| gemini-1.5-pro | 0.970013 | 0.996088 | 0.979766 | 0.802907 |
| gpt-4o-2024-11-20 | 0.984528 | 0.99771 | 0.989327 | 0.896023 |
| gpt-4o-mini | 0.99977 | 0.999989 | 0.999889 | 0.998921 |
| llama-3.2-90b | 0.970229 | 0.994643 | 0.980641 | 0.811787 |
| llama-3.3-70b | 0.991184 | 0.998661 | 0.994198 | 0.943534 |
| mistral-large-latest | 0.997942 | 0.999712 | 0.998801 | 0.988319 |
| mistral-small-latest | 0.997993 | 0.999273 | 0.9988 | 0.988307 |
| overall | 0.995644 | 0.996724 | 0.987444 | 0.877631 |
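The unrestricted metrics in the table, AP and AUC, are standard ranking measures over classifier scores. As a pure-Python sketch of what each one computes (the evaluation pipeline itself presumably uses a library implementation; these function names are illustrative):

```python
def auc_score(labels, scores):
    """AUC as the Mann-Whitney statistic: probability that a random
    positive is scored above a random negative (ties count 0.5)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(labels, scores):
    """Mean of the precision values at the rank of each true positive."""
    order = sorted(zip(scores, labels), key=lambda t: -t[0])
    tp, precisions = 0, []
    for rank, (_, y) in enumerate(order, start=1):
        if y == 1:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions)

labels = [1, 1, 0, 1, 0, 0]
scores = [0.9, 0.8, 0.7, 0.4, 0.3, 0.2]
print(auc_score(labels, scores))          # 8/9 ≈ 0.889
print(average_precision(labels, scores))  # 11/12 ≈ 0.917
```

The partial-AUC variants (OPAUC, TPAUC) restrict the same ranking comparison to a bounded false-positive (and, for TPAUC, true-positive) range, which is why they fall fastest on the hardest generators in the table.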