---
license: mit
base_model:
- deepset/gbert-base
pipeline_tag: text-classification
language:
- de
---

This model is a fine-tuned version of [deepset/gbert-base](https://huggingface.co/deepset/gbert-base) for text classification, specifically conspiracy theory detection. The base model was re-trained on ~1.25 million German-language messages from conspiracist Telegram channels and then fine-tuned on the TelCovACT dataset, which comprises ~4,000 German-language Telegram messages annotated for conspiracy theories. The TelCovACT dataset is available upon request.

For more information, see the [dataset paper](https://ojs.aaai.org/index.php/ICWSM/article/view/22216) and the [datasheet](https://zenodo.org/records/7093870). For fine-tuning details and evaluation, see the [reference paper](https://arxiv.org/abs/2404.17985).

## Citation Information

Please cite the reference paper if you use this model:

```bibtex
@misc{pustet2024detection,
      title={Detection of Conspiracy Theories Beyond Keyword Bias in German-Language Telegram Using Large Language Models},
      author={Milena Pustet and Elisabeth Steffen and Helena Mihaljević},
      year={2024},
      eprint={2404.17985},
      archivePrefix={arXiv}
}
```
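
## Usage

A minimal usage sketch with the `transformers` `pipeline` API. The model id `"your-model-id"` is a placeholder for this repository's id, and the returned label names depend on this checkpoint's `id2label` mapping, which is not documented here; check the model config for the actual labels.

```python
def build_classifier(model_id: str = "your-model-id"):
    """Build a text-classification pipeline for conspiracy theory detection.

    `model_id` is a placeholder: substitute the repo id of this
    fine-tuned checkpoint. The import is kept inside the function so the
    sketch can be loaded without triggering a model download.
    """
    from transformers import pipeline

    return pipeline("text-classification", model=model_id)


# Example (not executed here; downloads the model weights on first call):
# clf = build_classifier()
# clf("Ein deutschsprachiger Telegram-Beitrag ...")
```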