---
dataset_info:
- config_name: C#
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 548238273
    num_examples: 300000
  download_size: 199846629
  dataset_size: 548238273
- config_name: C#-long
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 2064546842
    num_examples: 96068
  download_size: 530467782
  dataset_size: 2064546842
- config_name: js
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 483951160
    num_examples: 296473
  download_size: 213787604
  dataset_size: 483951160
- config_name: js-long
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 398167356
    num_examples: 41492
  download_size: 145648694
  dataset_size: 398167356
- config_name: md
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 481042012
    num_examples: 400000
  download_size: 271179932
  dataset_size: 481042012
- config_name: md-long
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 1392553381
    num_examples: 100000
  download_size: 618487776
  dataset_size: 1392553381
- config_name: py
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 5780780483
    num_examples: 2000000
  download_size: 2522248734
  dataset_size: 5780780483
- config_name: py-long
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 4825290455
    num_examples: 500000
  download_size: 1825128953
  dataset_size: 4825290455
- config_name: ts
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 356095260
    num_examples: 237992
  download_size: 148690703
  dataset_size: 356095260
- config_name: ts-long
  features:
  - name: text
    dtype: string
  splits:
  - name: train
    num_bytes: 3698485815
    num_examples: 62200
  download_size: 931446724
  dataset_size: 3698485815
configs:
- config_name: C#
  data_files:
  - split: train
    path: C#/train-*
- config_name: C#-long
  data_files:
  - split: train
    path: C#-long/train-*
- config_name: js
  data_files:
  - split: train
    path: js/train-*
- config_name: js-long
  data_files:
  - split: train
    path: js-long/train-*
- config_name: md
  data_files:
  - split: train
    path: md/train-*
- config_name: md-long
  data_files:
  - split: train
    path: md-long/train-*
- config_name: py
  data_files:
  - split: train
    path: py/train-*
- config_name: py-long
  data_files:
  - split: train
    path: py-long/train-*
- config_name: ts
  data_files:
  - split: train
    path: ts/train-*
- config_name: ts-long
  data_files:
  - split: train
    path: ts-long/train-*
license: apache-2.0
task_categories:
- text-generation
language:
- en
---

# Reactive AI / Beta Code

Code-based pre-training corpus for RxT-Beta models, created from public and open datasets. It includes code in several programming languages. Each language has two subsets, split into short (< ~1024 tokens) and long (> ~1024 tokens) categories.

## Original dataset

It is created from [codeparrot](https://huggingface.co/codeparrot) datasets:
- Python subsets from [codeparrot/codeparrot-clean](https://huggingface.co/datasets/codeparrot/codeparrot-clean)
- other subsets from [codeparrot/github-code-clean](https://huggingface.co/datasets/codeparrot/github-code-clean)
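The short/long partition above can be sketched roughly as follows. This is a minimal illustration, not the actual preprocessing: the card does not specify the tokenizer, so whitespace splitting is used here as a crude stand-in, and the `bucket` helper is hypothetical.

```python
# Sketch of the short/long subset partition described in the card.
# Assumption: whitespace tokenization approximates the real (unspecified)
# tokenizer; the ~1024-token boundary comes from the card.

LONG_THRESHOLD = 1024  # approximate boundary between short and long subsets


def approx_token_count(text: str) -> int:
    """Rough token count; the actual tokenizer used for the split is unspecified."""
    return len(text.split())


def bucket(config: str, text: str) -> str:
    """Return the config name a sample would fall into, e.g. 'py' or 'py-long'."""
    if approx_token_count(text) > LONG_THRESHOLD:
        return f"{config}-long"
    return config


print(bucket("py", "def add(a, b):\n    return a + b"))  # -> "py"
print(bucket("py", "x " * 2000))  # -> "py-long"
```

In practice, each config can be loaded by name via `datasets.load_dataset("<repo-id>", "py")` (repo id as published on the Hub; placeholder here).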