BioCLIP 2.5 Huge Training Data Update (catalog & provenance)
#20
by netzhang - opened
This PR contains updates from 240M_v1.1, updating:
- catalog
- provenance
- dataset card
Provenance Update
The schema is preserved to stay consistent with the current provenance:
`uuid`, `source_id`, `data_source`, `source_url`, `license_name`, `copyright_owner`, `license_link`, `title`, `bibliographicCitation`
Row-count assertion passed: the provenance matches the catalog's 233,055,986 rows exactly.
Construction
- 211,152,120 rows preserved as-is from the HF main snapshot (214M rows minus ~2.8M dropped UUIDs no longer in the new catalog)
- 21,903,866 net-new rows derived:
  - 21,901,107 gbif: `license_link` from per-HDF5 metadata, `license_name` via HF lookup -> CC URL regex -> "other" fallback, `copyright_owner` from `occurrence.rightsHolder` (fallback "not provided"), `title` = "not provided", `bibliographicCitation` from GBIF occurrence (else NULL)
  - 2,759 eol: `license_name` from eol per-HDF5 metadata, `license_link` via HF reverse-lookup, `copyright_owner` set to "not provided", `title` set to "not provided", `bibliographicCitation` = NULL
For the 21,901,107 gbif rows, `title` and `copyright_owner` have 0 nulls anywhere, matching the current provenance's literal "not provided" convention.
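The `license_name` fallback chain described above for the gbif rows (HF lookup -> CC URL regex -> "other") could be sketched like this. The lookup table and the regex are illustrative assumptions, not the actual mapping code.

```python
import re

# Matches Creative Commons license URLs such as
# creativecommons.org/licenses/by-nc/4.0/ or /publicdomain/zero/1.0/.
CC_URL_RE = re.compile(
    r"creativecommons\.org/(licenses|publicdomain)/([a-z\-]+)/(\d\.\d)"
)

def resolve_license_name(license_link: str, hf_lookup: dict) -> str:
    # 1. Exact match against an existing HF license_link -> license_name map.
    if license_link in hf_lookup:
        return hf_lookup[license_link]
    # 2. Fall back to parsing a Creative Commons URL.
    m = CC_URL_RE.search(license_link)
    if m:
        kind, version = m.group(2), m.group(3)
        return "CC0" if kind == "zero" else f"CC-{kind.upper()}-{version}"
    # 3. Final fallback mirrors the "other" convention.
    return "other"
```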
Worth Noticing
`title` is the literal "not provided" for 100% of rows in both the existing and new provenance.
egrace479 changed pull request title from 240M_v1.1 to BioCLIP 2.5 Huge Training Data Update
In a new PR, we will add the embeddings:
- another copy of the existing text embeddings, with `species` replaced with `bioclip-2`
- the BioCLIP 2.5 Huge text embeddings with `bioclip-2.5-vith14`
In the end, we'll have `embeddings/txt_emb_{model_name}.npy` and `embeddings/txt_emb_{model_name}.json` for each model, as described here.
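Reading one of those paired embedding files could look like the sketch below; the function name and the shape/alignment assumptions are mine, only the `embeddings/txt_emb_{model_name}.npy` / `.json` naming comes from the PR.

```python
import json
import numpy as np

def load_text_embeddings(model_name: str, root: str = "embeddings"):
    # Dense matrix of text embeddings, one row per entry.
    emb = np.load(f"{root}/txt_emb_{model_name}.npy")
    # Companion JSON mapping row index -> label metadata (assumed layout).
    with open(f"{root}/txt_emb_{model_name}.json") as f:
        names = json.load(f)
    assert emb.shape[0] == len(names), "rows must align with label entries"
    return emb, names
```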
egrace479 changed pull request title from BioCLIP 2.5 Huge Training Data Update to BioCLIP 2.5 Huge Training Data Update (catalog & provenance)