Does this mean we will get GGUF quants of models as they release, or at least out-of-the-box GGUF support for new models in the future?
rombodawg
AI & ML interests
My patreon:
https://www.patreon.com/c/Rombodawg
My Twitter:
https://x.com/dudeman6790
Recent Activity
- openai/privacy-filter: "Classic OpenAI: Overhyped and censored by OpenAI" (new activity 2 days ago)
- openai/privacy-filter: "Classic Open-AI" (new activity 3 days ago)
- tecaprovn/deepseek-v4-flash-gguf: "why artificially raise the model size?" (new activity 4 days ago)