🎉 ComfyUI ready
#3
by drbaph - opened
test
I wish they would list the VRAM requirements on these pages, or at least estimates!
Minimum 4 GB VRAM for the ComfyUI wrapper.
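For rough estimates like the ones asked for above, a back-of-envelope calculation is just parameter count times bytes per parameter. A minimal sketch (weights only; it ignores activations, KV cache, and framework overhead, so real usage is higher, and the parameter count is an assumption based on the Qwen3-0.6B backbone mentioned below):

```python
# Bytes per parameter for common weight formats.
# NF4 stores a 4-bit index per weight, i.e. half a byte.
BYTES_PER_PARAM = {"fp32": 4.0, "bf16": 2.0, "fp8": 1.0, "nf4": 0.5}

def weight_vram_gb(n_params: float, dtype: str) -> float:
    """Estimated weight memory in GB (1 GB = 1024**3 bytes)."""
    return n_params * BYTES_PER_PARAM[dtype] / 1024**3

# A 0.6B-parameter backbone such as Qwen3-0.6B:
for dtype in ("bf16", "fp8", "nf4"):
    print(f"{dtype}: {weight_vram_gb(0.6e9, dtype):.2f} GB")
# → bf16: 1.12 GB, fp8: 0.56 GB, nf4: 0.28 GB
```

This is why dropping the LM backbone from BF16 to FP8 or NF4 shaves off a meaningful fraction of a small model's footprint.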
How are you running inference in your wrapper?
I have it running in Docker now, with only 3.6 GB of VRAM used after the first inference.
I am also trying to bring it down further by using an FP8 or FP4 Qwen3-0.6B as the backbone.
I just published my repo for this.
I now get under 2.6 GB of VRAM during inference by using NF4 for the LM model and BF16 for the TTS:
https://github.com/Wladastic/omnivoice-tts-nano-webui
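To see why NF4 gets so much further than BF16: 4-bit quantization replaces each weight with a 4-bit index into a 16-entry codebook, so two weights fit in one byte instead of two bytes each. A toy illustration in plain Python (the uniform codebook here is a simplification; real NF4, e.g. in bitsandbytes, uses a normal-distribution-shaped table with per-block absmax scaling):

```python
def quantize_4bit(weights):
    """Map each weight to the nearest of 16 codebook levels.

    Returns (indices, codebook): one 4-bit index per weight.
    Simplified uniform codebook, not the real NF4 table.
    """
    amax = max(abs(w) for w in weights) or 1.0
    # 16 uniformly spaced levels spanning [-amax, amax]
    codebook = [-amax + 2 * amax * i / 15 for i in range(16)]
    indices = [min(range(16), key=lambda i: abs(codebook[i] - w))
               for w in weights]
    return indices, codebook

def dequantize(indices, codebook):
    """Recover approximate weights from their 4-bit indices."""
    return [codebook[i] for i in indices]

w = [0.31, -0.87, 0.02, 0.55]
idx, cb = quantize_4bit(w)
approx = dequantize(idx, cb)
# Each index fits in 4 bits; reconstruction error is at most half a
# codebook step, which is what the quality/memory trade-off buys.
```

The LM backbone tolerates this precision loss reasonably well, while the TTS decoder is kept in BF16 where audio quality is more sensitive, which matches the split described above.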
I am trying to squeeze more out of it.