vLLM inference

#3 by orange0318

Hi inclusionAI team, does Ming-flash-omni-2.0 officially support vLLM inference now?

inclusionAI org

Hi @orange0318, thanks so much for your question. Not yet at the moment; we are working internally on some fundamentals and will engage with the community to make sure we can bring the best experience to our users. Stay tuned!
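
For anyone following along, this is a minimal sketch of what standard vLLM offline inference typically looks like once a model is supported. The repo id `inclusionAI/Ming-flash-omni-2.0` is an assumption based on this thread, and per the reply above this will not actually run until official support lands:

```python
# Hypothetical sketch of standard vLLM offline inference.
# NOTE: Ming-flash-omni-2.0 is not yet supported by vLLM (see reply above),
# and the repo id below is assumed, not confirmed.
from vllm import LLM, SamplingParams

llm = LLM(
    model="inclusionAI/Ming-flash-omni-2.0",  # assumed HF repo id
    trust_remote_code=True,                   # likely needed for custom model code
)

params = SamplingParams(temperature=0.7, max_tokens=256)
outputs = llm.generate(["Briefly describe multimodal inference."], params)

for out in outputs:
    print(out.outputs[0].text)
```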

Hi @RichardBrian, any hope of helping out with llama.cpp support, even if it's text/image-only at first?