chatllm.cpp adds support for this model

#16
opened by J22

chatllm.cpp now supports this model.

Initial tests show that this model is strong at solving some tricky math problems.

`server ---chat :step3-vl -ngl all --max-length 10000 +detect-thoughts`
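Once the server is up, you can talk to it from another terminal. A minimal sketch, assuming the server exposes an OpenAI-style `/v1/chat/completions` endpoint on port 8080 (both the port and the endpoint path are assumptions; check the chatllm.cpp docs for the actual API):

```sh
# Hypothetical request to the running server.
# Port 8080 and the OpenAI-style endpoint path are assumptions,
# not confirmed chatllm.cpp defaults.
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "If x + 1/x = 3, what is x^3 + 1/x^3?"}]}'
```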


As a bonus: since this model can support native resolution mathematically, using only the global view, I have added an option to test this: `--set native-resolution 1`.
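For example, it can be combined with the server command above (a sketch that only recombines the flags shown in this thread; exact flag placement may differ):

```sh
# Same invocation as above, plus the native-resolution option.
server ---chat :step3-vl -ngl all --max-length 10000 +detect-thoughts --set native-resolution 1
```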

Can you make a full tutorial on how to download and use this model in chatllm.cpp? It looks like GGUF is not working on your fork.

@rosspanda0 This command will download the quantized model automatically:

`server ---chat :step3-vl   .............`
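For anyone starting from scratch, here is a minimal sketch of building chatllm.cpp first (a standard CMake flow; verify the exact steps against the chatllm.cpp README):

```sh
# Clone the chatllm.cpp repository with its submodules and build it.
# The repo URL and CMake steps follow the project's usual README flow;
# double-check them there before relying on this.
git clone --recursive https://github.com/foldl/chatllm.cpp.git
cd chatllm.cpp
cmake -B build
cmake --build build -j --config Release
```

After that, running the `server` command above downloads the quantized `:step3-vl` model on first use.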

Better docs are definitely needed for good stuff like this.
