x-polyglot-x
AI & ML interests
None yet
Recent Activity
- new activity 3 days ago in ubergarm/Kimi-K2.5-GGUF: "RPC within ik_llama"
- new activity 3 days ago in unsloth/Qwen3.5-397B-A17B-GGUF: "Qwen3.5 GGUF Evaluation Results"
- new activity 14 days ago in ubergarm/Qwen3.5-397B-A17B-GGUF: "Compatible with RPC?"

Organizations
None yet
RPC within ik_llama
1 comment · #6 opened 3 days ago by x-polyglot-x
Qwen3.5 GGUF Evaluation Results
6 reactions · 9 comments · #9 opened 26 days ago by danielhanchen
Compatible with RPC?
3 comments · #11 opened 14 days ago by x-polyglot-x
Unable to use mmap() on this model?
#8 opened 18 days ago by x-polyglot-x
Minimum RAM required to run this model
1 reaction · 2 comments · #1 opened about 1 month ago by Arete7
Never mind the benchmarks, MiniMax M2.1 outshines GLM 4.7
2 reactions · 4 comments · #11 opened 2 months ago by aaron-newsome
Integrating prior conversation content (plus best general settings)
#18 opened 4 months ago by x-polyglot-x
Need assistance in running on Mac
3 comments · #16 opened 4 months ago by x-polyglot-x
ValueError: Model type kimi_linear not supported.
1 reaction · 3 comments · #1 opened 4 months ago by x-polyglot-x
A little confused on memory usage (vLLM newbie)
4 comments · #12 opened 5 months ago by x-polyglot-x
Q8_0 and UD-Q8_K_XL missing parts?
4 comments · #1 opened 5 months ago by superciliousdude
where are your lower quants?
7 comments · #1 opened 6 months ago by jc2375
Is it the same architecture than GLM 4.5 ?
2 reactions · 5 comments · #3 opened 6 months ago by AliceThirty
Fingers crossed for the 4.6-air
6 reactions · 14 comments · #1 opened 6 months ago by aaron-newsome
Trouble running Q5_K_M With Llama.cpp
6 comments · #3 opened 8 months ago by simusid
any hope for running on 256gb ram and 12gb vram ?
8 comments · #3 opened 8 months ago by gopi87
Error - model contains "custom code"
#1 opened 8 months ago by x-polyglot-x
Unknown Architecture 'hunyuan-moe' (using --jinja with llama-server)
5 comments · #4 opened 8 months ago by x-polyglot-x
MLX Convert Error
4 comments · #8 opened 9 months ago by baggaindia
Request for 8-bit version
2 reactions · #2 opened 9 months ago by x-polyglot-x