Victor C.
dehnhaide
AI & ML interests: None yet

Recent Activity
- new activity 2 days ago on AesSedai/MiMo-V2.5-GGUF: "Works great with your PR 22493!"
- liked a model 2 days ago: AesSedai/MiMo-V2.5-GGUF
- liked a model 10 days ago: ubergarm/Qwen3.6-27B-GGUF

Organizations: None yet
Works great with your PR 22493!
❤️ 2 · 10 · #1 opened 5 days ago by Mdubbya

Excellent 5.00 bps quant
#2 opened 17 days ago by dehnhaide

Hoping for your magic on MiniMax-M2.7-FP8-INT4-AWQ quant
17 · #2 opened 22 days ago by dehnhaide

Testing Q5 flavors (ubergarm / aessedai / unsloth) for "speed" on 8x RTX 3090
🔥 1 · 2 · #10 opened 18 days ago by dehnhaide
UD-Q4_K_XL of MiniMax-M2.7-GGUF is BROKEN
8 · #5 opened 21 days ago by dehnhaide
An upgrade in quality and a mixed bag of ... 8x RTX3090
1 · #3 opened 19 days ago by dehnhaide

GLM 5.1 vs GLM 5 - burns A LOT of output tokens on thinking
👍 1 · 35 · #6 opened 23 days ago by curiouspp8

Question about benchmark results
🔥🚀 2 · 8 · #5 opened 2 months ago by tarruda

Need a 5% and 15% REAP
7 · #10 opened 22 days ago by nawoalanor

GGUF quants available — all sizes Q2_K through Q8_0 + BF16
🚀❤️ 1 · 2 · #8 opened 22 days ago by dennny123

I'm on waiting list for AesSedai Minimax M2.7 Q4_K_M or similar...
❤️ 1 · 4 · #3 opened about 2 months ago by MartinPatterson

Testing smol-IQ5_KS
4 · #13 opened 24 days ago by shewin

Model seems to have issues in vLLM (characters duplication)
🔥 1 · 8 · #15 opened about 1 month ago by dehnhaide

Check out Thireus GGUF-Tool-Suite quants!
❤️🤗 5 · 8 · #13 opened about 1 month ago by ubergarm

Quant HAS issues + results with vLLM on 8x 3090
4 · #1 opened about 1 month ago by dehnhaide
accuracy
26 · #4 opened 3 months ago by ktsaou

Offloading layers is not working for me
9 · #9 opened about 1 month ago by tnuvkeg

AesSedai/Kimi-K2.5-GGUF using the Q4_X on 8 RTX 3090
🚀 1 · 12 · #7 opened about 1 month ago by martossien
The RTX 3090 works very well, thanks!
👍❤️ 4 · 3 · #2 opened about 2 months ago by summerbuild
Yet another excellent quant suite from AesSedai
#1 opened about 1 month ago by dehnhaide