Machforr(19:21:55)
I play with Llama 3 from time to time, and even on a 24 GB 4090, getting a story going is difficult as I seem to run out of VRAM quite quickly.
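A minimal sketch of one way to keep that in check, assuming the Hugging Face transformers + bitsandbytes stack and access to the gated meta-llama/Meta-Llama-3-8B-Instruct repo (the model id and generation settings here are assumptions, not anything from the chat): loading the weights in 4-bit leaves much more of the 24 GB free for the KV cache during long generations.

```python
# Sketch: load Llama 3 8B Instruct in 4-bit so weights + KV cache fit in 24 GB VRAM.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # assumed checkpoint (gated on the Hub)

quant = BitsAndBytesConfig(
    load_in_4bit=True,                        # NF4 weights take a few GB instead of ~16 GB in fp16
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tok = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant,
    device_map="auto",                        # place everything on the single GPU
)

prompt = "Write the opening scene of a short story about a lighthouse keeper."
inputs = tok(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=400, do_sample=True, temperature=0.8)
print(tok.decode(out[0], skip_special_tokens=True))
```

Capping the context length (or trimming old story text from the prompt) also helps, since the KV cache grows with every token kept in context.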
zoi(19:18:33)
I picked up a 24 GB 7900 XTX today, but I'll need a beefier PSU. I'll get one delivered tomorrow.
zoi(19:17:54)
Gemma 3 is just a general-purpose natural language model, aimed to do well on a single GPU.
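A minimal sketch of running one of the smaller Gemma 3 instruction-tuned checkpoints on a single GPU, assuming a recent transformers release with Gemma 3 support and access to the gated google/gemma-3-1b-it repo (the checkpoint choice and prompt are assumptions for illustration):

```python
# Sketch: run a small Gemma 3 checkpoint in bf16 on one GPU via the transformers pipeline.
import torch
from transformers import pipeline

generate = pipeline(
    "text-generation",
    model="google/gemma-3-1b-it",   # assumed small instruction-tuned checkpoint
    torch_dtype=torch.bfloat16,     # bf16 keeps the 1B model well within a single card's VRAM
    device_map="auto",
)

prompt = "In one paragraph, explain why smaller models run comfortably on a single GPU."
print(generate(prompt, max_new_tokens=200)[0]["generated_text"])
```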