In fairness, unless you have about 800GB of VRAM/HBM, you're not running the real DeepSeek yet. The smaller models are Llama or Qwen models distilled from DeepSeek-R1.
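The ~800GB figure roughly checks out with a back-of-envelope calculation: DeepSeek-R1 has 671B parameters released in FP8 (one byte per parameter), so the weights alone need ~671GB before you account for KV cache and activations. A quick sketch:

```python
# Rough memory estimate for serving DeepSeek-R1 (671B parameters, FP8 weights)
params = 671e9
bytes_per_param = 1  # FP8 = 1 byte per weight
weights_gb = params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")
# KV cache and activation overhead for long contexts pushes the
# practical total toward the ~800 GB of VRAM/HBM mentioned above
```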
I’m really hoping DeepSeek releases smaller models that I can fit on a 16GB GPU and try at home.
Well, honestly: I have this kind of computational power at my university, and we are in dire need of a locally hosted LLM for a project, so at least for me as a researcher, it’s really cool to have that.
Can I download their model and run it on my own hardware? No? Then they’re inferior to DeepSeek.