In fairness, unless you have about 800GB of VRAM/HBM you’re not running true Deepseek yet. The smaller models are Llama or Qwen distilled from Deepseek R1.
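Rough back-of-envelope for where the ~800GB figure comes from (a sketch, not official numbers; the overhead estimate is a guess):

```python
# DeepSeek R1 / V3 has ~671B total parameters (MoE), with weights natively in FP8.
params_b = 671            # billions of parameters
bytes_per_param = 1       # FP8 -> ~1 byte per parameter
weights_gb = params_b * bytes_per_param   # ~671 GB just to hold the weights

overhead_gb = 100         # KV cache, activations, runtime buffers -- rough guess, grows with context length
print(f"~{weights_gb + overhead_gb} GB total")   # lands in the ~770-800 GB ballpark
```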
I’m really hoping Deepseek releases smaller models that I can fit on a 16GB GPU and try at home.