Yeah ok we get it, they just release the latest checkpoint of their continuously trained model whenever convenient and make big headlines out of it.
Can I download their model and run it on my own hardware? No? Then they’re inferior to DeepSeek.
In fairness, unless you have about 800 GB of VRAM/HBM you’re not running the real DeepSeek either. The smaller models are Llama and Qwen variants distilled from DeepSeek R1.
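For a rough sense of where the ~800 GB figure comes from, here’s a back-of-the-envelope sketch (assuming the full ~671B-parameter R1 weights at roughly FP8, before KV cache and activation overhead; the exact number depends on quantization and context length):

```python
# Rough memory estimate for hosting the full DeepSeek R1 weights.
# Assumptions: ~671B total parameters, ~1 byte per weight (FP8-ish).
total_params = 671e9
bytes_per_param = 1
weights_gb = total_params * bytes_per_param / 1e9
print(f"weights alone: ~{weights_gb:.0f} GB")  # ~671 GB before KV cache, buffers, etc.
```

Add KV cache and runtime overhead on top of that and you land in the ballpark people quote.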
I’m really hoping Deepseek releases smaller models that I can fit on a 16GB GPU and try at home.
Well, honestly: I have this kind of computational power at my university, and we’re in dire need of a locally hosted LLM for a project, so at least for me as a researcher it’s really, really cool to have that.
Dude, you made me laugh so much!
Someone please write a virus that deletes all knowledge from LLMs.
Deleting data from them might not be feasible, but there are other tactics.
[…] trapping AI crawlers and sending them down an “infinite maze” of static files with no exit links, where they “get stuck” and “thrash around” for months, he tells users. Once trapped, the crawlers can be fed gibberish data, aka Markov babble, which is designed to poison AI models.
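For anyone curious what such a trap looks like in practice, here’s a minimal sketch of the general idea (not the actual tool quoted above): a tiny server that answers every request with a stable-but-unique page of gibberish whose links only lead deeper into the maze.

```python
# Minimal crawler-tarpit sketch: every URL returns gibberish plus links
# that only lead to more of the maze. Illustrative only, not the quoted tool.
import hashlib
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

WORDS = ["lorem", "flux", "quarble", "zenith", "mottle", "grist", "varn", "plim"]

def babble(rng, n_words=200):
    # Stand-in for real Markov babble: statistically flat gibberish text.
    return " ".join(rng.choice(WORDS) for _ in range(n_words))

class TarpitHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Seed the RNG from the path so each URL is stable but unique:
        # revisits see the same page, yet every link leads somewhere "new".
        seed = int(hashlib.sha256(self.path.encode()).hexdigest(), 16) % 2**32
        rng = random.Random(seed)
        links = "".join(
            f'<a href="/maze/{rng.getrandbits(64):016x}">more</a> '
            for _ in range(5)
        )
        body = f"<html><body><p>{babble(rng)}</p>{links}</body></html>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(body.encode())

    def log_message(self, *args):
        pass  # keep the console quiet

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), TarpitHandler).serve_forever()
```

Seeding from the path means there’s no state to store, the maze looks endless, and swapping `babble()` for an actual Markov chain trained on scraped prose would make the poisoned text harder for a crawler to filter out.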