

Oh yes. The LLM will lie to you, confidently.
What are the local use cases? I’m running on a 3060ti but output is always inferior to the free tier of the various providers.
Can I justify an upgrade to a 4090 (or more)?
Great for turning complex into simple.
Bad for turning simple into complex.
Treat LLMs like a super knowledgeable, enthusiastic, arrogant, unimaginative intern.
But this is America
Maybe they hosted their servers in Eritrea, Turkmenistan or San Marino. No copyright laws there
Anna's Archive. Keep up. Pffff.
Agreed. Seed forever and release the AI weights and model. That would be fair payment.
The entirety of Anna's Archive would be an excellent benchmark training set, particularly as a cleaned, processed dataset.
It's easy to run a distilled version of the R1 model locally. It's very difficult to run the full version: a minimum of around $6k in hardware gets you about 7 tokens per second.
Can you link to something so I can read more about this please?
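For the "distilled R1 locally" part, here's a minimal sketch using Hugging Face transformers. The specific checkpoint (`deepseek-ai/DeepSeek-R1-Distill-Qwen-7B`), dtype, and generation settings are my assumptions, not a recommendation; adjust for whatever VRAM you actually have.

```python
# Minimal sketch: run a distilled R1 checkpoint locally with transformers.
# Assumes the deepseek-ai/DeepSeek-R1-Distill-Qwen-7B repo and enough
# GPU/CPU memory to hold a 7B model in fp16 (device_map offloads if not).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B"  # assumed model ID

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    device_map="auto",  # spread layers across GPU/CPU as needed
)

prompt = "Explain the trade-offs of a distilled model versus the full model."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The full 671B R1 is a different story, which is where the multi-thousand-dollar, single-digit tokens-per-second setups come in.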