• 0 Posts
  • 14 Comments
Joined 10 months ago
Cake day: April 30th, 2024


  • There’s zero relationship between data being unencrypted and it being sent to Chinese servers.

    If you use a Chinese service, it’s obvious that data is going to be sent to a Chinese server, and that the Chinese server would be able to read it.

    Unencrypted data transfer is a totally different thing. I would like to see whether it’s truly unencrypted or just not using Apple’s proprietary encryption.

    Luckily I don’t own any Apple products, but I have the DeepSeek app on my Android device. If I’m bored later I’ll try to intercept my own traffic to see if it’s truly unencrypted. This is easy to test. If it turns out not to be true, that newspaper is going on my “block list” asap.
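    One quick heuristic when eyeballing captured packets: a TLS record starts with a recognizable 5-byte header (content type 20–23, then a legacy version of 0x03 0x01–0x04), while plaintext HTTP starts with readable ASCII. A minimal sketch of that check (the function name and thresholds are my own, not from any capture tool):

```python
def looks_like_tls(payload: bytes) -> bool:
    """Rough check: does this captured payload start with a TLS record header?

    TLS records begin with a content type byte (20=change_cipher_spec,
    21=alert, 22=handshake, 23=application_data) followed by a legacy
    protocol version of 0x03 0x01 through 0x03 0x04.
    """
    if len(payload) < 3:
        return False
    content_type, major, minor = payload[0], payload[1], payload[2]
    return content_type in (20, 21, 22, 23) and major == 3 and 1 <= minor <= 4
```

    A TLS ClientHello capture would start with `\x16\x03\x01…` and pass this check; a plaintext `GET / HTTP/1.1` request would not. Note this only distinguishes TLS from plaintext on the wire; it says nothing about any additional app-level encryption inside the payload.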



  • I have thought a lot about this. The LLM per se would not know whether a question is answerable, as it doesn’t know whether its own output is good or bad.

    So there are various approaches to this issue:

    1. The classic approach, and the one used for censoring: keywords. When the LLM receives a certain keyword, or derives one by digesting the text input, it gives back a hard-coded answer. The problem is that while censored topics are limited, hard-to-answer questions are unlimited, so it’s hard to hard-code them all.

    2. Self check answers. For everything question the llm could process it 10 times with different seeds. Then analyze the results and see if they are equivalent. If they are not then just answer that it’s unsure about the answer. Problem: multiplication of resource usage. For some questions like the one in the post, it’s possible than the multiple randomized answers give equivalent results, so it would still have a decent failure rate.