

most people have absolutely no idea how to ‘run it through an ai voice program’ … yet
your local public library (if you're in the US) should offer free language courses online - all you need is a library card
I agree on both counts - honestly, a lot of companies are probably just posting job openings (that will purposefully remain vacant) with titles like AI Prompt Engineer and SEO Specialist to help boost shareholder confidence. I think I'm just fighting against the idea that LLMs should be used like a search engine - I know you didn't suggest that, but I've been reading a lot recently about 'ChatGPT lies!' when in reality people are wrongly using a pattern recognition system like it's a search engine.
screw NewYorkLife, but LLMs are definitely not bullshit technology. Some amount of skill in so-called 'prompt engineering' makes a huge difference in using LLMs as the tool that they are. I think the big mistake people are making is using them like a search engine. I use them all the time (in a scientific field) but never in a capacity where they can 'lie' to me. They're a very effective 'assistant' in both [simple] coding tasks and data analysis/management.
exactly this - SEO (search engine optimization) is huge, just like "prompt engineering" is extremely valuable - and it's quite different from SEO. I wouldn't think either is a full-time position, but learning to effectively prompt and use LLMs is definitely a skill.
No link to a source in the article. I searched for it and found a CNBC article that posts a link to the "report" from the South Korean Data Protection Authority, but it's a dead link. I don't read Korean, so it's difficult to search any further, but I found no evidence that the South Korean authority made this claim.
Through the Voyager phone app I sometimes use the 'random community' search option to make Lemmy like StumbleUpon. I don't see this random option on desktop though… maybe it's a Voyager thing
it does look like spam - but it is interesting! Especially the linked article under the 'native ai-memory' tab (https://arxiv.org/abs/2406.18312). This concept of short-term and long-term USEFUL memory for LLMs is super interesting. I'd definitely be apprehensive about using a third party for something like this though. This is something where you want to self-host every aspect of the tool.