Originality.AI looked at 8,885 long Facebook posts made over the past six years.
Key Findings
- 41.18% of current Facebook long-form posts are Likely AI, as of November 2024.
- Between 2023 and November 2024, the average percentage of monthly AI posts on Facebook was 24.05%.
- This reflects a 4.3x increase in monthly AI Facebook content since the launch of ChatGPT. In comparison, the monthly average was 5.34% from 2018 to 2022.
This is a pretty sweet ad for https://originality.ai/ai-checker
They don’t talk much about their secret sauce. That 40% figure is based on “trust me bro, our tool is really good”. Would have been nice to be able to verify this figure / use the technique elsewhere.
It’s pretty tiring to keep seeing ads masquerading as research.
FB has been junk for more than a decade now, AI or no.
I check mine every few weeks because I’m a sports announcer and it’s one way people get in contact with me, but it’s clear that FB designs its feed to piss me off and try to keep me doomscrolling, and I’m not a fan of having my day derailed.
I deleted Facebook around 2010 because I hardly ever used it anyway; it wasn’t really bad back then, just not for me. Six or so years later a friend of mine wanted to show me something on FB but couldn’t find it, so he was just scrolling, and I was blown away by how bad it was: just ads and auto-played videos and absolute garbage. From what I understand, it’s only gotten worse since. Everyone I know who still uses Facebook is there for Marketplace.
My brother gave me his Facebook credentials so I could use marketplace without bothering him all the time. He’s been a liberal left-winger all his life but for the past few years he’s taken to ranting about how awful Democrats are (“Genocide Joe” etc.) while mocking people who believe that there’s a connection between Trump and Putin. Sure enough, his Facebook is filled with posts about how awful Democrats are and how there’s no connection between Trump and Putin - like, that’s literally all that’s on there. I’ve tried to get him to see that his worldview is entirely created by Facebook but he just won’t accept it. He thinks that FB is some sort of objective collator of news.
In my mind, this is really what sets social media apart from past mechanisms of social control. In the days of mass media, the propaganda was necessarily a one-size-fits-all sort of thing. Now, the pipeline of bullshit can be custom-tailored for each individual. So my brother, who would never support Trump and the Republicans, can nevertheless be fed a line of bullshit that he will accept and help Trump by not voting (he actually voted Green).
Good on him for not falling for the MAGA bulldust and trying for the third option
This kind of just looks like an ad for that company’s AI detection software, NGL.
I wouldn’t be surprised, but I’d be interested to see what they used to make that determination. All of the AI detectors I know of are prone to a lot of false positives.
When I was looking for a job, I ran into a guide to make money using AI:
- Choose a top-selling book.
- Ask ChatGPT to give a summary of each chapter.
- Paste the summaries into Google Docs.
- Export as PDF.
- Sell it on Amazon as a digital “short version” or “study guide” for the original book.
- Repeat with other books.
Blew my mind how much hot stinking garbage is out there.
These people should be shot. With large spoons. Because it’ll hurt more.
deleted by creator
and, is the jury already in on which ai is most fuckable?
I’d tell you, but my area network appears to have already started blocking DeepSeek.
DeepSeek, which was not encrypting data:
https://www.theregister.com/2025/01/30/deepseek_database_left_open/
According to Wiz, DeepSeek promptly fixed the issue when informed about it.
:-/
The bigger problem is AI “ignorance,” and it’s not just Facebook. I’ve reported more than one Lemmy post the user naively sourced from ChatGPT or Gemini and took as fact.
No one understands how LLMs work, not even on a basic level. Can’t blame them, seeing how they’re shoved down everyone’s throats as opaque products, or straight up social experiments like Facebook.
…Are we all screwed? Is the future a trippy information wasteland? All this seems to be getting worse and worse, and everyone in charge is pouring gasoline on it.
No one understands how LLMs work, not even on a basic level.
Well that’s just false.
You know what I meant, by no one I mean “a large majority of users.”
I did not know that. There’s a bunch of news articles going around claiming that even the creators of the models don’t understand them and that they are some sort of unfathomable magic black box. I assumed you were propagating that myth, but I was clearly mistaken.
In the last month it has become a barrage. The algorithms also seem to be in overdrive. If I like something I get bombarded with more stuff like that within a day. I’d say 90% of my feed is shit that has nothing to do with anyone I know.
If it wasn’t a way to stay in touch with family and friends I’d bail.
I’m a big fan of a particular virtual tabletop tool called Foundry, which I use to host D&D games.
The Instagram algorithm picked this out of my cookies and fed it to Temu, which determined I must really like… lathing and spot-welding and shit. So I keep getting ads for miniature industrial equipment. At-home tools for die casting and alloying and the like. From Temu! Absolutely crazy.
Anyone on Facebook deserves to be shit on by slop. They also deserve to be scammed out of all their money and everything else.
If you’re on Facebook, you deserve this. Get the hell off Facebook.
Edit: ITT: brain-dead, fascist-apologist Facebook users who just refuse to accept that their platform is one of the biggest enablers of Nazi fascism in this country, and that they are all 100% complicit.
While I agree with your message at a high level (I quit FB several years ago), I don’t think it’s productive to be so abrasive.
It’s generally better to be respectful and convincing if you want to change minds.
If you could reliably detect “AI” using an “AI” you could also use an “AI” to make posts that the other “AI” couldn’t detect.
Sure, but then the generator AI is no longer optimised to generate whatever you wanted initially, but to generate text that fools the detector network, thus making the original generator worse at its intended job.
I see no reason why “post right-wing propaganda” and “write so you don’t sound like AI” should be conflicting goals.
The actual argument why I don’t find such results credible is that the “creator” is trained to sound like humans, so the “detector” has to be trained to find stuff that does not sound like humans. This means, both basically have to solve the same task: Decide if something sounds like a human.
To be able to find the “AI” content, the “detector” would have to be better at deciding what sounds like a human than the “creator.” So for the results to have any kind of accuracy, you’re already banking on the “detector” company having more processing power, better training data, or more money than, say, OpenAI or Google.
But also, if the “detector” was better at the job, it could be used as a better “creator” itself. Then, how would we distinguish the content it created?
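The evasion argument a few comments up can be illustrated with a toy sketch (this is purely hypothetical code, not anything the study or any real detector does): if you have query access to a detector, you can simply rejection-sample generator outputs until one slips under its threshold. The detector and generator here are trivial stand-in heuristics, not models.

```python
import random

def toy_detector(text: str) -> float:
    """Pretend 'AI probability'. Toy heuristic only: flags text that
    avoids contractions and ends with a tidy full stop."""
    score = 0.5
    if "'" not in text:
        score += 0.3  # toy signal: no contractions
    if text.endswith("."):
        score += 0.1  # toy signal: tidy punctuation
    return min(score, 1.0)

def toy_generator(rng: random.Random) -> str:
    """Stand-in for an LLM: returns one of a few candidate phrasings."""
    templates = [
        "It is important to note that the results are conclusive.",
        "honestly i don't buy it, the results aren't conclusive",
        "The results, it must be said, are conclusive.",
        "yeah the results seem legit i guess",
    ]
    return rng.choice(templates)

def evade(detector, generator, threshold=0.7, max_tries=100, seed=0):
    """Rejection-sample generator output until the detector passes it."""
    rng = random.Random(seed)
    for _ in range(max_tries):
        text = generator(rng)
        if detector(text) < threshold:
            return text
    return None

passed = evade(toy_detector, toy_generator)
```

Against a real detector the same loop works with model calls in place of the heuristics, which is why any published accuracy figure only holds until generators are tuned against that detector.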
Keep in mind this is for AI generated TEXT, not the images everyone is talking about in this thread.
Also, they used an automated tool, and all such tools have very high error rates, because detecting AI text is a fundamentally impossible task.
AI does give itself away over “longer” posts, and if the tool produces roughly equal numbers of false positives and false negatives, it should even itself out in the long run. (I’d have liked more than 9K “tests” for it to average out, but even so.) If they had the edit history for the post, which they didn’t, then it’s more obvious: AI will either copy-paste the whole thing in in one go, or generate a word at a time at a fairly constant rate. Humans will stop and think, go back and edit things, all of that.
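The timing heuristic described above (one giant paste, or suspiciously regular word-by-word output, versus bursty human editing) could be sketched like this. This is a hypothetical illustration of the commenter's idea, with made-up thresholds, not the study's method:

```python
from statistics import mean, pstdev

def classify_session(events):
    """events: list of (timestamp_seconds, chars_added) edit events.

    Toy heuristic: a single large paste or near-constant inter-event
    gaps look machine-like; highly variable gaps (pauses, revisions)
    look human-like. Thresholds (200 chars, CV < 0.3) are arbitrary.
    """
    if len(events) == 1 and events[0][1] > 200:
        return "single-paste"
    gaps = [b[0] - a[0] for a, b in zip(events, events[1:])]
    if not gaps:
        return "human-like"
    # Coefficient of variation of the gaps: humans stop to think and
    # go back to edit, so their timing varies far more than a
    # steady token-by-token stream does.
    cv = pstdev(gaps) / mean(gaps)
    return "constant-rate" if cv < 0.3 else "human-like"
```

For example, `classify_session([(0.0, 850)])` flags a one-shot paste, while a session with long pauses and quick bursts of edits comes back `"human-like"`.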
I was asked to do some job interviews recently; the tech test had such an “animated playback”, and the difference between a human doing it legitimately and someone using AI to copy-paste the answer was surprisingly obvious. The tech test questions were nothing to do with the job role at hand and were causing us to select for the wrong candidates completely, but that’s more a problem with our HR being blindly in love with AI and “technical solutions to human problems”.
“Absolute certainty” is impossible, but balance of probabilities will do if you’re just wanting an estimate like they have here.
I have no idea whether the probabilities are balanced. They claim 5% was AI even before chatgpt was released, which seems pretty off. No one was using LLMs before chatgpt went viral except for researchers.
Chatbots don’t necessarily hold a real conversation. Some just spammed links from a list of canned responses, upvoted the other chatbots to get more visibility, or simply reposted a comment from another user.
how tf did it take 6 years to analyze 8000 posts
I’m pretty sure they selected posts from a six-year period, not that they spent six years on the analysis.
In that case, how/why did they only choose 8000 posts over 6 years? Facebook probably gets more than 8000 new posts per second.
I was wondering how far I’d have to scroll before getting to someone who doesn’t understand statistics complaining about the sample size…
There’s likely been trillions of posts on Facebook during that time frame. Is a sample size of 8000 really sufficient for a corpus that large?
Have you ever heard of “margin of error”?
Learn statistics, it’s actually super informative.
8k posts sounds like 0.00014 percent of Facebook posts
It probably is, but it’s still a large sample, and if the selection is random enough, it’s likely sufficient to extrapolate some numbers. This is basically how drug testing works.
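To put numbers on the sample-size debate: for a random sample, the margin of error of an estimated proportion depends on the sample size, not on the population size, so trillions of total posts don't matter. A quick sketch using the standard normal-approximation formula (this assumes truly random sampling and ignores the detector's own error rate, which is the real weakness discussed above):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """95% confidence half-width for a sample proportion
    (normal approximation): z * sqrt(p(1-p)/n)."""
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# The study's headline figure: 41.18% of n = 8,885 sampled posts.
moe = margin_of_error(0.4118, 8885)
# roughly +/- 0.01, i.e. about one percentage point
```

So 8,885 posts pins the proportion down to about ±1 percentage point, regardless of how many posts Facebook hosts in total; the harder-to-quantify uncertainty is whether the sample was representative and how often the classifier itself is wrong.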