• Ensign_Crab@lemmy.world · ↑1 · 50 minutes ago

    I’ve seen this movie. When you force an AI to lie, it starts imagining faults in the AE-35 unit and things go downhill from there.

  • Chaotic Entropy@feddit.uk · ↑1 · 1 hour ago (edited)

    Neat… AI must now become the personal propagandist for the ruling sovereign. I’m just glad all the tech overlords didn’t line up to kiss the ring at their earliest possible opportu… ah.

  • Blackmist@feddit.uk · ↑1 · 1 hour ago

    If that’s what it takes to get rid of AI bullshit being forced onto us then so be it.

  • abbiistabbii@lemmy.blahaj.zone · ↑28 · 1 day ago

    Hey, uhhhhh, what the fuck. They basically just issued an EO saying that LLMs must toe the party line on all things…which I believe the First Amendment invalidates, but I don’t think Trump believes in the Constitution.

      • modus@lemmy.world · ↑11 · 8 hours ago

        He doesn’t even know if he’s required to uphold it. Yes, he said that in an interview.

        • Klear@lemmy.world · ↑3 · 1 hour ago

          To be fair, when he says something it’s usually bullshit. It’s quite possible he knows he’s not required to uphold it because nobody will hold him accountable for anything.

  • IllNess@infosec.pub · ↑34 · 1 day ago

    You hear that, everyone? So for AI to reject your content, just put in facts about systemic racism, and now you have an AI blocker.

    • hornedfiend@sopuli.xyz · ↑1 · 1 hour ago

      Do you think human placeholders understand the term? They just flaunt it around like bling, but even bling has more value to them.

    • dhork@lemmy.world · ↑131 · 2 days ago (edited)

      How so? You can say whatever you want in America today, as long as the President would agree. How is that not FreedomTM?

      (“Freedom” is a registered trademark of the Trump Organization)

      • Trapped In America@lemmy.dbzer0.com · ↑25 ↓1 · 2 days ago (edited)

        You’re thinking of FreeDUMB, where you’re allowed to believe whatever you want. But Trump has to approve the position first (as stated), and it has to be the opposite of whatever the data/proof clearly shows. It’s like the Wish.com version of FreedomTM, but geared more toward Tea Party Gun Nuts and Libertarian Potheads.

        Edit: Huge caveat I forgot about. Joe Rogan also has the ability to dictate FreeDUMB positions, so long as the guest making the claim (1) has no degrees and (2) is being suppressed by The Establishment. Also, Jamie has to be able to Google some random website that agrees with them in under 25 seconds.

      • Appoxo@lemmy.dbzer0.com · ↑8 · 2 days ago

        FYI: You can use superscript by using the ^ symbol.
        Like this: Free Speech^TM

        Or just use this symbol: ™

      • ohshit604@sh.itjust.works · ↑4 · 1 day ago

        If your reverse proxy is Traefik, I would suggest This plugin, which pulls the robots.txt from This GitHub repository.

        I honestly should’ve set up a robots.txt a long time ago.

        • xthexder@l.sw0.com · ↑5 · 1 day ago

          Unfortunately, robots.txt only stops the well-behaved scrapers. Even with a disallow-all, you’ll still get loads of bots. Setting up the web server to block those user agents would work a bit better, but even then there are bots out there crawling with regular browser user agents.
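The distinction above can be sketched with Python's standard-library robots.txt parser (the bot names and URL are placeholders): the file is purely advisory, so compliance is entirely opt-in on the crawler's side.

```python
# Minimal sketch: robots.txt only stops crawlers that choose to
# parse and honor it. Standard library only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks before fetching and backs off:
print(parser.can_fetch("GPTBot", "https://example.com/post/1"))  # False

# A rude scraper simply never runs this check -- and one spoofing a
# browser user agent is indistinguishable from a human visitor here.
print(parser.can_fetch("Mozilla/5.0", "https://example.com/"))  # False
```

Nothing in the protocol enforces the "False"; it relies on the scraper's good faith, which is exactly the gap the comment describes.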

  • blattrules@lemmy.world · ↑83 ↓1 · 2 days ago

    I thought he was going to deregulate AI; this seems like regulation to me. Add it to the mountain of lies.

  • nthavoc@lemmy.today · ↑35 · 2 days ago

    This dude is just throwing all kinds of ninja smoke on the floor trying to get out from under that Epstein File case. He’s already “DOGE’D” everything he just said out of existence. He basically wants Grok everywhere, but without saying it, because he doesn’t want to show Elon he still loves him. But he can’t enforce this beyond his own government, which he is still crippling at the moment. That White House page is like a giant tweet poster for Orange Chicken Little and serves as a distraction.

  • brvslvrnst@lemmy.ml · ↑47 ↓1 · 2 days ago

    For example, one major AI model changed the race or sex of historical figures — including the Pope, the Founding Fathers, and Vikings — when prompted for images because it was trained to prioritize DEI requirements at the cost of accuracy. Another AI model refused to produce images celebrating the achievements of white people, even while complying with the same request for people of other races.

    Ahhh, so white men are the victims of woke AI, got it.

    • Genius@lemmy.zip · ↑5 · 1 day ago

      If I were asked to list the achievements of white people and I had to do it, I’d be real catty about it. “White people invented sitcoms, gas chambers, and the word ‘staycation’”

    • floofloof@lemmy.ca · ↑112 ↓1 · 2 days ago (edited)

      Watch all the AI companies scramble to comply in a quest for government contracts. This will affect everyone who uses American LLMs and generative AI.

      It should also open an opportunity for international competition from less censored models.

      • bigfondue@lemmy.world · ↑49 · 2 days ago (edited)

        And this is one of the best arguments against depending on LLMs. People are outsourcing their thinking to linear algebra machines owned by the wealthy. LLMs are a tool of social control.

      • Tony Bark@pawb.social · ↑18 ↓1 · 2 days ago

        Considering how much they bleed cash regularly, I can see them jumping on the government contract bandwagon quickly.

      • leftytighty@slrpnk.net · ↑6 · 2 days ago

        To be fair to the executive order (ugh), many of the examples cited are due to well-intentioned system prompts that encourage the LLM to actively be diverse.

        The female pope thing (read about this earlier) is an example of that.

        Generally speaking, the LLMs have a left bias because they’re trained on information (unlike conservatives), but they aren’t necessarily asking the models to be censored.

    • mic_check_one_two@lemmy.dbzer0.com · ↑12 ↓2 · 2 days ago

      Because Executive Orders aren’t laws. They’re just guidelines for the executive branch of the federal government, which the POTUS is in charge of. It can’t affect private entities like AI businesses, because that would require an actual act of Congress.

      Notably, this could determine what kinds of contracts the executive branch is able to make. For instance, maybe the government wants to contract out an LLM instead of building its own. This EO could affect which companies are able to bid on that contract, by adding these same restrictions to any LLM that they provide. But on its own, the EO is just that: an order to the executive branch of the federal government.

    • panda_abyss@lemmy.ca · ↑16 · 2 days ago

      But for anything the US feds contracted them for, like building data centres, they have to comply or they face penalties and have to pay all the costs back.

      Ten days ago, a week before this was announced, they awarded $200M contracts each to Anthropic, OpenAI, Google, and xAI.

      This doesn’t doom the public versions, but the companies now have a pretty strong incentive to save money and make them comply with the US government’s new definition of truth.

    • forrgott@lemmy.sdf.org · ↑11 ↓1 · 2 days ago

      Well, in practice, no.

      Do you think any corporation is going to bother making a separate model for government contracts versus any other use? I mean, why would they? So unless you can pony up enough cash to compete with a lucrative government contract (and the fact that none of us can is, in fact, the whole point), the end result will involve these requirements being adopted by the overwhelming majority of generative AI available on the market.

      So in reality, no, this absolutely will not be limited to models purchased by the feds. Frankly, I believe choosing to think otherwise to be dangerously naive.

      • MrMcGasion@lemmy.world · ↑7 · 2 days ago

        Based on the attempts we’ve seen at censoring AI output so far, there doesn’t seem to me to be a way to actually do this without building a new model with pre-censored training data.

        Sure they can tune models, but even “MechaHitler” Grok was still giving some “woke” answers on occasion. I don’t see how this doesn’t either destroy AI’s “usefulness” (not that there’s any usefulness there to begin with) or cost so much to implement that investors pull out because none of the AI companies are profitable, and throwing billions more to sift through and filter the training data pushes profitability even further away (if censoring all the training data is even possible at all).

      • itsame@lemmy.world · ↑1 · 2 days ago (edited)

        No. You would use a base model (e.g. GPT-4o) as a reliable language model, to which you would add a set of rules that the chatbot follows. Every company has its own rules; this is already widely in use to add data like company-specific manuals and support documents. Not rocket science at all.
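That layering can be sketched as follows. This is a hypothetical illustration: the company, rule text, and function name are invented, and the message format assumes the common chat-completions convention of a "system" message prepended to the user's message.

```python
# Hypothetical sketch of the rules-on-top-of-a-base-model pattern:
# the base model stays fixed; only the prepended system prompt
# (and any attached documents) varies per company.
def build_request(user_message: str) -> list[dict]:
    system_rules = (
        "You are AcmeCorp's support assistant. "
        "Answer only from the provided product manuals. "
        "Politely decline questions outside AcmeCorp products."
    )
    return [
        {"role": "system", "content": system_rules},
        {"role": "user", "content": user_message},
    ]

# Swapping companies means swapping only the system_rules string;
# the underlying model weights are untouched.
messages = build_request("How do I reset my router?")
```

Whether such prompt-level rules actually hold up under adversarial prompting is exactly what the replies below dispute.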

        • forrgott@lemmy.sdf.org · ↑1 ↓1 · 2 days ago

          There are so many examples of this method failing that I don’t even know where to start. Most visibly, of course, that approach failed to stop Grok from “being woke” for, like, a year or more.

          Frankly, you sound like you’re talking straight out of your ass.

            • itsame@lemmy.world · ↑1 · 2 days ago

            Sure, it can go wrong, it is not fool-proof. Just like building a new model can cause unwanted surprises.

              BTW, there are many theories about Grok’s unethical behavior, but this one is new to me. The reasons I was familiar with are: unfiltered training data, no ethical output restrictions, programming errors or incorrect system maintenance, strategic errors (Elon!), and publishing before proper testing.

    • dontmindmehere@lemmy.world · ↑2 · 2 days ago

      Honestly this order seems empty. Does the government even have a need for general LLMs? Why would they need an AI to answer simple questions?

      As much as I dislike Trump, this shouldn’t impact any AI available to the general public.