• Shayeta@feddit.org · 23 hours ago

    It doesn’t matter if you need a human to review. AI has no way of distinguishing between success and failure, so either way a human will have to review 100% of those tasks.

    • MangoCats@feddit.it · 11 hours ago

      I have been using AI to write (little, near trivial) programs. It’s blindingly obvious that it could be feeding this code to a compiler and catching its mistakes before giving them to me, but it doesn’t… yet.
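
      Something like this loop is all it would take - a rough sketch, where `generate_code` is a hypothetical stand-in for whatever model API is being called:

      ```python
      import os
      import subprocess
      import tempfile

      def generate_code(prompt: str, feedback: str = "") -> str:
          """Hypothetical stand-in for the model call; not a real API."""
          raise NotImplementedError

      def generate_until_it_compiles(prompt: str, max_attempts: int = 3) -> str:
          """Ask for C code, feed it to gcc, and retry with the compiler's
          errors until it builds - the step the chat interface skips."""
          feedback = ""
          for _ in range(max_attempts):
              code = generate_code(prompt, feedback)
              with tempfile.NamedTemporaryFile(mode="w", suffix=".c",
                                               delete=False) as f:
                  f.write(code)
                  path = f.name
              result = subprocess.run(["gcc", "-Wall", "-o", os.devnull, path],
                                      capture_output=True, text=True)
              os.unlink(path)
              if result.returncode == 0:
                  return code  # compiles cleanly; a human still reviews it
              feedback = result.stderr  # hand the errors back to the model
          raise RuntimeError("no compilable code after several attempts")
      ```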

      • wise_pancake@lemmy.ca · 2 hours ago

        Agents do that loop pretty well now, and Claude now uses your IDE’s LSP to help it code and catch errors in flow. I think Windsurf and Cursor do that too.

        The tooling has improved a ton in the last 3 months.

    • Outbound7404@lemmy.ml · 13 hours ago

      A human can review something that’s close to correct far more easily than starting the task from zero.

      • MangoCats@feddit.it · 11 hours ago

        In university I knew a lot of students who knew all the material but “just didn’t know where to start” - if I gave them a little direction about where to start, they could run it to the finish all on their own.

        • MangoCats@feddit.it · 11 hours ago

          > harder to notice incorrect information in review, than making sure it is correct when writing it.

          That depends entirely on your writing method and attention span for review.

          Most people make stuff up off the cuff and skim anything longer than 75 words when reviewing, so the bar for AI improving over that is really low.

        • loonsun@sh.itjust.works · 11 hours ago

          Depends on the context. There is a lot of work in the scientific-methods community on using NLP to augment traditionally fully human processes, such as thematic analysis and systematic literature reviews, and protocols exist for validating those pipelines without 100% human review.
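
          One common shape for such a protocol: let the model label everything, then have humans re-label a random sample and report agreement with a margin of error. A rough sketch (the sample size, the 95% normal-approximation interval, and the `human_label` callable are illustrative assumptions):

          ```python
          import math
          import random

          def spot_check(model_labels: list[bool], human_label, n: int = 100,
                         seed: int = 0) -> tuple[float, float]:
              """Validate model output by human-reviewing a random sample
              rather than every item; returns the observed agreement rate
              and an approximate 95% margin of error."""
              rng = random.Random(seed)
              sample = rng.sample(range(len(model_labels)), n)
              agree = sum(human_label(i) == model_labels[i] for i in sample)
              p = agree / n
              margin = 1.96 * math.sqrt(p * (1 - p) / n)  # normal approximation
              return p, margin
          ```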

    • jsomae@lemmy.ml · 23 hours ago

      Right, so this is really only useful in cases where it’s vastly easier to verify an answer than to posit one, or where a conventional program can verify the result of the AI’s output.
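
      Factoring is a clean illustration of that asymmetry: producing the factors of a large number is hard, but checking a claimed pair is a single multiplication. A sketch of the pattern, with `propose_factors` as a hypothetical model call:

      ```python
      def propose_factors(n: int) -> tuple[int, int]:
          """Hypothetical model call: ask an LLM for a nontrivial factorisation."""
          raise NotImplementedError

      def factor_via_llm(n: int, attempts: int = 5) -> tuple[int, int]:
          """Finding factors is hard; verifying a claim is one multiply.
          Only answers that pass the deterministic check are returned."""
          for _ in range(attempts):
              p, q = propose_factors(n)
              if 1 < p < n and p * q == n:  # cheap, conventional verification
                  return p, q
          raise ValueError("no verified factorisation found")
      ```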

      • MangoCats@feddit.it · 11 hours ago

        It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

        I’m envisioning a world where multiple AI engines create and check each other’s work… the first thing they need to make work to support that scenario is probably fusion power.

        • zbyte64@awful.systems · 10 hours ago

          > It’s usually vastly easier to verify an answer than posit one, if you have the patience to do so.

          I usually write 3x the code to test the code itself. Verification is often harder than implementation.
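
          Even a trivial function shows the ratio - the implementation is two lines, pinning down its behavior is not (a made-up example):

          ```python
          def clamp(x: float, lo: float, hi: float) -> float:
              return max(lo, min(hi, x))

          # The tests already outweigh the implementation, and this still
          # ignores NaN policy, lo > hi handling, and type checks entirely.
          assert clamp(5, 0, 10) == 5     # in range: unchanged
          assert clamp(-1, 0, 10) == 0    # below: clipped to lo
          assert clamp(11, 0, 10) == 10   # above: clipped to hi
          assert clamp(0, 0, 10) == 0     # boundaries stay put
          assert clamp(10, 0, 10) == 10
          assert clamp(3, 3, 3) == 3      # degenerate range
          ```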

          • jsomae@lemmy.ml · 6 hours ago (edited)

            It really depends on the context. Sometimes there are domains that require solving problems in NP, but where it turns out that most of these problems are actually not hard to solve by hand with a bit of tinkering. SAT solvers might completely fail, but humans can do it. Often that means there’s a better algorithm that can exploit commonalities in the data. But a brute-force approach might just be to give the problem to an LLM and then verify its answer. Verifying NP problems is easy.

            (This is speculation.)
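
            SAT makes the asymmetry concrete: finding a satisfying assignment is the hard part, but checking a claimed one is a linear scan over the clauses. A minimal checker (DIMACS-style signed literals, toy formula):

            ```python
            # Each clause is a list of nonzero ints; negative means negated.
            def check_assignment(formula: list[list[int]],
                                 assignment: dict[int, bool]) -> bool:
                """Verify a claimed SAT solution in linear time: every clause
                must contain at least one literal the assignment makes true."""
                return all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                           for clause in formula)

            # (x1 or not x2) and (x2 or x3)
            formula = [[1, -2], [2, 3]]
            print(check_assignment(formula, {1: True, 2: False, 3: True}))  # True
            ```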

          • MangoCats@feddit.it · 10 hours ago

            Yes, but the test code “writes itself” - the path is clear, you just have to fill in the blanks.

            Writing the proper product code in the first place, that’s the valuable challenge.

            • zbyte64@awful.systems · 4 hours ago (edited)

              Maybe it’s because I started out in QA, but I have to strongly disagree. You should assume the code doesn’t work until proven otherwise, AI or not. And when it doesn’t work, I find it easier to debug your own code than someone else’s - and that includes AI’s.

              • MangoCats@feddit.it · 3 hours ago

                I’ve been in R&D forever, so at my level the question isn’t “does the code work?” - we pretty much assume that will take care of itself, eventually. Our critical question is: “is the code trying to do something valuable, or not?” We make all kinds of stuff do what the requirements call for, but so often those requirements are asking for worthless or even counterproductive things…

                • zbyte64@awful.systems · 3 hours ago (edited)

                  Literally the opposite of my experience when I helped materials scientists with their R&D: anything breaking in production meant people who get paid 2x more than me were suddenly unable to do their jobs. But then again, our requirements made sense, because we would literally walk through the manual process with the engineers before automating it. What you describe sounds like hell to me. There are greener pastures.

                  • MangoCats@feddit.it · 3 hours ago

                    Yeah, sometimes the requirements write themselves and in those cases successful execution is “on the critical path.”

                    Unfortunately, our requirements are filtered from our paying customers through an ever-rotating cast of Marketing and Sales characters who, nominally, are our direct customers, so we make product for them. But they rarely have any clear or consistent vision of what they want - they just know they want new stuff, that’s for sure.