Pretty freaky article, and it doesn’t surprise me that chatbots could have this effect on some people more vulnerable to this sort of delusional thinking.

I also thought it was very interesting that even a subreddit full of die-hard AI evangelists (many of whom already have a religious-esque view of AI) would notice and identify a problem with this behavior.

  • @givesomefucks@lemmy.world

    The paper describes a failure mode with LLMs due to something during inference, meaning when the AI is actively “reasoning” or making predictions, as opposed to an issue in the training data. Drake told me he discovered the issue while working with ChatGPT on a project. In an attempt to preserve the context of a conversation with ChatGPT after reaching the conversation length limit, he used the transcript of that conversation as a “project-level instruction” for another interaction. In the paper, Drake says that in one instance, this caused ChatGPT to slow down or freeze, and that in another case “it began to demonstrate increasing symptoms of fixation and an inability to successfully discuss anything without somehow relating it to this topic [the previous conversation].”

    They don’t understand why the limit is there…

    It doesn’t have the working memory to work through a long conversation. By finding a loophole to load the old conversation and continue, it either outright breaks and freezes, or it falls into pseudo-religious mumbo jumbo as a way to respond with something…
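    The mechanism described here — a fixed context window being flooded when an entire old transcript is pasted in as a standing instruction — can be sketched roughly like this. This is a toy illustration: the window size and the whitespace-based token count are placeholder assumptions, not ChatGPT’s actual values (real systems use a proper tokenizer and much more nuanced context management).

    ```python
    # Toy illustration of the context-window failure mode discussed above.
    # CONTEXT_WINDOW and count_tokens() are simplifying assumptions.

    CONTEXT_WINDOW = 8192  # tokens; actual limits vary by model


    def count_tokens(text: str) -> int:
        # Crude whitespace proxy; real systems use an actual tokenizer.
        return len(text.split())


    def room_for_reply(instructions: str, new_messages: list[str]) -> int:
        """Tokens left for the model's reply after instructions and messages."""
        used = count_tokens(instructions) + sum(count_tokens(m) for m in new_messages)
        return max(CONTEXT_WINDOW - used, 0)


    # Pasting a full old transcript as a "project-level instruction"
    # eats nearly the whole budget before the new conversation even starts:
    old_transcript = "word " * 8000  # ~8000 tokens of prior conversation
    print(room_for_reply(old_transcript, ["What should we do next?"]))  # → 187
    ```

    With the budget nearly exhausted, everything the model generates has to be squeezed into the leftover sliver, which is consistent with the freezing and fixation behavior the paper describes.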

    It’s an interesting phenomenon, but it’s hilarious that a bunch of “experts” couldn’t put two and two together to realize what the issue is.

    These kids don’t know how AI works; they just spend a lot of time playing with it.

    • CorganaOP

      Absolutely. And to be clear, the “researcher” being quoted is just a guy on the internet who self-published an official-looking “paper”.

      That said, I think that’s partly why it’s so interesting that this particular group identified the problem: they’re pretty extreme LLM devotees who already ascribe unrealistic traits to LLMs. So if even they are noticing people “taking it too seriously,” you know it must be bad.

      • @givesomefucks@lemmy.world

        They didn’t identify any problem…

        They noticed some people have worse symptoms and wrote those people off, while never second-guessing their own delusions.

        That’s not rare, either; it’s default human behavior.

        You’re being awfully hard on them for having so much in common…

        • CorganaOP

          In the article they quoted the moderator (emphasis mine):

          “This whole topic is so sad. It’s unfortunate how many mentally unwell people are attracted to the topic of AI. I can see it getting worse before it gets better. I’ve seen sooo many posts where people link to their github which is pages of rambling pre-prompt nonsense that makes their LLM behave like it’s a god or something,” the r/accelerate moderator wrote. “Our policy is to quietly ban those users and not engage with them, because we’re not qualified and it never goes well. They also tend to be a lot more irate and angry about their bans because they don’t understand it.”

          It seems pretty clear to me that they view it as a problem. Why ban something if they don’t see it as a problem?

          • @givesomefucks@lemmy.world

            It seems pretty clear to me that they view it as a problem

            Then I’m shocked you didn’t make it to the second sentence:

            They noticed some people have worse symptoms,

            Or, even worse, you did read that and just can’t see the connection between the two sentences.

            But I’ll never understand why people want to argue. You could have asked, I’d have explained it, and you’d have learned something.

            Instead you wanted a slap fight because you didn’t understand what someone said.