Pretty freaky article, and it doesn’t surprise me that chatbots could have this effect on people who are more vulnerable to this sort of delusional thinking.
I also thought it was very interesting that even a subreddit full of die-hard AI evangelists (many of whom already have a religious-esque view of AI) would notice and identify a problem with this behavior.
"Based on the numbers we’re seeing on reddit, I would guess there are at least tens of thousands of users who are at this present time being convinced of these things by LLMs. As soon as the companies realise this, red team it and patch the LLMs it should stop being a problem. But it’s clear that they’re not aware of the issue enough right now.”
I like the part where you trust for-profit companies to do this on their own.