Large Language Models and other affiliated algorithms are not AI and no amount of marketing will convince me otherwise. As a result I refuse to call them AI when talking to people about them.
Thanks, been arguing this for ages.
How would you differentiate your understanding of what AI is from LLMs?
Something with a mind. The term floating around now is “general artificial intelligence.” My primary objection is that a giant pile of poorly understood machine learning trained on garbage scraped from social media bears no resemblance to a thinking mind and calling it “AI” makes the term practically useless. Where do we draw the line between a complex algorithm and an “AI?” What makes it an “AI” vs. a simple algorithm?
As someone with published papers about machine learning: LLMs are artificially intelligent systems, at least according to the agreed-upon industry and academic definitions. I don’t really care about your headcanon definition. I just want to be clear for anyone else who comes across this comment and doesn’t know otherwise.
What do you say about LLMs being better at diagnosing diseases than real doctors? It may not be intelligence, but it’s more than simply regurgitating information.
You should know that the article that headline comes from glosses over the multiple-choice nature of the data.
ChatGPT didn’t perform an examination and arrive at a diagnosis; it answered multiple-choice questions correctly more often than MDs did.