This is a great use of tech. That said, I find the lines are blurred between “AI” and machine learning.
Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying, “Here’s a picture of some precancerous masses (maybe)”.
That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.
I’ve been looking at the paper; a few notes:
- the paper and article are from 2021
- the model needs to be able to use optional data such as age, family history, etc., without being reliant on it
- it needs to combine information from multiple views
- it predicts risk for each year over the next 5 years
- it has to produce consistent results across different sensors and diverse patients
- it’s not the first model to do this, and it is more accurate than previous methods
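To make those requirements concrete, here is a toy Python sketch of the general shape such a model might have: fuse features from multiple views, optionally mix in clinical data, and emit a non-decreasing cumulative risk for each of the next 5 years. To be clear, this is not the paper’s actual architecture (which is a deep neural network); the function name and the random stand-in weights are made up purely for illustration.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def predict_five_year_risk(view_features, clinical_features=None, seed=0):
    """Toy multi-view risk sketch (hypothetical, NOT the paper's model).

    view_features: list of per-view feature vectors (e.g. the standard
        CC/MLO mammogram views).
    clinical_features: optional vector (age, family history, ...); the
        model must work with or without it.
    Returns cumulative risk for years 1..5 (non-decreasing by construction).
    """
    rng = random.Random(seed)  # stand-in for learned weights
    dim = len(view_features[0])
    # fuse views by mean-pooling their features
    pooled = [sum(v[i] for v in view_features) / len(view_features)
              for i in range(dim)]
    # optionally concatenate clinical data; simply omitted when absent
    if clinical_features:
        pooled = pooled + list(clinical_features)
    weights = [rng.uniform(-0.1, 0.1) for _ in pooled]
    base = sum(w * x for w, x in zip(weights, pooled))
    # per-year hazard increments; accumulating them keeps risk monotone
    hazards = [sigmoid(base + 0.2 * year) * 0.05 for year in range(5)]
    risks, total = [], 0.0
    for h in hazards:
        total += h * (1.0 - total)  # survival-style accumulation, stays < 1
        risks.append(total)
    return risks
```

The cumulative formulation is the point of the per-year outputs: year-3 risk can never be lower than year-2 risk, which is what makes the 5-year numbers interpretable.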
Kinda mean of you calling Billy precancerous masses like that smh
I don’t care about mean, but I would call it inaccurate. Billy is already cancerous; he’s mostly cancer. He’s a very dense, sour boy.
It’s because “AI” is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.
Besides LLMs (large language models), we also have GANs (generative adversarial networks).
https://en.wikipedia.org/wiki/Large_language_model
https://en.wikipedia.org/wiki/Generative_adversarial_network
Everything machine learning will be called “AI” from now until forever.
It’s like how all RC helicopters and planes are now “drones”.
People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.