• @SomeGuy69@lemmy.world

      It’s really difficult to clean data like this. In another case, markings were left on the training data: the scans from patients who had cancer carried a doctor’s signature, so the AI could always tell the cancer images from the non-cancer images just by looking for the signature. However, researchers are also getting smarter about picking their training data, so it’s not impossible for this to work properly at some point.
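
      A hedged toy illustration of that failure mode (entirely made-up data; every name here is hypothetical): if an artifact like a signature leaks into the features, a model keys on it and scores perfectly without learning anything about the tissue.

      ```python
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n = 1000
      tissue = rng.normal(size=(n, 20))        # stand-in for genuine image features
      labels = rng.integers(0, 2, size=n)      # cancer yes/no
      signature = labels.reshape(-1, 1)        # leaked marker: present only on cancer scans

      X_leaky = np.hstack([tissue, signature])

      # the leaky model scores near-perfectly by reading the signature,
      # while the honest model on the same tissue features sits at chance
      print(cross_val_score(LogisticRegression(max_iter=1000), X_leaky, labels).mean())  # ~1.0
      print(cross_val_score(LogisticRegression(max_iter=1000), tissue, labels).mean())   # ~0.5
      ```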

    • @FierySpectre@lemmy.world

      Using AI for anomaly detection is nothing new, though. I haven’t read any article about this specific ‘discovery’, but this usually uses a completely different technique than the AI that comes to mind when people think of AI these days.
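
      For context, a minimal sketch of the kind of classical anomaly detection being referred to (isolation forests chosen purely as an illustrative technique):

      ```python
      import numpy as np
      from sklearn.ensemble import IsolationForest

      rng = np.random.default_rng(42)
      normal_scans = rng.normal(0, 1, size=(500, 4))   # typical measurements
      anomalies = rng.normal(6, 1, size=(10, 4))       # a handful of clear outliers

      detector = IsolationForest(contamination=0.02, random_state=42).fit(normal_scans)
      print(detector.predict(anomalies))   # mostly -1, i.e. flagged as anomalous
      ```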

    • @earmuff@lemmy.dbzer0.com

      That’s the nice thing about machine learning: the model sees nothing but whatever happens to correlate. That’s why data science is such a complex topic; you don’t spot errors this easily. Testing a model is still very underrated, and usually there is no time to test one properly.

    • @ricecake@sh.itjust.works

      It’s worse than that.

      This is a different type of AI, one that doesn’t have as many consumer-facing qualities.

      The ones being pushed now are the first types of AI to have an actually discernible consumer-facing attribute or behavior, so they’re being pushed because no one wants to miss the boat.

      For the most part they’re not more profitable, or better, or actually doing anything anyone wants; they’re just being used wherever they can be fit in.

      • @Hackworth@lemmy.world

        This type of segmentation is of declining practical value. Modern AI implementations are usually hybrids of several categories of constructed intelligence.

  • @yesman@lemmy.world

    The most beneficial application of AI like this is to reverse-engineer the neural network and figure out how it works. In the process we may discover a new technique or procedure, or we might find out that the AI’s methods are bullshit. Under no circumstances should we accept a “black box” explanation.

    • @MystikIncarnate@lemmy.ca

      IMO, the “black box” thing is basically ML developers hand-waving and saying “it’s magic”, because they know it would take way too long to explain all the underlying concepts needed to even start to explain how it works.

      I have a very crude understanding of the technology. I’m not a developer, I work in IT support. I have several friends that I’ve spoken to about it, some of whom have made fairly rudimentary machine learning algorithms and neural nets. They understand it, and they’ve explained a few of the concepts to me, and I’d be lying if I said that none of it went over my head. I’ve done programming and development, I’m senior in my role, and I have a lifetime of technology experience and education… And it goes over my head. What hope does anyone else have? If you’re not a developer or someone ML-focused, yeah, it’s basically magic.

      I won’t try to explain. I couldn’t possibly recall enough about what has been said to me, to correctly explain anything at this point.

      • @homura1650@lemm.ee

        The AI developers understand how AI works, but that does not mean that they understand the thing that the AI is trained to detect.

        For instance, the cutting edge in protein folding (at least as of a few years ago) is Google’s AlphaFold. I’m sure the AI researchers behind AlphaFold understand AI and how it works, and I’m sure they have an above-average understanding of molecular biology. However, they do not understand protein folding better than the physicists and chemists who have spent their lives studying the field. The core of their understanding is “the answer is somewhere in this dataset; all we need to do is figure out how to throw ungodly amounts of compute at it, and we can make predictions”. Working out how to productively throw that much compute at a problem is not easy either, and that is what ML researchers understand and are experts in.

        In the same way, the researchers here understand how to go from a large dataset of breast images to cancer predictions, but that does not mean they have any understanding of cancer. And certainly not a better understanding than the researchers who have spent their lives studying it.

        An open problem in ML research is how to take the billions of parameters that define an ML model and extract useful information that can provide insights to help human experts understand the system (both in general, and in understanding the reasoning for a specific classification). Progress has been made here as well, but it is still a long way from being solved.
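
        To give a flavor of the simpler end of that work, a sketch of permutation importance, one model-agnostic inspection tool (it ranks inputs; it does not explain individual parameters):

        ```python
        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.inspection import permutation_importance
        from sklearn.model_selection import train_test_split

        data = load_breast_cancer()
        X_tr, X_te, y_tr, y_te = train_test_split(data.data, data.target, random_state=0)
        model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

        # shuffle one feature at a time and measure how much held-out accuracy drops;
        # big drops point at the features the model actually leans on
        result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
        for i in result.importances_mean.argsort()[::-1][:5]:
            print(data.feature_names[i], round(result.importances_mean[i], 3))
        ```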

        • @Tryptaminev@lemm.ee

          Thank you for giving some insight into ML, which is now often just branded “AI”. One note, though: there are many ML algorithms that do not employ neural networks and don’t have billions of parameters. Especially in binary-choice image recognition (looks like cancer or not), techniques like support vector machines achieve great results with very few parameters.
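
          As a minimal sketch of that point on a toy image dataset (illustrative only, not a medical example):

          ```python
          from sklearn.datasets import load_digits
          from sklearn.model_selection import cross_val_score
          from sklearn.svm import SVC

          # binary image recognition (is this an image of a zero, or not?) with an SVM:
          # the fitted model is a modest set of support vectors and coefficients,
          # nowhere near the billions of parameters of a large neural network
          X, y = load_digits(return_X_y=True)   # 8x8 grayscale digits, flattened to 64 features
          y_binary = (y == 0).astype(int)
          print(cross_val_score(SVC(kernel="rbf"), X, y_binary, cv=5).mean())  # ~0.99 here
          ```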

          • @0ops@lemm.ee

            Machine learning is a subset of artificial intelligence, which is a field of research as old as computer science itself.

            The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field’s long-term goals.[16]

            https://en.m.wikipedia.org/wiki/Artificial_intelligence

    • @CheeseNoodle@lemmy.world

      IIRC, it recently turned out that the whole black box thing was actually a bullshit excuse to evade liability, at least for certain kinds of model.

        • @CheeseNoodle@lemmy.world

          This one’s from 2019: Link
          I was a bit off the mark: it’s not that the models they use aren’t black boxes, it’s that they could have been made interpretable from the beginning, and the researchers chose not to, likely due to liability.

      • @Tryptaminev@lemm.ee

        It depends on the algorithms used. The lazy approach nowadays is to just throw neural networks at everything and waste immense computational resources. Of course you then get results that are difficult to interpret. There are much more efficient algorithms that work well for many problems and give you interpretable decisions.
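
        One hedged example of what “interpretable decisions” can mean, using a small decision tree whose learned rules can be printed and audited (an illustrative choice, not a claim about any particular system):

        ```python
        from sklearn.datasets import load_breast_cancer
        from sklearn.tree import DecisionTreeClassifier, export_text

        data = load_breast_cancer()
        tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(data.data, data.target)

        # unlike a deep network, the learned rules can be read line by line
        print(export_text(tree, feature_names=list(data.feature_names)))
        ```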

  • @wheeldawg@sh.itjust.works

    Yes, this is “how it was supposed to be used for”.

    The quality of sentence construction these days is in freefall.

  • @earmuff@lemmy.dbzer0.com

    Serious question: is there a way to get access to medical imagery as a non-student? I would love to do some machine learning with it myself, as I see lots of potential in image analysis in general. Five years ago I created a model that could spot certain types of ships in satellite imagery alone; they were not easily detectable by eye, never mind that no human can scan 15k images in an hour. It’s a similar use case with medical imagery: seeing the things that are not yet detectable by human eyes.

  • @MonkderVierte@lemmy.ml

    Btw, my dentist used AI to identify potential problems in a radiograph. The result was pretty impressive. Have to get a filling tho.

  • ALoafOfBread

    Now make mammograms not cost $500, not have a six-month waiting time, and be available to women under 40. Then this’ll be a useful breakthrough.

    • 𝓔𝓶𝓶𝓲𝓮

      Honestly, with all respect, that is a really shitty joke. It’s god damn breast cancer, the opposite of hot.

      I usually just skip these mouldy jokes, but c’mon, that is beyond the scale of cringe.

      • @PlantDadManGuy@lemmy.world

        Terrible things happen to people you love, you have two choices in this life. You can laugh about it or you can cry about it. You can do one and then the other if you choose. I prefer to laugh about most things and hope others will do the same. Cheers.

        • 𝓔𝓶𝓶𝓲𝓮

          I mean, do whatever you want, but it just comes off as repulsive, like a stain of shit on new shoes. This is a public space after all, not the boys’ locker room, so that might be embarrassing for you.

          And you know you can always count on me to point stuff out so you can avoid humiliation in the future.

  • @cecinestpasunbot@lemmy.ml

    Unfortunately, AI models like this one often never make it to the clinic. The model could be impressive enough to identify 100% of cases that will develop breast cancer, but if it has a false positive rate of, say, 5%, its use may actually create more harm than it prevents.

    • @CptOblivius@lemmy.world

      Breast imaging already relies on a high false positive rate. False positives are way better than false negatives in this case.

      • @cecinestpasunbot@lemmy.ml

        That’s just not generally true. Mammograms are usually only recommended to women over 40, because the rates of breast cancer in women under 40 are low enough that testing them would cause more harm than good, thanks in part to the problem of false positives.

        • @CptOblivius@lemmy.world

          Nearly 4 out of 5 cases that progress to biopsy are benign, and nearly 4 times that number are called back for additional evaluation. The false positive rate is quite high compared to other imaging, and it is designed that way to decrease the chances of a false negative.

          • @cecinestpasunbot@lemmy.ml

            The false negative rate is also quite high: it will miss about 1 in 5 women with cancer. The reality is that mammography is just not all that powerful as a screening tool. That’s why the criteria for who gets screened, and how often, have been tailored to try to ensure the benefits outweigh the risks, although exactly what those criteria should be is an ongoing debate in the medical community.

    • @ColeSloth@discuss.tchncs.de

      Not at all, in this case.

      A false positive rate of even 50% can mean telling the patient “you are at a higher risk of developing breast cancer and should get screened every 6 months instead of every year for the next 5 years”.

      Keep in mind that women have about a 12% chance of getting breast cancer at some point in their lives. During the highest-risk years it’s about a 2 percent chance per year, so a machine with a 50% false positive rate on a 5-year prediction would still only be telling something like 15% of women to be screened more often.
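
      Rough back-of-the-envelope math behind that estimate, under one possible reading of the 50% figure (these are the approximations above, not real statistics):

      ```python
      # loose back-of-the-envelope using the rough numbers above (one possible reading)
      annual_risk = 0.02                  # ~2% chance per year in the highest-risk years
      five_year_risk = annual_risk * 5    # ~10% develop breast cancer within the window

      # if half of all flags are false alarms, the tool flags roughly one false
      # case per true case, so about double the true-positive share gets flagged
      flagged_share = five_year_risk * 2
      print(f"~{flagged_share:.0%} of women told to screen more often")  # ~20%, same ballpark
      ```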

    • @Vigge93@lemmy.world

      That’s why these systems should never be used as the sole decision makers, but instead work as a tool to help the professionals make better decisions.

      Keep the human in the loop!

    • ???

      How would a false positive create more harm? Isn’t it better to cast a wide net and detect more possible cases? Then false negatives are the ones that worry me the most.

      • @cecinestpasunbot@lemmy.ml

        It’s a common problem in diagnostics and it’s why mammograms aren’t recommended to women under 40.

        Let’s say you have 10,000 patients. 10 have cancer or a precancerous lesion. Your test may be able to identify all 10 of those patients. However, if it has a false positive rate of 5% that’s around 500 patients who will now get biopsies and potentially surgery that they don’t actually need. Those follow up procedures carry their own risks and harms for those 500 patients. In total, that harm may outweigh the benefit of an earlier diagnosis in those 10 patients who have cancer.
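
        Spelling out that arithmetic with the same numbers:

        ```python
        patients = 10_000
        true_cases = 10
        false_positive_rate = 0.05

        false_positives = (patients - true_cases) * false_positive_rate  # ~500 healthy patients flagged
        flagged = true_cases + false_positives                           # assuming all 10 real cases are caught
        precision = true_cases / flagged
        print(f"{false_positives:.0f} needless follow-ups; only {precision:.1%} of flags are real cancer")
        ```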

  • @gmtom@lemmy.world

    This is similar to what I did for my master’s, except it was lung cancer.

    Stuff like this is actually relatively easy to do, but the regulations you need to conform to and the testing you have to do first are extremely stringent. We had something that worked for about 95% of cases within a couple of months, but it wasn’t until almost two years later that they got to run their first actual trial.

  • @bluefishcanteen@sh.itjust.works

    This is a great use of the tech. That said, I find the lines blurred between “AI” and machine learning.

    Real question: other than the specific tuning of the recognition model, how is this really different from something like Facebook automatically tagging images of you and your friends? Instead of saying “Here’s a picture of Billy (maybe)”, it’s saying “Here’s a picture of some precancerous masses (maybe)”.

    That tech has been around for a while (at least 15 years). I remember Picasa doing something similar as a desktop program on Windows.

    • @AdrianTheFrog@lemmy.world

      I’ve been looking at the paper, some things about it:

      • the paper and article are from 2021
      • the model needs to be able to use optional data (age, family history, etc.) without being reliant on it
      • it needs to combine information from multiple views
      • it predicts risk for each year of the next 5 years
      • it has to produce consistent results across different sensors and diverse patients
      • it’s not the first model to do this, and it is more accurate than previous methods (a rough sketch of such a risk head follows below)
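
      As a hedged sketch of how two of those requirements (optional metadata, one risk output per year) might fit together; all names and sizes here are hypothetical, not the paper’s actual architecture:

      ```python
      import torch
      import torch.nn as nn

      class RiskHead(nn.Module):
          """Toy multi-year risk head (hypothetical, not the paper's design):
          image features plus optional clinical metadata -> one risk score per year."""

          def __init__(self, img_dim=512, meta_dim=8, years=5):
              super().__init__()
              self.meta_encoder = nn.Linear(meta_dim, 32)
              self.classifier = nn.Sequential(
                  nn.Linear(img_dim + 32, 128),
                  nn.ReLU(),
                  nn.Linear(128, years),          # one logit per future year
              )

          def forward(self, img_feat, meta=None):
              if meta is None:
                  # metadata (age, family history, ...) is optional: zero it out when absent
                  meta = torch.zeros(img_feat.shape[0], self.meta_encoder.in_features)
              fused = torch.cat([img_feat, self.meta_encoder(meta)], dim=1)
              return torch.sigmoid(self.classifier(fused))   # per-year risk in [0, 1]

      # e.g. RiskHead()(torch.randn(4, 512)) -> a (4, 5) tensor of yearly risk estimates
      ```
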
    • @Lets_Eat_Grandma@lemm.ee

      Everything machine learning will be called “AI” from now until forever.

      It’s like how all RC helicopters and planes are now “drones”.

      People en masse just can’t handle the nuance of language. They need a dumb word for everything that is remotely similar.

      • @Comment105@lemm.ee

        I don’t care about mean, but I would call it inaccurate. Billy is already cancerous; he’s mostly cancer. He’s a very dense, sour boy.

    • @pete_the_cat@lemmy.world

      It’s because AI is the new buzzword that has replaced “machine learning” and “large language models”; it sounds a lot more sexy and futuristic.

  • Moah

    Ok, I’ll concede. Finally a good use for AI. Fuck cancer.

      • @0laura@lemmy.dbzer0.com

        Machine learning is a type of AI. Sci-fi movies just misused the term, and now the startups are riding the hype train. AGI ≠ AI. There’s lots of stuff to complain about with AI these days, like Stable Diffusion image generation and LLMs, but the fact that they are AI is simply true.

        • @blackbirdbiryani@lemmy.world

          I mean, it’s an entirely arbitrary distinction. For a very long time before ChatGPT, AI meant something like AGI. We didn’t call classification models “intelligent” because they didn’t have any human-like characteristics; it’s as silly as saying a regression model is AI. They aren’t intelligent things.

      • @medgremlin@midwest.social

        I once had ideas about building a machine learning program to assist workflows in emergency departments, with its training data generated entirely by the specific ER it’s deployed in. Because of differences in populations, the data is not always readily transferable between departments.

    • @ilinamorato@lemmy.world

      It’s got a decent chunk of good uses. It’s just that none of those are going to make anyone a huge ton of money, so they don’t have a hype cycle attached. I can’t wait until the grifters get out and the hype cycle falls away, so we can actually get back to using it for what it’s good at and not shoving it indiscriminately into everything.

      • @Tja@programming.dev

        Those are going to make a ton of money for a lot of people. Every 1% of fuel efficiency gained, every second saved in an industrial process, is worth hundreds of millions of dollars.

        You don’t need AI in your fridge or in your Snickers, and that will (hopefully) die off, but AI is not going away where it matters.

        • @ricecake@sh.itjust.works

          Well, AI has been in those places for a while. The hype cycle is around generative AI which just isn’t useful for that type of thing.

          • @Tja@programming.dev

            I’m sure that if Nvidia, AMD, Apple and co. create NPUs or TPUs for generative AI, those can also be used in those places, improving them along the way.

            • @ricecake@sh.itjust.works

              Why do you think that?

              Nothing I’ve seen with current generative AI techniques leads me to believe that it has any particular utility for system design or architecture.

              There are AI techniques that can help with such things, they’re just not the generative variety.

              • @Tja@programming.dev

                Hardware for faster matrix/tensor multiplication leads to faster training, thus helping. More contributors to your favorite Python frameworks lead to better tools, thus helping. Etc.

                I am aware that chatbots don’t cure cancer, but discarding all the contributions of the last two years is disingenuous at best.

        • @ilinamorato@lemmy.world

          Those are going to make a ton of money for a lot of people.

          Right, but not any one person. The people running the hype train want to be that one person, but the real uses just aren’t going to be something you can exclusively monetize.

          • @Tja@programming.dev

            Depends how you define “a ton” of money. Plenty of startups have been acquired for silly amounts of money, plenty of consultants are making bank, and executives are cashing big bonuses for successful improvements using AI…

            • @ilinamorato@lemmy.world

              I define “a ton” of money in this case to mean “the amount they think of when they get the dollar signs in their eyes.” People are cashing in on that delusion right now, but it’s not going to last.

      • @bluewing@lemm.ee

        The hypesters and grifters do not prevent AI from being used for truly valuable things, even now. In fact, medical uses are one of the things that WILL keep AI from just fading away.

        Just look at the marketing wankers as a cherry on top that you didn’t want or need.

        • @ilinamorato@lemmy.world

          The hypesters and grifters do not prevent AI from being used for truly valuable things even now.

          I mean, yeah, except that the unnecessary applications are all the corporations are paying anyone to do these days. When the hype flies around like this, the C-suite starts trying to micromanage the product team’s roadmap. Once it dies down, they let us get back to work.

        • @medgremlin@midwest.social

          People just need to understand that the true medical uses are as tools for physicians, not “replacements” for physicians.

          • @bluewing@lemm.ee

            I think the vast majority of people understand that already. They don’t understand what all those gadgets are for anyway; medicine is largely a “black box” or magical process to them.

            • @medgremlin@midwest.social

              There are way too many techbros trying to push the idea of turning ChatGPT into a physician replacement. After it “passed” the board exams, they immediately started hollering about how physicians are outdated and too expensive and we can just replace them with AI. That ignores the fact that the board exam is multiple choice, and that a massive portion of medical student evaluation is on the “art” side of medicine: taking the history and performing the physical exam, which the question stem hands you for free in a multiple-choice question.

              • @bluewing@lemm.ee

                And it has gone exactly nowhere, hasn’t it? Nor do those techbros want the legal and moral responsibilities that come with an actual licence after passing the boards.

                • @medgremlin@midwest.social

                  I think there are some techbros out there with sleazy legal counsel that promises they can drench the thing in enough terms and conditions to relieve themselves of liability, similar to the way that WebMD does. Also, with healthcare access the way it is in America, there are plenty of people who will skim right past the disclaimer telling them to go see a real healthcare provider and just trust the “AI”. Additionally, there’s enough slimy NP professional groups pushing for unsupervised practice that they could just sign on their NP licenses for prescriptions, and the malpractice laws currently in place would be difficult to enforce depending on outcomes and jurisdictions.

                  This doesn’t get into the sowing of discord and discontent with physicians that is happening even without these products existing in the first place. Even the claims that an AI could potentially, maybe, someday sorta-kinda replace physicians makes people distrust and dislike physicians now.

                  Separately, I have some gullible classmates in medical school whom I worry about quite a lot, because they’ve bought into the line that ChatGPT passed the boards, so they take its hallucinations as gospel and argue with our professors’ explanations of why the hallucination is wrong and the correct answer on a test is correct. I was not shy about admonishing them and forcefully explaining how these “generative AIs” are little more than glorified text predictors, but the allure of easy answers, without having to dig for them and understand complex underlying principles, is strong, so I don’t know if I actually got through to them or not.

        • @ilinamorato@lemmy.world

          That’s not what this is, though. This is early detection, which is awesome and super helpful, but way less game-changing than an actual cure.

            • @ilinamorato@lemmy.world

              It sure is. But this is basically just making something that already exists more reliable, not creating something new. Still important, but not as earth-shaking.

        • @ricecake@sh.itjust.works

          It’s a money saver, so its profit model is all wonky.

          A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.
          A hospital, as a place that helps people, will still want to use these scans widely because “ignoring preventative care to profit off long term treatment” is a bit too “mask off” even for the US healthcare system and doctors would quit.

          Insurance companies, however, would pay just shy of the cost of treatment to avoid paying for treatment.
          So the cost will rise to the cost of treatment times the incidence rate, scaled by the likelihood the scan catches something, plus system and staff costs.

          In a sane system, we’d pass a law saying capable facilities must provide preventative screenings at cost where there’s a reasonable chance the scan would provide meaningful information and have the government pay the bill. Everyone’s happy except people who view healthcare as an investment opportunity.
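
          That pricing claim as toy arithmetic (every number here is made up, just restating the sentence above):

          ```python
          # hedged sketch of the pricing claim above, with made-up numbers
          treatment_cost = 100_000    # what the insurer avoids paying if cancer is caught early
          incidence_rate = 0.01       # chance the scanned patient would develop cancer
          catch_rate = 0.9            # chance the scan catches it in time
          overhead = 40               # per-scan system + staff costs

          # the claim: the scan's price drifts up toward the expected treatment cost avoided
          price_ceiling = treatment_cost * incidence_rate * catch_rate + overhead
          print(f"price ceiling per scan ≈ ${price_ceiling:,.0f}")   # $940 with these numbers
          ```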

          • @ilinamorato@lemmy.world

            A hospital, as a business, will make more money treating cancer than it will doing a mammogram and having a computer identify issues for preventative treatment.

            I believe this idea was generally debunked a while ago; to wit, the profit margin on cancer care (you have to pay a lot of doctors) just isn’t as big as the profit margin on mammograms. Moreover, you’re less likely to actually get paid the later you identify it, because end-of-life care costs for the deceased tend to get settled rather than paid in full.

            I’ll come back and drop the article link here, if I can find it.

            • @ricecake@sh.itjust.works

              Oh interesting, I’d be happy to be wrong on that. :)

              I figured they’d factor the staffing costs into what they charge the insurance, so it’d be more profit due to higher fixed costs, longer treatment, and some fixed percentage profit margin.
              The estate-costs thing is unfortunately an avenue I hadn’t considered. :/

              I still think it would be better if we removed the profit incentive entirely, but I’m pleased if the two interests are aligned if we have to have both.

              • @ilinamorato@lemmy.world

                Oh, absolutely. Absent a profit motive that pushes them toward what basically amounts to a protection scam, they’re left with good old fashioned price gouging. Even if interests are aligned, it’s still way more expensive than it should be. So yes, I agree that we should remove the profit incentive for healthcare.

                Sadly, I can’t find the article. I’ll keep an eye out for it, though. I’m pretty sure I linked to it somewhere but I’m too terminally online to figure out where.