• @VirtualOdour@sh.itjust.works
    15 months ago

    It’s AI and cheaper healthcare, or no AI and spiralling costs for healthcare - especially with falling birthrates putting a growing burden on the system.

    AI healthcare tools are already making it easier to provide care. I’m in the UK so it’s different math as to who benefits, but tools for early detection of tumors not only cut costs but increase survivability too, and they’re only one of many similar technologies already in use.

    Akinator-style triage could save huge sums and many lives, especially in underserved areas - as could rapid first aid advice. We have a service for non-emergency medical advice; they basically tell you whether you need to go to A&E, see the doctor, or wait it out. It’s helped allocate resources and saved lives in cases where people would otherwise have waited out something that needed urgent care. Having your phone able to respond to ‘my arm feels funny’ by asking a series of questions that determines the medically correct response could be a real life saver. ‘Alexa, I’ve fallen and can’t get up’ has already saved people’s elderly parents’ lives; ‘Clippy, why is there blood in my poop’ or ‘Hey Google, does this mole look weird’ will save even more.

    Medical admin is a huge overhead, so having infinite instances of medically trained clerical staff running 24/7 would be a game changer - being able to call and say ‘this is the new situation’ and get appointments changed or processes started would be huge.

    Further down the line we’re looking at being able to get basic tests done without needing a trained doctor or nurse to do them; decreasing their workload will allow them to provide better care where it’s needed. A machine able to take blood and run tests on it, then update the GP with results as soon as they’re done, would cut costs and wasted time - especially if the system is trained with various sensors to perform health checks on the patient while taking the blood. Spotting things out of the ordinary for a patient is a complex problem, but one AI could be much better at than humans, especially overworked humans.

    As for them owning everything, that can only happen if the anti-AI people continue to support stronger copyright protections against training. If we agreed that training AI is a common good and that such use of information should be fair use rather than a copyright violation, then any government, NGO, charity, or open source crazy could train their own. It’s like electricity: Edison made huge progress and cornered the market, but once the principles are understood anyone can use them, so as tech advanced it became increasingly easy for anyone to fabricate a thermopile or turbine. There isn’t a monopoly on electricity - there are companies with local monopolies from cornering markets, but anyone can build an off-grid system with cheap bits from eBay.

    • @Rekorse@lemmy.dbzer0.com
      15 months ago

      That’s all well and good, but here in America that’s just a long list of stuff I can’t afford, and it won’t be used to drive down costs. If it will for you, then I’m happy you live in a place that gives a shit about its population’s health.

      I know there will be people who essentially do the reverse of profiteering and will take advantage of AI for genuinely beneficial reasons, although even in those cases profit is often the motive. Unfortunately the American for-profit system has drawn in some awful people with bad motives.

      If, right now, the two largest AI companies were healthcare nonprofits, I don’t think people would be nearly as upset about the massive waste of energy, money, and life that current AI represents.

    • @iAvicenna@lemmy.world
      15 months ago

      I feel like all the useful stuff you’ve listed here is more like advanced machine learning, and different from the AI that is being advertised to the public and mostly being invested in. These are mostly things we can already do relatively easily with the available AI (i.e. highly sophisticated regression) for relatively low compute power and low energy requirements (not counting more outlier stuff like AlphaFold, which still requires quite a lot of compute power). It is not related to why the AI companies will need to own most of the major computational and energy innovations in the future.

      It is the image/text generative part of AI that looks sexier to the public, and that is therefore what is mostly being hyped, advertised, and invested in by the big AI companies. It is also this generative part of AI that will require much bigger computation and energy innovations to keep delivering significantly more than it can now. The situation is very akin to the moon race: it feels like trying to develop AI on this “brute force” course will deliver relatively low benefits for the cost it will incur.

    • @iAvicenna@lemmy.world
      15 months ago

      For instance, I would be completely fine with this if they said “We will train it on a very large database of articles, and finding relevant scientific information will be easier than before.” But no, they have to hype it up with nonsense expectations so they can generate short-term profits for their fucking shareholders. This will come at the cost of either the next AI winter or a senseless allocation of major resources to a model of AI that is not sustainable in the long run.

        • @iAvicenna@lemmy.world
          15 months ago

          I mean, the person who said this is the CTO of OpenAI and an engineer working on this project. I would imagine she could be considered an expert.