Archived version

The best clue might come from a 2022 paper written by the Anthropic team back when their startup was just a year old. They warned that the incentives in the AI industry — think profit and prestige — will push companies to “deploy large generative models despite high uncertainty about the full extent of what these models are capable of.” They argued that, if we want safe AI, the industry’s underlying incentive structure needs to change.

Well, at three years old, Anthropic is now the age of a toddler, and it’s experiencing many of the same growing pains that afflicted its older sibling OpenAI. In some ways, they’re the same tensions that have plagued all Silicon Valley tech startups that start out with a “don’t be evil” philosophy. Now, though, the tensions are turbocharged.

An AI company may want to build safe systems, but in such a hype-filled industry, it faces enormous pressure to be first out of the gate. The company needs to pull in investors to supply the gargantuan sums of money needed to build top AI models, and to do that, it needs to satisfy them by showing a path to huge profits. Oh, and the stakes — should the tech go wrong — are much higher than with almost any previous technology.

So a company like Anthropic has to wrestle with deep internal contradictions, and ultimately faces an existential question: Is it even possible to run an AI company that advances the state of the art while also truly prioritizing ethics and safety?

“I don’t think it’s possible,” futurist Amy Webb, the CEO of the Future Today Institute, told me a few months ago.

  • The Bard in Green
    1 month ago

    If those words are connected to some automated system that can accept them as commands…

    For instance, some idiot entrepreneur was talking to me recently about whether it was feasible to put an LLM on an unmanned spacecraft in cis-lunar space (I consult with the space industry) in order to give it operational control of on-board systems based on real-time telemetry. I told him about hallucination and asked him what he thinks he's going to do when the model registers some false positive in response to a system fault… Or even what happens to a model when you bombard its long-term storage with the kind of cosmic particles that cause random bit flips (a real problem for software in space), and how that might change its output? (Rough sketch of the bit-flip effect below.)

    Now, I don’t think anyone’s actually going to build something like that anytime soon (then again, the space industry is full of stupid money), but what about putting models in charge of semi-autonomous systems here on Earth? Or giving them access to APIs that let them spend money, trade stocks, or hire people on Mechanical Turk? Probably a bunch of stupid expensive bad decisions…

    Speaking of stupid expensive bad decisions, has anyone embedded an LLM in the Ethereum blockchain and given it access to smart contracts yet? I bet investors would throw stupid money at that…
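
    To put a rough number on the bit-flip thing, here's a throwaway Python sketch (purely illustrative, not any real flight software; the weight value and bit positions are made up for the demo): flip a single bit in the IEEE-754 encoding of a stored value and see how far it moves.

    ```python
    # Toy demo: flip one bit in the IEEE-754 single-precision encoding of a
    # stored "weight" and see how much the value changes. Real flight software
    # mitigates this with ECC memory, scrubbing, redundancy, etc.
    import struct

    def flip_bit(value: float, bit: int) -> float:
        """Return value with one bit of its 32-bit float encoding flipped."""
        packed = struct.unpack("<I", struct.pack("<f", value))[0]
        return struct.unpack("<f", struct.pack("<I", packed ^ (1 << bit)))[0]

    weight = 0.731  # an arbitrary stored parameter
    for bit in (0, 20, 30):  # low mantissa bit, high mantissa bit, exponent bit
        print(f"bit {bit:2d}: {weight} -> {flip_bit(weight, bit)}")

    # A low mantissa flip is noise, but flipping an exponent bit turns 0.731
    # into a number around 2.5e38 -- the kind of silent corruption that makes
    # unprotected on-board inference a gamble.
    ```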

    • @MagicShel@programming.dev
      1 month ago

      That’s hilarious. I love LLMs, but they’re a tool, not a product, and everyone trying to make them a standalone thing is going to be sorely disappointed.