• The Bard in Green
    9 · 1 day ago

    Oh man, I hate all the scary language around jailbreaking.

    This means cybercriminals are using jailbreaking techniques to bypass the built-in safety features of these advanced LLMs (AI systems that generate human-like text, like OpenAI’s ChatGPT). By jailbreaking them, criminals force the AI to produce “uncensored responses to a wide range of topics,” even if these are “unethical or illegal,” researchers noted in their blog post shared with Hackread.com.

    “What’s really concerning is that these aren’t new AI models built from scratch – they’re taking trusted systems and breaking their safety rules to create weapons for cybercrime,” he warned.

    “Hackers make uncensored AI… only BAD people would want to do this, to use it to do BAD CRIMINAL things.”

    God forbid I want to jailbreak AI or run uncensored models on my own hardware. I’m just like those BAD CRIMINAL guys.

    • @Vendetta9076@sh.itjust.works
      5 · 1 day ago

      What’s really concerning is that they’re calling these AI models trusted systems. This shit has been happening since day 1. Twitter turned Tay into a KKK member in about 15 minutes. LLMs will always be vulnerable to “jailbreaking” because of how they’re designed. Does it really fucking matter that some script kiddies have gotten it to write malware?

      • The Bard in Green
        3 · 1 day ago

        It sounds like the real issue for these fuckwits is that script kiddies are running jailbroken models with darknet-edgelord-sounding names (WormGPT roflmao). The whole article reads like security company execs generating clickbait and citations, trying to get attention by saying scary shit about a nothingburger.