• 0laura@lemmy.dbzer0.com · 1 year ago

    The people here don’t get LLMs and it shows. This is neither surprising nor a bad thing imo.

    • krashmo@lemmy.world · 1 year ago

      In what way is presenting factually incorrect information as if it’s true not a bad thing?

    • Comrade Rain@lemmygrad.ml · 1 year ago

      People who make fun of LLMs most often do get LLMs; they are trying to point out how these models tend to produce factually incorrect information. That is a good thing, because many, many people out there do not, in fact, "get" LLMs (most are not even acquainted with the acronym and refer to the catch-all term "AI" instead), and there is no better way to warn about the inaccuracy of LLM output, however realistic it might sound, than to demonstrate it with examples of ridiculously wrong answers to simple questions.

      Edit: minor rewording to clarify