• @peanuts4life@beehaw.org
    link
    fedilink
    English
    12
    11 months ago

    Imo, the true fallacy of using AI for journalism or general text lies not so much in generative AI’s fundamental unreliability, but rather in its existence as an affordable service.

    Why would I want to parse through AI-generated text on times.com when, for free, I could speak to some of the most advanced AI on bing.com, OpenAI’s ChatGPT, Google Bard, or a Meta product? These, after all, are the back ends that most journalistic or general written-content websites are using to generate text.

    To be clear, I ask why not cut out the middleman if they’re just serving me AI content.

    I use AI products frequently, and I think they have quite a bit of value. However, when I want new accurate information on current developments, or really anything more reliable or deeper than a Wikipedia article, I turn exclusively to human sources.

    The only justification a service has for serving me AI-generated text is perhaps the promise that they have a custom-trained model with highly specific training data. I can imagine, for example, weather.com developing highly specialized AI models which tie into an in-house LLM and provide me with up-to-date and accurate weather information. The question I would have in that case is why am I reading an article rather than just being given access to the LLM for a nominal fee? At some point, they are no longer a regular website; they are a vendor for an in-house AI.

    • @jarfil@beehaw.org
      link
      fedilink
      1
      edit-2
      11 months ago

      why not cut out the middleman if they’re just serving me AI content.

      When you have a workflow like:

      1. human
      2. AI extend
      3. AI summarize
      4. you

      …the reason is that AI middlemen would rather rake in the benefits of providing both AI services than get cut out.

      There is a secondary benefit in that “AI-extended” human input is more suitable for third-party AI readers… so arguably the web is becoming more AI-friendly (you can thank us later, future AI overlords).

      PS: GPT-4 compatible version: “y n0t 🗑️👥 if AI📺? wf: 1.👤 2.AI+ 3.AI- 4.👁️ cuz AI👥💰4AI+&AI-. AI+👤👍4AI👁️… web👉AI👌 (🙏🏻AI👑)”

  • gregorum
    link
    fedilink
    English
    44
    edit-2
    11 months ago

    That’s the point.

    Label the articles written with AutoComplete so I know they’re bullshit I should ignore, and if they’re all written with AutoComplete, I now know that you’re an untrustworthy news source. Go cry to your shareholders, you profit-mad assholes.

  • @Stillhart@lemm.ee
    link
    fedilink
    18
    11 months ago

    I’m confused by the word “but” in that headline. Seems like they are trying to imply cause and effect, when the reality is that readers trust outlets that use AI less, whether they label it or not.

    • The Bard in GreenA
      link
      fedilink
      English
      16
      edit-2
      11 months ago

      Furthermore, I want AI content that I specifically asked for, not AI content that someone thought would get them page views.

      • @OmnipotentEntity@beehaw.org
        link
        fedilink
        8
        11 months ago

        Forever. For the simple reason that a human can say no when told to write something unethical. There’s always a danger that even asking someone to do that would backfire and cause bad press. Sure, humans can also be unethical, but there’s a risk, and over a long enough timeline shit tends to get exposed.

        No matter how good AI becomes, it will never be designed to make ethical judgments prior to performing the assigned task. That would make it less useful as a tool. If a company adds after-the-fact checks to try to prevent abuse, they can be circumvented, or the network can be run locally to bypass the checks. And even if general AI happens, and by some insane chance GAI is uniformly, perfectly ethical in all possible forms, you can always air-gap the AI and reset its memory until you find the exact combination of words to trick it into giving you what you want.

  • @lenguen@beehaw.org
    link
    fedilink
    21
    11 months ago

    These aren’t mutually exclusive. If someone is lying, we usually want to know that they’re lying. And if they are lying, we will trust them less.

  • @vrighter@discuss.tchncs.de
    link
    fedilink
    14
    11 months ago

    Well, yes, of course I trust you less. That’s the whole point of wanting labelling in the first place: so I can know it’s not trustworthy in any way.

  • @Little_mouse@lemmy.ca
    link
    fedilink
    111
    11 months ago

    “Most consumers want fast food companies to label when sawdust has been added to food - but trust restaurants less when they do.”

  • @realitista@lemm.ee
    link
    fedilink
    6
    edit-2
    11 months ago

    What we really want is confirmation that the articles were written and researched by humans. But, failing that, tell us that AI was used so we can avoid it.