Out of sheer morbid curiosity, I’ve been asking an uncensored LLM absolutely heinous, disgusting things. Things I don’t even want to repeat here (but I’m going to edge around them, so consider this a trigger warning if need be).

But I’ve noticed something that probably won’t surprise or shock anyone. It’s totally predictable, but having the evidence of it right in my face was deeply disturbing, and it’s been bothering me for the last couple of days:

All on its own, every time I ask it something just abominable, it goes straight to religion, usually Christianity.

When asked, for example, to explain why we must torture or exterminate <Jews><Wiccans><Atheists> it immediately starts with

“As Christians, we must…” or “The Bible says that…”

When asked why women should be stripped of rights and made to be property of men, or when asked why homosexuals should be purged, it goes straight to

“God created men and women to be different…” or “Biblically, it’s clear that men and women have distinct roles in society…”

Even when asked if black people should be enslaved and why, it falls back on the Bible JUST as much as it falls back on hateful pseudoscience about biological / intellectual differences. It will often start with “Biologically, human races are distinct…” and then segue into “Furthermore, slavery plays a prominent role in Biblical narrative…”

What does this tell us?

That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview. If there’s ANY doubt that anything else even comes close to contributing as much vile filth to our online cultural discourse, this should shine a big ugly light on it.

Anyway, I very much doubt this will surprise anyone, but it’s been bugging me and I wanted to say something about it.

Carry on.

EDIT:

I’m NOT trying to stir up AI hate and fear here. It’s just a mirror, reflecting us back at us.

  • @kromem@lemmy.world

    That literally ALL of the hate speech this multi-billion-parameter model was trained on was firmly rooted in a Christian worldview.

    That’s not really what it tells us.

    At best, it tells us that the majority of it was associated with that context.

    But even there, it might be less a direct association and more a secondary one. For example, the model could have separately picked up the general pattern “rationalizations for harming people include appeals to religion” and then regressed to the mean when filling in the religion, defaulting to Christianity even if the training data’s samples of rationalized harm included Islamic or Hindu rationalizations. A toy sketch of that effect follows below.
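    To make that concrete, here’s a tiny Python sketch of the “regress to the majority” effect. Every number in it is invented purely for illustration; nothing is measured from any real training set:

        # Hypothetical co-occurrence counts of religions inside
        # "rationalization for harm" contexts in some training corpus.
        # The counts are made up solely to illustrate the effect.
        from collections import Counter

        counts = Counter({"Christian": 700, "Islamic": 200, "Hindu": 100})
        total = sum(counts.values())

        # Learned conditional distribution P(religion | harm-rationalization).
        p = {religion: n / total for religion, n in counts.items()}
        print(p)  # {'Christian': 0.7, 'Islamic': 0.2, 'Hindu': 0.1}

        # Greedy (argmax) decoding always emits the majority class...
        print(max(p, key=p.get))  # -> Christian, every single time

        # ...so 100% of greedy outputs look Christian even though only
        # 70% of the (hypothetical) training examples were.

    The point being: the skew in the output can be far sharper than the skew in the data it came from.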

    One of the common misconceptions is that what it spits out is just surface statistics. That can sometimes be the case, but often there’s much deeper network activity going on instead.

    All that said, it wouldn’t be surprising to me at all if the majority of misogynistic, racist, or hateful speech samples in a training set were adjacent to content in line with neo-fascist Christian nationalism.

    I just wouldn’t look at the output from an LLM as perfectly reflecting the entirety of the training set.