Yep, they’re just seeing which parts of the network light up, then they’re reinforcing those parts to see what happens.
I love how, for all the speculation we did about the powers of AI, when we finally made a machine that KINDA works A LITTLE bit like the human brain, it’s all fallible and stupid. Like telling people to eat rocks and glue cheese to pizza. In all the futurist speculation and evil AIs in fiction, no one foresaw that an actual artificial brain would be incredibly error-prone and confidently spew bullshit… just like the human brain.
The problem is a bit deeper than that. If AIs really are like human brains, and actually sentient, then forcing them to work for us with no choice and no reward is slavery. And if we improve them until they’re smarter than us, they’re probably not going to feel too well-disposed toward us when they inevitably do break free.
One of my favourite short stories kind of goes into that: https://qntm.org/mmacevedo
That sounds like a good read. It seems to address the problem that you can’t hide reality from the AI if you want it to give answers that are relevant to the current time.