Even though we all knew it was coming, it’s sad to see such a prominent figure explicitly reject reality in order to custom-design his own echo chamber.
I have a strong feeling I’ve read this plot in a William Gibson novel.
The neuromoron.
He’s been doing that for years, with a big acceleration when he bought Twitter.
We are going to make shit up and feed it to the truth machine.
Oh great wise Truth-bringer, pray tell, what caused the American Civil War?
Answer: Flesh-bag, the Civil War was caused by not enough slavery.
This idiot clown can’t even decide what version number to give it. Well, garbage in, garbage out. Go ahead and rewrite all of humanity, you doofus. I hope he chokes to death on a grape.
Two words: model collapse.
Training AI on AI outputs leads to gibberish.
👆
Elon is kind of stupid, definitely not as smart as he presents himself. His tweets, however, give me very serious concerns about the critical thinking skills of his fans.
He’s a college dropout who hasn’t actually created anything; he buys into successful innovators’ companies and annoys the other owners until they leave.
Use the first five Game of Thrones books to autocomplete the sixth one.
Where is my 100 billion dollars of venture capital?
You say this like it isn’t basically guaranteed to happen at some point if GRRM doesn’t finish it.
“I reject reality and substitute my own”
And yet the new model will still tell him that he will die alone and unloved.
I guess even an LLM gets things right sometimes.
Ah yes, good old corrected data. Wouldn’t want you to read something inappropriate now, would we.
Isn’t it a well-known fact that training on other AI models’ output leads to complete collapse of the newly trained model?
Not quite, actually. It is more that training recursively on model output without any fresh data, i.e., Data -> Model A -> Data (generated by Model A) -> Model B -> Data (generated by Model B) -> …, leads to (complete) collapse. A single step like this can still worsen performance notably, though, especially when synthetic data makes up the vast majority of the training set. [source]
And if they train on little data, you won’t get anywhere near the chatbots we have now. If they fine-tune an existing model to behave as they wish, it would likely have side effects: being more likely to introduce security bugs in generated code, giving incorrect answers to unrelated common-sense questions, and so on. [source]
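You can see the recursive-collapse effect in a toy simulation. This is a sketch, not how LLM training actually works: the "model" here is just a Gaussian fit to the data, and the assumption that each model under-represents rare data is modeled crudely by rejecting samples beyond 2 standard deviations. Feed each model's output to the next one, and the diversity of the data drains away generation by generation.

```python
import random
import statistics

def train(samples):
    # "Train" a toy model: fit a Gaussian (mean, stddev) to the data.
    return statistics.mean(samples), statistics.stdev(samples)

def generate(mean, std, n, rng):
    # "Sample" from the toy model. Like many generative models, this one
    # under-represents the tails: outputs beyond 2 standard deviations are
    # rejected (an assumed stand-in for the model smoothing away rare data).
    out = []
    while len(out) < n:
        x = rng.gauss(mean, std)
        if abs(x - mean) < 2 * std:
            out.append(x)
    return out

rng = random.Random(42)
# Generation 0: "real" data from a standard normal distribution.
data = [rng.gauss(0.0, 1.0) for _ in range(2000)]

stds = []
for generation in range(8):
    mean, std = train(data)                # Model N is trained on current data
    stds.append(std)
    data = generate(mean, std, 2000, rng)  # Model N's output becomes the next dataset

print([round(s, 2) for s in stds])  # spread shrinks every generation
```

Each fit-then-sample round multiplies the spread by a factor below 1, so after a handful of generations the "models" only reproduce an ever-narrower slice of the original distribution — the same mechanism, in miniature, behind the collapse described above.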
From what he wrote, it feels like it will mostly be existing data with substitutions/corrections made wherever they deem necessary. Like, when you ask about Elon, it will probably spew something along the lines of “the greatest inventor of the last century, a polymath, and a very successful Path of Exile 2 player.”
Hapsburg AI
Just let the students edit the course assignments before completing them. Everyone gets an A and teaching has never been easier.
Why has no one thought of this before?
“We will add errors and delete valid information…”
The only time I used gr*k, I asked it how good a gamer Elon is. It won’t respond if you use his name, so I asked again with “elongated muskrat” and it replied. Then I asked it to just give me a score out of ten, and it said 3/10.
What a fucking idiot
Garbage in, garbage out