I remember when lossy compression formats like MP3 and JPEG were first popularized, people would run experiments where they'd re-encode the same file lossy-to-lossy over and over and then share the final image, which was this overcooked nightmare
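That generation-loss effect is easy to simulate without any actual image codec. Here's a toy sketch (my own stand-in, not real JPEG): each "save" blurs the signal a little (information loss) and then quantizes it to a coarse palette (compression artifact), and repeating the round trip smears the original detail away.

```python
def lossy_pass(pixels, levels=16):
    """One 'save as lossy' round trip: blur (information loss),
    then quantize to a coarse palette (compression artifact)."""
    n = len(pixels)
    blurred = [
        (pixels[max(i - 1, 0)] + pixels[i] + pixels[min(i + 1, n - 1)]) // 3
        for i in range(n)
    ]
    step = 256 // levels
    return [(p // step) * step for p in blurred]

signal = [0, 255] * 8  # a high-contrast 1-D "image"
for generation in range(20):
    signal = lossy_pass(signal)
print(signal)  # the sharp edges have smeared into muddy midtones
```

One pass barely changes anything; twenty passes flatten the contrast almost completely, which is exactly the overcooked-JPEG dynamic.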
I wonder if a similar dynamic applies to the scenario presented in the comic with AI summarization and expansion of topics. Start with a few bullet points, have it expand them to a paragraph or so, have it summarize that back down to bullet points, repeat 4-5 times, then see how far you drift from the original point.
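The experiment loop itself is simple to harness. A sketch, with the `expand` and `summarize` steps passed in as plain functions (in a real run those would be LLM calls, which I've left out) and a crude word-overlap score to track drift from the original text:

```python
def jaccard(a, b):
    """Crude word-overlap similarity between two texts (0..1)."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 1.0

def telephone(text, expand, summarize, rounds=5):
    """Run expand -> summarize cycles, recording similarity
    to the original after each round."""
    current = text
    history = []
    for _ in range(rounds):
        current = summarize(expand(current))
        history.append(jaccard(text, current))
    return current, history
```

Plug in real model calls for `expand` and `summarize` and `history` shows whether each round drifts further from the starting bullet points. Word overlap is a blunt metric, but it's enough to see monotone decay if it's there.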
In my experience, LLMs aren’t really that good at summarizing
It’s more like they can “rewrite more concisely” which is a bit different
Summarizing requires understanding what’s important, and LLMs don’t “understand” anything.
They can reduce word counts, and they have some statistical models that can tell them which words are filler. But the hilarious state of Apple Intelligence shows how frequently that breaks.
I used to play this game with Google translate when it was newish
There is, or maybe was, a YouTube channel that would run well known song lyrics through various layers of translation, then attempt to sing the result to the tune of the original.
Gradually watermelon… I like shapes.
Twisted translations
Sounds about right to me.
🎵Once you know which one, you are acidic, to win!🎵
If it isn’t accurate to the source material, it isn’t concise.
LLMs are good at reducing word count.
i was curious so i tried it with chatgpt. here are the chat links:
- first expansion
- first summary
- second expansion
- second summary
- third expansion
- third summary
- fourth expansion
- fourth summary
- fifth expansion
- fifth summary
- sixth expansion
- sixth summary
overall it didn’t seem too bad. it sort of started focusing on the ecological and astrobiological side of the same topic but didn’t completely drift. to be honest, i think it would have done a lot worse if i made the prompt less specific. if it was just “summarize this text” and “expand on these points” i think chatgpt would get very distracted
Interesting. I also wonder how it would fare across different models (eg user a uses chatgpt, user b uses gemini, user c uses deepseek, etc) as that may mimic real world use (such as what’s depicted in the comic) more closely
Doesn’t ChatGPT remember the context of the previous question and text?
Maybe mixing different accounts and LLMs makes a bigger difference.
that’s why i ran every request in a different chat session
People do that with google translate as well
Do humans do this as well? And if not, why not?
Humans do this, yes: https://en.m.wikipedia.org/wiki/Telephone_game