Americans can become more cynical about the state of society when they see harmful behavior online. Three studies of the American public (n = 1,090) revealed that they consistently and substantially overestimated how many social media users contribute to harmful behavior online. On average, they believed that 43% of all Reddit users have posted severely toxic comments and that 47% of all Facebook users have shared false news online. In reality, platform-level data shows that most of these forms of harmful content are produced by small but highly active groups of users (3–7%). This misperception was robust to different thresholds of harmful content classification. An experiment revealed that overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media. However, these effects can be mitigated through a targeted educational intervention that corrects this misperception. Together, our findings highlight a mechanism that helps explain how people’s perceptions and interactions with social media may undermine social cohesion.

  • TehPers@beehaw.org
    2 months ago

    these effects can be mitigated through a targeted educational intervention that corrects this misperception.

    Let me make sure I understand this. The solution to “users who post harmful content” on moderated platforms is to educate others that those users are the minority?

    Sure, education would be good here, but I’m sure someone out there can think of another solution to this problem.

    • Kissaki@beehaw.org
      2 months ago

      “these effects” refers to the sentence right before it, “overestimating the proportion of social media users who post harmful content makes people feel more negative emotion, perceive the United States to be in greater moral decline, and cultivate distorted perceptions of what others want to see on social media”.

In other words, the overestimation (>40% when it's actually <10%) and the effects resulting from that overestimation can be mitigated through education (that the real figure is in fact much lower).

      • TehPers@beehaw.org
        2 months ago

My point was that you can also reduce the overestimation by properly moderating these platforms. If there are fewer (or no) posts containing harmful content, then people will naturally estimate a lower percentage. Plus, you have less harmful content.

  • TheAlbatross@lemmy.blahaj.zone
    2 months ago

    I think we gotta shift social media to be wildly hostile to every and any company. We gotta bully companies such that it’s risky business to advertise on social media.

    Social media can never truly belong to the people as long as advertisers wanna use it for their own means. We gotta get mean and indiscriminate.

Every corporation's posts and pages should be filled with hatred and disdain, a toxicity heretofore unseen online.

    • RobotToaster@mander.xyz
      2 months ago

      I think we gotta shift social media to be wildly hostile to every and any company.

      Especially the social media companies.

      • TheAlbatross@lemmy.blahaj.zone
        2 months ago

        I thought about that, and while there’s probably some value in diversifying the approaches, I’d worry about the potential backlash of people thinking anyone trashing a company is a bot.

        I think the key to success is making it memetically hilarious to be incredibly toxic towards any company.

        • Tyrq@lemmy.dbzer0.com
          2 months ago

It’s interesting: the language models are typically good enough to fool most people in an offhand way. It seems to have worked well for the propaganda machine, insofar as it might encourage actual people to think an opinion is more popular than it is.

          And this all runs into the idea of forcing people to have verified online identities to limit the harm the dead internet can do to actual people. Not that I like that, but that’s the genie I see being out of the bottle with or without this route.

Either way, you’re probably right about countering the toxicity of these corps with our own toxicity, so I guess it’s still fighting fire with fire. Just like any meme event, it just needs to catch on with the right few people.