
Disinformation expert: Facebook groups are destroying America

We saw misinformation spread at alarming rates during the Oregon wildfires. Experts blame the spread of disinformation, in part, on private Facebook groups.

PORTLAND, Ore. — People who study disinformation say private Facebook groups are fertile breeding grounds for spreading false and misleading stories.

Nina Jankowicz, a disinformation fellow at the Wilson Center, co-authored an op-ed in Wired, called "Facebook Groups are Destroying America."

Jankowicz said a shift happened in 2016, when Facebook pivoted from a digital town square to a digital living room, with an emphasis on these private groups. 

The private nature of these groups makes people more comfortable sharing information and believing what's posted there.

"These communities that are centered around that emotion provide this prime attack surface for anybody who wants to spread this information," Jankowicz said. "And they don't even have to invest in advertising or bots and trolls or inauthentic accounts in order to spread it. They just drop it in the group."

Some of these private Facebook groups have tens of thousands of members and virtually no oversight about what gets posted.

But even the private groups with a few hundred people are susceptible to bad actors spreading disinformation.

Oregon state Rep. Julie Fahey, a Democrat who represents West Eugene and Junction City, has posted frequently about misinformation on Twitter.

Fahey said a person joined her neighborhood Facebook group of a few hundred people a few weeks ago and immediately started posting wrong information about the wildfires.

The poster claimed, falsely, that the Oregon Attorney General was warning people about protestors starting fires in neighborhoods.

There were red flags that it was a fake account, like punctuation mistakes and the fact that the poster referred to Oregon Attorney General Ellen Rosenblum as a man.

The neighbors reported the account and it was taken down. 

"I was very disappointed to see it," Fahey said. "You know, I had heard about a lot of the misinformation about the fires and how that was sort of ricocheting throughout social media and having very real-world consequences."

During the Oregon wildfires, several online rumors led to real-world actions.

RELATED: 'Weaponization of context': Unprecedented misinformation plagues Oregon's wildfire response

Some people refused to leave their homes during evacuation orders, citing rumors that antifa was in the area. The Clackamas County sheriff said that was not true. 

Armed men set up illegal checkpoints to stop people in Clackamas County, and law enforcement agencies, including the Douglas County Sheriff's Office, begged people to stop inundating 911 operators with false rumors about antifa arrests.

Disinformation experts said it's very easy for bad actors to infiltrate private groups and spread misinformation. The more people are aware and skeptical of information before it's shared, the better. 

Jankowicz said to check the source, author, and contact page of the news organization before you post something. 

You can also copy the text of an article and paste it into Google or the search function on Facebook to see where else it's been posted.

"Another good indication is if you find yourself getting really emotional in response to a piece of content, that should set off your Spidey sense for you to take a step back and start doing these checks of, like, 'Is this a real news outlet,'" said Jankowicz.

Facebook leaders have said they are taking steps to fight disinformation across the platform and in private groups.

The company said it has hired more than 30,000 people across its safety and security teams.

In a 2019 blog post, the company said in part: 

One of the main ways we keep people safe is proactively identifying and removing posts and groups that break our rules. This is a main area of focus for the Safe Communities Initiative, and it runs across private and public groups.

Increasingly, we can use AI and machine learning to proactively detect bad content before anyone reports it, and sometimes before people even see it. As content is flagged by our systems or reported by people, trained reviewers consider context and determine whether the content violates our Community Standards. We then use these examples to train our technology to get better at finding and removing similar content. Just as we used proactive detection in public, closed and secret groups before, this process will continue to apply to all public and private groups under our new simplified privacy model.

Researchers like Jankowicz said the company could do more, including making groups public once they reach a certain number of members.

Do you have something you want us to investigate? Email us at CallCristin@kgw.com