As lawmakers on Capitol Hill sought reassurances from a trio of major tech platforms on their efforts to better recognize and remove extremist content online, Facebook unveiled new steps it is taking to rid the platform of harmful posts, videos, and photos.
The social media giant announced Tuesday it has updated its definition of terrorism to include attempts at violence directed toward civilians “with the intent to coerce and intimidate,” in addition to acts of violence undertaken with a political or ideological goal.
It also plans to work with law enforcement to expand the use of artificial intelligence and improve its detection of first-person footage of violent events, such as the attack in Christchurch, New Zealand, this year, in which 50 people were killed. The suspected gunman livestreamed the shooting on Facebook, and although a user flagged the video, it remained on the site for roughly 30 minutes before it was taken down.
Additionally, Facebook told Rep. Max Rose, the chairman of the Homeland Security Subcommittee on Intelligence and Counterterrorism, in a letter that it is blocking links to pages on 8chan and 4chan that distribute extremist content. Both sites have come under scrutiny as havens for extremists, and at least three deadly attacks have been linked back to the platforms in recent months.
The New York Democrat said in a statement he is “encouraged” by Facebook’s willingness to ban the links and urged Congress and tech companies to “do more to combat the spread of terrorism and keep our communities safe.”
To oversee Facebook’s decisions involving content removal, the Menlo Park, California-based company established an independent oversight board, the details of which were rolled out this week. The panel will be made up of a “diverse and qualified” group separate from Facebook’s board of directors, whose members will serve three-year terms.
Facebook CEO Mark Zuckerberg bills the panel as an “advocate for the community” that will ensure the company fulfills its “responsibility to keep people safe.” The board will serve as the adjudicator of disputes involving Facebook’s content decisions.
The panel will hear just a small number of cases at first, Zuckerberg said, but he hopes it will expand its scope and “potentially include more companies across the industry.”
“Building institutions that protect free expression and online communities is important for the future of the internet,” he added.
Facebook’s latest efforts came as representatives from Facebook, Twitter, and Google testified before the Senate Commerce Committee on Wednesday about the proliferation of extremist content online and the steps the companies are taking to block and remove violent postings, videos, and photos.
The three tech companies outlined the steps they are taking to identify and remove such content more quickly, though committee members pushed them to ramp up their efforts.
“I would suggest even more needs to be done,” Sen. Richard Blumenthal said, “and it needs to be better, and you have the resources and technological capability to do more and better.”
Facebook and Twitter, the favored social media platform of President Trump, have both faced increasing pressure to devote more resources toward removing hateful and violent content.
Nick Pickles, Twitter’s director of public policy strategy, told senators the company has suspended more than 1.5 million accounts in a three-year span for violations related to the promotion of terrorism. More than 370,000 accounts were suspended in 2018 alone.
Meanwhile, Derek Slater, director of information policy at Google, said nearly 90% of the 9 million videos YouTube removed in the second quarter of 2019 were flagged by its automated systems. Of those videos detected, more than 80% were removed before they were viewed once.
Facebook relies on artificial intelligence and 15,000 people worldwide to identify and block harmful content. The social media giant also enacted new restrictions for its livestreaming service following the Christchurch attack but has urged the federal government to take action, too.