Facebook is unwittingly auto-generating content for terror-linked groups that its artificial intelligence systems do not recognize as extremist, according to a complaint made public on Thursday.
The National Whistleblowers Center in Washington carried out a five-month study of the pages of 3,000 members who liked or connected to organizations proscribed as terrorist by the US government.
Researchers found that the Islamic State group and al-Qaeda were "openly" active on the social network.
More worryingly, Facebook's own software was automatically creating "celebration" and "memories" videos for extremist pages that had amassed sufficient views or "likes."
The National Whistleblowers Center said it filed a complaint with the US Securities and Exchange Commission on behalf of a source who preferred to remain anonymous.
"Facebook's
efforts to stamp out terror content have been weak and ineffectual," read
an executive summary of the 48-page document shared by the center.
"Of
even greater concern, Facebook itself has been creating and promoting terror
content with its auto-generate technology."
Survey results shared in the complaint indicated that Facebook was not delivering on its claims about eliminating extremist posts or accounts.
The company told AFP it had been removing terror-linked content "at a far higher success rate than even two years ago" since making heavy investments in technology.
"We
don't claim to find everything and we remain vigilant in our efforts against
terrorist groups around the world," the company said.
Facebook and other social media platforms have been under fire for not doing enough to curb messages of hate and violence, while at the same time being criticized for failing to offer equal time for all viewpoints, no matter how unpleasant.
Facebook in March announced bans, at its main network and Instagram, on praise or support for white nationalism and white separatism.