By Henry Ridgwell September 18, 2020
Social media companies are taking down videos and images that could be vital in prosecuting serious crimes, including war crimes, according to a new report from Human Rights Watch.
In its investigation titled “‘Video Unavailable’: Social Media Platforms Remove Evidence of War Crimes”, Human Rights Watch says companies like Facebook, YouTube and Twitter are increasingly using Artificial Intelligence algorithms to remove material deemed offensive or illegal. Researchers fear vital evidence is being missed or destroyed.
The report cites videos collated from social media by the investigative organization Bellingcat showing the Russian ‘Buk’ missile launcher that prosecutors say fired the missile that brought down Malaysia Airlines Flight MH17 in 2014.
The evidence was later used by the Dutch-led multinational Joint Investigation Team conducting the criminal investigation into the shooting down of the jet, which killed 298 people. Russia denies involvement.
“What we know from Bellingcat is that in the later stages of the investigation, they went back to look at the sources, some of the social media posts that they’d used to substantiate their investigations, in order to provide that to judicial authorities in the Netherlands. And at that point the content had come down,” says Belkis Wille of Human Rights Watch, who co-authored the report.
Wille told VOA that evidence from social media plays a central role in many of the organization’s investigations.
“What we started to notice in the last few years, particularly since 2017, is that we would see a video of, let’s say, soldiers executing someone, or an ISIS (Islamic State) propaganda video, and if fifteen minutes or an hour later we went back to look at the video again, it was suddenly gone,” Wille said in an interview Wednesday.
Governments are putting increasing pressure on internet companies to remove offensive, illegal or dangerous material. Social media firms pledged to do more to block extremist content following the live-streaming on Facebook of a terror attack on two mosques in Christchurch, New Zealand, in 2019, which killed 51 people.
Social media companies told Human Rights Watch they are required by law to remove material that could be offensive or incite terror, violence or hatred. As well as human moderators, many also use artificial intelligence algorithms to take down material.
“Nowadays these algorithms are so effective that they are taking down content the minute it gets posted. So, no user actually gets to see that content before it comes down,” Wille said.
The International Criminal Court issued its first arrest warrant based largely on evidence collected from social media in 2017, for the Libyan commander Mahmoud Al-Werfalli, who is accused of shooting dead several captives in Benghazi. He remains at large.
Meanwhile, civil society groups like the Syrian Archive are using social media videos to document potential war crimes, including the use of chemical weapons. The group has also complained of vital evidence being deleted from social media before it can be logged and analyzed.
Belkis Wille of Human Rights Watch says there could be a solution.
“What we’re calling for is the creation of some kind of global mechanism, sort of an archive or library, where content that got taken down from social media companies would be transferred to. And then it would be up to this body to sort and store this content and then to figure out a system of granting access to people seeking accreditation to get access to that content – not for them to replicate and post it online, but actually to use it for these investigative purposes.”
Human Rights Watch says it is in dialogue with social media companies about creating such an archive. Twitter told the group that it is unable to provide civil society organizations access to users’ content or data without an appropriate legal warrant, adding that it is ‘supportive of efforts through the Global Internet Forum to Counter Terrorism (GIFCT)’s working group on legal frameworks to consider potential avenues to allow greater access to content for appropriate uses.’ Twitter declined to offer further comment to VOA.
Facebook and YouTube had not responded to Human Rights Watch or VOA requests for comment by the time of publication.