The widow of an American killed in a shooting attack at a Jordanian police training center has sued Twitter, blaming the social media company for making it easier for Islamic State to spread its message.
Tamara Fields, a Florida woman whose husband Lloyd died in the 9 November attack, accused Twitter of having knowingly let the militant Islamist group use its network to spread propaganda, raise money and attract recruits. She said the San Francisco-based company had until recently given Isis an “unfettered” ability to maintain official Twitter accounts.
“Without Twitter, the explosive growth of Isis over the last few years into the most-feared terrorist group in the world would not have been possible,” said the complaint, filed on Wednesday in federal court in Oakland, California.
Fields said that at the time of her husband’s death, Isis had an estimated 70,000 Twitter accounts, posting 90 tweets per minute.
The case is the latest episode in the global technology industry’s deepening entanglement in complex geopolitical issues, including terrorism.
Silicon Valley executives have continued to maintain that they merely provide platforms for communication, and that they cannot be held accountable for what users do on them.
Meanwhile, Isis has increasingly relied on a sophisticated social media operation, built on well-produced viral videos and charismatic Twitter accounts, to recruit both in the US and abroad.
“While we believe the lawsuit is without merit, we are deeply saddened to hear of this family’s terrible loss,” said a Twitter spokesperson. “Like people around the world, we are horrified by the atrocities perpetrated by extremist groups and their ripple effects on the internet. Violent threats and the promotion of terrorism deserve no place on Twitter and, like other social networks, our rules make that clear.
“We have teams around the world actively investigating reports of rule violations, identifying violating conduct, partnering with organizations countering extremist content online, and working with law enforcement entities when appropriate.”
The lawsuit is unusual in invoking the US Anti-Terrorism Act, which has mostly been used by Americans to sue Hamas, Hezbollah and other alleged foreign terrorist organizations, said Harmeet Dhillon, a lawyer and vice-chairman of the California Republican party.
“The same argument could be used against phone companies for allowing alleged terrorists to place phone calls, or FedEx for allowing alleged terrorists to mail pamphlets,” Dhillon said.
“There are many viewpoints held by Americans that other Americans would say are offensive, or support terrorism. I’ve seen the occupation in Oregon described as terrorism. By allowing tweets that support the Oregon occupiers, is Twitter providing material support to terrorists?”
The federal appeals courts have been split over just how broad the Anti-Terrorism Act is. In Rothstein v UBS AG, plaintiffs sued UBS for allegedly aiding and abetting Hamas by carrying out routine banking transactions that ultimately benefited terrorists; the trial court dismissed the suit, and the second circuit upheld the dismissal in 2013. On the other hand, in a landmark 2014 case, BNP Paribas pleaded guilty and agreed to forfeit $8.9bn for moving roughly that amount through the US financial system on behalf of clients in countries under US sanctions.
“Twitter is really in the spotlight right now,” said Chenxi Wang, chief strategy officer at the security firm Twistlock. “At some point the supreme court will need to step in.”
For the social media giants, the biggest challenge may be not just ethical but practical: identifying terrorists on their platforms is extremely difficult and requires enormous resources.
“If you’re trying to monitor Isis you need people who speak Farsi, Russian, Uzbek and many different Arabic dialects. Some of these are very narrow sets of languages where even the CIA has trouble recruiting people with these language skills,” said Daniel O’Connor, vice-president of public policy at the Computer & Communications Industry Association.
At the recent White House terrorism summit, a meeting between national security officials and tech executives, some on the Silicon Valley side wondered whether they could create algorithms to find terrorists.
“People seem to think algorithms can do this,” O’Connor said. “But when you realize speech is context dependent, you can’t just write an algorithm to filter it.”
Child abuse imagery is one of the few categories of content for which companies are liable if it passes through their servers.
“Child [abuse images are] different. The speech itself is completely categorically illegal and you can’t possess it. So if it’s on their servers they would be personally liable,” said David Greene, a senior staff attorney at the Electronic Frontier Foundation. “Terrorism speech is not going to be necessarily unprotected speech, in fact it’s almost never going to be unprotected.”