Tech giants are under fire for facilitating terrorism


AFTER last weekend’s terrorist attack in London, Theresa May, Britain’s prime minister, declared that “enough is enough”. She was not suggesting that some reasonable amount of terrorism had now been exceeded; rather, that extremism had been too readily tolerated in the past. She specifically criticised the big internet firms.

"We cannot allow this ideology the safe space it needs to breed,” she said, adding that Britain and its allies needed to "regulate cyberspace to prevent terrorist and extremist planning”, AzVision.az reports citing the Economist.

The threats Mrs May and other political leaders identify online are twofold. The first is the extremist material that spews from jihadist websites and chat-rooms and spreads across social media. The second is terrorists’ ability to communicate via encrypted messaging apps. Together, they create an online echo chamber that amplifies anti-Western messages and helps propel a few individuals on their journey towards murder.

The three men who stabbed and rammed Londoners in the latest attack were not classic “lone wolves”, radicalised online and invisible to the security services until they acted. They were part of a London-based group that supports Islamic State (IS) and is linked to Al Muhajiroun, a banned Islamist organisation. At least one of them was known to law enforcement. According to the Henry Jackson Society, a British think-tank, a quarter of all those convicted in Britain for offences related to Islamist terrorism between 1998 and 2015 were affiliated to Al Muhajiroun.

But evidence is emerging of the role the internet played in reinforcing the three men’s extremism and helping them to plan their attack. One, Khuram Butt, had links to Mohammed Shamsuddin and Abu Haleema, two extremist preachers. The latter has a hefty following on YouTube and is thought to have been partly responsible for the online radicalisation of an Australian teenager convicted last year of plotting to behead a police officer. Abu Haleema was arrested on suspicion of encouraging terrorism, then released on condition that he stopped using social media to spread his views. His Twitter account was closed at the request of MI5, Britain’s security service; YouTube is reported to have refused to take down his videos.

Mr Butt was also reportedly influenced by the online videos of Ahmad Musa Jibril, an American preacher and IS recruiter, which YouTube still carries. According to the London-based International Centre for the Study of Radicalisation and Political Violence, more than half of a sample group of foreigners who had gone to Syria to fight for IS followed Mr Jibril on Twitter.

The London attackers probably also used jihadist websites to help them plan. Instructional videos showing how to kill as many people as possible by driving into them are not hard to find. And judging by past attacks, the perpetrators may well have communicated through an “end-to-end” encrypted messaging app such as WhatsApp or Telegram.

Radicalisation superhighway

Fears that the internet is promoting and enabling Islamist terrorism are not new. But they have become sharper since 2014, when IS established its “caliphate” in parts of Syria and Iraq. It has put much more effort than its older rival, al-Qaeda, into creating sophisticated online propaganda, which it uses to recruit, promote its ideology and trumpet its social and military achievements. It pays as much attention to digital marketing as any big company, says Andrew Trabulsi of the Institute for the Future, a non-profit research group. “It’s a conversion funnel, in the same way you would think of online advertising.”

At first, IS’s aim was to recruit foreign fighters to Syria and Iraq, where they would help build the caliphate. Around 30,000, including some 6,000 from Europe, heeded its call. But as the tides of war have turned (western Mosul, its last big redoubt in Iraq, is about to fall, and Raqqa, its “capital” in Syria, is under assault by the American-backed Syrian Democratic Forces), it is turning its energies to creating mayhem in the West, in particular Europe. Through its various outlets, including Rumiyah, an online English-language magazine, it is asking supporters not to travel to Syria or Iraq, but to kill people at home.

IS’s media operation was detailed in a report published in 2015 for the Quilliam Foundation, a counter-extremism think-tank in London. “Documenting the Virtual Caliphate” described an outlet that released nearly 40 items a day, in many languages, ranging from videos of battlefield triumphs and “martyrdom” to documentaries extolling the joys of life in the caliphate. Each wilayat, or province, of the caliphate has its own media team producing local content.

Unlike al-Qaeda, which aims its messages at individual terror cells, IS uses mainstream digital platforms to build social networks and “crowdsource” terrorist acts. Its Twitter supporters play whack-a-mole with moderators, setting up new accounts as fast as old ones are shut down. Some accounts broadcast original content; others promote the new accounts that replace suspended ones; others retweet the most compelling material.

When IS releases a new recruitment video, its supporters spring into action. Rita Katz of the SITE Intelligence Group, a Washington-based firm that tracks global terror networks, analysed what happened to “And You Will Be Superior”, a 35-minute video released in March that follows suicide-bombers, from a doctor to a disabled fighter to a child. Translators, promoters, social-media leaders and link-creators joined together to promote it across the internet. One of these groups, the Upload Knights, creates hundreds of links daily across streaming and file-sharing sites. Ms Katz found that in the two days after the film’s release, it distributed the video with 136 unique links to Google services (69 for YouTube, 54 for Google Drive and 13 for Google Photos).

Network effects

There is no doubt that the way IS uses the internet adds greatly to the fear that terrorists set out to foster. But security experts differ in their assessment of its overall impact. “If there is a message that resonates, it will get out there,” says Nigel Inkster, a former intelligence officer now with the International Institute for Strategic Studies in London. What the internet has changed, he says, is the speed at which the message travels, and its ubiquity.

A counter-terrorism expert at Britain’s Home Office agrees: “The internet has allowed the process of radicalisation to evolve, but it has not revolutionised it.” Although online jihadist content can trigger or reinforce radicalisation, it is rarely enough on its own. Creating a terrorist usually requires grooming through offline social networks that provide the camaraderie of shared purpose and the personal bonds which create feelings of obligation.

There is, however, broad agreement that the internet both amplifies the impact of terrorism and launches some disaffected youths on the path to jihad. The violent images they view desensitise them. Propaganda validates their extremist ideology, provides them with the support of a community and primes them to act by emphasising purification through sacrifice.

All this puts the big internet firms in a bind. They have no interest in helping users spread extremism, and already ban pro-terrorist content in their terms and conditions. But they have been slow to police fake news and extremist propaganda, lest they be accused of making editorial judgments about what can be shared on their platforms. They have mostly relied on reporting systems, whereby users flag extremist content and companies decide whether to remove it after reviewing it. This is cumbersome, slow and costly. Facebook recently announced that it plans to double its workforce of content moderators, hiring another 3,000.

In the 1990s, under pressure from governments, internet firms cleared most child pornography from their platforms. But it is easier to write a program that recognises an image of a child in a sexual act than one that can distinguish extremist content. An algorithm might spot and block images of beheadings, but that would censor some news articles and documentaries.

Mark Zuckerberg, Facebook’s boss, has said he wants to invest in artificial intelligence to root out terrorist propaganda, but that it will take many years to develop new tools. In the meantime, the social network and other platforms must rely on human moderators, who have to make difficult judgments. Facebook’s guidelines, which were recently leaked, show how hard it is to distinguish posts that should be removed from those that are offensive but permissible. For example, posting “I’m going to destroy the Facebook Dublin office” is allowed, but posting “I’m going to bomb the Facebook Dublin office” is not, because it is more specific in suggesting a weapon.

Some firms are experimenting with new tactics. Jigsaw, a sister company of Google, has tested a “redirect method”, showing ads and videos that counter IS propaganda to people who search for extremist material on Google and YouTube. Microsoft is trying something similar for its search engine, Bing. Last year Google, Facebook, Twitter and Microsoft agreed to work together on a shared database in which known terrorist images and videos are tagged with a unique digital fingerprint. Other companies can spot tagged content and remove it from their own platforms. But the database is at an early stage and includes only the worst material.
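The companies have not published the technical details of that shared database, but the basic mechanism it describes, fingerprinting a file once and letting every participant recognise it afterwards, can be sketched as follows. This is a minimal illustration using an exact-match cryptographic hash; real systems are understood to use perceptual hashes so that re-encoded or slightly altered copies still match, and the class and method names here are hypothetical.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a digest that can serve as a shared identifier for a file.

    SHA-256 only catches byte-for-byte duplicates; production systems use
    perceptual hashing to survive re-encoding and minor edits.
    """
    return hashlib.sha256(data).hexdigest()

class SharedHashDatabase:
    """Hypothetical shared store of fingerprints of known extremist content."""

    def __init__(self) -> None:
        self._known: set[str] = set()

    def tag(self, data: bytes) -> str:
        """One platform marks a file as terrorist content; returns its fingerprint."""
        digest = fingerprint(data)
        self._known.add(digest)
        return digest

    def is_known(self, data: bytes) -> bool:
        """Any participating platform can check an upload against the shared set."""
        return fingerprint(data) in self._known

# One firm tags a video; another firm can then recognise the same file
# without ever seeing who tagged it or why.
db = SharedHashDatabase()
db.tag(b"<bytes of a known propaganda video>")
print(db.is_known(b"<bytes of a known propaganda video>"))  # True
print(db.is_known(b"<some unrelated file>"))                # False
```

Only fingerprints are exchanged, not the files themselves, which is part of what makes such a scheme palatable to competing firms.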

Further progress will require joint action by internet firms and governments. Unfortunately, relations have been strained in recent years. The firms used to give some discreet help to authorities on both sides of the Atlantic, says a former British intelligence officer. But they stopped when their co-operation was revealed in the classified material leaked by Edward Snowden in 2013. Some of their customers were horrified to learn that their privacy, however notionally, was being compromised by what they saw as collusion with government spooks.

Commercial interests combined with West Coast libertarianism to create a dialogue of the deaf. Security services accused firms of ignoring public safety and their legal responsibilities; Apple, Google and others retorted that what they were being asked to do was either impossible or would threaten their profits. After Mrs May’s speech, some felt they were being scapegoated. “Politicians aren’t blaming the car-rental companies for renting white vans, or telecoms firms for offering phone and internet services to bad guys, but they are blaming internet platforms for allowing them to do bad stuff,” grumbles an executive at an American internet firm.

Even so, firms are waking up to the fact that if they do not find ways to work with governments, they will be forced to do so. They fear laws along the lines of one recently proposed in Germany that would see them fined vast sums unless they speedily remove any content that has been flagged as hate speech. They also have a growing commercial interest in cracking down on terrorist content, which hurts their brands and could cut revenue. In recent months some of YouTube’s clients pulled their ads after realising that they were appearing next to extremist videos.

Quietly, co-operation between governments and internet firms is picking up once more. In Britain a specialist anti-terror police unit that trawls the web for extremist material removed 121,000 pieces of content last year with the help of some 300 companies around the world. Getting around encryption poses greater technical challenges. Weakening it would not be in the public interest, says Robert Hannigan, who ran GCHQ (Britain’s signals-intelligence agency) until January this year.

The idea of forcing firms to put “back doors” into their software that authorities could use to spy on terrorists has been largely abandoned. It would make the software less secure for all its users, might violate free-speech protections in America and would anyway be impossible, since some messaging apps, including Telegram (developed by a Russian, Pavel Durov, now a citizen of St Kitts and Nevis), are beyond the reach of Western laws.

The authorities do, however, have other options. Once an intelligence agency has access to a target’s phone or laptop, almost anything is possible. These devices’ built-in cameras and microphones make them excellent for bugging. Or the spooks can install covert monitoring software to see what is being displayed on the screen and to log a user’s keystrokes. Since messages must be decrypted before their recipients can read them, this makes it possible to bypass even the strongest encryption.
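The point that endpoint access bypasses even the strongest encryption follows from a basic property of end-to-end schemes: the ciphertext protects the message only in transit, so the plaintext must exist on the recipient’s device. A toy sketch makes this concrete; it uses a one-time-pad XOR cipher purely for illustration, not any real messaging protocol.

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """Toy one-time-pad cipher: XOR each byte with the corresponding key byte."""
    return bytes(a ^ b for a, b in zip(data, key))

message = b"meet at nine"
key = secrets.token_bytes(len(message))  # known only to the two endpoints

ciphertext = xor(message, key)   # this is all an interceptor on the wire sees
plaintext = xor(ciphertext, key) # decryption happens on the recipient's device

assert plaintext == message      # the endpoint necessarily holds the plaintext
```

Anything running on the device after that last step, a screen reader, a keystroke logger, sees `plaintext` directly, which is why agencies target devices rather than the cipher itself.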

Governments and tech firms now broadly accept that they have a common interest in establishing global standards for exchanging data across borders. A bilateral agreement that Britain and America reached during Barack Obama’s administration is before Congress, awaiting enabling legislation. It would not permit Britain to get data on American citizens or residents; and access would be limited to targeted orders relating to the prevention or investigation of serious crime and terrorism.

This could become a template for other international agreements. In testimony before the Senate Judiciary Committee in Washington in May, Brad Smith, the president of Microsoft, argued for a change in the legal framework, which he said “impedes America’s allies’ legitimate law-enforcement investigations” and exposes American tech firms to potential conflicts of jurisdiction. Greater legal certainty, less confrontation and more co-operation between governments and firms will not drive jihadist propaganda off the internet altogether. But they should clear the worst material from big sites, help stop some terrorists—and absolve tech firms from the charge of complicity with evil.
