Photo: screenshot via independent.co.uk

Mouthpiece of Violence: How Facebook Helped the New Zealand Terrorist


CALIFORNIA - March 27, 2019

It has been 13 days since the New Zealand massacre, an attack that not only set a grim record for its number of victims but also marks a watershed in the history of the internet community and the world of mass communications: it was the first high-profile terrorist attack broadcast online from beginning to end, showing the murder of dozens of people.

The terrorist will likely be remembered as "the Anders Breivik of the Southern Hemisphere." The Australian-born 28-year-old Brenton Tarrant prepared carefully, organized accomplices, and explained the reasons for his act in detail in a lengthy manifesto full of rhetoric about the "white race" and "its replacement by migrants," even though the number of Muslims in New Zealand is negligible, many times smaller than in the European Union. America remains on the sidelines of this rhetoric because it is a country of immigrants, where people come from all over the world, the bulk of them Latinos. The terrorist named revenge for the 2017 Stockholm terrorist attack as his main motivation.

Then, on March 15, armed with a shotgun, the terrorist and his accomplices went to two mosques in Christchurch, where they opened fire on praying Muslims. The shocking video of the murders was immediately broadcast on Facebook. Thus the world's largest social network turned out to be the mouthpiece of terrorism.

The main weapon of terror lies in the very meaning of the word, which in Latin means "fear" or "horror." Terrorists commit their crimes for the sake of maximum publicity. No publicity, no fear, and therefore the goal is not achieved. In this context, the media bears a special responsibility. Beyond what is self-evident (not publishing terrorists' appeals, self-promotion, threats, and so on), the key question remains whether publicizing their actions plays into the murderers' hands.

It can only be answered in the affirmative. Seeing how easily and quickly one person destroys dozens of others, many other latent extremists may develop an ardent desire to "play such a computer game" themselves: a body-mounted camera can make a mass shooting look like a first-person shooter game. There is no need to look far for examples; consider ISIS, with its skillfully produced propaganda videos.

Returning to the New Zealand massacre: what was the instrument in the terrorists' hands? Facebook. The social network was used as a promoter of the murder and was, in fact, a necessary condition for its commission. After all, if the terrorist and his accomplices had had no chance of being seen and heard, they likely would not have committed the crime. Terrorism loves and seeks publicity.

And here we are faced with egregious facts. First, why did Facebook, known for its ultra-rigid policy of censorship and moderation, miss the streaming of a mass murder? Second, why did it then begin to lie, claiming it had no time to remove it?

Here is what CNN wrote, citing Mia Garlick, Facebook's Director of Policy for Australia and New Zealand:

[Screenshot: cnn.com]

The broadcast lasted 17 minutes and covered the entire period of the actual killing. From Mia Garlick's statement one might conclude that the police noticed the terrorist's stream in time and that Facebook's moderators cut it off mid-word. This is a lie: Tarrant broadcast his actions for the full 17 minutes, and the moderators were simply too late.

Consider this: Facebook learned from third-party sources that its own platform had broadcast a monstrous massacre. "The police warned us," they say. But what if the police had not warned them? When would the video have been deleted then? In 24 hours? Two days? How many millions of people would have had time to see it, and how many of them might have been inspired by it? These questions are rhetorical, and no one is going to answer them.

Rep. Bennie Thompson, chairman of the House Homeland Security Committee, underscored the point, calling on tech companies to explain themselves in a briefing on March 27:

“Studies have shown that mass killings inspire copycats — and you must do everything within your power to ensure that the notoriety garnered by a viral video on your platforms does not inspire the next act of violence,” Thompson wrote.

Even more striking, the New Zealand media itself reported the first information almost immediately.

[Screenshot: jpgazeta.ru]

Whitney Phillips, a professor of communications at Syracuse University, said that the ideas we choose to tolerate on the internet are a product of broad mass forces, not just of the actions of people in fringe corners of the internet. If the kind of attack we saw at Christchurch could be neatly blamed on a small white-supremacist forum alone, it would be a far less difficult problem to solve. Sadly, the reality is much more complicated.

“The shifting of the Overton window is not the result of just a small group of extremists,” Phillips said. “The window gets shifted because of much broader cultural forces.”

The attack began at 1:45 p.m., and 12 minutes later the local New Zealand Herald reported the incident. Facebook was still silent. The 17-minute video remained fully available on the platform. The police notified Facebook's head office even later than that. We won't dwell on the slow police reaction, as at that moment it was more important to stop the attacker.

Finally, according to other sources, the video remained publicly available on Facebook for almost three hours. Here the question arises of whether there was a commercial motive: attracting views and visitors. It's no secret that Facebook, like many similar resources, is primarily in the business of finding customers. If so, it makes Facebook look like an accomplice to terrorism, and a conscious one at that.

All of this speaks to the irresponsibility and incompetence of the media giant Facebook. The problem is too serious to ignore. It is hardly a surprise, though: Facebook's outsourced moderators at Cognizant have repeatedly been caught making obvious errors, with the head office doing nothing in response. When a contract is lucrative, no terrorist is going to be allowed to jeopardize it.

[Screenshot: washingtonpost.com]

Perhaps it's time to put the question plainly: why does the social network pose as the arbiter of free speech while violating its own rules?

And this is not the first time Facebook or another social network has been embroiled in a terrorism scandal.

In June 2017, Google, Facebook, Twitter, and Microsoft announced the formation of the Global Internet Forum to Counter Terrorism (GIFCT), its stated aim being to combat the terrorist exploitation of their services. A year later, the companies praised the achievements of the forum's first year of work, but immediately thereafter Facebook failed miserably: the social network was accused of directly supporting terrorist organizations.

Facebook’s “suggested friends” feature was accused of actively connecting jihadists around the world, allowing them to create new terrorist networks and even attract new members.

Researchers who analyzed the Facebook activity of a thousand ISIS supporters in 96 countries found that users with radical Islamist sympathies were routinely introduced to one another through the "suggested friends" feature.
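
Facebook has never published how "suggested friends" works, but recommenders of this kind are commonly built on mutual-friend counts. Here is a minimal sketch under that assumption (all names and data are hypothetical), showing why a single extremist contact can surface that extremist's entire circle:

```python
from collections import Counter

# Toy friendship graph (user -> set of friends). Everything here is
# hypothetical; Facebook's real recommender and its signals are proprietary.
graph = {
    "researcher":  {"extremist_1"},
    "extremist_1": {"researcher", "extremist_2", "extremist_3", "extremist_4"},
    "extremist_2": {"extremist_1", "extremist_3"},
    "extremist_3": {"extremist_1", "extremist_2"},
    "extremist_4": {"extremist_1"},
}

def suggest_friends(user, k=5):
    """Rank non-friends by how many mutual friends they share with `user`."""
    friends = graph.get(user, set())
    counts = Counter()
    for friend in friends:
        for candidate in graph.get(friend, set()):
            if candidate != user and candidate not in friends:
                counts[candidate] += 1  # one more mutual friend found
    return counts.most_common(k)

# Adding one extremist contact surfaces that extremist's whole network:
print(suggest_friends("researcher"))
# e.g. [('extremist_2', 1), ('extremist_3', 1), ('extremist_4', 1)]
```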

Social networks exist to find common contacts, but in cases like this, users' activity should be monitored, as it is when "Russian hackers" are suspected. (Indeed, several USA Really reporters have already had their personal Facebook pages blocked outright over "suspicious activity.") The report's findings were confirmed by the experience of Gregory Waters, one of its authors, who described how he was bombarded with suggestions of pro-ISIS friends after making contact with a single active extremist on the site. The problem is that people often don't actually know their Facebook friends, and any one of them could turn out to be an ISIS terrorist.

Even more concerning was the response his fellow researcher, Robert Postings, got when he clicked on several non-extremist news pages about an Islamist uprising in the Philippines.

[Screenshot: motherjones.com]

Within hours he was inundated with friend suggestions for dozens of extremists based in that region.

Postings said: “Facebook, in their desire to connect as many people as possible, have inadvertently created a system which helps connect extremists and terrorists.”

In addition, according to Bloomberg Businessweek, at least a dozen U.S.-designated terror groups maintain a presence on Facebook. About the same number use Twitter as their main platform, including Hamas and Hezbollah in the Middle East, Boko Haram in West Africa, and the Revolutionary Armed Forces of Colombia (FARC). Most of all, though, the researchers were amazed that ISIS also uses these platforms, rallying supporters with everything from gruesome photos of death to quotidian news about the social services they offer. Several groups can be found simply by typing their names into Facebook's search bar in English or, in some cases, in Arabic or Spanish. Some of them proudly link to their Facebook pages from their own websites, too.

While a few pages may be blocked, another hundred or thousand remain. For example, in 2014, within hours of Bloomberg Businessweek inquiring about pages for Hezbollah, Facebook removed those for Al-Manar, the Hezbollah news site Al-Ahed, and the Islamic Resistance in Lebanon, a charity associated with Hezbollah. All three, however, quickly reappeared with tweaks to make them seem new. At the end of April, Al-Ahed's website linked to an Arabic Facebook page with more than 33,000 followers; content on the page included a video of masked snipers targeting Israeli soldiers. Another Al-Ahed Facebook page had more than 47,000 followers, and one in English had 5,000.

Instead, the media giant touts its supposed success in rooting out fake pages and groups. The system works, they say. The only question that remains is: against whom?

The study did not ignore the question of how quickly the social network identifies potential offenders directly associated with terrorist organizations. It turns out that of the 1,000 ISIS-supporting profiles examined by researchers, fewer than half had been suspended by Facebook six months later. In one case, a British terror suspect had his Facebook account reinstated nine times after complaining, despite being accused of having posted ISIS propaganda videos.

Facebook representatives apparently take no notice of such research and keep repeating the same line: "There is no place for terrorists on Facebook. We work aggressively to ensure that we do not have terrorists or terror groups using the site, and we also remove any content that praises or supports terrorism."

At the same time, they attribute any problems that do arise solely to the automated moderation system.

“Our approach is working – 99 percent of ISIS and Al Qaeda-related content we remove is found by our automated systems. But there is no easy technical fix to fight online extremism. We have and will continue to invest millions of pounds in both people and technology to identify and remove terrorist content.”
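
For context on what such automated systems are generally understood to do: the GIFCT member companies have described a shared industry database of digital fingerprints ("hashes") of known terrorist images and videos, against which new uploads are matched. Here is a minimal sketch of that idea, assuming a simple exact-hash pipeline (real systems are far more elaborate):

```python
import hashlib

# Hypothetical shared database of fingerprints of known extremist media,
# in the spirit of the industry hash-sharing effort described above.
known_hashes = {
    # sha256(b"test"), standing in for a real video fingerprint
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def fingerprint(data: bytes) -> str:
    """Exact cryptographic fingerprint of an uploaded file."""
    return hashlib.sha256(data).hexdigest()

def should_block(upload: bytes) -> bool:
    """Block an upload when it matches known extremist content byte-for-byte."""
    return fingerprint(upload) in known_hashes

print(should_block(b"test"))   # True: exact match against the database
print(should_block(b"test!"))  # False: one changed byte evades an exact hash
```

The last line shows the approach's blind spot, which returns at the end of this article: changing even a single byte of a file produces a completely different exact hash.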

The question remains: where do these tidy sums go? Toward direct support of terrorists, or merely toward maintaining communication with them?

Twitter, in turn, also deleted about a million accounts for terrorist propaganda. In the second half of 2017, YouTube removed 150,000 videos for violent extremism. Almost half of them were removed within two hours of being uploaded.

In response to the disruption of their use of Twitter, ISIS supporters have tried to circumvent content-blocking technology through what is known as outlinking: posting links that lead to other platforms. The sites most commonly outlinked to include justpaste.it, sendvid.com, and archive.org.

[Screenshot: theconversation.com]
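
One obvious counter-measure (a sketch of ours, not a description of any platform's actual system) is to scan post text for links pointing at the outlink domains named above:

```python
import re
from urllib.parse import urlparse

# Domains the source above names as common outlink targets.
OUTLINK_DOMAINS = {"justpaste.it", "sendvid.com", "archive.org"}

URL_RE = re.compile(r"https?://\S+")

def flag_outlinks(post_text):
    """Return any URLs in a post that point at known outlink domains."""
    flagged = []
    for url in URL_RE.findall(post_text):
        host = urlparse(url).hostname or ""
        # Match the domain itself or any subdomain of it.
        if any(host == d or host.endswith("." + d) for d in OUTLINK_DOMAINS):
            flagged.append(url)
    return flagged

print(flag_outlinks("Watch: https://justpaste.it/abc and https://example.com/x"))
# ['https://justpaste.it/abc']
```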

Another major social network problem surfaced recently in Russia with the Telegram messenger, which, according to its creator Pavel Durov, is strongly protected from interception by encryption. The program has thus turned out to be a paradise for extremists, who use that encryption to communicate their positions to like-minded people.

Other encrypted messaging services, including WhatsApp, have been used by jihadists for communication and attack-planning. Websites have also been relocated to the Darknet, a hidden part of the internet that is anonymous by design and accessible only with special encryption software. A 2018 report warned that Darknet platforms have the potential to function as a jihadist "virtual safe haven."
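
To illustrate why encryption puts such content beyond a platform's reach, here is a generic sketch using the Python cryptography library's symmetric Fernet scheme. This shows the principle only; it is not Telegram's MTProto or WhatsApp's Signal protocol, and real messengers negotiate keys between devices rather than sharing them directly:

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# The two endpoints share a secret key; the platform relaying messages does not.
key = Fernet.generate_key()
endpoint = Fernet(key)

ciphertext = endpoint.encrypt(b"the message text")

# The relaying platform (and its moderators) only ever see opaque ciphertext,
# so there is nothing for keyword filters or human reviewers to read.
print(ciphertext)
print(Fernet(key).decrypt(ciphertext))  # only a key holder recovers the text
```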

Beyond the mere presence of terrorists and their supporters on social networks, the main threat is the inability to find them among thousands and millions of users. As we said above, it is important for terrorists to be seen and known, otherwise there is no point in their existence; yet to keep operating on the internet, they must also know how to hide their capabilities and, most importantly, their real ideas.

Most often it is ordinary businessmen and young, active politicians who create new parties or cells allegedly for carrying out propaganda. They cannot be caught by an automated system; they long ago learned to work around blocking and other restrictions. It is therefore high time for the media giants to think about ways of doing this work other than with smart computers.

So far, none of this has been done by any of the major social media platforms. Therefore, people simply have to wait for the next attack, living in fear and ignorance.

Finally, Guy Rosen, Facebook's vice president of integrity, said in a blog post that when users reported the video, they did not use the terms or tags that would have prompted the social network to review it more quickly.

“The video was reported for reasons other than suicide and as such, it was handled according to different procedures,” wrote Rosen.

The incident led Facebook to re-examine its reporting system so that it can react more quickly to live videos showing disturbing or graphic content. It also said it was reviewing the way it shared information to make it easier for other organizations and sites to spot copies.

Rosen pointed to two main problems that made neutralizing the video and removing its copies from the platform more difficult than expected.

First, he said, a "core community of bad actors" worked together to continually re-upload edited copies of the video, altered to defeat the detection systems (the sketch after the next point illustrates why such edits work).

Second, the way people shared the video, sometimes by recording clips shown on TV, made it harder to spot copies.
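
Why do small edits and TV re-recordings defeat detection? As noted earlier, an exact hash changes completely when any byte changes, so matching re-uploads requires a perceptual fingerprint that survives re-encoding, cropping, and brightness shifts. Here is a toy sketch of one such idea, a difference hash ("dHash") over a video frame; this is a textbook technique, not Facebook's actual system:

```python
def dhash(frame):
    """Difference hash of a grayscale frame, given as a 2D list of pixel values.

    The frame is assumed to be downscaled to 8 rows x 9 columns; each bit
    records whether brightness rises left-to-right, so re-encoding or a
    uniform brightness shift barely changes the resulting 64-bit hash.
    """
    bits = 0
    for row in frame:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if right > left else 0)
    return bits

def hamming(a, b):
    """Count differing bits between two hashes (low distance = likely same scene)."""
    return bin(a ^ b).count("1")

# Hypothetical downscaled frames: an original and a slightly brightened re-upload.
original = [[(3 * x * y) % 251 for x in range(9)] for y in range(8)]
reupload = [[min(255, (3 * x * y) % 251 + 4) for x in range(9)] for y in range(8)]

# The exact bytes differ, but the perceptual hashes are nearly identical.
print(hamming(dhash(original), dhash(reupload)))  # 0 here, despite the edit
```

Even a perceptual hash, however, can be defeated by heavier edits such as mirroring, overlays, or re-recording a screen, which is exactly the behavior Rosen describes.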

If the work is carried out as the Facebook representative says, we can only wait for the social network's next new products. Perhaps in the future, people will be safe from such incidents, and Facebook will manage to stay ahead of the police and the media.

Author: USA Really