Major social media platforms remove shooting video of NZ terror attacks

Evarado Alatorre
March 17, 2019

The New Zealand Department of Internal Affairs said in a statement that the video footage is "likely to be objectionable content under New Zealand law" and that "people who share the video of the shooting today in Christchurch are likely to be committing an offence".

Once a video is posted online, people who want to spread the material race to action.

According to authorities, the shooter appeared to livestream the attack on Facebook from a first-person perspective, documenting the drive to the Al Noor Mosque and showing him walking from the vehicle into the mosque and opening fire.

Videos and posts that glorify violence are against Facebook's rules, but Facebook has drawn criticism for responding slowly to such items, including video of a slaying in Cleveland and a live-streamed killing of a baby in Thailand.

"While Google, YouTube, Facebook and Twitter all say that they're cooperating and acting in the best interest of citizens to remove this content, they're actually not because they're allowing these videos to reappear all the time", Lucinda Creighton, a senior adviser at the Counter Extremism Project, a global policy organization, told CNN. Still, the problem persists.

"It's very hard to remove it", Ionescu said.

Mia Garlick, Facebook's director of policy for Australia and New Zealand, said that the company has already suspended the shooter's Facebook and Instagram accounts, banned the video from its platforms, and is removing any support for or praise of the crime as soon as it is made aware. In response, YouTube said it's "working vigilantly to remove any violent footage".

Frustrated with years of similar obscene online crises, politicians around the globe on Friday voiced the same conclusion: social media is failing. "I think this will add to all the calls around the world for more effective regulation of social media platforms", she added.


After Facebook stopped the livestream from New Zealand, it told moderators to delete from its network any copies of the footage.

Facebook also issued a statement saying it had taken down the suspected shooter's Facebook and Instagram accounts and removed the video he posted of the attack.

Users intent on sharing the violent video took several approaches.

In a 15-minute window, Reuters found five copies of the footage on YouTube uploaded under the search term "New Zealand" and tagged with categories including "education" and "people & blogs". Others shared shorter sections or screenshots from the gunman's livestream, which would also be harder for a computer program to identify.

The video's spread underscores the challenge for Facebook even after the company stepped up efforts to keep inappropriate and violent content off its platform.

Researchers and entrepreneurs specializing in detection systems said they were surprised that users were able to circumvent Facebook's tools in the initial hours after the attack. "Because if you do, sharing this video is exactly how you do it", Moore said. "Can we ever make live videos safe?" "Take some ownership. Enough is enough". "Tech companies have a responsibility to do the morally right thing".

The rampage's broadcast "highlights the urgent need for media platforms such as Facebook and Twitter to use more artificial intelligence as well as security teams to spot these events before it's too late", Ives said. "We will do whatever is humanly possible for it to never happen again".

Britain's interior minister also spoke out.
