Australia has passed legislation that imposes huge fines, and even jail time, on social media companies and their executives if “abhorrent violent material” isn’t removed from their platforms.
“Together we must act to ensure that perpetrators and their accomplices cannot leverage online platforms for the purpose of spreading their violent and extreme propaganda — these platforms should not be weaponized for evil,” Attorney General Christian Porter told Parliament.
Australia’s conservative parliament introduced the legislation after Brenton Tarrant, a 28-year-old Australian white supremacist, allegedly attacked worshipers in two Christchurch, New Zealand mosques during Friday Prayer on March 15. Tarrant is accused of shooting and killing 50 people in what was the worst mass shooting in New Zealand history.
How the Law Would Work
Australia’s new social media law makes it illegal for platforms to allow violent content depicting terrorism, murder, attempted murder, rape, torture or kidnapping. The crime will be punishable by a fine of 10.5 million Australian dollars (US$7.5 million) or 10% of the platform’s annual turnover, whichever is greater. Additionally, it will hold executives personally liable, with a penalty of up to three years in prison.
The content must have been recorded by the perpetrator or an accomplice for the law to apply. Furthermore, platforms anywhere in the world can face massive fines if they fail to notify the Australian Federal Police after becoming aware that such content is being streamed on their platform within Australia.
Social Media Companies Are Not Happy
“This law, which was conceived and passed in five days without meaningful consultation, does nothing to address hate speech, which was the fundamental motivation for the tragic Christchurch terrorist attacks,” said Sunita Bose, managing director of the Digital Industry Group Inc. (DIGI), a non-profit industry association representing Australia’s digital communication platforms, including Facebook, Google and Twitter.
“This creates a strict internet intermediary liability regime that is out of step with the notice-and-takedown regimes in Europe and the United States, and is therefore bad for internet users as it encourages companies to proactively surveil the vast volumes of user-generated content being uploaded at any given minute,” Bose said in a statement.
Mark Zuckerberg Defends Facebook’s Live-Streaming
On April 4, Facebook Chairman and CEO Mark Zuckerberg told ABC network’s “Good Morning America” that live-streaming is an important part of democracy. He argued that while the company could put a delay on live streams, such a move wouldn’t prevent violent broadcasts.
“[A delay] would fundamentally break what livestreaming is for people. Most people are livestreaming a birthday party or hanging out with friends when they can’t be together,” Zuckerberg told George Stephanopoulos.
Following the Christchurch attack, Facebook acknowledged that its artificial intelligence (AI) systems failed to identify the livestream. Once video of the attack aired on Facebook, platforms including Twitter and YouTube struggled to stop re-uploaded versions of it from popping up.
Will The Law Actually Take Effect?
Numerous critics of the law argue that it simply doesn’t make sense for today’s world.
Global tech superpowers including Google, Facebook and Amazon have argued the law could damage Australia’s relations with other countries, as all users worldwide would have to be proactively surveilled. They have also suggested they may invest less in Australia, even as social media continues to create jobs and opportunities there.
Critics also argue the law is too vague and could require that videos, like those used as evidence in human rights abuse cases, be deemed illegal.
With the Australian election around the corner in May, the law may be completely derailed or face significant changes.
Regardless of what happens in the future, social media companies have now been put on alert: they must be proactive in dealing with the live-streaming of violent acts.