An alarming new report from artificial intelligence (AI) startup L1ght found a 70% rise in online hate speech among teens and kids since coronavirus school closures began, much of it racist. The report’s specifics are even worse. L1ght found a veritable laundry list of hate speech statistics, including:
- Instances of hate speech directed at China and Chinese people, including Chinese-Americans, have risen by 900%.
- Traffic to hate sites and specific posts against Asians has risen by 200%.
- Toxicity on popular gaming platforms such as Discord has risen by 40%.
“L1ght’s report shows how easily social networks, communication apps, and gaming platforms can become hubs of hate, abuse, and toxicity,” Zohar Levkovitz, CEO and Co-Founder of L1ght, said in a press statement. “It is deeply concerning that instigators of hate are exploiting this time of crisis to reach out to new audiences with their offensive content – including children.”
Ron Porat, co-founder and CTO of L1ght, tells Parentology that the expansion of online time since school closures has amplified the toxicity. “With instigators of hate exploiting the crisis to spread discord and misinformation, uncovering statistics such as a 70% uptick in instances of hate speech between kids and teens across communication channels on social media and chat forums was shocking, but not unexpected.”
L1ght Uses “Deep Learning” To Uncover Sophisticated Hate Speech

All online companies use algorithms to find patterns of use and content, but some are more sophisticated than others. L1ght's "deep learning" approach feeds its algorithms massive amounts of data so they can ultimately learn to recognize toxic content online. This means the artificial intelligence employed by L1ght can pick up on the nuance and context of online statements.
“We know if a teenager messages his friend, ‘I’m going to kill you,’ online, it’s typically meant as playful ribbing – not a call to real-life action,” Porat explains. “Simple AI doesn’t pick up on that sort of nuance. And identifying keywords and flagging offensive comments is proving insufficient as pedophiles, bullies and other disseminators of harmful content are matching pace and developing means to largely avoid detection.”
L1ght uses context, images, and user actions to determine when online hate is happening. “Our platform analyzes text alongside images and videos to detect toxic content, and we can even monitor actions like if a large number of individuals are kicked out of a group chat at once,” Porat says.
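L1ght hasn’t published its models, but the broad idea Porat describes – scoring a message together with its surrounding conversation rather than flagging keywords – can be illustrated with a toy classifier. Everything in the sketch below (the example messages, labels, and feature choices) is hypothetical and is not L1ght’s system.

```python
# Illustrative sketch only: a minimal classifier that scores a message plus a
# bit of conversational context, instead of matching a keyword blocklist.
# The training examples and labels below are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each example pairs a message (with some context) and a toxicity label.
examples = [
    ("nice shot! I'm going to kill you next round", 0),      # playful gaming banter
    ("you don't belong here, go back to your country", 1),   # targeted hate
    ("lol you're terrible at this game", 0),                 # mild ribbing
    ("people like you spread the virus, get out", 1),        # scapegoating
    ("gg, rematch tomorrow?", 0),
    ("nobody wants your kind in this server", 1),
]
texts, labels = zip(*examples)

# Character n-grams give some robustness to the misspellings and obfuscation
# that simple keyword filters tend to miss.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(texts, labels)

# Score a new message in context; in practice this would feed a human-review
# or prevention step, not an automatic ban.
print(model.predict_proba(["I'm going to kill you at chess tonight"])[0][1])
```

A production system would be trained on far more data and would also look at images, video, and behavioral signals, as Porat notes, but the principle is the same: the model learns from context rather than from a fixed list of banned words.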
What Should Tech Companies Do to Address Coronavirus Racism?

L1ght provides tracking tools, but it doesn’t wade into the gray areas between hate speech and free speech. That’s left to the tech companies and platforms. “The burden of responsibility to keep children protected online is on developers, not users; yet, these tech leaders have not been able to get these issues under control,” Porat says.
Facebook, TikTok, and other tech companies have begun adopting moderation policies. But with their main goal being to attract more users and grow, hate speech isn’t high on their priority lists.
Porat bemoans that reality. “In truth, moderation isn’t the most effective solution. Platforms should be using sophisticated technology to predict and prevent toxic content if they truly want to quash hate speech, toxicity, cyberbullying, predatory behavior and other scourges of the Internet age.”
Obviously, Porat would like tracking companies like L1ght to supply tech companies with accurate data, at the very least.

What Can Parents Do?

Mostly, parents need to be aware of what their kids are doing online. Monitoring their use is important, as is just normal engagement. “Psychologists we’ve worked with in developing our algorithms tell us that because children take their cues from adults around them, even simple questions like ‘How’s your day been?’ can do a lot to foster trust and understanding,” Porat says.
ConnectSafely.org urges parents to be aware and involved, stating on its site: “Parents should understand the policies of the platforms that their children use, in order to understand the type of content that you’ll find on each. Even companies with strict anti-hate speech policies face challenges in knowing exactly where to draw the line so as to protect diversity of viewpoints while combating hate speech.”
While there are tech aids for parents, like browser add-ons and monitoring apps, many aren’t very effective. Ultimately, Porat feels it’s the tech companies’ responsibility.
“Online toxicity is a major societal issue which has been largely created by tech companies, and we believe that tech companies have to clean up their mess and help to solve the problem they created. Much like the auto industry builds cars with safety belts, we as an industry need to build into our platforms safety precautions,” Porat concluded.