In March 2019, a shooter live-streamed his terrorist attack on two New Zealand mosques. Facebook's "live" feature allowed the broadcast to run for 29 minutes before the company's artificial intelligence (AI) software recognized the violence and removed it from the platform. This left many wondering — why does Facebook continue to rely on AI to keep users safe online? The answer: even when it makes mistakes, AI is still superior to human review.
Why AI Failed
Tom Taulli, the author of Artificial Intelligence Basics: A Non-Technical Introduction, tells Parentology that Facebook attributed this lapse in the AI's functionality to a lack of training for the technology.
Guy Rosen, Facebook's Vice President of Product Management, said in a statement that the AI's training data didn't contain anything similar to the type of event that occurred in New Zealand. Because the system relies on context, it requires a large amount of data to know what it's looking for.
“Basically, if there is not enough good data, then the AI models simply will be useless,” Taulli says.
This is why Facebook and other companies are still keeping humans in the loop when it comes to monitoring and addressing activity on their platforms.
How to Make It Better
“First, you need training data,” Taulli says. “This is the information you use with algorithms, which applies sophisticated math to detect patterns.”
These algorithms include machine learning, which uses traditional statistics, and deep learning, whose neural networks apply many layers of analysis and weighting to the data. Only then can an AI model be evaluated using test data.
“From this, you can get a sense of the accuracy and usefulness,” Taulli says. Since the social media platform is so popular, it has ample amounts of data to work with, which should allow the AI to improve rapidly over time.
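The train-then-test loop Taulli describes can be sketched in a few lines. This is only a toy illustration — the posts, labels, and word-counting "model" below are invented for this example and are nothing like Facebook's actual systems — but it shows the same pattern: fit a model on labeled training data, then check its accuracy on held-out test data.

```python
# Toy sketch of the training/evaluation loop: learn word counts per label
# from training examples, then measure accuracy on separate test examples.
# All posts and labels here are made up for illustration.
from collections import Counter

def train(examples):
    """Count word frequencies per label (a minimal bag-of-words model)."""
    counts = {"ok": Counter(), "violating": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def predict(model, text):
    """Label a post by which label's training words it overlaps most."""
    words = text.lower().split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

train_data = [
    ("have a great day everyone", "ok"),
    ("lovely beach photo from our holiday", "ok"),
    ("graphic violence in this clip", "violating"),
    ("extremely violent attack footage", "violating"),
]
test_data = [
    ("what a lovely day", "ok"),
    ("violent footage removed", "violating"),
]

model = train(train_data)
accuracy = sum(predict(model, t) == y for t, y in test_data) / len(test_data)
print(accuracy)  # → 1.0 on this tiny test set
```

On real data the test accuracy is what tells you, in Taulli's words, the "accuracy and usefulness" of the model — and why more (and more varied) training data tends to improve it.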
Is AI the Best 'Man' for the Job?
Colin Ma, who holds a degree in computer science with concentrations in AI/ML and works as a digital marketing consultant, tells Parentology there are always going to be things to consider when relying on an AI system.
“If the AI isn’t trained well you may get a lot of false positives, in this case, ‘acceptable’ posts being censored or banned when they shouldn’t be.”
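The false positives Ma describes can be sketched with a crude keyword filter standing in for an under-trained model. The blocklist, posts, and labels below are invented for illustration; the point is simply that a blunt rule flags acceptable posts alongside genuinely violating ones.

```python
# Sketch of false positives: a naive keyword filter (a stand-in for an
# under-trained model) removes any post containing a "risky" word.
# The blocklist and posts are hypothetical examples.
RISKY_WORDS = {"attack", "shirtless"}

def flag(post):
    """Flag a post if it contains any blocklisted word."""
    return any(word in post.lower().split() for word in RISKY_WORDS)

posts = [
    ("beach photo of my shirtless son", "ok"),    # acceptable, but flagged
    ("heart attack awareness fundraiser", "ok"),  # acceptable, but flagged
    ("footage of the attack", "violating"),       # correctly flagged
    ("family picnic in the park", "ok"),          # correctly left alone
]

false_positives = sum(flag(p) and y == "ok" for p, y in posts)
true_positives = sum(flag(p) and y == "violating" for p, y in posts)
print(false_positives, true_positives)  # → 2 1
```

Here two of the three flagged posts are acceptable — exactly the kind of over-censorship Ma warns about when a model hasn't been trained on enough context.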
This was in the news recently involving Instagram, which is owned by Facebook. Parents complained that shirtless beach photos of their long-haired sons were being flagged as inappropriate because the AI confused them with photos of little girls.
This doesn’t mean the AI software isn’t better than human moderation. Ma explains there are two reasons why Facebook’s AI is superior to human review.
“The most obvious is cost — it scales so well you don’t need people looking at every post,” he says. “The second reason is well-trained AI doesn’t have unconscious bias, whereas every individual does, so AI will be able to find unacceptable posts in instances where an individual might not.”
The problem, Ma says, arises when the AI isn’t well-trained. “If the AI is not given good data, or given data that has unconscious bias, then it won’t perform well.”
This is perhaps why Facebook is so committed to making its AI work instead of reverting to human review of posts.
Facebook AI: Sources
Colin Ma, Computer Science (with concentrations in AI/ML) expert and digital marketing consultant
Tom Taulli, author of Artificial Intelligence Basics: A Non-Technical Introduction