In a recent hearing with the Independent Inquiry into Child Sexual Abuse (IICSA) for England and Wales, a Facebook rep admitted the company has no way to verify a user’s age. This means there could be children under 13 on Facebook, even though the company’s terms state that no one under 13 is allowed to use the platform. Further, it was revealed that Facebook has no way to determine if someone creating an account is a sex offender — registered or otherwise.
“Do you know the number of accounts attempted to be set up by UK users who are in fact under 13?” asked Jacqueline Carey, IICSA’s lead counsel to the inquiry.
Julie de Bailliencourt, Facebook’s Safety Policy manager for Europe, the Middle East and Africa, replied, “I don’t know this number.” When asked how many of Facebook’s estimated 40 million UK users might be registered sex offenders, De Bailliencourt said she didn’t know that either, in part because the UK registry is not open to the public.
It’s a no-win situation for Facebook. With approximately 214 million users in the United States and an estimated 1.8 billion monthly active users worldwide, the social media giant can’t police everyone. And yet the public instantly looks to the company whenever questions about internet security or child safety arise.
Parents definitely have cause for concern. Dating apps and other social media platforms have come under scrutiny recently for having minors on their sites and apps. A 59-year-old man was arrested after hooking up with a 15-year-old on Grindr, and law enforcement agencies in the US arrested 16 child predators who were using Grindr, Tinder, MeetMe, and Adam4Adam, among others.
But the big question — besides why parents don’t know their child is using an adult dating app on their phone — is why Grindr and similar companies don’t have safeguards in place to prevent minors from using their services. On those platforms, users must be 18, so credit card verification could work. Facebook, however, isn’t a paid service like a dating app, so that quick fix wouldn’t apply.
“We strongly believe in protecting minors and are always working to improve our tools and protections,” a Facebook rep told Parentology in an email statement. “We do not allow people under 13 on Facebook and have protections in place to prevent them from using our platform.”
Stopping Kids Under 13 on Facebook
- AGE BLOCKING – When a child signs up, they have to enter their birth date. If they are under 13, they are blocked, and cookies on the browser prevent them from going back and changing the year. (There are some other tricks Facebook has in place, but we’re choosing not to reveal them in case a kid is reading this story.)
- ACCOUNT HOLD – If Facebook has reason to believe the person is under 13, the account is put on hold and the person will not be able to use Facebook until they provide proof of age. The company also puts a hold on any account that a Facebook reviewer flags as belonging to someone under 13, even if that reviewer wasn’t initially reviewing for age. (For example, if they were reviewing the account for harassment, cyberbullying, or other reasons.)
- VERIFICATION PROCESS – A hold means the person will lose access to their account and the account will not be visible on Facebook. The account holder has 28 days to confirm they are 13 or older, which requires them to submit an ID showing their age. A parent would likely have to help them do this. If the person is unable to do so, the account gets deleted.
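The age-blocking step described above boils down to a simple birth-date check at signup. This minimal sketch is purely illustrative — it is not Facebook’s actual code, and the names (`age_on`, `may_sign_up`, `MINIMUM_AGE`) are hypothetical:

```python
from datetime import date

MINIMUM_AGE = 13  # Facebook's stated minimum age

def age_on(birth_date: date, today: date) -> int:
    """Whole years between birth_date and today."""
    years = today.year - birth_date.year
    # Subtract one if the birthday hasn't happened yet this year.
    if (today.month, today.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def may_sign_up(birth_date: date, today: date) -> bool:
    """Gate the signup flow: block anyone whose computed age is under 13."""
    return age_on(birth_date, today) >= MINIMUM_AGE
```

In practice, as the article notes, a gate like this only works if the child enters a truthful birth date — which is why the browser-cookie trick and the account-hold process exist as backstops.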
The Facebook rep added, “We also encourage people to report potential underage accounts and have a specific form for this.” When a profile is reported, Facebook reviews the content on their profile — like text and photos — to try to ascertain the user’s age. If Facebook suspects any issues, they’ll ask for verification. The rep notes, “We delete the accounts of people under 13 when we become aware of them.”
In De Bailliencourt’s interview, she noted that Facebook has 30,000 employees across the globe working on “safety and security.” This includes people reviewing and moderating content — everything from child safety to fighting extremist messages. If any content is flagged indicating child endangerment, it is reported to the National Center for Missing and Exploited Children (NCMEC) and then passed on to law enforcement.
“We routinely report to NCMEC anything that would be indicative of the exploitation of a child. I believe those reports then turn into cyber tips when law enforcement receive them,” De Bailliencourt said. While the NCMEC is based in the United States, it shares information with similar agencies around the globe.