Parental Controls on Apps Don’t Work. Here’s a Better Way.

Parental controls have failed—and it’s not parents’ fault.

For years, I’ve written about the tools that tech companies offer up when parents and lawmakers complain that their apps aren’t safe for children. In the past five years or so, the major players have rolled out software to give parents more say over when kids can use devices and services and what shows up on their screens.

But these tools are optional and often buried, and sometimes broken. Most parents don’t use them, according to a poll conducted last year by the market research firm Ipsos.

Do parents just need more awareness about the tools? Or should tech companies take on more responsibility to protect young people? Facing political pressure, the social-media platforms are building in protections for underage users, and they should continue doing more. But parents need to look to conversations, not controls, to ensure their kids aren’t coming to harm online.

It’s time to reframe the discussion. That’s why I won’t be recommending parental controls going forward.

What went wrong?

Think about this: Tech companies expect parents to have the time and skills to dig into their kids’ apps. Every app the child uses has different controls and defaults, so it can be confusing and cumbersome. Apple promised a one-stop-shop solution with its Screen Time settings, but that has proven unreliable. When setting time limits and restricting apps and adult content, there have been problems syncing changes across devices. Apple has said it’s working to fix these.

Young people have been harmed by social media in a variety of ways. In addition to the self-harm and eating-disorder content that has gotten a lot of attention in recent Senate hearings and in The Wall Street Journal, there are new dangers cropping up all the time: Criminals posing as young girls online are now extorting money from teenage boys.

Mark Zuckerberg—who, as co-founder of Facebook and the man atop Meta Platforms, rules over a swath of the social landscape—apologized in person to parents of social-media abuse victims at a January hearing. He told senators that Meta has introduced more than 30 different tools and features to help parents and teens. On Instagram, for instance, comments and messages that contain triggering words and phrases can be hidden.

As a Meta spokeswoman confirmed, the company also has added default protections to provide age-appropriate experiences.

It’s true that platforms without parental controls can present even more dangers to young users. But as kids get older, it’s more important to educate them about the realities out there. “When they become teens, parental controls become ineffective,” says Stephen Balkam, founder of the nonprofit Family Online Safety Institute.

What tech companies should do

Social-media feeds don’t have age-based ratings like movies or videogames. And there’s no equivalent of the built-in parental consent that comes with buying a movie ticket or a game download. Some social-media companies have brought default protections to teen accounts—“teen” denoted by the stated birth date of the account holder.

Meta last month said it plans to automatically restrict teen Instagram and Facebook accounts from seeing harmful content, including videos and posts about suicide, graphic violence and eating disorders.

Also last month, Instagram turned off teens’ ability to receive direct messages from people they don’t know or have no connection to on the app. Since 2021, it has blocked direct messages from people ages 20 and older whom teens don’t follow, so this adds protection from other strangers—either fellow teens or people who created accounts to pose as teens.

TikTok, which doesn’t allow direct messaging for users under 16, last year added a tool to help parents filter out videos containing words or hashtags they don’t want their kids to see.

No one under 13 is permitted to have a social-media account, but that rule operates on the honor system and isn’t policed. Several states have proposed or passed laws requiring social-media companies to verify users’ ages and obtain parental permission to create accounts. But there isn’t currently a system that can do so without raising privacy concerns.

What parents should do

Parents can’t expect teens to set their own safety settings: Many teens don’t know where to begin. In a late-2021 survey of teens and young adults by FOSI, only 56% of respondents said they knew about the settings social-media apps had to offer; the rest weren’t aware of any safety tools.

Balkam recommends shifting from helicopter parenting to what he calls “co-pilot parenting.”

Sitting with your teens and going over the safety settings on their apps helps them anticipate the dangers they might face. Balkam adds that when parents and teens work together to devise a set of tech rules—and the consequences for breaking them—teens are more likely to abide by them.

The American Academy of Pediatrics recommends a similar approach. It says parental restrictions and monitoring stifle teens’ ability to solve problems. A number of studies have shown that using parental controls can undermine teen autonomy and harm the parent-child relationship.

The AAP suggests parents and teens work together to define family guidelines for tech use and to develop strategies for what to do when they encounter disturbing content. The organization offers a customizable family media plan to help you get started.

In my house, we have one major rule: No devices in bedrooms at night.

Teaching good tech habits requires repetition—and allowing kids to learn the natural consequences of their actions. If I go to bed earlier than my 13-year-old son, I remind him to get off the screens and go to bed at a decent hour (translation: no later than 10 p.m.). On the few weeknights he has stayed up too late, he woke up groggy and struggled through the school day.

—For Family & Tech columns, advice and answers to your most pressing family-related technology questions, sign up for my newsletter.

Write to Julie Jargon at Julie.Jargon@wsj.com