How do social media platforms’ teen and parental control policies compare?

Social media platforms have come under increasing scrutiny over their content and the policies they apply to under-18 users.

These younger users have never known a world without the internet and spend more time on social media than previous generations, leading parents, guardians and others to want to know more about how to keep children safe online.

Social media companies have been introducing and trialling new products and policies over the last few years aimed at the digital wellbeing of these teen users, from default privacy settings and daily screen limits to ways for families to see and share information across accounts in a privacy-compliant way.

These platforms each have their own community standards and guidelines, with slightly different tools for parents and teens, so The Media Leader has compiled a guide for readers to easily compare them.

Most social media platforms set a minimum age of 13 in their community guidelines, but in the App Store their age ratings range from 12+ to 17+.

All of the apps listed are rated as suitable for those aged 12 and above, apart from Twitch, Reddit and Twitter, which are classified as appropriate only for those aged 17 and over.

Pinterest

Pinterest recently introduced new teen safety and parental control policies on its platform.

Teen accounts under 16 are now private by default, meaning Boards and Pins created by teens will only be visible and accessible to them to reduce unwanted contact from strangers.

Under this “no contact without consent” rule, Pinterest plans to bring in a way for users under 16 to contact people they know, such as friends and family, as long as they give permission.

Also in the pipeline is the ability for parents to require a passcode before linked teen accounts can change certain account settings.

Age verification is also part of these policies: by the end of April 2023, anyone who previously entered their age as under 18 and tries to edit it in the app will need to provide additional information, which will be assessed by a third-party partner.

Pinterest has what it describes as “unique policies”: it bans beauty filters that alter a user’s face and does not allow people or brands to “body shame” on its platform. As a result, weight loss ads have been banned on Pinterest for some time.

Snap

As of August 2022, Snap introduced an in-app tool called “Family Centre”, where parents, guardians or trusted relatives aged 25 and over can be invited to join with related teens aged between 13 and 18. This gives them more insight into who the teens are friends with and who they have been chatting with in the past seven days, without revealing any of the specific content of those conversations. Teens can also see a mirror view of what other members of the Family Centre see.

The Family Centre also enables users to report abuse directly to Snap’s Trust and Safety Team.

Earlier this month, Snap launched content controls in Family Centre which allow designated adults to limit the types of content their teens can watch on Snapchat. For example, parents can filter out Stories from publishers or creators that may have been identified as sensitive or suggestive.

In terms of potentially mature or harmful content, Snap pre-moderates public content, including on its entertainment and content platforms Spotlight and Discover. Discover only features content from approved media publishers and content creators.

Across the app, it limits opportunities for “potentially harmful content” to go viral.

Snap also has extra protections in place for under-18s, including not allowing teens to have public profiles, making Friend lists private, and only having teens appear as a “suggested friend” to other users in limited instances, such as when they have friends in common.

By default, teens have to be mutual friends before they can start communicating with each other.

TikTok

TikTok accounts registered to existing and new users aged between 13 and 15 are private by default, and direct messaging is only possible for those aged 16 and older.

Earlier this year, the platform introduced an automatic 60-minute daily screen time limit for users aged under 18, based on academic research from the Digital Wellness Lab at Boston Children’s Hospital. When this limit is reached, teens need to enter a passcode if they decide they want to keep watching.

For users under 13, the same screen time limit applies, but a parent or guardian will need to set or enter an existing passcode to allow an additional 30 minutes on the app.

TikTok also has “Family Pairing”, previously called “Family Safety Mode”, which was first introduced in 2020 so parents and caregivers can link their account to their teen’s.

Parents, guardians and teens can customise individual daily screen time limits, access a screen time dashboard and mute notifications at different times.

Currently, accounts for those aged between 13 and 15 also do not receive push notifications after 9pm, while those registered as 16 or 17 have push notifications disabled an hour later, at 10pm.

There are limits on commenting on or downloading videos created by users aged under 16, and on combining this content with other TikToks using the Duet and Stitch features.

TikTok says it “proactively” asks younger users to set privacy settings when they upload their videos.

Searches for potentially harmful challenges and hoaxes do not return results; instead, users are shown an in-app guide with a four-step process for engaging with an online challenge.

There is also an option to automatically filter out videos containing words or hashtags users do not want to see in their “For You” or “Following” feeds.

TikTok is building a system to organise content based on thematic maturity which would prevent content with “overtly mature” themes from reaching audiences under the age of 18.

In October 2022, TikTok increased the minimum age requirement to host a livestream from 16 to 18.

Alongside this, there are safety reminders in livestreams suggesting new keywords users may want to add to their filter list.

Meta (Facebook, Instagram, WhatsApp)

The parent company of Facebook, Instagram and WhatsApp has slightly different teen safety rules for each app.

There is a Family Center across Meta’s platforms which gives parents and guardians access to supervision tools and education resources.

Once a parent and teen mutually agree to set up supervision, parents can supervise the teen’s experience on Instagram and in virtual reality (VR via Meta Quest).

On Instagram, parents and guardians can monitor and manage a teen’s time on the app, see their follower and following lists, and get insights into what is reported.

On Meta Quest, they can see friends and apps in the experience, plus view time spent in VR, block apps and receive notifications about purchases and downloads.

Those under the age of 16 (under 18 in certain countries) are defaulted to more private settings on Facebook, and existing account holders in those age groups will be encouraged to choose more private settings.

This controls who can see their friend list, the people, pages and lists they follow, posts they are tagged in, and who can comment on public posts.

Other users cannot tag or mention teens who do not follow them, or include their content in Remixes on Reels or guides.

There are restrictions on messages between unconnected adults and teens, and teens do not show up in “People You May Know” recommendations.

Meta defines a “suspicious” account as one belonging to an adult who may have recently been blocked or reported.

Teen users are also prompted to report accounts to Meta if they block someone, and safety notices are sent with information on what to do if they receive inappropriate messages.

In terms of content, by default, users younger than 16 will have the most restrictive setting, “Less”, which limits access to potentially harmful or sensitive content in search, Explore, hashtag pages, Reels, Feed recommendations and suggested accounts.

Meta also has age verification and detection, which use AI to provide age-appropriate experiences, such as restricting teenagers from Facebook Dating.

There are limited ad targeting options for teens; in particular, ads for alcohol, financial products, and weight loss products and services are prohibited for those under 18 (or older in certain countries).

Teens can control the types of ads they see on Facebook and Instagram by setting “Ad Topic Controls”.

Other tools for teens include Take a Break, Quiet Mode, nudges to switch topics, Restrict, and hiding like counts. These allow teens to do things like set reminders to take a break, mute notifications, turn on dark mode and send auto-replies.

YouTube

Google’s YouTube has two options for children’s accounts: a supervised account on YouTube and the YouTube Kids app.

The former is a parent-managed version of “regular” YouTube where parents can create a Google account for their child and then adjust the type of content available, limit certain features and set digital wellbeing and default profile settings.

Children and teens will not see personalised ads on this supervised account and the auto-play feature will be off by default. Bedtime and screen time limits can also be adjusted on the supervised account.

There are three content settings: Explore, Explore More and Most of YouTube, which determine what children can watch and search for on the platform.

The YouTube Kids app lets signed-in parents create a profile for their child, choosing a preschool, younger or older option so children can access age-appropriate content on their own devices.

Parents can customise what content kids can and cannot see and set a screen time limit on the app.

Twitch*

Children aged between 13 and 18 may only use Twitch under the supervision of a parent or legal guardian who agrees to be bound by Twitch’s terms of service.

More than 70% of Twitch viewers are between the ages of 18 and 34. Twitch users can also be streamers of live gameplay or hosts or moderators of channels.

There are viewer-side filters which block potentially harmful language across any Twitch channels a teen visits. The first time viewers encounter a stream labelled as containing Mature Content, they are served a warning stating that the channel is intended only for mature audiences, which they must click through in order to view the content.

Twitch says it is exploring solutions to help streamers better indicate when their content may not be suitable for all audiences.

Teens are encouraged to create a moderation strategy before they start streaming their own gameplay. This can be found in the Creator Dashboard and is to “help you maintain a safe and positive chat”.

Twitch recommends reporting harassment, and also offers an “Ignore Feature” on channels where users are not a streamer or moderator, which removes certain messages from their view of the live chat.

There is also the option to block Whispers, or direct private messages, from other users. Whispers are opt-in only and switched off by default for all users.

In Security & Privacy Settings it is possible to block another user so you do not see someone’s messages in the public forum of a channel live chat. In these settings, it is also possible to prevent individual users from hosting a user on their channel and to filter out blocked users’ messages in viewed channels.

It is possible to clear up to 200 lines of chat history if it is “overrun with harassing messages”, and channel owners and moderators can see channel-specific information about users in a chat, including when an account was created, the user’s chat messages, and a history of bans and timeouts in the channel. Notes about users can also be added for other channel owners and moderators to see.

Twitch enforces against both the unauthorized sharing of personal information and attempts to defraud users on the service in its Community Guidelines. In addition, content or activity meant to impersonate an individual or organization is also prohibited, including impersonation of Twitch staff, celebrities, companies, or friends. Users are suspended from Twitch for doing this.

Twitch warns users about dares, donation scams, link scams and spam bots on the platform.

Twitch’s Off-Service Policy allows it to take action against individuals on Twitch based on severe misconduct that occurred entirely off the Twitch service.

Reddit

A Reddit spokesperson told The Media Leader the vast majority of Reddit users – over 90% in the UK – are adults and the site is not marketed to minors.

Reddit has privacy and security settings, and age-based controls to ensure the safety of any minors.

It is classified as a 17+ application in app stores, limiting its download availability to minors and allowing parents to use device-level parental controls. Users who self-identify as under 13, or as under 18 in mature subreddits, are banned from Reddit. Community moderators are required to tag any mature content, and Reddit also employs automated systems that detect mature video or image content to ensure it is gated.

Users can collapse “disruptive” comments that contain rude or disrespectful content while browsing, appear “invisible” online, keep their accounts out of Google search results, turn off personalised content and information collection, implement two-factor authentication to log in, limit who can follow their account, and block their posted content from showing up on Reddit’s general feed or r/all.

Reddit signposts any content for 18+ as “NSFW” or “not safe for work”, and users must confirm they are over 18 to view. Mature content is not included in mixed feeds (e.g., r/popular or r/all) and by default does not appear in search. Even for those who have confirmed they are 18, it is possible to turn off this content. Features like SafeSearch are also available.

There are a small number of subreddits, or communities, specifically for teenagers, but these communities cannot be targeted by ads and are not recommended to general users.

Twitter

Twitter requires all users to be at least 13 years old, but in some countries, parents or guardians will need to provide consent on the child’s behalf to process their personal data.

For Twitter users under 18, the visibility setting for birth year is set to “Only you”. Aside from that, a user’s biography, location, website and picture are normally public by default.

Twitter users can also add one-time sensitive content warnings to photos and videos in their posts across Android, iOS and web. The social media platform also has tools for reviewing content before posting, labels flagging potential misinformation on the platform, and controls around who can reply to Tweets.

When users sign up to Twitter, all Tweets are public by default. In profile settings there is the option to “protect” Tweets so they are only visible to followers, and the user receives a request when people want to follow them.

*Editor’s note: this section has been updated since publication.
