Reset Tech Research
Risks to Minors: News from Our Ongoing Research on Platforms’ Risks to Minors
Risks to Minors and the EU’s Digital Services Act
Social media platforms can pose significant risks to minors. These risks include exposure to harmful content such as pro-suicide or eating disorder materials, misleading design features that encourage harmful behavior, and insufficient privacy protections. The European Union’s Digital Services Act (DSA) aims to mitigate these dangers by setting strict rules for online platforms, especially those used by young people. The DSA requires these platforms to prioritize the safety, privacy, and security of minors, ensuring that the digital environment is as safe as possible for vulnerable users.
Executive Summary of Our Report on Loopholes in VLOPs’ Ad Systems Targeting Children
Reset Tech’s audit of Google, TikTok, and Meta revealed significant loopholes in how these Very Large Online Platforms (VLOPs) handle data from users under 18 for targeted advertising. Despite regulatory restrictions, the platforms still allow advertisers to reach minors through implicit methods such as age parameters, identity solutions, and interest-based targeting. Moreover, their ad systems employ dark patterns in age verification and consent flows. These practices raise serious concerns under the Digital Services Act (DSA) and the GDPR. Reset Tech recommends banning the use of inferred data for targeting, enforcing purpose limitation, increasing transparency, and mandating privacy reviews to better protect underage users from targeted advertising.
Download the full report on ads and kids
Executive Summary of Our Evaluation Reports Series
This series of reports provides a comprehensive evaluation of the processes and systems of three major social media platforms, Twitter (X), TikTok, and Instagram, with a specific focus on the risks they pose to minors. Each report scrutinizes the platforms’ content moderation, algorithmic recommendations, and user experience design, with particular attention to compliance with the Digital Services Act (DSA).
Twitter (X): Our research reveals significant shortcomings in Twitter’s approach to moderating harmful content, particularly around pro-suicide, self-harm, and pro-eating disorder materials. The platform’s recommender system frequently promotes violative content, exposing young users to potential harm. Additionally, the user interface includes dark patterns that may mislead younger users during the sign-up process, raising concerns about transparency and safety.
Download the full report on X
TikTok: The evaluation of TikTok highlights similar risks, with the platform’s algorithms amplifying harmful content, including dangerous challenges and misinformation. TikTok’s content moderation efforts are inconsistent, often failing to promptly address content that violates community guidelines. The report also uncovers the use of persuasive design tactics that can lead minors to engage with content that is not in their best interest.
Download the full report on TikTok
Instagram: Instagram’s processes were found to inadequately protect minors from exposure to harmful content. The platform’s algorithms often recommend content related to self-harm and eating disorders, and its moderation systems are insufficient to curb the spread of such materials. The report further criticizes Instagram for its complex terms and conditions, which are not easily understandable by younger users.
Download the full report on Instagram
Each report underscores the urgent need for these platforms to enhance their content moderation practices, improve the transparency and comprehensibility of their policies, and redesign user interfaces to prioritize the safety and well-being of minors. These findings are critical for stakeholders seeking to ensure that digital environments are safe and supportive spaces for young users.
Get in touch if you want to learn more.