Spain Makes a Stand Against Social Media Platforms’ Disappointing Ethical Failures

2/10/2026

On Tuesday, February 3rd, Spanish Prime Minister Pedro Sánchez announced that Spain will join a growing number of countries banning social media for children under the age of 16. The trend began last December with Australia, which became the first country to implement a nationwide ban on social media – including TikTok, YouTube, and Instagram – for anyone under 16. Compliance is enforced through fines of up to the equivalent of nearly $35 million USD. Since then, several other countries, including France, Denmark, Italy, the UK, and now Spain, have proposed similar laws. But these sweeping bans, and the broad political support they enjoy globally, raise the question of why they became necessary in the first place.

The correlation between social media and depression in adolescents has been recognized for over a decade. A causal link between social media use and depression was established more than seven years ago by researchers at the University of Pennsylvania. Social media has also long been known to cause, or at least fuel, body image issues, including body dysmorphic disorder. Children and young adolescents are especially at risk of developing these problems because of their heightened neuroplasticity, and it’s clear that the impact of social media on their mental health can be severe. That’s without even mentioning that many of these platforms expose children to sexual predators as well as shocking, violent content. Children not old enough to watch R-rated movies are recommended videos of death and grievous injury, and these platforms amplify and normalize extreme misogynistic viewpoints.

The engineers behind social media algorithms and user interfaces have known about these issues for years. Their responses have been underwhelming. It’s been less than a year and a half since Meta introduced teen accounts for Instagram, and they seem to have missed the mark. One 2025 study found that over the course of just one month, 60% of 13-15-year-olds encountered unsafe content or received unwanted messages from strangers. Others have found that options to filter out offensive words or phrases prove largely ineffective. The greater issue may be that the fundamental mechanics of the apps remain unchanged: recommendation systems that amplify extreme content, interfaces that inherently promote social comparison, and endless feeds optimized for maximum engagement.

Even platforms that appear more innocuous, like YouTube, are clearly not prioritizing real action to safeguard children. According to the Pew Research Center, 85% of parents with young children report that those children watch YouTube. The American Academy of Pediatrics (AAP) highlights a 2020 study which found that 95% of early childhood videos targeted at 0-8-year-olds contained advertisements. Of these ads, 20% featured inappropriate content, including violent video games, sexual content, drugs/alcohol, or political issues, and 45% pushed products to children. The AAP stressed the significance of these figures: such ads often pull children’s attention away from educational content, leaving age-inappropriate messages as the sole focus of young viewers.

It should go without saying that engineers should hold themselves to higher ethical standards where children are involved. Children are a vulnerable, non-consenting population, and they are shaped by the products and media they are exposed to. The tens of billions of dollars these platforms rake in every year from minors are not worth the lasting trauma they can inflict, yet that revenue does seem to be incentivizing companies to make only cosmetic changes when it comes to child safety. As stated by Bryn Austin, professor in the Department of Social and Behavioral Sciences at Harvard University, these platforms have “overwhelming financial incentives to continue to delay taking meaningful steps to protect children.” In practice, that incentive is taking priority over the ethical standards of engineering.

In this way, Spain’s decision – and the decisions of the countries that came before it and those that will follow – reflects a deep and pervasive failure to uphold ethical standards in this area of software engineering. It’s a disappointing reality that governments are forced to step in to protect citizens from such poor prioritization.