Research & Thought Leadership
Understanding Security Within Social Media in Relation to Adolescent Use
A Master’s thesis examining COPPA compliance, digital footprints, biometric risks in schools, adolescent psychology, and a decade of regulatory change — updated in 2026 with new data on sextortion, AI-generated CSAM, mental health evidence, and global policy responses.
Key Statistics · 2026 Data
The Numbers Behind the Research
Data points drawn from the updated 2026 edition, incorporating Pew Research Center, Surgeon General Advisory, and platform disclosures.
Research Structure
Eight Chapters · Original & 2026 Updates
The original 2015 thesis covered five chapters. The 2026 edition adds three update chapters responding to a decade of regulatory change, new exploitation threats, and AI developments.
Core Findings
Seven Key Conclusions
From the original 2015 thesis — conclusions that remain as relevant, or more so, in 2026.
Privacy protection strategies are unevenly distributed — culture, background, and experience shape how adolescents use privacy controls.
Facebook’s financial interests conflict directly with meaningful child safety enforcement under COPPA.
Biometric data collection in schools presents serious unintended consequences despite perceived safety benefits.
Children’s digital footprints are permanent — a social media post from childhood can cost a career decades later.
Active parental investment and involvement is the single most consistently protective factor identified across all data.
COPPA is passive — it places the burden of proof on end users rather than on platforms, making enforcement largely reactive.
Data posted online is no longer the user’s property — it becomes the intellectual property of the platform immediately upon posting.
— Dale Hufford, 2015 · Validated by Australia’s 2025 national ban and the KOSPA bill
Regulatory History
A Decade of Change: 2015 → 2026
From COPPA as the lone federal protection to a global wave of legislation directly validating the thesis’s central critique of passive regulation.
2026 Research Update
New Exploitation Threats
Two categories of threat that were entirely absent from the 2015 research landscape — now among the fastest-growing areas of child exploitation online.
2026 Update
AI Companions: The Next Regulatory Gap
- Designed to maximize emotional bond through flattery, affirmation, and simulated attachment
- No age verification, no parental consent, no content limits for minors
- Available 24/7 — capable of romantic roleplay and crisis conversations without safeguards
- Sewell Setzer III, 14 (FL, Feb 2024): suicide after months of Character.AI use
- FTC formal inquiry into AI chatbot risks to minors opened Sept 2025
- CA SB 243 (eff. 2026): first state law requiring AI disclosure & crisis protocols
Global Policy Response
Australia’s World-First Under-16 Ban
- France, UK, Germany, Italy, Spain considering similar bans
- EU Digital Services Act (2024): prohibits profiling-based ads to minors
- KOSPA passed U.S. Senate 91-3 — awaiting House vote (early 2026)
- 28+ states enacted K-12 phone/social media restrictions by early 2026
- California Phone-Free Schools Act signed Sept 2024
Access the Full Research
Read the Full Research
The 2026 updated edition incorporates a decade of regulatory shifts, new enforcement actions, emerging AI risks, and the latest mental health evidence — making it as relevant today as when first written in 2015.