FTC slams Amazon.com, Meta, YouTube, X, Snap, ByteDance, Discord, Reddit, and WhatsApp over unique risks to children and teens, manipulative algorithms and other 'commercial surveillance'
The blowtorch is burning red hot on the bellies of digital giants around the world, and a new report from the US Federal Trade Commission has just turned up the flame. It accuses firms including Amazon.com (owner of Twitch), Meta Platforms, YouTube LLC, X, Snap, ByteDance (owner of TikTok), Discord, Reddit, and WhatsApp of posing unique risks to children and teens, and of effectively lying about the number of children using their platforms. The FTC suggests algorithms are designed to be addictive, and that data abuse is a tool for building market dominance. There is a special shout-out for automated decision-making, which will no doubt catch the eye of Australia's Attorney-General's Department, given its inclusion in the first tranche of privacy reforms. Indeed, many of the report's findings reinforce the Australian Government's goal of implementing a social media ban.
What you need to know:
- A new report from the US Federal Trade Commission offers a brutal assessment of the behaviour of nine digital giants and the risks they pose to children and teens on their platforms.
- Social media, video streaming, and communications platforms covered by the study are engaged in commercial surveillance, and their algorithms and data practices are designed both to be addictive and to entrench market dominance, the report contends.
- Companies that claim children do not use their platforms are lying, the FTC report suggests, as all the evidence is to the contrary. Even where laws exist to protect children and teens, platforms do the bare minimum, it says.
- And automated decision-making (ADM) is specifically called out by the report.
- For Australian legislators and regulators, the FTC investigation is more grist for the mill as they seek to enact new controls over digital giants.
- "The status quo is unacceptable," says the FTC.
"These practices pose unique risks to children and teens, with the companies having done little to respond effectively to the documented concerns that policymakers, psychologists, and parents have expressed over young people’s physical and mental wellbeing... The status quo is not acceptable."
The world’s largest social media, video streaming, and gaming communications platforms have built an addictive, opaque commercial surveillance infrastructure that poses unique risks to children and teens. The outlook for the wider consumer set is hardly any better, according to the report by the US Federal Trade Commission, which is as blunt as it is brutal in its conclusions.
"The tech industry’s monetisation of personal data has created a market for commercial surveillance, especially via social media and video streaming services, with inadequate guardrails to protect consumers. The report finds that these companies engaged in mass data collection of their users and – in some cases – non-users," the Commission states.
"It reveals that many companies failed to implement adequate safeguards against privacy risks. It sheds light on how companies use our personal data, from serving hyper-granular targeted advertisements to powering algorithms that shape the content we see, often with the goal of keeping us hooked on using the service. And it finds that these practices pose unique risks to children and teens, with the companies having done little to respond effectively to the documented concerns that policymakers, psychologists, and parents have expressed over young people’s physical and mental wellbeing."
The FTC's report also makes clear its view that the companies it studied are lying when they claim there are no child users on their platforms because children cannot create accounts.
There is also an explicit finding on the competition implications: "Data abuses can fuel market dominance, and market dominance can, in turn, further enable data abuses and practices that harm consumers."
"The status quo is not acceptable," according to the FTC.
Australia's social media fight
The issues raised echo a debate already under way in Australia. Prime Minister Anthony Albanese triggered national discussion when he said his Government planned to introduce new laws by year's end to restrict social media use by young people. It is part of a raft of legislative changes designed to address digital harms from misinformation and privacy abuse, as well as from social media.
While the proposed Australian law is perhaps the most well-defined recent initiative, it is part of a wider global regulatory pushback against social media, and indeed against smartphone use by young people, particularly in Europe. China and India already ban a number of social media and communications platforms in their countries.
For this latest report, the FTC issued Amazon.com, Meta Platforms, YouTube LLC, X, Snap, ByteDance, Discord, Reddit, and WhatsApp with what are known as 6(b) orders, which compel companies to hand over data and documents as part of a review of their data, algorithm, and privacy practices.
The latest FTC report found the companies employed privacy-invasive tracking technologies to facilitate advertising to users based on their preferences and interests. It noted the business models of many of these companies incentivise mass data collection for monetisation, especially through targeted advertising, which accounts for most of their revenue. In the preface to the report, Samuel Levine, director of the Bureau of Consumer Protection, said: "The amount of data collected by large tech companies is simply staggering."
According to the FTC, the nine companies were asked to provide information on how they collect, track and use personal and demographic information, how they determine which ads and other content are shown to consumers, whether and how they apply algorithms or data analytics to personal and demographic information, and how their practices impact children and teens.
The amount of data collected by large tech companies is simply staggering
Among the key findings from the US FTC report are:
- Mass Data Collection: Companies collected vast amounts of data about users and non-users, including personal, demographic and behavioural information, often without consumers' awareness.
- Inadequate Data Safeguards: Many companies failed to implement adequate safeguards against privacy risks, leading to significant concerns regarding data privacy.
- Targeted Advertising Practices: The report highlighted that companies used extensive data to serve hyper-granular targeted advertisements, raising issues of consumer privacy and consent.
- Vulnerable Populations: Data practices pose unique risks to children and teens, with companies not sufficiently addressing concerns about the physical and mental wellbeing of younger users.
- Lack of Transparency: Consumers generally lacked meaningful control over their personal information, with opaque practices regarding how data is collected, shared and used.
- Insufficient Self-Regulation: The report criticised the self-regulatory approach of firms, noting this has failed to protect consumer privacy adequately.
- Competition Implications: Data abuses can fuel market dominance, and dominant companies may further exploit data practices that harm consumers, limiting competition and consumer choice.
- Automated Decision-Making Risks: The use of algorithms and AI raised concerns over bias, discrimination and the lack of transparency in decision-making processes.
The US regulator found these companies collect and indefinitely retain vast amounts of data, including information from data brokers, about both users and non-users of their platforms.
According to FTC Chair Lina M. Khan: "The report lays out how social media and video streaming companies harvest an enormous amount of Americans' personal data and monetise it to the tune of billions of dollars a year.
“While lucrative for the companies, these surveillance practices can endanger people's privacy, threaten their freedoms, and expose them to a host of harms, from identity theft to stalking. Several firms' failure to adequately protect kids and teens online is especially troubling. The Report's findings are timely, particularly as state and federal policymakers consider legislation to protect people from abusive data practices."
The report concluded the social media and video streaming services did not adequately protect children and teens on their sites. It also noted some of the potential competition implications of the companies' data practices, including the possibility that amassing vast amounts of user data can deliver market dominance, which may in turn encourage harmful practices as companies prioritise data acquisition at the expense of user privacy.
Some companies’ unwillingness or inability to adequately explain their use of algorithms, data analytics, or AI calls into question whether they can adequately explain these concepts to users and the public and whether they truly understand the technology they are implementing and its potential effects.
Automated decision-making (ADM) came in for special scrutiny in the report, something that will no doubt catch the eye of Australia's Attorney-General's Department, which has included ADM in the first tranche of privacy reforms.
The FTC report notes: "There has been an explosion in the use of automated decision-making technologies and AI in recent years, raising novel technological, ethical, and legal issues. Many companies rely on these automated technologies to perform key business functions. The Companies covered in this report are no exception. Their responses demonstrate the extent to which (perhaps unknown to consumers) the companies rely on algorithms, data analytics, or AI to carry out their business functions."
The report found that, in general, companies did not appear to provide the public with clear, complete, conspicuous and comprehensible explanations of how and why activity, inputs, and personal and demographic information translated into particular automated decisions. Nor did they explain the factors that dictated a particular automated outcome, at both the individual and systemic levels, say the authors.
"In fact, some companies claimed it was difficult to explain to the Commission how models prioritised or weighed certain factors. Some companies’ unwillingness or inability to adequately explain their use of algorithms, data analytics or AI calls into question whether they can adequately explain these concepts to users and the public and whether they truly understand the technology they are implementing and its potential effects."
The FTC recommends that companies put users in control of, and be transparent about, the data that powers automated decision-making systems, and that they implement more robust safeguards to protect users. "Changing this would require addressing the lack of access, choice, control, transparency, explainability and interpretability relating to their use of automated systems; and implementing more stringent testing and monitoring standards."