Meta, the company behind Facebook, Instagram and WhatsApp, is under scrutiny after leaked internal documents revealed that it expected a significant portion of its 2024 revenue to come from ads linked to scams, fraud and banned products.
The disclosures raise uncomfortable questions about how the world’s largest social-media advertising platform polices harmful content — and whether its business incentives undermine those efforts.
What the internal documents revealed
According to the leaked material, Meta estimated that around 10 percent of its 2024 ad revenue — roughly $16 billion — could come from ads that violate its own policies. These include fraudulent e-commerce schemes, illegal online casinos, deceptive investment promotions and ads pushing banned medical products.

One internal report showed that users were being served an estimated 15 billion “higher-risk” ads every single day. Another document suggested that Meta’s current systems generate about $7 billion annually from ads linked to potentially fraudulent activity.
How Meta’s systems amplified the problem
Meta’s own ad-personalisation engine appears to have deepened the issue. When a user clicked on a scam ad, the system interpreted it as a sign of interest — and then showed the user more of the same.
That turned countless victims into ideal targets for further fraud.

Even more concerning were the “High Value Accounts” — big-spending advertisers that were allowed to run ads despite repeated rule violations. In some cases, these accounts accumulated hundreds of “strikes” without being shut down. The more money an advertiser spent, the more leeway they seemed to receive.
Meta’s defence
Meta has rejected the conclusions drawn from the leak, calling the revenue estimates “rough” and “overly inclusive,” and arguing that the documents do not reflect the company’s full anti-fraud efforts.
The company says user reports of scam ads have dropped by more than half over the past year, and insists it is investing heavily in detection tools, human moderation and AI controls.

But Meta has not disputed the authenticity of the documents or the scale of the problem outlined in them.
Why this matters
1. Billions riding on harmful content
The revelation that billions of dollars may come from rule-breaking ads highlights a deep tension: Meta profits from the very content it is supposed to police.

2. Real-world harm for users
Scam ads are not harmless. Millions lose money to fake cryptocurrency schemes, fraudulent shopping sites and medical scams — many of which begin with a single click on Facebook or Instagram.

3. Algorithms built for engagement, not safety
Meta’s advertising engine rewards ads that generate clicks. Scam ads use emotional hooks, outrageous claims and fear tactics that drive engagement — and therefore reach more users.

4. Global regulatory scrutiny is intensifying
Governments worldwide are already pressuring Meta to compensate scam victims and strengthen ad screening.
These leaks will only accelerate calls for tighter regulation, fines, and independent audits.
A recurring pattern
This episode fits into a long-running pattern at Meta. Harmful content — whether political misinformation, pandemic hoaxes or manipulated videos — often goes unchecked until public pressure forces a response. The latest leaks suggest that scam ads are not outliers but have become deeply woven into the company’s advertising ecosystem.

For regulators, the challenge now is clear: can a platform of Meta’s scale truly police scam ads without undermining its own revenue model? Or is the system simply too big — and too dependent on automated ad targeting — to clean itself up without external pressure?