Why are scam ads everywhere online?
Scam ads are flooding Facebook, YouTube and other social media. Tech companies are making billions allowing them. Who’s behind all of these online ads? And what can we do about it?
Guests
Sandeep Abraham, consultant at Risky Business Solutions. Former safety investigator at Meta.
Jeff Horwitz, technology reporter at Reuters.
Also Featured
Rob Leathern, founder of collectivemetrics.org, which aims to bring more transparency to the digital ad space.
The version of our broadcast available at the top of this page and via podcast apps is a condensed version of the full show. You can listen to the full, unedited broadcast here:
Transcript
Part I
MEGHNA CHAKRABARTI: A YouTuber who goes by Jamlab recently did an experiment. He decided to actually click on some of the strange ads he’d been seeing online.
One of them was a YouTube ad that seemed to be for shoes from the brand “Hey Dude.”
JAMLAB: As you can see by the big text here, they’re all under $5. Of course, like most official advertisements, the website they’re linking here is simply called “Sale” with an icon that’s just an H. H for “Hey Dudes,” of course! This seems legit to me. Let’s go check out these Hey Dudes for $5!
CHAKRABARTI: Jamlab clicked on the URL at the bottom of the YouTube ad.
JAMLAB: You go ahead and click that link. It’ll take you to wardshopp.com. That’s “shop” with two Ps. So naturally we’re just greeted with very generic, exclusive deals. “Elevate your wardrobe with the latest trends.” And then we’ve got “hot collection.”
It doesn’t even have a brand or anything, right? Like this is just. It is just a shirt. If we go to the official Hey Dudes website, we can see that they cost, I don’t know, take your guesses, guys. Over or under $5? What do you guys think? Oh, right. $90, $80.
I don’t know. I mean, $5 that, that sounds really good. It’s worth a shot, right? Well, I guess that’s what they’re hoping people will think when they see this ad. But unfortunately, I’m here to break the hard truth to you guys. I don’t think it’s legit.
CHAKRABARTI: Chances are, you’ve seen strange and seemingly scammy ads like this, too. They’re all over social media.
Like this one on Facebook from a company calling itself “America One Retirement.” It asks, “Are you 59+ and worried about rising taxes and RMDs? Have you looked at Roth Conversions?”
Wait, I didn’t know I could convert my IRA into a Roth and avoid thousands in future taxes. But apparently the window to do this closes in 2025. This isn’t a loophole. It’s a limited opportunity.
CHAKRABARTI: If the AI-generated voice wasn’t an immediate giveaway: to be clear, this ad is lying. The window to do a Roth conversion does NOT close in 2025. There are currently no age or income limits on who can do a Roth conversion. None of that is expected to change.
And by the way, we do not like advancing misinformation on this show, so let me be clear with some actual facts. The ad also doesn’t mention the potential downsides of converting your traditional IRA to a Roth IRA. It is true that you could save thousands in future taxes, since Roth IRA accounts grow tax-free.
However, if you do a Roth conversion, you will generate an immediate tax bill for the year you make the conversion. Your money also has to stay in the Roth account for at least five years. It can temporarily put you in a higher tax bracket and can potentially impact other things like Medicare premiums.
Long story short, ignore the ad. Just ignore it and consult an actual expert instead.
Anyway, back to our examples – like this one from YouTube, where an AI deepfake of Elon Musk pushes something called Stock Market Navigator.
You can increase your monthly income by following my stocks. Why not join my WhatsApp group for free? I wanna share a rare opportunity with you. You can join my WhatsApp stock group completely free of charge. In this group, my assistant will privately message you and use the latest AI technology to predict future stock trends.
CHAKRABARTI: Or maybe this one, also from YouTube, promising to ship you an ultra-realistic, AI robotic dog – “Last Day Promotion 49% OFF.”
Wow. What you are seeing right now is an ultra-realistic, AI robotic dog crafted by master engineers in Germany. From its lifelike appearance to its dynamic movements, it boasts an incredible 99.9% similarity to a real dog, powered by advanced AI.
CHAKRABARTI: So who is behind this surge in the past couple of years of scammy online ads, and is there anything that you can do about it? So let’s talk to Sandeep Abraham. He’s a consultant at Risky Business Solutions and a former safety investigator at Meta, and he joins us from Fremont, California. Sandeep, welcome to On Point.
SANDEEP ABRAHAM: Thank you, Meghna. Hi, good to meet you.
CHAKRABARTI: So first of all, let’s get to this straight question. Are we just imagining it or have the number or frequency of scam ads on social media and YouTube really surged in the past couple of years?
ABRAHAM: You’re not imagining it. They definitely have surged, for a number of reasons, both geopolitical reasons and business reasons with these companies.
I can go into more detail on either of those. You just let me know what you’d like to hear more about.
CHAKRABARTI: Let’s hear geopolitics first.
ABRAHAM: I’m sure you’ve heard of the pig butchering scam.
CHAKRABARTI: Oh yeah. We did a whole show on it.
ABRAHAM: Exactly. Yeah. So that is an umbrella scam. What I mean by that is that it covers a lot of different types of scams.
Now it used to be just romance scams, where you get a text in your WhatsApp or your iMessage telling you, Hey, you’re in my contact list. Now it covers crypto scams, investment scams, sometimes drop shipping. There’s an entire ecosystem for pig butchering scams coming out of the Golden Triangle in Southeast Asia.
CHAKRABARTI: Can I just jump in here, Sandeep, for people who don’t know what pig butchering is, just the short description is a scammer develops a relationship through, like you said, text or online with a target, usually in like Europe or the United States. And then over the course of developing that relationship, they end up scamming that person out of a lot of money.
Sometimes, the outcomes are actually just quite horrific. People losing their life savings or their retirement money, et cetera. But folks, if you want to hear about this, seriously, go to onpointradio.org or our podcast feed and look for pig butchering, and you’ll find a great show about it there.
Anyway, Sandeep, I didn’t mean to interrupt you, but go ahead. Are you saying that the same groups or the same template is being used for these just online one-off ads?
ABRAHAM: The same business model. And yes, the ad you just played, with Elon Musk inviting you to a private group, is this form of pig butchering, where you are invited to a WhatsApp group and you talk to a quote-unquote assistant who then guides you through a number of investments and stock tips, eventually getting you to sign onto their fake crypto exchange.
And then you keep putting your money into an investment scheme that doesn’t exist, and they use AI-generated stock tips to keep luring you in further.
CHAKRABARTI: Ah, okay. I was going to ask how much AI is part of what these groups are using to seem legit. It seems more and more every year?
ABRAHAM: A lot. Actually, AI was a big game changer in 2022, when ChatGPT came out. It gets rid of the language barrier entirely. In the past, the scam compounds running pig butchering scams had to take in people from South Asia, the Philippines, or English-speaking countries who could get past the language barrier.
But now they don’t need to do that. If you just write a few prompts, you can generate an entire narrative, entire AI generated personas. The websites that they lead you to are entirely AI generated with images. I’ve seen fake stock gurus and stock geniuses completely fabricated. Yeah, it’s just a wild, scary, weird world out there.
CHAKRABARTI: So tell me more then about the origins of it, if it goes back to pig butchering in terms of the business model, or even before. If memory serves, like you said, a lot of that activity is going on in the Philippines, Indonesia, et cetera, but it actually has an even deeper history: Chinese gangs have a lot to do with this.
ABRAHAM: They do. But before I do that, I just want to expound on the geopolitics here.
CHAKRABARTI: Yes, please.
ABRAHAM: This specific model has also expanded to West Africa and parts of South America. It’s a very lucrative, profitable model. And it’s just using investment scams and the ads on social media to bring people in.
That business is booming; as far as fraud goes, it’s very profitable. And you couple this with growing populism, increasing economic uncertainty, the rise and constant falls of crypto. There are just a lot of different forces here driving people to commit these scams.
CHAKRABARTI: Tell me more about the economic populism part.
ABRAHAM: Less rule of law means people will try to make money in any way they can. It’s a very broad topic and I don’t wanna deviate too far into it, but yeah … people are just trying to save their money and manage as best they can. Which is a big reason a lot of people get tricked into coming to these labor scams as well.
Because, after COVID, people were just looking for jobs and they ended up here.
CHAKRABARTI: Okay. No, sorry, I didn’t mean to interrupt you. Please go ahead.
ABRAHAM: The Chinese gangs you were talking about: I was assigned to WhatsApp at the time, when I was at Meta, and I was working on this pig butchering scam.
Trying to understand where it started and what was happening here. 2021 was an inflection point, when COVID emptied out all the casinos in Southeast Asia and the Myanmar coup happened, which meant eastern Myanmar, western Thailand and southern Cambodia were all just a lawless zone.
In fact, that part of eastern Myanmar is often called Karen State; sometimes it’s called little China. … The entire economy is very Chinese. At the same time, Xi Jinping cracked down on scams and fraud in China, sending the Chinese scam syndicates and organized crime down south into this lawless zone.
So COVID, the Myanmar coup, and Xi Jinping’s crackdown all contributed to sending everyone down there. They used to be very prolific on WeChat, Weibo, all the Chinese social media. I speak, write and read Chinese, and I researched this. If you go on WeChat and such, they’ve known about this for many years.
You have takedown posts, you have people talking about fighting back. It is just a very old way of scamming people. But as of 2021, it started coming onto Western social media and started targeting Western people, often through the romance part of it, often recycling photos of Chinese women and men from their social media.
CHAKRABARTI: Wow. Can I just ask you a quick clarification question? When you said take down posts, what is that?
ABRAHAM: Oh, just instructionals. If you go on WeChat, there are entire blog posts on how to avoid this scam, and news on how the government is arresting people. That’s all to say there are articles about taking down this kind of scamming, and it’s just very old. Part of the reason we don’t learn as much from that is the Great Firewall: a lot of WeChat is gated to China unless you have a Chinese number or someone invites you. To put it a different way, Western social media and Western social media users didn’t have a vaccine for these scams when they first hit them.
Part II
CHAKRABARTI: Today we are talking about the surge over the past couple of years of scam ads online, almost everywhere you go online. But of course, especially on places like social media platforms and YouTube. And we’re trying to figure out where these are all coming from, why they’ve happened over just the past few years, how much money is there to be made and what can be done to stop them.
This is On Point listener Jacob, who is in Warner Robins, Georgia, and he told us he has seen ads on Facebook for rifle suppressors, also known as silencers.
JACOB: I used to receive ads on my Facebook that were very clearly just suppressors for rifles being called automotive parts. Pretty weird.
CHAKRABARTI: Sandeep Abraham is with us today.
He’s a former safety investigator at Meta and currently a consultant at Risky Business Solutions, and he’s with us from Fremont, California. Sandeep, just quickly, that example that listener Jacob gave us: is that a scam ad, or just one that’s trying to get around regulations on selling firearms and firearm accessories online?
ABRAHAM: Hi, Meghna. Yeah, it’d be hard for me to say without actually seeing the ad myself and doing a bit of analysis, but I would lean more toward the latter. Because it is, not illegal, but against policy to sell firearms and weapons on the platforms; it falls under the regulated goods policy.
CHAKRABARTI: Got it. Okay. Because I suppose it’s all within this spectrum of ads not being what they claim to be. But tell us a little bit more about the business side of this, because you gave us a really excellent background on the recent history that led to the surge of these ads through these crime syndicates, essentially.
How many do you know are out there, or can you estimate? And how much money are they making? Do we have any sense of that?
ABRAHAM: These, the scam syndicates and just scammers overall?
CHAKRABARTI: Yes.
ABRAHAM: Hundreds of thousands of people, if you count the people they’ve essentially enslaved to run scams; it’s an enormous amount. I don’t have exact numbers on the number of people and the number of syndicates. One interesting thing: there was a recent arrest of a scam kingpin named Chen Zhi, with $15 billion in crypto seized. That is one massive syndicate. But within that, they’d have smaller groups constantly competing with one another.
I’ve been in a number of these scammer groups, where I will join one scam WhatsApp group only to be messaged by someone within it pushing a different scam. So it’s really hard to say. As far as money, I think $40 billion annually for Southeast Asian pig butchering scams is the number that I last heard.
CHAKRABARTI: Wow. Okay. So then now let’s connect it to how they end up on screens and phones in the United States. Because my presumption is that companies like Meta, et cetera, they require certain information from the groups wanting to post the ad.
And I also presume there’s some kind of analysis of the veracity of the ad. How do so many get through to the United States?
ABRAHAM: To be clear, as far as advertiser verification goes, there are only five regions whose laws really require it, that Meta has to comply with: the European Union, India, Australia, Singapore, and Taiwan. If you are advertising to people in the United States, you are not necessarily required to verify your identity on Meta’s platforms, because there’s no law mandating that.
Yeah. When I look at Meta’s ad library and look for scam ads, there are a number of just fake accounts created with completely fabricated names, no verification. And they run 14 scam ads all in the last 24 or 48 hours. To your question, this is a business in and of itself, too, for these scammers.
This is what I call legitimacy farming. On YouTube, on Meta, you will see there’s a difference between the beneficiary of the ad (who actually benefits from it, who made the ad and created the copy) and the payer, who actually purchases the ad on the platform.
So you might have an account that purchases the ad, but the actual person designing it and benefiting from it is a completely different faceless, nameless person. On YouTube, from my research, you’ve seen basically shell companies. They are created and incorporated in Florida. Some aren’t from China, but they have their LLC spun up very quickly.
They’re run by Americans, or, depending on the state, you don’t even need to show or verify ID to incorporate a company. And for Meta and for Google, you have to present two forms of verification to be a verified advertiser: one being, if you’re a business, an incorporation document, which, if you didn’t have to show ID to the state, leaves you a completely anonymous person regardless.
And then a utility bill or something like that, which is very easy to fabricate. So they obfuscate on that layer, as far as developing personas and companies and identities to advertise. They create the ads. They assume some percentage will be taken down, because when you look at the fake advertisements, on YouTube for sure, a lot of them just get taken down very quickly. But they will run for 14 hours, a few days. Enough to pull people in.
CHAKRABARTI: Right. And 14 hours or a few days on places like Facebook or YouTube. That’s many millions of potential views.
ABRAHAM: Actually —
CHAKRABARTI: Oh no, go ahead. Please.
ABRAHAM: I can give you a specific number.
Like I said, I was in the ad development flow on Facebook yesterday. If you target all people ages 18 to 65, of all genders, with an interest in, quote, investment, business and finance, the audience that they project for you is 714 million to 840 million people.
CHAKRABARTI: Oh my gosh.
ABRAHAM: That could be reached so —
CHAKRABARTI: Amazing. The scale of anything associated with social media or other global platforms is the thing that continues to blow my mind, to put it that way. One more thing about how these scammers gain legitimacy in the United States. Because you mentioned like you can just choose a state and incorporate a business there, or perhaps sometimes even faster just start up an LLC.
But many states require some kind of human contact, right? As a manager of the LLC, and my understanding is that there are people out there who are just willing to give out their address and even their information as managers of those LLCs.
They may not even know what the company is for, but they might be listed as the manager on many of these businesses. And do they get paid for it too?
ABRAHAM: 100%. There is an entire industry of this. The notorious states in the scam investigation world: Wyoming is great for LLCs, Delaware is great for S-Corps, and more recently I’ve been seeing Colorado come up a lot. The registered agent in some scams is an entirely made-up, fabricated name. A dead giveaway, for Chinese scammers especially, is a Western-sounding name that doesn’t quite fit.
Like Henriques Maria Boden, I think, is one that I’ve seen, which is not a very typical name. And when you look (I use OpenCorporates, for any investigators out there), if you look up that same name, you just see a bunch of other scam companies and exchanges that have been incorporated as well. But yeah, there is an industry for registered agents, and they sign on for multiple companies.
CHAKRABARTI: So does that mean, as far as the U.S. companies are concerned, the platforms or even regulators, that it’s very hard to tell if the originator of an ad is legit or not?
ABRAHAM: To an extent. Yeah.
You can require individual identification documentation, like passports. But they don’t really require that from people, or if they do, they don’t require it from the beneficiary; the payer would provide that information.
So yes, I’m a real person advertising on Facebook, but verification isn’t mandatory to advertise in the United States. Why would you do it? I would just use an account I created last week and run my ads today without worrying about verifying who I am.
CHAKRABARTI: Okay. So, Sandeep Abraham, hang on here for just a second.
I do want to note for everyone that we did reach out to Meta for a statement, or for them to answer some questions. At the time of this broadcast, Meta has not responded. We also reached out to the U.S. Federal Trade Commission about scam ads. We’ll talk about the FTC’s role in all this in a few minutes.
The agency did not respond, either during the government shutdown or after the shutdown ended. We also contacted Google, which owns YouTube. They declined our interview request, but a Google spokesperson told us that in 2024, Google blocked or removed some 415 million ads across its platforms, including YouTube, for violating policies closely associated with scams.
That doesn’t actually tell us what share 415 million is of the total number of ads served across Google or YouTube. The spokesperson also said that Google is using advanced AI to power its scam ad detection.
Okay, let’s bring Jeff Horwitz into the conversation now. He’s a technology reporter at Reuters and has reported extensively on the platform side of the scam ad issue.
Jeff, welcome to On Point.
JEFF HORWITZ: Thank you.
CHAKRABARTI: Let’s pick Meta, because they’re big. How much money do we know, or were you able to find out, that Meta says it might make from scam ads?
HORWITZ: Yeah, so I got access recently to a substantial quantity of internal documents from Meta in which they are looking at scams on the platform and considering policies related to them.
Meta’s own analysis from late last year suggested that they were taking in $7 billion a year just from the portion of scams that their own staff considered to be, quote, ‘higher legal risk.’ That’s not a complete total; that’s just the ones where, if Meta staffers looked at them with all the available data Meta has on hand, they would say, oh yeah, that’s definitely a scam, that would be hard to defend. They obviously don’t look at most of them most of the time, but that $7 billion a year should be considered a partial summary of what Meta’s pulling in.
CHAKRABARTI: $7 billion with a B annually?
HORWITZ: Yes. Yes.
CHAKRABARTI: Wow. Okay. And how much is that in comparison to Meta’s overall revenue?
HORWITZ: So Meta last year did a little north of $160 billion worth of revenue. Now, I think the rifle suppressor example you mentioned earlier is an example of something that might be a scam but is definitely dodgy, and certainly a violation of policy.
If you expand out the scam universe to the universe of bad ads from scams to stuff that might include things like banned goods, pornography, illegal casinos. Now we’re talking 10% of Meta’s revenue last year. So that would be $16 billion last year alone.
CHAKRABARTI: My jaw just dropped, Jeff. I can’t imagine any company in the world saying, we’re going to do something about this. We are going to cut off 10% of our revenue stream for the greater good.
HORWITZ: I think industries do this all the time. The banking industry, for example: there’s a lot of business that banks don’t do with, say, the Iranian Revolutionary Guard. You can’t trade oil with those guys.
Which is, from a financial point of view, a big loss.
CHAKRABARTI: But they can’t trade oil with those guys because the U.S. government does not allow them to.
HORWITZ: This is exactly my point, which is that, look, there isn’t currently much in the way of rules about who you can do business with.
Sandeep noted, advertiser verification is not required in most markets and certainly not for most products. And there’s effectively no responsibility on a platform, legally, to police who they’re doing business with and whether those entities are dodgy.
I think one of the things, just to illustrate this: inside Meta, there are thresholds for when you can remove an account that is a likely scammer. The level of certainty that Meta requires before allowing an advertising account to be blocked is 95%. In other words, if Meta’s only 90% sure that an advertiser is attempting to defraud its users, it will not take down that advertiser’s account.
Maybe it’ll take down an individual ad. And one thing it will definitely do, this is from the internal documents, is add a penalty to how much it charges an advertiser it thinks is likely scammy, to discourage that sort of business.
But it won’t just say no, or: hold on, this is likely bad, so we’re going to take a closer look and determine whether it’s innocent or a problem.
CHAKRABARTI: Sandeep, I would love to hear your take on this. Go ahead.
ABRAHAM: To add to that: to run an ad, if you wanted to run a scam, it’s about 20 bucks a day.
So imagine, an extra fee would not really be that much of a deterrent. But from my world, where I’ve investigated fraud as a financial crime: whether you think you’re deterring it with an extra fee or not, you are essentially monetizing fraud. You are still taking money from someone who is committing fraud, whether there’s an extra fee or not, and that wouldn’t be legal in any other financial institution. So I don’t know why tech gets to get away with it.
CHAKRABARTI: Sandeep, tell me a little bit more about what Jeff said regarding the thresholds that a company like Meta sets in order to take action on their side of things. What’s the thinking around how those thresholds are set, for scam ads, for example?
ABRAHAM: Yeah, from the advertising side, there are financial incentives. And we need to be really clear about what counts as violating, from a policy perspective. If it’s something radioactive like CSAM or terrorism content, the regulations are so strict that the thresholds for taking stuff down are much lower.
But for scam ads, because it is such an amorphous, strange space, it butts into free speech principles and free speech policies on the platform, and privacy issues, depending on who you’re taking down. On the detection side, you have a mix of both content and behavioral signals. So the risk score for a certain advertiser or a certain ad will include the content of the ad, how many people have reported it, whether the models themselves have detected and classified it as risky or scammy, and the behavioral signals of the advertiser. Is there a mismatch between their IP address and their audience? Are they using a fake account?
Are they consistently signing in with a VPN or over Tor? These are just examples. Obviously I haven’t been at Meta in three and a half years, so I haven’t worked on the current systems.
CHAKRABARTI: We’re going to have to just head into a quick break here. And when we come back, I wanna talk more about the regulation side of things.
But let’s close this segment with another example. This is On Point listener Jeff in Rapid City, South Dakota, who says his wife clicked on what she thought was a Facebook ad for L.L. Bean.
JEFF: Got to what looked exactly like the L.L. Bean webpage with various items on sale dramatically reduced, found a whole bunch of different things that she wanted to buy, put them all in a cart and then bought them, gave them our credit card, and then discovered at some point afterwards that it was a scam.
We ended up having our credit card canceled and had to get a new card. An excellent scam, if you can call such things excellent.
Part III
CHAKRABARTI: I wanna just quickly play this from Rob Leathern. He used to work at both Meta and Google.
He’s now at collectivemetrics.org. It’s a group that aims to make the online ad space more transparent. And he said one of the loopholes that scammers are able to use is that platforms like Meta and Google also legitimately aim to make it easy for small companies to get started.
ROB LEATHERN: It tries to give small advertisers the ability to very quickly and easily get their ads running, put a credit card down and get going.
All kinds of tools to help you make ads and be able to scale quickly. But those same tools and those same degrees of permissiveness and lack of friction are also used by scammers who may be large entities or may have many different scammy products that they’re trying to run. If you’re going to err on the side of either making it easier for businesses or protecting people, I think you have to protect people.
I don’t want my elderly parents getting scammed because it makes it easy for my pressure washing neighbor’s business to be able to advertise to people in my area.
CHAKRABARTI: We’ll hear from Rob Leathern a little more later in the show. Jeff, let me turn to you on this, because, to be fair to all the platforms, they are having to perform a kind of balancing act, right?
I take Rob’s point that he doesn’t want his elderly parents being scammed because it’s too easy to get a legitimate business’s ad online. But on the other hand, that legitimate business could be owned by another person’s grandparent who’s trying to start a business in retirement.
What should we expect from the platforms in terms of trying to strike this balance?
HORWITZ: I think, in terms of what level of diligence you’d expect, there have been calls for something similar to what, say, a bank requires to open an account, which is: you show up, usually in person, with an ID and verification of your identity.
And then what you’re doing with that account is sometimes monitored. So for example, if I’m sending money overseas in ways that are suspicious, maybe that’s going to get flagged. Those are things that are required by regulation in the U.S. and in many other countries. It’s not necessarily the same for advertising.
It’s a little more freeform, in terms of what platforms feel like doing. So right now, the platforms determine what level of diligence they feel is appropriate for themselves. I think one of the things the Meta documents I’ve seen do indicate is that the company is considering what the impact of various scam prevention efforts on its revenue is going to be.
And if they don’t fit within a budget, that’s a problem.
CHAKRABARTI: Okay. So here’s another question about what platforms could do. You gave us that example of the 95% certainty that an ad is scammy before they take it down. That’s one thing, but the platforms also welcome users to report stuff.
Hey, I clicked on this ad and suddenly I was out $10,000. Since you looked at Meta in particular: is Meta working hard enough to act on the intel from those user reports?
HORWITZ: So I’ve been covering Meta for, at this point, going on seven years. And I will say that user reports and how they’re handled is generally not a pretty business.
A lot of them aren’t reviewed at all. Those that are reviewed are frequently not reviewed with a great level of care. Just as an example, this is a Meta internal analysis from a couple years back, on reports of scams in Facebook, Messenger and Instagram direct messages. They looked at how they were handling them and realized that they were dismissing 96% of the valid scam reports.
Either because they weren’t looking at them, or because they looked at them and wrongly adjudicated them. And this was embarrassing, right? They were only getting one in 25. That was the sort of internal read. So Meta decided it wanted to do better, and doing better, the new standard it was going to try to hit, and this was aspirational, was discarding only 75% of the correct reports of scams.
There’s a lot of leeway for error in these processes internally. It’s obviously pretty opaque. And it is something where, again, just human review isn’t going to be done on most of these things.
CHAKRABARTI: Yeah. The scale is just too big for that. Sandeep, your take on this.
ABRAHAM: I just want to also point to the other side there.
User reports are one thing, but they are only one aspect of detection. And just because something is reported doesn’t mean the reports capture every scam that’s out there. It’s basic human behavior: I’m on the train or on the bus and I’m scrolling on my phone. I’m not going to use all that thumb effort to go and report an ad every time I see it.
I’m just mindlessly scrolling. So the number of users who are exposed to scams, who see them and don’t report them, that’s a much higher number, far more than likely. So putting the onus on users to report and identify the scams, as Jeff said, one, it doesn’t actually pay off that often.
And two, it’s a little unfair to the users as well, because I had to get a certification to examine fraud and really immerse myself in this. I wouldn’t expect my grandmother or any layperson off the street to really understand the nuances and ask, oh, is this a fraudulent business model?
… It just shouldn’t be on the user to do that.
CHAKRABARTI: Yeah. Okay. So fair enough. It shouldn’t be on the user to do that. We can and should expect more from the platforms, especially if, as Jeff is reporting, they’re making billions of dollars off scam ads that they know are out there.
But that leaves the third leg of the stool here, which is federal regulation. Because one of my first questions when my team was discussing this show was, aren’t there any truth-in-advertising laws or regulations on online advertising? Sandeep, quick answer to that question.
ABRAHAM: From a fraud perspective, no, none of the existing fraud laws really cover advertising that way. You have the wire fraud statute, which, you know, is pretty broad and covers anything enabling fraud over the internet or the telephone, but it’s rarely prosecuted in these cases.
ABRAHAM: There is one law touching on the business model behind fraud, which is transparency around incorporating in U.S. states: the Corporate Transparency Act of 2021, which, as of this year, the U.S. Treasury Department decided it wouldn’t enforce against quote-unquote domestic corporations.
But if you consider that so many of these frauds and scams and scammy advertisers are incorporating in states that don’t really check identification, you just have a massive amount of scammy companies and organizations that then present their fraudulent documents to Meta, Google, TikTok.
So that transparency law has been defanged, but if you went back and started enforcing it the way it was meant to be enforced, it might actually help down the line.
CHAKRABARTI: As I said a little bit earlier, we did reach out to the Federal Trade Commission because this was an area that we wanted direct answers from the regulatory agency in charge of overseeing advertising in this country.
The FTC did not respond, but they do have some information on the FTC’s website in a section called Truth in Advertising, and I’m just going to read it to you. It says, quote: “When consumers see or hear an advertisement, whether it’s on the internet, radio or television, or anywhere else, federal law says that ad must be truthful, not misleading, and, when appropriate, backed by scientific evidence.”
By the way, that’s what you see in, say, vitamin supplement advertising. They have to use specific words like “support,” not “cure.” And they have to say that none of the claims have been verified by scientific research.
They have to say those things.
Now, the FTC’s website goes on to say: “The FTC enforces these truth-in-advertising laws, and it applies the same standards no matter where the ad appears, in newspapers and magazines, online, in the mail, or on billboards or buses.”
This is why we reached out to the FTC to ask them, if this is the case, why are millions, if not billions of scam ads showing up on people’s phones and computers every day?
Again, the FTC did not respond to our requests, but we got a partial answer from Rob Leathern, who I said was formerly from Google and Meta, and now at collectivemetrics.org, and here’s what he said.
LEATHERN: Certainly, there’s state level laws and federal laws around deceptive advertising and so on. But for these companies, there’s also the ability to lean on section 230 of the Communications Decency Act to just say okay, you told us about this content.
We took it down. There’s room to have a discussion about whether there should be a carve out from 230 for paid speech. So paid ads being an example. Where perhaps a greater set of things that platforms would be required to do to, continue to enjoy some of the protections that those laws afford.
CHAKRABARTI: Jeff, what do you think about that? And I specifically want to turn to you on this, because you covered Meta for so long. Every time Section 230 comes up, it’s exactly as Rob says: the platforms say that through Section 230, they’re protected from a lot of the application of federal law.
HORWITZ: Yeah, this is pretty standard and long standing. One example, I think from last year: in a lawsuit over scam impersonation ads brought by an Australian billionaire, Meta responded to the case by saying that it did not in fact owe its users, including the plaintiff, any duty to remove scam ads.
That wasn’t even a responsibility that it had. And that’s, I think, been the standard line from the companies. It is, I think, a little conceptually difficult sometimes to say, hey, wait a second, you’re taking money from someone to promote their content, and you have no responsibility whatsoever for that.
So there are some legal cases that are testing those boundaries, right? That Australian billionaire’s case has not been thrown out, despite Meta’s best efforts. But we haven’t really dealt with internet regulation in this country for a very long time.
Like 230 dates back to the era of Prodigy bulletin boards. Shall we say, there are definitely some parties that would love to see it updated.
CHAKRABARTI: I just feel like I just dated myself by knowing what you meant when you said Prodigy bulletin boards. Okay.
One thing about Rob Leathern, who I have one more clip from here, is that when he was at Meta he actually worked directly on dealing with scam ads. I believe he was one of the leads of the business integrity unit, which was tasked with preventing scammers and other bad actors from exploiting Meta’s platforms.
And again, now he works at collectivemetrics.org. And the last comment I wanna play from him: he told us that he thinks the FTC, no matter what their website says, has not been paying close attention to digital ads.
LEATHERN: My perception of the FTC when it comes to online advertising has been that they have not been very active in the last few years.
Some of the federal enforcement on the ad side has really been lacking, so I think all companies who are selling ads have gotten more and more lax over the last many years for that reason.
CHAKRABARTI: Sandeep Abraham, let me ask you this. I think it was also under Rob Leathern’s tenure that things like verification for political advertising were introduced at Meta/Facebook.
That was a major change then, but it seems like a sensible one now for general advertising as well. What do you think about that? And are there other things the platforms can do in lieu of greater regulation?
ABRAHAM: Yeah. In an ideal world, yes, you should verify every advertiser behind every ad, but as Jeff’s reporting pointed out, that’s 15 billion ads shown a day.
I also want to make a sharp point here. Meta, Google, these companies are not monoliths. They’re organizations of a hundred thousand plus employees with very different teams. The trust and safety teams there work very hard, within the constraints set by leadership and by other parts of the business.
So just thinking from an ops standpoint, yes, validating and verifying advertisers at that scale would be a huge undertaking. Content moderators are already pretty strained with a bunch of other things. Computationally, yes, you could use AI to verify and check these things, and I’m not saying it can’t be incorporated. So to answer your question, it’s not impossible, but it would take a big lift to expand political ad verification to everything else as well.
CHAKRABARTI: Sandeep, let me be frank, is that lift convincing Mark Zuckerberg that it’s worth doing? Because that was the lift with verification for political ads. He was resistant to it for a long time.
ABRAHAM: I can’t speak for Mark.
I never met the guy, but —
CHAKRABARTI: You were talking, though, correctly, about the hundreds of thousands of people that work on this stuff in different units, and that they’re constrained by leadership, is what you said.
ABRAHAM: Different leadership. You have different VPs and directors all over the company. But yes, to that point, that is part of the big lift: making it a legally binding thing. For example, under the Bank Secrecy Act, I think for every violation of fraudulent profiting, they can be fined $10,000 per violation. If you applied that to social media, to tech, that would be convincing enough to create a new effort around this.
CHAKRABARTI: Yeah. I’ve only got about 45 seconds left. Jeff, I just wanted to give you the last word on your thoughts on what more could be done.
HORWITZ: I think the documents I’ve seen from inside Meta do demonstrate that there are a lot of levers that could be pulled, right? If the company wanted to reduce scam ads by a designated percentage, let’s say 10, 15, 20%, they could do that very quickly. There would be tradeoffs with their own revenue.
It would cost them money. But this is doable internally. The question is just where the thresholds are set.
The first draft of this transcript was created by Descript, an AI transcription tool. An On Point producer then thoroughly reviewed, corrected, and reformatted the transcript before publication. The use of this AI tool creates the capacity to provide these transcripts.