Disinformation is good business. Spreading lies and outrage tends to be profitable, thanks to programmatic advertising, which cares only about traffic, not truth, and to funding by state actors like Russia, which pour money into narratives that undermine democracies. Supporting truth is a tougher commercial prospect, but today’s guest is giving it a credible run.
Gordon Crovitz is the co-founder, with Steven Brill, of NewsGuard – a five-year-old for-profit enterprise that rates news sites for editorial integrity, helping news consumers and advertisers avoid sites that spread toxic disinformation. Crovitz comes to NewsGuard after a distinguished career as a journalist and media entrepreneur. He was publisher of the Wall Street Journal, as well as an award-winning columnist for that paper.
Before NewsGuard, he founded or co-founded Factiva and Journalism Online, so he’s no stranger to media startups. Gordon and Eric discuss NewsGuard’s business model, his decision to take up the cause of countering disinformation, the role of advertising in funding lies, the explosion of artificial intelligence in the information ecosystem, and what seekers of truth can do about it.
Topics
00:00 Introduction and Background
00:23 The Need for Trustworthy Journalism
01:12 The Problem of Identifying News Sources
02:10 The Role of Advertising in Misinformation
03:40 NewsGuard as a For-Profit Model
04:39 NewsGuard’s Data and Reports
06:34 Ads Supporting Misinformation on Social Media
08:23 News Reliability Ratings and Misinformation Fingerprints
09:33 Examples of News Ratings
12:15 The Importance of Misinformation Fingerprints
16:55 Trust in Media and Political Bias
20:29 Challenges in Steering Ads to Reliable Sources
24:57 The State of Professional Journalism
29:56 Losing the Battle Against Misinformation
31:39 The Need for Regulation and Disclosure
35:50 Approaching Social Media Regulation
38:59 Gordon Crovitz’s News Consumption Habits
44:24 Conclusion
Transcript
Eric Schurenberg (00:01.378)
Gordon, welcome to In Reality.
Gordon Crovitz (00:03.51)
Eric, nice to be on with you.
Eric Schurenberg (00:06.026)
You’ve had a highly distinguished career before NewsGuard, including the role of publisher of the Wall Street Journal, which would be a pinnacle of anyone’s career. What drove you to join forces with Steve Brill and attack misinformation for this phase of your career?
Gordon Crovitz (00:24.668)
You know, in my several decades as a journalist, including at the Wall Street Journal, I thought that I was in the news business, or maybe the information business. Looking back on it, I might’ve actually been in the trust business. A main issue for quality journalism is, is it trustworthy? And that’s the reason journalists cite multiple sources, they separate news and opinion, they don’t knowingly publish false claims, and if they make a mistake, they correct it. And what we had noticed when we started NewsGuard five years ago was that, at that time, it had become impossible, really, for news consumers to have any idea who was feeding them the news in their Facebook feeds or in a search result.
I say for people of a certain age, they’ll remember before the internet, somebody might go to a newsstand and say, I’d like a copy of the Philadelphia Inquirer, I don’t want a copy of the National Enquirer. For your younger listeners, the National Enquirer remains a sort of gossip tabloid that occasionally discovers aliens on the moon. That’s very different from a local newspaper like the Philadelphia Inquirer. So the first thing that we set out to do was to give news consumers more information about sources. We’ve now rated 30,000 news sources, that’s websites, podcasts, TV, OTT, et cetera. Many of them get high trust ratings from us, but a lot of them don’t. Those ratings are available through platforms like Microsoft, and they’re also used by brands, by advertisers, in a way that we had never anticipated. We hadn’t really understood how much advertising had changed. But when you see an ad online now, it’s rare that the advertiser has said, I want to advertise on this website or that website. On average, ads run on more than 40,000 websites. And they’re placed there programmatically, automatically, by computers, by algorithms.
And as a result, we estimate there’s $2.6 billion a year going to misinformation websites, Russian disinformation websites, conspiracy sites, healthcare hoax sites. It’s a big business, and it helps explain why there’s so much funding for misinformation. Brands, CMOs, have no idea. Even the ad agencies don’t have any idea where the ads run. And that’s become a significant cause of all the misinformation that is now out there.
Eric Schurenberg (03:14.806)
That is certainly a crazy evolution of the business. When I was starting in my career, advertising sales was a relationship business, and it was really built around trust, plus a lot of three-martini lunches, I’m sure. A number of the guests on this podcast come from the not-for-profit university world; they are researchers in misinformation or polarization. There are also some not-for-profits who’ve appeared on In Reality who are in the trust-rating business for news sites, and some that are also attempting to interfere in that transaction between programmatic ads and misinformation sites. But you are unique in being a for-profit model. Why did you go that route?
Gordon Crovitz (04:10.384)
You know, we were thinking, what was it going to take to have companies like Microsoft distribute our ratings, to have the biggest advertisers rely on us, the ad agencies? We have contracts with some of the biggest companies in the world. And I had been on the business side myself, and Steve Brill, of course, had founded American Lawyer and Court TV and Clear, the registered traveler business.
One thing that we knew was that companies like that want to be able to rely permanently on a supplier, and we thought the best way to do that would be to operate as a for-profit. We also knew we were going to have to recruit talent and give them proper incentives. But you’re right, we are very much a mission-driven company.
We license a lot of data to the researchers and the academics and the others who are focused on this misinformation problem. Our data is typically used to measure, is there more or less misinformation on a particular platform over a particular time? It’s also used for helping to figure out what interventions are most effective. We work with dozens of researchers in that way. And we do our own reports. We did two reports just last week.
One was, we had our team review the programmatic advertising, the automatically placed advertising, and we found 349 brands, blue-chip brands, advertising on websites publishing false claims about the Israel-Hamas war. Terribly destructive claims, that it’s a false flag by Israel, all sorts of false claims, and these are blue-chip companies that have no idea that their ads are supporting these sites. And similarly, we issued a report on ads supporting accounts on Twitter, now known as X. We took what we call the 10 super-spreaders of Israel-Hamas misinformation on X. These are accounts that are highly popular. And we found that there were 200 different ads from 86 blue-chip companies supporting these crazy claims on X. And again, the brands have no idea. Although at this point, I think brands are beginning to understand that X isn’t going to protect them. Their current brand safety companies are obviously not protecting them. They need some other tools, and we’re very happy to be able to provide those. And we in particular encourage advertisers to think about using inclusion lists. In other words, take the time to think through what kinds of sites you really want your ads to be on. And we have thousands of sites that are high-quality news sites. They could be serving local audiences, Black audiences, Hispanic, Asian, LGBTQ+ communities, and those sites are badly in need of ad revenue. They have just the demographic that many advertisers are looking for, and it turns out that they are more efficient to buy as a brand than appearing on all the misinformation sites, which have higher ad rates because there’s more demand for them.
Eric Schurenberg (07:55.886)
Uh-huh. While we’re talking about the databases that NewsGuard maintains, let’s ground the conversation in what it is that NewsGuard actually compiles, in addition to the reports you’ve just mentioned. The company, if I’m framing this correctly, is built around two databases: the news reliability ratings and the misinformation fingerprints. Could you explain what they are?
Gordon Crovitz (08:24.816)
The reliability ratings are assessments using, in the case of websites, nine basic apolitical criteria of journalistic practice. Does the website have a corrections policy? Does it disclose ownership? That sort of thing. And there’s not a liberal or a conservative way to disclose ownership. You either do or you do not. And we’ve now rated more than 30,000 news sources.
Eric Schurenberg (08:37.73)
Mm-hmm.
Gordon Crovitz (08:54.044)
Many of the websites get high scores from us, right-wing sites, left-wing sites. But there are many in the United States, almost 40%, that get such a low rating from us that our ratings for consumers say, proceed with caution. And we give point scores, and we give what we call nutrition labels to consumers that they can read to understand the nature of the source that they’re seeing in their Facebook feed or in a search result or wherever they’re encountering news and information, which I think is really…
Eric Schurenberg (09:32.898)
Gordon, could I stop you right there and just ask about some popular and well-known news sources? What are the ratings of, say, Fox News versus CNN versus, say, the Wall Street Journal?
Gordon Crovitz (09:50.508)
So in the case of Fox News, it gets what we would call a modest score, not a super high score, but its score is higher than the score of the website of MSNBC, for example. Fox News, one of our criteria is if the source has a point of view, does it disclose it to its readers or viewers?
And Fox News not too long ago actually added an about section explaining its conservative point of view, which was the first time they had done that. MSNBC still does not. Sites like the Wall Street Journal and Reuters and your old home Fast Company are among the many sites that get perfect scores from us. There are, yes, there are, however,
Eric Schurenberg (10:45.954)
Glad to hear it.
Gordon Crovitz (10:49.452)
sites like Infowars that your listeners may recall. There are sites like naturalnews.com, which is a network of hundreds of websites publishing misinformation about healthcare, everything from measles vaccines to COVID vaccines to colloidal silver being a cure-all for whatever ails you. And there are of course an enormous number of websites peddling Russian disinformation, Chinese disinformation, Iranian disinformation. In the case of Russian disinformation, there are a couple that your listeners may be pretty familiar with, RT, formerly Russia Today, and Sputnik News; those are both owned and operated by the government of Vladimir Putin. But our analysts who are expert in that area have identified almost 400 of what we call malign actors: websites publishing Russian disinformation, YouTube channels, X accounts. And virtually all of the new false claims by the Russian disinformation operation appear first on one of those several hundred sources. So that led us to the second database, Eric, which is our misinformation fingerprints. We were asked actually by a unit of the Pentagon if we had a catalog of false claims. The analysts were particularly interested in false claims and what they call hostile information operations by Russia, China, and Iran. And we realized that in the course of doing our ratings, we did have quite a good catalog of false claims. One of our criteria is, does this source repeatedly publish false content? So this misinformation fingerprints database now has tens of thousands of examples of false claims across multiple languages. And for each of them, we state the claim, we debunk the claim, we cite sources, and this comes in a human-readable and a machine-readable form. The machine-readable form includes Boolean search terms and hashtags and other ways for machine-learning tools to be able to find, for example…
Gordon Crovitz (13:12.608)
all examples of that false claim on a particular social media platform or on the open web. And that database, it turns out, is extremely handy for the generative AI models that, as you know, tend to hallucinate. If people prompt them with topics in the news, the odds are really high that they’ll give a false explanation of topics in the news. It’s just in the nature of the generative AI models. Sam Altman of OpenAI famously said, don’t trust us on factual matters. But they’re working on it. They know there’s a problem. And Microsoft actually uses our misinformation fingerprints to create what the engineers call guardrails,
Eric Schurenberg (13:53.814)
Mm-hmm.
Gordon Crovitz (14:08.836)
No matter what the prompt is, if you see this claim, mitigate it in some way in the response. So, for example, one piece of Russian disinformation is the claim that there are NATO troops fighting in Ukraine.
One of the uses of our misinformation fingerprints database turns out to be for the new generative AI models. People will do a prompt and response on ChatGPT or Bard or any of these new chatbots.
Eric Schurenberg (14:56.823)
Mm-hmm.
Gordon Crovitz (15:06.316)
And when it comes to topics in the news, the odds are really high that somebody will get bad information. Even Sam Altman of OpenAI has said, don’t trust us on topics in the news or on factual matters. So AI models like Microsoft’s are now trained with our misinformation fingerprints, these false claims in the news, so that whatever the machine responds with, it won’t simply repeat a false claim. It’ll take steps to mitigate it. It’ll often say, this is among the false claims that NewsGuard has identified, and here’s the claim, and here’s the actual situation, and here are the citations. And to us, that’s a pretty good reader experience: we get context and accurate information. So those are the two databases, the reliability ratings for news sources and the misinformation fingerprints catalog of all of the most significant false claims spreading online.
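A concrete way to picture what Crovitz describes: a fingerprint record pairs a false claim with a debunk, citations, and machine-readable search terms, and a guardrail checks a model’s draft response against those records before it goes out. The sketch below is a minimal illustration in Python; the field names, the regex standing in for Boolean search terms, and the mitigation text are invented for the example, not NewsGuard’s actual schema or Microsoft’s implementation.

```python
import re

# Hypothetical fingerprint records: claim, debunk, citations, and a crude
# keyword pattern standing in for the Boolean search terms described above.
FINGERPRINTS = [
    {
        "claim": "NATO troops are fighting in Ukraine",
        "debunk": "NATO has said its forces are not engaged in combat in Ukraine.",
        "sources": ["https://www.nato.int"],  # illustrative citation
        "pattern": re.compile(
            r"\bNATO\b.*\b(troops|soldiers|forces)\b.*\bUkraine\b", re.I
        ),
    },
]

def apply_guardrail(draft_response: str) -> str:
    """If a draft chatbot response matches a known false claim,
    append context and citations instead of repeating it unchallenged."""
    for fp in FINGERPRINTS:
        if fp["pattern"].search(draft_response):
            context = (
                "\n\nNote: this touches on a claim identified as false. "
                f"Claim: {fp['claim']}. {fp['debunk']} "
                f"Sources: {', '.join(fp['sources'])}"
            )
            return draft_response + context
    return draft_response

print(apply_guardrail("Reports say NATO troops are secretly fighting in Ukraine."))
```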
Eric Schurenberg (16:11.018)
On the misinformation fingerprints, I am aware of a considerable body of research suggesting that respect for factuality, or belief in conspiracy theories, is asymmetric across political philosophies. So, to put it broadly, Republicans are more likely to hold ideas, or to frequent news sources, that I’m guessing NewsGuard would regard as unreliable.
Is that true? And if it is, how do you respond to accusations that NewsGuard, in its misinformation fingerprinting, is just another mouthpiece for the progressive media elite?
Gordon Crovitz (16:56.972)
Yeah, so because our work is apolitical, we have right-wing sites that get very high scores from us and left-wing sites that get very low scores, and vice versa. You know, The Daily Caller has a higher rating from us than the Daily Beast. The Daily Wire has a higher rating than Daily Kos. As we discussed earlier, Fox News’s website has a higher rating than MSNBC’s. So our ratings are apolitical.
Eric Schurenberg (17:06.7)
Hmm?
Gordon Crovitz (17:26.884)
You asked the question, though, in a somewhat different way, which is about trust. And you’re right that trust in media is low across all segments, but currently it is particularly low among conservatives. That’s not true in every case, though. The most dominant form of misinformation on the internet actually is related to healthcare. And one of the most popular sites is the one run by…
Eric Schurenberg (17:31.534)
Mm-hmm.
Eric Schurenberg (17:43.171)
Mm-hmm.
Gordon Crovitz (17:57.104)
Robert Kennedy Jr., who is not a conservative. And healthcare misinformation, those issues are bipartisan. So I think the news industry has work to do. We all have work to do, but it all comes down to trust. And what we have found, and there’s now a lot of independent research that backs this up…
Eric Schurenberg (17:59.776)
Mm-hmm.
Gordon Crovitz (18:26.8)
What people really want is more information about the sources that they’re seeing online.
If they are given information, by a diverse, apolitical set of journalists, about the journalistic practices of a journalistic enterprise, it gives them more information than they had before. And there’s a lot of research that says people don’t want to engage with and spread false claims. Nobody wants to do that. So the more access people have to the nature of the sources they’re seeing online, and the more ability they have to check claims they may have heard about somewhere and find out whether they’re really accurate, the more confidence they’ll have in trusted news sites and the less confidence they’ll have in untrustworthy sources of news.
Eric Schurenberg (19:27.658)
I think it is probably a very wise policy to activate people’s desire not to be taken for a ride, not to be anybody’s fool, and not to be a sucker, regardless of political point of view.
Gordon Crovitz (19:43.952)
And it’s in the nature of our work. We don’t censor anything. You know, we don’t remove anything. We simply give consumers and brands more information about the sources and about claims in the news and allow them to make their own decisions.
Eric Schurenberg (20:29)
Earlier you mentioned a stunning statistic about $2.6 billion in advertising going to support misinformation, and then a comparably stunning figure more recently about the number of blue-chip brands that are supporting misinformation around the Gaza conflict. Why is it so hard for those brands, which I would say with confidence would be horrified to recognize that they’re contributing to inflammation around these controversial issues, to steer ads to reliable sources?
Gordon Crovitz (20:41.956)
You know, most of the advertising in the world now is what’s called programmatic. It used to be that a CEO and CMO would say, let’s advertise on this site and that site, and let’s not advertise on this site and that site. That’s really not the way it works anymore. The way it works is a brand will say, I’d like to reach this particular kind of audience.
Eric Schurenberg (21:00.395)
Mm-hmm.
Gordon Crovitz (21:09.004)
And the algorithms then place ads regardless of the nature of the source. The trade association for the largest advertisers, the ANA, recently issued a report saying the average programmatic campaign appears on more than 40,000 different websites. So we have a very common experience, which is when we’re speaking to a CEO and we say,
We know you don’t know this, but your ads are supporting COVID hoax sites, conspiracy sites, sites publishing falsehoods about the Israel-Hamas war. The CEO will say, well, that can’t be. Why would I do that? Why would my CMO do that? And very typically, the CEO will bring the CMO into a meeting.
And the CEO will ask the CMO, don’t you look at all the websites where our ads appear? Don’t you know where our ads are appearing? And the CMO says, honestly, boss, I don’t. We don’t know. They’re on 40,000 different sites. We really don’t know. And at that point, the CEO says, you mean we could be supporting Russian disinformation sites and sites publishing false claims about Israel? The CMO says, well, you know, we do have brand safety tools that keep our ads off the pornography sites, and some other tools, but we don’t really have one that works for misinformation. That was one of the reasons we started NewsGuard: we saw this was an enormous problem, and one that CEOs and CMOs really didn’t have a solution to. They thought that they were being protected. There are big brand safety companies with names like DoubleVerify and IAS, but they operate in a way that made sense for the earlier generation of the internet. They’re very good at keeping ads off pornography sites, for example, because they can use machine learning, AI, to spot a pornographic web page. That’s pretty easy. But spotting a false claim about the Israel-Hamas war or about COVID, that requires human beings, and it requires a different approach. So we are seeing an increasing number of CEOs and CMOs taking steps to make sure that their ads are not supporting these misinformation sources. On the other hand, it’s still $2.6 billion a year going to these sites, helping to explain why there’s so much misinformation on the internet. Let me put it this way, Eric. If the CEOs and CMOs of the top 300 companies took steps to make sure they were no longer supporting misinformation websites, there would be a lot fewer misinformation websites. That’s the business model for so many of them. So this is an area where CEOs can make a big difference, and we’re beginning to see CEOs understand this issue. We hope that more and more of them will take steps to be part of the solution. That’s a combination of making sure their ads don’t support misinformation sites, but also using inclusion lists, so that they’re supporting high-quality news sites of all kinds, global brands, business brands, but also sites serving local communities, Black communities, Hispanic, Asian, LGBTQ+ communities. Those news sites are badly in need of revenue. And as it turns out, we have case studies showing that advertising on sites like that is actually more effective, more efficient, lower cost, with better results, than advertising on Infowars and Russian disinformation sites.
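To make the inclusion-list idea concrete: in programmatic buying, software decides per impression whether to bid. Below is a minimal sketch, in Python, of a bid filter that only buys impressions on an explicit allow list; the domains, the bid-request shape (loosely OpenRTB-like), and the function are illustrative assumptions, not any ad platform’s actual API.

```python
# Hypothetical inclusion-list filter for a programmatic ad buyer.
# Bid only when the impression's domain is on a curated allow list,
# rather than trying to block known-bad domains after the fact.

INCLUSION_LIST = {  # curated high-quality sites; illustrative entries
    "reuters.com",
    "localnewsexample.com",
}

def should_bid(bid_request: dict) -> bool:
    """Return True only for impressions on explicitly approved sites."""
    domain = bid_request.get("site", {}).get("domain", "")
    return domain in INCLUSION_LIST

# Example: one request on an approved site, one on an unknown site.
for req in [{"site": {"domain": "reuters.com"}},
            {"site": {"domain": "random-misinfo-blog.com"}}]:
    print(req["site"]["domain"], "->", "bid" if should_bid(req) else "skip")
```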
Eric Schurenberg (25:19.587)
Mm-hmm.
Eric Schurenberg (25:25.042)
Well, it would certainly be a boon to the information ecosystem if more dollars were flowing to high-integrity sites. Up to this point, our conversation has been largely about malign actors: people who are creating intentionally false narratives, either for profit or for ideology, because they’re aligned with a state adversary. But I’d like to talk for a moment about professional journalism.
As you noted, and as everybody knows, the profession has lost a lot of trust with its audiences, and the business model has been challenged, which maybe has given once-trustworthy organizations an incentive to do things they probably are not proud of, the clickbait and so forth, or some of the things you noted before about well-known news sites that are low-rated by NewsGuard. Over the arc of time that you’ve been looking at professional journalism, have you seen it trend more trustworthy or less trustworthy in response to business pressures?
Gordon Crovitz (26:42.296)
I think among quality journalism, there is a recognition that trust has become a big issue. And you’ll see trust as part of the marketing campaigns now for several news organizations. When I was publisher of the Wall Street Journal, and this goes back some years, before the misinformation problem was as serious as it is now, the bonus for the news department depended in large part on how well the Journal did in what was then an annual survey by Pew Research that asked how trusted different brands are by liberals and conservatives, Democrats and Republicans. And the bonus was based on how well the Journal was perceived on all sides. For me, that was kind of a stand-in to underscore what had always been, and remains, I think, for the Wall Street Journal, a keystone: fairness and accuracy and independence and all the other criteria that are so important are the way to earn trust. We have found that about 20% of the news sources we’ve rated have, after being rated by our analysts and engaging with us, taken some steps to improve their ratings. We think that’s great.
Gordon Crovitz (28:08.088)
The digital platforms also rate news sites, in a secret way. Publishers don’t know what their rating is. The platforms say, we can’t tell you, because that would ruin our algorithm. We say we’re the opposite of an algorithm. We call for comment if it looks like you’re going to fail any of our criteria, and we’re really happy when people game our system and start disclosing ownership or taking other steps to improve their scores. The brands that have taken steps to improve their scores include The Times of London, Reuters, some big brands, which we’re of course delighted with. But there are quite a number of sites that have become popular that were started by people who never practiced as journalists. In some cases, we’ll say they lose points because they don’t have a corrections policy, or any corrections at all. And they’ll ask our analysts, well, what’s a corrections policy? And they’ll look around at highly rated sites and say, oh, we should do that too. And I think that will help over time. I think it will help reestablish trust among those sources that are doing journalism, as opposed to doing Russian disinformation or conspiracy theories.
Eric Schurenberg (29:35.066)
I’d like you to step back. You have a unique view of the information environment. Would you say that, not to maybe use too melodramatic a term, but what the heck, are we losing the battle against misinformation?
Gordon Crovitz (29:57.26)
Oh yes, we certainly are losing the battle against misinformation. There are so many different ways to measure that: engagement with low-trust websites, the popularity of false claims, and now AI-enhanced misinformation, where we’ve found hundreds of websites that are generated by AI. We actually keep a tracking center; we’ve found hundreds of them
Gordon Crovitz (30:27.068)
pushing out crazy stories. So yes, I think if you’re a news consumer, it’s actually gotten even harder to know whom to trust and to have a high degree of confidence that your information is trusted and reliable. I think that over time, the result will be people increasingly looking to rely on trustworthy sites. That’s why so many
websites now market themselves as trustworthy and give readers reasons why they are. I think that’s a good trend; it’s in the right direction. But if you’re asking whether we are losing the war on trust, and whether misinformation is increasingly popular, I regret to say things were bad and are getting worse. On the other hand, there are solutions in the market. There are things that media outlets can do, and there are things that rating services like ours can do, to help restore trust. But we’ve got an awfully long way to go.
Eric Schurenberg (31:39.15)
What scares you the most?
Gordon Crovitz (31:42.108)
I think what scares me the most is people having lost confidence that they understand what’s true and what’s not true, and losing trusted sources, no longer feeling as if they’re getting the news they need to make up their own minds about what they think. And a large amount of that, more than most people think, is the result of hostile foreign disinformation operations targeting the US and American allies. The Russians, the Chinese, the Iranians devote hundreds of millions, maybe billions, of dollars to their efforts. And they operate in multiple languages, of course, including English. They target different groups within the US and other democracies.
This is a significant part of their foreign policy. And we are still quite ill-equipped to protect our citizens against disinformation coming from Moscow, Beijing, and Tehran. You may recall, Eric, during the Cold War years, if you ever saw a Soviet publication in the US, it would have attached to it a disclosure saying that the source had to register with the Department of Justice under the Foreign Agents Registration Act, FARA as it’s called. That law dates from the 1930s, when Congress said to anybody distributing Nazi propaganda, and later Soviet communist propaganda: you can distribute that in the US, we’re not going to censor it, but it has to come with a disclosure that explains the nature of the source. That law is still on the books. It’s not been enforced by either party for years, but why shouldn’t it be? Why shouldn’t TikTok and YouTube and Google and others be required, whenever they’re publishing Russian disinformation or Chinese disinformation, to say, this is from Russia, this is from China, so that consumers will have some idea of who’s feeding them the news? We think that would be an appropriate form of regulation, to require that Americans get the information they need about hostile foreign information operations targeting them. For many years, RT, Russia Today, was among the top sources of news on YouTube in the United States. It was the first news source to get to a billion views. A representative of Google actually went on RT to praise them for, you know, being not just propaganda. The Silicon Valley platforms don’t always have the best news judgment, and that was clearly a terrible example of news judgment by Silicon Valley.
Gordon Crovitz (35:06.628)
But these foreign entities take advantage of our open internet. They spend a lot of money on it. They’re very good at it. And American citizens are largely defenseless because they don’t have access to information about who’s feeding them the news.
Eric Schurenberg (35:25.546)
So FARA is a law that’s been on the books for nearly 100 years, as you mentioned. Other regulations that directly address digital misinformation, like the Digital Services Act out of the EU, are also trying to address some of these same issues. Where would you like to see regulation that is targeted at the current information environment?
Gordon Crovitz (35:52.304)
Yeah, I think one of the unusual aspects of the internet is that the digital platforms are the only industry I know of that has been granted immunity from the known harms that it causes.
I feel partly to blame. In the 1990s, when the internet gained that immunity under what’s called Section 230 of the Communications Decency Act, I was writing columns for the Wall Street Journal, and I praised that idea. I said, the internet is too young to regulate; we don’t know how to regulate it; let it grow and thrive and let’s see. It’s now more than 25 years later, and we see the downside of excluding an industry from basic liability. The basic common law of tort liability does not apply to the digital platforms. We should not be surprised that if we immunize an industry from responsibility, that industry then behaves irresponsibly.
And I think looking back, it might have been the right thing to do at the beginning, to let the internet grow and see how it develops. But by now, we can see the harm that is caused without any of the usual constraints. You know, we tell chemical companies they can’t pollute rivers. We tell oil shippers that they should have double-hulled ships, and if they spill oil, they’re going to be held liable. But when it comes to the digital platforms, we’ve said, do what you want. You’re not going to be held liable. Cause harm. Run misinformation. Get ads from RT. Don’t worry about it. It’s all good. That, I think, is a real mistake. And I think that there need to be reforms there to restore the basic common-law responsibilities of every other industry. And that would make a
Gordon Crovitz (38:01.5)
difference too. I think a lot of the solutions really are about disclosure and transparency, and about empowering news consumers and others with information about the nature of the sources and the claims that they’re seeing. I don’t think regulation is the solution to the problem, but I do think that we have to be more careful about immunizing an entire industry from the basic common law that applies to everybody else.
Eric Schurenberg (38:30.634)
Are you suggesting, Gordon, that the way to approach, say, social media platforms where an awful lot of this information gets shared and where there is, as you said, explicit exemption from some of the ordinary rules that govern other industries, that the way to approach it is sort of like a consumer safety issue rather than censorship or identifying misinformation?
Gordon Crovitz (39:00.368)
Yes, I think that’s well put. You know, the common law basically says, if you are creating a known harm, a predictable harm to others, you need to be held liable for it. Every industry is used to that. It’s so basic, nobody questions it. But the digital platforms, including the social media platforms, and now, by the way, including the generative AI models, are acting as if they’re immune. They may not be, but they’re acting as if they are. That creates such terrible incentives for those companies. Again, imagine a chemical company that was never going to be held liable for its spills, or an oil shipping company that was never going to be held liable for oil spills. They’d take all kinds of risks. It wouldn’t be their problem. That’s where we are with social media. And I think restoring some of that basic protection for consumers would go a long way. Not that it should lead to censorship, I don’t think it should, but it should lead to more and more disclosure as a basic step that can be taken.
There is a proposal that Francis Fukuyama, the political scientist, has been advocating, which is quite practical and, I think, wise: that the digital platforms, like the social media companies, should make it very easy for what he calls middleware solutions. That is software that sits in between the digital platform and the consumer. And his idea is that rating services like ours, or others, could be chosen by a news consumer, so that if they’re on Facebook, for example,
they could tick a box and get ratings of news sources from us or others. And the platforms, in order to meet a test of acting responsibly and trying to minimize harm, should be required to open up to third parties through middleware. The UK government has considered a regulation like that, and we’re a signatory to the European Commission’s Code of Practice on Disinformation. That’s one of the most practical solutions that there could be in the industry. And I think over time, that kind of reform will be necessary.
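Fukuyama’s middleware idea can be pictured as a thin layer, chosen by the user, that annotates each item in a feed with a trust label before display. Below is a minimal sketch in Python, with an invented ratings table and threshold; real middleware would query the user’s chosen rating service through a platform-provided hook.

```python
# A minimal middleware sketch: annotate feed items with a trust label from a
# user-chosen rating service before display. The ratings table and the
# 60-point threshold are invented for illustration.
from urllib.parse import urlparse

RATINGS = {  # domain -> 0-100 score from the user's chosen rating service
    "reuters.com": 100,
    "example-hoax-site.com": 12,
}

def annotate(feed_items: list[dict]) -> list[dict]:
    """Attach a trust label to each feed item based on its source domain."""
    for item in feed_items:
        domain = urlparse(item["url"]).netloc.removeprefix("www.")
        score = RATINGS.get(domain)
        if score is None:
            item["label"] = "unrated source"
        elif score >= 60:
            item["label"] = f"generally reliable ({score}/100)"
        else:
            item["label"] = f"proceed with caution ({score}/100)"
    return feed_items

feed = [{"url": "https://www.reuters.com/world/some-story"},
        {"url": "https://example-hoax-site.com/aliens"}]
for item in annotate(feed):
    print(item["url"], "->", item["label"])
```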
Eric Schurenberg
Well, if it’s not too late. We do have a number of important elections coming up around the world, including in this country next year, and we will have to see how misinformation from abroad and at home affects those elections. Let me ask you one final question, Gordon. Your own newsfeed: what do you do to make sure you’re getting the straight poop?
Gordon Crovitz (42:08.272)
Well, I’ll answer that question in this way, Eric. My oldest son is a third-year in college. And I encouraged him when he was quite young to read the Wall Street Journal. I understood its qualities and its focus on trying to be as straight as possible in the news coverage and as opinionated as possible in the opinion section and delineating the two very carefully.
And I think, you know, if you get into the habit of focusing on the nature of the brand, of the source that’s feeding you the news, then even younger people, who have so many choices about where they’re going to turn for their news, can focus on the nature of the source. Unfortunately, of course, what we see from the data is that TikTok has become a significant source of news and information for young people, even though almost none of what’s on TikTok is from a news brand at all. It’s from people spouting off. We’ve done research at NewsGuard that found that if somebody is looking for news and information on either TikTok or YouTube and searches for a topic in the news, 20% of the time they get misinformation, which is a pretty shocking figure, and it helps explain why some people believe what they believe. And I think there’s a real responsibility that companies like TikTok and YouTube, which is owned by Google, should have to do a better job of giving their consumers information about what they’re seeing in their feed.
Eric Schurenberg (44:03.886)
All right, Gordon, well, let us leave it there. The Wall Street Journal over TikTok. I think that is pretty safe advice for any news consumer. Thank you very much, Gordon, for this conversation. It was really interesting. And good luck with the good work you do at NewsGuard.
Gordon Crovitz (44:14.425)
Thank you, Eric.
Gordon Crovitz (44:24.688)
Thank you very much and thank you so much for taking on this topic through this podcast.
Created & produced by: Podcast Partners / Published: Dec 19 2023