Is Truth Dead?

With Steven Brill - Co-Founder of NewsGuard


Episode description:

Welcome to In Reality, the podcast about truth, disinformation and the media. I’m your host, Eric Schurenberg, a long-time journalist and media executive, now the founder of the Alliance for Trust in Media.

On previous episodes, we discussed how you can distinguish reliable news online from unreliable, story by story. An obvious shortcut is simply to read or view stories only from places that you know in advance to be reliable. But these days, how do you know who is reliable?

Today’s guest is long-time journalist, prolific media entrepreneur and author Steven Brill, whose six-year-old company, NewsGuard, helps readers and advertisers identify trustworthy newsrooms, based on the newsrooms’ adherence to sound journalistic practices. In addition to starting media brands like American Lawyer and Brill’s Content, Brill has written numerous books on American culture—but the one that relates the most to NewsGuard is his most recent, The Death of Truth.

Eric gets Brill’s insights about how social media swamped truth with the unwitting help of respected advertisers and well-intentioned legislators; they talk about his proposed solutions to this mess; and also why non-partisan NewsGuard has suddenly, alarmingly, found itself in the crosshairs of the new Trump administration.

The Death of Truth by Steven Brill

NewsGuard

 

Transcript

 

Eric Schurenberg (00:01.956)
Steven Brill, welcome to In Reality.

Steven Brill (00:04.938)
Thank you. Glad to be here.

Eric Schurenberg (00:06.98)
Steve, you’re an entrepreneur, a media entrepreneur, founder of American Lawyer and Brill’s Content, author of half a dozen books, and an observer and critic of American dysfunction and the media industry in particular. Most recently, you’re co-founder, with Gordon Crovitz, of NewsGuard. What brings you to In Reality is your book, The Death of Truth. But I want to start with NewsGuard, which is a character that weaves throughout the book. So for those who don’t know, explain what NewsGuard does.

Steven Brill (00:42.782)
What NewsGuard does is use journalists, human intelligence, not artificial intelligence, to rate the overall reliability and credibility of news sources online and their associated social media platforms. We use nine strictly apolitical journalistic standards and assign the 10,000 or so sources that we’ve rated in the United States, Canada, Europe, Australia and New Zealand a score of zero to 100 points based on how they conform to the nine criteria. And what this allows us to do is give readers, advertisers, and in Europe some national security officials, a sense of the difference between Russian disinformation, for example, and legitimate local news sources. That might sound bizarre, that there wouldn’t be a difference or a discernible difference between Russian disinformation and local news, but for the fact that just in the last year, there’s one guy…

…who is based in Moscow, who’s a fugitive from justice in the United States, who has set up, for example, 170 different news sites, posing as local news sites in the United States, that are promoting Russian disinformation, whether it’s about healthcare or Ukraine or the elections or the wildfires in California, you name it.

The goal is to disorient people, create all kinds of division. And so there’s that. And then you have healthcare hoax sites that are online promoting phony nutrition supplements or drugs that will supposedly be better than the COVID vaccine for dealing with COVID. We all agree that it’s a cesspool out there.

Steven Brill (03:07.754)
What NewsGuard tries to do is make some sense of the cesspool.

Eric Schurenberg (03:13.73)
The person you’re referring to is John Dougan, who is basically a tool of Russian disinformation, and the kinds of sites he’s putting up that masquerade as high-integrity local news sites are referred to as pink slime. Looking back at your history, you’ve exposed misbehavior in the healthcare system and in government. Of all the things that you could have done, all the companies that you could have started, you’re very entrepreneurial, why was misinformation the thing that you focused on with NewsGuard?

Steven Brill (04:00.214)
Well, it’s a major problem, and it’s a problem that kind of supersedes all the other issues, because if, in a democracy, people don’t know what to believe, if they can’t have debates over the same set of facts, the democracy can’t function, because nobody will trust anyone, nobody will trust anything. We’ve lost trust in our institutions. The very idea that healthcare, that the COVID pandemic, became a political issue is really based on the notion that people have been guided or misguided not to believe in the same set of facts. In my book, The Death of Truth, I wrote about…

…a doctor I interviewed who was treating a COVID patient who didn’t believe in the pandemic, who thought it was a hoax, who thought it was a government conspiracy to take control over people. And in the last minutes of his life, the patient took off the respirator he was wearing and told the doctor this whole COVID thing is BS, and then he died…

…and he’d refused to get a vaccine. You can’t imagine a problem more basic than that. If you don’t trust doctors, you don’t trust healthcare institutions. Now, there are a lot of problems with the healthcare system. I devoted a good portion of my journalistic life to exposing those problems. The healthcare system in the United States…

Eric Schurenberg (05:48.45)
Yes. Yes.

Steven Brill (05:55.308)
…certainly opened itself up to a lot of the cynicism and disbelief, but the online social media platforms really, maybe inadvertently, exploited that by allowing people not to believe in their own doctors, just the way they don’t believe in, you know, poll workers who are counting votes or judges who are deciding cases.

When that happens, everything begins to crumble. And I think that’s what we’ve seen in the United States and we’ve seen it around the world.

Eric Schurenberg (06:34.764)
Yes, one of the things that is incredibly frustrating to those of us who agree with you, Steve, is the fact that there are powerful people who view the efforts of organizations like NewsGuard, or some of the other crusaders on behalf of truth that you mentioned in the book, like Renee DiResta of the Stanford Internet Observatory, as being on the side of censorship, and that

the NewsGuard standards of trustworthy journalism, for example, are a form of squelching free expression, or that pointing out misinformation about public health is a form of keeping conservative voices from being expressed. And most recently, this point of view was expressed by Trump’s nominee to head the FCC, Brendan Carr, who accused NewsGuard of being in on some kind of censorship scheme. What’s your reaction to that?

Steven Brill (07:41.004)
Well, first of all, censorship by definition is only something that is done and can be done by the government. If you decide that this interview is not something that you want to give to the people who follow your podcast because I say something that is stupid or incorrect or irresponsible, and you decide to edit it out or to edit me out…

…That’s not censorship. That’s you exercising your responsibility as the editor of this podcast. If the government decides not to allow this podcast to be broadcast, that’s a different story. So that’s the first thing. The second thing I’ll just say is NewsGuard doesn’t favor censoring anything. What NewsGuard does, NewsGuard has a browser extension for consumers…

…and your listeners can get it if they go to newsguardtech.com. The browser extension puts our numerical rating next to any URL or social media account that we’ve rated. And if you hover over it and click on it, you can read our three- or four-thousand-word nutrition label that transparently explains…

…why we gave this site the rating that we gave it. It’s a Russian disinformation site. Here’s why we know this. Here’s how we can prove it. And you can decide that you believe us or you don’t believe us. Or you can decide, well, it may be Russian disinformation, but I still want to read it. That’s fine. We don’t block anything. So we’re not in the censorship business. We’re in the business of opposing censorship by simply giving people the tools

Eric Schurenberg (09:19.588)
Mm-hmm.

Steven Brill (09:37.162)
to decide for themselves and maybe our tools are wrong and maybe we should have competitive tools. That would be fine too. People can decide for themselves. What is dangerous is the idea that anyone who criticizes someone who is speaking online is trying to censor them. You know, that’s the equivalent of a restaurant critic being accused of trying to put the restaurant out of business. The restaurant critic…

Steven Brill (10:06.506)
…is simply saying what he or she thinks of the restaurant. And people can go to the restaurant for themselves and decide if he’s right or wrong. If he’s wrong, then people aren’t gonna pay much attention to that restaurant critic. So the notion of censorship has really been pushed and promoted by two or three, in this case right-wing, websites in the United States, Newsmax and OAN, who have convinced members of the Freedom Caucus and now incoming members of the President-elect’s administration that they’re being censored by us when they’re not. Now, they may not be getting the advertising revenue that Fox News is getting, because we give Fox News, believe it or not, a pretty good rating. That’s what they’re concerned about. So they go to these politicians and they say, you need to do something about NewsGuard.

Eric Schurenberg (11:11.544)
Mm-hmm. Mm-hmm.

Steven Brill (11:16.636)
You know, NewsGuard is hurting us. They announce an investigation. And guess what happens? Newsmax or OAN gets the scoop. They get an exclusive interview with them attacking us. That helps the politicians raise money. And, in Newsmax’s eyes, it puts pressure on us to improve their rating. And if they want to improve their rating, it’s easy.

We want people to game our system. If they want to improve their rating, they can have a corrections policy. If they want to improve their rating, they can stop saying that the 2020 election was rigged, which they’re still saying.

Eric Schurenberg (11:50.158)
Right.

Eric Schurenberg (12:01.43)
Right, right. The principles, I’m not exactly sure what you call them, but the measurements that you use for NewsGuard to rate news organizations, are basically high-integrity journalistic practices. I wanna switch now to the book, The Death of Truth. I really appreciated that it…

…took the whole vast chaotic information ecosystem that we now exist in and kind of broke what has happened to us down into four forces that are undermining the nation’s commitment to a shared reality. So let’s talk about that. The first of those forces was the rise of the social media platforms. And you have a very intriguing

metaphor for what it’s like going on a social media platform. The metaphor is you walk into a library to look up some information and rather than being helped by a librarian, you are instead confronted with a blizzard of paper that you have to sort through yourself or that an algorithm sorts through on your behalf. So let’s talk about the role of social media platforms in the death of truth.

Steven Brill (13:30.186)
Sure, well, the library metaphor is this. Right now, if you walk into a library, books are neatly arranged according to subject matter. You can pick up the book. You can look at the book jacket; you know who the publisher is. You get some sense of the background of the author. You know what the author’s credentials are. And best of all, there’s a librarian.

Steven Brill (13:58.452)
And the librarian can say, if you’re interested in this topic, this author is kind of conservative. If you’re going to read him, you should probably read this one who’s kind of liberal and you’ll get a good balance. Now imagine if instead you just walked into a library and there were a million pieces of paper just flying around in the air. And you pluck one out of the air and you start reading. You don’t know who the author is. You don’t know what their credentials are.

You don’t know who’s financing it. You have no idea of what you’re reading. That’s your Facebook feed. And of course, now that Facebook feed is going to be even more chaotic, because they’re giving up even the pretense of fact checking, which was really just window dressing anyway. So you have no idea of who’s feeding you the news.

Eric Schurenberg (14:43.001)
Yes.

Steven Brill (14:52.394)
That is bad enough in and of itself. But combine that with what turned out to be the social media platforms’ business model, which wasn’t the librarian’s business model of here’s reliable information and you can read it for yourself. Their business model was to keep your attention as long as they possibly can, because that’s what maximizes their ad revenue.

And the sad truth of human nature is that the most inflammatory, controversial stuff is what keeps people glued. So it’s a combination of anybody can say anything online with an algorithm that takes the most inflammatory stuff and presents it to you. And that’s pretty lethal.

Eric Schurenberg (15:45.987)
Right.

Steven Brill (15:48.964)
So I tell the story of a woman you mentioned earlier, Renee DiResta, who has a newborn and is living in California and goes online just to find out, you know, what’s the vaccination protocol for a newborn. She just wants to get some basic information. And she’s stunned to see this hot debate about, you know, whether a measles vaccine is going to cause autism or something.

And she can’t believe it. And the more she starts reading this stuff, the more they feed her the stuff. And she’s totally confused. And she thought this was like a routine thing. Okay, what month or week should I go get these vaccines? And the more she starts reading it, the more they start sending her other inflammatory posts about other contentious issues. And so…

That’s the first cause that I talk about, the first force which is the advent, so to speak, of social media platforms as a supposed source of information. Now, you combine that with something that is little noticed and little talked about, at least before the book came out, which is programmatic advertising. Most of your listeners have no idea what I’m talking about.

Eric Schurenberg (17:00.256)
Right, right.

Steven Brill (17:16.246)
What programmatic advertising is, is this: most of the advertising placed in the world today is online, and most of the online advertising is done by algorithm. They’re not interested in, you know, I’d like my ad to be in the New York Times because that’s a reliable, good place; I don’t want my ad to be in the National Enquirer because no one’s going to believe it, that’s not my audience.

It’s all done programmatically. And what programmatic advertising does is go for a demographic, a finely tuned demographic. So that if you’re Hertz Rent-A-Car, you’re looking for people of a certain age in a certain geographic area who’ve stayed in X number of hotels in the last six months, who’ve…

…flown on X number of flights, rented X number of cars and expressed an interest in electric vehicles. Let’s just take that as a hypothetical. You can now, through programmatic advertising and through all the data that’s been collected on every one of us, find a million of those people. And you don’t care where you’re finding them. You don’t care if they’re reading a Russian propaganda site…

…or the New York Times. And in fact you’d rather it be a Russian propaganda site, because that’ll cost less money than advertising on the New York Times. So you end up just going after the person. And what that means, just to take one example which I have in the book, is that a couple years ago the largest single advertiser on Sputnik…

…one of the Russian propaganda sites, was Warren Buffett, because Warren Buffett’s Berkshire Hathaway owns Geico. Geico is the Government Employees Insurance Company; that’s how it was formed. It’s a giant consumer insurance company, auto insurance, home insurance. And Geico was originally formed, ironically, during World War II to provide insurance for troops…

Steven Brill (19:38.582)
…who were serving in the war, and then for troops who obviously were adversaries of what became the Soviet Union. Now, do you think Warren Buffett intends to finance Russian propaganda? Of course not. He had no idea that that’s where his ad money was going. Certainly no one at Geico did. And the reason no one at Geico did is the typical programmatic advertising campaign by a large advertiser…

…goes to 40,000 websites at a time. And it’s not like there is someone at Geico’s ad agency sitting around reading 40,000 websites to see if each is a suitable place for a Geico ad. So what’s the result of that? Money goes to the lowest-common-denominator websites, which means it goes away from legitimate news and finances…

…websites and social media platforms, which I’ll come to in a second, that are set up not to pay journalists to do reporting, but either for propaganda purposes or just to get advertising. So I’ll give you another example, which I have in the book. You’ll remember when Nancy Pelosi’s husband was brutally attacked in their home in San Francisco.

Eric Schurenberg (20:54.052)
Right.

Steven Brill (21:06.036)
That night, a website called the Santa Monica Observer, which poses as a legitimate local news website serving the Santa Monica community, went up with a story saying that the attack was actually the result of his encounter with a gay prostitute. Totally made up, totally illegitimate. They had that…

…article on their website, but more important, they posted it on their Twitter account. And lots of people saw it immediately on their Twitter account, including Donald Trump Jr. and Elon Musk, who had 80 plus million followers. He retweeted it. So what’s the effect of that? The Twitter account had the Santa Monica Observer URL, which goes to their website.

Eric Schurenberg (21:57.844)
Mm. Mm.

Steven Brill (22:04.746)
So when it gets retweeted, suddenly the cash register rings at the Santa Monica Observer because they get eyeballs like they’ve never had before. Millions of people go to the Santa Monica Observer. That gets them a bonanza in programmatic advertising. And they got all this advertising money for that deliberately false story. And the San Francisco Chronicle

which is a legitimate newspaper in San Francisco, didn’t get that advertising when it was reporting the real story of what happened to Nancy Pelosi’s husband. So that’s the second force at work, which is programmatic advertising.

Eric Schurenberg (22:43.619)
The interplay between the rise of the platforms, their algorithmic enhancement of outrage and indignation and division, and the fact that false narratives can be monetized through programmatic advertising is a kind of feedback loop that really prioritizes falsehood, the death of truth. One of the things that…

…you might observe in a more reasonable society is that the Santa Monica Observer, which published this defamatory story, or Twitter, for carrying that defamatory story from the Observer, would be liable for defamation. But that’s not the case on the platforms, because of Section 230, which is, as you point out in the book…

…a perfect example of the unintended consequences behind certain kinds of regulation. You note in the book that Section 230 of the Telecommunications Act of 1996, which has enabled so much falsehood on the social media platforms, was originally billed as the Good Samaritan provision. Can you explain how Section 230 came to be and how it has played into the death of truth?

Steven Brill (24:09.995)
Right.

Steven Brill (25:52.812)
Right. Your listeners may not remember, they may never have had the chance to remember, AOL and CompuServe, and I forget what the other one was called. These were platforms where you dialed up with a telephone, and that’s how you got on the internet, and you paid $8 or $9 a month. The platform knew who you were, because you had to sign up and pay with a credit card. And to enhance their value to customers, they set up different kinds of chat rooms. So if you were interested in baseball, you could go online and talk to your fellow AOL members about the Yankees versus the Giants. If you were interested in finance, there was a chat room you could go to; ballet, you name it. People could say what they wanted, but there came a problem in 1995…

…because in a couple of cases, people went into these chat rooms and said things that were deemed to be defamatory. And in one case, one of the platforms, I think it was CompuServe, said, well, we’re not responsible, because we don’t screen anything; people can say whatever they want in our chat room. And in the other case, where something defamatory was said,

the other platform, I think it was AOL, had touted the fact that they screened their content to keep harmful content out of their chat rooms. And there were two court decisions that said, well, if you’ve screened the content, then you are responsible and you are liable; but if you don’t screen the content, you’re not liable. So several members of Congress thought, well, that was kind of a perverse idea.

The company that’s trying to be responsible is getting hit with liability, and the company that is saying we’re not responsible for anything is being let off scot-free. So let’s pass a law. So they added a couple of paragraphs into hundreds of pages of what was the Telecommunications Reform Act of 1996, a massive piece of legislation, which just inserted

Steven Brill (28:17.642)
these few paragraphs, in what was called Section 230, which said that no interactive platform, and again, these were relatively small platforms, they had maybe a million or a hundred thousand customers each, that no interactive platform shall be responsible for the content that people post there. And the logic was…

…you know, the telephone company is not responsible if you and I have a conversation and I say something defamatory because how is the telephone company supposed to screen that? The post office isn’t responsible if I send someone a letter that has something defamatory in it. So why hold these platforms responsible? But more important, the purpose was to encourage the platforms to regulate themselves. So…

…they had been discouraged, because the one that claimed to be responsible and was regulating the content had been held liable, whereas the one that wasn’t, wasn’t held liable. So what it did was free them up to screen content and not be sued. That was the idea. So that’s why it was called the Good Samaritan Act: because you could be a Good Samaritan and screen your content and not be liable…

…if, in screening it, you made a mistake, if something sort of slipped through. It encouraged you to be responsible. So at age 11, Mark Zuckerberg was crowned a Good Samaritan. So cut to 10 years later, when the social media platforms are burgeoning with millions of people: they’re not responsible. So to take an obvious example, if Elon Musk

Eric Schurenberg (29:54.116)
Ha.

Steven Brill (30:13.1)
makes an unsafe car, if Tesla turns out to have software that makes the cars crash or explode, he’s responsible. But if X posts something that causes people to seek the wrong healthcare or causes a riot somewhere, he’s not responsible. And it’s the only

Eric Schurenberg (30:41.528)
Right.

Steven Brill (30:42.632)
industry in the world where the proprietors of that industry are in no way responsible for the damage they may cause. So that’s another part of how we got where we are, which is that not only did the social media companies not have any responsibility, but, as we’ve discussed earlier, they had an incentive…

…to post the most inflammatory content because their business model gave them the incentive to attract the most advertising, which you get by having the most eyeballs paying maximum attention to what you’re posting.

Eric Schurenberg (31:28.772)
Right, and Section 230 acts as a get-out-of-jail-free card for that kind of hateful or harmful or simply misleading information. You brought up Mark Zuckerberg, who got in the news recently, as you alluded to a few minutes ago, by basically saying that he was getting rid of all content moderation on his sites…

…and substituting a form of community notes, as is used on Elon Musk’s platform X. Zuckerberg’s explanation for doing this was that content moderators, people who tried to keep users from spreading or promoting false information, yielded too many false positives. And that was because fact checkers are biased, and…

so, as he characterized it, this was a blow for free expression. Other people are appalled at this move. What’s your take?

Steven Brill (32:37.654)
Well, it’s nonsense. I mean, Facebook has always been a convenient avenue of rampant Russian, Chinese, and Iranian disinformation, to take the most harmful examples. Their fact checking, such as it was, was really window dressing. As I say in the book,

every year since 2016, Zuckerberg would go in front of Congress and say, you know, I know we have to do better. I know we have to try harder. I’m really, really sorry. And then the next year he’d come in and say, I know we have to do better. I’m really, really sorry. We have to have more fact-checking. So it wasn’t much in the first place. But the idea that it was resulting in censorship is absurd for several reasons. First of all,

the fact checkers were given their assignments by Facebook. So Facebook’s algorithms would spit out some things for fact checkers to look at, which, again, was never effective. At most, it identified half a percent of the bad stuff that was online. The bad stuff, as I define it, being factually, provably false and harmful.

At most it could identify that. And then second, because it was inflammatory stuff, it would take the fact checkers three days, five days, a week or two weeks to check it. And by then all that stuff had gone viral anyway. So it was really window dressing. Third, all the decisions were Facebook’s decisions. The fact checkers didn’t decide to block or unblock anything. They just…

Eric Schurenberg (34:23.416)
Mm-hmm.

Steven Brill (34:34.438)
fed the results of their findings to people at Facebook, who made the decisions. So they’re the ones who blocked or downranked something. And then another point, not to beat a dead horse: if a private company called Facebook decides to block some of the content it’s about to publish, that’s called editing. That’s not censorship. And then the last point I’ll make is that all of these fact checking organizations, who were…

…severely underpaid and understaffed anyway, none of them ever got any feedback from Facebook saying, you’re biased, you’re biased on the left. The first they heard of anyone saying they were biased was last week, when Zuckerberg made that decision. So it’s total nonsense. And the result is that on a platform that has always been an instrument of Russian and Chinese and Iranian disinformation, the floodgates are now totally open.

Eric Schurenberg (35:40.916)
You’ve mentioned a few times that a major source of misinformation on the platforms is these foreign actors who do not have America’s best interests at heart. It also has been pointed out by many people that the call is coming from inside the house right now, that the majority of

misinformation comes from Americans who, whether for cynical profit motives, a desire to promote some kind of snake-oil health cure, or for political power or something like that, are using the platforms to spread misinformation. How much of what is online that is false comes from abroad, and how much of it is homegrown?

Steven Brill (36:37.674)
I don’t think we know. For example, we’re looking right now at connections that I hadn’t previously thought were there, and I still don’t know that they’re there, but Alex Jones and Infowars seem to have been very sympathetic to Russian causes, shall we say.

Eric Schurenberg (36:56.899)
Mm-hmm.

Eric Schurenberg (37:06.286)
Mm-hmm.

Steven Brill (37:07.368)
I’m not accusing him of that. I don’t know that for a fact. But I do know for a fact that when you as a reader look online at the Chicago Mirror or the Boston Mirror, whatever these sites are called, and you think you’re reading a homegrown American website, those sites are promoted and financed by the Russians.

So I don’t know what the mix is. What we’re focused on is, again, whether what’s on there is provably false and potentially harmful. And it is a mix. You know, there are people in the United States who, for their own purposes, want to sell a phony nutrition supplement, or who…

…have political purposes in mind and are promoting misinformation and disinformation. But I think an enormous amount of it is coming from our foreign adversaries, because it’s so easy. And with generative AI, which we haven’t talked about yet, which is another part of the book, it is easier still. This guy sitting in Moscow who I talked about, John Dougan, he told us in an interview that it took him a

Eric Schurenberg (38:20.067)
Mm-hmm.

Steven Brill (38:36.086)
couple of hours to launch those 170 sites in the United States using generative AI. So one of the ways to think about it, which I’ve been thinking about, is this: we’re very concerned about open borders, right? Well, there are totally open information borders. And, you know, Mark Zuckerberg just weighed in on opening them still more.

Eric Schurenberg (39:09.612)
Yes. Let’s talk about AI. How does that change the equation? Clearly, as your interview with Dougan revealed, it’s just a lot easier now, using generative AI, to produce misinformation. Are there other things about this new technology that concern you?

Steven Brill (39:30.262)
Well, that’s the first thing, which is it’s a force multiplier. You don’t have to have two dozen people in Moscow working around the clock to produce two dozen websites. You could do that with one person working a couple of hours. So it’s a force multiplier. It just allows you to produce phony content easily. Second, it’s…

Eric Schurenberg (39:47.16)
Mm-hmm.

Eric Schurenberg (39:51.833)
Mm-hmm.

Steven Brill (39:59.696)
It’s a polluter. What I mean by that is, the way a generative AI model that is trying to be accurate, that is trying to be responsible, trains is that it basically, you know, crawls the internet and just soaks up all the information it can everywhere. We have a service at NewsGuard

that provides some tools that help AI companies discern what’s reliable and what’s not reliable. But for all intents and purposes, what a generative AI machine does is train on everything that’s out there. So there are certain subjects where the only stuff that’s out there is going to be the false stuff. So, for example, we have one example like this. If you ask…

…one of the chatbots, are there NATO troops fighting in Ukraine? Now everybody says, and most responsible people know, that there are not NATO troops fighting in Ukraine. NATO’s helping Ukraine, the United States is helping Ukraine, but they don’t have troops on the ground. However, various Russian disinformation websites have said that there are NATO troops fighting in Ukraine. So if you send out

your generative AI machine to go look for that and to train on that, the only training materials it’s going to have related to NATO troops fighting in Ukraine are going to be the websites that say there are NATO troops fighting in Ukraine, because it’s not like The Economist is going to do an article every week saying, hey, there are no NATO troops in Ukraine, because everybody knows there are no NATO troops in Ukraine. So there are all kinds of information voids

that disinformation networks can exploit to create and promote these things. And we’ve done all kinds of tests like that, red teaming, that really indicate that. But the ultimate real problem is the genie’s out of the bottle. And you could have someone from his basement in New Jersey or his basement in Siberia creating

Steven Brill (42:27.69)
masses of disinformation and sending it out there. And again, because we have open borders when it comes to information, it’s going to affect us, it’s going to affect everyone.

Eric Schurenberg (42:31.768)
Yeah.

Eric Schurenberg (42:43.212)
Let’s pivot to solutions to this massive and complex problem. There are a number of regulatory reforms that you suggest in the final chapter of The Death of Truth. Talk about some of them, and how you balance protecting truth with the very important principle of freedom of expression.

Steven Brill (43:11.712)
Well, I actually think it’s easy because again, the idea isn’t to have the government control expression or control speech. But the idea also isn’t to have the tech companies control it by being completely non-transparent and completely unaccountable. So one of the solutions I suggest

Eric Schurenberg (43:39.054)
Mm-hmm.

Steven Brill (43:41.936)
is that the FTC right now has rules that say if you make a contract with someone, if you’re a consumer company and you have a contract, and you’re found to be violating that contract and deceiving consumers, the FTC can sue you and fine you. So, for example, the FTC in the past has sued and fined Facebook…

…on two different occasions for violating its terms of service, in terms of the privacy protections it promises Facebook users. Well, the same terms of service you agree to when you sign up for Facebook have Facebook promising not to allow harmful disinformation…

…and all kinds of harmful speech. That’s Facebook saying, we don’t allow this. Now, I don’t know whether Zuckerberg has yet amended his terms of service, but at least when I wrote the book, the terms of service said we won’t allow harmful disinformation, we won’t allow hate speech, we won’t allow child pornography. That’s a promise. That’s a contract. Because you’re promising…

…not to do something, you’re promising to screen that stuff out. It doesn’t say, we have such a high volume of stuff that we can’t possibly screen for all this, so therefore you’re on your own and good luck. Now, they could change their terms of service and do that, but when they do, I think advertisers are gonna have a real problem supporting that platform. So again, the FTC doesn’t have to police

speech. It just has to enforce the contract under which Facebook defines the speech it will allow. Now, that has all probably changed since the book came out, because Zuckerberg, even if he hasn’t changed his terms of service yet, has abandoned those promises. The last time I looked, the other day, they were still in the terms of service. Let’s see if that changes. That’s one thing. The other thing

Eric Schurenberg (46:05.176)
Mm-hmm.

Steven Brill (46:10.988)
that we could do is… let me come back to that for a second. One of the excuses the platforms have always used is that there’s such a mass volume of content, we can’t possibly screen out everything that is potentially harmful. Well, let’s think about that for a minute. If you walk into any restaurant or any movie theater, you’ll see a sign on the wall

that says occupancy by more than X number of people is dangerous and unlawful. That’s because if there’s a fire, people have to be able to get out of the restaurant or the movie theater or the stadium safely. So there are capacity rules. You don’t have those capacity rules on Facebook. Facebook doesn’t say, you know, occupancy

by more than X billion people makes us unable to screen our content to keep off hate speech or bad stuff. But you could apply that principle. And again, to come back to the terms of service: if the terms of service said, you know, we have so much stuff being posted on YouTube or Facebook that we can’t screen it, so good luck, that would be fine. Under our constitution, they’d be allowed to do that. But…

…they’re not allowed to promise something that they’re not delivering on. So that would be one thing. The second thing would be that you can do more at the fringes without violating the First Amendment. You could easily require platforms not to allow people to post without their names.

And you should be accountable for what you post. Now, the first thing that would do is really get rid of all the Russian bots and all that stuff, but it would also make people more accountable for what they say. And you’d have a better sense of who is saying it.

Steven Brill (48:30.23)
The third thing concerns the phony pink slime websites that are operated by political operatives. There’s a whole bunch of them, hundreds of them, operated by Democratic political action committees and Republican political action committees, posing again as legitimate news sites, but favoring their candidate and attacking the opposing candidate.

There are campaign finance laws that could be applied to those people, because they would have to disclose: that’s who we are and that’s what we’re doing. It’s pretty simple. Again, it’s not violating anybody’s free speech rights. It’s about disclosure. But the other point I hit in the book is that the technology is part of the problem. The other part of the problem is the kind of division

Eric Schurenberg (49:12.057)
Yeah.

Right.

Steven Brill (49:27.484)
we have in this country. In the book, I spend a fair amount of time profiling people who showed up at the Capitol on January 6th, people who were susceptible to the inflammatory content online, who went down those rabbit holes. And for those people, the system has not been working.

Eric Schurenberg (49:53.848)
Mm-hmm. Mm-hmm.

Steven Brill (49:55.116)
There’s rampant income inequality, rampant lack of opportunity to move up. This is the first generation that doesn’t think their children are going to be better off than they are. And we have to fix that. And one way to fix that polarization, without getting into the politics, liberal and conservative, is to look at two aspects

of how we organize ourselves politically. One is gerrymandering, which encourages polarization by having legislators group all the Democrats into one legislative district or one congressional district and Republicans in another. Democratic legislatures and Republican legislatures are pretty much equally guilty of that. So the real

electoral contests in most of the country, most of the time, are Republican primaries or Democratic primaries, whether it’s for state legislature or Congress, and there you’re competing to be the most appealing to the Democrats who show up and vote in primaries or the Republicans who show up and vote in primaries. So if there were controls on gerrymandering,

which I think is unconstitutional, but the Supreme Court has disagreed with me, you can do that legislatively. That would help. And the second thing, to come back to primaries, is to not have a separate primary for Democrats and Republicans, but to have a runoff system, as they have in California and a couple of other states, where everybody runs

Steven Brill (51:49.492)
in a preliminary general election, so you have to appeal to everybody. And then the top two vote-getters have a runoff. In both cases, in most of the states in this country, you could achieve those changes by having citizens’ initiatives. You don’t need to get them passed in the state capitol. You could have a citizens’ initiative that gets on the ballot and people would vote for it, just the way there were citizens’ initiatives last year and the year before related to abortion rights.

Eric Schurenberg (52:22.314)
Uh-huh, uh-huh. I think that shows kind of how much the misinformation environment is entangled with the political dysfunction as well. You’re right.

Steven Brill (52:37.484)
It’s all one, you know; they feed on each other. Those are the three elements: the social media platforms, the programmatic advertising, the political polarization. And then the last element: we have the advent of the politicians who are all too willing to take advantage of that.

Eric Schurenberg (53:01.078)
Right, right. And who are incented by a polarized political system to do so.

Steven Brill (53:06.23)
Good, good.

Eric Schurenberg (53:11.202)
Let me close this conversation with a question that I often ask on In Reality. Are you optimistic or pessimistic about the outlook for the information environment?

Steven Brill (53:27.436)
That’s hard. I wish you hadn’t asked that. I’m a perpetual optimist, but it is hard to sort of look around and be optimistic. I’m sitting here this morning, as soon as we get off this podcast, I’m presiding over a meeting talking about how we put out the nine or 10 different new false narratives we’ve discovered about…

…the California wildfires. And half of them, we think, are being promoted out of Moscow. And I know I sound like a crazy person saying the Russians are doing this, the Russians are doing that. But it’s hard to be optimistic about the information environment when almost every single bad event now is accompanied by multiple conspiracy theories, multiple charges and countercharges, some of which are true and some of which are not. The problem is you don’t know which is which.

Eric Schurenberg (54:36.226)
Yeah.

Steven Brill (54:38.028)
I’m sitting here today, and we had a discussion last night: was one of those reservoirs empty or not? I thought I saw a video of an empty reservoir, but then I thought I saw a video of a full reservoir. Which is which? What is what? Just imagine if there was a moon landing today.

Eric Schurenberg (55:02.88)
Mmm. Mmm.

Steven Brill (55:03.54)
How many people would believe there was a moon landing today?

Eric Schurenberg (55:07.524)
That is a frightening thought. I would say that you have just made an extremely good case for people to refer to NewsGuard and limit their information intake to those organizations that rate high.

Steven Brill (55:23.212)
Yeah, we have a Substack that I would love your listeners to sign up for. They can go to newsguardtech.com and see it there. And it basically keeps people up to date with exactly the kind of stuff we’re talking about.

Eric Schurenberg (55:41.134)
Great. All right. Well, I refer people to that and also to your book, which is called The Death of Truth. Steve Brill, thank you very much.

Steven Brill (55:47.233)
Thank you.

Happy to do it. Take care.

 


Created & produced by: Podcast Partners / Published: Jan 16 2025

