Howie Klein

Can Congress Outlaw Lying Online? Is That Even A Good Idea?



It is now almost universally recognized that disinformation is among the biggest threats to democracy; Obama recently told a conference that he had underestimated the vulnerability of democracies to false information intended to mislead. Conor Friedersdorf wrote in today's Atlantic that the other side of this coin is that "what skeptics call 'Big Disinformation' or 'the Disinformation Industrial Complex' is trendy groupthink that could itself distort national priorities or perceptions of reality-- and perhaps lead to infringements on free speech and freedom of the press. Abroad, disinformation is regularly invoked as a pretext to suppress dissent."


"Our constant connection to internet discourse and the platforms that mediate it," he wrote, "are recent developments, as destabilizing in their own way as the rise of the printing press, television, and radio were in earlier eras. Today’s ever-changing algorithms would probably sow confusion and polarization in civic debates even if we were all consuming exactly the same feeds. But everyone’s digital reality is unique. And foreign governments, scammers, and outrage-entrepreneurs are trying to harm, trick, or manipulate us, taking advantage of powerful new tools such as deepfakes and artificial intelligence as quickly as they advance. How can a free country respond at scale, with due epistemic modesty and without infringing on civil liberties or otherwise doing more harm than good? Obama has some good instincts on the subject. Perhaps cognizant of how 'disinformation' can be invoked to undermine civic deliberation, he prefaced his remarks by emphasizing his unwavering support for a free-speech culture." Think "born in Kenya" and "death panels" for example.


Nonetheless, "reflecting an emerging consensus in the Democratic Party, he called for new laws to be imposed on digital-communications platforms such as Facebook, Twitter, and YouTube. Their designs 'monetize anger, resentment, conflict, division,' he alleged-- yet are opaque, embedding nontransparent editorial choices that sometimes spark violence. He wants us all to understand those choices better. What algorithms do these platforms use? Are botnets gaming them? How do they microtarget ads? 'A democracy can rightly expect them to show us,' Obama insisted, noting, for example, our expectation that meat-processing plants open their doors to food-safety inspectors. The most concerning downsides of anti-disinformation laws arguably disappear if they merely better inform us about the information flows we consume and refrain from infringing on the free exchange of ideas (including obvious misinformation, such as ivermectin being a near-perfect COVID-19 prophylactic). But if Big Disinformation is to benefit Western democracies and justify the resources being lavished on it, rather than merely avoiding the worst harms done in the name of fighting disinformation elsewhere in the world, it must clear additional hurdles, some of which may prove especially difficult in establishment institutions with ideological monocultures." Friedersdorf then lists four of those hurdles:


1. Define terms rigorously. The leaders of nonprofit organizations aimed at combatting disinformation and journalists assigned to cover a “disinformation beat” may be tempted, or perhaps unconsciously inclined, to treat more and more social ills as disinformation problems. The struggle against that distorting tendency requires a clear delineation between objections to falsehoods intended to mislead and various other objections. For example, if in 2024 a foreign government covertly buys YouTube ads telling undecided voters that Kamala Harris was born outside of the United States, that would fall under disinformation. But if the ads instead declared that Harris presided over efforts to block the release of a wrongly convicted man from prison on procedural grounds, that would not be disinformation-- it is true, though one could characterize it as unlawful foreign interference.
Obama is right that social media monetizes anger while making a lot of users angry. But is “disinformation” the right label for that design? Most tweets that make me angry aren’t willful falsehoods. If all false tweets were eliminated from the platform tomorrow, Twitter could still run an algorithm that optimizes engagement and therefore winds up elevating polarizing opinions, profiting off anger every bit as much in the bargain. Conversely, Twitter could presumably elevate factually false tweets that make most people happy.
2. Study alternative accounts of what ails us. Many attendees at the Chicago conference blamed the January 6 insurrection on disinformation spread by tech companies. They noted that Donald Trump’s lies about the 2020 election spread partly through social media, helping to fuel the “Stop the Steal” rally. However, any president who shouted for months that an election was stolen could have rallied a similar number of allies to the Capitol-- with or without modern social networks. The significant problem was electing an unpatriotic narcissist president, not bad algorithms spreading willful lies on social media (many people spreading false claims about Election 2020 really believed what they were saying). In light of Karen Stenner’s thesis in The Authoritarian Dynamic, it may even be that merely by spreading true but polarizing news and diverse perspectives, social media activates latent predispositions toward authoritarianism-- an account of rising polarization and violence that has very different implications than a disinformation problem.
The more carefully one defines disinformation and analyzes it alongside other factors, the more unclear it becomes that fighting disinformation is a solution to a given ill. Better outcomes may require focusing elsewhere-- for example, on fielding better anti-authoritarian candidates.
3. Earn back trust with a bigger tent. Disinformation seems to be a bigger problem on the right than on the left in the Trump era. The storming of the Capitol, dying from COVID because of lack of vaccination, and the Q phenomenon have no analogues of equal consequence on the left. Still, the left has significant disinformation and misinformation problems too, and any solution to disinformation will require cooperation beyond the center-left.
And many outside the center-left may be skeptical of Big Disinformation because of the dearth of ideological diversity at many anti-disinformation efforts. Diversity of thought would make these efforts less error prone, less vulnerable to ideological capture, and likelier to gain broader buy-in. Skepticism is further fueled by denigrating as “disinformation” assertions, like the New York Post article on Hunter Biden’s laptop, that turn out to be true; supposed fact-checking efforts that fail to rigorously distinguish among facts, analysis, and opinion; and the invocation of subject-area expertise to disguise value judgments, as some in the public-health community did during the George Floyd protests.
The timing of Big Disinformation’s rise is also suggestive of double standards that narrow its appeal. Neither lies nor misinformation nor their distribution at scale is new, so it’s noteworthy that disinformation became public enemy number one not after (say) the absence of Ahmed Chalabi’s promised weapons of mass destruction in Iraq, the CIA torture cover-up, lies about mass surveillance, or mortgage-backed securities dubiously rated AAA, but because of a series of populist challenges to establishment actors. Among the many factors that perhaps help to explain Trump’s election, Brexit, the January 6 insurrection, and vaccine hesitancy, centering “disinformation” implies liars and greed-motivated algorithms are to blame-- so why reckon with establishment failures? If the people knew the truth, this framework implies, they’d have behaved differently! Even now that Big Disinformation is here, you don’t see its adherents talking much about years of deliberately misleading reports from Afghanistan, a flagrant undermining of democracy.
And additional efforts are needed to reassure Americans that the center-left isn’t trying to invoke disinformation in order to narrow democratic debate. Consider an exchange at the conference in Chicago, where a young woman posed this question to Senator Amy Klobuchar:
You introduced the bill today that would punish social-media companies like Facebook and Twitter for having health misinformation on their platforms. And I’m going to ask you, if I were to say that there are only two sexes, male and female, would that be considered misinformation that you think should be banned speech on social-media platforms?
Here is Klobuchar’s answer:
Okay, I’m not going to get into what misinformation-- first of all, I think the bill you’re talking about is different than the one we’ve mostly been talking about, so I want to make that clear. We’ve been talking about the competition bill, but there is another bill that I have on vaccine misinformation-- it is that specific-- in a public-health crisis. You wonder why you get that specific? It’s because we’re trying to find carve-outs. That’s what I did with [U.S. Senator] Ben Ray Luján, that you can’t have immunity as a social-media company if you are broadcasting vaccine misinformation. There is another bill that Mark Warner did that is about just misinformation in general and hate speech and those kinds of things.
And I think one of the things Deval [Patrick] is getting at is that, a lot of times, the content fight-- and Kara [Swisher] was getting at this-- starts to dominate the world here, and one of the things I’ve been so heartened by is some of my Republican co-sponsors on this bill who have different views than me on some of the internet content issues have united that this is a good place to start, and have not turned it into some of these disputes about the internet. So that’s why we have focused on competition policy.
A more reassuring answer would have been, “No, of course I don’t think the government should punish a social-media company for a user arguing that there are only two sexes, male and female. We always want Americans to be freely able to discuss contested issues of our time.”
To overcome all this skepticism and earn broader trust, Big Disinformation should cultivate a reputation for free-speech values, nonpartisanship, and ideological neutrality-- for example, caring as much about willful falsehoods spread in service of outcomes the establishment likes, such as staying in Afghanistan, as about outcomes they don’t, such as vaccine hesitancy. The attitude can’t be, Stop disinformation to stop Trump in 2024. It must be, Stop disinformation as an end in itself, as doing so will be better on the whole.
4. Rebuild a culture of critical thinking. Some Americans are taught to prioritize separating fact from appeals to emotion, looking for evidence to support claims, identifying errors in chains of reasoning, separating the truth of an argument from the identity of the person making it, and evaluating the plausibility of all arguments. Such habits of mind are helpful in staying resilient against disinformation, but competing approaches are more and more preferred. Other young people are acculturated to prioritize moral clarity and outrage at injustice, or “cultural competencies” such as “reading the room,” avoiding microaggressions, and centering the identity of the speaker, perhaps by applying privilege or intersectional analysis and deferring as “allies” to the purportedly marginalized.
The latter outlooks are not without insights, but they are not especially helpful in staying resilient against disinformation-- especially if bad actors pose as marginalized people, which is not an imagined hypothetical but a documented Russian-troll tactic. “These malicious accounts tweeted a mixture of sentiments to cultivate followers and manipulate U.S. narratives about race, racial tensions and police conduct,” The Washington Post reported two summers ago. I’ve wondered if they are partly responsible for the fact that although a couple dozen unarmed Black men are killed by police in a given year, a majority of very liberal people believe that figure is 1,000 or more.
“The Russians built manipulative Black Lives Matter and Blue Lives Matter pages, created pro-Muslim and pro-Christian groups, and let them expand via growth from real users,” Samuel Woolley, the author of The Reality Game: How the Next Wave of Technology Will Break the Truth, told The Economist. “The goal was to divide and conquer as much as it was to dupe and convince.” Anyone engaged in a politics of identity-based solidarity, whether with “Black lives” or “Blue lives” or Christians or Muslims, was presumably likelier to be subject to that disinformation effort and to be vulnerable to it, as allies aren’t supposed to skeptically evaluate claims and demand evidence. Americans should strive to treat everyone with dignity. To make the next generation more resilient to disinformation spread on social media and to short-circuit foreign and domestic attempts to leverage race and religion to divide us, we should also shift back toward prioritizing dispassionate analysis of statements, regardless of the speaker’s perceived identity, as a valuable habit of mind, not a microaggressive example of insensitivity.

Friedersdorf concluded with two examples of public-policy remedies proposed at the conference. The one he opposes is an internet version of the Fairness Doctrine, which would force sites like Twitter and Facebook "to serve randomly chosen or balanced content." He wrote that he mistrusts "any law that would require government or tech companies to categorize content by ideological viewpoint and decide what must be amplified and diminished." What he favors is moving "away from 'Let’s regulate types of speech that are on platforms' towards 'Let’s look at the system, at the design of the technologies, and think about if there are ways to regulate how things get amplified very quickly, whether companies should disclose when things go viral and how they go viral, and give consumers control of that.'... A law forcing transparency to enable meeting bad speech with good would address actual disinformation, in an ideologically neutral manner, inviting critical thinking to override groupthink. More intriguing still is the prospect of new platforms where design transparency is built in from the start-- and preventing or identifying and undermining disinformation is a priority. Are there potential platforms of that sort that people would want to use as much as Facebook or Twitter? The nonprofit sector offers some precedent for hope. 'Fund a wave of experimentation in building social networks that we govern, that we control, that are noncommercial, that are non-surveillant, and actually work to benefit us as individuals and citizens,' the media scholar Ethan Zuckerman said in Chicago. Perhaps Big Disinformation should attempt to create rather than to regulate."



1 comment


dcrapguy
Apr 22, 2022

pointless.


the way the nazis selectively enforce the first and the democraps refuse to enforce it at all, the following applies:


1) lies in the pursuit of a nazi reich or to persecute hated demos is golden

2) truth is illegal, especially when it validates a progressive idea.


next topic!
