The so-called Twitter Files, written by a group of independent journalists given access to internal company documents, offer a behind-the-scenes glimpse at how the federal government shaped the flow of information on one of the world’s largest social media platforms.

Some tech pundits say that the Twitter Files contain no secrets: they knew about the thousands of takedown requests the company receives every month from law enforcement agencies and the courts, and they had already opined about the immense challenges of content moderation. However, the Twitter Files have brought important new information to light. They show that the company stifled debate over important policy issues by shadowbanning certain accounts for no good reason and then misleading the public. They show that Twitter was routinely strong-armed by the White House and the FBI into complying with frivolous takedown requests. And they provide evidence that the intelligence community likely influenced the decision to suppress the Hunter Biden laptop story during Joe Biden’s 2020 presidential campaign.

“Almost every conspiracy theory that people had about Twitter turned out to be true,” Elon Musk said on the All-In Podcast in late December. “Is there a conspiracy theory about Twitter that didn’t turn out to be true?”

Conspiracy theorists are often sloppy with the facts and exaggerate what actually happened. But the information brought to light by the Twitter Files should be alarming to anyone who cares about free speech and a free society. Is the government meddling similarly with YouTube, Facebook, Instagram, and Google search? How can we prevent the internet from becoming a centralized apparatus through which state actors shape and censor public debate? Here are three major takeaways from the Twitter Files:

#1 Twitter distorted the conversation and misled the public.

Twitter had a system of “whitelists” that allowed its algorithms and human moderators to turn engagement dials up and down based on what a user said. It used this power to limit the ability of certain groups and individuals to reach an audience, including conservative commentator Dan Bongino, Stanford economist and medical school professor Jay Bhattacharya, mRNA vaccine critic Alex Berenson, and the Libs of TikTok account.

The company regularly tap-danced around the meaning of “shadowbanning” to maintain plausible deniability. In a 2018 blog post, Twitter’s Trust and Safety team wrote, “We do not shadow ban. You are always able to see the tweets from accounts you follow (although you may have to do more work to find them, like go directly to their profile).”

Needless to say, making tweets so hard to find that digging through someone’s profile is the only way to unearth them is what’s commonly known as “shadowbanning,” or, as Twitter employees termed it with an Orwellian flair, “visibility filtering.”

The Twitter Files show that company staff became increasingly comfortable using these tools to manage the flow of information and political discourse around the 2020 election. In the weeks preceding the January 6 riot and the decision to evict the president from the platform, they regularly deployed filters to limit the visibility of Trump’s tweets and many others pertaining to election results.

Of course, Twitter is a private company, and it has every right to label the tweets of Harvard epidemiologist Martin Kulldorff as “misleading” when he tweets statements such as, “Thinking that everyone must be vaccinated is as scientifically flawed as thinking that nobody should.”

But Twitter is still worthy of our condemnation. Bhattacharya, for instance, was shadowbanned despite being a respected professor at a prestigious medical school, and many of his warnings during the pandemic turned out to be correct.

And you can acknowledge serious problems with the work of former New York Times reporter Alex Berenson (who, for instance, badly misinterpreted data to infer a spike in “vaccine-caused mortality”) while still believing it’s preferable to have a public airing of controversial and deeply flawed arguments.

A better way to deal with speech you disagree with is to respond to it, as Derek Thompson attempted to do in The Atlantic when he called Berenson “The Pandemic’s Wrongest Man.” Ironically, Twitter raised Berenson’s profile by allowing him to inhabit the role of the oppressed truth-seeker.

#2 The government is secretly policing speech.

The most troubling thing about the Berenson de-platforming isn’t Twitter’s decision per se, but whether it made that decision freely. Was it done at the behest of the federal government? The Twitter Files provide circumstantial evidence that the White House played a role.

“When the Biden admin took over, one of their first meeting requests with Twitter executives was on Covid,” writes journalist David Zweig in the Twitter Files. “The focus was on ‘anti-vaxxer accounts.’ Especially Alex Berenson.”

Berenson was suspended hours after Biden said to a reporter that social media companies were “killing people” by failing to police pandemic-related misinformation.

Zweig also revealed that a series of meetings took place last December in which an “angry” Biden team excoriated Twitter executives because they were “not satisfied” with its “enforcement approach” and wanted “Twitter to do more and to de-platform several accounts.”

In Twitter Files 6, Matt Taibbi described Twitter as an “FBI subsidiary.”

Agents from a dedicated task force would regularly send lists of accounts, some with fewer than 1,000 followers, for Twitter to review for terms-of-service violations, such as a left-leaning account jokingly telling Republicans to vote a day late.

Yoel Roth, Twitter’s former head of Trust and Safety, was in weekly meetings with the FBI, the Department of Homeland Security, and the Office of the Director of National Intelligence. In return for the company’s work handling FBI requests, Twitter received $3.4 million between October 2019 and early 2021.

Complying with roughly 8,000 requests would have required significant resources from Twitter, and there’s no reason the government shouldn’t have to pick up the tab. But should this arrangement exist in a free society, given the mission creep that the Twitter Files exposed?

Another alarming secret revealed by the Twitter Files: what led Twitter to block users from sharing a major New York Post story about the contents of Hunter Biden’s laptop. The files reveal that Jim Baker, the former FBI lawyer then working at Twitter, leaned on Roth to treat the laptop as the likely result of a Russian hack-and-leak operation, despite little evidence for that claim. The FBI had told Roth to expect just such a foreign operation to drop in October and that Hunter Biden would be a likely target. A month before the laptop story broke, Roth even participated in a tabletop simulation at the Aspen Institute about handling a Hunter Biden data dump.

Publicly, 51 former intelligence officials, including James Clapper, Michael Hayden, and John Brennan, published a letter claiming that the laptop story “has all the classic earmarks of a Russian information operation.” The twist? The New York Post story turned out to be completely true.

With all that pressure coming from supposedly reputable and knowledgeable sources, how independently was Twitter acting when it suppressed the story?

Giving the government unfettered access to exert pressure behind the scenes turned a forum for free discussion, with all the unavoidable messiness and misinformation that free speech entails, into something much worse: a state-approved narrative generator.

#3 Twitter permitted covert state propaganda on its platform.

The U.S. government ran sock-puppet accounts on Twitter and then may have tried to quietly shut them down when it appeared to have been caught in the act.

After Trump won the 2016 election, and Hillary Clinton blamed Russia for her loss, Congress began to focus intently on the role of foreign misinformation on the internet.

Twitter began transparently labeling accounts associated with any government, whether a politician or a state-run media outlet. It also pledged to Congress to “rapidly identify and shut down all state-backed covert information operations & deceptive propaganda.”

But apparently, that didn’t always extend to propaganda disseminated by the U.S. government.

As The Intercept’s Lee Fang revealed, U.S. Central Command (CENTCOM) asked Twitter to “whitelist” several Arabic-language accounts spreading messages in support of the U.S.-backed war in Yemen so that they would get the same special treatment that verified accounts receive. Twitter complied.

One whitelisted account described “accurate” U.S. drone strikes that “only hit terrorists.” At first, the accounts had an attached disclosure noting that they were linked to the U.S. government. But that disclosure was eventually dropped from many accounts, and at least one “whitelisted” government sockpuppet account used an AI-generated image as a profile picture.

An internal email thread shows that the Department of Defense wanted to meet with Twitter’s legal team in a secure facility. One member of Twitter’s team speculated that the agency wanted to classify its work with Twitter to “avoid embarrassment.” Baker suspected that the fake accounts had been set up using “poor tradecraft” and that the Pentagon wanted to wind down the operation without revealing the accounts’ “connection to the DoD.”

So what should we demand of Congress in light of the Twitter Files revelations?

Lawmakers should rein in the FBI and other executive agencies with stricter reporting requirements and defund any federal task force whose mission includes fighting the slippery concept of “misinformation.” And yet that will probably never happen: the new omnibus bill just increased the FBI’s funding. Unfortunately, if a tool of centralized speech control can be abused by a government, it’s a virtual certainty that it will be abused eventually.

We should follow the advice of Julian Assange and the cypherpunk movement that shaped his thinking, which maintains that technology, not government policy, is the only effective check on authoritarian tendencies in the long run.

“You cannot trust a government to implement the policies it says it’s implementing,” Assange said in a filmed discussion with his fellow cypherpunks in 2017. “We must provide the underlying tools, cryptographic tools that we control, as a sort of use of force.”

The good news is that many people are gradually waking up to the dangers of allowing a single company to maintain centralized control over public conversation. After Trump’s de-platforming, many conservatives migrated to the encrypted network Telegram. Since Elon Musk’s takeover of Twitter, many progressives have joined the decentralized platform Mastodon. Former Twitter CEO Jack Dorsey has invested in Nostr, an open protocol that allows users to transmit a post over a decentralized network of relays using cryptographic keys. Dorsey also published an article calling for “a native internet protocol for social media,” arguing that “Social media must be resilient to corporate and government control” and “moderation is best implemented by algorithmic choice.”
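For readers curious what it means to “transmit a post over a decentralized network of relays using cryptographic keys,” here is a minimal, illustrative sketch, in Python, of how a Nostr-style client might assemble a signed event before handing it to relays. The event layout follows the protocol’s basic published design; the schnorr_sign helper is a hypothetical placeholder for a real secp256k1 signer, not an actual library call.

```python
# Minimal sketch of a Nostr-style event: a post is just signed JSON that
# any relay can store and forward. schnorr_sign() is a placeholder.
import hashlib
import json
import time


def schnorr_sign(event_id_hex: str) -> str:
    # Hypothetical stand-in: a real client would produce a BIP-340 Schnorr
    # signature over the event id using the user's secp256k1 private key.
    return "<64-byte-schnorr-signature-hex>"


def make_event(pubkey_hex: str, content: str, kind: int = 1) -> dict:
    created_at = int(time.time())
    tags: list = []
    # The event id is the SHA-256 hash of a canonical JSON serialization
    # of the event's fields, so anyone can verify it wasn't altered.
    serialized = json.dumps(
        [0, pubkey_hex, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
    return {
        "id": event_id,
        "pubkey": pubkey_hex,
        "created_at": created_at,
        "kind": kind,
        "tags": tags,
        "content": content,
        "sig": schnorr_sign(event_id),  # binds the post to the author's keypair
    }


if __name__ == "__main__":
    event = make_event("a" * 64, "Hello from a censorship-resistant relay network")
    print(json.dumps(event, indent=2))
```

The design point this illustrates is that identity lives in a keypair rather than in a company’s user database, and any relay can carry the event, so no single operator can quietly filter a user out of the network.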

If Elon Musk wants to keep Twitter relevant and competitive in this continually fracturing landscape and live up to his promise of bringing more free speech to the platform, he will have to take these developments seriously and think about how Twitter 2.0 can avoid repeating its predecessor’s fate of falling victim to state interference.

He should aim to design a platform that makes this kind of meddling impossible so that we don’t have to trust any tech executive, including Elon Musk, not to censor speech on behalf of the government.

Produced by Zach Weissmueller; edited by Regan Taylor; sound editing by John Osterhoudt.

Photo credits: PIERRE VILLARD/SIPA/Newscom; Adrien Fillon/ZUMAPRESS/Newscom; Al Drago—Pool via CNP/SIPA/Newscom; Alisdare Hickson, CC BY-SA 4.0, via Wikimedia Commons; CelebrityHomePhotos/Newscom; Daniel Oberhaus, CC BY-SA 4.0, via Wikimedia Commons; Dominick Sokotoff/ZUMAPRESS/Newscom; Dylan Stewart/Image of Sport/Newscom; Gage Skidmore, CC BY-SA 3.0, via Wikimedia Commons; I, Aude, CC BY-SA 3.0, via Wikimedia Commons; Image of Sport/Newscom; imageBROKER/Markus Mainka/Newscom; Jarek Tuszyński, CC BY-SA 3.0 & GFDL, via Wikimedia Commons; Kyodo/Newscom; Kris Tripplaar/Sipa USA/Newscom; Lordalpha1, CC BY 2.5, via Wikimedia Commons; LiPo Ching/TNS/Newscom; Michael Ho Wai Lee/ZUMAPRESS/Newscom; Ministério das Comunicações, CC BY 2.0, via Wikimedia Commons; New Media Days/derivative work: Cirt, CC BY-SA 2.0, via Wikimedia Commons; Oliver Contreras—Pool via CNP/picture alliance/Consolidated News Photos/Newscom; Paul E Boucher/ZUMA Press/Newscom; picture alliance/Frank Duenzl/Newscom; Ron Sachs/picture alliance/Consolidated News Photos/Newscom; Ser Amantio di Nicolao, CC BY 3.0, via Wikimedia Commons; Tesla Owners Club Belgium, CC BY 2.0, via Wikimedia Commons; Tom Williams/CQ Roll Call/Newscom; Whoisjohngalt, CC BY-SA 4.0, via Wikimedia Commons; Yichuan Cao/Sipa USA/Newscom; ZUMAPRESS/Newscom

Music Credits: “Fleeting Wave,” by Palm Blue via Artlist; “Quiet Pull,” by Tamuz Dekel via Artlist; “Ripples,” by Tamuz Dekel via Artlist; “Particles,” by Nobou via Artlist; “From Above,” by Dan Mayo via Artlist

CORRECTION: This article originally described Nostr as a “blockchain-based social media site.” Nostr is an open protocol that allows users to transmit a post over a decentralized network of relays using cryptographic keys.