The chief executives of Facebook, Google and Twitter were in the hot seat once again, as lawmakers grilled them Thursday over their companies' alleged role in the proliferation of disinformation and misinformation online.
The Silicon Valley chief executives have already faced questioning on Capitol Hill multiple times over the past few years.
The deadly riot on Jan. 6 at the U.S. Capitol -- and the apparent role social media played leading up to the event -- loomed large over the virtual hearing convened by two House subcommittees of the Committee on Energy and Commerce.
The hearing, titled "Disinformation Nation: Social Media's Role in Promoting Extremism and Misinformation," kicked off at noon ET.
Rep. Mike Doyle, D-Pa., came out swinging, accusing the platforms of allowing misinformation related to the election (including the "Stop the Steal" movement) and related to the COVID-19 vaccine to be amplified online.
Doyle asked each of the tech chiefs to answer with a "yes" or "no" as to whether their platforms bear some responsibility for disseminating disinformation related to the election that eventually led to the attack on the Capitol.
Facebook's Mark Zuckerberg and Google's Sundar Pichai demurred, refusing to give a straight "yes" or "no" answer. Pichai called it a "complex question."
Twitter's Jack Dorsey, meanwhile, responded: "Yes, but you also have to take into consideration a broader ecosystem. It's not just about the technology platforms that were used."
Doyle also cited new research that found 65% of anti-vaccine disinformation online came from just 12 individuals or organizations, and asked the tech chiefs if they would commit to taking these disinformation "super spreaders" down "today."
Zuckerberg said his team would need to look at the exact examples. Pichai said they have clear policies and remove content, though some content is allowed "if it’s people’s personal experiences."
Dorsey said, "Yes, we remove everything against our policy."
The CEOs took heat from lawmakers on both sides of the aisle, with many Republicans accusing them of silencing conservative voices.
"If you become viewed or continue to become viewed as an anti-conservatively biased platform there will be other people that step up to compete and ultimately take millions of people away from Twitter," Steve Scalise, R-La., said.
Rep. Bob Latta, R-Ohio, said, "We're all aware of Big Tech's ever-increasing censorship of conservative voices," accusing the platforms of serving a "progressive agenda."
Fellow Ohio Republican Rep. Bill Johnson accused the tech chiefs of "smugness" and carrying "an air of untouchableness in your responses to many of the questions ... being asked."
"All of these concerns that Chairman (Henry) Waxman stated in 1994 about Big Tobacco apply to my concerns about Big Tech today, about your companies," Johnson said, pledging to hold them accountable.
The hearing, and the prepared testimony from the tech chiefs, touched heavily on reform of Section 230 of the Communications Decency Act. Section 230 has made headlines in recent months as many people -- including former President Donald Trump -- have called for it to be overhauled.
The law essentially shields internet companies from legal liability for the content users post on their sites. For example, if someone posts something libelous about someone else on Facebook, Section 230 means Facebook can't be sued over it the way a news organization could be sued for publishing the same statement.
Some digital rights advocacy groups have argued that Section 230 protects free expression in the digital age.
During the hearing, Zuckerberg called for two changes to Section 230 for large platforms, but noted that for smaller platforms, "I think we need to be careful about any changes that we make that remove their immunity because that could hurt competition."
"For larger platforms, first, platforms should have to issue transparency reports that state the prevalence of content across all different categories of harmful content," Zuckerberg said.
"The second change that I would propose is creating accountability for the large platforms to have effective systems in place to moderate and remove clearly illegal content -- so things like sex trafficking, or child exploitation, or terrorist content," Zuckerberg said. "I think it would be reasonable to condition immunity for the larger platforms on having a generally effective system in place to moderate clearly illegal types of content."
In prepared testimony, Google's Pichai defended Section 230, saying that because of it, "consumers and businesses of all kinds benefit from unprecedented access to information and a vibrant digital economy."
He argued that Google’s "ability to provide access to a wide range of information and viewpoints, while also being able to remove harmful content like misinformation, is made possible because of legal frameworks like Section 230 of the Communications Decency Act.”
"Regulation has an important role to play in ensuring that we protect what is great about the open web, while addressing harm and improving accountability,” Pichai wrote. “We are, however, concerned that many recent proposals to change Section 230 -- including calls to repeal it altogether -- would not serve that objective well.”
Pichai continued: "In fact, they would have unintended consequences -- harming both free expression and the ability of platforms to take responsible action to protect users in the face of constantly evolving challenges."
Twitter's Dorsey did not touch on Section 230 reform in his prepared testimony, but delved into the importance of social media companies building trust with their users.
"Quite simply, a trust deficit has been building over the last several years, and it has created uncertainty -- here in the United States and globally," Dorsey stated in prepared testimony. "That deficit does not just impact the companies sitting at the table today but exists across the information ecosystem and, indeed, across many of our institutions."
Dorsey emphasized the importance of transparency -- including around Twitter's algorithms -- in building trust, and discussed the company's efforts to make its decisions as transparent as possible.
The Twitter CEO also spoke of efforts Twitter is investing in to address misinformation on its platform including Birdwatch and Bluesky. Birdwatch, launched in beta in January, uses a “community-based approach to misinformation,” Dorsey said. Essentially, it allows users to identify misinformation in tweets and write notes that provide context.
Bluesky, another anti-misinformation project funded by Twitter, has the goal of developing open and decentralized standards for social media.
“Bluesky will eventually allow Twitter and other companies to contribute to and access open recommendation algorithms that promote healthy conversation and ultimately provide individuals greater choice,” Dorsey said.
Dorsey has previously tweeted about the Bluesky initiative in a wide-ranging thread defending the company’s decision to ban Trump.