Feds Begin Using AI Tools to ‘Blacklist’ Citizens Critical of U.S. Govt

Any opinions expressed by authors in this article do not necessarily represent the views of Disswire.com.

The United States government has been pouring millions of tax dollars into developing AI-powered tools to censor and blacklist dissenting voices.

This month, the House Judiciary Committee’s Select Subcommittee on the Weaponization of the Federal Government revealed that the U.S. government is funding AI censorship technology via the National Science Foundation (NSF) to censor political debate on social media platforms.

The report states:

“This interim report details the National Science Foundation’s (NSF) funding of AI powered censorship and propaganda tools, and its repeated efforts to hide its actions and avoid political and media scrutiny.”

“In the name of combatting alleged misinformation regarding COVID-19 and the 2020 election, NSF has been issuing multi-million-dollar grants to university and non-profit research teams.”

“The purpose of these taxpayer-funded projects is to develop artificial intelligence (AI)-powered censorship and propaganda tools that can be used by governments and Big Tech to shape public opinion by restricting certain viewpoints or promoting others,” the report adds.

“Additionally, the NSF developed a media strategy aimed at blacklisting certain American media outlets because they were scrutinizing NSF’s funding of censorship and propaganda tools,” the report continued.

However, an NSF spokesman rejected the allegations in the report.

“NSF does not engage in censorship and has no role in content policies or regulations. Per statute and guidance from Congress, we have made investments in research to help understand communications technologies that allow for things like deep fakes and how people interact with them,” the spokesman said, according to The Epoch Times.

“We know our adversaries are already using these technologies against us in multiple ways. We know that scammers are using these techniques on unsuspecting victims. It is in this nation’s national and economic security interest to understand how these tools are being used and how people are responding so we can provide options for ways we can improve safety for all.”

The spokesman also denied that NSF ever sought to conceal its investments in the so-called Track F program, and said the foundation does not follow the media policy outlined in the documents discovered by the committee.

The House committee report cites the speaker’s notes from the University of Michigan’s first pitch to the NSF for its WiseDex tool.

The statement reads:

“Our misinformation service helps policymakers at platforms who want to… push responsibility for difficult judgments to someone outside the company… by externalizing the difficult responsibility of censorship.”

According to the report, universities, non-profits, and Big Tech are involved in the censorship campaign.

The NSF is distributing grants, funded by the American taxpayer, to the following universities:

  • The University of Michigan
  • The University of Washington
  • The University of Wisconsin
  • The Massachusetts Institute of Technology

The grants will be used to develop AI-powered censorship tools such as WiseDex, Course Correct, SearchLit, and Co-Insights, which will be used by social media platforms such as Facebook, Reddit, YouTube, and X.

The report from the House Weaponization Committee also slammed the “fact-checking” industry’s “pseudo-scientific” endeavor to censor alternative viewpoints and crush political dissent.

WiseDex, the AI-powered censorship tool costing taxpayers a whopping $750,000, can “assess the veracity of content on social media and assist large social media platforms with what content should be removed or otherwise censored.”

Meanwhile, Meedan’s Co-Insights, costing taxpayers $5.75 million, is modeled on a digital snitching program called “community tiplines” to conduct “disinformation interventions.”

The University of Wisconsin-Madison’s Course Correct tool aims to “empower efforts by journalists, developers, and citizens to fact-check” what the report calls “delegitimizing information” about “election integrity and vaccine efficacy” posted on social media.

The report notes that MIT’s SearchLit is designed to develop “effective interventions” for those vulnerable to misinformation campaigns and to teach people how to discern “fact from fiction” online.

“In particular, the MIT team believed that conservatives, minorities, and veterans were uniquely incapable of assessing the veracity of content online,” the report adds.

In simpler terms, this is a political operation aimed at suppressing alternative viewpoints.

The report noted:

“The scope of NSF’s mission has shifted over the years to encompass social and behavioral sciences. For example, NSF used to fund political science projects from the 1960s until 2012, when Congress banned such research from receiving NSF funding. However, in recent years, and after the academic outcry that Americans elected President Trump only because of ‘Russian disinformation,’ NSF has spent millions of taxpayer dollars funding projects to combat alleged mis- and disinformation.”

The NSF responded to criticism from House Judiciary Chairman Jim Jordan (R-OH) last year by claiming the tools are designed to protect Americans from “foreign interference” in elections and in public health emergencies like COVID-19.

“NSF is encouraged to consider additional research efforts that will help counter influence from foreign adversaries on the Internet and social media platforms designed to influence U.S. perspectives, sow discord during times of pandemic and other emergencies, and undermine confidence in U.S. elections and institutions,” the NSF said, citing guidance from Congress.

“To the extent practicable, NSF should foster collaboration among scientists from disparate scientific fields and engage other Federal agencies and NAS to help identify areas of research that will provide insight that can mitigate adversarial online influence, including by helping the public become more resilient to undue influence.”

However, the NSF states in the documents attached to its response, “The overarching goal of this work is to equip the general public with the knowledge and skills needed to find trustworthy information online.”