On Wednesday, the House Appropriations Committee held a hearing titled, “United States Efforts to Counter Russian Disinformation and Malign Influence.”
During the hearing, Rep. Lois Frankel (D-FL) questioned Lea Gabrielle, a top State Department official under Trump, about Russia's disinformation campaigns around the world. Pressed for specifics, Gabrielle could only respond with "Ummm," according to ABC News.
“It is clear that the Kremlin has been attempting to damage America’s credibility among our allies and our partners, undermine trans-Atlantic unity, and to sow discord in target societies,” Gabrielle testified. “They covertly plant false stories.”
Frankel wanted her to name concrete examples.
“What are the tales that they are telling, their fake news. What do you see?” Frankel wanted to know. “Give me a couple of examples of fake stories. Because we’ve been talking in generalities, I’d like to know what kind of information they’re spreading. What are they saying, for example?”
“Well, it really depends on the country,” Gabrielle responded.
Frankel pressed her for an example.
“Well, just give me an example,” Frankel said. “Pick a country and give me an example.”
“Um,” Gabrielle said.
“Anybody can help her,” Frankel said.
In a report released last October, the Brookings Institution warned that although Russia's goals and strategies aren't new, the Kremlin is increasingly mastering digital tools to sow division and distrust by blurring the line between fiction and fact.
“Over the next few months we are going to see more disinformation campaigns, including fake websites that work together as a network to spread disinformation, fake personalities and entities on Twitter and Facebook, and manipulation of social media networks’ algorithms, including Google, YouTube, and others,” the report concluded.
“And we’re not really paying enough attention to algorithmic manipulation.”
And as artificial intelligence becomes more sophisticated, so will Russia’s propaganda campaigns.
“Right now, humans control and produce online entities like bots and trolls,” the report notes.
“But soon, disinformation campaigns will become more automated, smarter, and more difficult to detect. AI driven disinformation will be better targeted to specific audiences; AI driven online entities will be able to predict and manipulate human responses; At some point very soon, we won’t be able to tell the difference between automated accounts and human entities.”