Memetic Warfare 2: Electric Boogaloo
Welcome to the second post of Memetic Warfare! Today’s main topic is the question of influence and efficacy, which we’ll explore via a few different cases. To start off, let’s discuss an unrelated but equally important topic: the overlap between cyber threat intelligence and influence operations.
You Shall Not Place a Stumbling Block Before the Blind
Everyone should go take a look at, if not read in depth, the recently published Cardiff University report on Ghostwriter. For those who haven't been following Belarusian malign influence activity recently (and who hasn't?), Ghostwriter is a fascinating APT/APM (Advanced Persistent Threat/Advanced Persistent Manipulator, to borrow Microsoft's terminology) based in Belarus. Ghostwriter has carried out numerous cyber-enabled influence/information operations across Europe in recent years, often using account takeover or domain compromise techniques to gain control of authentic assets and deploy them in influence operations.
The report is great for a number of reasons. It not only examines Ghostwriter activity with a critical eye, but also covers the countermeasures employed and provides a historical overview of the APT. Most importantly, the report raises a key topic: "linkage blindness".
Linkage blindness, as referred to in the report, is the phenomenon of specialized investigators and analysts each examining only one aspect of a particular APT/APM. Cyber threat intelligence analysts may investigate the IoCs of a given campaign and determine that a hacking campaign is taking place, while IO analysts may identify coordinated influence activity online and likewise determine that an operation is underway - yet neither is able to link the two effectively, missing the forest for the trees, so to speak. The same issue applies to different organizations examining the same threat actor and gauging it in different ways, thus missing key linkages.
The solution proposed by the report - a conclusion I wholeheartedly agree with - is to understand the worlds of influence and cyber operations as intrinsically intertwined. Doing so - for example, by creating blended teams of cyber/IO analysts, and by enforcing or at least encouraging the public sharing of IoCs and other threat indicators relating to influence operations - may help bridge the gap and empower analysts to fuse cyber threat intelligence with open-source investigation to investigate and attribute networks. These two fields are not separate, disparate disciplines with entirely different skillsets; getting better at one inherently improves one's capabilities in the other. The future of information operations investigation is technical: a combination of CTI/OSINT tools and methodologies empowering analysts to identify potential account/asset takeover and domain compromise, perform technical domain analysis, and more.
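To make "technical domain analysis" slightly more concrete, here's a minimal sketch of one such pivot: querying certificate transparency logs via crt.sh's public JSON endpoint to surface hostnames related to a suspicious domain. The domain below is a placeholder - substitute one from your own investigation.

```python
import requests

def ct_pivot(domain: str) -> set[str]:
    """Query crt.sh certificate transparency logs for a domain and return
    the set of hostnames appearing on its certificates - a quick way to
    surface related infrastructure during an IO investigation."""
    resp = requests.get(
        "https://crt.sh/",
        params={"q": f"%.{domain}", "output": "json"},
        timeout=30,
    )
    resp.raise_for_status()
    hostnames = set()
    for cert in resp.json():
        # name_value may hold several newline-separated SAN entries
        for name in cert.get("name_value", "").splitlines():
            hostnames.add(name.strip().lower())
    return hostnames

if __name__ == "__main__":
    # Placeholder domain for illustration only
    for host in sorted(ct_pivot("example.com")):
        print(host)
```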
I wrote a brief post about the overlap of CTI/OSINT tools and methodologies on LinkedIn the other day for those interested in reading more about specific tools.
After that brief aside, let’s move on to the topic of the week: efficacy.
Gauging Efficacy
One of the biggest issues that analysts and researchers of IO face is gauging efficacy. We tend to exaggerate the impact of low-energy botnets while downplaying the media and messaging apparatuses that are often key to truly shaping viewers' opinions.
This week's events are great examples of how we perceive influence originating from state actors: under-the-radar, person-to-person efforts on one hand, and overblown reactions to really unconvincing AI videos on the other.
The general point I'll try to convey is that overt, traditional vectors of influence - state media outlets, "educational" indoctrination centers and the like - are much more effective at actually influencing their target audiences than covert operations. According to recent statements by the State Department's GEC (Global Engagement Center), China leads globally in investment in overt messaging and state media, including funding other overt efforts - and is doing so effectively. Low-energy covert IO built on unconvincing AI - not so much.
While we can expect new technical countermeasures for detecting AI-generated content as it continues to improve, we forget that feasible methods for identifying it are already available to the general public!
Let’s move on to our first topic: a must-read article from Newlines Mag.
Argo Meme Yourself
Newlines Mag (a great magazine for anything Middle East-related) recently published an inside look at Iran's influence apparatus, focusing on international recruitment efforts for pro-Iran religious influence actors.
I won't go into too much of the fascinating detail provided by the author, but will focus on one main point: authentic human actors play a key role in influence and disinformation efforts, be they overt or covert, and are often more effective than covert IO. Before we discuss that, let's review Iranian influence globally and see how Iran empowers human actors to be effective.
Iranian influence outside the areas most often associated with Iranian covert IO (the US, Europe, Israel and the Middle East) is multifaceted and overt. This is largely a matter of practicality: Iran has open and active relations with a number of states in Latin America, Africa, Central Asia and beyond, and thus has far less need for covert IO there - proxies and covert operations make the most sense in "denied" space, to borrow military terminology.
Iranian state media outlets such as HispanTV dovetail with Russian state media in Spanish, as well as with far-left Latin American state outlets such as TeleSUR, Kawsachun and Granma (Venezuela, Bolivia and Cuba, respectively). These outlets frequently lift content and narratives from one another, sometimes even using the same correspondents and entering into official syndication and cooperation agreements. Iranian and Russian media cooperation in Latin America is extensive and effective, with the viewership, hit and engagement stats for Russian state media in Spanish speaking for themselves.
Iranian state media content in Spanish is also often tailored to local cultural and religious norms in Latin America. Waxing poetic about the Virgin Mary and Shia reverence for her, for example, is often couched in social-justice discourse about Iranians spreading the good word internationally through good deeds. The lack of competing narratives, the political reality, and a general (in many cases well-earned) wariness of the US and the West in Latin America all contribute in their own way to an environment ripe for exploitation, including the in-person recruitment of vulnerable individuals.
The article, written by a Latin American convert to Islam under a pseudonym, focuses on Iranian efforts to recruit locals - be they in Latin America, Africa, Central Asia or beyond. These individuals are enticed to study in Iran for free for up to 4 years, become accredited as sheikhs, and many are even offered paid positions in local Shia Islamic centers abroad.
This system is of course riddled with its own idiosyncrasies, doublethink and more, but it serves as a reasonable recruitment pipeline for human influence assets. The pipeline is enabled to a large degree by a friendly, or at least neutral, media environment shaped not only by Iran but by other actors as well - which is what allows these meaningful, human-to-human "brainwashing" efforts to take place. Dangling further remuneration for continued influence-pushing via local Shia mosques and religious centers is a key component. Iran-affiliated Shia centers, often staffed by graduates of the aforementioned program, are active in Latin America, Europe, the US and even Asia, with Thailand being an interesting center of activity.
This sort of impact is hard to measure (surprise!). While the author quickly picked up on the ludicrous facets of the Iranian regime and its indoctrination program during his stay, it's hard not to be concerned by how easily other students were swayed and gave in to the indoctrination. One can only imagine the potential impact that they (and others who have completed the program) may have domestically upon returning to their home countries.
The AI-des of March
On March 2nd, the Washington Post broke a story on the use of AI-generated video content - created with commercially available tools developed by a company named Synthesia - in the service of malign influence campaigns. This operation, emanating from Venezuela, was one of several reported in recent weeks, including a similar Chinese operation covered by the New York Times. While certainly an interesting approach, the wider response from industry and the public has been overblown.
Before diving into why deepfakes frankly aren't that interesting (yet), I'd like to mention a few things:
IO/disinformation campaigns in Latin America are underreported, with the arguable exception of Brazil. Some firms, such as Nisos, have begun to publish reports on IO in Latin America, although more is still needed.
Latin American threat actors are still very much under-researched, with well-known ones such as Cuba and Venezuela being more active than commonly thought.
The report that the Washington Post referred to, written by Cazadores de Fake News, is available here, and you should read it. It's great, and shows how small but capable teams with the requisite investigative techniques, language capabilities and mindset can often outperform larger, more "qualified" teams.
So, why are deepfakes - and now AI-generated videos - overblown in IO, at least currently? There are a few reasons.
Deepfakes are still incredibly unconvincing. Most people would immediately recognize a deepfake video, and in fact many have in the past, with the infamous Zelensky deepfake video being an excellent example of the low level of quality currently available. I struggle to see anyone being fooled by these without significant jumps in quality.
Low exposure. While a theoretical well-made deepfake may someday go viral and be hard to track, most deepfakes will still be artificially promoted by networks of online assets. This in and of itself is a fundamental paradox for IO: promoting an inauthentic narrative, image or video is best and most efficiently done by networks, which can be analyzed and reverse-engineered.
Countermeasures. While many in the industry and media tend to overstate the impact of "AI"-generated disinformation threats, in practice the main capabilities discussed have been underwhelming at best, despite becoming almost ubiquitous in IO, as Meta reported in 2022. The first AI development of import was the use of GAN (generative adversarial network) images, as shown below.
Take a look at the above image. Does it appear convincing? At first glance, yes. A deeper look reveals a number of inconsistencies that expose the image as GAN-generated:
Inconsistent reflection in sunglasses
Different lens/frame shape
Different decoration on both sides of the glasses over the ears
Different earrings
Odd shading on the forehead
There are more, but the following are also typical of GAN images in general:
Overly-centered face position
Asymmetries and inconsistencies
Inexplicable backgrounds
More
Throwing this image into Seint_PL's Am I Real? tool immediately shows the overlap that GAN images share: the mouth and eyes are always located in nearly identical positions.
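That alignment quirk can even be checked programmatically. Below is a minimal sketch, assuming the face_recognition library: it measures how far the eye and mouth centroids sit from fixed reference positions. The reference coordinates and tolerance are illustrative placeholders - you'd calibrate them against a corpus of known GAN images - and the filename is hypothetical.

```python
import face_recognition
import numpy as np

# Illustrative reference positions (as fractions of image width/height)
# where GAN-generated faces tend to place the eyes and mouth. These are
# placeholders - calibrate against known GAN samples before relying on them.
REFERENCE = {"left_eye": (0.38, 0.48), "right_eye": (0.62, 0.48), "mouth": (0.50, 0.70)}
TOLERANCE = 0.02  # mean deviation below this is suspiciously GAN-like

def landmark_deviation(path: str) -> float | None:
    """Return the mean deviation of eye/mouth centroids from the reference
    positions, normalized to image size; None if no face is found."""
    image = face_recognition.load_image_file(path)
    faces = face_recognition.face_landmarks(image)
    if not faces:
        return None
    h, w = image.shape[:2]
    landmarks = faces[0]
    points = {
        "left_eye": np.mean(landmarks["left_eye"], axis=0),
        "right_eye": np.mean(landmarks["right_eye"], axis=0),
        "mouth": np.mean(landmarks["top_lip"] + landmarks["bottom_lip"], axis=0),
    }
    deviations = [np.linalg.norm(points[k] / (w, h) - REFERENCE[k]) for k in REFERENCE]
    return float(np.mean(deviations))

# Consistently low deviation across an account cluster's photos is a GAN tell
dev = landmark_deviation("suspect_profile_photo.jpg")  # placeholder filename
if dev is not None:
    print(f"deviation={dev:.3f}", "-> GAN-like alignment" if dev < TOLERANCE else "")
```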
Anecdotally, one can pretty quickly develop a feel for what’s GAN and what isn’t as one investigates network upon network.
So, GAN images can be identified manually fairly easily. Most platforms at this stage also have numerous mechanisms for identifying GAN images automatically, although these are far from perfect.
There are other methods of identifying GAN images too. For example, GAN images are generated with no metadata, which is suspicious to platforms: photos uploaded to personal profiles are usually taken by actual cameras, which embed genuine metadata in the images they produce.
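A quick way to check, sketched below with Pillow: dump an image's EXIF tags. Absence of metadata is only a weak signal - platforms and messengers routinely strip EXIF on upload - but combined with other tells it's a useful data point. The filename is a placeholder.

```python
from PIL import Image
from PIL.ExifTags import TAGS

def exif_summary(path: str) -> dict:
    """Return a human-readable dict of EXIF tags. Typically empty for
    GAN/AI-generated images, which carry no camera metadata."""
    exif = Image.open(path).getexif()
    return {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}

tags = exif_summary("suspect_profile_photo.jpg")  # placeholder filename
if not tags:
    # Weak signal only: re-encoding by platforms also strips metadata
    print("No EXIF metadata - consistent with (not proof of) a generated image")
else:
    print(tags)
```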
GAN images, while commonly used and effective in certain cases, have already had somewhat effective countermeasures developed against them, and they themselves are imperfect. How could we effectively counter the threat vector of AI-generated videos?
Thesis, Anti-Thesis, Synthesia
Synthesia provides a text-to-AI video generation platform for commercial purposes. Just looking at the homepage shows the myriad benign uses of Synthesia - cybersecurity training videos, educational purposes, real estate marketing and tons of others (a personal favorite is the Santa Claus video). Just watch a few of the videos and you'll see that they're not bad. They certainly suffice for basic commercial purposes now, but they have yet to successfully traverse the "uncanny valley" in both visuals and audio.
Synthesia provides clients with access to a variety of “AI Avatars” based upon real actors. Note that they also support dozens of languages per avatar, which may be an especially appealing selling point for IO in the future.
Cazadores de Fake News utilized Pimeyes, a popular facial recognition search engine, to search the faces of the inauthentic assets in videos generated by Synthesia. They identified dozens of other videos in which these inauthentic individuals were also used, and were thus able to trace them back to Synthesia and its tools comparatively easily.
Hany Farid, an expert referred to in the article, also proposed watermarking video/image content created by generative AI firms. This can and should be done, but may introduce branding problems, as client firms almost certainly wouldn't want visible watermarks on their content. Additional methods, such as less obtrusive or even invisible watermarks, may be more feasible.
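For illustration, here's a toy sketch of the invisible-watermark idea using least-significant-bit embedding. Real generative-AI watermarking schemes are far more robust (designed to survive re-encoding, cropping and compression), so treat this purely as a demonstration of the concept; filenames and payload are placeholders.

```python
import numpy as np
from PIL import Image

def embed_watermark(path: str, payload: str, out_path: str) -> None:
    """Toy invisible watermark: write the payload bits into the least
    significant bit of the red channel. Illustrative only - a real scheme
    must survive re-encoding, cropping and compression."""
    pixels = np.array(Image.open(path).convert("RGB"))
    bits = np.unpackbits(np.frombuffer(payload.encode(), dtype=np.uint8))
    flat = pixels[..., 0].flatten()
    assert bits.size <= flat.size, "payload too large for image"
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    pixels[..., 0] = flat.reshape(pixels[..., 0].shape)
    Image.fromarray(pixels).save(out_path, format="PNG")  # lossless, keeps LSBs

def extract_watermark(path: str, length: int) -> str:
    """Recover a payload of `length` bytes embedded by embed_watermark."""
    pixels = np.array(Image.open(path).convert("RGB"))
    bits = pixels[..., 0].flatten()[: length * 8] & 1
    return np.packbits(bits).tobytes().decode(errors="replace")

# Placeholder filenames and payload
embed_watermark("frame.png", "synthetic:vendor-id", "frame_marked.png")
print(extract_watermark("frame_marked.png", len("synthetic:vendor-id")))
```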
There are other steps, however, that could be implemented.
Basic regulatory screening for the use of certain generative AI tools. Implementing basic KYC (know your customer) procedures would make it somewhat more difficult, although by no means impossible, for threat actors to exploit these tools. Additionally, limiting access to these tools based on customers' country of origin may, in extreme cases, also be a reasonable step.
Content moderation: flagging videos that use "indicative" keywords for further investigation and approval could catch some abuse of these tools before video dissemination.
Perceptual hashing: just as hashes are used to check for similar images across datasets, mandating the hashing of the faces of all AI-generated avatars and sharing them in a public database, for use by public and private organizations alike, could enable easy identification of AI-generated content (see the sketch below).
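A minimal sketch of what that last item could look like, using the common imagehash library. The registry contents and filenames are placeholders, and detecting/cropping the face is assumed to happen upstream.

```python
import imagehash
from PIL import Image

# Hypothetical shared registry of perceptual hashes of AI-avatar faces,
# e.g. published by generative-AI vendors or a platform consortium
KNOWN_AVATAR_HASHES = {
    imagehash.hex_to_hash("d1d1b2b2e4e4c8c8"),  # placeholder entry
}

def matches_known_avatar(path: str, max_distance: int = 8) -> bool:
    """Perceptually hash a face crop and compare it against the registry.
    Small Hamming distances indicate a likely match even after resizing
    or light re-encoding."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= max_distance for known in KNOWN_AVATAR_HASHES)

print(matches_known_avatar("suspect_video_frame.jpg"))  # placeholder filename
```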
It’s easy to get carried away by AI, and especially generative AI. I’m certainly no luddite and do believe that generative AI will be one of the next big developments in tech, but there is also no doubt in my mind that these low-quality tools are not convincing anyone at scale.
We still lack the ability to effectively utilize AI to analyze - or flawlessly generate - content, be it authentic or synthetic. A recent Russian botnet targeting Finnish accounts with anti-NATO narratives committed quite the linguistic faux pas, confusing the Finnish words for "save" and "download" - quite a shame that NATO won't ever be able to download Finland.
More importantly, common-sense ways to investigate and prevent the misuse of generative AI are already available today. Will they be sufficient in 5-10 years? Who knows. We may well - and probably will - see progress by leaps and bounds, along with developments that are impossible to predict today.
What we will probably see in the short term are generative AI tools for video that enable more efficient and effective creation and editing of video content, rather than the full automation currently offered by Synthesia. The sum of various AI tools, however, may be a more potent potable than any of them individually. For example, one could integrate ChatGPT with DALL-E and Synthesia to create a pipeline of automated multimedia content, disseminated by burner accounts that are themselves integrated with ChatGPT, enabling them to comment and hold conversations with real people online. This, however, could also be mitigated by extant tools a la DetectGPT.
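To make the shape of such a pipeline concrete, here's a bare-bones orchestration sketch. Every function is a hypothetical stub - no real LLM, image, video or platform API is invoked - since the point is the architecture defenders need to anticipate, not any vendor integration.

```python
import random

# All functions below are hypothetical stubs standing in for real vendor
# APIs (LLM, image generation, video generation, platform posting).

def generate_script(narrative: str) -> str:
    return f"60-second script pushing: {narrative}"  # stub for an LLM call

def generate_imagery(script: str) -> bytes:
    return b"..."  # stub for an image-generation call

def render_avatar_video(script: str, image: bytes) -> bytes:
    return b"..."  # stub for a text-to-video call

def post_and_engage(account_id: str, video: bytes, script: str) -> None:
    # Stub: post the video, then let an LLM answer replies in-character
    print(f"{account_id}: posted video, engaging with commenters")

def run_campaign(narrative: str, burner_accounts: list[str]) -> None:
    """End-to-end sketch: script -> imagery -> video -> coordinated posting."""
    script = generate_script(narrative)
    image = generate_imagery(script)
    video = render_avatar_video(script, image)
    for account in random.sample(burner_accounts, k=min(3, len(burner_accounts))):
        post_and_engage(account, video, script)

run_campaign("placeholder narrative", ["burner_01", "burner_02", "burner_03"])
```

Note that every stage of this sketch is also a detection surface: the script can be fingerprinted by tools like DetectGPT, the avatar by the hashing and landmark checks above, and the coordinated posting by network analysis.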
The above combination could potentially be built today, to an unknown degree of efficiency. What we can say now, however, is that the actual impact of influence is amorphous and impossible to truly quantify. Having said that, I think the Iranians are getting a better return on their investment than the Venezuelans, at least for the coming few years.
Thanks for reading this week’s issue of Memetic Warfare Weekly! Next week, we’ll go over a case in which I hope to present some practical investigation techniques.