Welcome to Memetic Warfare Weekly’s third post! I’m happy to have you here. My name is Ari Ben Am, and I’m the founder of Glowstick Intelligence Enablement. Memetic Warfare Weekly is where I share my opinions on the influence/CTI industry, along with the occasional contrarian take or practical investigation tip.
I also provide consulting, training, integration and research services, so if relevant - feel free to reach out via LinkedIn or contact@glowstickintel.com.
This week I’d planned to focus on some best practices for domain analysis, but the past week was uncharacteristically busy for influence/cyber developments, so bear with me - we’ll go over the practical domain analysis and investigation tips in next week’s blog post instead.
Big Trouble in Little Ontario
Global News Canada has published an article claiming that members of the Ontario legislature are in fact part of Chinese “election interference” networks. The report cites internal Canadian government documents, which I presume we’ll see more of with Trudeau’s planned government commission on Chinese interference.
A redacted document provided by the Canadian government on alleged Chinese election interference. Good luck finding any use for this one. Source: Global News Canada
The article claims that one Vincent Ke, a Progressive Conservative member of Ontario’s legislature, served as a central figure in transferring money to preferred Chinese candidates. These funds, ranging from tens to hundreds of thousands of dollars, were allegedly transferred directly from the Chinese consulate in Toronto to pro-China front organizations and businessmen, who then passed the funds on to the supported candidates.
Moldovan Money Moves
Canada wasn’t the only country having a big interference week - Russian interference in Moldova has begun to reach a head. The US National Security Council has already begun to overtly call out Russian interference in Moldova meant to destabilize the government and keep the country in Russia’s “orbit”.
Moldova has been subjected to full-spectrum Russian interference as of late. Reporting from NPR emphasizes the digital and online efforts to influence the Moldovan public via traditional media, social media and messaging apps - Telegram and TikTok received a special shoutout in the article. Russian efforts have also contributed, to an unknown degree, to protests against the Moldovan government.
I imagine that we’ll see more State Department GEC content on Moldova in the coming weeks. I won’t go in-depth on Moldova, as frankly I don’t know a lot about it, but for those interested, feel free to read more here about the recent alleged “coup” attempt in Transnistria.
Master of (sock)Puppets
The NATO StratCom center has published its now-annual report on social media platform manipulation, meaning the purchase of engagement and online assets from grey-market commercial providers. This report is worth reading in its entirety, so I won’t harp on its details, but there’s one main finding that I want to emphasize, so I’ll quote directly from the executive summary:
“In this report—the fourth version of our social media manipulation experiment—we show that social media companies remain unable to prevent commercial manipulators from undermining platform integrity. Overall, no platform has improved compared to 2021 and, taken together, their ability to prevent manipulation has decreased.”
The report posits that many platforms have pivoted from trying to prevent commercial providers from operating on their platforms to trying to limit those providers’ “reach”. The efficacy of this approach remains to be seen, but personally - I’m skeptical.
It Takes an Intelligence Community
The US intelligence community (a phrase I personally hate and almost always replace with “government”) published its Annual Threat Assessment. John Hultquist of Mandiant does a nice job of breaking down key elements of the report for quicker consumption, although I’d recommend skimming through it yourself if you have the opportunity.
Overall, short of maybe North Korea, nothing in the actual report itself blew me away. It’s mostly things you expect the US government to say: for example, that Russia will continue to interfere in US elections, Iran will continue to carry out cyberattacks against Israel, and the requisite mention of China developing AI capabilities. One notable claim about China is that it is now focusing on down-ballot influence opportunities at the state and local government level, because these officials are apparently more “pliable”.
Giving Negative Finns
You should follow Pekka Kallioniemi on Twitter. Pekka goes in-depth on pro-Russia disinformation actors that - surprise - are actual people! Real people, tweeting under their own names, are often much more influential than poorly coordinated networks of burner assets online. Pekka’s recent thread on Pedro Baños, a pro-Russian, Spanish-language individual, is great and in-depth.
Rock’em SOCOM (ro)Bots
The Intercept recently published an article reviewing internal SOCOM documents on future development and procurement. I’ve added a screenshot below from the file uploaded by The Intercept, which they’ve since made available. The reporting focuses on Military Information Support Operations, or “MISO” - the military application of technology and operations to conduct IO, “digital deception”, disinformation campaigns and more at the “tactical edge”.
This document is quite interesting. The Intercept rightfully points out that most of the US’ efforts internationally emphasize the importance of countering deepfakes and malign influence operations, while the US has itself on occasion carried out IO. The Stanford Internet Observatory and Graphika have also investigated past US information operations, which in many ways aped Russian, Chinese and Iranian operations with minor improvements; my personal favorite was the use of GAN-generated faces photoshopped onto real stock images of people with replaced backgrounds.
The article’s author and his quoted experts refer to the use of “deepfakes” and IO as more akin to nuclear proliferation than to legitimate weapons of war and statecraft, as tools to generate deepfakes/IO may be utilized by undesired, nefarious actors. There is also a concern about legitimacy: using such weapons may delegitimize one’s claims about being a benign actor working for the good of democracy and so on.
I personally am not so convinced. While I don’t think that it would be in the US’ interest to carry out IO at a national scale, the tactical and even strategic use of deception has been a mainstay of statecraft and war for thousands of years, and there’s no functional difference between traditional deception and, say, creating a deepfake of a Russian military officer.
IO can and should be used at the tactical and covert action level, with judicial oversight and clear rules and boundaries, and isn’t fundamentally much different from, say, offensive cyber capabilities, which may also proliferate to other organizations. How a certain tool is used can matter just as much as the utility of the tool itself: responsibly using IO against legitimate targets for the right reasons can and should be considered acceptable when done by certain states. That’s not to say that there shouldn’t be rigorous public and private debate on the topic - and such tools should only be used within proper legal and ethical frameworks.
I’d love to hear from readers on this topic as I’m sure that many feel differently than me on this - feel free to comment or reach out.
Additionally, the specific mention of using cyber capabilities to target IoT devices to better understand local populaces is quite an overt acknowledgment of what may be beyond the pale for legitimate use of IO. Equally important, it shows that IO and cyber are two sides of the same coin, and will only continue to grow and develop in synergistic fashion. For a little more on this topic, see my recent LinkedIn post here.
It’s Pronounced “Jeph”-i
Last but not least is DFRLab’s report on a Russian IO network active on Telegram in a variety of languages. The report is good and DFRLab puts out good content in general, but they did one of my least favorite things: Gephi visualizations of networks. These almost never tell you anything meaningful about network activity, and they detract from meaningful link analysis graphs that can actually simplify and impart knowledge about a network. I implore people in the industry: please stop creating useless Gephi graphs for your reports. There are other, more effective and more time-efficient ways of sharing your findings. I cannot think of one occasion on which I’ve looked at a Gephi graph of a botnet of any sort and left with insight I couldn’t have gotten from a manual link analysis graph or a table/chart.
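To make that concrete, here’s a minimal sketch of the kind of table I mean, assuming a hypothetical edge list of channel-to-channel forwards (the file name and columns are mine for illustration, not DFRLab’s data):

```python
# A minimal sketch of the table-over-hairball idea: summarize amplification
# instead of plotting it. The CSV file and its columns are hypothetical.
import pandas as pd

edges = pd.read_csv("forwards.csv")  # columns: source_channel, amplifier

summary = (
    edges.groupby("source_channel")
         .agg(unique_amplifiers=("amplifier", "nunique"),
              total_forwards=("amplifier", "size"))
         .sort_values("total_forwards", ascending=False)
)
print(summary.head(10))  # the ten most-amplified channels, readable at a glance
```

A ten-row summary like this answers “who is being amplified, and by how many distinct assets?” directly - something a hairball rarely does.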
Having said that - the report is great in that it focuses on Telegram-centric activity. I personally am of the opinion that messaging applications are the future for both cyber and IO activity - some innovative use-cases, published by Dragos, include the use of Telegram and Discord bots for C2 in network penetration operations. The utility of messaging applications for IO is increasingly understood by researchers:
Messaging apps are often “black boxes”, requiring deep manual investigation or reliance on limited tools such as TGStat (often based in Russia/Belarus). This makes it harder for analysts to investigate and report on messaging application IO (see the sketch after this list).
Messaging applications are inherently designed to enable the mass dissemination of content, and it’s much easier for content to spread virally.
Content moderation is also much more difficult, as messaging app providers are loath to actually look in groups, let alone private chats between individuals.
This is especially relevant for creating infrastructure. Creating burner dissemination accounts and groups/channels is very easy (requiring only a phone number), and such assets are comparatively hard to take down. Messaging apps also still lack the defensive onboarding processes present on most social media platforms.
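On the “black box” point above, here’s a minimal sketch of what manual Telegram investigation can look like, using the Telethon library - the channel name and API credentials are placeholders, and this is one possible starting point rather than a full workflow:

```python
# Minimal sketch: pulling recent posts from a public Telegram channel.
# API_ID and API_HASH come from my.telegram.org; all values are placeholders.
from telethon.sync import TelegramClient

API_ID, API_HASH = 12345, "your_api_hash"  # hypothetical credentials

with TelegramClient("investigation", API_ID, API_HASH) as client:
    for msg in client.iter_messages("example_channel", limit=50):
        # View and forward counts are rough proxies for reach and amplification
        print(msg.date, msg.views, msg.forwards, (msg.text or "")[:80])
```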
So messaging applications are the next main focus for investigators, and it’s great to see them get some more attention. The report rightfully emphasizes Russia’s flair for polyglot networks as well, which I think should always be emphasized.
Till next week, thanks for reading. If you have any comments, questions or otherwise - feel free to reach out!