Networks in China and Iran also used AI models to create and post disinformation, but the campaigns did not reach large audiences

In Russia, two operations created and spread content criticizing the US, Ukraine and several Baltic nations. One of the operations used an OpenAI model to debug code and create a bot that posted on Telegram. China’s influence operation generated text in English, Chinese, Japanese and Korean, which operatives then posted on Twitter and Medium.

Iranian actors generated full articles that attacked the US and Israel, which they translated into English and French. An Israeli political firm called Stoic ran a network of fake social media accounts which created a range of content, including posts accusing US student protests against Israel’s war in Gaza of being antisemitic.

  • Ilandar@aussie.zone · 5 months ago

    "it was better than banning AI in the free world and giving dictators advantages in AI tech."

    The US doesn’t need to ban AI. It just needs to stop publicly deploying it, untested and unregulated, on the masses. And some of these big tech companies need to stop releasing open models that can be easily obtained and abused by bad actors. Dictatorships don’t actually like AI internally, because it threatens their control of the narrative within their own country. For example, the CCP has been far more cautious with it than the US, because it is concerned about how it could be employed against the party.

    And this whole arms-race argument sort of ignores the fact that the US continuing to mass-deploy this shit at breakneck speed is already giving dictators the advantages they need to fuck with democracy. No one needs to have a real war with the US if it starts one with itself.