Calls to crack down on the Russian propaganda machine are on the rise in Europe — amid concerns over the impact of global disinformation campaigns.
EU commissioner for values and transparency Věra Jourová said on Thursday (16 June) that EU countries should remain vigilant and impose sanctions on media outlets that pose a threat to their national security, in order to counter the current “information war”.
“It is also in the hands of EU member states to make an assessment on whether a media outlet poses a threat to national security,” she told a press conference, adding that EU capitals do not have to wait for EU-level sanctions on platforms, outlets, or TV channels before taking action.
After the Russian invasion of Ukraine in late February, the EU announced an unprecedented ban on Russian state-controlled media outlets Russia Today and Sputnik — deemed by Jourová as the “Kremlin’s weapons”.
But some EU countries, notably the Baltic states, have gone further in their fight against Russian disinformation since the ban on Russia Today and Sputnik.
Earlier this month, Latvia’s media regulator decided to ban all Russia-based TV channels until Russia ends its invasion of Ukraine.
Previously, Lithuania also imposed sanctions on several Russian broadcasters operated by Russia’s Gazprom-Media.
Neighbouring Estonia has taken similar actions.
“[Russian] president [Vladimir] Putin sets the tone and decides what is the truth, and everyone who deviates from this can go to jail … We [just] want the society to have better tools to differentiate and work with facts,” said Jourová, while presenting the bloc’s tougher approach against disinformation under an updated EU code of practice.
Under the new EU code, more than 30 signatories have pledged to scale up efforts to combat fake news, deepfakes, and political advertising abuses.
The signatories include Meta (which owns Facebook, Instagram, Messenger, and WhatsApp), Twitter, Google and TikTok, as well as advertising associations, civil society organisations and fact-checkers.
Apple and Amazon are missing from the list, although Amazon’s live-streaming platform Twitch is included.
The new code, Jourová said, shows that Europe has learned its lessons. “We are not naive any longer.”
The platforms now have seven months to ensure that their commitments are sufficient to fight fake news, as these will count as “risk mitigation” measures under the EU’s flagship tech policy, the Digital Services Act (DSA).
All media players falling under the scope of the DSA will have to submit their reports by January 2023 for a commission review. Platforms could face fines of up to 6 percent of their turnover if found in breach of the rules.
This means the code will remain a purely self-regulatory tool until the DSA takes effect.
The code will create a new body, officially known as the “permanent task force”, to monitor the implementation of the code in relation to the DSA obligations.
But digital rights defenders and activists say robust monitoring is essential to prevent the code from becoming a paper tiger against disinformation.
“We need monitoring with teeth from the commission, … otherwise, this code could become just a cheap way to avoid the fines they [the platforms] could face under the Digital Services Act,” said Luca Nicotra, a campaigner at international group Avaaz.
The new code, which builds on its 2018 predecessor, includes commitments on how platforms should adjust their algorithms to avoid amplifying or monetising disinformation.