More than a month before Russia invaded its former Soviet neighbour, dozens of Ukrainian government websites were taken down before returning with a warning: ‘prepare for the worst.’
Experts say Russian hackers – state and proxy alike – have been pummelling their neighbour for years, releasing devastating data-wiping malware and even briefly taking down the country's power grid in a pattern of attacks that has caused mayhem far beyond Ukraine.
So, as troops loomed at the border, governments around the world feared a major cyber attack could be launched at any moment.
But, three weeks into the conflict, no such attack has materialised. Rather than battering transport services or taking healthcare systems offline, Russia appears to have launched only limited, unsophisticated attacks against Ukraine.
‘Cyber operations take a long time’
Of course, no-one knows what's being planned in the war rooms of the Kremlin. And in spite of years of international investment and an informal army of anti-Putin hackers, Ukraine's power grid, for example, is arguably still vulnerable to Russian cyber ops.
But in times of war, these kinds of large-scale infrastructural attacks may actually be a poor use of resources, experts told Metro.co.uk.
‘Cyber operations take a long time,’ said cybersecurity researcher Sneha Dawda from the Royal United Services Institute. ‘It’s not simple to create a piece of malware that could disrupt the whole energy infrastructure of Ukraine.’
The kind of sophisticated attack needed to take out a transport network or a power grid requires intimate knowledge of a target system – its entry points, its access credentials – said Esther Naylor, a research analyst at Chatham House's International Security Programme.
This is knowledge Russia simply may not have, in spite of its reported use of its neighbour as a 'test bed' for cyber ops. Those earlier operations may also have given Ukraine the opportunity to learn how its neighbour hacks, and time to adapt its cybersecurity defences in response, Dawda added.
On some measures, Ukraine is less 'cyber mature' than Russia. Back in 2020, it ranked 78th compared to Russia's 5th on the International Telecommunication Union's Global Cybersecurity Index, for example.
But stacking up one countryâ€™s cyber capabilities against another is no exact science, Dawda explained, and the country will have made improvements over the last two years.
The fact Russia's cyber attacks have been 'nowhere near the scale that was expected' might suggest Ukraine has made itself a harder target. But it also might indicate Russia sees 'a physical offensive as more effective.'
'Cheaper to drop bombs'
It is possible Russia already has a cyber weapon in its arsenal that it just hasn't deployed. But it would be expensive and time-consuming to develop new infrastructure-busting malware.
And right now, Russia is funding a major invasion whilst facing crippling economic sanctions. Pricey cyber attacks with long lead times may not be a sustainable way of using limited resources.
Serious cyber operations are ‘so hard and expensive,’ Dawda said, that it may simply be ‘cheaper to drop bombs.’
'When you're bombing hospitals, when you're bombing civilian infrastructure… It doesn't make sense to try and take down the power grid because you can do that with shelling,' added Naylor.
That's not to say that cyber operations don't play a significant role in war. But they may not always come as blockbuster attacks.
Instead, it may be more effective to use cyber as an accessory to traditional warfare: a way to enhance disruption and promote an atmosphere of chaos.
So, how do governments do this? And how might these kinds of operations affect you?
Creating chaos on the battlefield
On the battlefield, governments may use cyber operations to directly disrupt their opponent's military equipment and comms.
Although these capabilities are top secret, the UK did reveal details of one successful cyber campaign.
Between 2016 and 2017, British intelligence operators used cyber techniques to jam phones, disrupt drones and interfere with servers used by terrorist group ISIS.
Jamming phones, for example, is thought to have hampered communication between ISIS commanders and those fighting on the ground, leaving them misdirected and even cut off.
‘We wanted to deceive them and to misdirect them, to make them less effective, less cohesive and sap their morale,’ Strategic Command commander General Sir Patrick Sanders told Sky News last year.
It was crucial, he added, to make sure these activities took place alongside physical operations.
‘You can’t just do that in cyberspace. You have to coordinate and integrate [it] with activities that are going on on the ground, whether it’s from our own forces, special forces and others.’
Hitting ISIS servers with malware served another crucial purpose: disrupting the flow of propaganda used to radicalise and recruit new members. Together with the US, British operators locked ISIS members out of their accounts, removed online posts and deleted material stored on their servers.
In the wake of this operation, the UK set up the National Cyber Force – a collaboration between the military, GCHQ, MI6 and defence lab Porton Down.
Outside of the battlefield, governments are highly concerned about a cyber threat that hits far closer to home: the rampant spread of disinformation.
A hallmark of numerous governments' cyber activity – during peacetime and in war – is the creation, dissemination and amplification of fake news. Whether shared by state media outlets or spread by pro-government groups, false information is often used to sway public opinion and influence political debate.
Last summer, researchers with the Centre for Information Resilience identified a network of social media accounts using fake news and other tactics to push pro-China narratives and encourage discord in the West.
CIR director of investigations Benjamin Strick said the group appeared to be trying to ‘delegitimise the West by amplifying pro-Chinese narratives.’
He added: ‘The network targets significant subjects such as U.S. gun laws, Covid-19, human rights abuses in Xinjiang, overseas conflicts and racial discrimination in an apparent bid to inflame tensions, deny remarks critical of China, and target Western governments.’
Pushing false narratives on social media is not the sole preserve of authoritarian regimes. Former U.S. president Donald Trump was famous for making false and misleading statements on his Twitter account. So much so that he was permanently banned from the platform last January in the wake of the Capitol riot.
Russia is highly experienced at using fake news to wield influence online. During the pandemic, pro-Kremlin networks and state media outlets used fake news to criticise mRNA jabs and accuse the EU of political bias in its assessment of the country's Sputnik V vaccine.
The country has employed this familiar playbook against Ukraine for years. Now, alongside its physical invasion, it's putting more and more of its efforts into sharing lies about the country.
This 'downpour' of disinformation is likely being used to 'conceal the ground realities' and 'cause chaos,' according to Chatham House researcher Isabella Wilkinson.
‘Very generally, one could say this aims to cause a smoke and mirrors effect: to act as a smokescreen,’ she explained. ‘This allows a tighter control of the information environment. And as we know, information is so powerful.’
Much of the material being shared – posts calling a pregnant blogger a 'crisis actor', or claiming actors on a movie set are posing as injured civilians – implies Ukraine is falsely presenting itself as a victim of war.
Other disinformation, like baseless claims on state TV that Ukraine is planning to nuke Russia, is being used to justify Putin's false pretext for invasion: that Ukraine – and sometimes its allies – posed an imminent danger.
Concerningly, officials from China have been spreading some of these fake stories even further.
Fertile ground for 'toxic' disinformation
Social media, whether it's Facebook, Twitter or the newer TikTok, can be fertile ground for a 'toxic combination' of disinformation and misinformation, where deliberate attempts to deceive are unwittingly amplified by users unaware they're spreading fake news, Wilkinson said.
In Russia, which has implemented strict rules on journalism and restricted access to global news outlets and social media, the number of reliable sources available to citizens to challenge state narratives is shrinking.
While you're unlikely to be directly affected by battlefield cyber ops, there's a good chance you'll come across fake news in some form if you're on social media. And these campaigns aren't limited to periods of war.
‘It’s really important to recognise that the recent activities and information operations targeted against Ukraine aren’t necessarily unique,’ Wilkinson said.
‘They’re really just part of Russia’s longer term playbook of information operations there.’
Although these operations may often start online, they're usually designed to have an impact on the real world. Governments and proxies can use fake news to try and turn the tide of public opinion in their favour, for example by spreading lies about political opponents or inflaming tensions abroad.
If enough people believe the lies – or at least stop trusting their own authorities – disinformation actors may be able to influence elections, undermine support for foreign governments and, in the case of Russia and Ukraine, build support for a war.
Protecting yourself from cyber operations
Cyber ops – whether they take the form of disinformation, denial-of-service attacks, malware or surveillance – aren't restricted to governments or to wartime. So, it's important to be vigilant whenever you use the internet.
To protect yourself from disinformation:
- Be wary of stories that seem surprising or shocking. Ask yourself if there might be an ulterior motive behind them.
- Try and verify the author of a social media post or a news article. Where has it been published? Are they a reliable source?
- Check whether multiple reliable outlets are reporting the same story.
- If you aren't familiar with a particular outlet, see if it's known for sharing disinformation by Googling it or checking if it's on a list of known fake news websites, like this one from PolitiFact.
- Unless you're confident a story is true, don't share it.
To protect yourself from malware:
- Use different passwords for different accounts, and don't share them with anyone. You can use a password manager to keep track of them if they're hard to remember.
- Change the default password issued with your router. If it's not clear how to do this, contact your internet service provider.
- Use two-factor authentication where it's available.
- Be wary of clicking on links in emails, especially if the sender is unfamiliar or the URL looks suspicious.
- Keep your apps up to date.
- If you're concerned about privacy, check what permissions an app will use before you install it.
The UK's National Cyber Security Centre also has a list of simple rules to protect yourself against malicious attacks.
But the responsibility isn't just on us. Government agencies like the National Cyber Force have been set up to protect the UK from harmful cyber attacks. The agency, which is set to expand this year, is also thought to be engaged in anti-disinformation activities.
‘We’re fortunate in the UK, that we do have a government with different departments whose job it is to focus on these things,’ said Naylor. ‘We have emergency response teams even in each different government department.’
The UK's latest cyber strategy recommends shifting responsibility for security from users to providers, with the government promising to remove 'as much of the burden' from individuals as possible.
Companies providing essential services are already required to abide by certain cybersecurity rules. The new strategy promises regulations and incentives will be updated to encourage private companies to engage and invest in cyber resilience.
Big tech firms have taken steps to combat disinformation on their platforms, but they're regularly accused of being slow to remove harmful content.
Some experts say the visual nature of disinformation shared online has made it particularly hard to identify and remove.
Policing visual content can be challenging – especially for newer platforms like TikTok, according to Stanford Internet Observatory technical research manager Renée DiResta. She told CNN: 'Facebook and Twitter have had some rather extensive experience in content moderation during crises; I think TikTok is finding itself having to get up to speed very quickly.'
Social media firms have had some successes so far in removing fake accounts. Facebook announced last month it had taken down a network of fake news outlets and personas pushing anti-Ukrainian messages on its platform.
But with new false posts appearing online all the time, disinformation will surely continue to shape – and distort – the war in Ukraine.