Why zero trust security is central to operational resilience – and regulatory compliance
Cybersecurity Awareness Month might have come to an end for another year, but as any cybersecurity expert will point out, the need for awareness never stops.
When, in 2004, the Bush administration in the US decided to give cybersecurity an officially designated month of awareness, the goal was to help individuals stay safe online in the face of increasingly common threats to their technology and their confidential data.
For today’s security professionals, those look like almost innocent times. The challenges then were real and significant, but today’s are almost overwhelming.
Modern and complex
Businesses of all sizes now operate with extremely complex and fast-evolving IT estates that span on-premises and cloud deployments, home and hybrid working, legacy systems and the latest applications.
Our own research from November 2021 indicated how eye-watering all this can be. At more than 80% of financial institutions, IT systems had changed more in the previous 12 months than over the company’s entire lifespan before that. Systems were presenting operational issues at a rate of at least one a day – more than 365 a year.
Other organisations identified similar challenges. VMware reported that the first half of 2020 saw a 238% increase in cyberattacks on financial institutions. With attacks increasing all the time, that number can only be higher today. As to the cost, IBM and the Ponemon Institute’s annual research found that the average cost of a data breach in the financial sector was $5.72 million.
Ticking time bombs
Inevitably, the pandemic played a major role in these numbers. But who is to say when the next disruptive event will occur, or how trading conditions will evolve? After all, it was only three months after our research that Russia – itself one of the world’s greatest exporters of cyber-disruption – invaded Ukraine.
The fact is that although IT complexity helps deliver modern, customer-focused financial services and creates efficient, productive workplaces, it also exposes a wide array of enticing vulnerabilities to threat actors of varying sophistication.
The following are just some of the most common vulnerabilities today. Imagine these multiplied across an entire estate of systems:
- Misconfigurations: Manual configuration of any software or system is always a risk, and many application security tools still require it, making misconfiguration a major threat to both cloud and application security.
- Unsecured APIs: A particular vulnerability in the era of open banking, unsecured application programming interfaces (APIs) can become an easy target for attackers to breach – and the process of securing APIs is another one that is prone to human error (see the sketch after this list).
- Unpatched software: This is an old favourite, but the sheer number of updates issued and endpoints to protect makes it all too easy to fall behind on patching, or even miss a new release entirely.
- User credentials: How many employees really create unique, strong passwords for each of their accounts? Reused and recycled IDs and passwords are relatively easy to crack by brute force, and once inside, the criminal appears to be just another legitimate user. This is particularly problematic when employees are routinely given more access and permissions than they need.
- Runtime threats: Much of a cloud network’s underlying infrastructure is secured by the cloud service provider, but not much else is. Users can find themselves running workloads in an inadequately protected public cloud, exposing operating systems and apps.
- Zero-day vulnerabilities: These are the security flaws that the threat actor knows about but the software vendor doesn’t, and consequently has had zero days to develop a patch.
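To make the API point concrete, here is a minimal sketch, using Python and Flask purely for illustration, of the difference between an endpoint that trusts every caller and one that verifies a token on every request. The route paths, the verify_token helper and the demo token are all hypothetical placeholders, not a reference to any real banking API.

```python
# Minimal illustration (Python/Flask) of an unsecured vs. a secured API endpoint.
# verify_token() is a hypothetical stand-in for a real identity-provider check.
from flask import Flask, request, jsonify, abort

app = Flask(__name__)

def verify_token(token: str) -> bool:
    """Hypothetical server-side token validation; placeholder logic only."""
    return token == "expected-demo-token"

# Unsecured: anyone who discovers the URL can read account data.
@app.route("/v1/accounts-open")
def accounts_open():
    return jsonify({"accounts": ["..."]})

# Secured: every call must present a valid bearer token.
@app.route("/v1/accounts")
def accounts():
    auth = request.headers.get("Authorization", "")
    token = auth.removeprefix("Bearer ").strip()
    if not verify_token(token):
        abort(401)  # challenge the caller rather than trusting the network
    return jsonify({"accounts": ["..."]})

if __name__ == "__main__":
    app.run()
```

The difference between the two endpoints is a handful of lines, which is exactly why this class of mistake is so easy to make and so easy to miss in review.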
These and others make the risk of system outages, data loss and privacy breaches intolerably high, and seriously challenge operational resilience. Addressing this threat is no longer just a question of survival instinct, either. Understanding infrastructure vulnerabilities and prioritising their remediation is now a regulatory requirement. From the UK and the EU to Hong Kong and Australia, relevant authorities require financial firms to achieve operational resilience, to measure service-level agreements (SLAs) for critical business services, and to publish proof that they have done so.
New security for a new era
The most effective response to this is a zero trust approach. As the name suggests, this assumes that no piece of software and no person accessing the system can be trusted. It assumes that hackers can – and will – penetrate outer defences and wreak havoc once inside the network, as many of the vulnerabilities above suggest.
Every user should therefore be challenged to prove their right to be there each time they carry out an action or access a system. If legitimate users don’t have free run of the place, then neither do criminals. The sketch below illustrates the principle.
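As a minimal sketch of what “challenge every action” can look like in code, the following Python fragment re-evaluates both identity and a per-action permission on every request rather than once at login. The User type, the policy table and the role names are hypothetical, chosen only to illustrate deny-by-default, least-privilege authorisation.

```python
# Sketch of a zero-trust authorisation check: identity and permission are
# re-evaluated on every action, never cached from an earlier login.
# The User type, POLICY table and role names are hypothetical placeholders.
from dataclasses import dataclass

@dataclass
class User:
    user_id: str
    roles: frozenset[str]

# Least-privilege policy: each action maps only to the roles that need it.
POLICY = {
    "read_statement": {"analyst", "teller"},
    "approve_payment": {"payments_officer"},
}

def authorise(user: User, action: str) -> bool:
    """Allow only if the action is known AND the user holds a permitted role."""
    allowed_roles = POLICY.get(action, set())  # deny by default: unknown action -> empty set
    return bool(user.roles & allowed_roles)

# Every request goes through this gate, even from "inside" the network.
alice = User("alice", frozenset({"teller"}))
assert authorise(alice, "read_statement") is True
assert authorise(alice, "approve_payment") is False
```

The key design choice is the default: an action that appears in no policy is denied, so forgetting to configure a permission fails safe rather than open.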
This may involve a marginal increase in load, and consequently a marginal decrease in performance. But that is nothing compared to the catastrophic loss of performance caused by a full system outage. Advanced system monitoring plays a key role in adopting zero trust, verifying that neither software nor unreliable users are introducing new vulnerabilities into the system – one simple building block of which is sketched below.
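One way monitoring and zero trust fit together is to record every authorisation decision as a structured event that a monitoring or SIEM tool can aggregate and alert on. The Python sketch below shows this under stated assumptions: the event field names and the example call are illustrative, not any particular product’s logging schema.

```python
# Sketch: emit a structured audit event for every authorisation decision, so
# monitoring can alert on unusual denial patterns. Field names are illustrative.
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("zero_trust_audit")

def record_decision(user_id: str, action: str, allowed: bool) -> None:
    """One JSON event per decision; a SIEM can aggregate and alert on these."""
    logger.info(json.dumps({
        "event": "authz_decision",
        "user": user_id,
        "action": action,
        "allowed": allowed,
    }))

record_decision("alice", "approve_payment", False)  # e.g. a denial worth watching
```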
But the biggest challenge is that the financial services industry – like most other technology-reliant sectors – operates legacy systems that were not designed with zero trust in mind. Long-established middleware and mainframes struggle to cope with the model, and the security threat alone is reason to modernise the software architecture.
As systems get more complex, dependency on IT grows, and cyber threats proliferate, there is no room for security approaches that are no longer fit for purpose. Firms must better equip themselves to protect their business and their customers. Designing systems with a zero trust approach built in from the start must become a fundamental step in the security process.