The high cost of failure
Data in all its forms, and real-time access to it, are becoming ever more critical as financial institutions seek to manage myriad risks.
Operational risk management presents banks with a dilemma. On the one hand, during recent years they have been under tremendous pressure to cut costs. On the other, there is equal pressure to upgrade systems and revamp core technologies to meet the more exacting requirements of the modern regulatory agenda. Caught between these two opposing forces, banks run the risk of fines for inadequate risk management, as numerous examples testify.
According to the European Banking Authority, European Union legislation requires that institutions adequately manage and mitigate operational risk, which is defined as the risk of losses stemming from inadequate or failed internal processes, people and systems or from external events. Operational risk includes legal risks but excludes reputational risk and is embedded in all banking products and activities.
A salutary reminder of the cost of getting it wrong arrived in August, when Standard Chartered was fined $300 million by the New York State Department of Financial Services over failures in its anti-money laundering (AML) processes. It was a particularly punishing episode for the bank because it brought back memories of its $340 million fine in 2012 for allegedly breaching US sanctions against Iran. The latest penalty suggests the corrective measures applied to its operational risk processes in the interim did not go far enough.
However, this is just the latest in a long string of failures, all of which can be attributed in one way or another to poor management of operational risk. The spectacular $1.9 billion fine levied on HSBC in December 2012 for AML failures in Mexico remains the largest single example, but it was far from an isolated incident. In September 2013, JP Morgan was fined $920 million for the ‘London whale’ trading loss, which had already cost the bank $6.2 billion. Two months later, the bank reached a $13 billion settlement with the US authorities – the largest settlement with a single entity in US history. The settlement resolved federal and state civil claims arising out of the packaging, marketing, sale and issuance of residential mortgage-backed securities (RMBS) by JP Morgan, Bear Stearns and Washington Mutual before 1 January 2009. As part of the settlement, JP Morgan acknowledged it made serious misrepresentations to the public – including the investing public – about numerous RMBS transactions. UBS, meanwhile, had been fined $1.5 billion in December 2012 for rigging Libor.
All of these events have contributed to a heightened focus on operational risk, over and above what was already happening as regulators sought to improve transparency following the financial crisis. In April 2013, the Basel Committee on Banking Supervision (BCBS) published monitoring tools that require banks to assemble the data needed to track their intraday liquidity risk and their ability to meet payment and settlement obligations. The BCBS wants banks to start using the new monitoring tools for reporting in January 2015, with full implementation by January 2017.
Whether that goal will be achievable within the time span remains to be seen. The tools set out by the BCBS require a retrospective view of aggregated data built from credit/debit confirmations, yet only 20 per cent of correspondent banking payment instructions on Swift are confirmed with a credit/debit confirmation message. To make matters worse, in its December 2013 report summarising the progress made by global systemically important banks, the BCBS suggested that some banks’ self-assessment submissions were too rosy, overstating their progress and falling short of the Committee’s principles.
“To achieve the level of detail required by the retrospective BCBS measures, banks will need to build the intraday position for each of their accounts with real-time credit/debit confirmations,” says Catherine Banneux, senior market manager, banking, at Swift. “This is a critical component of the monitoring requirements that will differ according to a bank’s size and profile. Progress needs to accelerate in order for banks to be ready for BCBS reporting.”
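As a rough illustration of what building such an intraday position could involve, the sketch below reconstructs a per-account running balance from confirmed credits and debits and reports the day’s largest net positive and negative positions. The message fields, account names and zero opening-balance assumption are illustrative only; they do not reflect Swift message formats or the BCBS tools themselves.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Confirmation:
    account: str          # nostro account identifier (illustrative)
    timestamp: datetime   # time the credit/debit was confirmed
    amount: float         # positive for a credit, negative for a debit

def intraday_positions(confirmations, opening_balances=None):
    """Rebuild the running intraday position per account and report the
    largest net negative and net positive positions for the day."""
    opening_balances = opening_balances or {}
    by_account = defaultdict(list)
    for c in sorted(confirmations, key=lambda c: c.timestamp):
        by_account[c.account].append(c)

    report = {}
    for account, messages in by_account.items():
        balance = opening_balances.get(account, 0.0)  # assume zero if unknown
        timeline = []
        for message in messages:
            balance += message.amount
            timeline.append((message.timestamp, balance))
        balances = [b for _, b in timeline]
        report[account] = {
            "timeline": timeline,
            "largest_net_negative": min(0.0, min(balances)),
            "largest_net_positive": max(0.0, max(balances)),
        }
    return report

# Usage with made-up confirmations for one nostro account
demo = [
    Confirmation("NOSTRO-EUR-1", datetime(2014, 9, 1, 8, 5), -25_000_000),
    Confirmation("NOSTRO-EUR-1", datetime(2014, 9, 1, 9, 40), 40_000_000),
    Confirmation("NOSTRO-EUR-1", datetime(2014, 9, 1, 11, 15), -10_000_000),
]
print(intraday_positions(demo)["NOSTRO-EUR-1"]["largest_net_negative"])
```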
According to EY (formerly Ernst & Young), common weaknesses in operational risk processes include gaps in the controlled sourcing and usage of data for risk reporting, which can undermine data accuracy, integrity and completeness; fragmented risk capabilities across domains and regions; redundant and/or inefficient risk data aggregation capabilities; and an inability to measure and monitor the effectiveness of risk control environments. The firm notes that since the BCBS Principles apply to a bank’s normal and crisis risk management data across all risk types and all lines of business globally, the demand on data will be significant. Banks will be expected to provide accurate and timely data for internal risk management models, risk management processes, internal management and board reporting, and regulatory reporting.
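To make those data demands more concrete, the following sketch shows the kind of basic completeness, plausibility and timeliness checks a bank might automate before aggregating risk data for reporting. The field names, thresholds and rules are assumptions for illustration, not EY guidance or BCBS requirements.

```python
from datetime import datetime, timedelta

# Hypothetical minimum fields for a risk-data record
REQUIRED_FIELDS = ("trade_id", "counterparty", "notional", "as_of")

def check_record(record, max_age=timedelta(days=1), now=None):
    """Return a list of data-quality issues found in one record."""
    now = now or datetime.utcnow()
    issues = []
    # Completeness: every required field must be present and non-empty
    for field in REQUIRED_FIELDS:
        if record.get(field) in (None, ""):
            issues.append(f"missing field: {field}")
    # Accuracy/integrity: simple plausibility check on values
    notional = record.get("notional")
    if notional is not None and notional < 0:
        issues.append("negative notional")
    # Timeliness: records older than the reporting window are flagged as stale
    as_of = record.get("as_of")
    if as_of is not None and now - as_of > max_age:
        issues.append("stale record")
    return issues

def quality_report(records, **kwargs):
    """Map each flawed record to its issues, so the effectiveness of the
    control environment can be tracked over time."""
    report = {}
    for record in records:
        issues = check_record(record, **kwargs)
        if issues:
            report[record.get("trade_id", "<unknown>")] = issues
    return report

# Usage with hypothetical records
records = [
    {"trade_id": "T1", "counterparty": "Bank A", "notional": 5_000_000,
     "as_of": datetime.utcnow()},
    {"trade_id": "T2", "counterparty": "", "notional": -1,
     "as_of": datetime.utcnow() - timedelta(days=3)},
]
print(quality_report(records))
```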
“Banks must seize the opportunity to leverage the BCBS Principles to build a fit for purpose risk management environment, while achieving regulatory compliance and delivering significant business benefits,” says Richard Powell, senior manager and UK risk transformation lead at EY in London. “Using scarce transformational resources wisely will be critical.”
However, the pitfalls run deeper still. Said Tabet, lead for governance, risk and compliance strategy at US computing corporation EMC, says the nature of automation in financial services itself generates systemic risk through repetition, perpetuation and interconnectedness. In practice this means that even with the best intentions in the world and significant investment in risk management systems, banks may never be able to protect themselves fully against operational risks. They are part of the hazard of doing business.
Clear illustrations of this can be found around the globe. Earlier this month, Indian finance minister Arun Jaitley announced that the Indian Government is working to improve risk management in the banking sector, following several scandals at state banks. These include an investigation into state-run Syndicate Bank, whose head is accused of taking bribes to roll over a loan to steel manufacturer Bhushan Steel. The incident is not the first of its kind. Last year, the Indian authorities arrested a deputy managing director at State Bank of India for taking bribes, including Rolex and Omega watches, to sanction an illicit loan to a struggling company. These examples serve to highlight the scale of the challenge, which can sometimes involve senior personnel at the very top of a bank’s hierarchy and would be difficult to detect using conventional IT systems.
Tackling the culture of senior management figures is a big enough challenge on its own: before 2008, most banks did not even have a separate, dedicated risk division. But EMC reports that there is also some anxiety about the risk that comes with a more technology-driven approach. The European Systemic Risk Board, much like regulators at national level, has already made its concerns about new, fast-developing technologies very clear, according to Tabet.
“While technology innovation is a good thing and should be encouraged, its implementation and application in financial services requires supervision and regulation,” he says. “Data is at the centre of risk assessment in financial services but requires quality, integrity, security, semantics, context, traceability and transparency to become useful insight. To deliver all of these requirements in a disciplined and consistent fashion, we need standards, industry best practice, and effective and efficient regulation and supervision. For example, knowing credit default swap spreads, prices and other data that is used primarily to infer correlations is not enough on its own to measure risk. As shown by the 2008 crash, too little insight was available to make informed decisions, and assumptions based on limited data caused a huge crisis in the market.”
According to Tabet, using data in all its forms to measure risk is the key to protecting financial institutions from any number of threats, from cyber risks to more traditional fraud, by spotting patterns of behaviour and suspicious activity in real time. Strong controls and continuous monitoring environments based on smart data analytics are needed to keep track of threats as they emerge. Risk tools are being developed to take advantage of new machine learning technologies that exploit the volume, variety and velocity of data alongside today’s high performance computing resources. But before these insights can be gleaned, Tabet believes that a better understanding of the network and the data on it, particularly its location and the security measures around it, is necessary.
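A minimal sketch of the real-time pattern spotting Tabet describes might look like the following: each incoming transaction is scored against a customer’s recent history and flagged if it deviates sharply. The rolling z-score rule, window size and threshold are illustrative assumptions rather than any vendor’s or EMC’s method.

```python
from collections import defaultdict, deque
from statistics import mean, stdev

class TransactionMonitor:
    """Flag transactions that deviate sharply from a customer's recent behaviour."""

    def __init__(self, window=50, threshold=4.0):
        self.history = defaultdict(lambda: deque(maxlen=window))
        self.threshold = threshold  # standard deviations deemed suspicious

    def observe(self, customer_id, amount):
        """Return True if the transaction looks anomalous for this customer."""
        past = self.history[customer_id]
        suspicious = False
        if len(past) >= 10:  # require a minimum of history before scoring
            mu, sigma = mean(past), stdev(past)
            if sigma > 0 and abs(amount - mu) / sigma > self.threshold:
                suspicious = True
        past.append(amount)
        return suspicious

# Usage: feed confirmed payments as they arrive and route flagged ones
# to a fraud/AML analyst queue.
monitor = TransactionMonitor()
for amount in [120, 80, 95, 110, 70, 130, 90, 105, 85, 100, 25_000]:
    if monitor.observe("customer-42", amount):
        print(f"flag for review: {amount}")
```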
“This is particularly key in times of crisis, when financial institutions need to react in real time,” he says. “They need to know where critical data is and who owns it. They need to know who the key stakeholders are, how their data relates to external data within and across counterparties and jurisdictions. They need to have actionable context-aware information. For example, can you recognise when a cyber attack is taking place and, if so, can you protect your most sensitive data from it and ensure that customer facing services remain unaffected, in real time?”
Fraud is yet another front that operational risk executives have to cover. Incidents of online fraud, typically involving stolen bank account details, are thought to be on the rise. A PWC survey in the US in May found that 75 per cent of businesses surveyed had detected a security breach in the past year, while the average number of security intrusions was 135 per organisation. A separate study by security firm Trustwave the same month found that 96 per cent of applications have one or more serious security vulnerabilities. The average loss from a data breach for companies in Germany, the US and the UK now stands at $4.8 million, $5.4 million and $3.1 million respectively.
The causes of data breaches were addressed by a Verizon Data Breach Investigations Report earlier this year, which found that over the past three years 67 per cent of breaches in retail involved some form of malware and 76 per cent involved hacking. Meanwhile, the study by Trustwave found that easily guessable passwords were the largest single cause of the problem: in 31 per cent of cases, weak passwords were the entry point for cyber criminals. When checking compromised credentials, Trustwave found that ‘123456’ topped the list of the most commonly used passwords, followed by ‘123456789’, ‘1234’ and then ‘password’. Nearly 25 per cent of user names were also paired with the same password across multiple sites. However, according to Chase Paymentech, data breaches arising from human error, system glitches or business process failures can be just as common. Examples highlighted by the research include data left unsecured on laptop computers and data emailed to an employee’s home email address, which is generally less secure than the work environment.
Meanwhile, in August a study by post-trade services specialist Omgeo highlighted counterparty risk as an area in which financial institutions will need to respond to changing market expectations. According to Omgeo, operational risk management is now viewed by some firms as part of due diligence for investors, meaning that in a worst case scenario companies might lose out on business if they are seen to be weak in this area. In response, the firm made some specific recommendations on counterparty risk, on the basis that some firms are leaving themselves open to operational risks merely by failing to use and manage their available collateral effectively. These included an exhortation to build up a picture of what collateral and other inventory assets are currently held, whether they are available for use and how long they will remain so. This information, combined with trade exposure data and the eligibility, haircut and concentration requirements from the legal agreements, can then be used to decide what margin will be posted, called or recalled.
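As a simple illustration of that calculation, the sketch below values available collateral after haircuts, applies a concentration cap, and compares the result with trade exposure to suggest a margin call or recall. The asset classes, haircuts, cap and sign convention are hypothetical and are not drawn from Omgeo’s recommendations.

```python
def collateral_value(inventory, haircuts, concentration_cap=0.5):
    """Post-haircut value of available, eligible collateral, with no single
    asset class allowed to contribute more than the concentration cap."""
    values = {}
    for asset_class, position in inventory.items():
        if not position["available"] or asset_class not in haircuts:
            continue  # ineligible, or already pledged elsewhere
        values[asset_class] = position["market_value"] * (1 - haircuts[asset_class])
    total = sum(values.values())
    # Illustrative concentration rule: cap each class at a share of the total
    return sum(min(v, concentration_cap * total) for v in values.values())

def margin_call(exposure, inventory, haircuts):
    """Positive result: margin to call from the counterparty.
    Negative result: excess collateral that could be recalled."""
    return exposure - collateral_value(inventory, haircuts)

# Usage with hypothetical figures (in millions)
inventory = {
    "govt_bonds": {"market_value": 60.0, "available": True},
    "equities":   {"market_value": 40.0, "available": True},
    "corp_bonds": {"market_value": 30.0, "available": False},  # pledged elsewhere
}
haircuts = {"govt_bonds": 0.02, "equities": 0.15}
print(margin_call(exposure=100.0, inventory=inventory, haircuts=haircuts))
```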
“Financial institutions must understand the data they have, the applications they run, where data is stored and how it is secured, before we can gain more solid insight into the businesses’ and indeed the sector’s risk exposure,” says Tabet. “Management of the IT infrastructure will be vital in driving innovation in the future, as it will be a key enabler of future change.”