Designing systems for real-time risk management
Financial organisations rely on risk management systems to assess strategic, compliance and operational risks. However, according to a pre-COVID-19 survey of more than 800 audit committee and board members conducted by KPMG, maintaining a highly effective risk management program is the top challenge for companies, driven by fast-changing regulations and volatility in the business environment. Almost half of the respondents reported that their risk management programs and processes still require substantial work.
Since that report was issued, COVID-19 has exposed the banking industry to extraordinarily volatile market conditions and to challenges around business continuity, market risk and credit risk. The crisis has revealed gaps in existing risk management frameworks in terms of their accuracy, effectiveness and agility. Banks are under pressure to deploy updated risk models and to run calculations more frequently, and on more data, for more accurate and better-optimised risk management.
Data complexities can threaten risk management
A risk model is only as good as the data you feed into it. The more data you can feed into the model, the more accurate and robust it will be. But as data volumes grow, it is becoming increasingly challenging to ingest all the relevant data into the risk calculation engine.
Reading terabytes of data can stress standard data processing systems such as relational databases (RDBMS) and NoSQL databases. As a result, a risk calculation may take so long to complete that a trader misses a valuable trading opportunity, or an overnight risk run may miss its deadline, putting the organisation out of compliance with regulatory guidelines and leaving traders with stale risk numbers. If an organisation can’t scale effectively, the only alternative is to reduce the amount of data fed into the model, compromising the risk model’s robustness and accuracy.
Shifting to real-time, continuous risk analysis is also challenging because data arrives in many forms, from multiple sources, and is consequently processed and stored in siloed databases across the enterprise. Integrating the many back-end systems and mapping hundreds of tables, schemas, data models and indexes can require substantial manual effort. Whenever a schema changes, this work must be redone, making the process cumbersome and error-prone.
While most risk and fraud solutions today are still based on structured data, the industry is moving towards incorporating unstructured and semi-structured data for more accurate risk calculations. When assessing credit risk, for example, unstructured data such as news reports, social media updates, earnings call transcripts, geospatial information, and even recorded voice calls from the call centre can usefully supplement standard structured data such as profile and bank account information.
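To make this concrete, here is a minimal, purely illustrative sketch of how structured account features and an unstructured signal (a crude sentiment score over news text) might be blended into a single credit risk score. The feature names, weights and keyword heuristic are hypothetical placeholders, not any vendor's model.

```python
# Illustrative sketch only: combining structured and unstructured signals
# into a single credit risk score. All feature names, weights, and the
# sentiment step are hypothetical placeholders, not a production model.

from dataclasses import dataclass

@dataclass
class StructuredProfile:
    debt_to_income: float       # e.g. 0.45
    missed_payments_12m: int    # missed payments in the last year
    avg_account_balance: float  # in account currency

def news_sentiment_score(articles: list[str]) -> float:
    """Placeholder for an NLP step (e.g. a sentiment model) over news text.
    Returns a value in [-1, 1]; here it is a trivial keyword heuristic."""
    negative = ("default", "lawsuit", "downgrade", "fraud")
    hits = sum(any(word in text.lower() for word in negative) for text in articles)
    return max(-1.0, -0.25 * hits)

def credit_risk_score(profile: StructuredProfile, articles: list[str]) -> float:
    """Blend structured features with the unstructured sentiment signal.
    Higher score = higher risk. Weights are arbitrary, for illustration only."""
    structured = (
        0.5 * profile.debt_to_income
        + 0.1 * profile.missed_payments_12m
        - 0.000001 * profile.avg_account_balance
    )
    unstructured_penalty = 0.3 * max(0.0, -news_sentiment_score(articles))
    return structured + unstructured_penalty

print(credit_risk_score(
    StructuredProfile(debt_to_income=0.45, missed_payments_12m=1, avg_account_balance=12_000),
    ["Regulator announces downgrade review for the borrower's main counterparty"],
))
```

In practice the sentiment step would be a proper NLP model and the blend would be learned rather than hand-weighted, but the shape of the calculation is the same: unstructured sources become features that sit alongside the structured ones.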
Regulations also introduce their own data requirements, sometimes forcing specific datasets to be stored on premises or in specific geolocations, while complicating deployments by requiring real-time data replication to maintain data consistency.
Consuming more data, faster for more accurate risk assessments
To ensure that the system supporting risk management models has the necessary robustness and flexibility, it’s important to take a modern, future-proof approach and design a system architecture built for speed, performance and scale.
A distributed in-memory data fabric can be implemented as a smart operational data store (ODS), or what Gartner calls a digital integration hub (DIH), to break down operational data silos and unify different types of data while rapidly executing data retrievals and complex queries. Because the smart ODS/DIH needs to connect to many different operational data stores, seamless integration is essential. This is where no-code database integration and on-the-fly schema updates can save the DataOps team weeks of effort while preventing manual errors.
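As a rough illustration of the DIH pattern (not a specific product's API), the sketch below shows an in-memory layer that registers loaders for siloed back-end stores and serves queries from its cache. The connectors are stubs; a real hub would keep the cache synchronised through change-data-capture or no-code connectors rather than manual refreshes.

```python
# A minimal sketch of the "digital integration hub" idea: an in-memory layer
# that unifies reads across siloed back-end stores behind one query interface.
# The source connectors here are stubs standing in for RDBMS/NoSQL systems.

from typing import Callable, Iterable

class IntegrationHub:
    def __init__(self) -> None:
        self._cache: dict[str, list[dict]] = {}            # dataset name -> cached rows
        self._sources: dict[str, Callable[[], Iterable[dict]]] = {}

    def register_source(self, dataset: str, loader: Callable[[], Iterable[dict]]) -> None:
        """Attach a back-end system (RDBMS, NoSQL, file feed) behind a loader."""
        self._sources[dataset] = loader

    def refresh(self, dataset: str) -> None:
        """Pull the latest rows into memory; in practice this would be event-driven."""
        self._cache[dataset] = list(self._sources[dataset]())

    def query(self, dataset: str, predicate: Callable[[dict], bool]) -> list[dict]:
        """Serve reads from memory so consumers never hit the operational stores."""
        return [row for row in self._cache.get(dataset, []) if predicate(row)]

# Example wiring with stubbed silos.
hub = IntegrationHub()
hub.register_source("trades", lambda: [{"id": 1, "desk": "rates", "notional": 5e6}])
hub.register_source("counterparties", lambda: [{"id": 9, "rating": "BBB"}])
hub.refresh("trades")
print(hub.query("trades", lambda r: r["notional"] > 1e6))
```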
As discussed previously, one of the most important aspects of a real-time risk management system is speed. The obvious choice here is in-memory computing, which performs at its best when data and business logic are co-located in the same memory space and when smart aggregations and filtering are executed server-side. This means the raw data does not have to travel over the network, which boosts performance, especially for complex risk models and algorithms.
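The sketch below illustrates the underlying idea of moving the computation to the data rather than the data to the computation: each partition aggregates its own slice locally, and only small partial results cross the network to be merged. The partitions here are plain in-process lists, standing in for the nodes of a real data grid.

```python
# Sketch of the "move the computation to the data" idea behind co-location:
# each partition aggregates its own rows locally, and only the compact
# partial results are shipped and merged.

from collections import defaultdict

partitions = [
    [{"desk": "rates", "exposure": 2.0e6}, {"desk": "fx", "exposure": 1.5e6}],
    [{"desk": "rates", "exposure": 3.5e6}, {"desk": "credit", "exposure": 0.8e6}],
]

def local_exposure_by_desk(rows: list[dict]) -> dict[str, float]:
    """Runs server-side, next to the data: aggregate one partition's rows."""
    totals: dict[str, float] = defaultdict(float)
    for row in rows:
        totals[row["desk"]] += row["exposure"]
    return dict(totals)

def merge(partials: list[dict]) -> dict[str, float]:
    """Only these small partial results travel over the network."""
    merged: dict[str, float] = defaultdict(float)
    for partial in partials:
        for desk, exposure in partial.items():
            merged[desk] += exposure
    return dict(merged)

print(merge([local_exposure_by_desk(p) for p in partitions]))
# {'rates': 5500000.0, 'fx': 1500000.0, 'credit': 800000.0}
```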
But as volumes grow, the system must retain the required performance. Elastic, on-demand scaling (with no downtime) allows the system to accommodate additional data and concurrent users, especially during peak volumes, without overprovisioning expensive computing resources on premises. And if the scaling is driven by artificial intelligence (AI), the process can be automated, which is especially important during unplanned peaks like those experienced during COVID-19.
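The following is a deliberately simplified, rule-based sketch of an elastic scaling decision. The thresholds and doubling policy are arbitrary assumptions; the AI-driven approach described above would replace the simple utilisation rule with a forecast learned from historical load metrics.

```python
# Simplified, rule-based sketch of an elastic scaling decision. Thresholds,
# bounds and the doubling policy are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class ClusterState:
    nodes: int
    cpu_utilisation: float   # average across nodes, 0.0 - 1.0
    min_nodes: int = 2
    max_nodes: int = 32

def desired_nodes(state: ClusterState,
                  scale_out_at: float = 0.75,
                  scale_in_at: float = 0.30) -> int:
    """Return the target node count for the next interval. No downtime is
    assumed: nodes are added or drained while the grid keeps serving requests."""
    if state.cpu_utilisation > scale_out_at:
        target = state.nodes * 2                       # double capacity under peak load
    elif state.cpu_utilisation < scale_in_at:
        target = max(state.min_nodes, state.nodes // 2)  # shrink when load drops
    else:
        target = state.nodes
    return min(state.max_nodes, max(state.min_nodes, target))

print(desired_nodes(ClusterState(nodes=4, cpu_utilisation=0.85)))  # -> 8
print(desired_nodes(ClusterState(nodes=8, cpu_utilisation=0.20)))  # -> 4
```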
This approach enables a leading investment bank, with offices in over 40 countries, to calculate and visualise trading risk on a dashboard every few seconds for hundreds of concurrent traders. In addition, the bank runs complex “what if” scenarios every night to predict outcomes based on price fluctuations.
Another investment bank, Société Générale, with data centres in New York, Paris and Asia, efficiently replicates data between data centres and regions, allowing real-time credit checks prior to executing trades. The bank benefits from sub-second response times when accessing consistent data across all regions for hundreds of concurrent applications, including pre-trade, risk management and accounting services, with zero downtime.