Data management: Knowledge is power
The ancient axiom that knowledge is power applies no less to financial services than to any other area of life. In recent years there has been an unrelenting fixation on big data; the topic has been at the top of Swift’s Innotribe agenda for the past three years, for example. Yet the approach financial institutions take to managing their data varies hugely.
A white paper from Wolters Kluwer Financial Services (WKFS), Data Management – a Finance, Risk and Regulatory Perspective, states that a key issue for data architects in creating a unified data management infrastructure is that operations in different countries often run different internal systems. As well as the obvious problem of inconsistent data formats, these systems often lack common vocabularies and definitions. This creates a “considerable obstacle to achieving a group-level view or consolidated reports needed to meet specific regulatory requirements”, the paper says.
Wolfgang Prinz, vice-president of product management at WKFS, says the use of data within a financial institution differs from team to team, and this is complicated by regulatory differences. “For example, risk departments following Basel bank capitalisation rules will measure probability of default over a 12-month period. But under International Financial Reporting Standards, the finance department will measure it over the life of a product, such as a 20-year mortgage,” he says. “The issues raised in this paper show that it is of paramount importance that firms have a standard data architecture to make this kind of distinction. This will enable the data to be meaningful to all potential recipients, while retaining the underlying consistency so that all users are working from the same base figures.”
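To make the distinction concrete, the sketch below contrasts the two views using an assumed constant annual default probability of 2 per cent and a 20-year term; these figures are illustrative assumptions, not numbers from WKFS or the paper.

```python
# Illustrative only: contrast a 12-month PD (the Basel-style risk view) with a
# lifetime PD (the IFRS-style finance view) derived from the same underlying
# annual default probability. The 2% rate and 20-year term are assumed figures.

annual_pd = 0.02        # assumed constant probability of default in any one year
term_years = 20         # e.g. a 20-year mortgage

twelve_month_pd = annual_pd
# Probability of defaulting at least once over the full term of the product.
lifetime_pd = 1 - (1 - annual_pd) ** term_years

print(f"12-month PD (risk view): {twelve_month_pd:.1%}")
print(f"Lifetime PD over {term_years} years (finance view): {lifetime_pd:.1%}")
```

Under these assumptions the same borrower carries a 2 per cent figure in one report and roughly a third in another, which is exactly the kind of difference a common data architecture has to make explicit.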
The structure of individual companies can also be a major barrier to the smooth, efficient processing of data. Of the vast quantity of data held at financial institutions, much is divided between different internal systems. Mergers and takeovers complicate the picture still further, making it difficult to understand and keep track of vital business information. At the same time, inefficient manual record-keeping procedures, such as spreadsheets, undermine the quality and reliability of the data and ultimately its usefulness to key decision makers.
“If you don’t have quality data you can’t be sure about anything,” says Mark Davies, general manager and head of Avox, the business entity data specialist. “Regulation is important, but it’s even more important to ensure that there is a business there tomorrow. Organisations should look at the consistency of the data. They need to take quality control seriously and make sure that they can pass information around the business without it being changed. So a manual process, entering information on spreadsheets, is not ideal.”
Last year, several major institutions received headline-grabbing fines for failures in their anti-money laundering and know your customer procedures – fines that might have been avoided with better, more vigilant use of data. HSBC was fined nearly $2 billion in a case relating to its US subsidiary. The US Senate Permanent Subcommittee on Investigations conducted a year-long investigation of the bank and found it had “exposed the US financial system to a wide array of money laundering, drug trafficking and terrorist financing risks due to poor anti-money laundering controls”.
Such cases may have contributed to a growing awareness at the boardroom level of the costs of failing to fully understand the operations of their businesses. According to Davies, the result is that reliable data has become not simply a ‘nice to have’ area of the business, but a ‘must have’ for the large global financial institutions.
Enterprise-wide data management works, and it is a necessary prerequisite to enterprise business intelligence and ‘big data’ processing, says Thomas Statnick, global head of technology, treasury and trade solutions at Citi. An effective data management program should contain the following building blocks: data standards, process, people, architecture and tools, he adds. Data standards are a necessity; implementing them aligns an organisation to standard reference data sources, data models, account and product hierarchies, transaction codes and so on. Data-centric processes allow for the one-time clean-up and ongoing governance of the generation and use of data within an enterprise, ensuring it remains aligned to the data standards. Data-centric people, he says, define and socialise the standards, execute the processes and ensure governance. A consistent data architecture is needed to provide a reference for data distribution and/or aggregation. Finally, standard enterprise tools should be specified for the transformation and storage of data, metadata management, data lineage, and business intelligence, analytics and visualisation.
“While slight customisation to an organisation’s needs is possible, each of these building blocks is essential to build an end-to-end ecosystem for an enterprise’s data,” says Statnick. “An enterprise should avoid the temptation to pick and choose from these building blocks, and while it may be practically and politically difficult to build this data ecosystem, the effort will pay substantial dividends when all the pieces come together.”
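As a rough illustration of how the ‘data standards’ and ‘process’ building blocks might work together, the Python sketch below validates incoming records against standard code tables before they enter the enterprise store. The tables, field names and rules are hypothetical, invented for the example rather than drawn from Citi.

```python
# Hypothetical sketch: enforce data standards at the point of entry and route
# non-conforming records to a governance queue instead of silently fixing them.

STANDARD_CURRENCIES = {"USD", "EUR", "GBP"}   # assumed standard reference table
STANDARD_TXN_CODES = {"PAY", "RCV", "FX"}     # assumed standard transaction codes

def validate(record: dict) -> list[str]:
    """Return a list of data-quality issues; an empty list means the record conforms."""
    issues = []
    if record.get("currency") not in STANDARD_CURRENCIES:
        issues.append(f"non-standard currency: {record.get('currency')!r}")
    if record.get("txn_code") not in STANDARD_TXN_CODES:
        issues.append(f"non-standard transaction code: {record.get('txn_code')!r}")
    if not record.get("entity_id"):
        issues.append("missing standard entity identifier")
    return issues

record = {"currency": "usd", "txn_code": "PAY", "entity_id": ""}
print(validate(record))   # flags the lower-case currency and the missing identifier
```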
Statnick says progress towards an enterprise data management goal must be rooted in quantitative metrics, such as those provided by a data maturity model (DMM). A baseline should be established before any work starts, and periodic assessments using the same quantitative model should then be conducted at six-month or yearly intervals to measure progress. “It’s important to have a target DMM score and associated timeframe to help drive the program’s execution. It’s also important to be pragmatic about how much can be achieved over a period of time given the initial DMM score (a complex enterprise will not go from a low DMM score to a high one in a year), so set stretch goals but ones that are achievable,” he says.
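By way of illustration, the sketch below scores a hypothetical organisation against five assumed DMM dimensions on a one-to-five scale and compares baseline and follow-up assessments against a target; the dimensions, scores and target are invented for the example, and real maturity models define their own.

```python
# Hypothetical DMM tracking: establish a baseline, reassess periodically with the
# same model, and measure the gap to an agreed target score.

baseline = {"standards": 2, "process": 1, "people": 2, "architecture": 2, "tools": 3}
after_12_months = {"standards": 3, "process": 2, "people": 3, "architecture": 2, "tools": 3}
target = 4.0   # agreed target score, tied to a timeframe, drives the program

def dmm_score(assessment: dict) -> float:
    """Simple unweighted average across dimensions (a real model may weight them)."""
    return sum(assessment.values()) / len(assessment)

for label, assessment in [("baseline", baseline), ("after 12 months", after_12_months)]:
    score = dmm_score(assessment)
    print(f"{label}: {score:.2f} (gap to target: {target - score:+.2f})")
```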
However, not all financial institutions share the view that enterprise-wide data management systems are necessary. For many in the asset management community, for example, data management is neither a core business area nor a source of revenue, and it therefore tends to be neglected.
“Firms take a cautious approach,” says Charlie Price, senior director, pricing and reference data at market data firm Interactive Data. “The cost of building a server is a major deterrent. Many organisations are not willing to write a blank cheque for enterprise data management.”
Cost has always been a barrier to effective data management – a situation that the relatively constrained economic environment of the past five years has not helped. Adding to the difficulty, data vendors sometimes buy and sell data from each other before reselling it at inflated prices. This practice has been characterised as ‘rent-seeking’ behaviour that imposes extra costs on the industry without adding any value for market participants.
Philippe Verriest, director of communication information services at Euroclear, says there are three main costs associated with data. First are data vendor fees, which typically total several million euros per year. Second are the costs of managing the data itself, including licences for individual users, which can amount to several million more. Finally, there are the costs incurred through data-matching inconsistencies, which require corrections costing between €7 and €50 per trade and can add up to several million euros each year; across the industry, the combined total runs to several billion euros annually. Verriest estimates that, on average, only around 80 per cent of the data processed by major financial institutions is accurate.
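A back-of-the-envelope calculation shows how quickly the correction costs mount up. It uses the €7–€50 per-trade range and the roughly 80 per cent accuracy figure quoted above, together with an assumed annual trade volume that is purely illustrative.

```python
# Rough arithmetic only: the trade volume is an assumed, illustrative figure.

trades_per_year = 5_000_000      # assumed volume for a large institution
error_rate = 0.20                # ~80% accuracy implies roughly one in five records needs work
cost_per_correction = (7, 50)    # euros per trade, as quoted

low = trades_per_year * error_rate * cost_per_correction[0]
high = trades_per_year * error_rate * cost_per_correction[1]
print(f"Estimated correction cost: EUR {low:,.0f} to EUR {high:,.0f} per year")
```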
“Banks try to control the cost of data management by outsourcing operations to India,” he says. “But that doesn’t solve the root problem, that’s just reducing the labour costs associated with the same process. The root problem is to improve the quality of data, and that’s what you need to do to reduce the number of errors.”
Centralised data management should help to improve the accuracy of data, especially when systems are automated and corrections are passed through automatically to the different segments of the business. A centralised system also helps to reinforce consistency, since all parts of an organisation will in theory be running to a harmonised rulebook that does not vary in its standards and measures. Staying close to the prime sources of data is also recommended: the fewer stages information has passed through, the less opportunity there is for errors and corruption to creep in.
“You have to make sure that if there is an error it is investigated and traced back to the prime source to find out why,” says Verriest. “At our best, we are hitting 97-98 per cent accuracy in the data. High quality data is vital to the success of business and a central solution for data management definitely helps to ensure the quality and timely delivery of that information when and where it is needed.”
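The sketch below illustrates the centralised model Verriest describes in general terms: a single golden copy of each reference record, with corrections pushed automatically to every subscribing system so the whole business stays consistent. The class and field names are purely illustrative, not any vendor’s actual API.

```python
# Minimal sketch of a central reference store that propagates corrections
# automatically to downstream consumers (an observer-style pattern).

class CentralReferenceStore:
    def __init__(self):
        self._records = {}         # entity_id -> golden-copy record
        self._subscribers = []     # downstream systems notified on every change

    def subscribe(self, callback):
        self._subscribers.append(callback)

    def correct(self, entity_id: str, field: str, value) -> None:
        record = self._records.setdefault(entity_id, {})
        record[field] = value
        for notify in self._subscribers:   # push the correction downstream
            notify(entity_id, field, value)

store = CentralReferenceStore()
store.subscribe(lambda eid, f, v: print(f"risk system updated: {eid}.{f} = {v}"))
store.subscribe(lambda eid, f, v: print(f"finance system updated: {eid}.{f} = {v}"))
store.correct("LEI-001", "country_of_incorporation", "GB")
```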
Statnick says substantial investments need to be made for most enterprise-wide data management programs to work, but there is a return on this investment. Front-office organisations should work with their operations and technology counterparts to build a business case for enterprise-wide data management. “How much can be saved because the data is now clean and there are fewer exceptions or production problems? What new products and services can be offered, and how can these be commercialised? What is the value of clean data to current clients? These are all important components of the business case, and if an ROI cannot be determined one must question whether to make an investment in enterprise-wide data management.”
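One simple way to frame such a business case is a straightforward return-on-investment calculation; the figures below are assumptions chosen only to illustrate the arithmetic, not estimates from Citi.

```python
# Illustrative ROI framing for an enterprise-wide data management program.
# All figures are assumptions.

investment = 10_000_000          # assumed program cost over three years
annual_savings = 3_000_000       # fewer exceptions and production problems
annual_new_revenue = 1_500_000   # new data-driven products and services
years = 3

total_benefit = (annual_savings + annual_new_revenue) * years
roi = (total_benefit - investment) / investment
print(f"Three-year ROI: {roi:.0%}")   # a negative or marginal figure would argue against investing
```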