Improving data governance and ensuring data ownership
Recently there has been a significant number of new regulations requiring financial institutions to increase the number of data fields they hold on their customer records – whether it is FATCA indicia or EMIR classifications, writes Julie Benson, test and implementation manager at iMeta.
We are also seeing more swingeing fines imposed by the regulators when processes and data are found to be inadequate. This combination has triggered an increased focus on data governance, with chief data officers being appointed and enterprise data governance programs initiated.
Even without these programs and C-level executives, all of us involved in managing financial data need to remember to apply data governance principles in every IT project. The business outcome of any IT project can be significantly improved by asking the following questions:
- Who owns the data internally: what team/function needs this data to do their job?
- What is the data: what is it used for within the company?
- Who owns the data externally: where is the data going to come from and in what format?
- Who else will be using the data: what other teams and systems might also handle the data during its lifetime?
The impact of not asking these questions is usually a proliferation of fields storing the same data for different teams in slightly different formats. It can also lead to data not being available at all points in workflows, or not being available to the correct teams.
For example, in the world of reference data, investment banks often have multiple settlement systems, some of which may cover the same asset classes. There may be historic differences in the naming conventions used by different systems. It is also likely that there has been churn among the people who implemented the systems originally and in the operational teams that use them on a daily basis. Consequently, differences in naming conventions make it more difficult to spot cross-overs between systems. Integration projects that aim to aggregate this data and create golden source records can therefore suffer, unable to create a true single source.
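To illustrate the kind of mapping such integration projects end up maintaining, here is a minimal sketch in which records from two hypothetical settlement systems (the system and field names are invented for illustration) are normalised into one canonical format so that cross-overs can be spotted:

```python
# Minimal sketch: normalising records from two hypothetical settlement systems
# into one canonical format, so duplicates can be spotted and a golden source built.

# Each source system stores the same attributes under different field names.
FIELD_MAP = {
    "settlement_sys_a": {"Cpty": "counterparty", "SSIRef": "ssi_reference", "Ccy": "currency"},
    "settlement_sys_b": {"CounterpartyName": "counterparty", "StandingInstr": "ssi_reference", "SettleCcy": "currency"},
}

def to_canonical(system: str, record: dict) -> dict:
    """Translate a source record into the canonical field names."""
    mapping = FIELD_MAP[system]
    return {canonical: record[source] for source, canonical in mapping.items() if source in record}

# Records describing the same counterparty, held in different formats.
rec_a = {"Cpty": "ACME BANK PLC", "SSIRef": "SSI-001", "Ccy": "EUR"}
rec_b = {"CounterpartyName": "ACME BANK PLC", "StandingInstr": "SSI-001", "SettleCcy": "EUR"}

canon_a = to_canonical("settlement_sys_a", rec_a)
canon_b = to_canonical("settlement_sys_b", rec_b)
print(canon_a == canon_b)  # True -> the two systems hold the same underlying data
```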
Furthermore, this approach can lead to overly complicated user interfaces, and to operational and management information reports that do not give a clear picture across the business. There may also be increased costs and complexity in any subsequent projects that need to touch the data. Often, poor legacy data is simply left alone, but then proliferates into new systems and solutions. This can subtly increase the cost and time of using the system if users have to enter data into more fields than are strictly required.
A powerful tool that will remove this ambiguity in terminology is currently being developed: the Financial Industry Business Ontology (FIBO) from the Enterprise Data Management Council. It aims to provide a common, standard terminology for data across the financial industry. As standard definitions are released, it will be possible to start using the ontology to reduce uncertainty about what the data actually is. Its semantic repository covers a wide range of financial terms, and common ones, such as business entities, will be of use to a broad set of financial institutions. These terms can then be mapped to any internal alternative names for the same data, providing a reliable means of mapping and understanding what the data is.
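In practice, that mapping can be as simple as an alias table that points each legacy field name at one standard business term. The sketch below assumes an illustrative term label and definition rather than actual FIBO identifiers:

```python
# Minimal sketch: mapping internal field names to a standard business term.
# The standard term label and definition here are illustrative, not actual FIBO identifiers.

STANDARD_TERMS = {
    "LegalEntityName": {"definition": "The official registered name of a business entity."},
}

# Each legacy system's alias is mapped once to the standard term.
ALIASES = {
    "cpty_long_name": "LegalEntityName",   # settlement system
    "client_legal_nm": "LegalEntityName",  # onboarding system
    "entity_name": "LegalEntityName",      # risk system
}

def describe(internal_field: str) -> str:
    """Explain an internal field in terms of the shared standard vocabulary."""
    term = ALIASES.get(internal_field)
    if term is None:
        return f"{internal_field}: no standard term mapped yet"
    return f"{internal_field} -> {term}: {STANDARD_TERMS[term]['definition']}"

print(describe("client_legal_nm"))
```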
Even with these standards, the difficulty can lie in how the company holds the answers to the questions about ownership and usage. Data dictionaries and schemas have traditionally been the definition of what the data is. A first step towards answering the ownership and usage questions is to give day-to-day business users access to that information, so they can relate each data field to its everyday use. IT teams should also consider making this information visible to users through tool tips and built-in help features – preferably generated from a single source, so that the information does not have to be maintained in several places, where it risks getting out of sync and causing confusion.
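A minimal sketch of that single-source idea, with hypothetical field names, is one data dictionary driving both the UI tool tips and the reference documentation, so a definition only ever has to be changed in one place:

```python
# Minimal sketch: one data dictionary driving both UI tool tips and documentation,
# so each definition is maintained in a single place. Field names are hypothetical.

DATA_DICTIONARY = {
    "lei": {
        "label": "Legal Entity Identifier",
        "definition": "20-character ISO 17442 identifier for the counterparty.",
        "owner": "Client Onboarding",
    },
    "emir_classification": {
        "label": "EMIR Classification",
        "definition": "Counterparty category (FC, NFC+ or NFC-) used for EMIR reporting.",
        "owner": "Regulatory Reporting",
    },
}

def tooltip(field: str) -> str:
    """Text shown in the UI next to the field."""
    entry = DATA_DICTIONARY[field]
    return f"{entry['label']}: {entry['definition']}"

def glossary() -> str:
    """The same entries rendered as a reference document."""
    return "\n".join(
        f"{e['label']} (owned by {e['owner']}) - {e['definition']}"
        for e in DATA_DICTIONARY.values()
    )

print(tooltip("lei"))
print(glossary())
```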
Knowing who owns the data and who uses it is now being tackled by enterprise data governance strategies. Where does this leave companies that haven't started on this road, or that might be too small for a large strategic project to be cost-effective?
A relatively simple option for any company, regardless of size or operational complexity, is to use the data dictionary or schema as a starting point to ensure questions are asked about the owners and users at the start of each project. The information can then be built up gradually over time using a bottom-up approach. This requires good communication skills from project personnel, to ensure the benefits are made visible and the work isn't just seen as a cost that can be dropped.
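One way to picture that bottom-up approach is to start a catalogue from the existing schema's field list and fill in owners and consumers project by project, leaving gaps visible until they are answered. The field and team names below are purely illustrative:

```python
# Minimal sketch of the bottom-up approach: start from the existing schema's field
# list and record owners and consumers project by project. All names are illustrative.

# Fields pulled from an existing schema; governance metadata starts empty.
catalogue = {field: {"owner": None, "used_by": set()} for field in ("lei", "fatca_indicia", "settlement_ccy")}

def record_usage(field: str, team: str, is_owner: bool = False) -> None:
    """Capture ownership/usage answers gathered at the start of a project."""
    entry = catalogue[field]
    entry["used_by"].add(team)
    if is_owner:
        entry["owner"] = team

# Answers collected during one project...
record_usage("lei", "Client Onboarding", is_owner=True)
record_usage("lei", "Regulatory Reporting")

# ...and reviewed later, showing where gaps remain.
for field, meta in catalogue.items():
    print(field, "| owner:", meta["owner"] or "unknown", "| used by:", sorted(meta["used_by"]))
```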
Businesses that plan data governance in advance and tackle the problem holistically will achieve better quality data, more maintainable systems and improved usability. This will ultimately lead to fewer errors, clearer procedure documents and reduced training for new users.
So, while we initially talked about the regulations that are spurring many OTC market participants to focus on data governance, the problems associated with bad governance and the benefits to be gained from good governance apply to all financial institutions.