Doing reference data right: cheaper, better, faster … or else
The majority of regulations require firms to classify their clients into a discrete set of categories; JWG research has identified at least 12, including FATCA, AMLD IV, EMIR, CRD IV and MiFID II. When implementing them, firms face two clear options: an easy option, A, of 12 separate programmes at ten pounds each; or a hard option, B, of one programme costing fifty pounds. The easy option adds up to 12 × £10 = £120, more than double the hard option’s £50.
Unfortunately, most firms seem to have chosen option A, leading to huge ongoing implementation costs and the potential for fines as early as 15 September. However, there is still an opportunity to align these projects into a single, holistic customer classification change programme.
To take a use case, EMIR and CRD IV require firms to classify their customers as FC, NFC or NFC+ depending on the volume of derivatives that they trade. However, this classification is then put to different uses: defining whether or not OTC trades must be cleared under EMIR, and whether or not a CVA charge must be applied to the trade under CRD IV. This is therefore a two-stage process: a first, on-boarding stage, in which firms must elicit a classification from their OTC counterparties; and a second, cross-referencing stage, in which the classifications must be aligned, stored and made available to different parts of the business as needed.
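To make the two stages concrete, here is a minimal Python sketch of stage one and its two downstream uses. The threshold figure, field names and the simplified CVA rule are illustrative assumptions, not the actual EMIR or CRD IV rules.

```python
# Minimal sketch of the two-stage structure: one classification, two uses.
# The threshold, field names and simplified CVA rule are assumptions
# for illustration, not the real regulatory thresholds.
from dataclasses import dataclass

@dataclass
class Counterparty:
    name: str
    is_financial: bool          # is the entity a financial counterparty?
    gross_notional_eur: float   # firm's view of its OTC derivatives book

CLEARING_THRESHOLD_EUR = 1_000_000_000  # placeholder; EMIR varies by asset class

def classify(cp: Counterparty) -> str:
    """Stage one: derive an EMIR-style classification at on-boarding."""
    if cp.is_financial:
        return "FC"
    return "NFC+" if cp.gross_notional_eur > CLEARING_THRESHOLD_EUR else "NFC"

def must_clear_otc(classification: str) -> bool:
    """EMIR use of the classification: the clearing obligation."""
    return classification in {"FC", "NFC+"}

def cva_charge_applies(classification: str) -> bool:
    """CRD IV use of the same classification: CVA capital charge (simplified)."""
    return classification in {"FC", "NFC+"}
```

The same label drives both obligations, which is why a single misclassification can attract fines under more than one regime.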
For the first stage, it is not enough that just any classification is reached; it has to be ‘right’. Regulators have made clear that they are looking for classifications to be consistent, and they know the answers; they have the master list. Furthermore, getting the classification wrong at the first stage means further complications down the line. An incorrect classification could mean clearing against the wrong counterparties, holding too much or too little capital against derivative exposures, and the potential for multiple fines from a single mistake.
Therefore, let’s look at the methods available. Route A, the easy option, says counterparties can self-certify. This is a good place to start, but on its own it is insufficient: in multiple Q&As, ESMA has stated that firms cannot rely on self-certification where they hold information which casts that classification into doubt. Firms are therefore compelled to consider a model that is harder to implement but looks at the bigger picture of each counterparty’s status. This also gives firms the opportunity to align their regulatory data collection needs in a single due diligence process. Education and outreach for counterparties might also help to produce more reliable self-certifications.
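As a rough sketch of that ESMA position, a firm might accept a self-certification only when it agrees with what the firm can infer from information it already holds; the inference rule, threshold and function names below are hypothetical, for illustration only.

```python
# Hedged sketch: accept a self-certification only if nothing the firm
# already knows contradicts it. The threshold and inference rule are
# illustrative assumptions, not ESMA's methodology.
def infer_classification(is_financial: bool, gross_notional_eur: float,
                         threshold_eur: float = 1_000_000_000) -> str:
    """The firm's own estimate, built from data it already holds."""
    if is_financial:
        return "FC"
    return "NFC+" if gross_notional_eur > threshold_eur else "NFC"

def accept_self_certification(self_certified: str, is_financial: bool,
                              gross_notional_eur: float) -> bool:
    """True only when internal information does not cast doubt on the answer;
    a mismatch would route the counterparty to due diligence and outreach."""
    return self_certified == infer_classification(is_financial, gross_notional_eur)
```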
For the second stage, the cross-referencing stage, the easy option is to link new client classification data to areas of the business using ‘powerful’, universally available applications such as email or spreadsheets. However, classifications will also have to be consistent within firms: in our use case, if a firm is clearing trades with a counterparty because it believes it to be NFC+, then it must also calculate CVA against those trades. This makes it necessary to align the OTC trading desks with whoever is making the CVA calculation within Risk, raising a host of practical questions about the data infrastructure and architecture required to support such an alignment.
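One way to picture that alignment is a single shared classification store that both the desk and Risk read from, instead of each keeping its own spreadsheet copy; the store layout and function names below are assumptions for illustration.

```python
# Sketch of the alignment point: both consumers read the same source,
# so the clearing and CVA decisions cannot drift apart. The in-memory
# store and function names are illustrative assumptions.
CLASSIFICATION_STORE: dict = {}  # counterparty id -> "FC" / "NFC+" / "NFC"

def get_classification(counterparty_id: str) -> str:
    """Single point of access for every consumer; no local copies."""
    return CLASSIFICATION_STORE[counterparty_id]

def desk_must_clear(counterparty_id: str) -> bool:
    """OTC desk's EMIR decision, read from the shared source."""
    return get_classification(counterparty_id) in {"FC", "NFC+"}

def risk_cva_applies(counterparty_id: str) -> bool:
    """Risk's CRD IV decision, read from the same source: if the desk clears
    because the counterparty is NFC+, Risk necessarily charges CVA too."""
    return get_classification(counterparty_id) in {"FC", "NFC+"}
```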
For instance, where should the raw classification data be sourced from and stored, and are these recognised as golden sources? How often are updates to the data expected? What data standards exist already for the raw data attributes? What do the service level agreements (SLAs) look like between those providing the classification and those using it? What data quality controls will be needed? And what documentation and evidence will be necessary to demonstrate compliance? Answering these example questions for your firm will give a much clearer picture of what the target operating model looks like.
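One possible way to pin those answers down is to record the governance metadata alongside the attribute itself; every field name and value below is an illustrative assumption rather than any recognised standard.

```python
# Illustrative governance record for one classification attribute; the
# schema and the example values are assumptions, not a standard.
from dataclasses import dataclass

@dataclass
class AttributeGovernance:
    attribute: str           # which raw data attribute this governs
    golden_source: str       # the recognised system of record
    update_frequency: str    # how often updates are expected
    sla: str                 # agreement between producer and consumers
    quality_controls: list   # data quality checks applied
    evidence: list           # documentation kept to demonstrate compliance

emir_classification = AttributeGovernance(
    attribute="emir_classification",
    golden_source="client-reference-data-master",  # hypothetical system name
    update_frequency="daily, plus event-driven on counterparty notification",
    sla="T+1 propagation to trading and risk consumers",
    quality_controls=["reconcile against self-certifications",
                      "flag classifications older than 12 months"],
    evidence=["due-diligence record", "classification change audit trail"],
)
```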
Ultimately, time is short, so any solution is going to be tactical rather than strategic. Even so, firms should make themselves aware of incoming regulatory drivers that may affect their target operating model, such as the BCBS Principles for Risk Data Aggregation. As data that has an impact on risk, this new classification data will be within scope of the BCBS Principles, meaning it attracts new requirements that have to be in place by January 2016, such as single sources and appropriate end-user controls. That is a big ask: it is a fundamental change from the way most firms’ systems are currently designed to work day-to-day.
This problem will split firms into two tiers. Those in the higher tier will use the ‘here and now’ fire-fights to make the case for a holistic target operating model, and convince senior management of what is needed to get it right.