Conduct risk explained: fix the systems or pay the fines
Conduct risk is becoming something of a buzzword for regulators this year. In the UK, the Financial Conduct Authority (FCA) took over supervision of consumer protection from the Financial Services Authority (FSA) and published its Risk Outlook 2013, which outlined the new conduct risk regime, back in March. Since then, and coming on the back of the FSA’s Mortgage Market Review (MMR) and Retail Distribution Review (RDR), it has published several more papers outlining this new approach.
This interest has not been confined to UK regulators. The OECD has published a report on conduct risk principles, which it expects all members to integrate into their regulatory approach, and the Hong Kong Monetary Authority (HKMA) has issued a paper on the topic, echoing the language and approach of the FCA. A nascent international consensus seems to be forming around “conduct risk” as a concept in the emerging post-crisis regulatory landscape.
But there is just one problem: nobody quite agrees on what conduct risk means or where its boundaries are set.
When the FCA and other regulators talk about “conduct risk”, they tend to mean the risk to customers of banks’ controls and operations failing. The concept blurs into the more general field of financial consumer protection, of which it is in some senses the offspring. The FCA itself has referred to conduct risk in the context of “consumer detriment arising from the wrong products ending up in the wrong hands, and the detriment to society of people not being able to get access to the right products”.
This has become a hot-button issue because of the high-profile mis-selling scandals of recent years, notably payment protection insurance (PPI), the settlement costs of which stand at more than £15 billion.
This regulatory conception of conduct risk has gone hand-in-hand with a new supervisory approach built on two main features: it will be outcomes-based rather than process-based, and it will seek to be proactive and intervene early, before consumer interests are harmed.
But the narrow definition of conduct risk should not be taken at face value. Technological change is at the centre of this issue: it has brought new risks and opportunities to the field of consumer protection, and it explains why conduct risk can no longer be viewed as a self-contained regulatory field.
On the one hand, the generation of increasingly granular customer data, and increasingly sophisticated tools for manipulating it, allow firms to price risk more accurately for individual customers and to tailor and target products to specific risk profiles.
On the other hand, this reliance on automated processes and ‘Big Data’ means that a systems failure can have far larger negative consequences for banks’ customers than in the days of siloed data and manual processes, particularly as financial products grow more complex in design.
Technological change has meant that, in practice, consumer protection is now a systems architecture, applications and data management challenge as much as a front-office compliance issue.
As a direct result, in an increasingly automated financial services sector it is also difficult in practice to isolate the narrow regulatory definition of “conduct risk” from wider issues of data management and security in banks, which inevitably overlap with other drivers of regulatory change. In different contexts, the term “conduct risk” is often conflated with operational risk, cyber security and data privacy, all of which are areas of rapid regulatory evolution.
In March, the FCA fined JP Morgan’s UK wealth management business £3.1 million for failing to keep complete and up-to-date information on client objectives, risk profile and risk appetite, placing clients at risk of receiving inappropriate investment advice. The case is particularly interesting because it did not turn on human failure or a failure of controls alone. Instead, the regulator argued that the firm’s computer systems did not allow sufficient client information to be retained. It is a clear illustration of why narrow definitions of conduct risk cannot be taken at face value or viewed in isolation.
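To make the point concrete, here is a minimal sketch in Python of the kind of suitability gate the JP Morgan case implies was missing: a check that refuses to proceed with advice unless the client’s objectives and risk information are complete and recent. The field names, the review interval and the blocking policy are illustrative assumptions, not the FCA’s or any firm’s actual rules.

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

# Hypothetical illustration only: the field names, the review interval and
# the blocking policy are assumptions, not the FCA's or any firm's rules.

REVIEW_INTERVAL = timedelta(days=365)  # assumed maximum age of a client review


@dataclass
class ClientRecord:
    client_id: str
    objectives: Optional[str]      # investment objectives
    risk_profile: Optional[str]    # e.g. "cautious", "balanced", "adventurous"
    risk_appetite: Optional[str]   # willingness and capacity to bear loss
    last_reviewed: Optional[date]  # when the record was last confirmed


class SuitabilityError(Exception):
    """Raised when advice cannot be evidenced as suitable."""


def check_suitability_inputs(record: ClientRecord, today: date) -> None:
    """Refuse to proceed when the client record is incomplete or stale."""
    missing = [f for f in ("objectives", "risk_profile", "risk_appetite")
               if not getattr(record, f)]
    if missing:
        raise SuitabilityError(
            f"client {record.client_id}: missing {', '.join(missing)}")
    if record.last_reviewed is None or today - record.last_reviewed > REVIEW_INTERVAL:
        raise SuitabilityError(
            f"client {record.client_id}: risk information is out of date")
```

The lesson of the case is that such a gate is only as good as the systems behind it: if the underlying architecture cannot store and retain these fields, no front-office procedure can compensate.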
From 2015 onwards, when many of these regulations start to bite, banks’ IT systems will need to generate, store and safeguard customer data (even where processing is outsourced), withstand failure and attack, and manipulate that data intelligently enough to generate adequate management information (MI) to inform sales, marketing and product design.
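In systems terms, those obligations reduce to something like an auditable, append-only record of customer events from which MI can be derived. The sketch below, again hypothetical and in Python, shows the shape of the idea: every write is timestamped, attributed and chained to its predecessor, and MI queries read from the same audited record rather than a separate, unreconciled copy. The class and method names and the simple hash chain are illustrative assumptions, not any bank’s or regulator’s design.

```python
import hashlib
import json
from datetime import datetime, timezone

# Hypothetical sketch of an append-only, tamper-evident customer event log.
# A real bank would use hardened, persistent infrastructure; the in-memory
# list and simple hash chain here only illustrate the shape of the idea.


class CustomerEventLog:
    def __init__(self) -> None:
        self._events: list[dict] = []
        self._last_hash = "0" * 64  # genesis value for the hash chain

    def record(self, customer_id: str, event_type: str,
               payload: dict, actor: str) -> None:
        """Append a timestamped, attributed event chained to its predecessor."""
        event = {
            "customer_id": customer_id,
            "event_type": event_type,  # e.g. "product_sold", "complaint"
            "payload": payload,
            "actor": actor,            # the person or system writing the record
            "recorded_at": datetime.now(timezone.utc).isoformat(),
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(event, sort_keys=True).encode()).hexdigest()
        event["hash"] = self._last_hash
        self._events.append(event)

    def mi_counts(self, event_type: str) -> dict:
        """Toy MI query: count events of one type per customer, taken
        straight from the audited log rather than an unreconciled copy."""
        counts: dict = {}
        for e in self._events:
            if e["event_type"] == event_type:
                counts[e["customer_id"]] = counts.get(e["customer_id"], 0) + 1
        return counts
```

The design choice worth noting is that the audit trail and the MI feed are the same data: when marketing, sales and compliance all read from one attributed record, the “wrong product in the wrong hands” is far easier to detect and evidence.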