Rise of the machines will be monitored
The Financial Stability Board (FSB) has stated in its first report on artificial intelligence (AI) and machine learning (ML) that the risks they pose need monitoring.
Certainly this is true, but it only scratches the surface of the three real needs of regulators driven by the digitalisation of intelligence within financial markets – explainability, accountability, and action.
Explaining the unexplainable
The rise of machine learning and deep learning (DL) technologies is being driven by broad access to ever larger quantities of data, and by faster and cheaper ways to analyse that data at scale. However, these analyses are generally correlative in nature: they do not identify or elaborate on any causal relationship – if one even exists.
To assess whether firms are meeting fiduciary responsibilities in line with regulations, the firms themselves will need to truly understand how and why their AI technologies reach the conclusions that they do. This requires a level of causal understanding and inference that ML and DL technologies don’t inherently provide.
One route is to identify solutions that either supplement these mathematically complex systems or integrate with them. Another is to adopt different ways of reaching decisions, including Bayesian techniques and symbolic logical reasoning systems – both avenues currently being explored in research.
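To make the contrast concrete, below is a minimal sketch of the Bayesian route, assuming a simple loan-default scenario and a conjugate beta-binomial model (both chosen purely for illustration; neither appears in the FSB report). Every step of the inference, and the decision rule itself, can be written down and audited in closed form – the kind of transparency that black-box ML and DL systems do not inherently provide.

```python
# Minimal sketch of an auditable Bayesian decision, as one example of the
# "Bayesian techniques" route. The loan-default scenario and all numbers
# are illustrative assumptions, not drawn from the FSB report.
from scipy import stats

# Prior belief about a default rate: Beta(2, 8), roughly "we expect ~20%".
prior_alpha, prior_beta = 2, 8

# Observed evidence: 3 defaults in 50 loans.
defaults, loans = 3, 50

# Conjugate update: the posterior is again a Beta distribution, so the
# entire inference is a two-line, human-checkable calculation.
post_alpha = prior_alpha + defaults
post_beta = prior_beta + (loans - defaults)
posterior = stats.beta(post_alpha, post_beta)

print(f"posterior mean default rate: {posterior.mean():.3f}")
lo, hi = posterior.interval(0.95)
print(f"95% credible interval: ({lo:.3f}, {hi:.3f})")

# The decision rule itself is explicit and explainable:
# approve only if we are 95% sure the default rate is below 15%.
approve = posterior.cdf(0.15) >= 0.95
print("approve:", approve)
```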
The EU has taken a step toward requiring this through facets of its General Data Protection Regulation (GDPR), and in the US, David Gunning currently leads the Explainable Artificial Intelligence programme at the Defense Advanced Research Projects Agency (Darpa). However, firms need to take it upon themselves to start pushing their suppliers for this.
Holding beneficiaries accountable
The FSB also raised concerns that “network effects and scalability of new technologies may in the future give rise to third-party dependencies”, in the form of ecosystem participants that exist outside the regulatory umbrella. It posits that these outsiders could create an unchecked dependency within the fabric of our financial systems. This is certainly true, but it is important to understand what actually drives these initiatives: the value of the data held by the firms that implement the solutions, far more than the value of the knowledge inside the outsider firms.
Most cutting-edge AI solutions are sourced from the output of scientific research at academic institutions. The publicly accessible collective output of academia enables a broad cohort of developers, integrators, consultants, and other scientists to explore, experiment with, validate, and extend these ideas using the data each has on hand – the first-party data provided by the clients or firms that choose to implement solutions.
While the AI expertise to craft these black boxes may sit outside the firms, the outside parties hold very little proprietary knowledge that cannot be recreated by filling the gap between public research and private data. For the same reason, the rise of a few AI specialists is likely to be a short-term phenomenon, as much of the publicised research becomes commoditised and productised.
In the end, the firms that benefit from the use of these technologies must also be the ones to bear responsibility for their use. To that end, it is imperative that regulators consider this when deciding how to balance proper safeguards against overreactive barriers. A negative scenario would be for one body of regulators to handicap the entire industry within its regional control – but not the industry outside its region – by regulating the research or implementation communities that feed the industry participants.
Fight fire with fire
The report acknowledges that regulators themselves are also using AI, noting that “…with use cases by regulators and supervisors, there is potential to increase supervisory effectiveness and perform better systemic risk analysis in financial markets”. However, it does not address the impact of regulators falling behind the wave.
Traditionally, regulation follows the first “crashing” of a wave, as a means to prevent further disasters. That is the worst possible scenario for a world in the midst of this fourth industrial revolution. The pace of progress is so fast, even compared with a few decades ago, that if regulators don’t actively and aggressively engage with AI technology adoption, they will find themselves in a gunfight with nothing but a pocketknife.
In addressing AI technology, regulators should also address data privacy concerns. Part of the accelerating capability of AI in the financial sector stems from a lack of clear lines on where data privacy rules limit a firm’s right to use consumer data. With significant third-party behavioural data available to combine with the first-party data that firms possess – and no checks on appropriate or inappropriate use of that data – there is significant potential for abuse of protected-class identifiers that are masked by highly correlated unprotected data.
For example, the Chicago Open Data archive includes records of calls to the city’s 311 service for pothole repairs. It is certainly possible to use the density of reporting and the speed of repairs to infer the property values of homes on those streets. Unfortunately, this type of data can conceal a racial correlation behind additional layers of correlation with income, repair frequency, and population density.
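A minimal synthetic sketch of this masking effect is below; the variable names, effect sizes, and correlation structure are assumptions for illustration, not figures from the Chicago dataset. It shows how a feature that is never derived directly from a protected attribute can still carry a strong statistical trace of it.

```python
# Hypothetical sketch: a "neutral" feature (pothole repair speed) can act
# as a proxy for a protected attribute it never directly touches.
# All data is synthetic; names and effect sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Protected attribute (never given to any model): 0 or 1.
protected = rng.integers(0, 2, size=n)

# In this synthetic world, income correlates with the protected attribute.
income = 40_000 + 25_000 * protected + rng.normal(0, 10_000, size=n)

# Repair speed (days to fix a reported pothole) depends only on income:
# wealthier streets get faster repairs. `protected` is never used here.
repair_days = 30 - income / 5_000 + rng.normal(0, 3, size=n)

# Yet repair speed remains strongly correlated with the protected attribute.
corr = np.corrcoef(repair_days, protected)[0, 1]
print(f"correlation(repair_days, protected) = {corr:.2f}")

# A pricing or credit model trained on repair_days would therefore encode
# the protected attribute indirectly, without ever seeing it.
```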
The publication notes that finding ways to address these data privacy concerns is an active area of research. However, regulators could also take advantage of the same AI technologies to identify firms that are circumventing traditional data protections. Such use of data is well within the capability of regulators – assuming the investments are made to empower them.
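As a hedged sketch of what such a regulator-side capability could look like, the example below audits a black-box model’s decisions for disparate impact between two groups. The four-fifths threshold is a common rule of thumb borrowed from US employment practice, assumed here purely for illustration; the FSB report does not prescribe it.

```python
# Hypothetical regulator-side audit: given a firm's decisions and a
# reference dataset of protected attributes, measure disparate impact.
# The data and the 0.8 threshold are illustrative assumptions.
import numpy as np

def disparate_impact(decisions: np.ndarray, group: np.ndarray) -> float:
    """Ratio of approval rates between group 1 and group 0."""
    rate_g1 = decisions[group == 1].mean()
    rate_g0 = decisions[group == 0].mean()
    return rate_g1 / rate_g0

# Example: approvals drawn from a model the regulator treats as a black box.
rng = np.random.default_rng(1)
group = rng.integers(0, 2, size=5_000)
# Synthetic decisions that quietly disadvantage group 1.
decisions = (rng.random(5_000) < np.where(group == 1, 0.45, 0.65)).astype(int)

ratio = disparate_impact(decisions, group)
print(f"disparate impact ratio: {ratio:.2f}")
print("flag for review:", ratio < 0.8)  # four-fifths rule of thumb
```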
All is not lost
While AI races ahead at breakneck speed, it is important to note that all of these systems are implemented by humans. Although this has the potential to reinforce or obscure human bias, it also gives us an opportunity to examine current rules and regulations for existing implicit bias.
Reviewing these biases with a critical eye is a worthwhile effort in its own right. If collective energy can be focused on methods of understanding, clearly understood structures of accountability, and aggressively active engagement on the part of regulators, then there is an opportunity to drive deeper efficiency into almost all aspects of financial market systems while avoiding the pitfalls of reckless exuberance.
Brian Martin,
director of technology,
Publicis.Sapient