The infallible innovator: banking paradoxes for the age of disruption
If you are a regular reader, you know how I feel about corporate innovation. Essentially we are hired to do new things for the first time. Then badgered to make them look like old things and make each first time feel familiar. And although innovation is framed, at the top level, as the bank’s learning playground, the place where we will experiment, fail fast and pivot, the reality is that experimentation is encouraged on condition of infallibility.
Essentially you are hired to be God, minus the praying, the adulation and the thanksgiving.
Getting the plan right is non-optional
The conversation usually goes like this.
Let’s do a thing. A thing we’ve never done before, using a technology we’ve never used before, to achieve an outcome we’ve never had before.
Great.
But before we do it, give us guarantees that the tech will work, the numbers will stack up, the assumptions are sound and everyone will get their heart’s desire. And rather than rolling our eyes, we innovation bods actually do all that. We give proof-of-tech acceptance criteria and benchmarking reports, NPV scores and value pool assessments. We advocate.
This is not wise, for we are not advocates of this solution any more than the next one. It’s the outcome we are after, and if we have to test and discard ten ways of getting there before we alight on the right one, so be it.
But no.
Advocacy is needed for sign-off. So we advocate against our better judgment.
Adding insult to injury, once we get post-advocacy approval, we are expected to provide accurate estimations of effort, man-days and outputs. For the experiment. For the thing we are all doing for the first time, with the tool we are all using for the first time, with the tech none of us have touched before. And nobody shouts “this is crazy talk”. Instead we estimate. And we would get it scarily right (we are getting good at this) if it weren’t for all the dependencies the corporate overlords don’t want to discuss.
The four months it takes the business, compliance, operational risk and legal to OK their little corner of whatever it is you are trying to do after project sign-off, each trying not to go first.
The two months it takes “the other team” to extract some static data that wasn’t even contested.
The six weeks it takes to provision an environment.
The three months of email tennis between InfoSec and IT before we can even start.
By the time all of that has passed and all the comedy incidents have occurred and been quietly resolved, you better hope your estimations are right, because there is no more scope for delay in this pilot. Any buffer you had secured to experiment, pivot, get it wrong and try again was eaten up and spat out by housekeeping.
Getting the tool right is non-optional
So better hope you picked the right kit to prove whatever it is you are proving.
I know it was an experiment with proof of tech baked in. But you are now several months down the road, with approvals, committees and control function sign-offs all in place. The data is finally extracted and anonymised, the environment provisioned, the licences or partners required for the test on-boarded, the specialists seconded and ready to go. You’ve lost a few friends along the way and gained a few more wrinkles.
Time to turn the engines on and see if the kit works, before you get to testing the hypothesis at last.
Better hope the tech you are using works. And is the right one for what you are trying to prove. Because after all this effort, all this pain, all this anger from the people who had to help you along the way (who are adamant it is not their job to help you and that they deserve canonisation for even answering your calls), after all this, success is the baseline expectation.
If you picked the wrong tech, if you picked the wrong partner, if the hypothesis holds but the solution doesn’t deliver, brace yourself because next time your approvals will take twice as long. Willingness to help will be further reduced and corporate support will flicker just above zero. Because you are the guy who advocated so passionately and got everyone working for months and months for something that didn’t work. No use reminding them you didn’t advocate, just danced the corporate dance. No use reminding them the delays were theirs and the pain down to their policies and silos and analogue technology, not your experiment.
It’s no use stressing that it was an experiment, that failure is learning.
Making mistakes is at the heart of scientific discovery, but in banking innovation getting the kit wrong is not an option, because if you do, it will be the last mistake you get to make.
Getting the use case right is non-optional
So you do a lot of the testing and validating and losing sleep in your own time. And although you still don’t advocate, you also know better than to put forward things you don’t have a high level of confidence in. There goes open-ended experimentation for big breaks. But that’s OK. That’s not in the job description.
We still get to build new things to unlock new value.
Only the value needs to be pre-defined and pre-measured. And the success criteria set and closely monitored before you even start the journey of getting ready. Of learning the new tool. Of running the test.
Imagine you spend six weeks building a widget giving carbon footprint analytics for individual and institutional investments. And six months trying to get it past UAT and into the next release cycle. Call it a total of nine months, the full gestation period for a human being, to get what is essentially an analytics capability to the end user. It doesn’t matter what the widget does. That’s the easy bit. Once we have the capability up and running and the ability to access, analyse and query the data; the ability to offer secure distribution of the solution; and easy-to-consume visualisations, we can work on any use case you like, Boss.
Only, if the first use case doesn’t work out, we will never get to work on any others.
Because the technology and the whole initiative will be tainted by the lack of business impact the test case had. Which was selected because it was not that big and scary and visible, so if it went wrong it wouldn’t matter too much and if it went right we could dial things up.
Why is it always only you who remembers this conversation?
So although the use case was just a starting point at the beginning, it’s the be-all and end-all by the end. So you better get it right if you want to get to the next one.
Hitting the milestones is non-optional
Project plans are there to be followed and bonuses get slashed if you have too many Red RAGs against your name. So what happens to the accidental discovery along the way? What happens when you discover something that the client wants more than what you set out to build? What happens when you discover some of your assumptions were wrong? What happens when you discover your estimations were wrong because, let’s face it, they were educated guesses based on the least amount of information you would ever have about the project?
What happens is: nothing.
The plan is the plan. The plan is communicated to boards and committees and you are held to it. Meeting the milestones is more important than exploring what you just discovered. Or solving the root cause of the problem you just encountered that you know you will encounter again. Or pausing to build a repeatable process for all the housekeeping tasks that account for 80% of the delays in each innovation project.
You have to meet the milestones. And to do that you have to work your team hard, you have to drive your vendors hard. You have to be so certain that the tool and use case you picked will work. You have to guess where the organisation will trip you up, because of silos, policies, conflicting priorities, incompatible ways of working, lack of knowledge or plain old inertia.
To be allowed to bring your experiment into the world, go live with whatever you built and maybe go on to the next thing, in short, to do the job you were hired to do, you need to make sure your part of the experiment is watertight. Because everything else that can go wrong, on your organisation’s side and maybe the vendor’s too, will.
And while you are running around creating buffers and negotiating dependencies, bashing heads to get people to honour deadlines, standing by people’s desks as they expedite vulnerability scans, harassing procurement and chasing vendors, you have to remember that if you falter, if you make a wrong assumption or take a wrong turn, if you get delayed or just mess up, there will be nobody to square that circle for you.
And all your lives were used up before you even pitched for this experiment: they went up in smoke when you took the job. The very job that stands for doing things for the first time.
You may be your organisation’s face of human-centric design, experimentation and working out how to do new things for the first time, but to keep that face pretty you have to get things right first time, know in advance, meet disruption with conviction and uncertainty with pre-agreed deliverables. You may be human but you cannot err. You may run a learning function but you can’t allow for trial and error.
You may be hired for creativity but you will be rewarded for precision.
You need to know before you learn. You need to experiment without chance of failure.
So there you are.
Expected to have answers to questions as they emerge, expected to get your piece right while securing margin for error for everyone else, expected to be the answer to the executives’ sleepless-night prayers for future profitability.
Ours is neither the power nor the glory, and yet we keep coming back for more.
It is a cult of sorts.
By Leda Glyptis
Leda Glyptis is FinTech Futures’ resident thought provocateur – she leads, writes on, lives and breathes transformation and digital disruption.
Leda is a lapsed academic and long-term resident of the banking ecosystem, inhabiting both start-ups and banks over the years. She is a roaming banker and all-weather geek.
All opinions are her own. You can’t have them – but you are welcome to debate and comment!
Follow Leda on Twitter @LedaGlyptis and LinkedIn.