Improving operational efficiency through digital transformation
With increasingly demanding customers and competition from fast-growing fintech companies, banks face new pressure to digitise their offerings. Developing new software is key, but finding a reliable means of quantifying the productivity of software teams and the quality of their code is increasingly difficult.
How can banks be sure they are getting a reasonable output from the large investments they make in software engineering?
Software development plays a significant part in the way banks operate and determines the scope and quality of services they offer. The need to automate their processes is driven by customer demands, the increased use of mobile technologies, new regulatory obligations and security concerns, along with competition from new technology-led financial services providers.
The urgency to deliver highly innovative new value propositions, as well as to maintain and improve existing products, makes software development a strategic imperative. However, it also represents a major operational cost for banks. As a major area of investment, it is increasingly crucial for banks to actively control software development costs and quality to optimise the speed to market of new digital services.
For more than half a century, banks have explored various approaches to measuring the productivity of software developers, including counting lines of code, function points and story points.
Methods falling short
Using the number of lines of code produced by software developers as a productivity measure assumes that every line of code is proportional to progress through the project – which is not always the case. It also assumes that each line of code represents the same amount of work, and that this holds across all languages and source file types.
Another problematic aspect of measuring lines of code is that it is particularly susceptible to manipulation. If software developers know their employers are using a particular metric to judge their productivity, they can change their working practices to meet requirements – while compromising the value of the code. For example, developers may bloat code bases with unnecessary lines to appear productive, when fewer, more succinct lines will most often result in more maintainable software that runs more efficiently. Software developers can effectively “game” the metric.
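A small hypothetical illustration of this gaming problem: the two Python functions below do exactly the same job, yet the deliberately padded version is several times longer. A lines-of-code metric would rate its author as more productive, even though the concise version is the better code.

```python
# Two functionally equivalent implementations of the same task:
# summing the even numbers in a list.

def sum_evens_verbose(numbers):
    # Deliberately padded version: every step is spelled out
    # across extra lines, inflating the line count.
    total = 0
    for number in numbers:
        remainder = number % 2
        if remainder == 0:
            total = total + number
    return total

def sum_evens_concise(numbers):
    # Equivalent one-line version: less "output" by a
    # lines-of-code metric, but easier to maintain.
    return sum(n for n in numbers if n % 2 == 0)

data = [1, 2, 3, 4, 5, 6]
# Both produce the same result, so the extra lines add no value.
assert sum_evens_verbose(data) == sum_evens_concise(data) == 12
```

Measured purely by lines of code, the verbose author appears roughly three times as productive; measured by delivered functionality, the two are identical.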
Functionality is central to the purpose of creating software. Each function point represents fulfilment of one step towards producing software with the functionality required. Counting function points may appear an appealing way to measure productivity, but the counts vary depending on the counting process used and on the architectures and languages in use across the applications in a portfolio. The insight function points give on an application-by-application basis may be interesting if consistency of measurement can be achieved, but they are not a reliable, consistent or fair basis for comparisons across applications in a software estate.
Equally problematic, function points require different amounts of work depending on the language used, and they do not account for work invested in developing software that does not directly deliver functions. Function points are also not attributable to individual developers or teams: they are written into source code by many contributors, and individual contributions cannot be separated out at the point of counting.
Perhaps most significant is the expense of gathering such data, because it must be collected manually and requires iterative recounts. From a stakeholder perspective, the insight function points offer is complex: without a software development background and an understanding of function point calculation, it is unrealistic to expect someone to interpret the data as a measure of value – in financial or productivity terms.
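To make the manual-counting cost concrete, the sketch below computes an unadjusted function point total from the commonly cited IFPUG complexity weights. The component tallies are entirely hypothetical; the point is that every component must first be identified and classified by hand, which is what makes the method slow and expensive.

```python
# Hedged sketch of an *unadjusted* function point count using the
# commonly cited IFPUG complexity weights (low / average / high).
IFPUG_WEIGHTS = {
    "external_input":     {"low": 3, "average": 4,  "high": 6},
    "external_output":    {"low": 4, "average": 5,  "high": 7},
    "external_inquiry":   {"low": 3, "average": 4,  "high": 6},
    "internal_file":      {"low": 7, "average": 10, "high": 15},
    "external_interface": {"low": 5, "average": 7,  "high": 10},
}

def unadjusted_function_points(tallies):
    """tallies: {(component_type, complexity): number_of_components}.

    Each tally must be produced by a human analyst classifying the
    application's components -- nothing here can be automated away.
    """
    return sum(
        IFPUG_WEIGHTS[ctype][complexity] * count
        for (ctype, complexity), count in tallies.items()
    )

# Hypothetical tallies for a small application.
tallies = {
    ("external_input", "average"): 5,  # e.g. data-entry screens
    ("external_output", "low"): 3,     # e.g. simple reports
    ("internal_file", "high"): 2,      # e.g. complex data stores
}
total = unadjusted_function_points(tallies)  # 5*4 + 3*4 + 2*15 = 62
```

Note that the arithmetic is trivial; the cost lies entirely in producing and re-producing the tallies, and the resulting number attaches to the application, not to any individual developer.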
Conceptually related to function points are story points and, above them, use case points – different approaches to codifying the functionality to be delivered. All of these come with variants of the same problems: it is fiendishly difficult to achieve consistency of measurement across applications, teams, or even individuals, making them a poor choice for evaluating the performance of software developers within a bank or financial institution.
Assurance with consistent, reliable metrics
For banks to be assured that software developers are being productive and writing quality code within budget, they need to deploy a consistent, reliable metric to enable a strong and robust comparison of individuals, teams, and projects.
This is a complicated problem, but one that can now be solved: the technology exists.
Banks need to digitise their offerings to compete with technology-led financial services providers, and successfully managing software development is essential to doing so. Monitoring the productivity of software developers enables banks to stay competitive by cost-effectively bringing new software products and software-enabled operations to market faster, and by improving the operational efficiency of so-called “legacy” infrastructure through digital transformation.
By Jason Rolles, CEO of BlueOptima