Teradata Universe Madrid 2019: Teradata agrees that less is more
Last year the San Diego-based group declared that much of the analytics investment made by banks and corporates was wasted, and said it was consolidating its offerings so that customers could get answers from their data, reports Graham Buck.
Teradata caused something of a stir last October at its annual event in Las Vegas, when it urged financial institutions and corporates to “stop buying analytics”. It might seem a self-defeating command, given that data management solutions for analytics (DMSA) have been the company’s core business for much of its 40-year history, and certainly since it gained independence from NCR Corporation in 2007.
However, the company’s actual – and longer – message was “stop buying analytics – it’s time to invest in answers”, and it featured prominently at this week’s Teradata Universe EMEA conference in Madrid, which gathered more than 400 of its customers from 36 countries.
“Technology is coming together to help us lead better lives. It can now understand who we are at every single touch point, while at the same time respecting our privacy,” said Oliver Ratzesberger, previously Teradata’s chief operating officer (COO) and its president and CEO since the start of this year.
“Data and analytics can be used to make our lives easier – for example even compiling our tax returns for us. But there is a danger that the capabilities are being hampered by increasing complexity.
“There is so much technology that we are flooded with it, both as individuals and as businesses. It poses a challenge for companies: millions of pieces of technology and open source projects are creating ever-increasing numbers of data silos. This is helping to power a $200 billion analytics industry.”
Yet as Ratzesberger noted, a recent survey of corporates found that 74% of respondents agree analytics technology has become too complex, while 79% of their employees complain that they are not able to easily access the data they really need.
This had put the brakes on what was, until a couple of years ago, a scramble by many companies to adopt technology, one that had created a level of “technology debt”, or overcomplexity. Spending wasn’t producing the answers that banks and corporates were looking for, and much of the money was being wasted. “The conversation has shifted to radical simplification and operational excellence,” he suggested.
“Predictive data intelligence (PDI) is the new standard for our industry. It cuts across the entire enterprise to bring valuable answers – for the business executive, the CIO/IT and the business analyst.”
Teradata’s own response last year was to simplify its customers’ analytics investments by combining many of its offerings in a new data platform named Vantage.
“Vantage has been developed to transform data into answers via PDI,” said Ratzesberger. “It’s the fastest-growing product in the company’s history and a connective tissue for ecosystem simplification. It takes the risk out of corporate decisions, delivering massive scale and integration.”
Ironing out bias
The Madrid conference was distinct from many fintech events in focusing firmly on predictive analytics and artificial intelligence (AI), with rather less mention of blockchain.
A newly published report by business information provider IHS Markit predicts that the global business value of deployed AI in financial services, estimated at $41.1 billion in 2018, will reach $300 billion by 2030, based on the number of AI projects now underway in the banking sector.
So there was keen interest in a presentation by Teradata’s chief technology officer, Stephen Brobst, on eliminating bias in the deployment of machine learning (ML). Although, as he himself admitted, the title was somewhat misleading: while it’s possible to reduce inherent bias, eliminating it is a much tougher challenge.
“Deep learning and AI are widely perceived as some sort of magic,” said Brobst. “But deep learning is a matter of maths, not magic – a statistical method, whereby we can ‘learn’ to classify patterns using multi-layer neural networks.
“We put things into a ‘black box’ and, using maths instead of humans to make decisions, we assume bias will be eliminated. But undesirable bias occurs when an AI solution reflects the values of its human designers.”
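Brobst’s “maths, not magic” characterisation is easy to make concrete. The sketch below is not from his presentation – it is a purely illustrative, minimal multi-layer network that learns the classic XOR pattern using nothing more than matrix multiplication, a sigmoid non-linearity and gradient descent.

```python
# Illustrative only: a tiny two-layer neural network learning XOR.
# "Deep learning" here is just matrix maths plus gradient descent.
import numpy as np

rng = np.random.default_rng(0)

# Toy pattern-classification task: XOR of two binary inputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialised weights for a 2 -> 4 -> 1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0  # learning rate
for step in range(10_000):
    # Forward pass: each layer is a weighted sum plus a non-linearity.
    h = sigmoid(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of the squared error via the chain rule.
    d_p = (p - y) * p * (1 - p)
    d_h = (d_p @ W2.T) * h * (1 - h)

    # Gradient-descent updates.
    W2 -= lr * (h.T @ d_p); b2 -= lr * d_p.sum(axis=0)
    W1 -= lr * (X.T @ d_h); b1 -= lr * d_h.sum(axis=0)

print(p.round(2))  # typically approaches [[0], [1], [1], [0]]
```

Nothing in those few lines is opaque in principle; the opacity arrives at scale, when millions of learned weights make it impractical to say why a particular input produced a particular output – which is exactly where unexamined human choices about data can hide.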
The danger of built-in bias is a theme developed by US mathematician Cathy O’Neil in her 2016 book Weapons of Math Destruction. O’Neil argues that insulating algorithms and their creators from public scrutiny means they are likely to contain built-in bias, and AI algorithms risk being similarly flawed.
Indeed, AI algorithms have not one but several potential Achilles’ heels, suggests Brobst. “Solutions are only as good as the data that you put into them, and they can be undermined.”
Selection bias stems from human decisions about the sources from which a data set is acquired, and about which observations from that data are used and which are discarded. Stability bias means that the input data fails to include the most recent and valuable findings, and may be further impaired by regulatory rigidity.
There’s also the danger of emergent bias, resulting from changes in societal knowledge, consumer behaviour and cultural values, or of bias in biometric data – such as speech recognition whose accuracy is highly correlated with certain ethnic accents.
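Of these, selection bias is the easiest to demonstrate. The snippet below is an illustrative assumption rather than anything shown at the conference: it fabricates a toy population and shows how acquiring data from only one source skews whatever a model subsequently “learns” from it.

```python
# Illustrative only: how a biased acquisition source distorts a data set.
import random

random.seed(42)

# Hypothetical population: incomes from two different neighbourhoods.
population = ([random.gauss(30_000, 5_000) for _ in range(5_000)]
              + [random.gauss(80_000, 10_000) for _ in range(5_000)])

# Selection bias: the data set is acquired from only one source.
biased_sample = population[5_000:]              # only the wealthier group
fair_sample = random.sample(population, 5_000)  # drawn from everyone

print(f"True mean income:   {sum(population) / len(population):>10,.0f}")
print(f"Biased-sample mean: {sum(biased_sample) / len(biased_sample):>10,.0f}")
print(f"Fair-sample mean:   {sum(fair_sample) / len(fair_sample):>10,.0f}")
```

Any model trained on the biased sample would treat the wealthier group’s characteristics as the norm, however sound its maths.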
“Bias doesn’t originate in AI algorithms, but is sourced from humans,” said Brobst, noting that even today a single male on a certain salary applying for a mortgage is likely to be viewed more favourably by a bank, when his ability to repay is assessed, than a single female earning the same amount.
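One simple way to surface the kind of bias Brobst describes is a counterfactual test: hold every input constant, flip only the protected attribute, and see whether the score moves. The scoring function below is a hypothetical stand-in for a bank’s real model, deliberately seeded with the very bias being hunted.

```python
# Hypothetical counterfactual-bias check for a credit-scoring model.
from dataclasses import dataclass, replace

@dataclass
class Applicant:
    salary: float
    existing_debt: float
    gender: str  # protected attribute -- should not drive the score

def score_applicant(a: Applicant) -> float:
    """Stand-in for a real scoring model, seeded with a gender penalty."""
    base = min(a.salary / 100_000, 1.0) - 0.3 * (a.existing_debt / a.salary)
    penalty = 0.05 if a.gender == "F" else 0.0  # the bias being tested for
    return base - penalty

def counterfactual_gap(a: Applicant) -> float:
    """Score change caused purely by flipping the protected attribute."""
    flipped = replace(a, gender="F" if a.gender == "M" else "M")
    return score_applicant(a) - score_applicant(flipped)

applicant = Applicant(salary=60_000, existing_debt=12_000, gender="M")
print(f"Counterfactual gap: {counterfactual_gap(applicant):+.3f}")
# A non-zero gap means the protected attribute alone moved the score.
```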
He also noted that bias in deep learning is a bigger issue than with traditional ML because of the black-box nature of neural network algorithms. Bias in AI solutions is already illegal under the policies set out in the European Union’s General Data Protection Regulation (GDPR), and similar provisions feature in the California Consumer Privacy Act (CCPA), New York City’s Algorithmic Accountability Bill and India’s Personal Data Protection Bill.
“So pinpoint the places where bias could potentially be introduced into your AI solutions,” he told his audience. “Bias can be understood – and managed. Proactive management of bias is critical for the successful deployment of AI.”
Brobst’s key recommendations towards this goal include:
- Creating a standard-based approach for developing and deploying machine learning;
- Identifying prejudices that exist in processes for collecting and processing data;
- Testing for unwanted biases that may be embedded in algorithms (a minimal sketch of such a test follows this list);
- Investing in explainability;
- Monitoring the performances of algorithms;
- Challenging assumptions.
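As a concrete illustration of the third recommendation, a bias test can be as simple as comparing a model’s approval rates across groups in an audit sample – the demographic-parity check sketched below. The data and the tolerance threshold are illustrative assumptions; in practice the threshold is a policy decision, not a technical one.

```python
# Illustrative demographic-parity audit of a model's decisions.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

# Hypothetical audit sample of (group, decision) pairs from a deployed model.
sample = [("M", True), ("M", True), ("M", False),
          ("F", True), ("F", False), ("F", False)]

rates = approval_rates(sample)
gap = max(rates.values()) - min(rates.values())
print(rates, f"gap={gap:.2f}")

if gap > 0.10:  # illustrative tolerance
    print("Warning: approval rates diverge across groups -- investigate for bias")
```

Run routinely, as the monitoring recommendation above implies, such a check can also catch emergent bias as behaviour drifts after deployment.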