Most of us are worried about AI rules
With every technological breakthrough there need to be updates to the rules, and the UK general public think the government needs to pull its finger out when it comes to artificial intelligence (AI), reports Telecoms.com (Banking Technology’s sister publication).
That rules need to change to keep up with everyday life is a given, but it is also a fair assumption that government and regulators will be miles behind. New research from Machina Summit.AI has revealed that the UK general public are concerned about the role of AI in an unregulated environment.
65% of respondents are positive about AI, believing it has the potential to make the world a better place; however, 82% also believe there will be organisations that use the technology in a negative way.
It’s an interesting contradiction, but an understandable one. The optimism is there, but so is a sense of realism about the fact that there are nefarious individuals out there who will use the power of good for evil. The dark web exists for a reason, after all.
In truth, there needs to be an element of control over the AI euphoria. Few technological breakthroughs have the same potential to cause disruption as AI. Think about the industrial revolution: it removed the necessity to employ humans for repetitive physical tasks. The AI revolution will do the same for repetitive mental ones, but at a much quicker rate.
How this is managed needs to be carefully overseen by an organisation which isn’t concerned about the bottom line. Now this might slow down progress, but in this instance, it is not necessarily a bad thing.
Many will claim the removal of mundane tasks will create more opportunities for people to add value in other areas of the business. But ask yourself one question: will the people being replaced by the machines be suitable applicants for the new roles created?
There will be pockets of the current workforce who do not have the qualifications or education to become data scientists or business analysts. Or what about the taxi drivers who will eventually be replaced by autonomous vehicles? Some might be in their late 50s or 60s; would all of them have the capacity to retrain at that stage of life? Or what about the bookkeepers of the world? That is a qualified role which could be under threat.
In all of these examples, there will be people who can be retrained and employed in another career, but there will be those who can’t. Without legislation or regulation, companies which adopt an AI solution will do so without concern for the welfare of these individuals. They might claim otherwise, but let’s be honest: every business will trim costs if and where possible.
Another example comes from Facebook’s labs. In recent weeks, a Facebook AI programme invented its own language, surprising the programmers themselves. The programmers forgot to incentivise the chatbots to stick to English, so they did what they wanted.
This was in a controlled environment, with a very limited AI programme, so there was no real element of danger, but the idea is there. Unless AI is carefully monitored, standardised and regulated, the risk of adverse effects is high. These programmers were guilty of human error; they simply forgot about a small detail. But this is the sort of thing which will happen without structure. Freedom to experiment is all well and good, but there needs to be an element of control.
Ultimately, AI will have a much more profound impact on our lives than most technological advances we have experienced so far, and the government needs to make sure it keeps pace. When you look at precedent, you can understand why there are concerns in various corners of the world.
In the UK, the Communications Act of 2003 superseded the Telecommunications Act of 1984. That is a very slow rate of change. The Digital Economy Act was initially written in 2010, though a new one was introduced this year. The new act mainly dealt with regulation of the BBC, children’s access to sensitive material and online copyright infringement. It also made some vague promises to deliver the internet to farmers.
When you look at the timescales involved, this is simply not good enough. Ask yourself how much changed between 2010 and 2017, and whether basing 2017 decisions on 2010 rules would be a good idea. We don’t think so.
It remains to be seen whether governments around the world will wake up to the importance of keeping up to date with technological developments, but doing so is crucial to ensuring the ethical, meaningful and safe development of AI. We are not optimistic, but this is one case where we would be happy to be proved wrong.