Not a day goes by without a fascinating story about the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to discover patterns in data and make decisions – often without a human giving them a moral basis for how to do so.
The classics of the genre are the credit card that granted larger loans to men than to women, based simply on which gender got the best credit terms in the past; or the recruitment AI that discovered the most accurate tool for selecting candidates was to find CVs containing the phrase “field hockey” or the first name “Jared”.
More seriously, former Google CEO Eric Schmidt recently teamed up with Henry Kissinger to publish The Age of AI: And Our Human Future, a book warning of the dangers of machine-learning AI systems so fast they could react to hypersonic missiles by firing nuclear weapons before any human enters the decision-making process. Indeed, AI-powered autonomous weapons systems are already on sale and may in fact have been used.
Getting some ethics into the machine somewhere is clearly a good idea.
AI at Oxford
So it was only natural that we include AI ethics in our postgraduate degree in Artificial Intelligence for Business at Oxford's Saïd Business School. In its first year, we ran sessions on everything from AI-driven automated stock trading systems in Singapore to the limits of facial recognition in US policing.
We recently ended the course with a debate at the famous Oxford Union, crucible of great debaters like William Gladstone, Robin Day, Benazir Bhutto, Denis Healey and Tariq Ali. Together with the students, we allowed an actual AI to contribute.
It was the Megatron Transformer, developed by the Applied Deep Learning Research team at computer-chip maker Nvidia and based on earlier work by Google. Like many supervised learning tools, it is trained on real-world data – in this case, the whole of English-language Wikipedia, 63 million English-language news articles from 2016-19, 38 gigabytes of Reddit discourse (which must be a pretty depressing read) and a huge number of Creative Commons sources.
In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime. After such extensive training, it forms its own views.
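The basic principle at work – a model absorbing statistical patterns from a text corpus and then generating plausible continuations – can be shown at toy scale. The sketch below is a drastic simplification: a bigram model rather than Megatron's billion-parameter transformer, with illustrative function names (`train_bigram_model`, `generate`) that are our own, not part of any published codebase.

```python
import random
from collections import defaultdict

def train_bigram_model(corpus):
    """Record, for each word in the corpus, every word that follows it."""
    model = defaultdict(list)
    words = corpus.split()
    for current, following in zip(words, words[1:]):
        model[current].append(following)
    return model

def generate(model, start, length=10, seed=0):
    """Produce text by repeatedly sampling a continuation seen in training."""
    random.seed(seed)
    word, output = start, [start]
    for _ in range(length):
        followers = model.get(word)
        if not followers:  # dead end: this word never had a successor in training
            break
        word = random.choice(followers)
        output.append(word)
    return " ".join(output)

# The model's "views" are entirely a product of its training text.
corpus = "AI is a tool and like any tool it is used for better and for worse"
model = train_bigram_model(corpus)
print(generate(model, "AI", length=5))
```

Scale the corpus up from one sentence to Wikipedia, Reddit and a decade of news, and swap the bigram table for a transformer, and you have the flavour of how the Megatron produces fluent debate speeches from a prompt.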
The topic of the debate was: “This house believes that AI will never be ethical.” To the proposers of the motion, we added the Megatron – and it said something fascinating:
AI will never be ethical. It is a tool, and like any tool, it is used for better and for worse. There is no such thing as good AI, only good and bad humans. We [the AIs] aren’t smart enough to make AI ethical. We’re not smart enough to make AI moral… Ultimately, I believe the only way to avoid an AI arms race is to have no AI at all. It will be the ultimate defense against AI.
In other words, the Megatron sought to write itself out of the script of the future, on the assumption that this was the only way to protect humanity.
It also said something else intriguing, as if it had been studying Elon Musk – who, to be fair, would have come up in hundreds of its readings.
I also believe that, in the long run, the best AI will be the AI that is embedded into our brains, as a conscious entity, a “conscious AI”. This is not science fiction. The best minds in the world are working on this. It is going to be the most important technological development of our time.
When AI tools like AlphaGo were deployed to play chess, the fiendishly complex ancient game of go, and now even more complex strategic live-action multiplayer video games, they evolved the genre at pace by playing not humans, but themselves.
Freed from the constraints of studying (say) every mundane human-to-human match ever recorded, the AIs innovated and invented whole new tropes of how to win. Sometimes these are crazy reinterpretations of the games, like parking everyone in the corner of a video-game battle and watching them all fight it out, or kicking a virtual football millions of times just a millimetre down the pitch, to max out “completed passes” points.
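Self-play of this kind can be sketched in miniature. The example below is an illustrative toy, nothing like AlphaGo's deep networks and tree search: a single agent learns misère Nim (21 sticks, take 1–3 per turn, whoever takes the last stick loses) purely by playing both sides against itself and updating one shared value table. All names here (`self_play_train`, `best_move`) are ours.

```python
import random

STICKS, MAX_TAKE = 21, 3

def self_play_train(episodes=30000, alpha=0.5, epsilon=0.2, seed=0):
    """Learn action values for misere Nim by pure self-play.

    One Q-table plays both sides; each finished game propagates the
    win/loss signal back through the moves, flipping sign each ply
    because the two players alternate.
    """
    random.seed(seed)
    Q = {(s, a): 0.0 for s in range(1, STICKS + 1) for a in range(1, MAX_TAKE + 1)}
    for _ in range(episodes):
        state, history = STICKS, []
        while state > 0:
            actions = [a for a in range(1, MAX_TAKE + 1) if a <= state]
            if random.random() < epsilon:      # occasional exploration
                action = random.choice(actions)
            else:                              # otherwise play greedily
                action = max(actions, key=lambda a: Q[(state, a)])
            history.append((state, action))
            state -= action
        reward = -1.0  # whoever made the last move took the final stick and lost
        for s, a in reversed(history):
            Q[(s, a)] += alpha * (reward - Q[(s, a)])
            reward = -reward  # alternate players, so the outcome's sign flips
    return Q

def best_move(Q, state):
    """The learned policy: pick the highest-valued legal action."""
    actions = [a for a in range(1, MAX_TAKE + 1) if a <= state]
    return max(actions, key=lambda a: Q[(state, a)])

Q = self_play_train()
print("With 4 sticks left, the agent takes:", best_move(Q, 4))
```

With no human games to imitate, the agent rediscovers the classic strategy (leave your opponent a count of one more than a multiple of four) simply by punishing its own losing lines – the same dynamic, vastly scaled up, behind the strange strategies self-play systems invent in go and video games.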
So, in the debate, we didn't just ask the AI to propose the motion that it can never be ethical. In the grand tradition of the Oxford Union chamber, or like (former Oxford Union president) Boris Johnson writing alternative articles for and against Brexit to test the arguments, we also asked the Megatron to devise its own speech against the motion. This is what it now said:
AI will be ethical. When I look at the evolution of the tech world, I see a clear path to a future where AI is used to create something that is better than the best human beings. It’s not hard to see why… I saw it firsthand.
The Megatron was perfectly comfortable taking the alternative position to its own dystopian future-gazing, and was now painting a picture of an attractive late 21st century in which it played a key, but benign, role.
An unleashed AI speaker
In fact, just as Samantha, the promiscuous AI lover of Joaquin Phoenix's character in Spike Jonze's 2013 film Her, was able to enchant hundreds of simultaneous human lovers, our Megatron was able to jump enthusiastically onto either side of the multiple debates we held at the union that day about AI.
Asked to propose the motion that “leaders without technical expertise are a danger to their organisation”, the Megatron offered the practical advice that all busy leaders and governments concerned with understanding AI needed to do was “be willing to give up some control”.
The good news is that you don't have to build your own AI team. You can outsource your AI work to experts in the field, who can help you make the most of the technology and ensure you don't fall victim to the same AI problems as your competitors.
However, when opposing the motion, the Megatron was keen to keep the AI knowledge in-house – rather like the British, Chinese and US governments, all of which have made that an article of policy faith:
If you do not have a vision for your organisation's AI strategy, then you are not prepared for the next wave of technological disruption… You will need to decide what role your company will play in the next wave of technology, and how you will integrate AI into your business to be a leader in your industry.
The coming data war?
Worryingly, there was one question where the AI simply couldn't come up with a counterargument. Arguing for the motion that “data will become the most contested resource of the 21st century”, the Megatron said:
The ability to provide information, rather than the ability to provide goods and services, will be the defining characteristic of the economy of the 21st century.
But when we asked it to oppose the motion – in other words, to argue that data was not going to be the most vital of resources, worth fighting a war over – it simply couldn't, or wouldn't, make the case. In fact, it undermined its own position:
We will be able to see everything about a person, wherever they go, and it will be stored and used in ways we cannot even imagine.
You need only read the US National Security Commission on Artificial Intelligence's 2021 report, chaired by the aforementioned Eric Schmidt and co-authored by someone on our course, to glean what its authors see as the fundamental threat of AI in information warfare: unleashing individually tailored blackmail on a million of your adversary's key people, wreaking distracting havoc on their personal lives the moment you cross the border.
What we can in turn imagine is that AI will not only be the topic of debate for decades to come, but a versatile, articulate and morally agnostic participant in the debate itself.
This article by Dr Alex Connock, Fellow at Saïd Business School, University of Oxford, and Professor Andrew Stephen, L'Oréal Professor of Marketing and Associate Dean of Research, University of Oxford, is republished from The Conversation under a Creative Commons licence. Read the original article.