At a special screening of his new film “Oppenheimer,” the renowned filmmaker Christopher Nolan drew comparisons between the rise of AI and the development of atomic weapons. According to Variety, Nolan opined that most of the people who run the organisations currently leading the charge in the industrial adoption and promotion of AI “don’t know what an algorithm is,” and they certainly “don’t want to take responsibility for whatever that algorithm does.” Nolan is, of course, correct here. Caught up in the current AI hype, many an aunt, uncle, politician, and multinational CEO has enthusiastically adopted AI tools with little understanding of the underlying technology or its inherent practical and ethical implications. Is a chatbot just a chatbot? How does this play out when said chatbot is used to scam your grandmother? What level of liability should be assigned to the purveyors of these technologies? These are the practical questions with which we have become all too familiar since OpenAI (which is now much less open than it once was) launched ChatGPT on November 30, 2022. However, the auteur, Nolan, has larger concerns.
Channelling his inner James Cameron, Nolan fears that “AI systems will go into defensive infrastructure ultimately. They’ll be in charge of nuclear weapons.” This all sounds very much like Judgment Day, but Nolan’s concerns are well founded and raise important questions about accountability on a larger stage. “To say,” he continued, “that that [AI] is a separate entity from the person wielding, programming, putting that AI to use, then we’re doomed. It has to be about accountability. We have to hold people accountable for what they do with the tools that they have.”
So, we’re at a point where we are unleashing another new technology with atomic-like potential. But, for the most part, we do not have an Oppenheimer. That is to say, we do not have scientists who are both technically skilled and ethically minded building these systems. AI pioneer Geoffrey Hinton is the obvious exception here, but by and large we have jumped right over the existential fear that clings to atomic weapons like a bad smell, straight to the utopianism that accompanied nuclear power (energy that would be “too cheap to meter”). Except this time, rather than cheap and abundant energy, the AI revolution promises a revolution in work. We won’t have to read books or articles because AI can summarise them for us. Why learn how to paint when we can just use Midjourney or DALL·E 2? Or, perhaps more pressing, why hire an illustrator when we can prompt an AI? Again, these are all obvious and well-worn topics – what will we do with X now that AI can do that job? – but there are deeper and more pressing issues. The economic displacement of artists by AI is, to a degree, an intended consequence – image generators were made to make art. What should truly worry us is what happens when these technologies act in unintended and unexpected ways. Further, given the speed at which these technologies operate, what happens when such unintended consequences are replicated millions or billions of times? The answer is that even small mistakes can be magnified into significant, even severe, consequences. What, then, is to be done?
We don’t have an easy answer, but in our own small way – and with a much, much smaller budget – we attempt in our forthcoming documentary film “Project Q: War, Peace and Quantum Mechanics” to show how the first (atomic) and second (semiconductor) quantum revolutions measure up to the coming third quantum revolution in computing, communications, and artificial intelligence. Now, that isn’t an answer per se, but perhaps we can find a semblance of a solution to the problems of the future by reconsidering the past.
On that note, and with hope in our collective hearts, we are heading to the cinema!