An AI for Humanity
This is the text of a talk I gave to the European Commission in Brussels on 10 October 2023.
For years academics have published studies about the limits of automation by AI, suggesting that jobs requiring creativity were the least susceptible to automation. That turned. out. well.
Actually, that's not completely true: some said that jobs that need a long period of education, like teaching and healthcare, were going to be the hardest of all to automate. Oh. dear.
Let's face it, all predictions about the limits of AI have been hopelessly wrong. Maybe we need to accept that there aren’t going to be any limits. How is this going to affect our society?
This year, studies from Stanford and MIT looked at the potential of AI assistants to improve the productivity of office workers. Both came to the same conclusion: the workers with the lowest ability and least experience were the ones who gained the most in productivity.
In other words, AI has made human knowledge and experience less valuable.
Researchers at Microsoft and OpenAI wrote something important on this phenomenon that I’d like to quote in full:
Large swaths of modern society are predicated on a “grand bargain” in which professional classes invest years or even decades in technical education and training and are [afforded] the exclusive right to practice in their field, social prestige, and above-average compensation.
Technical disruption of this social contract can have implications not only for the medical field but for numerous other knowledge-intensive professions including law, banking, engineering, accounting, and others.
Let’s talk about the fairness of this. Because the AI models didn’t invent medicine, accountancy or engineering. They didn’t learn anything directly from the world—human experts taught AI models how to do these things. And they did it without giving their permission, or even knowing that it was happening.
The large tech companies have sucked up all of human knowledge and culture and now provide access to it for the price of an API call. This is a huge transfer of power and value from humanity to the tech companies.
Biologists in the 1990s found themselves in a very similar position. Celera Genomics was trying to achieve commercial control over the human genome. To stop this happening, the publicly funded Human Genome Project (HGP) resolved to sequence the human genome and release the data for free on a daily basis, before Celera could patent any of it.
The HGP was criticised because of ethical concerns (including concerns about eugenics), and because it was thought to be a huge waste of money. The media attacked it, claiming that a publicly funded initiative could not possibly compete with the commercial sector. Fortunately for humanity, a group of scientists with a vision worked together to make it a success.
And it was a huge success: in purely economic terms it produced nearly $1 trillion in economic impact for an investment of about $4 billion. Apart from the economics, the Human Genome Project accelerated development of the genomic technologies that underlie things like mRNA vaccine technology.
The parallels to our current situation with AI are striking. With OpenAI, just like Celera, we have a commercial enterprise that launched with an open approach to data sharing but eventually changed to a more closed model.
We have commentators suggesting that a publicly funded project to create an open-source AI would be ethically dubious, a waste of money and beyond the competency of the public sector. Where the analogy breaks down is that unlike in the 1990s, we do not have any strong voices arguing on the other side, for openness and the creation of shared AI models for all humanity.
Public funding is needed for an “AI for humanity” project, modelled on the Human Genome Project. How else can we ensure the benefits of AI are spread widely across the global population and not concentrated in the hands of one or two all-powerful technology companies?
We’ll never know what the world would have looked like if we’d let Celera gain control over the human genome. Do we really want to find out what a world where we let technology companies gain total control over artificial intelligence looks like?
FAQ
How about all the ethical considerations around AI - shouldn’t we consider these before releasing any open-source models?
Of course. Obviously, there are ethical implications that need to be considered carefully, just as there were for the genome project. At the start of that project, the ethical, legal, and social issues (ELSI) program was set up. The National Institutes of Health (NIH) devoted about 5% of its total Human Genome Project budget to the ELSI program, which is now the largest bioethics program in the world. All important ethical issues were considered carefully and resolved without drama.
Aren’t there enough community efforts to build open-source AI models already?
There are good projects producing open-source large language models, like Llama 2 from Meta and Falcon from the Technology Innovation Institute (TII) in the United Arab Emirates. These are not quite as powerful as GPT-4, but they prove the concept that open-source models can approach the capabilities of the front-running commercial models, even when produced by a single well-funded lab (a state-funded lab in the case of the TII). A coordinated, international, publicly funded project will be needed to surpass the commercial models in performance.
In any case, do we want to be dependent on the whims of the famously civic-minded Mark Zuckerberg for access to open-source AI models? We shouldn’t forget that the original Llama model was released with a restrictive licence that was eventually changed to something more open after a community outcry. We are lucky they made this decision. But the future of our societies needs to rely on more than luck.
How about the UK Government AI Safety Summit and AI Safety Institute - won’t they be doing similar work?
Absolutely not! The limit of the UK Government’s ambition seems to be to set the UK up as a sort of evaluation and testing station for AI models made in Silicon Valley. This is as far from the spirit of the Human Genome Project as it’s possible to be.
Sir John Sulston, the leader of the HGP in the UK, was a Nobel Prize-winning scientific hero, determined at all costs to stop Celera Genomics from gaining monopolistic control over the human genome. The current UK ambition would be like reducing the Human Genome Project to merely checking Celera Genomics’ data for errors.
How will an international ‘AI for humanity’ project avoid the devaluation of human knowledge and experience, and consequent job losses?
It may not be possible to avoid this. But governments will at least be able to mitigate societal disruption if they can redistribute some of the wealth gained via AI (e.g. via universal basic income). They will not be able to do this if all of the wealth accrues to only one or two technology companies based in Silicon Valley.
How about existential risk?
‘Existential risk’ is a science fiction smokescreen generated by large tech companies to distract from the real issues. I cannot think of a better response than the words of Prof Sandra Wachter at the University of Oxford: “Let’s focus on people’s jobs being replaced. These things are being completely sidelined by the Terminator scenario.”
Martin Goodson is the former Chair of the RSS Data Science and AI Section (2019-2022). He is the organiser of the London Machine Learning Meetup, the largest network of AI practitioners in Europe, with over 11K members. He is also the CEO of AI startup, Evolution AI.
The views expressed are his own and do not necessarily represent those of the RSS.
Footnote: Microsoft and Amazon are based in Seattle, not Silicon Valley.