The UK’s flagship institute for artificial intelligence, the Alan Turing Institute, has been at best irrelevant to the development of modern AI in the UK.
Interesting.
Open source and the involvement of technology companies in Govt initiatives are necessary if we want to get value - economic and public good - from AI in the UK. Open source also allows scrutiny from the community, which is desirable - maybe even essential - given the potential of LLMs.
This is a very good articulation of one of the main failures of both the UK's top-down AI efforts and of the Alan Turing Institute. The conference they organised just two months ago was shockingly blinkered, considering LLMs mainly as a regulatory risk.
Shame to hear that their conference was so weak, Michael - I suspected it would be.
Modern AI advancement was kickstarted in academia and then carried forward in private labs that are run like academic labs. Academia is the key to the UK's AI efforts; it can be a greater source of publications, open-source code, and spin-offs than it is now. There's potential and talent there. The private sector should step up investment in AI research and set up in-house labs, without the pressure of immediate product benefits.
All of this can be sponsored and subsidised by the government. The government should acknowledge its limitations with humility and act as a sponsor, not as an objective-setter. Govt doesn't have a crystal ball. No panel can predict the right direction in something as experimental and empirical as AI research. Give universities compute and funding for scholarships. Give companies more freedom to develop and use AI. Researchers will find the topics to work on, and some of them will break through. Fund the collection of big datasets across industry and academia. I can imagine there are even better ideas.
Wired published an article about OpenAI's founding story back in 2016. One can see how the lab was formed and led: money + talent + freedom to research and publish. The industrial labs at the tech giants are run the same way (Meta's LeCun talks about publishing and open source all the time).
In summary, the government should fund researchers and remove barriers. People know what to work on. UK academia can attract them and industry can create the conditions for them to continue the work. Nurture this. No need for a steering committee.
PS The agenda of the Alan Turing Institute is embarrassing; it sounds like an echo-chamber org.
Totally agree.
I attended a DS and AI EDUCATORS course with the Turing this year and they spent a huge chunk of time talking about liberal arts, social justice and ethics.
Now, it's not that these things aren't important - they should sit underneath everything we look at as a starting point - but it does seem to me that whoever is responsible for looking outwards and setting the vision very much got caught up in a certain kind of politics rather than seeing the world as it is and focusing on the core mission, as they should.
They also made a conscious effort (something I was told on application and still get annoyed about) to reorient away from a blend of private and public sector participants to mostly an academic audience.
Remember, by following the WEF Green Agenda the government has made sure we not only don't have the energy to support a substantial AI industry, it has also made it prohibitively expensive to train LLMs in the UK. Meanwhile, many highly profitable service industries, such as legal services, will be increasingly automated away. The government is doing its best to send us back to the Dark Ages rather than to the sunny uplands of the future. Only by ridding ourselves of the current political class can we have a future.
"Hardware is one hurdle, the United Kingdom has only two top-500 supercomputers suitable for large language model training, while France already has six. This is easily fixed with funding."
Even someone who thinks LLMs are as important as the industrial revolution should know this is a bullshit red herring. What top-500 supercomputer has ever been used for large language model training? I mean, maybe someone used one once: no breakthroughs came of it. Top-500 supercomputers have always been government welfare-cheese boondoggles, just like the Turing Institute.
Jean Zay - the French top-500 supercomputer used to train BLOOM.
Glassdoor indicates that the Alan Turing Institute pays senior research associates $49,015 per year, senior project managers $48k, and senior research fellows the same. I wasn't able to identify any roles there that paid above $50k/year.
Is it any surprise that when you hire people with PhDs and pay them less than bartenders, you don't get the best people? Those people go to Google or Microsoft and make 10x as much.
Exactly
The Turing Institute has been a huge waste of taxpayers' money from the start.
"The UK AI Strategy: are we listening to the experts?"
The UK government has made it clear, repeatedly, that people are sick of experts and we have no need of them, thank you very much. The view of the UK government is that clever chaps come up with things in their garden sheds. That is a metaphor, but only just.
There are lots of ways of looking at this, most of them somewhat depressing. But among them, the inability of HMG to act strategically in any industry seems to be at the core of the problem. Industrial strategy of any kind is anathema across Government, quangos, institutes and the suppliers to which it outsources thinking. It's not a problem confined to the politicians, and there are occasional attempts on the part of individuals to overcome the inertia. The best hope, I fear, is that Government does not deliberately get in the way of AI development in the UK in an attempt to compete with European regulators.
I don't really see the benefit of the extra red tape involved in setting up an AI panel and task force. Most of the people on such panels and task forces don't understand the practical complexities and uncertainties of the work. Effort needs to shift towards building tools for AI trust and assurance, which is the bigger piece of AI ethics work missing in most organizational environments. Simply publishing model after model without publishing trust benchmarks or harms/bias/risk/fairness assessments is not a responsible way of working in the AI community.