Good morning from New Economy Brief.
As part of its plans to improve productivity in both the public and private sectors, the UK government has revealed more about how it proposes to accelerate the adoption of Artificial Intelligence (AI) technology across the economy.
As well as looking at these plans in more detail, this week’s New Economy Brief explores forecasts of AI’s potential economic impact, asks whether the government is ready for the regulatory challenges it will bring, and considers the threat it could pose to jobs.
–
The AI Opportunities Action Plan.
Last week Prime Minister Keir Starmer unveiled the government’s plan to “turbocharge growth and boost living standards” by “mainlining AI into the nation's veins”. He also confirmed that the government would adopt 50 recommendations from the recently published AI Opportunities Action Plan, which was led by entrepreneur Matt Clifford. Clifford was appointed as the Prime Minister’s AI Opportunities Adviser earlier this month.
What’s in the plan? The plan aims to accelerate the adoption of AI in both the public and private sectors to boost productivity, through a variety of measures including:
Boosting productivity. The government has set its sights on raising the UK’s dismal productivity growth rate in both the public and private sectors. According to the Central Digital and Data Office (CDDO), almost a third of tasks in the civil service could be automated to free up time for other work. Meanwhile, the Tony Blair Institute (TBI) has argued that embracing AI could deliver up to £40 billion in public sector productivity gains, by helping workers “accelerate processing of planning applications or benefits claims, communicate with citizens better through chatbots, collect and process information for transactional services, expedite research and support tasks, manage diaries, draft notes and much, much more”. The Ada Lovelace Institute, however, warns that we simply don’t know enough about AI use in the public sector – because it’s poorly tracked and evaluated – to make firm predictions about the scale of productivity benefits, and that rolling out AI across government and public services should not be seen as a ‘quick win’.
–
What are the likely economic impacts?
As with many new technologies, economic forecasts of the effects of AI are uncertain and vary widely.
The government points to IMF research which calculates that “if AI is fully embraced”, productivity could increase by 1.5% a year – amounting to £47 billion each year over the next decade. The TBI estimates that wider adoption of AI across the economy could raise UK national income by anything between 5% and 14% by 2050, depending on how this is done, with 11% being the most likely outcome – equivalent to around £300 billion a year. In the nearer term, the TBI’s estimates are more modest, predicting GDP will be between 0.1% and 1% higher over the next five years, and 0.6% to 6% higher by 2035.
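As a rough sanity check, the headline cash figures line up with the percentage claims. A minimal sketch, assuming a UK GDP of around £2.7 trillion (a base figure not stated in the forecasts themselves):

```python
# Back-of-envelope check of the headline AI forecasts.
# Assumption: UK GDP of roughly £2.7 trillion; the exact base year
# used by the IMF and TBI is not given in the source.
uk_gdp = 2.7e12  # £, approximate

imf_productivity_gain = 0.015  # 1.5% a year "if AI is fully embraced"
tbi_central_uplift = 0.11      # TBI's most likely outcome by 2050

imf_annual_boost = uk_gdp * imf_productivity_gain  # same order as the quoted £47bn
tbi_annual_boost = uk_gdp * tbi_central_uplift     # close to "around £300 billion"

print(f"IMF-style boost: £{imf_annual_boost / 1e9:.1f}bn a year")
print(f"TBI central estimate: £{tbi_annual_boost / 1e9:.1f}bn a year")
```

The small gap between the computed ~£40bn and the quoted £47bn simply reflects the unknown GDP base and compounding assumptions behind the IMF figure.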
On the other hand, Imogen Parker from the Ada Lovelace Institute warns us to take many predictions about AI’s effects with a pinch of salt. This cautious messaging is echoed by some in the business community, with Goldman Sachs’ Head of Global Equity Research pointing out that ‘eighteen months after the introduction of generative AI to the world, not one truly transformative—let alone cost-effective—application has been found.’
Who are the winners and losers? The Big Tech sector is infamous for its concentration – a small number of powerful companies own most of the data and computing infrastructure that modern AI depends on, and are likely to realise huge financial gains if these technologies are rolled out across the public sector. These companies are also notoriously opaque about how the technology works, which is a problem for regulators – more on this below. Interestingly, Sam Altman, the CEO of OpenAI (the company that launched ChatGPT), warned that the AI “revolution will create phenomenal wealth” and “price many kinds of labour…toward zero”, calling for higher taxation of assets in response. Many organisations adopting this labour-saving technology might bank any productivity gains by cutting staff or pocketing the extra profits without raising wages, although the TUC has published proposals to give workers more of a say in how AI is used at work. There is also an important question of whether increased use across the public sector improves productivity at the expense of the quality of service provision, as there are risks to service users if automated systems make mistakes.
–
What about jobs?
The Institute for Public Policy Research (IPPR) has also warned that AI could cause huge job disruption, depending on how the government manages the transition. It found that back office, entry level and part-time jobs are most exposed to automation, with women, young people and those on lower wages significantly more at risk of being replaced. IPPR modelled different scenarios ranging from ‘full displacement’ (nearly 8 million jobs lost and no GDP gains) to ‘full augmentation’, where all jobs at risk from AI are augmented to adapt to it, so there are no job losses and GDP grows by 13% (£306 billion a year).
Limiting disruption. IPPR’s report offers a variety of helpful suggestions on how more jobs can be augmented rather than lost as AI is adopted more widely in the workplace. For instance, the government could offer tax incentives or subsidies to encourage job augmentation over displacement, or implement a more active labour market policy to promote job creation in areas more resilient to automation, including under-resourced sectors such as social care or mental health services, as well as increasing demand for these jobs through higher wages.
–
The role of regulators.
The government has adopted many of the AI Opportunities Action Plan’s recommendations to strengthen regulatory bodies so they can “mitigate AI risks and drive growth”. But the Ada Lovelace Institute warned the government to be “cautious” about giving regulators formal growth goals. “Regulators’ primary role should be to protect the public,” argued its Director Gaia Marcus, adding that “just as the Government is investing heavily in realising the opportunities presented by AI, it must also invest in responding to AI’s negative impacts now and in the future.” (IPPR’s Carsten Jung was interviewed for Joe making a similar point – governments should be “super vigilant” with regulation and “aren’t worried enough about the job impacts”.)
Ringfencing public service provision? Imogen Parker’s response to the government’s announcement reminds us that whether the issue is self-driving cars, police use of facial recognition technology or chatbots giving out medical advice, AI adoption can only succeed if the public trust that it will be used safely. IPPR also recommends ringfencing certain tasks and occupations to ensure “continued human involvement and work augmentation”, such as those involving interpersonal relationships and trust, ethical sensitivity and moral judgements, originality and artistic expression, as well as high-stakes decision-making. This should encourage “more careful thinking about where human involvement is desirable and for what reasons – rather than rushing to automate all things that are technically possible to automate”.
Shaping innovation in the public interest. Many economists believe that strong regulation is needed to ensure disruptive technologies like AI develop in the public interest. Mariana Mazzucato and co-authors argue that governments looking to prioritise innovation in AI should look at improving not just the rate of technological development, but also its direction. They call for the government to shape AI markets actively through competition policy to avoid “value-extractive economic futures that will lead to important socio-economic harms, such as the concentration of market power and negative effects on labour markets”. They also point to innovation policy levers – for instance, using government procurement to create demand for R&D in uses of AI that benefit the public. (For more, check out Common Wealth’s briefings on corporate governance in AI, democratising data ownership, and more.)
–
Managing the transition safely
The government is keen to demonstrate that it is delivering its plans and making progress on achieving its missions (like growth in living standards and shorter NHS waiting lists). AI may help do that, but many of the benefits tech companies are promising have yet to be proven – and will likely bring serious risks that need to be managed carefully. It is also important to consider how the UK is positioning itself in a global race to develop AI technology – one of US President Trump’s first actions was to announce $500bn of investment in AI infrastructure, at an event surrounded by the CEOs of some of the world’s most powerful tech companies. Former IMF Chief Economist Olivier Blanchard commented: “We may have reached AI singularity. Not the one you think about. But political singularity in which AI players control the government and eliminate all AI regulation.” In this context it is important to ask whether it is worth rushing the development of AI in the UK before stronger regulation and oversight have been implemented.
What should Labour learn from Bidenomics? Labour MP Jeevun Sandher asks why people in the US felt their living standards got worse under Joe Biden even though the US was growing faster than any other G7 nation. He draws three lessons for the UK Labour government. First, investing in physical infrastructure takes time to feed through to incomes. Second, investing in human capital and skills has a bigger, faster impact on wages. And third, investments need to be spread over a large area to be widely felt.
Financing the Sustainable Development Goals. With the 2030 deadline fast approaching, the UN’s Sustainable Development Goals (SDGs) are “dangerously off track”, argues Mariana Mazzucato, who says that the key to reaching these targets is to shift the roles of public and private finance. Mazzucato argues that SDGs must be placed at the centre of economic planning so that the public sector can “strategically guide private investment toward high-impact, mission-driven strategies”.
Comparing cost-of-living crises. British households on low to middle incomes are even further behind their counterparts in other countries than previous figures have suggested, according to the Resolution Foundation. New research finds that, once the spending habits of poorer households are accounted for (they devote 9% more of their family budgets to housing and 4% more to food), the gap between lower-income Brits and their EU equivalents is even larger than aggregate statistics suggest. This is in large part because UK housing costs are 44% above the OECD average.
Achieving clean power. The government’s 2030 clean power target is achievable and will bring down costs for households, argues a new E3G report. The think tank’s ‘Electricity Bills Charter’ sets out measures to maximise bill savings from clean power, such as shifting legacy costs to general taxation, strengthening gas plant regulation and improving energy efficiency.
A new age of authoritarian capitalism. Future Economy Scotland co-director Laurie Macfarlane explores the economic factors behind Trump’s second victory such as the rise of China, the technological arms race and European economic woes. He argues that progressives must “start grappling with the distinct political economy of a new authoritarianism”, an ideology distinct from its neoliberal predecessor.
Landlords support rent controls. New research by Common Wealth finds that 44% of landlords support rent controls. Support among the general population is even higher (75%), with Labour voters particularly enthusiastic (85%).