Robert Skidelsky on Keynes, AI, the future of work, and more

COMMENT | ROBERT SKIDELSKY

This week, PS talks with Robert Skidelsky, a member of the British House of Lords and Professor Emeritus of Political Economy at Warwick University.

Robert Skidelsky Says More…

Project Syndicate: Last year, you lamented the reversion of contemporary policy discussions to “the age-old standoff between market-based supply-side economics and a supply-side approach rooted in industrial policy,” because it leaves out a Keynesian focus on the “insufficiency of demand.” How would such a focus alter policymakers’ approach to key issues like climate change and energy security? For example, how could the US advance the Inflation Reduction Act’s stated goals using a Keynesian approach?

Robert Skidelsky: This is a really difficult opener! My main issue with the contemporary policy discussion is that it disregards Keynes’ insight that capitalist economies suffer from a chronic deficiency of aggregate demand. In other words, it assumes that economies have an in-built tendency toward full employment. But if that were true, there would be no case for expansionary fiscal policy.

The IRA – which includes $800 billion in new spending and tax breaks to accelerate the deployment of clean-energy technologies – had to be dressed up as “modern supply-side policy,” aimed at reducing inflation by lowering energy costs. But fiscal expansion based on a model that denies the need for it is bound to come to grief, as markets push for a return to sound finance and sound money. In the UK, Labour has had to abandon its pledge to spend an extra £28 billion ($35 billion) per year on green energy, because it couldn’t answer the question, “Where is the money coming from?”

The link between green investment and what UK Shadow Chancellor of the Exchequer Rachel Reeves calls “securonomics” is tenuous. To the extent that green investment at home replaces energy imports, it can reduce supply-chain vulnerabilities. But this argument applies to any “essential” import, from food to pharmaceuticals to computer software. Where does the quest for national self-sufficiency end? In any case, national self-sufficiency is no safeguard against domestic terrorism.

The truth is that the best way to secure supply chains is by ensuring a peaceful global environment. Establish the conditions of peace, and there will be no need for securonomics.

PS: You believe that generative artificial intelligence poses serious risks to humanity, from exacerbating inequality to infringing on civil liberties. Your new book is called The Machine Age: An Idea, a History, a Warning. How would you sum up the warning you are trying to convey?

RS: The risk is threefold.

First, AI renders a growing share of human workers redundant. If computers can do most types of work more efficiently than humans, what is left for humans to do? The threat of redundancy, which machines have always posed, has grown exponentially as AI has made inroads into cognitive work. Rising inequality and proliferating mental-health problems are the natural consequence of growing uselessness.

Second, AI poses a threat to human freedom. Governments and businesses have always spied on their subjects, in order to control them better or make more money out of them. Digital surveillance has made such spying easier and more comprehensive than ever. This carries risks beyond loss of freedom and privacy, as demonstrated by the recent Post Office scandal in the UK: hundreds of sub-postmasters were wrongfully accused of stealing money after faulty accounting software showed discrepancies in the Post Office’s finances. The old wisdom – often attributed to Thomas Jefferson – that “the price of liberty is eternal vigilance” has never been more apposite.

Third, the uncontrolled advance of AI could lead to our extinction as a species. Just as massive earthquakes, volcanic eruptions, and other natural disasters threaten our survival, so do anthropogenic, largely technology-driven forces like nuclear proliferation and global warming. AI can compound the threat these forces represent.

PS: The idea for The Machine Age, you explain, arose partly from a short essay by John Maynard Keynes, in which he “predicted that his putative grandchildren would have to work only three hours a day,” enabling them to live a life of greater leisure. But whereas “Keynes treated work purely as a cost,” and “economic theory treats all work as compelled,” work is also a source of meaning. So, as you noted over a decade ago, a future of more leisure will require “a revolution in social thinking.” Where should such a revolution start?

RS: Keynes understood perfectly well that work was not simply the “cost” of living, but the Industrial Revolution collapsed labor and work – previously treated as two distinct concepts – into one category: labor. This gave rise to the economic doctrine that most of us work only because we have to, and would be much happier engaging in nothing but leisure.

The ancient Romans had a surer grasp of the nature of work, distinguishing between negotium (working because one has to earn money to live) and otium (self-realization through work). The first step toward recovering the idea of otium would be to reject Benjamin Franklin’s dictum that “time is money.”

By the Way…

PS: If automation is to enable humans to enjoy more leisure, the gains must be distributed fairly. In an economy where, as you put it in The Machine Age, “the means of production are largely privately owned,” how can policymakers ensure that the gains of productivity-enhancing technologies like generative AI are “shared sufficiently widely”?

RS: The quick answer is to tax the wealthy sufficiently to provide a universal basic income, which is not tied to labor. Regard everyone – not just the rentier – as having the right to an “unearned” income. The efficiency gains brought about by machinery make this possible; it is for politics to find a way to “spread the bread evenly on the butter.” 

PS: “If we are to outsource more and more of the tasks of life to ever more efficient machines,” you write in your book, “it is important to make sure that their preferences are consistent with those of humans.” But while that may require “reserving moral choices exclusively for humans,” could our growing reliance on machines produce a breed of moral idiots with diminished capacity to make such choices?

RS: This is a vital point. The standard mantra is that the smarter the machine, the smarter its human controller will need to be. This explains the demand for continuous “upskilling”: we must be able to keep up with machines. But I have much sympathy for the contrary proposition: the smarter the machine, the dumber its users will need to be, so as not to throw human spanners into the mechanical works. Only moral idiots will accept the infallibility of algorithmic judgments.

PS: Just as Keynes alluded to “the old Adam in us” in his essay, religion features heavily in The Machine Age. How can humanity’s evolving relationship with the divine help us understand our relationships with work and machines, and the challenges posed by technological innovations like AI?

RS: Albert Einstein made the case for religion with exemplary lucidity: “science without religion is lame; religion without science is blind.” Religion and science are not opposites, as the Enlightenment supposed and today’s secularists believe, but complements, providing distinct, if overlapping, ways of understanding human life. Religious leaders must place themselves at the forefront of today’s debates on the meaning of AI for the future of humans, and they must do so in language that is sufficiently striking to command attention.

Read More 

What Is a Doctor?: A GP’s Prescription for the Future

By Phil Whitaker

This book combines a general practitioner’s lament over the decline of the British primary health-care system with a powerful attack on the “medicalization” of health care (treating all health-care issues as medical issues, when many are actually lifestyle issues). Far from saving the National Health Service money, computerized, “evidence-determined” medicine costs the NHS billions of pounds per year in largely useless pills and medical interventions.

Playing God: Science, Religion and the Future of Humanity

By Nick Spencer and Hannah Waite

A timely and powerful contribution to the “science versus religion” debate, this book rebuts the dogma that they provide opposite ways of knowing the world. Rather, Spencer and Waite argue, science and religion address two different but overlapping aspects of human existence: its material basis and its purpose. The view of science and religion as binary oppositions is not only out of line with most people’s perceptions; it breaks down philosophically, as soon as one asks the question of what humans are for.

Making China Modern: From the Great Qing to Xi Jinping

By Klaus Mühlhahn

In this recent attempt to unravel the mystery of Asia’s economic “retardation,” Mühlhahn makes the essential point that the link between the human and natural worlds was never severed in China as it was in the West. This slowed economic development in China – and Asia more broadly – but ultimately led to greater resilience to shocks. A counterfactual history, which positions Asia, not Europe, as the central civilization of modern times, would be fascinating.

By a PS Contributor

The thread that led me to The Machine Age started with a short essay by John Maynard Keynes, published in 1930, called “Economic Possibilities for Our Grandchildren,” in which he suggested that, in the future (meaning today), machines would increase productivity so much that humans would be able to work just three hours per day. My 2012 book How Much Is Enough?: Money and the Good Life, which I wrote with my son Edward, aimed to explain why this hasn’t happened.

In The Machine Age, I delve further into the relationship between humans and work, encountering on the way Joseph Needham’s famous question: Why did the Industrial Revolution not originate in China? But my inquiry sprouted additional legs. I came to believe that machines might be liberating for some purposes, but highly entrapping for others. This brought me to the great dystopian trio: Yevgeny Zamyatin, Aldous Huxley, and George Orwell. And I could not write a book about machines in our current era without addressing the threat AI poses to our physical survival.

My personal response to the “machine challenge” is not to invent and deploy ever cleverer machines, but neo-Luddism: to stop all investment in AI development, except for specially sanctioned purposes (such as to alleviate suffering). But I have come to believe that we will have to suffer a plague of locusts before any such determination is possible. “For such things must come to pass, but the end shall not be yet” (Matthew 24:6).

****

Robert Skidelsky, a member of the British House of Lords, is Professor Emeritus of Political Economy at Warwick University. He is the author of an award-winning biography of John Maynard Keynes and The Machine Age: An Idea, a History, a Warning (Allen Lane, 2023).

© Project Syndicate 1995–2023
