AI

This tag is associated with 4 posts

Converging Crises

Maybe we can get through the climate crisis without a global catastrophe, although that door is closing fast. And maybe we can cope with the huge loss of jobs caused by the revolution in robotics and artificial intelligence (AI) without a social and political calamity.

But can we do both at the same time?

We should know how to deal with the AI revolution, because we have been down this road before. It’s a bit different this time, of course, in the sense that the original industrial revolution in 1780-1850 created as many new jobs (in manufacturing) as it destroyed (in cottage industries and skilled trades).

The AI revolution, by contrast, is not producing nearly enough replacement jobs, but it is making us much wealthier. The value of manufactured goods doubled in the United States in the past thirty years even as the number of good industrial jobs fell by a third (8 million jobs gone). Maybe we could use that extra wealth to ease the transition to a job-scarce future.

The climate emergency is unlike any challenge we have faced before. Surmounting it would require an unprecedented level of global cooperation and very big changes in how people consume and behave, neither of which human beings have historically been good at.

These two crises are already interacting. The erosion of middle-class jobs and the stagnation (or worse) of real wages generate resentment and anger among the victims, and that anger is already creating populist, authoritarian regimes throughout the world. These regimes despise international cooperation and often deny climate change as well (Trump in the US, Bolsonaro in Brazil).

And there is a recession coming. Maybe not this year, although almost all the storm signals are flying: stock markets spooked, a rush into gold, nine major economies already in recession or on the verge of one, an ‘inverted yield curve’ on bonds, and trade wars spreading. Even Donald Trump is worried, which is why he postponed the harsher US trade tariffs against China that were due next month.

Economists have predicted nine of the past five recessions, as they say in the trade, so I’m not calling the turn on this one. But a recession is overdue, and a lot of the damage done by the Great Recession of 2008 has still not been repaired. Interest rates are still very low, so the banks have little room to cut rates and soften the next one. When it arrives, it could be a doozy.

So what can we do about all this? The first thing is to recognise that we cannot plot a course that takes us from here and now through all the changes and past all the unpleasant surprises to ultimate safety, maybe fifty years from now.

We can plan how to get through the next five years, and we should be thinking hard about what will be needed later on. But we can’t steer a safe and steady course to the year 2070, any more than intelligent decision-makers in 1790 could have planned how to get through to 1840 without too much upheaval. They might have seen steam engines, but they would have had no idea what a railroad was.

We are in the same position as those people with regard to both AI and the global environmental emergency (which extends far beyond ‘climate change’, although that is at its heart). We know a good deal about both issues, but not enough to be confident about our choices – and besides, they may well mutate and head off in unforeseen directions as the crises deepen.

But there are two big things we can do right now. We need to stop the slide into populist and increasingly authoritarian governments (because we are not going to stop the spread of AI). And we have to win ourselves more time to get our greenhouse gas emissions under control (because we are certainly going to go through 450 parts per million of carbon dioxide equivalent, which would give us a +2°C higher average global temperature).

The best bet for getting our politics back on track is a guaranteed minimum income high enough to keep everybody comfortable whether they are working or not. That is well within the reach of any developed country’s economy, and has the added benefit of putting enough money into people’s pockets to save everybody’s business model.

And the best way to win more time on the climate front is to start geo-engineering (direct intervention in the atmosphere to hold the global temperature down) as soon as we get anywhere near +2°C. To be ready then, we need to be doing open-air testing on a small scale now.

There will be howls of protest from the right about a guaranteed minimum income, and from the greener parts of the left about geo-engineering. However, both will probably be indispensable if we want to get through these huge changes without mass casualties or even civilisational collapse.
_______________________________
To shorten to 700 words, omit paragraphs 7 and 8. (“And there…doozy”)

Lovelock at 100

Forty years ago James Lovelock published his book ‘Gaia: a New Look at Life on Earth’, setting forth his hypothesis that all life on Earth is part of a co-evolved system that maintains the planet as an environment hospitable to abundant life. Today his approach is known as ‘Earth System Science’, and is central to our understanding of how the planet works. But back in 1979, he already had a warning for us.

“If…man encroaches upon Gaia’s functional powers to such an extent that he disables her, he would then wake up one day to find that he had the permanent lifelong job of planetary maintenance engineer….

“Then at last we should be riding that strange contraption, ‘the spaceship Earth’, and whatever tamed and domesticated biosphere remained would indeed be our ‘life support system’. [We would face] the final choice of permanent enslavement on the prison hulk of the spaceship Earth, or gigadeaths to enable the survivors to restore a Gaian world.”

For the past thirty years I have travelled down to Devon every four or five years to interview Jim, but essentially to ask him ‘Are we there yet?’ The last time I went, he said ‘Almost’. But he seemed remarkably cheerful about it, even though ‘there’, he believed, would imply the death of around 80 percent of the global population (‘gigadeaths’) before the end of the century.

There’s nothing harsh or cold about Jim, but it would be fair to say that his manner is impish. He’s a dedicated contrarian who delights in challenging the accepted wisdom – and is generally proved right in the end. And although he was one of the first scientists to sound the alarm about global warming, he never bangs on about our folly, he never raises his voice, and he never despairs.

Once I asked him if he thought things would ever get so bad that human beings would go extinct. “Oh, I don’t think so,” he said. “Human beings are tough. There’ll always be a few breeding pairs.” But, he added, they’d have trouble trying to rebuild a high-energy civilisation, because we have used up all the easily accessible sources of energy building this one.

It is a rather god-like perspective, but that probably comes naturally if you have spent your whole life trying to stand back far enough to see the system as a whole. The Gaian system, that is, which he defines as “a complex entity involving the Earth’s biosphere, atmosphere, oceans, and soil; the totality constituting a feedback or cybernetic system which seeks an optimal physical and chemical environment for life on this planet.”

In other words, it’s all connected. The Earth’s temperature, the oxygen content of the atmosphere, all the qualities that make it a welcoming home for abundant life are maintained by the actions and interactions of the myriad species of living things. They are the creators as well as the beneficiaries of this remarkably stable status quo.

It sounds a bit New Age – he and American evolutionary biologist Lynn Margulis, who collaborated with him in the earliest thinking on the proposition, took some flak for that from their scientific colleagues – but he wasn’t really suggesting that the super-organism he proposed had consciousness or intention. Gaia was from the start a serious scientific hypothesis that could be subjected to rigorous testing.

It has now been elevated into an entirely respectable and widely accepted theory. Indeed, Gaia provides the broader context in which most research in the life sciences, and much chemical, geological, atmospheric and oceanographic research as well, is now done.

Jim Lovelock has changed our contemporary perspectives on life on this planet as much as Charles Darwin did for the 19th century, and like Darwin he has done it as an independent scientist, mostly working on his own and with relatively modest resources. Even more remarkably, he published his first book, and his Gaia hypothesis, when he was already 60.

That was forty years ago, and on Friday he turns 100. But he hardly seems to have aged at all, and to celebrate his birthday he has published a new book (his 10th). It’s called ‘Novacene: The Coming Age of Hyperintelligence’, and it’s just as much off the beaten track as his first book, ‘Gaia’.

He’s being cheerful again. Yes, we are approaching the ‘Singularity’, the artificial-intelligence takeover when our robots/computers become autonomous. Yes, after that it is AI, not us, that will lead the dance. But don’t panic, because the AI will be fully aware that its platform needs to be a more or less recognisably Gaian planet, and will cooperate with us to preserve it.

In that case, we will no longer be in the driver’s seat, but we will probably still be in the vehicle. “Whatever harm we have done to the Earth, we have, just in time, redeemed ourselves by acting simultaneously as parents and midwives to the cyborgs,” he writes, and he may be right. He’s certainly right a lot more often than he’s wrong. Happy birthday, Jim.
_____________________________
To shorten to 700 words, omit paragraphs 5 and 6. (“There’s…one”)

Gwynne Dyer’s new book is ‘Growing Pains: The Future of Democracy (and Work)’.

Jobs: Moravec’s Paradox vs. AGI

Don’t bother asking if jobs are being lost to computers. Of course they are, and the current wave of populist political revolts in Western countries is what Luddism looks like in an era of industrialised democracies. The right question to ask is: what KINDS of jobs are being lost? Moravec’s Paradox predicted the answer almost 30 years ago.

Right now, it’s the jobs in the middle that are at risk of disappearing. Not high-level professional and managerial jobs that require sophisticated social and intellectual skills and pay very well. Not poorly-paid jobs in delivery or the fast-food industry either, although automation will eventually take jobs in the service industries too.

But the middle-income, semi-skilled jobs, mostly in manufacturing or transportation, that used to sustain a broad and prosperous middle class are dwindling fast. Western societies are being hollowed out by automation, just as Moravec’s Paradox predicts. Often the newly unemployed find other work, but it is generally in the low-income service sector. These disinherited lower-middle-class and upper-working-class people are the foot-soldiers of the populist revolutions.

Back in the 1980s Hans Moravec, a pioneer researcher in artificial intelligence (AI), made the key observation that “it is comparatively easy to make computers exhibit adult level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility.”

The paradox is that activities like high-level reasoning that are challenging for human beings are easy for robots endowed with AI. Simple sensory and motor skills that are easy for the average one-year-old child, on the other hand, are far beyond the current reach of the robots. No surprise, really: those skills in human beings are the products of a billion years of evolution, and indeed are largely unconscious in us.

So the jobs that robots can most easily take are mid-level management jobs and semi-skilled, highly repetitive manual jobs – and there goes the middle-class meat in the sandwich. What’s left is a small group of rich people (who own the robots), an impoverished mass of people who provide them with services of every kind or have no jobs at all – and a level of resentment in the latter that is rocket fuel for a populist revolution.

This dystopian vision is commonplace nowadays, pushed to the top of the agenda by Brexit in the UK, the election of Donald Trump in the US, and neo-fascist election successes (though not yet victories) in the Netherlands, France and Germany. The same phenomenon may well play a big part in Italy’s election next month.

And the robots are soon going to be able to take out the rest of the jobs too. Moravec and his colleagues were working with the computers of thirty years ago, which were really simple-minded and single-minded. Today’s and tomorrow’s AI is running on computers that are orders of magnitude more powerful, and that allows them to do different things – like ‘deep learning’, for example.

The operating instructions don’t only come from the top (human beings) any more. More and more often, the AI is told what the result should be, and works out how to get there for itself by ‘deep learning’, a trial-and-error process that only becomes feasible when you have a number-crunching capability orders of magnitude greater than in the 1980s.

Then the path opens to (among other things) Artificial Intelligence that has human-level sensory and motor skills. Not right away, of course, but in due course.

There go the rest of the jobs, you might think, and certainly a lot will go. There goes the need for human beings altogether, the more pessimistic will think, and maybe that’s true too. But the latter outcome is still a choice, not an inevitability.

A significant number of AI specialists are now working on what they call ‘artificial general intelligence’: AGI. Rather than teach a machine to use symbolic logic to answer specific kinds of questions, they are building artificial neural networks and machine-learning modules loosely modelled on the human brain.

Hiroshi Yamakawa, a Japan-based leader in AGI, sees two advantages to this approach. “The first is that since we are creating AI that resembles the human brain, we can develop AGI with an affinity for humans. Simply put, I think it will be easier to create an AI with the same behaviour and sense of values as humans this way.”

“Even if superintelligence exceeds human intelligence in the near future, it will be comparatively easy to communicate with AI designed to think like a human, and this will be useful as machines and humans continue to live and interact with each other.…”

Feeling reassured now? Thought not. There’s never much reassurance to be had when thinking about the future. Most of the jobs are going to go sooner or later, including the skilled manual jobs and the high-level management jobs that presently seem safe. We’ll have to get used to that, just like our recent ancestors had to get used to working in cities not on farms.

But maybe the robots will grow up to be our colleagues, not our overlords or our successors. If we take the trouble to design them that way, starting now.
_____________________________________
To shorten to 725 words, omit paragraphs 7 and 9. (“This dystopian…month”; and “The operating…1980s”)

Artificial Intelligence Threat

The experts run the whole gamut from A to B, and they’re practically unanimous: artificial intelligence is going to destroy human civilisation.

Expert A is Elon Musk, polymath co-founder of PayPal, manufacturer of Tesla electric cars, creator of SpaceX, the first privately funded company to send a spacecraft into orbit, and much else besides. “I think we should be very careful about Artificial Intelligence (AI),” he told an audience at the Massachusetts Institute of Technology in October. “If I were to guess what our biggest existential threat is, it’s probably that.”

Musk warned AI engineers to “be very careful” not to create robots that could rule the world. Indeed, he suggested that there should be regulatory oversight “at the national and international level” over the work of AI developers, “just to make sure that we don’t do something very foolish.”

Expert B is Stephen Hawking, the world’s most famous theoretical physicist and author of the best-selling unread book ever, “A Brief History of Time”. He has a brain the size of Denmark, and last Monday he told the British Broadcasting Corporation that “the development of full artificial intelligence could spell the end of the human race.”

Hawking has a motor neurone disease that compels him to speak with the aid of an artificial speech generator. The new version he is getting from Intel learns how Professor Hawking thinks, and suggests the words he might want to use next. It’s an early form of AI, so naturally the interviewer asked him about the future of that technology.

A genuinely intelligent machine, Hawking warned, “would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.” So be very, very careful.

Musk and Hawking are almost fifty years behind popular culture in their fear of rogue AI turning against human beings (HAL in “2001: A Space Odyssey”). They are a full thirty years behind the concept of a super-computer that achieves consciousness and instantly launches a war of extermination against mankind (Skynet in the “Terminator” films).

Then there’s “The Matrix”, “Blade Runner” and similar variations on the theme. It’s taken a while for the respectable thinkers to catch up with all this paranoia, but they’re there now. So everybody take a tranquiliser, and let’s look at this more calmly. Full AI, with capacities comparable to the human brain or better, is at least two or three decades away, so we have time to think about how to handle this technology.

The risk that genuinely intelligent machines which don’t need to be fed or paid will eventually take over practically all the remaining good jobs – doctors, pilots, accountants, etc. – is real. Indeed, it may be inevitable. But that would only be a catastrophe if we cannot revamp our culture to cope with a great deal more leisure, and restructure our economy to allocate wealth on a different basis than as a reward for work.

Such a society might well end up as a place in which intelligent machines had “human” rights before the law, but that’s not what worries the sceptics. Their fear is that machines, having achieved consciousness, will see human beings as a threat (because we can turn them off, at least at first), and that they will therefore seek to control or even eliminate us. That’s the Skynet scenario, but it’s not very realistic.

The saving grace in the real scenario is that AI will not arrive all at once, with the flip of a switch. It will be built gradually over decades, which gives us time to introduce a kind of moral sense into the basic programming, rather like the innate morality that most human beings are born with. (An embedded morality is an evolutionary advantage in a social species.)

Our moral sense doesn’t guarantee that we will always behave well, but it certainly helps. And if we are in charge of the design, not just blind evolution, we might even do better. Something like Isaac Asimov’s Three Laws of Robotics, which the Master laid down 72 years ago.

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Not a bad start, although in the end there will inevitably be a great controversy among human beings as to whether self-conscious machines should be kept forever as slaves. The trick is to find a way of embedding this moral sense so deeply in the programming that it cannot be circumvented.

As Google’s director of engineering, Ray Kurzweil, has observed, however, it may be hard to write an algorithmic moral code strong enough to constrain and contain super-smart software.

We probably have a few decades to work on it, but we are going to go down this road – the whole ethos of this civilisation demands it – so we had better figure out how to do that.
___________________________________
To shorten to 725 words, omit paragraphs 5, 9 and 16. (“Hawking…technology”; “The risk…work”; and “Not…circumvented”)