
Stephen Hawking


The Bad Thing About LAWS

One of my daughters once proposed that my t-shirt should read: “I don’t support war, but war supports me.” And it’s true, I suppose.

I write about lots of other things too, but I have been studying war, writing about wars, going to wars (but never fighting in one) for the whole of my adult life, partly because international relations are so heavily militarised, but also because for anybody who is interested in human behaviour, war is as fascinating as it is horrible.

So you might assume that I would leap into action, laptop in hand, when I learned that almost 3,000 “researchers, experts and entrepreneurs” have signed an open letter calling for a ban on developing artificial intelligence (AI) for “lethal autonomous weapons systems” (LAWS), or military robots for short. Instead, I yawned. Heavy artillery fire is much more terrifying than the Terminator.

The people who signed the letter included celebrities of the science and high-tech worlds like Tesla’s Elon Musk, Apple co-founder Steve Wozniak, cosmologist Stephen Hawking, Skype co-founder Jaan Tallinn, Demis Hassabis, chief executive of Google DeepMind, and, of course, Noam Chomsky. They presented their letter in late July to the International Joint Conference on Artificial Intelligence, meeting this year in Buenos Aires.

They were quite clear about what worried them: “The key question for humanity today is whether to start a global AI arms race or to prevent it from starting. If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

“Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populations, warlords wishing to perpetrate ethnic cleansing, etc.”

“Autonomous weapons are ideal for tasks such as assassinations, destabilising nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.”

Well, no, it wouldn’t be beneficial for humanity. Few arms races are. But are autonomous weapons really “the key question for humanity today”? Probably not.

We have a few other things on our plate that feel a lot more “key”, like climate change, nine civil wars in the Muslim parts of the world (Afghanistan, Iraq, Syria, southeastern Turkey, Yemen, Libya, Somalia, Sudan and northeastern Nigeria) – and, of course, nuclear weapons.

The scientists and experts who signed the open letter were quite right to demand an international agreement banning further work on autonomous weapons, because we don’t really need yet another high-tech way to kill people. It’s not impossible that they might succeed, either, although it will be a lot harder than banning blinding laser weapons or cluster bombs.

But autonomous weapons of the sort currently under development are not going to change the world drastically. They are not “the third revolution in warfare, after gunpowder and nuclear arms,” as one military pundit breathlessly described them. They are just another nasty weapons system.

What drives the campaign is a conflation of two different ideas: weapons that kill people without a human being in the decision-making loop, and true AI. The latter certainly would change the world, as we would then have to share our world for good or ill with non-human intelligences – but almost all the people active in the field say that human-level AI is still a long way off in the future, if it is possible at all.

As for weapons that kill people without a human being choosing the victims, those we have in abundance already. From land mines to nuclear-tipped missiles, there are all sorts of weapons that kill people without discrimination in the arsenals of the world’s armed forces. We also have a wide variety of weapons that will kill specific individuals (guns, for example), and we already know how to “selectively kill a particular ethnic group,” too.

Combine autonomous weapons with true AI, and you get the Terminator, or indeed Skynet. Without that level of AI, all you get is another way of killing people that may, in certain circumstances, be more efficient than having another human being do the job. It’s not pretty, but it’s not very new either.

The thing about autonomous weapons that really appeals to the major military powers is that, like the current generation of remote-piloted drones, they can be used with impunity in poor countries. Moreover, like drones, they don’t put the lives of rich-country soldiers at risk. That’s a really good reason to oppose them – and if poor countries realise what they are in for, a good opportunity to organise a strong diplomatic coalition that works to ban them.

Don’t Talk to Them

29 April 2010


By Gwynne Dyer

“If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans,” said the world’s most famous theoretical physicist, Stephen Hawking, late last month.

He warned scientists not to try to communicate with extra-terrestrials, pointing out that “We only have to look at ourselves to see how intelligent life might develop into something we wouldn’t want to meet.”

Hawking’s concern is shared by others in the field. They don’t object to passive SETI: it can’t do any harm to “Search for Extra-Terrestrial Intelligence” by listening with radio telescopes for the radio emissions of civilizations around other stars. However, they think that active SETI – sending out messages saying “Here we are” – is just asking for trouble.

“Active SETI … is a deliberate attempt to provoke a response by an alien civilization whose capabilities, intentions, and distance are not known to us,” wrote Michael Michaud, former Deputy Director of the Office of International Security Policy in the U.S. State Department, in 2005.

The recent discovery of at least 400 planets orbiting nearby stars makes the issue more urgent, for we now know that planets are very common in our galaxy.

There have already been attempts at active SETI. In 1974 Frank Drake, the astronomer who founded the SETI project, used the Arecibo radio telescope to beam a message towards the globular star cluster M13, which has over a million stars in it.

But M13 is 25,000 light-years away, so even if someone there replied at once, the round trip gives us at least 50,000 years to prepare for any response to the message.

In 2008, however, a high-powered message was sent to the Gliese 581 system, a five-planet system that is only 20 light years away and has two planets in the “habitable zone” for life. The message will get there in 2029.

Several messages have been beamed to other nearby planetary systems since then, in the blithe assumption that anybody there will be friendly. Scientist and author Jared Diamond has said that “those astronomers now preparing again to beam radio signals out to hoped-for extraterrestrials are naive, even dangerous.”

Michael Michaud was equally concerned, warning that “an Active SETI signal … might call us to the attention of a technological civilization that had not known of our existence. We cannot assume that such a civilization would be benign, nor can we assume that interstellar flight is impossible for a species more technologically advanced than our own.”

One assumption embedded in all these warnings is obvious: that life and even intelligence are probably quite common in the universe.

But the other implicit assumption, made even by an outstanding theoretical physicist like Hawking, is that light-speed or faster-than-light travel may be possible.

If it isn’t, then there would be little reason to worry about hostile aliens. They would have no conceivable motive to engage in interstellar raids or conquest, or even interstellar trade, if travel between the stars takes hundreds or thousands of years. Our current knowledge of physics says that faster-than-light travel is impossible, but leading scientists in the field clearly believe that today’s physics may not have the final answers.

We will have to leave that question open for a while, but there are two ways to test the assumption that life is common in the universe.

It will be several decades before we can go to Mars and the moons of Jupiter and Saturn to see if life exists (or once existed) there, but if life really starts up almost anywhere that conditions are suitable, then it’s unlikely that it would have emerged just once here on Earth.

All the familiar forms of life on Earth have the same biochemical make-up, which points to a single, common ancestor.

But the vast majority of species on this planet are microbes, and we have scarcely begun to explore their diversity. Among them there may be species that have a different biochemical basis, perhaps living in isolated parts of the biosphere, or maybe even co-existing with mainstream life.

If we ever found microbes of a different biochemical lineage, we would know that life here has arisen more than once. If so, then it’s probably as common as dirt all across the universe.

If we find no “alien” microbes, on the other hand, we still cannot be sure that life on Earth is unique, for one theory holds that life is spread from planet to planet, and even from star to star, by asteroid collisions. Maybe we only had one collision.

There is another way to test for extra-terrestrial life. As our ability to examine the atmospheres of planets circling other stars improves, we should eventually be able to detect the characteristic changes that abundant life of our kind causes in an atmosphere. Failing to find those changes would not be definitive proof that life is very rare in the universe, but it would be a very strong indication.

In the meantime, maybe it would be wiser not to go looking for trouble. As astronomer Zdenek Kopal said 20 years ago: “Should we ever hear the space-phone ringing, for God’s sake let us not answer, but rather make ourselves as inconspicuous as possible to avoid attracting attention!”

Gwynne Dyer is a London-based independent journalist whose articles are published in 45 countries.