CHAPTER EIGHT
Short History of Nuclear War
Everything we know about Soviet military thinking indicates rejection of those refinements of military thought that have now become commonplace in this country, concerning, for example, distinctions between limited war and general war; between “controlled” and “uncontrolled” strategic targeting, and between nuclear and non-nuclear tactical operations….Violence between great opponents is inherently difficult to control, and cannot be controlled unilaterally….Once hostilities begin, the level of violence has in modern times tended always to go up.
Bernard Brodie, 1963 [33]
After fifteen years of wandering in the swamps of limited nuclear war, Bernard Brodie returned after the Cuban crisis to his original conclusion that nuclear weapons had changed everything, and that their only rational use was to deter war, not to fight it. For this apostasy he was virtually ostracized at RAND, where most of the civilian intellectuals who then dominated American nuclear war doctrine saw his abandonment of the effort to make nuclear war into a rational instrument of policy as a rank betrayal. They were simply not willing to acknowledge the huge role that emotions play when life-and-death issues are at stake, or the fact that cultural and ideological differences would make their Soviet opposite numbers come to different conclusions and render meaningless their notions of a limited nuclear war in which each side signaled its intentions by selecting different categories of targets.
Part of the problem was RAND’s hyper-rational house style, most spectacularly embodied in Herman Kahn, author of a book that aspired to replace Clausewitz’s classic On War and which he boldly entitled On Thermonuclear War. “I don’t understand people who aren’t detached,” Kahn said, and cultivated a style of cold-blooded analysis that dealt in millions of deaths as others might deal in dozens of eggs. On one occasion, when his coolness was criticized, he replied: “Would you prefer a nice warm mistake?” Others, more sensitive to the horrors they were analyzing and planning, frankly admitted to being both intellectually intrigued by the complexities of nuclear war planning and seduced by the sense of power and responsibility that came with it. William Kaufmann remarked that once he slipped into the deep, dark pit of nuclear strategy, “it was easy to become totally absorbed, living, eating, breathing the stuff every hour of every day” – but that once he had emerged from that realm and could see it from a distance, it all seemed crazy and unreal. [34]
Shortly before his death in 1978, Bernard Brodie told me he believed that most of the thinking on limited nuclear war by civilian strategists was, in effect, simple careerism. The theory of minimum deterrence, the only one appropriate to nuclear weapons, had been worked out and was virtually complete within a year of the Hiroshima bomb. It was simple, robust, and not susceptible to fine-tuning – but later entrants to the field of nuclear strategy had to establish their reputations by making some new contribution to the theory, which therefore led to a lot of “hypersensitive tinkering” with the basic assumptions. The best way for ambitious strategic analysts to advance their careers, he pointed out, was to identify some “flaw” in the existing deterrent theory and to provide some solution to it that enlisted the support of powerful interests in the military establishment and/or defence industry because it required new weapons. It would have been churlish to point out that he had spent many years traveling that road himself – and besides, sitting there in his living room overlooking Santa Monica Bay at the end of a remarkable career, he was clearly filled with regret about it.
Brodie’s eventual rejection of the new orthodoxy and his return to his original insight put him in distinguished company. Robert McNamara also ultimately ceased to believe that nuclear war was controllable at all. Far from defending the war plan he had left behind in 1968 as his legacy, he later said of it: “If you never used the SIOP – or any one of the SIOPs – to initiate the use of nuclear weapons, then they weren’t as inappropriate as they might have seemed. But if you were responding to a conventional force or movement by (escalating to the selective use of nuclear weapons) then it was totally inappropriate, because it would just bring suicide upon yourself.” [35]
Henry Kissinger, President Richard Nixon’s national security adviser, who spent the years from 1968 to 1976 struggling to achieve the goal McNamara had abandoned, ultimately admitted to him that he had also been unable to make U.S. nuclear strategy any more “appropriate” to the facts. In 1957, as an academic, Kissinger had written that the central problem in nuclear strategy was “how to establish a relationship between a policy of deterrence and a strategy for fighting a war in case deterrence fails” – but by 1974, after six years of experience in shaping the actual foreign policy of a nuclear power, he too had lost the faith: “What in the name of God is strategic superiority?” he asked. “What is the significance of it, politically, militarily, operationally, at these levels of numbers [of nuclear weapons]? What do you do with it?” [36]
The nuclear planners in the old Soviet Union were never torn by similar arguments. Civilians were rigidly excluded from questions of nuclear strategy in the Soviet system, and both the Russian military tradition and Marxist methods of analysis pushed doctrine in rather different directions from those it took in the United States. Limitations in warfare, once a war had begun, were almost incomprehensible to this intellectual tradition. In any case, until the early 1970s the Russians were so grossly inferior in nuclear forces that their only available strategy was massive retaliation – or a first strike, if they saw an American attack coming and had a chance to preempt it. Even after McNamara “capped” American nuclear forces at about 2,250 delivery vehicles and the Soviet Union gradually caught up with and then somewhat surpassed that figure, it is highly improbable that Soviet strategists ever toyed with the notion of a limited or controlled nuclear war.
The Soviet Union’s passionate attachment to massive nuclear firepower was a consistent impediment in the series of U.S.-Soviet arms control negotiations that began in the later 1960s – though certainly no greater an obstacle than the United States’s mostly successful attempts to exclude from the negotiations whichever technological innovation it was currently counting on to restore its strategic superiority: MIRVs (multiple independently targetable reentry vehicles) from the SALT I treaty, the MX and cruise missiles from the SALT II treaty, and Star Wars from the START talks. [37]
The most intractable element in the push for new weapons on both sides was the fear of being left behind by technological change. Even as the first full generation of American intercontinental ballistic missiles, the Minuteman I missiles, began to go into their silos in 1963, the next step in the game of technological leapfrog was already underway. The Limited Nuclear Test Ban Treaty of that year had forced the cancellation of a U.S. Air Force test of how well those fortified silos could protect their missiles from a nuclear attack – the Air Force had planned to build a sample silo in Alaska and explode a nuclear weapon over it – and so the question of missile vulnerability seemed destined to remain permanently in doubt. The official Defense Department line was that it didn’t matter, because there was no strategic advantage to be gained by using one Soviet missile to destroy one American missile. However, one physicist at RAND, Richard Latter, had a disturbing idea: what if a single missile carried numerous warheads, each able to strike a different target?
He took his idea to the Pentagon’s director of Defense Research and Engineering, Harold Brown (later secretary of defense in the Carter administration), who agreed to put money into investigating it. But the potential Soviet threat of the future quickly turned into the real American threat of the present: new guidance technology and a “space bus” developed by the civilian National Aeronautics and Space Administration (NASA) to dispense several satellites from a single rocket launcher were quickly married to provide a workable system for delivering multiple nuclear warheads to separate targets. In 1965 the Department of Defense approved a program for equipping American ICBMs with MIRVs.
In secret memoranda, Defense Secretary McNamara admitted that this amounted to a counterforce system (designed to go after the enemy’s missiles), but he was also using MIRVs as a weapon in his bureaucratic and diplomatic battles. Having restricted the Air Force to only one thousand Minuteman missiles, he was now able to offer it a compromise that more than doubled the number of nuclear warheads its missiles could deliver. At the same time, he could deflect the growing Air Force pressure for anti-ballistic missile (ABM) defences to protect its missile fields by pointing out that MIRV technology could cheaply saturate any ABM defence system. Above all, he saw MIRVs as a diplomatic lever with which he could persuade the Soviets not to pursue the costly and ultimately futile path of ABM deployment: “We thought we could get by without deployment…that the Russians would come to their senses and stop deploying ABM – in which case we would not have deployed MIRV.” [38]
But as usual, the bargaining chip ultimately became a technological reality, and when the Soviet Union did finally follow the example of the United States and install MIRVs, the American defence establishment used that as a justification for the next large advance in missile technology.
I’ve long been an advocate of getting all the accuracy you possibly can in ballistic missiles. . . . if the evidence is overwhelming that you’re about to get hit, the advantage of preempting under those conditions are very substantial. . . . I don’t think there’ll be an Armageddon war; but I’ll put it this way. There has never been any weapon yet invented or perfected that hasn’t been used.
Gen. Bruce Holloway, commander-in-chief, SAC, 1968-72 [39]
When General Holloway submitted SAC’s request to the Nixon administration in 1971 for a new, very large ICBM with a high degree of accuracy (the missile that later became known as MX), the “undead” doctrine of limited nuclear war was already struggling out of its shallow grave. National security adviser Henry Kissinger had already sponsored a study that advocated an American nuclear capability for early “war termination, avoiding cities, and selective response capabilities [that] might provide ways of limiting damage if deterrence fails.” In early 1970 President Nixon, addressing Congress, asked, “Should a president, in the event of a nuclear attack, be left with the single option of ordering the mass destruction of enemy civilians, in the face of the certainty that it would be followed by the mass slaughter of Americans?”– and Mutual Assured Destruction, to the extent that it had ever been the real U.S. strategy, fell stone dead. [40]
Much of the thinking about U.S. nuclear strategy that went on during the next decade was considered too upsetting for the American public’s delicate sensibilities, and MAD continued to be invoked rhetorically as a proof of the U.S. government’s devotion to a purely retaliatory nuclear strategy. But the Foster Panel, set up by the Department of Defense to review U.S. nuclear strategy in early 1972, recommended “a wide range of nuclear options which could be used. . . to control escalation.” It envisaged a limited nuclear war in which the United States would achieve its political objectives and avoid destruction of its cities by adopting a strategy that would “(a) hold some vital enemy targets hostage to subsequent destruction by survivable nuclear forces, and (b) permit control over the timing and pace of attack execution, in order to provide the enemy opportunities to reconsider his options.” Its recommendations were incorporated in National Security Decision Memorandum 242, signed by President Nixon in January 1974 after Secretary of Defense James Schlesinger publicly disclosed that he was changing the targeting strategy to give the United States alternatives to “initiating a suicidal strike against the cities of the other side.”
The resulting revision of the U.S. nuclear target plan, SIOP-5, explicitly took Soviet residential areas off the target list, and even changed some aiming points in ways that reduced the effectiveness of nuclear strikes against Soviet military targets in order to reduce damage to heavily populated areas. At the same time, the plan made elaborate provisions for attacking all elements of the Soviet leadership – party, army, and technocrats – in order to ensure that “all three of those groups…would individually and personally and organizationally and culturally know that their part of the world was not going to survive,” as General Jasper Welch of the Foster Panel put it. Finally, SIOP-5 paid great attention to ensuring that at any level of nuclear exchange, the Soviet Union should not emerge as the more powerful economy in the postwar world.
If we were to maintain continued communications with the Soviet leaders during the war and if we were to describe precisely and meticulously the limited nature of our actions, including the desire to avoid attacking their urban industrial bases…political leaders on both sides will be under powerful pressure to continue to be sensible….Those are the circumstances in which I believe that leaders will be rational and prudent. I hope I am not being too optimistic.
Secretary of Defense James Schlesinger to Congress, March 1974 [41]
James Schlesinger, yet another RAND product, was well suited by intellect and temperament to implement such a policy. He admitted that he did not share the “visceral repugnance” of Robert McNamara, his predecessor, to even the selective use of nuclear weapons. They could be extremely effective in influencing Soviet behavior in a crisis, he believed, and he was confident (or said he was) that the consequent nuclear exchange could be controlled. Of all the defence secretaries who strove to keep American nuclear forces usable for purposes beyond that of deterring a direct nuclear attack on the United States, he was the most persuasive and sophisticated.
Schlesinger put no stock in simpleminded yearnings for a full counterforce strategy aimed at disarming the Soviets (he knew that Soviet missile-launching submarines, at the very least, would survive): “I was more interested in selectivity than in counterforce per se. Going after selected silos might be a way of delivering a message.” [42] He was a paid-up member of that school of American strategic thinkers who believed that national leaders could remain “rational and prudent” even after nuclear warheads had exploded on their territory, and that it could be strategically sensible to bargain by “taking out” certain Soviet military or industrial installations as a demonstration of U.S. determination to prevail in a crisis.
Or perhaps Schlesinger did not really believe that and merely wanted the Soviets to think that he believed it. From quite an early stage, the RAND style of thinking on nuclear strategy incorporated large elements of psychology. (Consider Thomas Schelling’s classic formulation of how “preemptive” attacks could happen: “He thinks we think he’ll attack; so he thinks we shall; so he will; so we must.” [43]) Schlesinger was well aware of the role that prior declarations of strategic intentions by either side could play in influencing the calculations of decision makers in an actual crisis. “Occasionally the Russians should read in the press that a ‘counterforce attack may not fall on silos that are empty’,” he once remarked. “Why give the Soviets that assurance?” [44] The same calculation, of course, applied to any other declaration of U.S. strategic intentions, such as Schlesinger’s assertions of willingness to respond to some local Soviet military initiative with selective U.S. nuclear strikes. Credible is not necessarily the same as true.
However, the need for credibility impelled Schlesinger to approve the requests of the U.S. armed forces for new nuclear weapons – the Air Force’s B-1 bombers and MX and cruise missiles, and an “improved accuracy program” for the Navy’s missiles that would lead to the Trident II – all of which featured an increased ability to strike Soviet counterforce targets. It was all tied up with the importance of perceptions: Schlesinger’s estimate of what Soviet strategists would perceive as convincing evidence of U.S. strategic resolve. Thus, when it became clear that the Soviet Union would follow the United States’s example by MIRVing Soviet missiles – which were larger than American missiles and could carry more and bigger warheads – Schlesinger felt compelled to approve an equivalent large U.S. missile, the MX. “I ordered MX to be designed in the summer of 1973,” he said, “as a way of showing the Soviets that we meant to make up the gross disparity in throw weights between their missiles and ours. My purpose was to persuade the Soviets to get their throw weights down. MX was my bargaining chip.” [45]
For those who genuinely believed in the feasibility of a disarming first strike, the appearance of MIRVed Soviet missiles that could carry many more warheads than existing U.S. land-based missiles was an alarming development. The large “throw weight” of Soviet missiles and their growing accuracy led these American strategists to imagine a Soviet counterforce first strike that would destroy almost all of America’s land-based missiles in a surprise attack and thus force the United States to choose between strategic surrender and engaging in a hideous counter-city war with its surviving, less accurate weapons. This hypothesis, which led to the prediction of the notorious “window of vulnerability” that plagued the subsequent Carter administration, assumed a positively heroic Soviet faith in American rationality – since this sort of Soviet first strike would kill at least ten million Americans, and the United States would retain the ability to strike back at Soviet cities with its submarine-launched missiles and its surviving bombers. But it was much favored by those who supported the big, accurate MX missile as a means of acquiring an equivalent American capability.
For a brief instant at the beginning of the Carter administration in 1977, the idea of abandoning the whole massive edifice of nuclear war-fighting technology and withdrawing to a strategy of minimum deterrence was raised once again at the highest level. President Jimmy Carter, a former submariner who had had no direct contact with orthodox U.S. military thinking on nuclear war for two decades, was taken aback when he was shown the U.S. war plan at a pre-inaugural briefing and learned that the SIOP now listed forty thousand potential targets in the Soviet Union. The U.S. Joint Chiefs of Staff were even more shocked, however, when Carter responded by suggesting that a mere two hundred missiles, all kept in submarines, would be sufficient to deter a Soviet attack on the United States. [46]
But it was not necessary this time for the defenders of American strategic orthodoxy to resort to emergency measures to convert the heretic, as William Kaufmann had seduced Robert McNamara with theories of limited nuclear war in 1961. Carter himself was swiftly drawn into the deep, dark pit, betrayed by his technocratic fascination with the elegance of the engineering and the theories that supported U.S. nuclear strategy. By the end of his term, all the developments implicit in the limited nuclear war theories of the early 1960s had become explicit doctrine.
This doctrine was enshrined in Carter’s Presidential Decision 59 of July 1980 and the accompanying revision of the targeting plan, SIOP-5D. One of the authors of that revision, General Jasper Welch, explained that the purpose “was to make it perfectly clear that nuclear weapons have a very rightful place in a global conflict, not just in a spasm of tit-for-tat.” Thus the SIOP had to provide a wide menu of selective and limited “nuclear options” permitting the use of nuclear weapons in an almost boundless and partly unforeseeable range of military contingencies: “Fighting may be taking place halfway between Kiev and Moscow, for all I know. Maybe it’s taking place along the Siberian border – which is a fairly likely place for it – with Americans, Chinese and Russians. But for the planning and construction of the thing, it doesn’t matter.” [47]
Zbigniew Brzezinski, President Carter’s national security adviser, claimed that the meaning of the new strategic policy was that “for the first time the United States deliberately sought for itself the capability to manage a protracted nuclear conflict.” He also took personal credit for introducing a new distinction into the SIOP, which gave the United States the option of choosing to kill ethnic Russians – the “real enemy” – while sparing other Soviet nationalities. (Brzezinski was of Polish descent.) But Defense Secretary Harold Brown insisted that the Carter administration’s changes were mainly a clarification and codification of existing U.S. strategic doctrine: “PD-59…is not a radical departure from U.S. strategic policy over the past decade or so.”
His predecessor as defense secretary, James Schlesinger, disagreed, claiming that PD-59 represented a shift in emphasis “from selectivity and signaling to that of victory…in a way that was still barely plausible on paper, but in my guess is not plausible in the real world.” After he left the Pentagon, Brown virtually conceded Schlesinger’s accusation, explaining that the administration had been divided between those who believed in the possibility of winning a protracted nuclear war and those who did not. The argument revolved around what was necessary to deter the Soviet Union effectively, with many people arguing that the Soviets had to believe that if they started a war, the United States would win it. “We started down that path and got into that morass,” said Brown. “And PD-59 was the result.” [48]
___________________________________________________________________________
By the early 1980s U.S. doctrine for fighting a nuclear war had become a structure of such baroque and self-referential complexity that it had only a distant relationship with the real world. It was almost as separated from reality as the missile crews who sat the long watches underground in their reinforced concrete command bunkers.
Q. How would you feel if you ever had to do it for real?
Well, we’re trained so highly in our recurrent training that we take every month in simulators like this, so that if we actually had to launch the missiles, it would be an almost automatic thing.
Q. You wouldn’t be thinking about it at the time?
There wouldn’t be time for any reflection until after we turned the keys. . . .
Q. Would there be reflection then, do you think?
I should think so, yes.
Conversation with Minuteman ICBM crew commander, Whiteman Air Force Base, 1982
Even bomber pilots used to see the cities burning beneath them (though not the people), but the commander of a Minuteman launch capsule is separated from the targets of his missiles by six thousand miles. The pleasant young Air Force captain who would not have had time for reflection until after he had turned the key that would send fifty nuclear warheads toward the Soviet Union was intellectually aware of the consequences, but they were so remote and hypothetical that imagination failed to make them real. His principal reason for volunteering for missile duty – like many of his colleagues– was that the uneventful twenty-four-hour watches in the capsule gave him ample time to work on a correspondence course for a master’s degree in business administration.
He wore a neatly pressed uniform, an amber scarf with lightning bolts, and a label on his pocket that said “combat crew,” but he did not fit the traditional image of the warrior. His job more closely resembled that of the duty engineer at a hydroelectric power plant, and even launching the missiles– “going to war,” as they quaintly put it– would have involved less initiative and activity than the duty engineer would be expected to display if a turbine overheated: “We’re taught to react, and we are not part of the decision-making process ourselves. We simply react to the orders we receive through the messages that come to us, and then reflect after we have taken our actions.”
Tens of thousands of clean-cut young men like him had their fingers on some sort of nuclear trigger during the Cold War. None of them seemed very military compared to your average infantryman, but then nuclear war is not really a military enterprise in any recognizable sense. By the early 1980s, the five nuclear powers had accumulated a total of over twenty-five hundred land-based ballistic missiles, well over a thousand submarine-launched ballistic missiles, and thousands of aircraft capable of carrying nuclear bombs, plus land-, sea- and air-launched cruise missiles and a panoply of battlefield nuclear weapons that ranged down to a fifty-eight-pound portable atomic explosive device intended to be carried behind enemy lines by commando teams. The large missiles could carry numerous separate warheads, and the total number of nuclear warheads in the world was over fifty thousand. During President Ronald Reagan’s first term of office from 1981 to 1984, the United States alone was building eight new nuclear warheads a day (though many were recycled from obsolete warheads).
The Reagan administration pretended to be more radical in its desire to confront the Soviet Union, but in nuclear matters it really just picked up the baton passed to it by Carter. Defense Secretary Caspar Weinberger’s Defense Guidance of 1982 talked frankly of the need for U.S. nuclear forces that could “prevail and be able to force the Soviet Union to seek earliest termination of hostilities on terms favorable to the United States…even under the conditions of a prolonged war.” Still greater stress was put on attacking the Soviet leadership by the revised SIOP-6, and RAND veteran Andrew Marshall, who presided over that revision of the war plan, even talked of protracted nuclear wars in which the opponents might launch nuclear strikes at each other at intervals of as much as six months. [49]
But the radicalism of the Reagan administration is easily overdone: these were mere refinements of a basic strategic policy that was already in place, as was the Reagan administration’s strong support for the guerrilla war being waged by Afghan rebels and Arab Islamist volunteers against the Soviet-backed government of Afghanistan. (It was Zbigniew Brzezinski, President Carter’s national security adviser, who first dreamed of creating “Russia’s Vietnam” in Afghanistan, and successfully goaded Moscow into sending its troops into the country in 1979 by arming and funding conservative Afghan tribesmen to attack the reformist, pro-Soviet government in Kabul.) The “anti-Communist” crusading in the Caribbean and Central America – the subsidization of “Contra” guerrillas waging a terrorist war against the Nicaraguan government, the unstinting support for right-wing regimes fighting left-wing guerrillas in El Salvador and Guatemala, the ridiculous invasion of Grenada – were simply a faint echo of the Kennedy administration’s obsession with overthrowing Castro in the early 1960s. The only truly new departure of the Reagan administration was the Strategic Defense Initiative (“Star Wars”) – and that was new only in technology, not in its basic intent.
The real goal of Star Wars was never to provide the United States with an impenetrable defence against nuclear attack, for Bernard Brodie’s 1946 definition of the problem still held true: all air (and space) defence operates on the principle of attrition, which means that some portion of the attacking weapons will always get through – and if they are nuclear weapons, even a very small fraction is too many. American space-based defences could never hope to deal with the thousands of incoming warheads and accompanying penetration aids that would be involved in a Soviet first strike against the United States, and the more realistic supporters of SDI were well aware of this fact. But space-based defences might eventually be good enough to deal with a ragged retaliatory strike after the Soviet Union had already been devastated by a largely successful American first strike. As the Defense Science Board put it in 1981, “Offensive and defensive weapons always work together…” [50] The goal, as usual, was to make an American war-fighting nuclear strategy more credible and enhance its political utility.
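The arithmetic of attrition makes the point starkly. The figures below are purely notional – the 98 percent interception rate and the two-thousand-warhead attack are illustrative assumptions, not numbers drawn from the text or from any official study:

\[
\text{expected leakers} = N(1-p), \qquad 2{,}000 \times (1 - 0.98) = 40 \ \text{warheads reaching their targets.}
\]

Even a defence that stopped ninety-eight of every hundred incoming warheads would still let through enough weapons to devastate dozens of cities, which is why a shield of this kind becomes interesting only against the ragged remnant of a retaliatory force that has already been largely destroyed on the ground.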
To be fair to President Reagan, he seems never to have grasped this fact: the people who sold him on the concept of Star Wars played on his genuine aversion to nuclear weapons and his longing for some magical release from the threat of nuclear war. But people who had already been around this track several times sounded the alarm. “Such systems would be destabilizing if they provided a shield so you could use the sword,” stated Richard Nixon in 1984, and William Kaufmann simply described SDI as the latest manifestation of the search for the lost “nuclear Arcadia” of American nuclear superiority. [51] Since the United States was also seeking to introduce intermediate-range Pershing II missiles into Western Europe at this time in order to cut Soviet warning time of a surprise attack, the Russian leadership was understandably alarmed.
On the face of it, laymen may find it even attractive as the President speaks about what seem to be defensive measures….In fact the strategic offensive forces of the United States will continue to be developed and upgraded at full tilt [with the aim] of acquiring a first nuclear strike capability….[It is] a bid to disarm the Soviet Union….
Soviet leader Yuri Andropov, 1983 [52]
It was not Reagan’s harmless rhetoric about the “evil empire” that worried the Soviets, but the confrontational nuclear weapons policies pursued by Reagan’s hardline secretary of defence, Caspar Weinberger, and the Cold Warriors around him. The danger was that this confrontation might abort the promising developments in the Soviet Union, which had begun a race for reform triggered by the death of long-ruling dictator Leonid Brezhnev in 1982.
The Soviet Union had experienced virtually no real economic growth (despite all the millions of tonnes of concrete that were poured) since the late 1960s. The Communist political and economic system had proved incapable of making further growth happen once the early days of virtually free labour flooding in from the countryside were over, and it would have crumbled under the burden of sustaining a global strategic confrontation with the United States at some point in the 1980s or the 1990s regardless of what the various administrations in Washington did. In practice, it was the steep decline in oil prices after 1981 that triggered the Soviet reform efforts: suddenly, the regime’s main source of foreign exchange collapsed, and coincidentally Brezhnev died.
In a bid to stave off imminent economic collapse, reformist leader Yuri Andropov was raised to power in Moscow, only to die unexpectedly in early 1984. After a brief return to the old order under Konstantin Chernenko, who also died after little more than a year in office, another reformer, Mikhail Gorbachev, was brought to power in 1985. At first his reforms were to be only economic, but Gorbachev realized that the stagnation had a political dimension, too, and initiated the political opening that ultimately (and to his lasting regret) swept the Communists away. Reagan’s “evil empire” speech and his defence budgets had virtually nothing to do with it – but it is hard to imagine that the process of reform that ultimately led to the freeing of Eastern Europe and the dismantling of the Soviet Union could have succeeded, or that Gorbachev could even have survived, if the Reagan administration’s intense hostility had not abated. Happily, it did.
In November, 1986, the Iran-Contra scandal broke. It was revealed that members of the administration had secretly orchestrated the sale of arms to Iran (even though the U.S. was actively backing Saddam Hussein’s Iraq in its war with Iran), in order to raise funds for the Nicaraguan “Contras” in defiance of a congressional ban on U.S. support for them. “Cap” Weinberger and his hard-liners were summarily dismissed (eleven junior members of the administration were ultimately convicted of felonies), and Mr Reagan’s popular approval rating dropped from 65 percent to 46 percent in a month, the steepest drop any president has ever experienced. He successfully maintained that he could not recall if he had been aware of the Iran-Contra deal, but he desperately needed to change the subject. One way was to embark on a reconciliation with the Soviet Union.
Despite his frequent inattention to detail, Ronald Reagan had always been genuine in his desire to end the threat of catastrophic nuclear war that had hung over his country and the world for most of his life, and he was willing to be far more radical in pursuit of that objective than any other post-war American president. He was not always equally clear on how that objective might be achieved – at his first meeting with Gorbachev in 1985, he had puzzled the Soviet leader by talking about how they might work together if there were an invasion of aliens from outer space – but even his Strategic Defence Initiative was well meant. The people behind “Star Wars” were seeking a partial defence not for American cities, but for the more defensible missile fields and other strategic installations from which the United States might one day try to wage and win a limited nuclear war – but in Reagan’s mind it truly was a program to protect American citizens from nuclear weapons. Moreover, he genuinely believed that the technology for Star Wars, once developed, should be made available to the Soviet Union as well.
Secretary General Gorbachev: Excuse me, Mr. President, but I don’t take your idea of sharing SDI seriously. You don’t want to share even petroleum equipment, automatic machine tools or equipment for dairies, while sharing SDI would be a second American Revolution. And revolutions do not occur all that often. Let’s be realistic and pragmatic. That’s more reliable.
President Reagan: If I thought that SDI could not be shared, I would have rejected it myself.
Reagan-Gorbachev summit, Reykjavik, Iceland, 11 October 1986 [53]
At the Reykjavik summit, Reagan proposed the elimination of all offensive ballistic missiles (to the horror of his advisors), arguing that basing nuclear deterrence only on slow-moving bombers and cruise missiles would make the world a far safer place, but his unwillingness to abandon the Star Wars project aborted the deal. Once the Iran-Contra scandal broke the next month, however, he changed course radically. He never formally abandoned Star Wars (and Gorbachev, having realized that it was a technological pipe-dream that was very unlikely to happen, eventually dropped his opposition to it), but on every other issue Reagan was willing to make a deal. On Gorbachev’s first visit to the U.S. in 1987, the two men signed the Intermediate-Range Nuclear Forces (INF) Treaty, ending the panic over the introduction of a new generation of nuclear missiles in Europe. When Reagan visited Moscow in June, 1988, he declared that “of course” the Cold War was over, and that his “evil empire” talk was from “another time.” Even before the fall of the Berlin Wall the following year, the United States and the Soviet Union were no longer strategic adversaries, though it took a while longer for the nuclear forces of the two countries to get over the habit of regarding each other as the enemy.
So the first lengthy military confrontation between two nuclear-armed powers ended peacefully, but it offered little consolation for those who were concerned about the future, for it came close to the actual use of nuclear weapons a number of times, and the very process of technological development continually unleashed new instabilities into the system. The stronger rival, the United States, had made almost continuous efforts to retain or regain some kind of numerical or technological superiority that would make its nuclear weapons usable. The Soviet Union, as befitted the weaker rival, pursued a more stolid policy and clung to a strategy of massive retaliation, but it was determined to get its retaliation in first if it concluded that war was inevitable.
There is no evidence that either side ever intended to launch a surprise nuclear attack against the other, but the fact that no nuclear weapons were used during the four decades that the confrontation lasted owed more to good luck than to good judgement. And it was only at the very end of the confrontation that everybody found out what would actually have happened if all those weapons had ever been used.
___________________________________________________________________________
We have, by slow and imperceptible steps, been constructing a Doomsday Machine. Until recently – and then, only by accident – no one even noticed. And we have distributed its triggers all over the Northern Hemisphere. Every American and Soviet leader since 1945 has made critical decisions regarding nuclear war in total ignorance of the climatic catastrophe.
Carl Sagan [54]
At the time of the Cuban missile crisis in 1962, President John F. Kennedy controlled over six thousand nuclear weapons, many of them of even greater explosive power than those the United States deploys today; First Secretary Nikita Khrushchev probably had in the vicinity of eight hundred nuclear bombs and warheads under his command. For two weeks they hovered on the brink of nuclear war, acutely conscious that a single false step could condemn tens of millions of their countrymen to death. But they had absolutely no inkling that the use of those weapons might precipitate a global catastrophe; they were thinking in terms of a super-World War II. Casualties might be three or four times greater, and it would all happen much quicker thanks to nuclear weapons – but there are not, to be crude, all that many ways to die, and apart from radiation sickness there were not many agonies that could befall the residents of cities hit by nuclear weapons that had not been experienced already by those who were caught in the Hamburg and Tokyo firestorms.
It was only in the early 1980s that scientists began to realize that since the early 1950s the world had been living under the permanent threat of a “nuclear winter.” The discovery of what a nuclear war would really do to our planet began in 1971, when a small group of planetologists who had gathered to analyze the results of the Mariner 9 observations of Mars found, to their intense frustration, that the entire planet was covered by an immense dust storm that lasted three months. With nothing better to do, they set about calculating how such a long-lasting dust cloud would alter conditions on the Martian surface. They concluded that it would lower the ground temperature drastically.
Intrigued, they then examined meteorological records here on earth to see if the relatively small amounts of dust boosted into the upper atmosphere by exploding volcanoes produced similar effects. They found that every time a major volcano had gone off over the past few centuries, there had been a small but definite drop in the global temperature, lasting a year or more. So they went on to examine the consequences of stray asteroids colliding with the earth and blasting vast quantities of dust into the atmosphere, as happened from time to time in the geological past – and found evidence of temporary but huge climate changes that caused mass extinctions of living things. Subsequently, other scientists have concluded that up to half a dozen “extinction events” involving the disappearance of a large proportion of the species in existence at the time have occurred over the past billion years, and that the leading suspect in most of these events is a prolonged period of worldwide dark and cold caused by the dust thrown up by very big asteroid strikes.
The original, informal group of scientists who had been involved in the Mariner project in 1971 (they called themselves TTAPS, after the first letters of their last names) went their separate ways but stayed in touch. In early 1982 they were shown an advance copy of a paper written by two scientists working at the Max Planck Institute for Chemistry in West Germany that calculated that massive forest fires ignited by nuclear blasts would inject several hundred million tons of smoke into the atmosphere in a nuclear war, and that the smoke “would strongly restrict the penetration of sunlight to the earth’s surface.” That paper had not even considered the smoke from burning cities and the dust from groundbursts, but the American group saw the significance at once. In 1983 they published their results.
A major nuclear exchange, the TTAPS group concluded, would cover at least the northern hemisphere, and perhaps the entire planet, with a pall of smoke and dust that would plunge the surface into virtual darkness for up to six months and cause the temperature to drop by up to 40 degrees centigrade (72 degrees Fahrenheit) in the continental interiors (which would be far below the freezing point in any season) for a similar period. And when enough of the dust and soot particles had drifted down out of the stratosphere to let the sun’s light back in, the destruction of the ozone layer by thermonuclear fireballs would allow two or three times as much of the harmful portion of the ultraviolet spectrum (UVC) to reach the surface. This could cause lethal sunburn in exposed human beings in less than half an hour and would cause blindness in a short time. However, the scientists added comfortingly, “we have tentatively concluded that a nuclear war is not likely to be followed by an ice age.” [55]
The anticipated and accepted consequences of a major nuclear war already included several hundred million dead in the NATO and Warsaw Pact countries, plus the destruction of most of the world’s industry and the artistic, scientific, and architectural heritage of mankind. Fallout and the disruption of the existing infrastructure were expected to damage northern hemisphere agriculture to the point where hundreds of millions more would succumb to famine and disease in the aftermath. It was hardly a pleasant prospect, but most of humanity would survive, and in the southern hemisphere most societies would probably emerge from the ordeal basically intact. Perhaps the new great powers – South Africa, Brazil, Indonesia, and Australia – would find a way to avoid repeating the experience in another couple of generations. At any rate, history would not come to an end, although that would be small consolation to the surviving Russians and Americans.
But the prospect of a “nuclear winter” transformed these calculations. Now the cold and the dark were forecast to persist worldwide for half a year after a major nuclear war, killing off entire species of animals and plants already gravely weakened by high doses of radioactivity – and when the gloom finally cleared, ultraviolet radiation, starvation, and disease would account for many others. In April 1983, a symposium of forty distinguished biologists considered the effects of the predicted post-nuclear climate changes on living things and concluded that
Species extinction could be expected for most tropical plants and animals, and for most terrestrial vertebrates of north temperate regions, a large number of plants, and numerous freshwater and some marine organisms….Whether any people would be able to persist for long in the face of highly modified biological communities; novel climates; high levels of radiation; shattered agricultural, social, and economic systems; extraordinary psychological stresses; and a host of other difficulties is open to question. It is clear that the ecosystem effects alone resulting from a large-scale thermonuclear war could be enough to destroy the current civilization in at least the Northern Hemisphere. Coupled with the direct casualties of perhaps two billion people, the combined intermediate and long-term effects of nuclear war suggest that eventually there might be no human survivors in the Northern Hemisphere.
Furthermore, the scenario described here is by no means the most severe that could be imagined with present world nuclear arsenals and those contemplated for the near future. In almost any realistic case involving nuclear exchanges between the superpowers, global environmental changes sufficient to cause an extinction event equal to or more severe than that at the close of the Cretaceous when the dinosaurs and many other species died out are likely. In that event, the possibility of the extinction of Homo sapiens cannot be excluded. [56]
The basic physical processes that would produce these consequences were not in question. As to how many nuclear weapons would be needed to produce these effects, the “base-line” case of a war in which five thousand megatons of nuclear weapons were exploded, 57 percent as groundbursts against “hard targets” like missile silos and 20 percent as airbursts over urban and industrial targets, would probably suffice. (The total stockpile of the United States and the Soviet Union in the mid-1980s was about thirteen thousand megatons.) Calculations were complicated, however, by the fact that the overcast screening out the sun would have two components: dust from soil particles vaporized in groundbursts, and soot from burning cities, forests, and grasslands ignited by airbursts.
It takes considerably more dust than soot to produce the same screening effect: two thousand to three thousand high-yield groundbursts would probably be needed. However, that was precisely the range of detonations that would be needed for one side to make a successful first strike on the other’s missile silos, so even a “splendid first strike” that utterly disarmed the enemy, with no attacks on cities and no retaliation, was likely to result in a nuclear winter in the conditions prevailing during the Cold War. The millions of tons of soot given off by burning cities would be a far more efficient screening agent, especially if firestorms produced huge convection columns that drew most of the soot up into the stratosphere where it would remain for many months, and in that case as little as one hundred megatons on one hundred cities could be too much. [57] Even India and Pakistan could be approaching that threshold within the next decade or so – and it is unrealistic to imagine that cities would really be spared in a nuclear war: too many of the vital leadership, command and control, and industrial targets are embedded in them. Cities would be struck, and they would burn.
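Spelled out, the arithmetic behind those figures looks like this – an illustrative restatement of the numbers quoted above, nothing more:

\begin{align*}
\text{base-line exchange} &: 5{,}000 \ \text{Mt total yield}\\
\text{groundbursts } (57\%) &: 0.57 \times 5{,}000 \approx 2{,}850 \ \text{Mt against silos and other hard targets}\\
\text{airbursts } (20\%) &: 0.20 \times 5{,}000 = 1{,}000 \ \text{Mt over urban and industrial targets}\\
\text{superpower stockpile, mid-1980s} &: \approx 13{,}000 \ \text{Mt, so the base-line case uses under two-fifths of it}\\
\text{city-soot threshold} &: \approx 100 \ \text{Mt on } 100 \ \text{cities} \approx 1 \ \text{Mt per city, under one percent of the stockpile}
\end{align*}

In other words, the climatic threshold lay far below the levels of violence that the war plans of either side actually contemplated.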
There was a great deal of research done on “nuclear winter” in the later 1980s, and the hypothesis held up despite major official efforts to discredit it. In 1990 the TTAPS group summarized the research in Science [58], and reported that “the basic physics of nuclear winter has been reaffirmed through several authoritative international technical assessments and numerous individual scientific investigations.” In a book published in 1990, Sagan and Turco concluded that the situation was in some respects even worse than their first estimates: “The industrial, urban and petroleum targets are characterized by combustible materials highly concentrated at relatively few sites; this is why global nuclear winter may be generated with only a few hundred detonations or less….Indeed, with something like a hundred downtowns burning…even a substantial nuclear winter seems possible.” [59] But no further research of any kind has been done on the subject of nuclear winter since 1990. It is symptomatic of the sudden and total loss of interest in the subject of nuclear war after the collapse of the Soviet Union – as though the nuclear weapons themselves had been abolished. But they have not. Most of them are still there, just as lethal as ever.
We are currently enjoying an extended holiday from the reality that war between great powers, in our technological era, means nuclear war. Unless there is a major change in the current international system, however, great-power military confrontations are bound to recur in the decades and centuries to come, and those new nuclear confrontations – between Indians and Pakistanis, between Israelis and Arabs, perhaps eventually between the present great powers once again in some new alliance constellation – will unfold with all the doctrinal mismatches, cultural misunderstandings, and technological hubris that marked the first one. It is no longer possible for the major powers to achieve anything useful against each other by means of war, but both their institutions and their mentalities still presume that military action is an option.
The problem we face was bound to arrive eventually: war is deeply ingrained in our culture, but it is lethally incompatible with an advanced technological civilization. Six decades after Hiroshima we have a clearer grasp of the precise nature of our fate if we fail to solve the problem, but the essence of our dilemma was already obvious to Albert Einstein in 1945: “Everything has changed, except our way of thinking.”