Arguments that Count: Physics, Computing, and Missile Defense, 1949–2012

By Rebecca Slayton. Cambridge, MA: MIT Press, 2013. 325 pp. $35.00 (hardcover). ISBN: 9780262019446

Arguments that Count is an original, deeply researched, and clearly written account of debates about the role of software in US missile defense both during and after the Cold War. Slayton, trained as a physical chemist before turning to science and technology studies and history of technology, interviewed a number of the protagonists and explored numerous archives along the road to writing this superbly informed book, which presents both a historical narrative and a conceptual analysis of how expertise is configured, and reconfigured over time, in policy debates of the utmost gravity.

Slayton’s historical narrative concerns computing in the service of air defense of the continental United States. The story begins with SAGE (Semi-Automatic Ground Environment), the 1950s project for a radar-based, computerized airplane tracking and interception system designed to defend against Soviet bombers. Bombers took many hours to reach U.S. airspace, leaving a few hours’ breathing space for calculation and (crucially) course correction and error detection by human operators. SAGE required the world’s first truly complex software, and it led to numerous insights — most of them pessimistic — into the difficulty of organizing software work on such a scale. As I argued in The Closed World: Computers and the Politics of Discourse in Cold War America (Edwards 1996), SAGE became an archetypal pattern for a whole series of Cold War military projects. Indeed, its linking of electronic sensors to computerized tracking and targeting presaged virtually all of modern high-tech warfare, of which the Bush-Obama drone assassination programs are merely the most recent incarnation.

In the 1960s, intercontinental ballistic missiles reduced the time available for detecting and responding to a Soviet nuclear attack to a mere half-hour or so. Despite arms control negotiations, superpower nuclear arsenals grew to staggering sizes. In this tense and terrifying environment, proposals for anti-ballistic missile (ABM) defense reached decision points during the Johnson and Nixon administrations. Two successive proposals — Sentinel, a city defense, and Safeguard, designed to protect missile silos in order to preserve a retaliatory capability — relied on ground-launched interceptor rockets guided by computer systems to shoot down incoming Soviet missiles. The Sentinel system was developed but never deployed, while one Safeguard site was briefly established in North Dakota. The Soviet Union also deployed an ABM system designed to protect Moscow.

The idea of ABM defenses regained the spotlight in the early 1980s, when physicist Edward Teller captured Ronald Reagan’s imagination with his plan for space-based weapons such as X-ray or chemical lasers that could target missiles in their boost phase. This became the Strategic Defense Initiative (SDI). Reagan characterized SDI as an umbrella-like “peace shield” that might protect not just a few cities or missile silos but the entire United States, and potentially Europe as well. Reagan even proposed to share SDI technology with the Soviets, a move he saw as a way to shift the whole basis of nuclear deterrence from offensive to defensive weaponry.

A vigorous SDI research program began in 1983. It is often credited (perhaps correctly) as a factor in the breakthrough 1986 Reykjavik summit between Reagan and Soviet leader Mikhail Gorbachev, which led to a major nuclear arms reduction agreement the following year. The research begun under SDI has continued under every subsequent administration, although its focus has shifted from defense against a massive nuclear attack by thousands of missiles to defense against small-scale attacks by rogue nations such as Iran or North Korea. By some estimates, software development consumed 10–20 percent of these programs’ budgets, making missile defense one of history’s largest military software efforts.

Slayton’s account of these events is tightly centered on changing perceptions of the capabilities of software, and especially on differences among disciplinary groups. She focuses on two such groups: physicists, whose public prestige and hyper-confident sensibility led them to prominent positions as technical advisors to government and military decision makers during the Cold War, and “technologists,” a term Slayton uses as a catch-all for engineers, programmers, and software development managers. The arc of her narrative begins in the 1950s with can-do electrical engineers and confident physicists, many of whom seemed certain that software’s putative “flexibility” would make it easy to design trouble-free systems. Frustrating experience with SAGE and, a bit later, with IBM’s long-delayed and buggy OS/360 soon showed otherwise. But difficult is not the same as impossible, and early hiccups in software did not deter optimists from viewing the software problem as essentially solvable.

In the 1960s, growing unease among computer professionals — then a newly minted employment category lacking both professional and technical standards — led to perceptions of a “software crisis” and to the 1968 NATO conference on software engineering, often cited as a watershed in computer science. Effective software engineering practices took far longer to develop than optimistic rhetoric suggested. Especially after the advent of microcomputers in the mid-1970s, proponents of software engineering fought a running battle against rampant “amateur” coding practices and organizational cultures, a battle that continues to this day (Brooks 1995; Raymond 1999; Kelly 2007; Easterbrook and Johns 2009; Easterbrook 2010; Ensmenger 2010).

Both the confidence of physicists and the skepticism of many “technologists” were on prominent display in the ABM debates of the late 1960s and early 1970s. On Slayton’s account, most physicists, and most policy discussions, focused on the extreme difficulty of targeting missiles in flight and on the so-called “offensive advantage”: the fact that inexpensive countermeasures such as decoy balloons, jamming signals, or the then-emerging MIRVs (multiple independently targetable re-entry vehicles, i.e., multiple nuclear warheads mounted on a single missile) could easily confuse ABM tracking and targeting systems. Many physicists found these problems insoluble, at least for the foreseeable future, and opposed the ABM on those grounds without looking further.

But Slayton draws our attention to another, nearly forgotten participant in the debate: Computer Professionals Against the ABM (CPAABM), a small group of mostly elite computer scientists who questioned whether the software necessary to control an ABM system could ever be made to work. Some members of CPAABM, such as MIT’s Joseph Weizenbaum, argued that reliable software could not be produced without many rounds of debugging, real-world testing, and revision — even in stable, well-understood situations. This group pointed out that no ABM system could ever be tested under fully realistic conditions and would, furthermore, face a relentless elaboration of countermeasures by the opposing side, each requiring adjustment (i.e., new code). Incredibly, ABM proponents countered that “there is nothing about the system that essentially hasn’t been done before” and that the necessary computer programs could be tested with simulations (129). 

Slayton does not argue that the issue of software complexity sealed the ABM’s fate; rather, the computer scientists’ critique was “marginalized” by physicists and policymakers alike. For the latter, cost and effectiveness concerns trumped all else, ultimately leading the US and the USSR to conclude a “stop-in-place” ABM treaty (1972) that limited each side to two ABM complexes with a maximum of 100 interceptor missiles each. Slayton contends instead that the ABM debate taught the Defense Department a more sophisticated view of software, one that resulted in a much stronger push within the military to establish “software engineering” principles and quality control. (This led, among other things, to the poorly conceived, kitchen-sink computer language Ada, promoted as the single language that might replace the hundreds supported by the DoD in the 1980s.)

The CPAABM episode was eerily repeated in the mid-1980s, as the SDI built up steam. Once again, hyper-confident military planners, egged on by Edward Teller and other physicists, proposed computer-controlled anti-missile systems. Now, however, the time frame for their operation was a matter of seconds to a few minutes, since a critical goal of SDI was to knock out most missiles as they rose from their launch pads, emitting an intense heat signature that could be seen and targeted by satellites. With over a decade of software engineering experience (and rhetoric) behind them, SDI planners even asserted that bugs in computer programs could be “prevented by design” and that programs would be demonstrated to be bug-free and up to spec by means of automated program-proving techniques (Edwards 1996).

Protagonists in the SDI debates replayed all the same arguments about the “offensive advantage” (Office of Technology Assessment 1985). This time, however, the software issue became much more public and important. A new organization, Computer Professionals for Social Responsibility, formed to oppose both SDI and a related initiative, DARPA’s Strategic Computing Program. Slayton devotes several pages to software expert David Parnas’s widely publicized 1985 resignation from an SDI advisory panel. Parnas came to believe that the necessary software could never be adequately tested, even if it could be built, which he also doubted. Parnas’s view accorded with the experience of computer industry guru Fred Brooks, who described the “essence” of software as “arbitrary complexity” — a phrase to which Slayton returns several times (Brooks 1987). In contrast to the earlier debates, in which physicists had often maintained their confidence in software (or ignored the problem) despite the warnings of computer professionals, this time some physicists joined the chorus of opposition to the SDI based on a mature appreciation of the messages of Parnas, Brooks, and other computer scientists — as well as on a new generation’s much greater direct experience with computers.

A final chapter brings us up to date on the scaled-down but still highly significant missile defense programs of the George W. Bush and Obama administrations, each of which struggled with European and other opposition, as well as with the perennial difficulties of testing under real-world conditions.

The rich, deeply researched discussions in Arguments that Count build on these stories to make larger, and very important, claims about how experts come to understand problems and take positions in public policy debates. Using the contrasting examples of physicists and “technologists,” Slayton seeks to show “how different kinds of arguments become persuasive in relation to particular contexts of policymaking, and how this in turn shapes what counts as authoritative public knowledge” (10). Debates over strategic defenses pitted arguments based on precise calculations and simulations against expert judgment that could never, quite, be reduced to a rational calculus. For example, in correspondence about the ABM proposals in 1969, computer scientist Joseph Weizenbaum wrote: “I personally suspect [that the anti-ballistic missile system] won’t work. My suspicion is strong to the point of being belief. I don’t think that my statement as a professional that I hold this belief obligates me to a mathematical proof” (125). Parnas’s argument, over fifteen years later, was essentially the same: SDI software might work, but in his professional judgment it would be folly to rely on a system that could never be fully tested or fully specified, since it would need to adapt continually to the opposition’s changing systems and countermeasures.

Why were the Weizenbaum and CPAABM critiques marginalized in the 1970s, while the nearly identical critiques mounted by Parnas and CPSR in the 1980s proved highly effective? In Slayton’s view, the “disciplinary repertoires” of physicists and technologists, which “allow experts to rhetorically distinguish subjective, politically controversial aspects of a problem from putatively objective, technical realities” (2), had evolved on both sides. Physicists had greater respect for the “arbitrary complexity” of software, while computer scientists had developed a “repertoire” of language, analogies, and rhetorical strategies that enabled them to defend their expert judgment more effectively, even while lacking definitive proof.

One problem with Slayton’s argument may be a whiff of the “cultural dope” problem: optimists and skeptics alike could always be found among both physicists and technologists, as her own examples show, and neither side was ever fully victorious in policy debates. A second, related issue is that by seeking the differences between physicists and technologists in their disciplinary repertoires, Slayton passes over Donald MacKenzie’s elegant notion of the “certainty trough” (MacKenzie 1990): the idea that those closest to a site of knowledge production (in this case, computer scientists) are likely to see the knowledge as more uncertain than are users, managers, and others with an institutional or research commitment to that knowledge (in this case, the physicists).

Deeply grounded in the secondary literature as well as original research, this excellent book deserves to be widely read, and it could serve a variety of academic purposes, from history to public policy to science and technology studies. It would benefit almost any kind of course on the history of computing. Slayton’s detailed, attentive account of the changing perceptions of software reliability would fit well in a survey course on the history of technology. Her analysis of the formation and uses of expert judgment should find a place in political science and public policy courses, while the story of missile defense makes the book highly appropriate for any course on the history of nuclear weapons and the Cold War.

 

Paul N. Edwards

Professor of Information and History

University of Michigan

 

References

Brooks, Frederick P. 1987. “No Silver Bullet: Essence and Accidents of Software Engineering.” IEEE Computer 20 (4): 10–19.

Brooks, Frederick P. 1995. The Mythical Man-Month: Essays on Software Engineering. Anniversary ed. Reading, MA: Addison-Wesley.

Easterbrook, Steve M. 2010. “Climate Change: A Grand Software Challenge.” In Proceedings of the FSE/SDP Workshop on Future of Software Engineering Research, 99–104.

Easterbrook, Steve M., and Timothy C. Johns. 2009. “Engineering the Software for Understanding Climate Change.” Computing in Science & Engineering 11 (6): 65–74.

Edwards, Paul N. 1996. The Closed World: Computers and the Politics of Discourse in Cold War America. Cambridge, MA: MIT Press.

Ensmenger, Nathan. 2010. The Computer Boys Take Over: Programmers and the Politics of Technical Expertise. Cambridge, MA: MIT Press.

Kelly, Diane F. 2007. “A Software Chasm: Software Engineering and Scientific Computing.” IEEE Software 24 (6): 118–120.

MacKenzie, Donald. 1990. Inventing Accuracy: A Historical Sociology of Ballistic Missile Guidance. Cambridge, MA: MIT Press.

Office of Technology Assessment. 1985. Ballistic Missile Defense Technologies. Washington, DC: US Government Printing Office.

Raymond, Eric. 1999. “The Cathedral and the Bazaar.” Knowledge, Technology & Policy 12 (3): 23–49.