Computing War Narratives
Peer-reviewed article published in APRJA 6.1: MACHINE RESEARCH.
Over the two decades following the French withdrawal from Vietnam in 1954, the United States found itself increasingly enmeshed in a war that evaded expectations. To fulfil its stated policy of stopping the spread of the Soviet sphere of influence, as outlined in the Truman Doctrine, the US had to invest heavily in sustaining its presence in the region. In anticipation that the fall of Vietnam to communism would trigger a chain reaction of Soviet-backed governments rising to power across Asia (the so-called Domino Theory), a visible display of military presence gained a strong sense of urgency. What quickly became apparent to military strategists in the Pentagon was that this conflict did not follow their assumed 'conventions' of regular warfare: the heavy machinery of the US Military was destabilised by the networked nature of the Vietcong insurgency in South Vietnam. In his 1985 book War Without Fronts, Thomas C. Thayer describes how, despite its colossal human and machinic capital, the US Military struggled even to accurately evaluate how the Vietnam War was progressing on a regional level, let alone to contain the spread of communist influence (4). Without any headquarters to target, and with networks of cadres operating across the rural villages and jungles of the country, it became difficult for the US Military to know where to concentrate its attention. Military strategists believed that this problem could, at least in part, be solved with machines. This approach, articulated in General Westmoreland's concept of "the electronic battlefield", can be summed up more generally as a systems-oriented perspective on conflict, in which anything from supply-chain logistics to the political disposition of rural villages can be quantified, managed, and controlled.
In this text, I will unpack the workings of a particular technological apparatus applied in South Vietnam during the war, contextualising it within the culture of systems analysis that became prevalent in US defence strategy following the Second World War. This apparatus - the Hamlet Evaluation System - was in formal operation from 1967 until 1973, and aimed to provide US forces with a vital narrative of progress in their "pacification programmes" in Vietnam. With its disruptive use of computers, the immense scale and scope of its task, and its affordance of a managerial approach to warfare, this system raises a number of issues around the role of the computer as bureaucratic mediator - in this case, tasked with converting complex insurgencies into legible, systematic narratives. What kinds of insight did it provide into the operations of the Vietcong insurgency? How does it fit into the wider ecologies of command and control in the US Military during the first decades of the Cold War? Since the Hamlet Evaluation System, almost fifty years after its inception, is still considered the "gold standard of [counterinsurgency]" (Connable 113), it remains an important case study for those trying to understand how computers structure the institutional bureaucracy of war, and how they are imagined as epistemological tools that can somehow reveal objective truths about the complex, dynamic reality of war.