Introduction by Thomas Haigh (preprint)

This is a preprint. Please cite and quote from the published version, as Thomas Haigh, “Introducing the Early Digital” in Thomas Haigh (ed.) Exploring the Early Digital (Charm, Switzerland: Springer, 2019), pp. 1-18.

At the workshop held to prepare this book, Paul Ceruzzi noted that the digital computer was a “universal solvent.” This idea comes from alchemy, referring to an imaginary fluid able to dissolve any solid material. In the 1940s the first digital computers were huge, unreliable, enormously expensive, and very specialized. They carried out engineering calculations and scientific simulations with what was, for the time, impressive speed. With each passing year digital computers have become smaller, more reliable, cheaper, faster, and more versatile. One by one they have dissolved other kinds of machine. Some have largely vanished: fax machines, cassette players, telegraph networks, encyclopedias. In other cases the digital computer has eaten familiar devices such as televisions from the inside, leaving a recognizable exterior but replacing everything within.

For those of us who have situated ourselves within the “history of computing” this provides both a challenge and an opportunity. An opportunity because when the computer is everywhere the history of computing is a part of the history of everything. A challenge because the computer, like any good universal solvent, has dissolved its own container and vanished from sight. Nobody ever sat down in front of their television at the end of the day, pulled out their remote control, and said “let’s do some computing.” Our object of study is everywhere and nowhere.

The startling thing is that “computer” stuck around so long as a name for these technologies. The word originally described a person carrying out complex technical calculations, or “computations.” (Campbell-Kelly and Aspray 1996) The “automatic computers” of the 1940s inherited both the job and the title from their human forebears and retained it even when, after a few years, their primary market shifted to administrative work. Well into the 1990s, everyone knew what a computer was. The word stuck through many technological transitions: supercomputers, minicomputers, personal computers, home computers, pocket computers. Walking through a comprehensive computing exhibit, such as the Heinz Nixdorf MuseumsForum or the Computer History Museum, one passes box after box after box. Over time the boxes got smaller and toggle switches were eventually replaced with keyboards. Computing was implicitly redefined as the business of using one of these boxes to do something, whether or not it involved computations.

Even then, however, other kinds of computers were sneaking into our lives, in CD players, microwave ovens, airbags and antilock brakes, ATMs, singing greeting cards and videogame consoles. Within the past decade the box with keys and a screen has started to vanish. Laptop computers are still thought of as computers, but tablets, smartphones, and high definition televisions are not. To the computer scientist such things are simply new computing platforms, but they are not experienced in this way by their users or thought of as such by most humanities scholars.

Instead many people have come to talk of things digital: digital transformation, digital formats, digital practices, digital humanities, digital marketing, even digital life. Other newly fashioned areas of study, such as algorithm studies and platform studies, also define themselves in terms of distinctly digital phenomena. In some areas “digital” now denotes anything accomplished using computers that requires an above-average level of technical skill. This is the usual meaning of “digital” in the “digital humanities” and in various calls for digital STS and the like.

Using “digital” as a more exciting synonym for “computerized” is not wrong, exactly, as modern computers really are digital, but it is arbitrary. Historians of computing have so far been somewhat suspicious of this new terminology, only occasionally using it to frame their own work (Ensmenger 2012). I myself wrote an article titled “We Have Never Been Digital” (Haigh 2014). Yet historians of computing cannot expect the broader world to realize the importance of our community’s work to understanding digitality if we are reluctant to seriously engage with the concept. Instead the work of conceptualizing digitality and its historical relationship to computer technology has been left largely to others, particularly to German media scholars (Kittler 1999, Schröter and Böhnke 2004).

In this volume, we approach digitality primarily from within the history of computing community, rethinking the technologies of computation within a broader frame. Subsequent efforts will build on this reconceptualization of computational digitality as an underpinning to the study of digital media. This book therefore makes the case that historians of computing are uniquely well placed to bring rigor to discussion of “the digital” because we are equipped to understand where digital technologies, platforms and practices come from and what changes (and does not change) with the spread of the digital solvent into new areas of human activity. Hence the title of our book, “Exploring the Early Digital.”

Digital Materiality

Let’s start with what digitality isn’t. In recent usage, digital is often taken to mean “immaterial.” For example, entertainment industry executives discuss the shift of consumers towards “digital formats” and away from the purchase of physical disks. The woman responsible for the long-running Now That’s What I Call Music series of hit music compilations was recently quoted (Lamont 2018) as saying that songs for possible inclusion are now sent to her by email, unlike the “more glamorous… analogue era, when labels sent over individual songs on massive DAT tapes by courier.” Such statements make sense only if one adopts a definition of “digital” that excludes all disks and tapes. That is a stretch, particularly as the D in DVD stands for Digital. So does the D in DAT.

The recent idea of “the digital” as immaterial is both ridiculous and common, deserving its own historical and philosophical analysis. Langdon Winner’s classic Autonomous Technology, which explored the history of the similarly odd idea of technology as a force beyond human control, might provide a model. Some important work in that direction has been done in “A Material History of Bits” (Blanchette 2011). Although the modern sense of “digital” was invented to distinguish between different approaches to automatic and electronic computing, the characteristics it described are much older. The mathematical use of our current decimal digits began in seventh-century India before picking up steam with the introduction of zeros and the positional system in the ninth century. Devices such as adding machines incorporated mechanical representations of digits. For example, in his chapter Ronald Kline quotes John Mauchly, instigator of the ENIAC project and one of the creators of the idea of a “digital computer,” explaining the new concept with reference to “the usual mechanical computing machine, utilizing gears.”

Most of the chapters in this book deal with specific forms of digital materiality, emphasizing that the history of the digital is also the history of tangible machines and human practices. Ksenia Tatarchenko’s contribution deals with Soviet programmable calculators. These displayed and worked with digits, like the machines Mauchly described, but exchanged gears for electronics.

Other digital technologies, used long after the invention of electronic computers, avoided electronics entirely. Doron Swade provides a close technical reading of the ways in which a complex mechanical odds-making and ticket selling machine represented and manipulated numbers. 

Paul Ceruzzi’s chapter explores Zatocoding, a digital method of categorizing and retrieving information using notches cut into the side of punched cards. Its creator, Calvin Mooers, had early experience with digital electronics and used information theory to help formulate a highly compressed coding scheme able to combine many possible index terms. His work was a foundation for modern information retrieval systems, including web search engines. Yet when Mooers went into business he was more excited by the promise of paper-based digital information media. The cards represented information digitally, through different combinations of notches, which were read using what Ceruzzi calls a “knitting needle like device.”
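
To make the coding idea concrete, the Python sketch below (with invented descriptor names and parameters, not drawn from Mooers’s own tables) illustrates superimposed coding of the general kind he devised: each index term maps to a small pseudo-random set of notch positions, the sets for all of a card’s terms are superimposed, and a search selects every card notched at all of the query term’s positions, at the cost of occasional “false drops.”

    # Illustrative sketch of superimposed ("Zatocoding"-style) edge-notch coding.
    # Parameters and report titles are invented for the example.
    import hashlib

    NUM_POSITIONS = 40    # notch positions along the card edge
    NOTCHES_PER_TERM = 4  # positions assigned to each index term

    def positions(term):
        """Derive a fixed pseudo-random set of notch positions for a term."""
        pos = set()
        for byte in hashlib.sha256(term.encode()).digest():
            pos.add(byte % NUM_POSITIONS)
            if len(pos) == NOTCHES_PER_TERM:
                break
        return pos

    def encode_card(terms):
        """Superimpose (union) the notch sets of all of a card's index terms."""
        notches = set()
        for term in terms:
            notches |= positions(term)
        return notches

    def select(cards, term):
        """A query passes only the cards notched at every position for the term."""
        wanted = positions(term)
        return [title for title, notches in cards.items() if wanted <= notches]

    cards = {
        "Report A": encode_card(["rockets", "telemetry"]),
        "Report B": encode_card(["chemistry", "polymers"]),
        "Report C": encode_card(["telemetry", "antennas"]),
    }
    print(select(cards, "telemetry"))  # expected: ['Report A', 'Report C']
    # Packing many terms into few positions occasionally selects extra cards
    # ("false drops"), the trade-off Mooers analyzed with information theory.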

Digital vs. Analog

The antonym of digital is “analog,” not “material.” As Kline explains, this distinction arose during the 1940s, with the spread of automatic computers. He locates it in discussions between the creators of digital computers, tracing its initial spread through enthusiasts for the new metascience of cybernetics, building on his work in (Kline 2015). Both kinds of machine could automate the solution of mathematical problems, whether at the desk, in the laboratory, or, as control equipment, in the field. The two kinds of computer represented the quantities they worked on in fundamentally different ways.

Digital machines represented each quantity as a series of digits. Their mechanisms automated the arithmetic operations carried out by humans, such as addition and multiplication, mechanizing the same arithmetic tricks such as carrying from less significant digits to more significant digits or multiplying by repeated addition. Within the limits imposed by their numerical capabilities the machines could be relied upon (when properly serviced, which was not a trivial task) to be accurate and to give reproducible results. Machines with more digits provided answers with more precision.
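
As an illustration of the kind of arithmetic being mechanized, the short Python sketch below (not modeled on any particular machine) holds each number as a fixed-width list of decimal digits, adds by propagating carries from the least significant position upward, and multiplies by repeated addition.

    # Sketch: digit-by-digit arithmetic of the kind mechanized in digital machines.
    # The register width and decimal representation are illustrative choices.
    WIDTH = 10  # number of decimal digit positions in the "register"

    def to_register(n):
        """Store a non-negative integer as a fixed-width list of decimal digits."""
        digits = [0] * WIDTH
        for i in range(WIDTH):
            digits[i] = n % 10   # position 0 holds the least significant digit
            n //= 10
        return digits

    def add(a, b):
        """Add two registers, carrying from less to more significant digits."""
        result = [0] * WIDTH
        carry = 0
        for i in range(WIDTH):
            total = a[i] + b[i] + carry
            result[i] = total % 10
            carry = total // 10
        return result  # a carry out of the top position would overflow the register

    def multiply(a, n):
        """Multiply a register by a small integer through repeated addition."""
        result = to_register(0)
        for _ in range(n):
            result = add(result, a)
        return result

    def value(register):
        return sum(d * 10 ** i for i, d in enumerate(register))

    print(value(add(to_register(4785), to_register(296))))  # 5081
    print(value(multiply(to_register(1234), 7)))            # 8638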

In analog machines, in contrast, each quantity being manipulated was represented by a distinct part of the machine such as a shaft, a reservoir, or an electrical circuit. As the quantity represented grew or diminished the component representing it would change likewise. The shaft would spin more or less rapidly, the reservoir empty or fill, the voltage across the circuit rise or fall. This explains the name “analog computer.” An analogy captures the relationship between things in the world, defining a specific correspondence between each element of the analogy and something in the system being modelled.  Some, like model circuits used to simulate the behavior of electrical power networks, were essentially scale models. Others substituted one medium for another, such as water for money. The accuracy of an analog computer was a matter of engineering precision. In practice analog computers were specialized for a particular kind of job, such as solving systems of differential equations. They were often faster than digital computers, but usually accurate to only a few significant figures.

Once the categories of analog and digital computers were established it became natural to project the idea of analog vs. digital back onto earlier technologies. In these broader terms any discrete representation of numbers appears digital, whereas continuous representations appear analog. Kline notes that some computing specialists of the 1950s were wary of “analog” for this reason, preferring the precision gained by speaking of “continuous” representations. Adding machines, calculating machines, cash registers, tabulating machines, and many other common technologies were digital. These machines typically represented each digit as a ten-faced cog, rotating it to store a larger value. Newer, higher-speed devices stored numbers as patterns in electromagnetic relay switches or electronic tubes. Other calculating devices, going back to ancient navigational tools such as the astrolabe, were analog. So was the once-ubiquitous slide rule (approximating the relationship between a number and its logarithm). Automatic control devices before the 1970s were, for the most part, analog: thermostats, the governors that regulated steam engines, and a variety of military fire control and guidance systems. As David Mindell has shown (Mindell 2002), engineers across a range of institutions and disciplinary traditions developed these techniques long before the mid-century fad for cybernetics provided a unified language to describe them.

Although the application of “digital” to computing and communication was new in the 1940s and bound up with automatic computers, many of the engineering techniques involved were older and arose in other contexts. In his chapter, Doron Swade explores several technologies from the nineteenth and early twentieth centuries, including early recreational machines which embedded computational capabilities: a golf simulator, and “automatic totalizator” machines used by dog racing tracks to calculate odds in real time based on ticket sales. Swade notes that these machines have been left out of traditional master narratives in the history of computing, which focus on scientific calculation and office administration as the primary precursors of digital computer technology. His paper demonstrates the benefits of moving beyond the limitations imposed by this traditional frame and taking a broader approach to the study of computational technology. Indeed, even the categories of digital and analog, according to Swade, are sufficiently tangled in engineering practice for him to challenge the “faux dichotomous categories used retrospectively in the context of pre-electronic machines.”

Computer Programs Are Inherently Digital

Programmability, often seen as the hallmark of the computer, is itself a fundamentally digital concept. As John von Neumann wrote when first describing modern computer architecture, in the “First Draft of a Report on the EDVAC,” an “automatic computing system is a (usually highly composite) device which can carry out instructions to perform calculations of a considerable order of complexity….” (von Neumann 1993). In that formulation, the device is a computer because it computes: it carries out laborious and repetitive calculations according to a detailed plan. It is automatic because, like a human computer but unlike a calculating or adding machine, it goes by itself from one step in the plan to the next.

Kline’s contribution notes that digital and analog were not the only possible terms discussed during the 1940s. Some participants advocated strongly for the established mathematical terms “continuous” (instead of analog) and “discrete” (instead of digital). These distinctions apply not only to number representations, on which analysis has usually focused, but also to the way the two kinds of computer carry out their calculations. The latter distinction is perhaps the more fundamental, as it explains the ability of digital computers to carry out programs.

Analog computers work continuously, and each element does the same thing again and again. Connections between these components were engineered to mimic those between the real world quantities being modelled. A wheel and disc linked one shaft’s rotation to another’s, a pipe dripped fluid from one reservoir to another, an amplifier tied together the currents flowing in two circuits.

The MONIAC analog computer (Fig. 1.1), designed by William Phillips to simulate the Keynesian understanding of the economy, illustrates these continuous flows. Different tanks filled or emptied to represent changing levels of parameters such as national income, imports, and exports. Adjustable valves and plastic insets expressing various functions governed the trickling of water from one chamber to another. This gave a very tangible instantiation to an otherwise hard-to-visualize network of equations, as economic cycles played themselves out and the impact of different policy adjustments could be tested by tweaking the controls.

Fig. 1.1 The Phillips Machine, or MONIAC, illustrates two key features of analog computing: the “analogy” whereby different parts of the machine represent different features of the world and the fixed relationships between these parts during the computation, which consisted of continuous processes rather than discrete steps. (Reproduced from (Barr 2000), courtesy of Cambridge University Press)

In a very obvious sense, analog computations occur continuously. In contrast a computer, or to use the vocabulary of the 1940s an automatic digital computer, breaks a computation into a series of discrete steps and carries them out over time. At each stage in the computation the mechanism may work on a different variable. For example, most early digital computers had only one multiplying unit, so every pair of numbers to be multiplied had first to be loaded into two designated storage locations. Over the course of the computation the numbers loaded into those locations would refer to completely different quantities in the system being modelled.

The first step in planning to apply a digital computer to a problem was to figure out what steps the machine should carry out and what quantities would be stored in its internal memory during each of those steps. Two of the earliest efforts to plan work for automatic computers were made by Lovelace and Babbage in the 1830s and 1840s for the unbuilt Analytical Engine (Fig. 1.2) and by the ENIAC team in 1943 to plan out the firing table computations for which their new machine was being built (Fig. 1.3). When I explored these episodes in collaboration with Mark Priestley we were startled to realize that both teams came up with essentially the same diagramming notation when setting out sample applications for their planned computers: a table in which most columns represented different storage units of the machine and each row represented one step in the algorithm being carried out. A single cell thus specified the mathematical significance of an operation being carried out on one of the stored quantities.
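
A toy example may suggest the flavor of this notation. The Python sketch below (my own illustration, not a reconstruction of either historical diagram) traces a multiplication carried out by repeated addition, printing one row per step of the computation and one column per storage location.

    # Illustrative trace table: rows are successive steps of a computation,
    # columns are the machine's storage locations. The program multiplies
    # a by b through repeated addition, as an early planner might set it out.
    a, b = 6, 4                  # quantities loaded into two storage locations
    accumulator, counter = 0, 0  # two further storage locations

    print(f"{'step':>4} {'a':>4} {'b':>4} {'acc':>4} {'count':>6}  operation")
    step = 0
    while counter < b:
        accumulator += a         # acc := acc + a
        counter += 1             # count := count + 1
        step += 1
        print(f"{step:>4} {a:>4} {b:>4} {accumulator:>4} {counter:>6}"
              "  acc := acc + a; count := count + 1")
    print(f"result: {accumulator}")  # 24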

Fig. 1.2 This 1842 table, prepared by Ada Lovelace, is a trace of the expected operation of Babbage’s Analytical Engine running a calculation. Each line represents 1 of 25 steps in the computation (some of them repeated). Most of the columns represent quantities stored in particular parts of the engine.

Fig. 1.3 A detail from the ENIAC project diagram PX-1-81, circa December 1943. As with the Babbage and Lovelace table, the rows represent discrete steps in the calculation, and the columns (32 in the full diagram) represent different calculating units within ENIAC.

The word “program” was first applied in computing by the ENIAC team (Haigh and Priestley 2016). Our conclusion was that its initial meaning in this context was simply an extension of its use in other fields, such as a program of study, a lecture program, or a concert program. In each case the program was a series of discrete activities, sequenced over time. An automatic computer likewise followed a program of operations. The word was first applied to describe the action of a unit within ENIAC that triggered actions within other units: the master programmer. (The same term is given to the electro-mechanical control unit in an automatic washing machine, though we are not sure which came first). Quickly, however, “program” came to describe what von Neumann called “The instructions which govern this operation,” which “must be given to the device in absolutely exhaustive detail” (von Neumann 1993). “Programmer” became a job title instead of a control unit. Thus “programmer” and “computer” passed between the domains of human and machine at around the same time, but in opposite directions.

Because each part of an analog computer carried out the same operation throughout the computation, analog computer users did not originally talk about “programming” their machines, though as digital computers became more popular the term was eventually applied to configuring analog computers. In contrast, digital computers built under the influence of von Neumann’s text adopted very simple architectures, in which computations proceeded serially as one number at a time was fetched from memory to be added, subtracted, multiplied, divided or otherwise manipulated. Such machines possessed a handful of general purpose logic and arithmetic capabilities, to be combined and reused as needed for different purposes.

As an automatic computer begins work, its instructions, in some medium or another, are present within it and its peripheral equipment. If the computer is programmable, then these instructions are coded in a form that can be changed by its users. In operation it translates some kind of spatial arrangement of instructions into a temporal sequence of operations, moving automatically from one task to the next.

Early computers stored and arranged these instructions in a variety of media. ENIAC, the first programmable electronic computer, was wired with literal chains and branches, along which control pulses flowed from one unit to another to trigger the next operation. In his chapter, Tristan Thielmann uses ideas from media theory to explore ENIAC’s user interface, specifically the grids of neon bulbs it used to display the current content of each electronic storage unit. Because ENIAC could automatically select which sequence of operations to carry out next, and shifted between them at great speed, its designers incorporated these lights and controls to slow down or pause the machine to let its operators monitor its performance and debug hardware or configuration problems.

The chapter Mark Priestley wrote with me for this volume explores the range of media used to store programs during the 1940s and the temporal and spatial metaphors used to structure these media into what would eventually be called a “memory space.” Several computers of the mid-1940s read coded instructions one at a time, from paper tape. These tapes could be physically looped to repeat sequences. Computers patterned after von Neumann’s conception for EDVAC stored coded instructions in one or another kind of addressable memory. Whether this was a delay line, tube memory or magnetic drum had major implications for the most efficient way of spacing the instructions over the medium. Like ENIAC, these machines could branch during the execution of a program, following one or another route to the next instruction depending on the results of previous calculations. These media held both instructions and data, laying the groundwork for later systems that encoded text, audio, and eventually video data in machine readable digital forms.

The fundamental technology of digital computers and networks takes many different shapes and supports many different kinds of practice. In his chapter, Martin Campbell-Kelly explores the variety of use practices that grew up around one of the earliest general purpose digital computers, the EDSAC. Its users were the first to load programs from paper tape media into electronic memory, quickly devising a system that used the computer itself to translate mnemonics into machine code as it read the tape. The ability of computers to treat their own instructions as digital data to be manipulated has been fundamental to their diffusion as the universal machines of the digital age. Some practices from the first computer installations, such as the preparation of data and instructions in machine readable digital form, or the practice of debugging programs by tracing their operation one instruction at a time, spread with the machines themselves into many communities. Others were specific to particular areas of scientific practice and remained local.

Almost all of the earliest projects to build automatic digital devices were sponsored in some way or another by government money. Charles Babbage was bankrolled by the British government, as was Colossus. Konrad Zuse relied on the patronage of the Nazi regime while ENIAC was commissioned by the United States Army. With the exception of the (widely misunderstood) connection of what became the Internet to the military’s interest in building robust networks, the role of the state in the later exploitation and improvement of digital technology is less widely appreciated. Yet, as William Aspray and Christopher Loughnane show in their chapter in this volume, the state remained vitally important in structuring the early use of digital computers as a procurer of digital technologies, a sponsor of research, and a regulator of labor markets. Their chapter illustrates the contribution broader-based historical analysis can provide to understanding the spread of digital technology, in contrast to popular history with its focus on brilliant individuals, as demonstrated by the title of the recent blockbuster “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution” (Isaacson 2014).

Digital Information

The chapters collected here give a new and broader idea of the material culture of digitality. Nothing is immaterial. Yet there is something special about the relationship of bits to their material representations: different material representations are, from a certain viewpoint, interchangeable. Digital information can be copied from one medium to another without any loss of data, and the same sequence of bits can be recovered from each. Transcribe the text of a book into a text file, save that file, compress it, email it, download it, and print it out. The text has been represented in many material forms during this process, but after all those transformations and transcriptions one retains the same series of characters. Matthew Kirschenbaum called this the “formal materiality” of digitality (Kirschenbaum 2007). Discussion of “digital formats” as an alternative to material media, misleading as it is, captures something about the truth of this experience.
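
A few lines of Python can illustrate this interchangeability: the same sequence of bytes survives compression, copying to a new file, and decompression, as a hash comparison confirms. (The text and file name are invented for the example.)

    # Sketch: the same bit sequence survives a chain of material transformations.
    import hashlib
    import zlib

    text = "A book transcribed into a text file.".encode("utf-8")

    compressed = zlib.compress(text)     # "save and compress" the text
    with open("copy.bin", "wb") as f:    # write it out to a new medium
        f.write(compressed)

    with open("copy.bin", "rb") as f:    # read it back from that medium
        restored = zlib.decompress(f.read())

    # After all the transformations, exactly the same sequence of bits remains.
    print(hashlib.sha256(text).hexdigest() == hashlib.sha256(restored).hexdigest())  # True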

Claude Shannon’s “Mathematical Theory of Communication” (Shannon and Weaver 1949), popularized as “information theory,” is foundational to our sense of “the digital” and to the modern sense of “information” (Kline 2006, Soni and Goodman 2017). Maarten Bullynck’s chapter in this volume examines the early adoption and development of Shannon’s earlier efforts to “synthesize” networks of relay switches from logical descriptions defined using Boolean algebra. Such circuits provided the building blocks of digital machines, including early computers. He suggests that it took a decade of work by practicing engineers, and the creation of new craft practices and diagramming techniques, to turn Shannon’s work into a practical basis for digital electronic engineering. This also reminds us that Shannon’s work had a very specific institutional and technological context, looking backward to new digital communication techniques developed during WWII and forward to anticipate the generalization of these as a new basis for routine telecommunication.
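
The core of that earlier insight can be suggested in a few lines of Python (a simplified illustration, not Bullynck’s analysis): series connections of relay contacts behave like Boolean AND, parallel connections like OR, so a circuit can be “synthesized” directly from a logical expression and checked against its truth table.

    # Sketch of Shannon's 1938 correspondence between switching circuits and
    # Boolean algebra: series contacts act as AND, parallel contacts as OR.
    from itertools import product

    def series(*branches):
        """Current flows only if every contact on the series path is closed."""
        return lambda state: all(branch(state) for branch in branches)

    def parallel(*branches):
        """Current flows if any parallel path is closed."""
        return lambda state: any(branch(state) for branch in branches)

    def contact(name, normally_closed=False):
        """A relay contact controlled by the named input."""
        return lambda state: state[name] != normally_closed

    # Synthesize a circuit for the expression (x AND y) OR (NOT x AND z).
    circuit = parallel(series(contact("x"), contact("y")),
                       series(contact("x", normally_closed=True), contact("z")))

    for x, y, z in product([False, True], repeat=3):
        state = {"x": x, "y": y, "z": z}
        assert circuit(state) == ((x and y) or ((not x) and z))
    print("circuit matches its Boolean specification")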

Over time, the connection of digitality and information with computer technology grew ever stronger and tighter. “Information” had previously been inseparable from a process in which someone was informed of something; it now became what Geoffrey Nunberg memorably called an “inert substance” that could be stored, retrieved, or processed (Nunberg 1997). “Information” became a synonym for facts or data – and in particular for digitally encoded, machine readable data. This process gained steam with the spread of the idea of “management information systems” within corporate computing during the 1960s (Haigh 2001), followed by the widespread discussion of “information technology” from the 1970s and the introduction of the job title “chief information officer” for corporate computing managers in the 1980s. I suspect that the root of all this is in the digital engineering practices used to build computers and other devices during the 1950s. Shannon-esque digital communication was taking place within computers, as memory tanks, tape drives, printers, and processing units swapped signals. This is the context in which it became natural to think of a process of information transfer occurring without human involvement and, with a slight linguistic and conceptual slippage, to think of the stored data itself as “information” even when it was not being communicated.

When Was the Early Digital?

Our sense of what, exactly, “early digital” means shifted during our discussions. It originally appealed as something indicating an era of historical interest, not unlike the “early modern” period referred to by historians of science. This volume focuses primarily on a time period from the 1930s to the 1950s, an era that provided the original locus for work on the history of modern computing. It is during this era that the concepts of “analog” and “digital” were invented, as were the technologies such as programmable computers and electronic memories that we associate with “the digital.” It is the era in which digital computational technologies are most clearly defined against analog alternatives, and a period in which their unmistakable and sometimes monumental materiality makes it clearest that digitality does not mean invisibility.

Yet “early digital” has an attractive temporal flexibility, and encompasses other devices that are not always considered to be “computers,” stretching back in time to Babbage’s planned difference engine and wartime devices such as the codebreaking Bombes. The phrase initially appealed to me because, like “modern” and “early modern,” its boundaries are obviously permeable. Tatarchenko, for example, looks at a Late Soviet version of the Early Digital, which spread during the early 1980s and centered on a more obviously digital technology: the programmable calculator. Users coded programs, including games, as sequences of digits displayed on tiny screens. From the viewpoint of a future in which humans have given up corporeal existence to live forever in cyberspace, the present day would seem like the very early digital.

As our thinking evolved over the course of several workshops, we came to think of “early digital” less as something defining a general epoch in a society and more as a very local designation describing the transformation of a specific practice within a specific community. In particular, we do not believe that there was a single “early digital” epoch, or that one can follow those who talk about “digital revolutions” into a view of the world in which a single event or invention creates a universal rupture between digital and pre-digital worlds.

The first instinct of the responsible historian is to challenge assumptions of exceptionalism, whether made for nations or for technologies. Discourses of the kind Gabrielle Hecht termed “rupture talk” (Hecht 2002) have grown up around many new technologies. These claim that the technology represents a break with all prior practice so dramatic that historical precedents are irrelevant. The now fashionable idea of the “post digital” is likewise premised on the idea that we are currently on the far side of some kind of digital rupture.

Recognizing that rhetoric of a “digital transformation” parallels claims made for nuclear power or space exploration as the defining technology of a new epoch, the careful historian should begin with a default assumption that computer technology is neither exceptional nor revolutionary. Yet all around us we see the rebuilding of social practices around computers, networks, and digital media. Even  the most careful historian might be moved to entertain the hypothesis that some kind of broadly based “digital transformation” really is underway. The challenge is to find a point of engagement somewhere between echoing the naïve boosterism of Silicon Valley (Kirsch 2014) and endorsing the reflex skepticism of those who assume that digital technology is just a novel façade for the ugly business of global capitalism and neoliberal exploitation.

As the papers gathered in this volume begin to suggest, technologies and practices did not become digital in a single world-historic transformation sometime in the 1940s or 1950s (or the 1980s or 1990s) but in a set of localized and partial transformations enacted again and again, around the world and through time, as digital technologies were adopted by specific communities. Within those communities, one can further segment the arrival of the early digital by task. The EDSAC users discussed by Campbell-Kelly were using a digital computer to solve equations, reduce data, and run simulations but it would be decades before they could watch digital video or count their steps digitally. From this viewpoint, the early digital tag indicates the period during which a human practice is remade around the affordances of a cluster of digital technologies.

The early digital is also structured geographically. For most of humanity it arrived within the past five years, with the profusion of cheap smartphones. The poorest billion or two people are still waiting for it to begin.

Warming Up To the Early Digital

Our opportunity as historians of computing confronting a so-called “digital revolution” is to explain, rigorously and historically, what is really different about digital electronic technology, how the interchangeability of digital representations has changed practices in different areas, and how the technological aspects of digital technologies have intertwined with political and social transformations in recent decades. This means taking the “digital” in “digital computer” as seriously as the “computer” part.

At one of the workshops in the Early Digital series the phrase “I’m warming up to the Early Digital” was repeated by several participants, becoming, by the end of the event, a kind of shared joke. The new phrase was beginning to feel familiar, useful as a complement to more established alternatives such as “history of computing” and “media history.”

The identity of “history of computing” was adopted back in the 1970s, at a time when only a small fraction of people had direct experience with digital electronic technologies. Its early practitioners were computer pioneers, computer scientists, and computer company executives – all of whom identified with “computing” as a description of what people did with computers as well as with “the computer” as a clearly defined artifact.

The history of computing community devoted a great deal of its early energy to deciding what was and what was not a computer, a discussion motivated largely by the desire of certain computer pioneers, their family members, and their friends to name an “inventor of the computer.” As I have discussed elsewhere (Haigh, Priestley et al. 2016) this made the early discourse of the field a continuation of the lawsuits and patent proceedings waged since the 1940s. Though these disputes, alas, continue to excite some members of the public, they have little to offer scholars and were resolved to our satisfaction (Williams 2000) by issuing each early machine with a string of adjectives its fans were asked to insert between the words “first” and “computer.”

This did not resolve the larger limitation of “the history of computing” as an identity, which is that it makes some questions loom disproportionately large while banishing others from view. Our inherited focus on the question of “what is a computer,” defined with the fixation of some historically minded computer scientists and would-be philosophers on “Turing completeness,” has forced a divorce between closely related digital devices. Digital calculators, for example, have been discussed within the history of computing largely for the purposes of discounting them as not being computers, and therefore not being worthy of discussion. Yet, as Tatarchenko’s chapter shows, electronic calculators (the most literally digital of all personal electronic devices) shared patterns of usage and practice, as well as technological components, with personal computers.

Neither can the history of computing cut itself off from other historical communities. Computing, informing, communicating, and producing or consuming media can no longer be separated from each other. Thirty or forty years ago that statement might have been a provocative claim, made by a professional futurist or computer scientist looking for a lavish book deal. Today that digital convergence is the taken for granted premise behind much of modern capitalism, embodied in the smartphones we carry everywhere. Yet the histories of these different phenomena occupy different literatures, produced by different kinds of historians writing in different journals and in many cases working in different kinds of academic institution. “Information history” (Black 2006, Aspray 2015) happens for the most part within information schools, media history within media studies, and so on.

My own recent experience writing about the 1940s digital electronic codebreaking machine Colossus makes clear the distorting effect this boundary maintenance has had on our historical understanding. Since its gradual emergence from behind government secrecy in the 1970s, Colossus has been claimed by its vocal proponents (Copeland 2013) to have been not just a computer, in fact the first fully operational digital electronic computer, but also programmable. These claims, first made (Randell 1980) at a time when its technical details were less well documented than they are today, do not hold up particularly well – the basic sequence of operations of the Colossus machines was fixed in their hardware and they could carry out no mathematical operation more complicated than counting. So “computer” is a bad fit, whether applied according to the usage of the 1940s (carrying out complicated series of numerical operations) or that of later decades (programmable general-purpose machines). The Colossus machines did, however, incorporate some complex electronic logic and pioneer some of the engineering techniques used after the war to build early electronic computers. Their lead engineer, Tommy Flowers, spent his career in telecommunications engineering dreaming of building an all-electronic telephone exchange (Haigh 2018). Decades later, he was still more comfortable viewing the devices as “processors” rather than computers. They applied logical operations to transform and combine incoming bitstreams, anticipating some of the central techniques behind modern digital communications. The Colossus machines played an appreciable, if sometimes exaggerated, part in determining the course of the Second World War. Within the frame of the “history of computing,” however, they matter only if they can somehow be shoehorned into the category of computers, which has motivated a great deal of special pleading and fuzzy thinking. Positioning them, and other related machines used at Bletchley Park, as paradigmatic technologies of the “early digital” requires no intellectual contortions.

The “early digital” is also a more promising frame than “the history of computing” within which to examine digital networking and the historical development of electronic hardware. Historians of computing have had little to say about the history of tubes, chips, or manufacturing – these being the domain of the history of engineering. While important work (Bassett 2002, Lecuyer 2006, Thackray, Brock et al. 2015) has been done in these areas by scholars in close dialog with the history of computing, the material history of digital technologies has not been integrated into the mainstream of the history of technology. Overview accounts such as (Campbell-Kelly and Aspray 1996) have focused instead on computer companies, architectures, crucial products, and operating systems.

The need to engage with the history of digital communications is just as important. Perhaps mirroring the origins of the Internet as a tool for the support of computer science research, scholarly Internet history such as (Abbate 1999) and (Russell 2014) has fallen inside the disciplinary wall surrounding the computer as a historical subject. On the other hand, the history of mobile telephony, which in the late 1990s became the story of digitally networked computers, has not. At least in the United States (and particularly within the Society for the History of Technology) historians of communications have so far focused on analog technologies – though anyone following the story of telephony, television, radio, or the successors to telegraphy past a certain point in time will have to get to grips with the digital. So far, though, the history of computing and history of communications remain largely separate fields despite the dramatic convergence of their respective technologies and industries since the 1980s. Some scholars within media and communication studies have been more engaged than historians of computing in situating digital technologies within broader political contexts (Mosco 2004, Schiller 2014), something from which we can surely learn.

Media studies and media archaeology (Parikka 2011) have their own active areas of historical enquiry. This work is often written without engagement with the history of technology literature, and in some cases, such as (Brügger 2010), has deliberately eschewed the disciplinary tools and questions of history to situate the exploration of “web history” as a kind of media studies rather than a kind of history. Enthusiasm for “platform studies” (Montfort and Bogost 2009) has similarly produced a new body of historical work only loosely coupled to the history of computing and history of technology literatures. Neither have the authors of pathbreaking work at the intersection of digital media studies, cultural studies, and literature such as (Chun 2011) and (Kirschenbaum 2016) found it useful to identify as historians of computing. The “history of computing” does not resonate in most such communities, whereas study of “the early digital” may be more successful in forging a broader scholarly alliance.

Conclusions

As the computer dissolves itself into a digital mist we are presented with a remarkable opportunity to use lessons from decades of scholarship by historians of computing to bring rigor and historical perspective to interdisciplinary investigation of “the digital.” By embracing, for particular questions and audiences, the frame of the Early Digital as a new way of looking at the interaction of people with computers, networks, and software we can free our work from its historical fixation on “the computer” as a unit of study.

The papers gathered here form a series of provocative steps in this direction, spreading out from different starting points to explore different parts of this new terrain. Our position recalls Michael Mahoney’s description, a generation ago, of the challenge faced by the first historians of computing: “historians stand before the daunting complexity of a subject that has grown exponentially in size and variety, looking not so much like an uncharted ocean as like a trackless jungle. We pace on the edge, pondering where to cut in.” (Mahoney 1988) Today we face in “the digital” a still larger and more varied jungle, but like so many exotic places that daunted Western explorers, it is already inhabited. To comprehend it fully we will need to find ways to communicate and collaborate with the tribes of scholars who have made their homes there.

References

Abbate, J. (1999). Inventing the Internet. Cambridge, MA, MIT Press.

Aspray, W. (2015). “The Many Histories of Information.” Information & Culture 50(1): 1-13.

Barr, N. (2000). The History of the Phillips Machine. A.W.H. Phillips: Collected Works in Contemporary Perspective. New York, NY, Cambridge University Press: 89-114.

Bassett, R. K. (2002). To The Digital Age: Research Labs, Start-Up Companies, and the Rise of MOS Technology. Baltimore, Johns Hopkins University Press.

Black, A. (2006). “Information History.” Annual Review of Information Science and Technology 40: 441-473.

Blanchette, J.-F. (2011). “A Material History of Bits.” Journal of the American Society for Information Science and Technology 62(6): 1042-1057.

Brügger, N., Ed. (2010). Web History. New York, Peter Lang.

Campbell-Kelly, M. and W. Aspray (1996). Computer: A History of the Information Machine. New York, NY, Basic Books.

Chun, W. H. K. (2011). Programmed Visions: Software and Memory. Cambridge, MA, MIT Press.

Copeland, B. J. (2013). Turing: Pioneer of the Information Age. New York, NY, Oxford University Press.

Ensmenger, N. (2012). “The Digital Construction of Technology: Rethinking the History of Computers in Society.” Technology and Culture 53(4): 753-776.

Haigh, T. (2001). “Inventing Information Systems: The Systems Men and the Computer, 1950-1968.” Business History Review 75(1): 15-61.

Haigh, T. (2014). “We Have Never Been Digital.” Communications of the ACM 57(9): 24-28.

Haigh, T. (2018). “Thomas Harold (“Tommy”) Flowers: Designer of the Colossus Codebreaking Machines.” IEEE Annals of the History of Computing 40(1): 72-78.

Haigh, T. and M. Priestley (2016). “Where Code Comes From: Architectures of Automatic Control from Babbage to Algol.” Communications of the ACM 59(1): 39-44.

Haigh, T., M. Priestley and C. Rope (2016). ENIAC In Action: Making and Remaking the Modern Computer. Cambridge, MA, MIT Press.

Hecht, G. (2002). “Rupture-talk in the Nuclear Age: Conjugating Colonial Power in Africa.” Social Studies of Science 32(6).

Isaacson, W. (2014). The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution. New York, Simon and Schuster.

Kirsch, A. (2014). “Technology is Taking Over English Departments: The False Promise of the Digital Humanities.” The New Republic.

Kirschenbaum, M. (2007). Mechanisms: New Media and the Forensic Imagination. Cambridge, MA, MIT Press.

Kirschenbaum, M. G. (2016). Track Changes: A Literary History of Word Processing. Cambridge, MA, Harvard University Press.

Kittler, F. A. (1999). Gramophone, Film, Typewriter. Stanford, CA, Stanford University Press.

Kline, R. (2015). The Cybernetics Moment, Or Why We Call Our Age the Information Age. Baltimore, Johns Hopkins University Press.

Kline, R. R. (2006). “Cybernetics, Management Science, and Technology Policy: The Emergence of ‘Information Technology’ as a Keyword, 1948-1985.” Technology and Culture 47(3): 513-535.

Lamont, T. (2018, 23 June). “‘You Can’t Judge a Generation’s Taste’: Making Now That’s What I Call Music.” from https://www.theguardian.com/music/2018/jun/23/generation-making-now-thats-what-i-call-music.

Lecuyer, C. (2006). Making Silicon Valley: Innovation and the Growth of High Tech, 1930-70. Cambridge, MA, MIT Press.

Mahoney, M. S. (1988). “The History of Computing in the History of Technology.” Annals of the History of Computing 10(2): 113-125.

Mindell, D. A. (2002). Between Human and Machine: Feedback, Control, and Computing Before Cybernetics. Baltimore, Johns Hopkins University Press.

Montfort, N. and I. Bogost (2009). Racing the Beam: The Atari Video Computer System. Cambridge, MA, MIT Press.

Mosco, V. (2004). The Digital Sublime: Myth, Power, and Cyberspace. Cambridge, MA, MIT Press.

Nunberg, G. (1997). Farewell to the Information Age. The Future of the Book. Berkeley, University of California Press: 103-138.

Parikka, J. (2011). “Operative Media Archaeology: Wolfgang Ernst’s Materialist Media Diagrammatics.” Theory, Culture & Society 28(5): 52-74.

Randell, B. (1980). The Colossus. A History of Computing in the Twentieth Century. N. Metropolis, J. Howlett and G.-C. Rota. New York, Academic Press: 47-92.

Russell, A. L. (2014). Open Standards and the Digital Age: History, Ideology and Networks. New York, NY, Cambridge University Press.

Schiller, D. (2014). Digital Depression: Information Technology and Economic Crisis. Champaign, IL, University of Illinois Press.

Schröter, J. and A. Böhnke (2004). Analog/Digital – Opposition oder Kontinuum? Zur Theorie und Geschichte einer Unterscheidung. Bielefeld, Germany, Transkript.

Shannon, C. E. and W. Weaver (1949). The Mathematical Theory of Communication. Urbana, University of Illinois Press.

Soni, J. and R. Goodman (2017). A Mind at Play: How Claude Shannon Invented the Information Age. New York, NY, Simon & Schuster.

Thackray, A., D. Brock and R. Jones (2015). Moore’s Law: The Life of Gordon Moore, Silicon Valley’s Quiet Revolutionary. New York, Basic Books.

von Neumann, J. (1993). “First Draft of a Report on the EDVAC.” IEEE Annals of the History of Computing 15(4): 27-75.

Williams, M. R. (2000). A Preview of Things to Come: Some Remarks on the First Generation of Computers. The First Computers: History and Architectures. R. Rojas and U. Hashagen. Cambridge, MA, MIT Press: 1-16.