Four Ways Women Made It Easy for You to Code

This is one of the essays I delivered to my patrons last month. If you want to support more work like this, and see it earlier, you can sign up here.

Computer programming is one of those fascinating fields in which we got to watch work become less pink collar over time. It started as women’s work because the prestige was thought to be in hardware engineering, not “computing”, which was really just dressed-up math. (Yes, World War II required governments to recognize the math skills of their female citizens, just as it required the U.S. to recognize the skills of its black citizens.)

Then, as women developed the field of programming, the private sector started to understand just how much work it would be possible to get computers to do. Programmers gained status and pay and–over the course of a couple of decades–the idea that the work should be done by men. Women have always continued to program, particularly in government service, but they came to be seen as anomalies instead of the people who defined the field.

Before that could happen, however, women led the way to making programming practical and accessible. In honor of Ada Lovelace Day, here are four ways they did it. And no, Ada Lovelace isn’t even on the list, as awesome as she was, because it’s easy these days to find out more about her.

ENIAC and Iteration

Before there were programming languages worthy of the name, there was assembly language, which was essentially a set of instructions to a computer telling it which switches to flip. Before assembly language, however, programming involved the flipping of physical switches. Debugging at this point also involved testing vacuum tubes and wiring connections to find flaws in hardware.

[Photo: Two women, one standing, one crouching, in front of an array of panels of switches and cables. Lichterman and Wescoff, U.S. Army photo.]

This was, needless to say, grueling work that some of today’s coders would be physically unable to do and many more would be unwilling to do. Of course it fell to women. The original programming crew of ENIAC, the first computer designed to be able to do an array of tasks, was Fran Bilas, Betty Jennings, Ruth Lichterman, Kay McNulty, Betty Snyder, and Marlyn Wescoff.

These women were “computers” themselves, women who had spent the war running calculations by hand, and were now working to put themselves out of business by automating their jobs. Instead, they built a whole new field and industry from scratch. That wasn’t easy, and not just because they were doing everything for the first time.

Every application required a massive amount of physical work. That, in turn, provided an excellent incentive to streamline the work, as did the physical limitations of the computer. The women, who also converted the calculations needed into algorithms and programs, did just that. The ENIAC team is one of our first sources of programmatic subroutines and loops, standard programming structures to this day.
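To make that payoff concrete, here is a minimal sketch in modern Python (my own illustration, nothing the ENIAC team ever wrote; the function name sum_of_squares is invented for the example): the same calculation spelled out step by step, and then expressed once as a loop inside a subroutine.

    # Long-hand: every step written out, the way early set-ups had to be wired.
    total = 0
    total = total + 1 * 1
    total = total + 2 * 2
    total = total + 3 * 3

    # With a loop and a subroutine, the work is described once and reused.
    def sum_of_squares(n):
        """Add up the squares of 1 through n."""
        result = 0
        for i in range(1, n + 1):
            result = result + i * i
        return result

    assert sum_of_squares(3) == total  # same answer, far less repetition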

The next time you write a loop instead of figuring out how to build that functionality long-hand, thank the women of ENIAC.

Read more: “Walter Isaacson on the women of ENIAC”, “Finding the forgotten women who programmed the world’s first electronic computer”

Something Like Language

If you’ve ever looked at assembly language, it’s ugly. More importantly, however, it’s only sort of a language. It is a language in the sense that arrangements of letters mean the same thing across uses. Some “words” are even built on the same language root, though many are arbitrary. On the other hand, space and assembler limitations make assembly language annoyingly unlike the natural languages we use every day.

That causes problems for programmers. We can’t transfer many of our language skills (spotting typos, treating whole words as “chunks” of meaning) to assembly language until we are familiar enough with it to start to see it as its own language instead of continually translating from ours. While I know a couple of people who have programmed in assembly language for fun, most programmers consider this fiddly, exhausting work.
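To give a feel for the gap, here is a small, made-up contrast. The commented-out lines are illustrative pseudo-mnemonics in the general flavor of assembly, not any particular instruction set, and the variable names are mine; the point is only how much closer the high-level version sits to the sentence you would say out loud.

    # Illustrative pseudo-assembly (not a real instruction set):
    #   LD   R1, price
    #   LD   R2, tax_rate
    #   MUL  R3, R1, R2
    #   ADD  R4, R1, R3
    #   ST   R4, total
    #
    # The high-level equivalent of the same five instructions:
    price = 100.0
    tax_rate = 0.07
    total = price + price * tax_rate  # price plus tax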

Luckily, the vast majority of us will never have to program in assembly language. Admiral Grace Hopper took care of that for us while working on the development of UNIVAC.

When we see Hopper mentioned as part of efforts to celebrate women in computing, we most often hear the story of how she dubbed computer glitches “bugs” after a moth found in a machine that was having problems. That’s a pity, both because the story is apocryphal and because Hopper is responsible for so much more.

Hopper did coin a common computing word, but it wasn’t “bug”. It was “compiler”. She had naming rights by virtue of originating the concept and building the first compiler. Lest you underestimate its importance, compilers are what allowed us to write in the same programming language for multiple machine designs. The language that first compiler translated wasn’t anything we would recognize as a modern programming language, but that was only a few short steps away.
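To see what “compiler” means at its simplest, here is a toy sketch of my own (it is not A-0 or any real compiler, and machine_A and machine_B are invented names): one source-level intent gets translated into instructions for two differently named instruction sets, which is the trick that freed programs from any single machine design.

    # A toy "compiler": translate "dest = left + right" into instructions
    # for two imaginary machines whose instruction names differ.
    INSTRUCTION_SETS = {
        "machine_A": {"load": "LOAD", "add": "ADD", "store": "STORE"},
        "machine_B": {"load": "MOV", "add": "SUM", "store": "PUT"},
    }

    def compile_add(dest, left, right, target):
        """Emit instructions computing dest = left + right for one target machine."""
        ops = INSTRUCTION_SETS[target]
        return [
            f"{ops['load']} R1, {left}",
            f"{ops['load']} R2, {right}",
            f"{ops['add']} R3, R1, R2",
            f"{ops['store']} {dest}, R3",
        ]

    # The same source-level statement, compiled for two different machines.
    print(compile_add("total", "price", "tax", "machine_A"))
    print(compile_add("total", "price", "tax", "machine_B"))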

Grace Hopper’s goal was not to program with her translatable mathematical language but to develop programming languages that were as close to plain English as possible. Through the 1950s, she led a team that created MATH-MATIC and FLOW-MATIC. When FLOW-MATIC became the main base on which COBOL was built, Hopper consulted on the development of the language that would go on to be so influential that the U.S. had to import COBOL programmers as it scrambled to fix Y2K bugs in time.

If you enjoy using real words and descriptive variable names when you program, thank Grace Hopper.
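As a small, hypothetical illustration of what that inheritance buys us (the names and the interest formula here are mine, not Hopper’s), compare a terse, machine-era style of naming with the plain-words style she argued for:

    # Terse names that only the original author can decode:
    def f(p, r, n):
        return p * (1 + r) ** n

    # Real words and descriptive names, in the spirit Hopper pushed for:
    def compound_balance(principal, annual_rate, years):
        """Balance after interest compounds once per year."""
        return principal * (1 + annual_rate) ** years

    assert f(1000, 0.05, 10) == compound_balance(1000, 0.05, 10)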

Read more: Check out the several biographies listed by the National Women’s History Museum.

RTFM

Could we all just learn programming via trial and error? Maybe, but not very well. Could we turn programming into a modern guild, with knowledge passed down through apprenticeship? Nope. At least, we couldn’t do that and still produce the number of coders needed to maintain a modern software industry. At some point, we had to write all this down.

Luckily, with programming we started early. In fact, the first programming manual was written as the first real programming occurred.

Adele Goldstine was married to ENIAC engineer Herman Goldstine. A mathematician in her own right, she was among those “computers” trained in the trajectory calculations that would be one of ENIAC’s primary tasks. With that experience under her belt, she was tasked with training the crew who would operate the machine full time. Stories from the women she trained say she did so with aplomb and a cigarette in her mouth.

Then she created a manual. In 1946, the U.S. Army published (for internal use only, I’m sure) the first operator’s manual for the first flexible-purpose computer. The manual was written by Goldstine. Because the programming of ENIAC was done physically on the machine itself, that also makes this the first programming manual.

Early ENIAC stories involve those early programmers buttonholing engineers to learn the intricacies of various parts of the machine. With Goldstine’s work, they and subsequent programming teams were able to stop doing that and get on with programming on their own schedules, rather than the engineers’.

These days, formal technical writing is often undervalued, perhaps because the field has a much higher percentage of women working in it than other technical fields. In practice, however, programmers who want to avoid reinventing the wheel tend to be highly reliant on documentation.

The next time you resolve your coding problem with documentation created by another programmer, thank Adele Goldstine.

Read more: The Engineering and Technology History Wiki has a nice bio of Goldstine, whose work tends to be neglected because she died young, before oral histories of these programs were collected.

A Booming Business

The military was the first main consumer of computer programming. Business and scientific concerns entered the field very shortly thereafter, as they did with much of the technology developed during WWII. The consumer market for software didn’t develop until the 1970s.

The consumer market was important, however, in that it significantly broadened the software market by changing how people used computers. It’s taken several decades, but we’ve recently reached a point where the games industry rivals or beats the movie industry in income.

Then we have the internet and the way the internet has enabled the data age. What do people use their computers (including their smart phones) for these days? Everything. Keeping track of everything. Sharing everything. All the everything.

That means everything has to be coded. Barring mismanagement, that means computer programmers getting paid. And as fun as it can be to code as a hobby, nothing enables coding like getting paid to spend a significant chunk of your life at it.

“Dot-com” jobs aren’t necessarily stable, and we may be on the brink of another collapse as investors try to figure out how to cash in on their enthusiasm, but even with contraction, the internet has enabled significant growth in coding over the last couple of decades, even before we talk about how it has enabled open source collaboration. And a good chunk of the conceptual work that makes the internet work was done by Radia Perlman.

[Photo: Radia Perlman standing behind a podium with her face projected on a screen beside her. “Radia Perlman” by Jalisco Campus Party, CC BY 2.0.]

Perlman started her work decades after the women already mentioned here, in a time when most women had already been pushed out of programming. In the late 1970s, she left her graduate program in math to program. A few years later, she was an architect working in the early development of networking protocols, the processes by which data is accurately and efficiently transferred between linked computers.

It was during this time that she developed the Spanning Tree Protocol (STP), which governs the creation of paths in a complex local network. More importantly for the broader development of the internet, in 1992 she published Interconnections: Bridges and Routers. This was the book on network protocols, and it brought clarity to a field Perlman has described as “really murky, full of jargon and hype”.
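The core idea behind STP is the spanning tree itself: keep just enough links to reach every switch, so traffic never circulates in a loop. The sketch below is my own minimal illustration of that graph concept in Python; it is not Perlman’s algorithm, which bridges run distributively by exchanging protocol messages and electing a root by bridge ID.

    # Build a loop-free spanning tree of a small network by breadth-first search,
    # keeping only the link used to first reach each switch.
    from collections import deque

    links = {
        "A": ["B", "C"],
        "B": ["A", "C", "D"],
        "C": ["A", "B", "D"],
        "D": ["B", "C"],
    }

    def spanning_tree(graph, root):
        """Return the set of links kept in a loop-free tree rooted at root."""
        kept, visited = set(), {root}
        queue = deque([root])
        while queue:
            node = queue.popleft()
            for neighbor in graph[node]:
                if neighbor not in visited:
                    visited.add(neighbor)
                    kept.add((node, neighbor))
                    queue.append(neighbor)
        return kept

    print(spanning_tree(links, "A"))  # e.g. {('A', 'B'), ('A', 'C'), ('B', 'D')}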

When you do coding work facilitated by the internet or that depends on the existence of the internet, thank Radia Perlman.

Read more: “Radia Perlman: Don’t Call Me the Mother of the Internet”, “The Many Sides of Radia Perlman”

As you can see from the links, Perlman doesn’t embrace the title “the mother of the internet”. Though she doesn’t downplay her own skills, she recognizes how much being a pioneer depends on time and place. If she hadn’t developed STP, someone else would have, although there may have been differences.

This is true for all the women featured here. Grace Hopper was working on subroutines at the same time the ENIAC programmers were. Other people were developing languages that would require functional compilers when Hopper completed hers. Documentation was an inevitability when academia and the military joined forces (though not guaranteed to be good).

The same is true, however, for the men we credit with being pioneers of computing. We don’t dispute that we owe them credit and thanks for their work. Let’s not do that to women either. Ada Lovelace was only the first of many women to critically shape programming for the better.

Want to see more work like this? Support me on Patreon.


9 thoughts on “Four Ways Women Made It Easy for You to Code”

  1. Comment #1

    In defense of Assembly language

    >Before there were programming languages worthy of the name, there was assembly language, which was essentially a set of instructions to a computer telling it which switches to flip.

    In modern assembly languages the above is almost defamation using a strawman argument.

    >If you’ve ever looked at assembly language, it’s ugly. More importantly, however, it’s only sort of a language. It is a language in the sense that arrangements of letters mean the same thing across uses. Some “words” are even built on the same language root, though many are arbitrary. On the other hand, space and assembler limitations make assembly language annoyingly unlike the natural languages we use every day.

    A fragment of my ARM Cortex-M4 ugly assembly code. (In C terms it’s the switch part of a switch/case.)

    cmp r6,#17
    bge badSvc ;die hard (should we kill task?)
    tbb [pc,r6]
    jbytes dc8 (svc0 -jbytes)/2 ; 0 "rcl" force task switch
    dc8 (svc1 -jbytes)/2 ; 1 suspendMe()
    dc8 (svc2 -jbytes)/2 ; 2 wait r0 milliseconds

    It *is* “unlike the natural languages we use every day.” But it’s the native language of a well under $10 microcontroller. Software engineers use that “only sort of a language” when speed and space are important. Writing an svc handler for an RTOS in C would make the code more than twice the size and less than half the speed. And the C code would have to find a way of saving registers not already pushed on the stack by the svc instruction. In assembly that saving is 4 instructions.

    >Luckily, the vast majority of us will never have to program in assembly language.

    And that is a pity. Or perhaps a job opportunity for me.
    (Post did not preview well.)

  2. Comment #3

    In modern assembly languages the above is almost defamation using a strawman argument.

    What in the ever loving hell are you talking about? What is so much better about “modern” assembly languages than “old” assembly languages? I suppose you could argue that some CPU architectures have more powerful instructions these days (i.e. fewer instructions to produce more well-arranged flipped switches), but that has nothing to do with assembly language itself; that’s due to the CPU. I suppose the ability to add long “human language” comments to each line of assembly was a good enhancement, but that’s not a modern development.

    It *is* “unlike the natural languages we use every day.” But it’s the native language of a well under $10 microcontroller. Software engineers use that “only sort of a language” when speed and space are important. Writing an svc handler for an RTOS in C would make the code more than twice the size and less than half the speed. And the C code would have to find a way of saving registers not already pushed on the stack by the svc instruction. In assembly that saving is 4 instructions.

    Assembly language is also the native language of a 5 cent microcontroller and a multi thousand dollar Intel Xeon CPU. In other breaking news, the blue sky is blue. The OP doesn’t state that assembly language is completely useless these days, just that it isn’t a great language for humans to understand.

    Try this analogy:

    I could communicate with you by writing individual letters on a dry erase cue card, one at a time, erasing the last before writing the next. If we’re sitting together in a quiet room trying to discuss politics, not only would this be annoying, it would be incredibly inefficient. But, in the rare case that we’re in opposite corners of a crowded and noisy conference hall and you wanted to let me know some small, important piece of information, the dry erase cue card approach would work well. So, yeah, assembly is great at optimizing instruction space and in tight timing situations (say in a small device for a real time application, or an ISR), but you wouldn’t sit down in front of your text editor to write assembly for an email client web GUI or social networking mobile app. Well, you could, and I have a bridge to sell whoever is willing to pay you to do that.

    And that is a pity. Or perhaps a job opportunity for me.

    Not too many jobs working on compiler optimization, and only a few OS/systems level companies out there. There’s the realtime/embedded market which still employs “decent” numbers of people. All in all, though, it seems a rather narrow skill these days, at least in software development employment terms. I’ve done ARM, Atmel, PIC, 8051, 8086 assembly programming, and learning how/what a C compiler does, and using inline assembly when you have to, will save you a ton of time, even on very small devices with very limited resources. Getting shit done = very employable. Being a tool/method/language snob = first one out during the next RIF.

  3. Comment #4

    Luckily, the vast majority of us will never have to program in assembly language.

    Kids these days don’t know how lucky they are. I remember this one time I tried programming in machine language. The code was OK, but the carriage returns were killing me. By comparison, switching to assembly language was a picnic in the park.

  4. AMM (comment #5)

    A few random thoughts:

    1. Did they actually use what we now think of as “assembly language” back in 1951? I.e., text files with symbolic names for addresses, instructions, and blocks of memory, and a translator to binary machine code? I wasn’t around back then, but I’d always had the impression they coded stuff up as binary numbers on paper tape (or cards?). FWIW, I remember doing that for my first attempt to write a program back in 1965 or so, though it was entirely for the fun of doing so; nobody who was actually trying to get work done wrote machine code by then.

    2. I looked up “A-0”, and, as you say, it was one small step towards programming languages as we know them. However, one should not scoff at small steps. It is a myth that science and technology advance by breakthroughs; every so-called breakthrough is but the final (or maybe intermediate) step in a long series of small steps, and Grace Hopper certainly deserves credit for starting us on the path to, say, g++. Besides, I doubt that a program written in C++ or Java (or whatever your favorite high-level language is) could be compiled into something that would have fit on the computers they had back then. Modern compilers certainly wouldn’t. A-0 was probably what we’d call “appropriate technology.”

  5. Maya (comment #6)

    OP: Nice article. While reading through one of Admiral Hopper’s papers from the time, I noticed that she credited the women who came up with the generator subroutines (subroutines that generated new subroutines based on parameters) that were included in the A-series compilers subroutine library. Other computer sites usually presented their subroutine libraries as just kind of popping into existence.

    AMM@5: I’m not a computer historian, but I have spent some time digging through the archives at bitsavers.org.

    Did they actually use what we now think of as “assembly language” back in 1951? I.e., text files with symbolic names for addresses, instructions, and blocks of memory, and a translator to binary machine code?

    It depended on what they did at that particular computer installation. The MIT Whirlwind 1 computer installation used a conversion program to translate paper tapes containing mnemonic instruction codes and decimal numbers to binary and to resolve relative addressing. The conversion program also seemed to do some basic meta-programming. For example, “DITTO-THRU” was used to set a range of registers to the same value. The typewritten program listing appears similar to modern assembly, though instead of textual labels, the suggestion for Whirlwind 1 programmers was to label the entry point or jump point with the address of the instruction word that it was going to jump from, with the exception of the starting instruction, which was labeled “start”.

    The equivalents of modern labels and variable names were tags, which at some point between 1951 and 1955 evolved from being simply numerical values to being alphanumeric.

    Macro-instructions and immediate addressing (here called literal addressing) appeared later with the IBM Autocoder assemblers, the first of which was for the IBM 702 in 1955. Of course, the primary input method to the IBM 702 was a card reader, so again, you would only have the “text file” view of the program from the typewritten listing, if there was one.

    AMM, which machine were you programming at the time? Were you using a manual punch or a teletype to do the coding to tape or card? I’m always interested in stories of how things were.

  6. AMM (comment #7)

    Maya @6:

    which machine were you programming at the time? Were you using a manual punch or a teletype to do the coding to tape or card?

    IBM 1620 (a decimal computer, i.e., each memory location was a single BCD digit, plus a “flag” bit that was set to indicate the high-order digit of a number.)

    It used 80-column (Hollerith) punch cards, which we punched on a keypunch. Output was also usually via cards, since the only printing device was a heavily-modified IBM Executive typewriter; there was a separate patch-board programmed device that would print reports from the punch cards. I believe there were paper tape readers and punches for this model of computer, but the 1620’s I had access to didn’t have them.

    The computer itself was a large console, the size of a large desk for an executive or of a small conference table. The card reader/punch was a separate free-standing appliance, as was the additional memory (if you wanted more than 20,000 decimal digits of memory.)

    They could be programmed in assembler (one instruction or pseudo-instruction per punch card) or a higher-level language like Fortran or Cobol. I never had contact with Cobol, but compiling Fortran required running the deck twice through the card reader; on the second pass, an object deck would be punched (which included a bootstrap program and the addition and multiplication tables at the beginning.) So there ended up being at least 4 passes: one to read in the compiler, two to compile, and one more to read in the compiled program and any data.

  7. Comment #8

    AMM, the IBM 1620 was my first real machine. As a student (chemistry) I programmed the IBM 1620 in machine language with “self-loading” punch cards – one instruction per card. Prior to that I learned an artificial machine designed for tutorial purposes. TUTAC, I think it was called. More for fun than anything else, I wrote a TUTAC simulator using those self-loading cards.

    The college computer center director had written his own simulator that replaced TUTAC instructions with 1620 subroutine calls. But after chapter 12, instruction modification was introduced, and his no longer worked. Mine did.

    There were political issues.

    I also wrote an IBM 1620 trace. One card was punched for each simulated instruction. Years later, I used it to find a bug in a 1620 simulator. (Remainder on divide was incorrect.)

    Does anyone know why the 1620 was called CADET?

  8. Comment #9

    I appreciate your article. I was familiar with all of the history you mentioned with the exception of the contributions of Radia Perlman to the computer industry. I knew about STP as an early implementor of an IEEE 802.3 network but somehow learned nothing about who made that technology useful. I’m going to take a couple of hours to learn more about her contributions.

    I started programming on a Teletype model ASR 33 with paper-tape reader/punch as a high-school sophomore in 1976. My freshman year of college included programming a microprocessor by toggling switches on the front-panel. Ahh, the good old days when everything was better than it is now 🙂

Comments are closed.