Miriam Posner's post, "Some things to think about before you exhort everyone to code," has touched off a series of conversations on Twitter and elsewhere. My own feeling is that she's nailing some things square on the head and, fortunately, doesn't conclude by saying that we should banish coding from the digital humanities. We just need to be careful how we cast the need for coding.
I've tossed around a nugget in my mind for the last few weeks, and Miriam's post is making me focus more intently on it: a digital humanist afraid of the digital is like a scholar of French literature who is afraid of French. You can't be a digital humanist if you don't understand the digital. That doesn't mean you have to be able to code, any more than being a scholar of French literature means you have to be able to write French literature. You just have to be able to understand the nuances of what you're studying and how you are studying it. Otherwise, how can you properly interpret the results?
I am squarely in the demographic that Miriam points out has had all the advantages in computing. I grew up white, middle class, and male. My family got its first computer when I was around ten or so. I learned BASIC, TMS9900 assembly, FORTH, Pascal, C, and Fortran before I graduated from high school. I was on the Internet before there was a web. I experienced UNIX, VMS, the Mac, and DOS before there was Windows. I lusted after a NeXT when I went to college because it came with a C compiler that let me control what I ran instead of relying on what someone else thought might sell. It also shipped with Mathematica.
I have an itch to write novels. I started trying to write about the same time that I started learning to program. I spent more time programming than writing, so my programming skills advanced while my writing skills didn't. I ended up getting a masters in creative writing because that's where I needed help. I didn't need help with the computer.
When I was in school, I'd overhear computer science majors complaining about all the non-programming classes they had to take. I also heard them wondering why their UDP-based protocol failed when the network became congested (UDP makes no delivery guarantees, so congested routers simply drop its packets). Many undergraduate computer science majors just wanted to learn how to code. They thought that as long as they could write programs, they'd be happy. They didn't need to know how algorithms worked. They didn't need any of that complicated math. Nothing as crazy as symbolic algebra. They didn't realize the career they wanted revolved around symbols. "Just let us write!" they would cry.
What I have learned over the years is that programming and writing differ in one important aspect: programming scales differently than literature. This one difference explains much of the confusion humanists have about programming, and it's something programmers must learn if they are to rise above being just programmers.
Literature scholars are used to literature. Books are singular objects that are self-contained. References to other books are in passing. Legal and respectable books aren't made by combining libraries of materials, cutting and pasting chapters from a wide range of sources.
That's how programming works, though. You want to minimize the amount of repetition. You don't want to cut and paste code. You want to share code across projects. To do otherwise is to increase bugs and decrease productivity.
The problem is that you can't think about digital projects in the same way you think about non-digital projects. You have to look for areas of commonality and split those out into their own projects. You have to know where to draw the lines across which the parts interact. You have to understand the limits of the computer as well as the strengths.
None of this thinking is coding. It doesn't matter what language the developer uses to write the program, you still need to draw the right lines. You still need to find the common elements and split them off into their own libraries. You need to understand the organizing principles at play and how they impact the project.
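To make the principle concrete, here is a minimal, hypothetical sketch (the function and project names are my own invention, not from any real project): two projects that both need to normalize titles can share one helper instead of each carrying a pasted copy, so a bug fixed in the helper is fixed everywhere at once.

```python
# Shared library code: one place to fix normalization bugs.
def normalize_title(title: str) -> str:
    """Lowercase, trim, and collapse internal whitespace."""
    return " ".join(title.lower().split())

# Project A: deduplicating a catalog of titles.
def deduplicate(titles):
    seen = set()
    unique = []
    for t in titles:
        key = normalize_title(t)
        if key not in seen:
            seen.add(key)
            unique.append(t)
    return unique

# Project B: matching a user's query against the same catalog.
def matches(query, titles):
    key = normalize_title(query)
    return [t for t in titles if normalize_title(t) == key]
```

Had each project pasted its own normalization code, the two copies would drift apart over time; drawing the line around `normalize_title` is the kind of structural decision the paragraph above describes, and it happens before any code is written.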
In literature, it's not the person with the largest vocabulary who writes the best books, but the best storyteller. You can be the best coder in the world, but if you choose the wrong structure, you'll produce a worse program than a poor coder who selects the right structure.
My feeling on coding in the humanities is that we shouldn't run from it, but neither should we make it the holy grail. While we need to recognize that it is important to make things, it's more important to understand how things should be made. Just as we wouldn't drop the requirement to learn French if there were a gender inequity among scholars of French literature, we shouldn't remove the digital from the digital humanities. Instead, we need to find ways to increase equity and diversity by looking at the entire pipeline feeding into the digital humanities and seeing where we can intervene. At the same time, it wouldn't hurt to emphasize the critical thinking skills which, I think, are more widespread and accessible in the humanities than coding alone.
There are some worrying signs, though. In the reviews for a poster session for DH 2012, one reviewer worried that the proposed poster would be too technical, and thus the attendees wouldn't understand it, so the reviewer recommended that it not go forward. I'm worried when proposals are rejected because they might be too advanced for the attendees. That's not how a field moves forward.
Unlike the sciences, where references are used only when needed to provide evidence for an assertion, the humanities seem to see references as a way to give props or shout-outs in publications. Apparently, the proposal had too few references for the tastes of another reviewer. Kind of like the emperor telling Mozart he had too many notes: no specific notes needed to be removed; any would do. Likewise, there wasn't anything in particular that needed referencing. There were just too few references. Apparently, sprinkling a few useless references into the proposal would have elevated it to believability.
The third comment that was bothersome was that the proposal focused on method instead of conclusions. It would seem that we shouldn't bother trying to improve how we do our work.
The other comments were constructive and useful, so I'm glad for them. These three were the ones that worried me. They reflect poor support for critical thinking in the digital humanities. If this were the first time I had seen these kinds of comments, I would be irked, but nothing more. Unfortunately, I've come to expect these kinds of comments from DH conference reviewers. I've tried to respond in subsequent proposals, but at this point, I don't see where doing so would provide any value to the field.
Instead of pushing coding, let's push critical thinking. How do we structure our projects? How do we build projects that can share code and data with other projects? How do we build things that others find compelling? How do we influence the world? Let's elevate the field to the point where it begins leading a community outside of the academy. That won't happen if all we do is write code or ignore the digital aspects of our work.