Seeing what happens when you collide the humanities with the digital
James is a software developer and self-published author. He received his B.S. in Math and Physics and his M.A. in English from Texas A&M University. After spending almost two decades in academia, he now works in the Washington, DC, start-up world.
I’ve always been interested in photography. As a kid, I’d take my dad’s film SLR out for a spin every once in a while. I also spent some time with another film SLR in the early 1990s, and another in the early 2000s. Now, I’m picking up a DSLR and getting back into the hobby.
On its face, photography is no more difficult than writing. If enough monkeys bang on enough typewriters for enough time, something interesting will emerge. The difficulty comes in limiting how much effort goes into producing each interesting thing. I’m approaching photography the way I approach writing: practice, practice, practice, plus a bit of editing and getting feedback from others. Continue Reading: Returning to Photography
It doesn’t seem like it’s been over four years since I joined MITH and started working with Project Bamboo. Just because I’ve moved on to a startup and the project’s been mothballed doesn’t mean we can’t mine what was done.
The problems with Project Bamboo are numerous and documented in several places. One of the fundamental mistakes made early on was the waterfall approach of designing and developing an enterprise-style workspace that would encompass all humanities research activities, rather than producing an agile environment that leveraged existing standards and tools. Top down rather than bottom up.

However, the idea that digital humanities projects share some common issues and could take advantage of shared solutions is important. This is part of the reporting aspect of research: when we learn something new, we report not only the new knowledge but how we got there, to help someone else do similar work with different data. If we discover a way to distinguish between two authors in a text, we publish not only what we think each author wrote but the method by which we made that determination. Someone else can then apply that same method to a different text. Continue Reading: Algorithmic Provenance of Data
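As a toy illustration of the kind of reusable method described above, here is a sketch of one classic authorship signal: comparing the relative frequencies of common function words, which tend to vary between authors independently of topic. The word list, function names, and sample texts are all hypothetical choices for the sketch, not a method from any particular project.

```python
from collections import Counter

# Common function words; their relative frequencies are a classic,
# topic-independent stylometric signal.
FUNCTION_WORDS = ["the", "of", "and", "to", "in", "a", "that", "it"]

def function_word_profile(text):
    """Return the share of each function word among all tokens."""
    tokens = text.lower().split()
    counts = Counter(tokens)
    total = len(tokens) or 1
    return [counts[w] / total for w in FUNCTION_WORDS]

def profile_distance(a, b):
    """L1 distance between two profiles; smaller suggests closer style."""
    return sum(abs(x - y) for x, y in zip(a, b))

sample_a = "the cat sat on the mat and the dog sat in the hall"
sample_b = "a storm came to the coast and it broke a ship in two"
d = profile_distance(function_word_profile(sample_a),
                     function_word_profile(sample_b))
```

Publishing the method (the word list, the profile, the distance) alongside the attribution is what lets someone else run the same determination on a different text.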
As I start thinking about what should go into the core of a linked open code schema, I’m tempted to put a lot of high-level operations into the core so they run faster. History tells us that’s the wrong way to go.
I’ve been working off and on over the last six months on a programming language that sits on top of linked open data. Think of it as linked open code.
von Neumann Architecture
Before von Neumann made his observations about code and data, computers typically had some memory dedicated to code, and other memory dedicated to data. The processing unit might have a bus for each, so code and data didn’t have to compete for processor attention.
This was great if you were able to dedicate your machine to particular types of problems and knew how much data or code you would typically need.

Von Neumann questioned this assumption. Why should memory treat code and data as different things when they’re all just sets of bits? Continue Reading: Linked Open Code
The RDF equivalent of “If you can’t say anything nice, don’t say anything at all” is “If you can’t assert something, then don’t assert anything at all.”
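The maxim above reflects RDF’s open-world assumption: a statement absent from a graph is unknown, not false. A minimal sketch, using plain tuples as a stand-in for RDF triples (the subjects and predicates below are illustrative, not a real vocabulary):

```python
# Triples as (subject, predicate, object) tuples -- a minimal stand-in
# for an RDF graph.
graph = {
    ("alice", "knows", "bob"),
    ("bob", "authorOf", "book1"),
}

def answer(triple):
    """Under the open-world assumption, a missing triple is *unknown*,
    not false: nothing has been asserted about it either way."""
    return "asserted" if triple in graph else "unknown"
```

A query for `("alice", "knows", "carol")` comes back "unknown" rather than "false"; the graph simply didn’t say anything at all.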
I’m building a language designed to be natural with linked data just as programs today are natural with local memory. The result is highly functional and data-relative in nature and reminds me of how XSLT works relative to XML nodes (e.g., a current node, child nodes, ancestor nodes). I have quite a few tests passing for the parser and core engine, so now I’m branching out into libraries and seeing what I can do with all of the pieces.
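The XSLT-style node-relative evaluation mentioned above can be sketched with a tiny tree type that keeps parent links, so expressions can be answered relative to a "current node" (its children, its ancestors). The class and names here are a hypothetical illustration, not the language’s actual data model.

```python
class Node:
    """A tree node with a parent link, so navigation can be expressed
    relative to a current node, as in XSLT."""
    def __init__(self, name, children=None):
        self.name = name
        self.parent = None
        self.children = children or []
        for child in self.children:
            child.parent = self

    def ancestors(self):
        """Yield ancestors from the parent up to the root."""
        node = self.parent
        while node is not None:
            yield node
            node = node.parent

# Build a tiny tree and navigate relative to a current node.
leaf = Node("leaf")
tree = Node("root", [Node("branch", [leaf])])

ancestor_names = [n.name for n in leaf.ancestors()]  # ["branch", "root"]
```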
A few months ago, I accepted a job outside the academy. This doesn’t mean that I’m abandoning digital humanities. In this post, I lay out what I want to do in DH going forward. The common thread through all of it is that I believe linked open data is the best way to break down the silo walls that keep digital humanities projects from sharing and building on existing data. Continue Reading: My DH Agenda
It’s that time of year again, when aspiring novelists around the world write a novel in a month. I skipped last year because I was continuing to work on a novel I started in the 2011 event. I haven’t finished it yet (I’m editing the first 70,000 words before moving on to the second half), but I wanted to take advantage of NaNoWriMo to start another novel. I’m too slow a writer to finish one before I start another.
Last month, I attended the Google Summer of Code mentor meetup and picked up a nice notebook as one of the giveaways. I’ve always done my writing on a computer, but this time I figured I’d try to write my novel longhand.
If we took stock of everything that we know and compared it to what we don’t know, we’d find that we know a lot about almost nothing.[1] As we explore new things, we need tools that give us an idea of what we’re working with even when we don’t know what it is. In textual scholarship, we like to do close readings: understanding all the nuances of a text word by word so that we can tease out almost hidden meanings that rely on us understanding the text as well as its context.[2] Sometimes, we don’t have a text or a context, but only the effect of the text upon an audience. Or, to put it in more practical terms, we can’t tell what goes on inside an author’s mind, but we do have the resulting text. What can we learn about that mind from the text it produces?
[1] In statistics, saying that something happens “almost never” and saying that it “has zero probability” are pretty much the same thing. If we counted all the things that we know and divided that by the number of things that we don’t know, the result would be almost zero. It is ironic that the more we study, the closer the ratio gets to zero.