James is a software developer and self-published author. He received his B.S. in Math and Physics and his M.A. in English from Texas A&M University. After spending almost two decades in academia, he now works in the Washington, DC, start-up world.
I've been working off and on over the last six months on a programming language that sits on top of linked open data. Think of it as linked open code.
von Neumann Architecture
Before von Neumann made his observations about code and data, computers typically had some memory dedicated to code, and other memory dedicated to data. The processing unit might have a bus for each, so code and data didn't have to compete for processor attention.
This was great if you were able to dedicate your machine to particular types of problems and knew how much data or code you would typically need.
Von Neumann questioned this assumption. Why should memory treat code and data as different things when they're all just sets of bits?
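To make the stored-program idea concrete, here's a toy sketch of my own (not any particular historical machine): a single flat memory holds both the instructions and the numbers they operate on, and the processor simply walks through it.

```python
# Toy illustration of the stored-program idea: one flat memory holds both
# the "program" (instruction tuples) and the "data" (plain numbers).
memory = [
    ("load", 6),    # 0: load the value stored at address 6
    ("add", 7),     # 1: add the value stored at address 7
    ("store", 8),   # 2: write the result to address 8
    ("print", 8),   # 3: print whatever is at address 8
    ("halt", None), # 4: stop
    None,           # 5: unused
    2,              # 6: data
    3,              # 7: data
    None,           # 8: the result ends up here
]

acc = 0  # accumulator register
pc = 0   # program counter
while True:
    op, addr = memory[pc]
    pc += 1
    if op == "load":
        acc = memory[addr]
    elif op == "add":
        acc += memory[addr]
    elif op == "store":
        memory[addr] = acc
    elif op == "print":
        print(memory[addr])  # prints 5
    elif op == "halt":
        break
```

Because instructions and data live in the same memory, nothing stops a program from reading or even rewriting its own instructions, which is exactly the flexibility the earlier split designs gave up.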
I'm building a language designed to be natural with linked data just as programs today are natural with local memory. The result is highly functional and data-relative in nature and reminds me of how XSLT works relative to XML nodes (e.g., a current node, child nodes, ancestor nodes). I have quite a few tests passing for the parser and core engine, so now I'm branching out into libraries and seeing what I can do with all of the pieces.
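The language itself isn't shown here, but to give a flavor of what I mean by "data-relative," here's a rough Python sketch; the graph, property names, and Cursor helper are all made up for illustration. Every operation is phrased relative to a current node, with children reached by following properties and ancestors being the nodes you came through, much like XSLT's current node.

```python
# A made-up graph of linked data: subject -> {property -> object}.
graph = {
    "book:frankenstein": {"dc:creator": "person:mshelley", "dc:title": "Frankenstein"},
    "person:mshelley": {"foaf:name": "Mary Shelley"},
}

class Cursor:
    """A 'current node' plus the path taken to reach it."""
    def __init__(self, node, parent=None):
        self.node, self.parent = node, parent

    def child(self, prop):
        # Follow a property from the current node; the new cursor remembers
        # where it came from, so ancestor questions still make sense.
        return Cursor(graph[self.node][prop], parent=self)

    def value(self, prop):
        # Look up a property value on the current node.
        return graph[self.node][prop]

    def ancestors(self):
        cur = self.parent
        while cur is not None:
            yield cur.node
            cur = cur.parent

cur = Cursor("book:frankenstein").child("dc:creator")
print(cur.value("foaf:name"))   # Mary Shelley
print(list(cur.ancestors()))    # ['book:frankenstein']
```

The point of the sketch is the shape of the navigation, not the syntax: code moves through the open data graph the way XSLT moves through an XML tree.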
A few months ago, I accepted a job outside the academy. This doesn't mean that I'm abandoning digital humanities. In this post, I lay out what I want to do in DH going forward. The common thread through all this is that I believe linked open data is the best way to break down the silo walls that keep digital humanities projects from sharing and building on existing data.
It's that time of year again, when aspiring novelists around the world write a novel in a month. I skipped last year because I was continuing to work on a novel I started in the 2011 event. I haven't finished it yet (I'm editing the first 70,000 words before moving on to the second half), but I wanted to take advantage of NaNoWriMo to start another novel. I'm too slow a writer to finish one before I start another.
Last month, I attended the Google Summer of Code mentor meetup and picked up a nice notebook as one of the giveaways. I've always done my writing on a computer, but this time I figured I'd try to write my novel longhand.
Part of this is because at work, we recently released a digital edition of the original notebooks in which Mary Shelley wrote Frankenstein. As a writer, I find the draft process interesting. None of the deleted text is hidden. It's all there to be seen even when crossed out. While we don't have enough information to be certain about the exact order of the edits, having gone through the process of writing a novel (or two) gives some insight into how that process works. By writing this month's novel in a paper notebook, I hope to get a feel for how Mary might have experienced hers.
If we took stock of everything that we know and compared it to what we don't know, we'd find that we know a lot about almost nothing.1 As we explore new things, we need tools which give us an idea of what we're working with even when we don't know what it is. In textual scholarship, we like to do close readings: understanding all the nuances of a text word by word so that we can tease out almost hidden meanings that rely on us understanding the text as well as its context.2 Sometimes, we don't have a text or a context, but the effect of the text upon an audience. Or, to put it in more practical terms, we can't tell what goes on inside an author's mind, but we do have the resulting text. What can we learn about that mind from the text it produces?
In statistics, saying that something happens "almost never" and saying that it "has zero probability" are pretty much the same thing. If we counted all the things that we know and divided that by the number of things that we don't know, the result would be almost zero. It is ironic that the more we study, the closer the ratio gets to zero.
Last week, I wrote about how mobs might be predictable. One of the first tools that I mentioned was autocorrelation. This is a basic tool that we will use with the others in the list, so it's important to understand exactly what it does. That's what I want to explore this week.
Let's go back to high school geometry. We can define several properties and operations in terms of the angles and sides of the parallelogram to the right, though we'll need to dip into the Cartesian coordinate system a bit to take the next step toward autocorrelation.
We want to look at what it means to do mathematical operations on these line segments. We know that we can add numbers together to get new numbers, but what does it mean to add line segments? If we take the segment from D to E and add the segment from E to B, it's obvious that we end up with the segment from D to B. What's not as obvious is that if we take D to E and add the segment from E to C, we end up with D to C.
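Since the figure isn't reproduced here, the coordinates below are made up, but they show the same head-to-tail rule in numbers: a segment is just the pair of coordinate differences between its endpoints, and adding segments adds those pairs component-wise.

```python
# Made-up corner coordinates for a parallelogram D-E-B-C, for illustration.
D = (0, 0)
E = (4, 0)
B = (5, 2)
C = (1, 2)

def segment(p, q):
    # The segment from p to q, as coordinate differences.
    return (q[0] - p[0], q[1] - p[1])

def add(u, v):
    # Add two segments component-wise.
    return (u[0] + v[0], u[1] + v[1])

print(add(segment(D, E), segment(E, B)) == segment(D, B))  # True
print(add(segment(D, E), segment(E, C)) == segment(D, C))  # True
```

Once segments are pairs of numbers like this, operations such as the dot product, and eventually autocorrelation, are within reach.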
As a kid, I read Asimov's Foundation series in which Hari Seldon develops a mathematical description of society called psychohistory. The science in the books is completely fictional, but it always sat at the back of my mind. What if there was a kernel of truth in the fiction? What if people could be predictable?
Psychohistory has two main axioms (taken from the Wikipedia entry):
that the population whose behaviour was modeled should be sufficiently large
that the population should remain in ignorance of the results of the application of psychohistorical analyses
The first axiom has an analogy in statistical physics: the number of particles should be sufficiently large. A single atom doesn't really have a temperature, because temperature is a measure of how the disorder of a system changes as energy is added. A single atom can't become more or less disordered, but it can have an energy. It just happens that, for everyday systems, that change in disorder tracks the average energy of a group of particles, so we equate temperature with energy and assume that a single atom can have a temperature. The entropy-based definition of temperature is more general than the energy-based definition: it allows negative temperatures.
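In symbols, the standard statistical definition behind that claim (left implicit in the post) is that temperature is set by how the entropy S changes as energy E is added:

1/T = ∂S/∂E

So in an exotic system where adding energy actually reduces the entropy, the right-hand side is negative and the temperature comes out negative.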
The second axiom is similar to what you might expect for a psychology experiment: knowledge of the experiment by the participants can affect the outcome. For example, you might use purchasing data instead of asking someone outright whether they are pregnant, because sometimes the contextually acceptable answer will trump the truth.
The important thing is that people are predictable in aggregate. This is what allows a political poll to predict an election outcome without having to ask everyone who will be voting. Polls aren't perfectly accurate, though, in part because people are more likely to tell a pollster what they think is socially acceptable, which might not match how they vote when they think no one is watching. That, in turn, reinforces the need for the second axiom.
I've had the e-book edition of my novel, Of Fish and Swimming Swords, available for Kindle and Smashwords for two years. Now, I have a print edition.
You can order a print copy from CreateSpace. Use the discount code XMXXKGKU to get 25% off.
The print cover is different from the digital, but I still tried to put together a cover that was somewhat connected to the novel. The digital cover reflects the role of fours and a virtual world tree. In the case of the print edition, the artifacts resemble meshing gears, cycles enmeshed with cycles, and discarded materials half buried in the sand, similar to the layers of conspiracy in the story feeding off of each other and only half emerging from the text.
The next step is to match up the print and digital editions on Amazon so that you can get a copy of the digital edition when you buy the print edition through Amazon's Kindle MatchBook program.
Seeing what happens when you collide the humanities with the digital