Linked Open Code

I've been working off and on over the last six months on a programming language that sits on top of linked open data. Think of it as linked open code.

von Neumann Architecture

Before von Neumann made his observations about code and data, computers typically had some memory dedicated to code, and other memory dedicated to data. The processing unit might have a bus for each, so code and data didn't have to compete for processor attention.

This was great if you were able to dedicate your machine to particular types of problems and knew how much data or code you would typically need.

Von Neumann questioned this assumption. Why should memory treat code and data as different things when they're all just sets of bits?
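
As a throwaway illustration (in Python, nothing to do with the language I'm building): a function's compiled body is just a bag of bytes sitting in memory, and an ordinary string of source text can be turned into running code.

    # Code is just data: a function's compiled body is a bytes object,
    # and a plain string of source text can be compiled into runnable code.
    def greet():
        return "hello"

    print(type(greet.__code__.co_code))   # <class 'bytes'> -- code stored as data
    print(greet.__code__.co_code[:8])     # the first few raw bytes of the function

    source = "def shout():\n    return 'HELLO'"   # code held in an ordinary string
    namespace = {}
    exec(compile(source, "<string>", "exec"), namespace)
    print(namespace["shout"]())           # HELLO -- the data became code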

Continue Reading Linked Open Code

Fun with Functions

This image shows the 3-6-9 hexagram of the circle made of the digital root of Fibonacci numbers. (Photo credit: Wikipedia)

I'm building a language designed to work as naturally with linked data as today's programs work with local memory. The result is highly functional and data-relative in nature, and it reminds me of how XSLT works relative to XML nodes (e.g., a current node, child nodes, ancestor nodes). I have quite a few tests passing for the parser and core engine, so now I'm branching out into libraries and seeing what I can do with all of the pieces.
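
For a rough feel of what "data-relative" means, here's an analogy in Python using the lxml library (the XML and element names are made up for illustration): XPath expressions are evaluated relative to a current node, so you can reach children, parents, and ancestors without knowing the document's absolute layout.

    # Navigating relative to a current node, in the spirit of XSLT/XPath.
    from lxml import etree

    doc = etree.fromstring(
        "<library><shelf><book><title>Frankenstein</title></book></shelf></library>"
    )

    title = doc.find(".//title")                        # pick a "current" node
    print(title.text)                                   # Frankenstein
    print(title.getparent().tag)                        # book  (immediate parent)
    print([el.tag for el in title.iterancestors()])     # ['book', 'shelf', 'library']
    print(title.xpath("ancestor::shelf//title/text()")) # query relative to the current node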

Continue Reading Fun with Functions

My DH Agenda

A few months ago, I accepted a job outside the academy. This doesn't mean that I'm abandoning digital humanities. In this post, I lay out what I want to do in DH going forward. The common thread through all of it is that I believe linked open data is the best way to break down the silo walls that keep digital humanities projects from sharing and building on existing data.
Continue Reading My DH Agenda

Markov Chain Text Generation

With the recent postings elsewhere about Markov chains and text production, I figured I'd take a stab at it. I based my code on the Lovebible.pl code at the latter link (a sketch of the general technique appears after the list below). Instead of the King James Bible and Lovecraft, I combined two million words from sources found on Project Gutenberg:

  • The Mystery of Edwin Drood, by Charles Dickens
  • The Secret Agent, by Joseph Conrad
  • Superstition in All Ages (1732), by Jean Meslier
  • The Works of Edgar Allan Poe, by Edgar Allan Poe (five volumes)
  • The New Pun Book, by Thomas A. Brown and Thomas Joseph Carey
  • The Superstitions of Witchcraft, by Howard Williams
  • The House-Boat on the Styx, by John Kendrick Bangs
  • Bulfinch's Mythology: the Age of Fable, by Thomas Bulfinch
  • Dracula, by Bram Stoker
  • The World English Bible (WEB)
  • Frankenstein, by Mary Shelley
  • The Vision of Paradise, by Dante
  • The Devil's Dictionary, by Ambrose Bierce
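
My code follows the Perl original, but the general technique is simple. Here's a minimal sketch in Python (not the code I actually used, and "corpus.txt" is a stand-in for the combined texts): map each pair of consecutive words to the words observed to follow that pair, then walk the map, picking a random successor at each step.

    # Minimal word-level Markov chain text generator (order-2 by default).
    import random
    from collections import defaultdict

    def build_chain(words, order=2):
        """Map each tuple of `order` consecutive words to its observed successors."""
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=100):
        """Walk the chain, appending a random successor at each step."""
        state = random.choice(list(chain))
        output = list(state)
        for _ in range(length - order):
            successors = chain.get(state)
            if not successors:                      # dead end: jump to a random state
                state = random.choice(list(chain))
                successors = chain[state]
            output.append(random.choice(successors))
            state = tuple(output[-order:])
        return " ".join(output)

    words = open("corpus.txt").read().split()       # hypothetical file of combined texts
    print(generate(build_chain(words), length=200))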

Continue Reading Markov Chain Text Generation

NaNoWriMo 2013

It's that time of year again, when aspiring novelists around the world write a novel in a month. I skipped last year because I was continuing to work on a novel I started in the 2011 event. I haven't finished it yet (I'm editing the first 70,000 words before moving on to the second half), but I wanted to take advantage of NaNoWriMo to start another novel. I'm too slow a writer to finish one before I start another.

Red notebook and pen, part of the swag from the GSoC Mentor Meetup 2013.

Last month, I attended the Google Summer of Code mentor meetup and picked up a nice notebook as one of the giveaways. I've always done my writing on a computer, but this time I figured I'd try to write my novel longhand.

Part of this is because at work, we recently released a digital edition of the original notebooks in which Mary Shelley wrote Frankenstein. As a writer, I find the draft process interesting. None of the deleted text is hidden; it's all there to be seen even when crossed out. While we don't have enough information to be certain about the exact order of the edits, having gone through the process of writing a novel (or two) helps give some insight into how the process works. By writing this month's novel in a paper notebook, I hope to get a feel for how Mary might have experienced her own writing process.

Continue Reading NaNoWriMo 2013

Phase Space

If we took stock of everything that we know and compared it to what we don't know, we'd find that we know a lot about almost nothing.1 As we explore new things, we need tools which give us an idea of what we're working with even when we don't know what it is. In textual scholarship, we like to do close readings: understanding all the nuances of a text word by word so that we can tease out almost hidden meanings that rely on us understanding the text as well as its context.2 Sometimes, we don't have a text or a context, but the effect of the text upon an audience. Or, to put it in more practical terms, we can't tell what goes on inside an author's mind, but we do have the resulting text. What can we learn about that mind from the text it produces?

Continue Reading Phase Space

  1. In statistics, saying that something happens "almost never" and saying that it "has zero probability" are pretty much the same thing. If we counted all the things that we know and divided that by the number of things that we don't know, the result would be almost zero. It is ironic that the more we study, the closer the ratio gets to zero.
  2. See Borges's "Pierre Menard, Author of the Quixote" for a humorous example of text within context.

Autocorrelation

Last week, I wrote about how mobs might be predictable. One of the first tools that I mentioned was autocorrelation. This is a basic tool that we will use with the others in the list, so it's important to understand exactly what it does. That's what I want to explore this week.
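
As a preview before the geometry (a sketch in Python with NumPy, not part of the build-up below): autocorrelation compares a signal with a lagged copy of itself, so a periodic signal lights up at multiples of its period.

    # Autocorrelation: for each lag k, sum x[t] * x[t+k] over the overlapping samples.
    import numpy as np

    def autocorrelation(x):
        x = np.asarray(x, dtype=float)
        x = x - x.mean()                        # measure fluctuations about the mean
        full = np.correlate(x, x, mode="full")  # correlation at every possible lag
        acf = full[full.size // 2:]             # keep the non-negative lags
        return acf / acf[0]                     # normalize so lag 0 equals 1

    # A noisy sine wave shows strong autocorrelation at multiples of its period (25).
    t = np.arange(200)
    signal = np.sin(2 * np.pi * t / 25) + 0.3 * np.random.randn(t.size)
    print(np.round(autocorrelation(signal)[:30], 2))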

Geometry

Parallelogram (Photo credit: Wikipedia)

Let's go back to high school geometry. We can define several properties and operations in terms of the angles and sides of the parallelogram above, though we'll need to dive into the Cartesian coordinate system a bit to see how to take the next step toward the autocorrelation.

We want to look at what it means to do mathematical operations on these line segments. We know that we can add numbers together to get new numbers, but what does it mean to add line segments? If we take the segment from D to E and add the segment from E to B, it's obvious that we end up with the segment from D to B. What's less obvious is that if we take D to E and add the segment from E to C, we end up with D to C.
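
In other words, directed segments add head to tail, just like displacement vectors. A quick check in Python (the coordinates are made up for illustration, not taken from the figure):

    # Directed segments add head to tail like displacement vectors:
    # the displacement D->E plus the displacement E->B equals D->B.
    import numpy as np

    D = np.array([0.0, 0.0])          # illustrative coordinates only
    E = np.array([3.0, 1.0])
    B = np.array([5.0, 4.0])

    DE = E - D
    EB = B - E
    DB = B - D

    print(DE + EB)                    # [5. 4.]
    print(np.allclose(DE + EB, DB))   # True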

Continue Reading Autocorrelation

Predictable Mobs

As a kid, I read Asimov's Foundation series in which Hari Seldon develops a mathematical description of society called psychohistory. The science in the books is completely fictional, but it always sat at the back of my mind. What if there was a kernel of truth in the fiction? What if people could be predictable?

Psychohistory has two main axioms (taken from the Wikipedia entry):

  • that the population whose behaviour was modeled should be sufficiently large
  • that the population should remain in ignorance of the results of the application of psychohistorical analyses

The first axiom has an analogy in statistical physics: the number of particles should be sufficiently large. A single atom doesn't really have a temperature, because temperature is really a statement about how the disorder (entropy) of a system changes as energy is added. A single atom can't change its disorder, but it can have an energy. For everyday systems, the temperature defined this way turns out to be proportional to the average energy of a group of particles, so we equate temperature with energy and assume that a single atom can have a temperature. The entropy-based definition of temperature is more general than the energy-based definition: it even allows negative temperatures.
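
For the curious, the entropy-based definition can be written compactly (this is standard statistical mechanics, nothing specific to psychohistory): the inverse temperature is the rate at which entropy changes with energy, and a "negative temperature" just means that adding energy decreases the entropy.

    \[ \frac{1}{T} = \frac{\partial S}{\partial E}, \qquad T < 0 \iff \frac{\partial S}{\partial E} < 0 \]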

The second axiom is similar to what you might expect for a psychology experiment: knowledge of the experiment by the participants can affect the outcome. For example, you might use purchasing data instead of asking someone outright whether they are pregnant, because sometimes the contextually acceptable answer will trump the truth.

The important thing is that people are predictable in aggregate. This is what allows a political poll to predict an election outcome without having to ask everyone who will be voting. Polls aren't perfectly predictive, though, in part because people are more likely to tell a pollster what they think is socially acceptable, which might not reflect how they vote when they think no one is watching. That tendency reinforces the need for the second axiom.

Continue Reading Predictable Mobs
