The emerging standard for sharing linked open data (LOD, or LD) is JSON-LD, mainly because it's easy to use and plays well with JSON-based REST APIs. That was by design. Consider JSON the modern XML, but for data rather than documents, and JSON-LD the modern RDF/XML. Everything I talk about in this post could be done with RDF/XML, but it's a lot easier with JSON-LD.
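To make that concrete, here's a small, purely illustrative JSON-LD document built and serialized from Python. The schema.org terms and example IRIs are just an assumed vocabulary for the sketch, not anything this post depends on:

```python
import json

# A minimal, hypothetical JSON-LD document: plain JSON plus an @context
# that maps short keys to full vocabulary IRIs, which is what makes it "linked".
doc = {
    "@context": {
        "name": "http://schema.org/name",
        "knows": {"@id": "http://schema.org/knows", "@type": "@id"},
    },
    "@id": "http://example.org/people/alice",
    "@type": "http://schema.org/Person",
    "name": "Alice",
    "knows": "http://example.org/people/bob",
}

# To a REST API this is just JSON; to an RDF toolchain it's a graph.
print(json.dumps(doc, indent=2))
```

The same statements could be written out in RDF/XML, but the JSON form drops straight into the request and response bodies most web APIs already use.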
A library is just a collection of functions. A class is a collection of functions operating on a common data structure (or type). We should be able to do both with linked open code (LOC), since LOD already has the concept of types.
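Here's a rough sketch of what I mean, assuming nothing more than a mapping from type IRIs to functions; all of the names are invented for illustration, not part of any actual LOC scheme:

```python
# Hypothetical sketch: a "library" is a bag of free-standing functions, while a
# "class" groups functions around an LOD type IRI instead of a language-level type.
PERSON = "http://schema.org/Person"

# Library-style: a plain function anyone can call on any node.
def label(node):
    return node.get("name", node.get("@id"))

# Class-style: functions registered against a type IRI.
methods = {
    PERSON: {
        "greet": lambda node: f"Hello, {label(node)}!",
    }
}

def call(node, method_name):
    """Dispatch on the node's @type, much as a class dispatches on its type."""
    node_types = node.get("@type", [])
    if not isinstance(node_types, list):
        node_types = [node_types]
    for t in node_types:
        handler = methods.get(t, {}).get(method_name)
        if handler:
            return handler(node)
    raise AttributeError(f"no {method_name} for types {node_types}")

alice = {"@id": "http://example.org/people/alice", "@type": PERSON, "name": "Alice"}
print(call(alice, "greet"))  # -> Hello, Alice!
```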
It doesn't seem like it's been over four years since I joined MITH and started working with Project Bamboo. Just because I've moved on to a startup and the project's been mothballed doesn't mean we can't mine what was done.
The problems with Project Bamboo are numerous and documented in several places. One of the fundamental mistakes made early on was taking a waterfall approach: designing and developing an enterprise-style workspace that would encompass all humanities research activities, rather than producing an agile environment that leveraged existing standards and tools. Top-down rather than bottom-up.
However, the idea that digital humanities projects share some common issues and could take advantage of shared solutions is important. This is part of the reporting aspect of research: when we learn something new, we not only report the new knowledge, but how we got there to help someone else do similar work with different data. If we discover a way to distinguish between two authors in a text, we not only publish what we think each author wrote, but the method by which we made that determination. Someone else can apply that same method to a different text.
As I start thinking about what should go into the core of a linked open code schema, I'm tempted to put a lot of high-level operations into the core so they run faster. History tells us that's the wrong way to go.
I've been working off and on over the last six months on a programming language that sits on top of linked open data. Think of it as linked open code.
von Neumann Architecture
Before von Neumann made his observations about code and data, computers typically had some memory dedicated to code, and other memory dedicated to data. The processing unit might have a bus for each, so code and data didn't have to compete for processor attention.
This was great if you were able to dedicate your machine to particular types of problems and knew how much data or code you would typically need.
Von Neumann questioned this assumption. Why should memory treat code and data as different things when they're all just sets of bits?
I'm building a language designed to work as naturally with linked data as today's programs work with local memory. The result is highly functional and data-relative in nature, and it reminds me of how XSLT works relative to XML nodes (e.g., a current node, child nodes, ancestor nodes). I have quite a few tests passing for the parser and core engine, so now I'm branching out into libraries and seeing what I can do with all of the pieces.
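To give a feel for the kind of data-relative navigation I mean (without showing the language itself), here's a rough Python analogue: a cursor that keeps track of a current node in a linked data graph and moves to related nodes the way XSLT moves among XML nodes. Everything here is illustrative, not the actual implementation:

```python
# Illustrative analogue only: a cursor over a graph of JSON-LD-like nodes,
# keeping a "current node" the way XSLT keeps a current XML node.
graph = {
    "http://example.org/people/alice": {
        "@type": "http://schema.org/Person",
        "name": "Alice",
        "knows": ["http://example.org/people/bob"],
    },
    "http://example.org/people/bob": {
        "@type": "http://schema.org/Person",
        "name": "Bob",
        "knows": [],
    },
}

class Cursor:
    def __init__(self, graph, node_id, ancestors=()):
        self.graph = graph
        self.node_id = node_id
        self.ancestors = ancestors  # the path taken to reach this node

    @property
    def node(self):
        return self.graph[self.node_id]

    def children(self, prop):
        """Follow a property outward, like selecting child nodes in XSLT."""
        return [Cursor(self.graph, target, self.ancestors + (self.node_id,))
                for target in self.node.get(prop, [])]

    def parent(self):
        """Step back along the path, like moving to an ancestor node."""
        if not self.ancestors:
            return None
        return Cursor(self.graph, self.ancestors[-1], self.ancestors[:-1])

alice = Cursor(graph, "http://example.org/people/alice")
for friend in alice.children("knows"):
    print(friend.node["name"])           # -> Bob
    print(friend.parent().node["name"])  # -> Alice
```

The point isn't the cursor class itself; it's that every expression is evaluated relative to wherever you happen to be in the graph, rather than relative to a block of local memory you loaded ahead of time.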