Based on a Slashdot link, I read a blog post today about the Oracle/Google lawsuit over Java virtual machine patents.
Among the list of software patents that Oracle has decided to sue Google with are two that I’m fairly certain I wrote prior art for. This was in 1983 or 1984, while I was a high school student. This work was published in a book by the University of Oklahoma Press, and I believe I still have a copy somewhere.
It seems that the bar for patents is not very high if patents are granted for something a high school student could come up with a decade earlier!
In ’83 or ’84, I wrote a hybrid interpreter/compiler for a Forth-like language.
I had read the book Threaded Interpretive Languages: Their Design and Implementation and decided to write such a language for my TRS-80 Model I. I gave it the snappy name TIL16U, which stood for Threaded Interpretive Language, 16-bit unsigned ints. The only numeric datatype was the 16-bit unsigned integer, which also nicely represents a pointer datatype on the Z-80 processor.
I took a slightly different approach to writing the language than the book described. Initially, I wrote a simple single-pass compiler from the source language to a simple bytecode in TRS-80 Level II BASIC.
Once I could generate bytecode from ASCII source, I wrote an interpreter for it in Z-80 assembly. The interpreter for such a simple stack-based language is very small, and it was dwarfed by the inclusion of a text editor (incorporated from an article in 80 Microcomputing with a source listing) and cassette I/O routines to load/save my programs off to tape. I used the initial compiler written in BASIC to compile the runtime library for my interpreter.
My library data structure consisted of a routine name stored as an ASCII string, a length, a flag to indicate whether a given routine was implemented in Z-80 machine code or in bytecode, and then the routine body. While interpreting, the TIL16U interpreter would look at the bytecode for a call routine, and following this would be a routine name. The interpreter would look up the routine name in the library, and either jump to it directly (if it was in machine code) or continue interpreting at the start of the routine (if it was in bytecode).
At the time, I was very interested in writing video games — my parents wouldn’t allow me to have an Atari 2600 — and the 1.77 MHz processor on the TRS-80 was not particularly powerful. So the lookup step bothered me performance-wise. I knew I could implement a symbol table and a second pass in the compiler to patch in the addresses of routines, but this made the language less dynamic. I wanted to be able to type in code interactively and have it work, and it was also a hassle to store addresses in the code, because whenever I built a new version of the interpreter or the runtime library they might have to change.
So I hit upon the expedient of making the interpreter selectively compile bits of the running program to machine code as it went: it overwrote the bytecode with machine code that could be jumped to directly, and pushed the address of the interpreter on the stack so that when a Z-80 RET instruction executed, the interpreter resumed. I thought it was cool that the more you ran a program, the faster it would go.
In particular, the bytecode to invoke a library routine could be replaced by a three-byte Z-80 CALL instruction (a one-byte opcode plus two bytes of address).
So when the interpreter found such a bytecode sequence, rather than execute it, it would look up the symbol address, overwrite the bytecode with a CALL instruction to that address, PUSH the return address of the interpreter on the stack, and then jump to the routine. In the future, the interpreter would see the CALL opcode byte and know to jump directly to the code sequence. (As an aside, it is interesting how much of the Z-80 instruction set is still burned into my brain. I can still remember an unconditional jump, or JP, is 0xC3, and the RET instruction is 0xC9, thanks to writing this code.)
I believe this resolution of symbolic routine names to numeric addresses is the mechanism described in one of the patents in the Oracle lawsuit, granted in 1992 and last updated on April 29, 2003.
Additionally, this patent, also referenced in the lawsuit, covers overwriting virtual machine instructions with native code and executing those instead of the bytecode. This patent was originally filed in 1997 and was last updated in 2005.
I’m not claiming that I have perfect prior art for these two patents — I haven’t even studied them very closely — but I believe the techniques I used are at least highly similar to what is patented. Only a patent court can decide if something infringes anyway.
I’d like to claim this was entirely because of my brilliance, but I think it is more likely that the U.S. Patent Office has been granting software patents that are so obvious that a reasonably bright teenager with a computer can come up with them.
To finish the story, the most successful application of TIL16U did actually turn out to be video games. My parents bought me a ChromaTRS color video board for my TRS-80 (“with 15 vivid colors!”), and I wrote several games for it, one of which I remember was a parachute jump game, where you jumped out of a plane and tried to land on a target. The plane speed and wind speed would vary, so you had to judge how the windsock was looking to get the high score.
Ben Mesander has more than 18 years of experience — not counting high school! — leading software development teams and implementing software. His strengths include Linux, C, C++, numerical methods, control systems and digital signal processing. His experience includes embedded software, scientific software and enterprise software development environments.