John Cayley, our guest lecturer in class this morning, used the term “hyperhistory” in reference to the rapid rate at which all things technological evolve these days. Indeed, it seems almost impossible for the hottest iPod/video game console/laptop/PDA not to be labeled “outdated” within a year and “obsolete” in about five. The code we use to control our hardware is updated just as frequently. If a person refuses to update their technology, they can continue to function in their own computing environment, but, as N. Katherine Hayles puts it in “Speech, Writing, Code,” “they are increasingly marooned on an island in time, unable to send readable files or to read files from anyone else.” I can make all the birthday cards in Print Shop 2 I want, but if I want to send one to my grandma, who's running Print Shop 20, I'll have to print it out and give her a hard copy.
I think Hayles's comparison between code and language, then, is especially interesting. She brings out the point that changes in code are much more abrupt than changes in spoken language. We might find it easier to read Dan Brown than Shakespeare, but we can understand them both, despite Shakespeare's language being roughly 400 years old. However, there are other interesting contrasts. Code is much more unforgiving of errors than language is. It doesn't matter whether you mistype one character or the whole program; the machine will not accept it. As Hayles puts it, “regardless of what humans think of a piece of code, the machine is the final arbiter of whether the code is intelligible.” The human processing of language is much more forgiving. I (and probably many of y'all) have been forwarded several copies of an email that goes something like this:
Subject: I CANT BELIEVE I CAN READ THIS!!!!!1
Message: Aoccdrnig to a rscheearch at Cmabrigde Uinervtisy, it deosn't mttaer in waht oredr the ltteers in a wrod are, the olny iprmoetnt tihng is taht the frist and lsat ltteer be at the rghit pclae...
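The contrast is easy to demonstrate. Here's a quick Python sketch of my own (not from Hayles or the email) that scrambles the interior letters of each word the way the forwarded message does, then shows that a single missing character is enough for the machine to reject a program outright:

```python
import random

def scramble_word(word):
    # Keep the first and last letters in place; shuffle only the interior.
    if len(word) <= 3:
        return word
    interior = list(word[1:-1])
    random.shuffle(interior)
    return word[0] + "".join(interior) + word[-1]

def scramble_text(text):
    # Punctuation stays attached to its word; good enough for a demo.
    return " ".join(scramble_word(w) for w in text.split())

print(scramble_text("According to a researcher at Cambridge University"))

# A human shrugs off that kind of noise; the interpreter does not.
try:
    compile("print('hello'", "<demo>", "exec")  # one missing parenthesis
except SyntaxError:
    print("the machine is the final arbiter")
```

Every scrambled word still has its letters, just rearranged, and we read right through it; the program with one character dropped doesn't even get past the compiler.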
I think a really fascinating contrast can be made between the relative stability of code and of human language on a computer. Imagine that I created three files over the last 20 years. First, in 1988, I enter the text of Ronald Reagan's State of the Union Address into an unformatted text editor in the old text-based operating system MS-DOS. In the same plain-text editor, I write a program in the BASIC programming language that will display the text of that address one word at a time. In 1993, I type the text into a document in the WordPerfect word processing software in Windows 3.1 and add some bolding and underlining. Now, in 2008, I want to access all three files on my brand new Windows Vista laptop. Unless I have the right converter, the WordPerfect document will not open in any of the word processors on my computer, so that text will be lost to me. The other two files were plain text, so I'll be able to view their contents. The BASIC code will be useless, though, unless I somehow manage to find a program that runs on Vista and can interpret the obsolete language. The plain text of the address, however, will be exactly as legible when I double-click on it today as it was when I first entered it in 1988.
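That 1988 BASIC program would have been only a few lines, and the idea behind it hasn't aged at all, even if the language has. A rough modern sketch in Python (the placeholder text and the pacing are my invention, just to show the shape of it):

```python
import time

# Placeholder; the full text of the address would go here (or be read
# from the plain text file, which still opens fine 20 years later).
ADDRESS = "the full text of the address would go here"

def display_word_by_word(text, delay=0.5):
    # Print the text one word at a time, as the 1988 BASIC program did.
    for word in text.split():
        print(word)
        time.sleep(delay)

display_word_by_word(ADDRESS, delay=0)
```

The data outlives the program: the plain text file feeds this script as easily as it fed the BASIC original, while the code itself has to be rewritten for each era.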