I’ve had a lot of fun following The New Aesthetic, and I think there are some neat parts of it that relate to digital preservation. If you haven’t seen much on this neologism, the post-colon part of the title of the recent SXSW talk is a bit more explanatory: The New Aesthetic: Seeing Like Digital Devices. For further reading I would suggest this, this, or this. In my work on digital preservation I tend to spend a good bit of time thinking about digital objects, and there are a few points of connection between the two that I wanted to spend a moment teasing out.
The part of the New Aesthetic that I am excited about is the recognition that the digital is fundamentally artifactual, not simply informational, and that the formal and forensic materiality of digital and material objects leaves traces that offer potential for aesthetic, interpretive, and evidentiary provocation. The characteristics of mediums, of processes, and of interfaces all offer this potential.
Digitization is an Act of Artifact Creation, Not of Information Translation
People (quite justifiably) love to get in a huff about poor scans in Google Books. We lose considerable informational qualities of the books in the poor scans, or in poorly processed scans. With that said, the information-translation part of the project is only part of what is happening. The Art of Google Books does a great job of getting us to think about the artifactual qualities of the newly created digital objects and the process through which they were created. I find it particularly amusing that we use the term “artifact” in computing, in the sense of a compression artifact, generally to signal a failure to represent the thing. The New Aesthetic can be a way to think about those artifacts not as defects, but as an aesthetic in and of themselves.
Consider “Image mistaken as the finger of an employee, with attempted autocorrect.” The pixelated section of the image that is removed tells us a bit about how the algorithm sees the image. We can try to fill in the gaps with our minds and think about what it was about this particular illustration that made Google think it was a finger that should be removed.
Similarly, a “Black-and-white frontispiece photographed in color and through tissue” creates fundamentally different ways of seeing the black-and-white image. The accidentally colored-in image looks great, and in the context of a black-and-white book it is completely unexpected. The ghostly image through the tissue paper almost looks like some kind of static problem in the scanning process, but in context we know that we are actually seeing an attempt to scan through tissue paper. What is one supposed to do to digitize tissue paper? In both cases, we are reminded that digitization is not simply copying information; through digitization we can see the pages of the book through different physical and computational processes.
Reading the Products of Reading Machines
Similarly, as our devices further extend our ability to read the world, to layer information on top of the world, the products of that layering can also be captured and reflected on.
So when I used Word Lens to attempt to translate things it shouldn’t translate, I ended up learning about how Word Lens sees and finding serendipitous reinterpretations of texts and environments. Reading Machines: Toward an Algorithmic Criticism, translated from Spanish (a language it is not in), became Reading Machetes. Personally, I think Reading Machetes is a perfectly serviceable alternate title for the book. The image capture of that moment catches part of the meaning of the book: we are now reading a text with a machine, a text that is itself about reading and deforming texts with machines.
What is particularly fun about Word Lens is that it can even read non-text. For example, here is Word Lens attempting to read the mirrors above the mantel in my living room. What exactly is it that describes the grape? I’m not sure, but what I do know is that when you flip Word Lens on, turn it to the “wrong” setting and walk around the world you start to see how it sees. You start to see that you can get rows of square things to be read as text, and you start to be able to guess how it might read.
The Authenticity of Performing Music on the Gameboy
There is also something here about the recursive loop wherein particular computing devices, like Game Boys, become imbued with an authenticity, a really strange kind of authenticity. For example, when people compose chiptunes with Little Sound DJ on Game Boys, they are using the actual physical device to create the sound, but at the same time using a program that is not from the era of the Game Boy. While you can emulate the device, we end up wanting to see video of the performance on the Game Boy to really know that it was actually played on the Game Boy.
I’m just excited to see these kinds of things being herded together, and optimistic that it is part of a broader move toward thinking about and playing with the thingy-ness of things.