Let me Google that for you (lmgtfy) is a snarky way to respond to someone asking an obvious question. It was created “for all those people that find it more convenient to bother you with their question rather than google it for themselves.” Lmgtfy has become a relatively popular way to respond in any number of web forums, but more broadly, I think it speaks to the kind of literacy that search is beginning to represent.
To break this down a little bit: when someone on a web forum responds to your question about how to build pivot tables in Excel, or how to tie a bow hitch, by posting a lmgtfy link, you are being told that the question you asked does not require a human to answer it. It has already been answered on the Internet, and with a very simple search query, as demonstrated here, you could have found that answer. At the core of the idea of lmgtfy is the notion that a savvy digital citizen should be able to make specific assumptions about the kind of knowledge the web puts at their fingertips. Lmgtfy is supposed to be a shaming experience, and the possibility of that shame is predicated on a kind of literacy of collective intelligence.
Collective intelligence is a mushy term; in this case I am referring to Pierre Lévy’s notion. In Collective Intelligence (1997) Lévy proposed a vision for the kinds of changes the internet could generate in culture. Lévy suggested that in online culture “The distinctions between authors and readers, producers and spectators, creators and interpreters will blend to form a reading-writing continuum, which will extend from machine and network designers to the ultimate recipient each helping to sustain the activity of others.” (p.121) I think the shame lmgtfy is intended to evoke demonstrates a limited form of this collective intelligence.
Now let’s be clear: while proponents of the idea of collective or distributed intelligence and cognition are often accused of proposing some magic brain in the sky, that is not what I am referring to. Instead, the idea is that parts of the thinking process are always mediated by tools: pen and paper, print media, computers, or mobile devices, each embedded in the cognitive process of individual agents.
On one level, this is rather obvious. Many already argue that search and Google make trivia and facts less important than the ability to find and interpret information. The point I am focusing on here is that a few key elements are involved: thinking like a search engine, and generalizing from your experience to judge whether a specific question is something the Internet should know.
Thinking like Google and Thinking like the Crowd
Who was president in 1832? What’s the best way to steam carrots? How does !important work in CSS? Which iPhone 4 case is best? Where can I find some good Indian food in Fairfax, VA? All of these questions have relatively straightforward online answers available. In each case, we have developed a sense of specific, limited notions of collective intelligence, and an internal representation of the kind of information that should be out there to help make a given decision. The successful individual searcher has internalized a representation, a map, of both the way a database organizes information (search terms, where Google Maps data comes from, etc.) and the kinds of people who would share that information (the kinds of folks who review restaurants on Yelp, the extent to which a given problem would be shared, the biases of reviewers or bargain hunters on a given do-it-yourself home improvement forum). Effectively, information literacy is developing this model. In essence this is about knowing three things.
- Knowing what kinds of knowledge should be out there on the web. (This is an assumption about the generality of your problem and the nature of information that is put online)
- Knowing what kind of search query will get you there. (This is about understanding a bit about how search works, knowing what kind of keywords will get you where you need to go)
- Knowing what the limitations of that kind of information are, both in terms of the kinds of questions one can ask and the biases of the sources one encounters. (This is the interpretive part, and it is once again about your theory for why someone would post this information online)
At the core, each of these is about developing 1) a sense of how computers, and more specifically databases and search engines, structure and organize information and 2) a sense for the kind of people who share specific information in a given context.
Knowing online is internalizing the machine that is us/ing us
These two points, internalizing a sense of how a computer searches and internalizing a sense of what things people should have shared online to be searched, amount to internalizing a working model of the internet and its users in your mind. It is not that the internet is itself an intelligence, but instead that we are constantly updating our mental model of the web and its users through our own search experiences.
The following example of interpreting ratings on Yelp further demonstrates how I am thinking about this, and also offers a place to consider general notions of competence and their relationship to individual sites.
Site Specificity and Domain Generality of Collective Intelligence Heuristics
As in all knowledge domains, there are idiosyncrasies of competence that are narrow and specific, nested within broader notions of competence. For example, try this word problem on any Yelper. You want a sandwich, and your Yelp search pulls up a restaurant with 4 stars and a restaurant with 5 stars. Which is the better restaurant? Answer: insufficient information; I need to know how many total reviews there are for each establishment. In short, if the 5 star restaurant has that rating as the result of 3 reviewers and the 4 star restaurant has its score as the result of 124 reviewers, it is likely that the 4 star restaurant is well established. And hey, you’re a Yelper, so you know that for every 10 reviewers out there who give a great restaurant 5 stars, there will always be a few snarks who feel they can only give a 5 star review once every six months. Now, even if you are not a Yelper but are familiar with how reviews work on Amazon, you might have come to the same set of conclusions. In all likelihood the Yelper would have a better sense of how to read individual reviews, and reviewers’ profiles, in the process of making restaurant decisions. However, the individual with experience with Amazon’s similar system of reviews could transfer and translate that experience into a more general competence at interpreting online ratings and reviews.
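The intuition behind this word problem can be sketched as a weighted (Bayesian) average, a common technique for discounting ratings backed by few reviews by pulling them toward a prior expectation. To be clear, the prior mean and prior weight below are illustrative assumptions for this sketch, not anything Yelp or Amazon actually publishes.

```python
def adjusted_rating(mean_stars, num_reviews, prior_mean=3.5, prior_weight=10):
    """Shrink a raw star average toward a prior expectation.

    The fewer reviews a rating rests on, the more it is pulled toward
    prior_mean; a rating backed by many reviews keeps most of its raw value.
    prior_mean and prior_weight are hypothetical tuning values.
    """
    return (prior_weight * prior_mean + num_reviews * mean_stars) / (
        prior_weight + num_reviews
    )

# The word problem from the text: 5 stars from 3 reviews vs. 4 stars from 124.
five_star_new_place = adjusted_rating(5.0, 3)     # ≈ 3.85, heavily discounted
four_star_veteran = adjusted_rating(4.0, 124)     # ≈ 3.96, barely moved

print(five_star_new_place, four_star_veteran)
```

Under these assumptions the well-established 4 star restaurant edges out the sparsely reviewed 5 star one, which is exactly the judgment the experienced Yelper makes informally.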