Friday, October 28, 2011

Information Fatigue

The term “information fatigue” was coined in the 1990s by the British psychologist David Lewis. But as history shows us, people have been complaining about information overload for thousands of years.

The topic is explored by Harvard historian Ann Blair in her latest book, “Too Much to Know.” Dr. Blair reveals, through primary sources, that humans have been feeling overwhelmed by the accumulation of knowledge in encoded form (as a scroll, as a handwritten codex, as a typeset book) basically since the moment these technologies created gluts of information. James Gleick traces the history of information technology and computer science in his book “The Information,” about which he says in an interview on the Bat Segundo Show:

Gleick: …I’m hesitating to call it the “problem” of information overload, of information glut — is not as new a thing as we like to think. Of course, the words are new. Information glut, information overload, information fatigue.
Correspondent: Information anxiety.
Gleick: Information anxiety. That’s right. These are all expressions of our time.

The question that really intrigues me is how to deal, as a modern human, with the double-edged sword of information ubiquity. On the one hand, this is what human beings have always craved: since the Stone Age, when information was very hard to come by, major world-changing ideas came along only once every few thousand years. Nowadays, via the network effect, groundbreaking research is happening around the world, all the time. But on the other hand, though we live in a golden age of information availability, we don’t quite have the tools to deal with it, at least on an individual level. Personally, I think email is a prime example of a poorly designed and failed digital communications technology: simply the worst. We need information systems that truly work to enhance both the individual and society.

The question becomes: how to have our cake and eat it too? (Or is the cake a lie?)

Friday, October 7, 2011

How Google "Sees" Me

I find this exercise very interesting because I teach a class at UW-Whitewater that I developed called Social Media Optimization and the New Web. One of the first things I ask students to do in this class is to Google themselves using a variety of modifiers: search your name, then run image, video, and news searches, add limiters such as “Wisconsin” or “Whitewater,” and then try all these same techniques in Yahoo and in Bing. Students are almost always weirded out by results they weren’t expecting. Often they’re disappointed to learn that they are the equivalent of cyber-ghosts, invisible to the web; in other words, they have no search visibility or social media influence. In building up my project, GameZombie TV, I used to search (or egosurf) “GameZombie” religiously, looking to improve the SEO and SMO of the project online. This assignment has given me the opportunity to egosurf myself, which I haven’t done in a while.
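For readers who want to script the exercise, here is a minimal sketch in Python that generates the egosurf links I have students try by hand. The engine URL patterns are assumptions based on common public query formats, and may change over time, so treat this as illustrative rather than definitive:

    from urllib.parse import quote_plus

    # Build egosurf URLs for a name across several engines.
    # URL patterns are illustrative and not guaranteed to stay stable.
    def egosurf_urls(name, limiter=None):
        query = f'"{name}"' + (f' "{limiter}"' if limiter else "")
        q = quote_plus(query)
        return {
            "google_web": f"https://www.google.com/search?q={q}",
            "google_images": f"https://www.google.com/search?q={q}&tbm=isch",
            "bing": f"https://www.bing.com/search?q={q}",
            "yahoo": f"https://search.yahoo.com/search?p={q}",
        }

    for engine, url in egosurf_urls("Spencer Striker", "Whitewater").items():
        print(engine + ": " + url)

Paste each printed URL into a browser and compare what surfaces across the different engines, with and without the limiter.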

A social networking analysis of the term “Spencer Striker” returns a lot of results because I set virtually everything to public and have published content online for five years or so. I have linked tons of my social network profiles to my Google Profile, which helps Google know which online identities are mine. It’s kind of like submitting your website to Google’s spider index: Google would have found it anyway, but this way there’s no ambiguity. SEO is still pretty imperfect, though; I’m always totally frustrated by an image ad for a hammer that shows up when I search myself, a hardware store’s bid on a “Spencer…Striker” hammer. Doh! And in image search, my uploads to Google Plus show up as me, because they were uploaded by me; but of course they are not me, they are the subjects of the photos I have taken. This happens because Google is blind and can only associate tagged words in an algorithmic attempt to generate relevancy. This tech will get better and better in the future, and we should all keep an eye on how Google “sees” us.
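To make the “Google is blind” point concrete, here is a toy sketch, my own oversimplification with made-up filenames and tags, and emphatically not Google’s actual ranking algorithm. It scores images purely by matching query terms against associated text tags, which is exactly how a photo I uploaded can outrank a photo of me:

    # Toy tag-based relevance: rank images by how many query terms
    # appear among their text tags. All data here is hypothetical.
    def relevance(query_terms, tags):
        tag_set = {t.lower() for t in tags}
        return sum(1 for term in query_terms if term.lower() in tag_set)

    images = [
        {"file": "me_headshot.jpg", "tags": ["portrait", "Spencer"]},
        {"file": "expo_photo.jpg", "tags": ["Spencer", "Striker", "GameZombie", "uploader"]},
    ]

    query = ["Spencer", "Striker"]
    for img in sorted(images, key=lambda i: relevance(query, i["tags"]), reverse=True):
        print(img["file"], relevance(query, img["tags"]))
    # expo_photo.jpg (a photo I took, tagged with my name as uploader)
    # scores 2 and outranks me_headshot.jpg (an actual photo of me).

A text-matching engine has no idea who is in the picture; it only knows which words travel with the file.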

Understanding Infrastructure: Dynamics, Tensions, and Design

For this week, I read the article “Understanding Infrastructure: Dynamics, Tensions, and Design.” The article is in fact a “Report of a Workshop on ‘History & Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures,’” published in January 2007 by the scholars Paul Edwards, Steven Jackson, Geoffrey Bowker, and Cory Knobel. This report summarizes the findings of a workshop that took place in September 2006 at the University of Michigan: a three-day, National Science Foundation-funded “think tank,” so to speak, that brought together experts in social and historical studies of infrastructure development, domain scientists, information scientists, and NSF program officers. The goal was to distill “concepts, stories, metaphors, and parallels” that might help realize the NSF vision for scientific cyberinfrastructure.

To begin, this workshop and report on cyberinfrastructure is highly technical, so I will attempt to translate some of the work and findings that are directly relevant to our class, LIS 201: The Information Age, as presented by Professor Greg Downey. The authors use Stewart Brand’s notion of the “clock of the long now” to remind us to step back and look at changes occurring before our eyes on a slower scale than we are used to thinking about. Citing Brand, the authors argue that our current cyberinfrastructure has developed over the past 200 years, during which an exponential increase in information gathering and knowledge workers on the one hand, and the accompanying development of technologies to sort that information on the other, produced what we now call “cyberinfrastructure.”

Manuel Castells, a Spanish-born and highly influential sociologist and communications researcher whom Dr. Greg Downey mentioned in class, argued that the roots of the contemporary “network society” lie in new organizational forms created in support of large corporations. James Beniger, another scholar Professor Downey mentioned in class, described the entire period from the first Industrial Revolution to the present as an ongoing “control revolution.” As we have seen in class from such examples as the old corporate education films and Charlie Chaplin’s “Modern Times,” the control revolution describes the trend in society toward efficiency, commodification, compartmentalization, specialization, and of course control, of both information flow and of how people carry out their work and lives. The authors ultimately define cyberinfrastructure as the set of organizational practices, technical infrastructure, and social norms that collectively provide for the smooth operation of science work at a distance. The cyberinfrastructure will collapse if any of those three pillars fails.

I find this last thought particularly interesting because the very idea of a functioning modern cyberinfrastructure depends upon the implicit “buy-in,” or cooperation, of society. It reminds me of what the great biologist E. O. Wilson once said: that if all the ants were suddenly removed from the world, our entire ecosystem, and the world as we know it, would collapse. The same is true of human beings’ presumed compliance with the rules, regulations, and norms that comprise our modern cyberworld. If we suddenly stopped playing by the rules, the whole house of cards would come crashing down.