Friday, November 4, 2011

As We May Think - And the World's Waking Up

As We May Think, first published by Vannevar Bush in The Atlantic Monthly in 1945, is a mind-blowing work of prognostication on the future potential of technology. Bush reminds me very much of Ray Kurzweil, who in many ways is the futurist/technology prophet of our times. It's interesting to think that Vannevar Bush's conceptualization of the Memex as a collective memory machine was in part inspired by the horrors of war. I have often thought about mankind's pattern of generational wars in this way. It's as if each successive generation loses the memories of the previous generation and feels instead the barbaric impulse for war that is endemic to our mammalian nature. This leads more or less directly to the thought that if we, as a society, had a collective memory machine, then horrible apocalyptic wars, such as WWI and WWII, would not be possible (since the current generation would remember all the terrible lessons of the previous generation, and so on).

Vannevar Bush anticipated the Internet and the World Wide Web, as well as hypertext, computers, speech recognition, and Wikipedia. All have come into being with greater or lesser efficacy and usefulness for people. But have these tools given rise to a collective memory that will end all wars? These technologies have taken strides in this direction, one might argue. The world is connected in a way never before seen in human history. But we're in an experimental phase. Consider Julian Assange leaking all those top-secret cables. This creates a world of open information, but it also creates fissures in the fabric of world order that can be exploited--it's unclear whether such actions lead to a safer or a more dangerous world. Ray Kurzweil, our Vannevar Bush, envisions the singularity, the downloading of our brains, our merging with machines, and the world's waking up. Given the unlikely success of Bush's ideas, have we any reason to doubt Kurzweil's?

Friday, October 28, 2011

Information Fatigue

The term “information fatigue” was coined in the 1990s by British psychologist David Lewis. But as history shows us, people have been complaining about information overload for thousands of years.

The topic is explored by Harvard historian Ann Blair in her latest book, “Too Much to Know.” Dr. Blair shows through her sources that humans have been feeling overwhelmed by the accumulation of knowledge in encoded form—as a scroll, as a handwritten codex, as a typeset book—basically since the moment these technologies created gluts of information. James Gleick traces the history of information technology and computer science in his book, “The Information,” about which he says in an interview on the Bat Segundo Show:

Gleick: …I’m hesitating to call it “problem” of information overload, of information glut — is not as new a thing as we like to think. Of course, the words are new. Information glut, information overload, information fatigue.
Correspondent: Information anxiety.
Gleick: Information anxiety. That’s right. These are all expressions of our time.

The question that really intrigues me is how to deal, as a modern human, with the double-edged sword of information ubiquity. On the one hand, this is what human beings have always craved, since the Stone Age, when information was very hard to come by and major world-changing ideas came along only once every few thousand years. Nowadays, via the network effect, groundbreaking research is happening around the world, all the time. But on the other hand, though we live in a golden age of information availability, we don’t quite have the tools to deal with it, at least on an individual level. Personally, I think email is an example of a poorly designed and failed digital communications technology—simply the worst. We need information systems that truly work to enhance the individual and the society.

The question becomes: how to have our cake and eat it too? (Or is the cake a lie?)

Friday, October 7, 2011

How Google "Sees" Me

I find this exercise very interesting because I teach a class at UW-Whitewater that I developed called Social Media Optimization and the New Web. One of the first things I ask students to do in this class is to Google themselves using a variety of modifiers: search your name, image search, video search, news search, add limiters such as “Wisconsin” or “Whitewater,” then try all these same techniques in Yahoo and in Bing, etc. Students are almost always weirded out by some of the results they weren’t expecting. Often they’re disappointed to learn that they are the equivalent of cyber-ghosts, invisible to the web. In other words, they have no search visibility or social media influence. In building up my project, GameZombie TV, I used to search (or egosurf) “GameZombie” religiously, looking to improve the SEO and SMO of the project online. This assignment has given me the opportunity to egosurf myself, which I haven’t done in a while.
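The exercise above is easy to script. Here's a minimal sketch that builds the same kind of modifier queries (an exact-phrase name plus optional limiters) as search URLs for the three engines mentioned; the `egosurf_urls` helper is my own illustration, not part of any search-engine API.

```python
from urllib.parse import urlencode

# Public web-search endpoints for the three engines mentioned above.
ENGINES = {
    "google": "https://www.google.com/search?",
    "bing": "https://www.bing.com/search?",
    "yahoo": "https://search.yahoo.com/search?",
}

def egosurf_urls(name, limiters=()):
    """Build egosurfing query URLs: exact-phrase name + optional limiters."""
    query = " ".join([f'"{name}"', *limiters])
    return {engine: base + urlencode({"q": query})
            for engine, base in ENGINES.items()}

# e.g. the "add a limiter" variant of the exercise:
urls = egosurf_urls("Spencer Striker", limiters=["Wisconsin"])
```

Each returned URL can be opened in a browser to compare how the engines rank the same query.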

A social networking analysis of the term “Spencer Striker” returns a lot of results because I set virtually everything to public and have published content online for five years or so. I have linked tons of my social network profiles to my Google Profile, which helps Google know which online identities are mine—it’s kind of like submitting your website to Google’s spider index: Google would have found it anyway, but this way there’s no ambiguity. SEO is still pretty imperfect, and I’m always totally frustrated by an image ad of a hammer that shows up when I search myself—a hardware store has bid on a “Spencer…Striker” hammer. Doh! And in image search, uploads to Google Plus show up as me, because they were uploaded by me, but of course they are not me—they are the subjects of the photos I have taken. This happens because Google is blind, and can only associate tagged words in an algorithmic attempt to generate relevancy. This tech will get better and better in the future, and we should all keep an eye on how Google “sees” us.
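The "Google is blind" point can be made concrete with a toy sketch of tag-based relevancy. All of the data and the scoring function below are made up for illustration; the idea is just that a keyword matcher sees only the words associated with an item, so my photos, photos of me, and the hammer ad all look equally relevant.

```python
def relevancy(query, tags):
    """Toy score: fraction of query terms found among an item's tags."""
    terms = set(query.lower().split())
    return len(terms & {t.lower() for t in tags}) / len(terms)

# Hypothetical image index: filenames mapped to their associated tags.
photos = {
    "portrait_of_me.jpg": ["spencer", "striker", "portrait"],
    "photo_i_took.jpg": ["spencer", "striker", "upload", "landscape"],
    "hammer_ad.jpg": ["spencer", "striker", "hammer", "hardware"],
}

scores = {name: relevancy("spencer striker", tags)
          for name, tags in photos.items()}
# All three items tie at a perfect score -- the matcher can't tell a
# portrait of me from a photo I took, or from a hammer ad that happens
# to carry the same two words.
```

Real ranking algorithms are vastly more sophisticated, but the underlying ambiguity—words about an item versus the item itself—is the same one that puts a hammer in my image results.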

Understanding Infrastructure: Dynamics, Tensions, and Design

For this week, I read the article called “Understanding Infrastructure: Dynamics, Tensions, and Design.” The article is in fact a “Report of a Workshop on ‘History & Theory of Infrastructure: Lessons for New Scientific Cyberinfrastructures,’” published in January 2007 by scholars Paul Edwards, Steven Jackson, Geoffrey Bowker, and Cory Knobel. This report summarizes the findings of a workshop that took place in September 2006 at the University of Michigan--a three-day National Science Foundation-funded “think tank,” so to speak, that brought together experts in social and historical studies of infrastructure development, domain scientists, information scientists, and NSF program officers. The goal was to distill “concepts, stories, metaphors, and parallels” that might help realize the NSF vision for scientific cyberinfrastructure.

To begin, this workshop and report on cyberinfrastructure is highly technical, so I will attempt to translate some of the work and findings that are directly relevant to our class, LIS 201: The Information Age, as presented by Professor Greg Downey. The authors use Stewart Brand’s notion of the “clock of the long now” to remind us to step back and look at changes occurring before our eyes on a slower scale than we are used to thinking about. Citing Brand, the authors argue that our current cyberinfrastructure has developed over the past 200 years, during which an exponential increase in information gathering and knowledge workers on the one hand, and the accompanying development of technologies to sort that information on the other, has produced what we now call “cyberinfrastructure.”

Manuel Castells, a Spanish-born and highly influential sociologist and communications researcher—whom Dr. Greg Downey mentioned in class—argued that the roots of the contemporary “network society” are new organizational forms created in support of large corporations. James Beniger—another scholar Professor Downey mentioned in class—described the entire period from the first Industrial Revolution to the present as an ongoing “control revolution.” As we have seen in class from such examples as the old corporate education films and Charlie Chaplin’s “Modern Times,” the control revolution describes the trend in society toward efficiency, commodification, compartmentalization, specialization, and of course control—of both information flow and how people carry out their work and lives. The authors ultimately define cyberinfrastructure as the set of organizational practices, technical infrastructure, and social norms that collectively provide for the smooth operation of science work at a distance. The cyberinfrastructure will collapse if any of those three pillars should fail.

I find this last thought particularly interesting because the very idea of a functioning modern cyberinfrastructure depends upon the implicit “buy-in,” or cooperation, of the society. It reminds me of what the great biologist E.O. Wilson once said: if all the ants were suddenly removed from the world, our entire ecosystem and the world as we know it would collapse. The same is true of human beings’ presumed complicity with the rules, regulations, and norms that comprise our modern cyberworld—if we suddenly stopped playing by the rules, the whole house of cards would come crashing down.

Thursday, September 29, 2011

Commodore 64 - and the fleetingness of cool

Love this retro video about the launch of the Commodore 64, originally broadcast in 1988.

"The Commodore 64 was the first computer for many families. This program looks at what you can do with the famous C-64. Demonstrations include The Wine Steward, Skate or Die, Strike Fleet, the Koala Pad, Master Composer, Tetris, and Berkeley Software's GEOS. Includes a visit to a Commodore Owners Users Group meeting and an interview with Max Toy President of Commodore."

It's so interesting to me because I was a kid when the 64 came out and my family bought one. I vividly remember playing lots of different games on the system--thinking at the time that it was such a huge improvement over the Atari 2600. Of primary relevance to the study of the Information Society is the phenomenon of rapid technological advancement, demonstrated by how campy this video looks in 2011. The ideas being introduced in this machine include the use of color, a more user-friendly graphical user interface, the ability to run BASIC programs as well as games, and a consumer-friendly price point. All of these elements of the computer are alive and well in the current market. But the music, the production value, and the corny way in which the hosts talk about cutting-edge computer elements like the BASIC programming language and floppy disks reveal to the modern observer the fleetingness of being on the cutting edge.