Defining Web 2.0
March 13th, 2008
Rafael Alvarado

Almost as absurdly popular as the concept of Web 2.0 itself is the idea that the term can’t be defined. After listening to several EDUCAUSE podcasts from the ELI Conference in San Antonio this January, I am struck by how frequently folks remark on the inherent undefinability of the word. It’s almost as if this fuzziness is part of the Web 2.0 meme itself, an auxiliary meme designed to inoculate the idea against being dismissed out of hand for being too simple–or contradictory to the spirit of Web 2.0 itself. After all, a big part of the idea is that it is more social than cognitive, and to many, social means fuzzy. So, at the risk of committing analytical murder on the idea, here is a definition:
Web 2.0 refers to (1) a wildly successful set of technologies that radically lowered the bar to creating content on the web (blogs, wikis, RSS, etc.), which tended to vastly increase the number of web reader-writers; (2) another set of wildly successful technologies that took advantage of the network effects of that increase (Google anything, Wikipedia, del.icio.us, etc.) and generated value by exposing the massive growth in web content and participation back to the client; and (3) the emergent devices, genres, and structures of participation that resulted from this feedback loop (tag clouds, social bookmarks, feeds, etc.).
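Since the feedback loop in (3) is the crux, here is a toy sketch of one of its emergent devices: a tag cloud computed from nothing but user activity. The code is TypeScript; the Bookmark shape and the sizing formula are my own illustrative assumptions, not any particular service’s API.

```typescript
// A social bookmark: who saved what, with which user-chosen tags.
interface Bookmark {
  user: string;
  url: string;
  tags: string[];
}

// Count how often each tag appears across everyone's bookmarks.
function tagCounts(bookmarks: Bookmark[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const b of bookmarks) {
    for (const tag of b.tags) {
      counts.set(tag, (counts.get(tag) ?? 0) + 1);
    }
  }
  return counts;
}

// Map counts to font sizes, scaling logarithmically so the most
// popular tags read as big but do not dwarf everything else.
function tagCloud(bookmarks: Bookmark[], minPx = 10, maxPx = 32): Map<string, number> {
  const counts = tagCounts(bookmarks);
  const max = Math.max(...counts.values());
  const cloud = new Map<string, number>();
  for (const [tag, n] of counts) {
    const scale = max > 1 ? Math.log(n) / Math.log(max) : 1;
    cloud.set(tag, Math.round(minPx + (maxPx - minPx) * scale));
  }
  return cloud;
}
```

The point of the sketch is that no editor curates the cloud; the structure is an aggregate side effect of many reader-writers doing their own thing.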
Looked at from a purely technical perspective, the addition of through-the-web editing and ready-made information architectures (how blogs and wikis improved on building homepages with Netscape Composer) appears as a difference in degree, not in kind. The same goes for the addition of smooth DHTML and AJAX to improve the user experience of working with web applications. But the net effect (pun perhaps intended) of these changes in degree was to push read-write “prosumption” on the web past some critical threshold, where the emergent properties of the system changed in kind. At some point, folks–content prosumers as well as web application developers and companies–began to think differently about what can be done with the web. They began to get something of what Sir TBL intended when he created HTML and HTTP almost two decades ago. And that is Web 2.0.
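To show how small that difference in degree really is, here is a minimal sketch of through-the-web editing in the AJAX style, in TypeScript: post the edited text back to the server and update the page in place, no full reload. The /save endpoint and the element id are placeholders I made up for the example.

```typescript
// Minimal through-the-web editing: send the edited text back over
// XMLHttpRequest and swap it into the page without reloading.
// The "/save" endpoint and the "content" element id are placeholders.
function saveEdit(pageId: string, newText: string): void {
  const xhr = new XMLHttpRequest();
  xhr.open("POST", "/save");
  xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
  xhr.onreadystatechange = () => {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // The save succeeded: show the new text in place.
      const target = document.getElementById("content");
      if (target) target.textContent = newText;
    }
  };
  xhr.send(
    `page=${encodeURIComponent(pageId)}&text=${encodeURIComponent(newText)}`
  );
}
```

A dozen lines, technically unremarkable–and yet this is the pattern that turned readers into writers at scale.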
Web 3.0, on the other hand, has the opposite problem–all definition and little fuzziness to show for it. I have hopes for it, though. If the threshold for creating structured micro-content can be lowered–by building it into our through-the-web authoring tools as easily as tools for creating tags or trackbacks are–then search engines, aggregating tools, and network effectors will begin to seek out, privilege, and select for that kind of content. The pressure will then be on for prosumers to shape up their content for the Engines; distribution will pull production in its considerable draft. It’s being called SWEO, and it will hopefully have the same effect on content production that SEO (Search Engine Optimization) has had already.
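What might “built into our authoring tools” look like? Here is a hypothetical sketch: a blog editor that emits hCalendar-style microformat markup for free whenever the writer announces an event. The EventPost shape and the renderEvent helper are my inventions for illustration; only the class names (vevent, summary, dtstart, location) come from the microformats vocabulary.

```typescript
// Hypothetical authoring-tool helper: the writer fills in ordinary
// fields, and the tool emits hCalendar-style microformat markup for
// free, so engines and aggregators can read the event as data.
interface EventPost {
  title: string;
  start: string; // ISO 8601, e.g. "2008-04-01T19:00"
  location: string;
}

function renderEvent(post: EventPost): string {
  return `<div class="vevent">
  <span class="summary">${post.title}</span>
  on <abbr class="dtstart" title="${post.start}">${post.start}</abbr>
  at <span class="location">${post.location}</span>
</div>`;
}
```

The author never sees the mark-up, any more than a blogger today sees the RSS their software generates; that is exactly how the threshold gets lowered.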
Let me conclude by attempting to link Webs 1, 2, and 3 into something that unifies them beyond a technology trail that begins with HTML/HTTP 1.0.
In the beginning was the World Wide Hypertext. In this Web, all pages were in principle connected by a World Graph of links, but in reality they were not. Instead, the Web was composed of some very large and dense graphs loosely connected at the edges, and there were many structural holes and isolated islands of content. Then came the Search Engines, and Google in particular, to link everything together. Google itself became the de facto Central Node of a vast network, joining the separate graphs into One. Then came the Engines of Content–the blogs, the wikis–along with their built-in Linking Engines–the syndicators and aggregators–to create a Web of self-linking content. So the World Graph effectively became a reality. But the WG lacked structure–what some ventured to call “meaning”–a layer of mark-up that would make searching and using this vast sphere of Content both easier and more interesting. And, by virtue of Standardized Ontologies, it would convert the randomness of the crowd into something coherent. And so the Semantic Web was created.
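For the allegory in plain graph terms, here is a toy model (mine, not anyone’s actual crawl of the Web) showing how a single Central Node that links to every page collapses many islands into One:

```typescript
// Toy model of the World Graph: pages and undirected links.
type Graph = Map<string, Set<string>>;

function link(g: Graph, a: string, b: string): void {
  if (!g.has(a)) g.set(a, new Set());
  if (!g.has(b)) g.set(b, new Set());
  g.get(a)!.add(b);
  g.get(b)!.add(a);
}

// Count the islands: connected components, by depth-first search.
function countComponents(g: Graph): number {
  const seen = new Set<string>();
  let components = 0;
  for (const start of g.keys()) {
    if (seen.has(start)) continue;
    components++;
    const stack = [start];
    while (stack.length > 0) {
      const node = stack.pop()!;
      if (seen.has(node)) continue;
      seen.add(node);
      for (const next of g.get(node) ?? []) stack.push(next);
    }
  }
  return components;
}

// Two isolated islands of content...
const web: Graph = new Map();
link(web, "blogA", "blogB");
link(web, "wikiX", "wikiY");
console.log(countComponents(web)); // 2

// ...until a Central Node links to every page and joins them into One.
for (const page of [...web.keys()]) link(web, "engine", page);
console.log(countComponents(web)); // 1
```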
Or something like that.