By Brad King, Ball State University
“There is no there there.” - Gertrude Stein
Social media doesn’t exist. At least not in the way it’s normally discussed.
I’ve made this statement to countless technologists in the past few years without much pushback. We discussed the evolution of modern technologies, the philosophy of digital tools and the rapid expansion of software applications now available to the “humans,” my long-standing name for those who prefer pushing buttons to learning underlying architectures. (In other words: normal people.)
Yet when I’ve taken this argument out to the masses, nothing I’ve said — short of my decade-long writing against Apple’s technology tactics — has generated more heated conversation than my statement about social media. The response has always been the fiercest from media practitioners who have built entire cottage industries around selling a quick fix to people in search of easy answers to the technological revolution.
My position seems even less defensible when you consider that I teach two courses with the term “social media” in the titles: one in the digital media minor at Ball State University, where I teach, and one with the New York Times Knowledge Network as part of a partnership between my university and that news organization.
If I truly believed my statement, the rational objector would point out, why would I build courses around the idea of social media, training people to do the very thing I say doesn’t exist?
The short answer is this: I believe in the bait-and-switch.
The long answer is a bit more nuanced than that and requires us to travel back to the dawn of the modern, digital networked age: 1945.
***
By July 1945, the end of World War II was near. The Allied powers dealt the Germans and Italians haymakers in April, and by month’s end Benito Mussolini and Adolf Hitler were dead. As major combat in the European theater closed, the Allies began a relentless march toward Japan. The outcome of the war was no longer in doubt (although the particulars of how it would come to an end were still unforeseen).
With peace looming, scientists began to wonder what would happen next. Many had been pulled away from their research, directed to create modern technologies that might aid the Allied forces. What would happen, many wondered, after the war?
Vannevar Bush, one of the scientists who organized the Manhattan Project and who held great political sway, penned the article “As We May Think” for The Atlantic, in which he articulated this problem. The explosion of research and information had reached a tipping point. It was impossible for the modern scientist to keep abreast of all the research and information being created in their field. The war, the consolidation of scientific power and the practical applications were coming too fast.
That flow, he said, was only going to grow, which was bad for society because information is beneficial only if people can find and access it in a rational way.
His solution: Instead of sending all of these scientists back to their labs, why not search for a way to create an organization that would allow for a greater sharing of information and research? Better yet, he argued, why not begin to develop technologies that would allow those scientists to search and retrieve information more quickly and more efficiently so they could spend more time thinking and less time digging?
Why not, he suggested, build a network of information that used software tools, tools that did not yet exist, to help publish, store, search and retrieve information in ways that make us more efficient — better — thinkers?
***
Fifteen years later, J.C.R. Licklider — the grandfather of the Internet — expanded on Bush’s thoughts in his work “Man-Computer Symbiosis.” Licklider argued that computers would need to help people find information in real time.
Just as important, Licklider said modern digital technologies would need to do more than simply complete the tasks humans directed them to do. Modern software technologies would need to learn the kinds of problems humans didn’t even know they had and offer solutions to them. These technologies, he said, needed to help create the kind of serendipity that is the hallmark of creativity.
While my summary of his work sounds suspiciously close to artificial intelligence, Licklider clarified his thoughts in “The Computer as a Communication Device” in 1968. He viewed these technologies not as free-thinking machines but as something more akin to the collaborative filtering you see today whenever you purchase a book from Amazon.com and are immediately shown a list of books that other people who purchased the same title also bought.
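To make that idea concrete, here is a minimal sketch of the “people who bought this also bought” style of collaborative filtering that Licklider’s vision anticipated. The customers, purchase data and function name are invented for illustration; this is a toy, not a description of how Amazon actually computes its recommendations.

```python
from collections import Counter

# Toy purchase history: each customer maps to the set of books they bought.
# All names and data here are invented purely for illustration.
purchases = {
    "alice": {"Hackers", "Tools for Thought", "Mediactive"},
    "bob":   {"Hackers", "Where Wizards Stay Up Late"},
    "carol": {"Hackers", "Tools for Thought"},
    "dave":  {"Mediactive", "Where Wizards Stay Up Late"},
}

def also_bought(title, purchases, top_n=3):
    """Rank the books most often co-purchased with `title`."""
    counts = Counter()
    for basket in purchases.values():
        if title in basket:
            counts.update(basket - {title})  # count everything else in that basket
    return counts.most_common(top_n)

# "Tools for Thought" ranks first: two of the three customers who bought
# "Hackers" also bought it.
print(also_bought("Hackers", purchases))
```

The pattern is the point: the software surfaces a connection the reader never asked it to find, which is exactly the serendipity Licklider had in mind.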
***
By the early 1970s, young hackers at MIT and a handful of other universities around the United States were building the early software tools that would power the Internet and eventually lead to the creation of the World Wide Web.
In his work Hackers: Heroes of the Computer Revolution, author Steven Levy chronicles the ingenious programmers who built the beta version of the modern, digital networked world on the principles of Bush and Licklider (although it’s relatively clear that this wasn’t a pre-planned event so much as a continuation of the battle to create the best network and software).
Some of the guiding principles that evolved during this development — the Hacker Ethic as Levy dubbed it — contained ghosts of the thinking that Bush and Licklider laid out: decentralization of power, open protocols, and tools that were easy to use in order to create art and beauty.
What emerged through the late 1970s, as these software tools began to make their way across the globe and more people engaged with the network, was the Intergalactic Computer Network that Bush first described and that Licklider conceptualized and named.
***
Which brings me back to where we started: my contention that social media doesn’t exist.
For many, social media has come to be defined in two ways: either as a series of software tools (e.g., “Twitter is social media”) or as a series of platitudes and techno-babble (e.g., “social media is Web 2.0 applications that allow for two-way communication”). These strike me as wildly inaccurate and intellectually lazy definitions, the very kind that, if my students used them, would immediately raise my blood pressure.
Instead, I’ve argued that to make sure our students can navigate the world of emerging social technologies, we must define the actions of so-called social media through the intentions of the philosophical and practical developers of the Internet. When I speak of social media during the bait portion of the bait-and-switch, I say this:
We’re in the human evolutionary stage of the modern, digital networked world. Humans now have the ability to do the thing that Bush and Licklider pushed for years ago: to use technology to think better.
Today, we can:
- Use digital tools
- Publish and share our creations
- Store and archive information
- Search and retrieve that information
- Use a real-time network
- Aggregate and visualize information and data
- Make better decisions
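As a concrete illustration of a few items on that list, here is a minimal, self-contained sketch that stores a handful of published notes, searches them, and aggregates the results. The notes, tags and table layout are invented for illustration, and the code uses only Python’s standard library.

```python
import sqlite3
from collections import Counter

# A toy archive: publish (store) a few notes, then search and aggregate them.
# The notes, tags and schema are invented purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE notes (author TEXT, tag TEXT, body TEXT)")
conn.executemany(
    "INSERT INTO notes VALUES (?, ?, ?)",
    [
        ("student_a", "memex",     "Bush imagined a desk that stores and links documents."),
        ("student_b", "symbiosis", "Licklider wanted computers to surface problems we missed."),
        ("student_a", "memex",     "Search should leave more time for thinking and less for digging."),
    ],
)

# Search and retrieve: every stored note that mentions a keyword.
keyword = "thinking"
for author, body in conn.execute(
    "SELECT author, body FROM notes WHERE body LIKE ?", (f"%{keyword}%",)
):
    print(author, "->", body)

# Aggregate: how many notes carry each tag, a crude stand-in for visualization.
tag_counts = Counter(tag for (tag,) in conn.execute("SELECT tag FROM notes"))
print(tag_counts.most_common())
```

Nothing in that sketch is sophisticated, which is the point: publishing, storing, searching and aggregating are what the network has done from the beginning; the tools simply keep getting easier to reach.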
During the switch portion of my class, I explain to my students that they will not be learning tools. They will be learning how to find all the tools available, figure out which tools to use, when to use them and why to use them. (The best example of why it’s important to teach the philosophy of tools and not the tools themselves: Brian Solis’ “Exploring the Twitterverse” chart. I could teach an entire semester just on the ways we can use Twitter.)
None of this is new. What is new is that humans can now easily use these tools. Without the foundational knowledge that this is what the network has always done, they believe they have discovered something.
But the term “social media” suggests an understanding that people don’t innately have. It suggests that familiarity with the tools gives them a functional and foundational understanding of the ways in which these tools can be deployed for things that don’t yet exist.
This is how the network was designed. This is how the network and the software technologies that sit on top of the Internet have always worked.
Twitter and Facebook will eventually go away, just as Friendster and MySpace stepped aside before them. The name we give the software is irrelevant. What matters is learning what we can and should be doing with these tools in order to think better, as Bush and Licklider imagined.
Interesting Texts on How and Why Tools Work:
- Tools for Thought, by Howard Rheingold
- Mediactive, by Dan Gillmor
- Where Wizards Stay Up Late, by Katie Hafner and Matthew Lyon
Brad King is an assistant professor of Journalism and an Emerging Media Fellow at Ball State University. He is also on the advisory boards for South by Southwest Interactive and Carnegie Mellon’s ETC Press.