Marketing campaigns for digital technology have, since the 1990s at least, had the benefit of word-of-mouth spread as well as meta-narratives of post-industrial societies needing, for no clear reason, to invent and consume new technologies. Phones had to get better; cars had to get better; TVs had to get better. They could, so they should, so they did. Call it the Law of Jurassic Park, or Ian Malcolm's warning. The fictional character in that book/film critiques the capitalist who revives dinosaurs: "Your scientists were so preoccupied with whether or not they could that they didn't stop to think if they should."
There is a big wide road to walk down concerning all the times pop culture made smart comments on technology culture. We won't go there today, not much.
This sense of inevitable 'progress' thickly coated the arrival of social media sites starting in the late 90s and really taking off in the 2000s. Like most new consumer tech, they were solutions in search of a problem. And like most new consumer tech, they solved a different problem from the one they said they did.
I don't like to name companies in these essays for the simple reason that it dates the entry as well as promotes the company. But Facebook can be named because it's a perfect case study of a useless consumer tech sold as a solution to things that weren't problems, and it 'solved' these problems so well that it damaged the social fabric in remarkable ways.
Facebook said it was the solution to losing touch with people, i.e. that in a pre-digital era it was common to lose touch with others and easy to miss moments of other people's lives. Speaking as someone who lived in the pre-digital era, this was not a problem. When you lost touch with old friends, you made new friends. When you moved towns, you met new people. When you broke up with partners, you found new partners. Human sociality is, at its best, generative and expansive, allowing us to shift between social groups and thus social selves. On top of that, people change as they grow and age, and need different people at different stages of their lives to teach them different things about what it means to be human.
Of course, the vast majority of humanity still rarely leaves the town or area of their birth, and stays in touch with the same people they've known their whole life. But this consistency only furthers my point that Facebook was solving nobody's problems: the mobile, who wanted to meet new people, could and would; the stationary, happy where they were, didn't need new connections.
The browsing internet of the late 90s, a series of random but open chat rooms and hobbyist corners, offered every lesson people have been re-learning since the 2010s. Online, people are rude, weird, fringe, freakish, oversharing, and generous. Prior to Facebook, people were connected when they wanted to be. It wasn't that things were better. It's that they were closer to a manageable social fabric. We 'went online', like it was a trip to the museum or carnival, and we understood that our actions and the actions of others on there were constructed performances of versions of ourselves. Like a childhood game of tag or hide-and-seek, there was a distinct flow and set of rules to visiting chatrooms.
Before I go on too long romanticizing an era I only half-remember, I'll stop and say the early Internet was mostly boring, weird, and ugly. What made it superior to Facebook? The simple fact that it did not offer itself as a solution to nonexistent problems. The real problem Facebook was solving was that of advertisers needing new methods to micro-target persuadable consumers. Its legacy is not one of greater social connectivity but greater economic connectivity. It is a direct contributor to this era of delusional 'content creators', users who have been convinced by paltry earnings and fame that they can take part in the wealth extraction caused by a massive expansion of advertising and consumption. The story of our time, say from the 1950s until now, is overconsumption. Facebook, among others, has facilitated the hyper-advertising media culture that looks normal by now.
In the 2000s, however, Facebook was sold as a solution to the problem of staying in touch with friends and family. In 2006-08, when I used the site the most, it was primarily populated by college students not really 'staying in touch' so much as performing hyper-social versions of themselves. It was an extension of the house and dorm parties we went to. MySpace, another company worth naming and shaming, was an extension of the local music scene into a ratty, clunky online space. Just like today, social media are mere extensions of functioning social spheres. What they add takes away: adverts to pummel the eyes and divert the attention. We came to talk; we're made to shop. It is the revenge of the shopping mall. People had successfully conquered the ridiculousness of mall technology by the 1980s, using malls not to shop but to socialize, kill time, loiter. I remember with still-bubbling resentment the anti-social policies of stores in my teen years, telling young should-be shoppers to buy something or get out, to not lean on their windows, to not be too loud in the aisles.
At some point in USA culture, we all agreed and accepted that the best things about malls came from other people. Not that people stopped shopping, but that the locus of activity shifted from being only a consumer to being something more like a human. Food courts are not popular for their 'food' - they're popular because people like sitting around looking at each other as they pretend to eat.
Social media fought hard to pretend it was the cool version of a mall rather than the capitalist one, particularly in its early years. It stole data, true, and processed this into something advertisers would want to buy, definitely, and that kept the digital malls free and open to users. I wager that in the next 50 years laws will shift to recognize the economic value, and thus property, of people's personal data, and under laws written in the 2040s or 2060s, what Facebook did in the 2000s will be considered illegal. They extracted and stole what is people's private property - their data. If this idea rubs some the wrong way, or feels immediately off and bizarre, think for a moment of the legal systems of the 1920s and ask if you'd like to live under those systems. Or consider this: if these new technologies truly are 'revolutionary' and have changed everything we know about life, why should our laws merely address the old world that no longer exists?
Enough about that.
Facebook's true solutions lay in serving advertisers ways of increasing overconsumption. They've done exceedingly well in that regard. They are pathetic in everything else they say they attempt: spreading accurate news, creating positive community, protecting private data and mental health. This is because they are not really trying to do these things. Like shopping malls, they're interested in closing sales, not creating third spaces for positive sociocultural interaction or the manifestation of contemporary public squares or agoras for citizens of an open society to simply "be" with each other in a non-coercive, non-transactional role. This is not their business plan; this does not help their bottom line. And of course, like their hyper-wealthy, detached corporate peers, they do not have to be. Facebook, Google, and Twitter do not have to save the world. They are economic instruments, not cultural progenitors, much less democratic bodies. They can operate anywhere in the world because they have no other values save profit. That is the nature of corporations, and that cannot change.
Following that nature, Facebook inadvertently found a problem to solve: that of friendship. The word - "friend" - was semantically bludgeoned almost to death by the platform. We could "friend" others, be "friended" by them, and find our "friends" on the platform. We could be mere "Facebook friends" with someone we barely knew. I don't think sexual relationships got any more or less casual as a result of the dilution and confusion of the word "friend" wrought by platforms stealing and centering it, but it still makes me smile to think of all those weird slippages: "friends with benefits", "fuck buddies", "just friends", "special friends". Etymologically the word has probably always been slippery and Facebook capitalized on its lubricated capacity.
By institutionalizing its slipperiness, though, I think Facebook did cause a very big problem: real friends on Facebook were visually and functionally equated with fake ones. Casual, childhood, sexual, family, and school friends all found a common and cheaply maintained home on one's Facebook "page". Context collapse is a good old term for this that still rings nicely. Facebook helped cause this problem so that it could appear to be the solution for it: now that everyone you knew was lumped into the same, foggy category, Facebook could help you reconstruct your social distances and hierarchies through doling out 'likes', 'comments', and mentions. Whereas before you had to privately sort and rank your social connections (exhausting!), Facebook and other icky platforms like it now offer the tools and algorithmic nudges to tell you exactly who you like and how much.
Facebook thus became the solution to the normal friction of social relationships, particularly "friendship". In not liking someone's post, you could now offend them. In reposting something a stranger said, you could now "friend" them. In consuming all that your ex- or potential-lover has to offer online, you could now "stalk" them. A whole new - but old - slew of antisocial behaviors were now possible to express digitally.
Thankfully, in about two generations people have done to Facebook what they did to shopping malls: accurately relocated the trash where it belongs. That isn't to say Facebook has stopped causing problems. Like rundown malls taking up good land or causing traffic, Facebook still takes up too much space for too little gain. But it is better understood at a cultural level now than it was when it started. No one in their right mind imagines they could become an influencer off Facebook. There are some brave souls, like independent stores leasing space in shopping malls. These are fun, very human people bucking the trend but proving the rule by their exception.
Coming now to the second half of the title (at last), and either the second half or the conclusion of this essay (I'm not sure yet), let's shift our historical argument on shopping malls and Facebook to the new negligent and evil corporation trying very hard to expand overconsumption: OpenAI.
The thread is simple: malls, social media, and "AI" companies are primarily interested in expanding the part of ourselves we call "consumer" into the whole self. It is not a weird conspiracy to say so; the more people consume, the more money these companies make. It is pleasantly simple in its mathematics. We spend too much time imagining corporations are social, cultural, or political entities with views that can be altered or changed. At no time does a corporation change its interest. It has one interest: profit. That is the reason corporations exist. If it disappears, so does the corporation.
OpenAI is a solution in search of a problem. Or maybe we could say its primary product, ChatGPT, is a solution in search of a problem. Like Facebook, it will find that problem by creating it. The problem it is creating is the degradation and destruction of the value of human labor. It is making people's jobs into a joke by doing them faster, at lower cost, and at acceptable levels of quality. In diminishing the value of human work, OpenAI will oversee drops in real wages and salaries, as the upper class watches their economy - the stock market - grow and rise. We know this story already; you can read about it in the news.
Trying to find a new way to talk about what many people are talking about is hard, but let's keep at it. Before things get too uncomfortable, I want to suggest that chatbots will not be the end of human jobs. I don't agree with boosters who say it will improve everything across the board; rather, I think it will recall earlier eras of mass skill-shift, similar to when people un-learned farming and learned manufacturing. Ugly, painful, riven with tension and class conflict, maybe it'll end as the industrial era did, with a few decades of real government action (i.e., the miserable era starting in the 1820s and ending in a good-for-most-people 1940s-70s socialism in the USA). I don't know - maybe from the 2000s until the 2060s we'll have to ride rich people's economic idiocies out, until unprecedented conflict leads to a couple decades of swift repair and accurate wealth distribution (i.e., the most wealth to the most people).
OpenAI is operating with the same greasy cynicism as the Facebook of the early 2000s, selling people on a solution to a problem that doesn't exist. For Facebook, it was trying to convince people they didn't really have friends unless they "friended" them. For OpenAI, it's trying to tell people their job is so bad that they should not do it. Many jobs are bad, but white collar office work is generally not. Writing emails, sitting through meetings, and filling out forms is not bad. It is work. As many a cook behind the line or clerk behind the desk has quipped, if it was fun they wouldn't pay you to do it.
OpenAI and its ilk are capitalizing on a cultural trend, in the USA and elsewhere, of complaining about mundanity. It is tedious, yes, to drive in traffic to a job you don't like to pay for things you don't need. The monologues of Fight Club are apparently doomed to be chopped up and infantilized by aspiring content creators for the foreseeable future.
Facebook 'won' the concept of 'friend'. OpenAI will 'win' the concept of 'work'. And just as Facebook wrought untold damage to social fabrics everywhere it infiltrated, OpenAI will wreak havoc on workplaces, productivity, and creativity. At its root, OpenAI trivializes people's basic job functions. What happens to the workers who see their jobs done as though by magic by a robot that doesn't get tired? Think for a moment on how it felt to see your social circles illustrated and quantified on Facebook. Previously unsystematized, unstructured practices that had become as natural as breathing were suddenly thrust into numbered arrangements. Everything 'natural' about friendships became a tangible and visible performance. Do I like this post? Or that one? How long between likes? Did they like my post? Why not?
As mentioned already, people have gradually 'gotten over' Facebook. Or, more accurately, I think that culture and politics have 'right-sized' social media, called it what it is (an advert database that pretends to be more), and slotted it into its correct hole: a new version of old TV, rife with ads and crap, where sometimes you see people you know in between the enjoyable trash. Laws are also catching up: protections for data and minors, and a new narrative addressing genuine mental health issues caused by these apps. Ten years ago, much of this was unimaginable. Can we imagine ten years from now?
OpenAI has at this moment the same terrifying chokehold on our collective imaginations. We couldn't imagine the limits of Facebook's reach and influence in the 2010s. Now we can name those limits pretty quickly: fake news, crazy relatives, fringe politics, old memes, constant ads, irritating talentless creators. In the 1990s people mused about television taking over the world. Maybe it has; maybe Facebook and its deformed offspring are doing just that. But the 1990s didn't end with Y2K, as so many feared (or wished). Instead the human race just kind of...kept going, however heavily mediated and medicated. If it were a competition to see which era or generation is most cynical, I'd set any doomscrolling memes of the 2020s against the punk rock nastiness of the 90s. Mark Fisher versus Howard Zinn - no winner here, but let's not imagine doomers were only born after 2000.
So, to the issue of OpenAI: we are not doomed. But the extent of the damage will depend on how rapidly we can figure out the problems actually created and solved by this technology. We recovered from Facebook in part thanks to its own flagrant destruction of social norms and safeties: Cambridge Analytica, QAnon, national populism, race supremacist movements. These are Facebook's legacy, along with its pressure to consume. When it comes time for the last Facebook office to close, the last employee to leave, the last account to be deleted, and perhaps for Mark Zuckerberg to pass away in prison or in bed, history will be free to remember it as it was, not as it was advertised. This is a corporation of violence for profit, destruction for dominance, gluttony and greed. It fell apart, though we are stuck with its corpse.
OpenAI has in recent days joined the war efforts of governments. This is the first of what will be many, I think, reveals of what the company actually is rather than what it says it is. In ten years, it's possible the chatbots will all be shut down, having been replaced by the superior option for users to download and privately own offline, suitable replacements for the networked, extractive and ad-ridden versions. Instead, OpenAI might be an out-and-out military contractor or integrated into government forces themselves, as a kind of meta-department arranging things according to overriding protocols. That it will jumble up, overproduce and confuse data, and glitch out to massive, damaging effect is, I believe, inevitable. Like all great technologies, its complexity is its honey and salt, its magic and its misery. Both taste weirdly good: it's fun to get the answers we want, but also pretty fun to see the thing break on us.
Imagine for a second an ever-updating, constantly-patched generative AI "agent" being asked to process the tax returns of an entire US state. Or better yet, count and process votes for an election. Would we trust it to do these things? Do you trust it to answer your every whim and question right now, as well as do half or more of your job for you?
How much did we trust Facebook in the 2000s, and then the 2010s? Which events helped you to finally question it? In those questionings lie the seeds of real understanding. Google is not a search engine for users; it's a search engine for advertisers. Every other online platform follows the same logic, because Google clearly made a lot of money doing so. That the entire social web is just an extension and updating of advertising shouldn't be scary, it should be a relief. We need not locate our futures in this mess of corporations spying on us to sell us more t-shirts. Our future lies elsewhere, as usual unseen and complex. In the 2000s it could seem sometimes that all of life would soon be online. Then, by the late 2010s, people locked up their accounts, stopped oversharing, followed influencers instead of their friends, and spread their networks across and not within platforms, adapting to the patchwork of platforms by making patchworks of accounts. We are more and not less media literate at a base level. Each generation comes at smartphones with an instinct to adapt and integrate. My generation was shell-shocked by social media; the new one is shell-shocked by AI; the next one will have to handle whatever's new. The point is we always handle it; human beings somehow keep going on, despite the best efforts of self-destructive elites.
How long will the OpenAI destruction last? If we consider Facebook's rampage to have lasted from about 2010 to 2020, when it caused the most damage to the most people, then OpenAI's wrath might run from 2020 to 2030. These are silly numbers, of course. I'm as guilty as any armchair (or expert) historian of the crime of laying onto events a meta-narrative with clean start and end dates, waving one's hands like a god while being just another stupid human. But clean dates have optimism in them. They say that something now pressing down on us must, at some point, end. This at least is true. One day, like malls and Facebook, OpenAI will shut down, and this era that feels so important and definitive will be over. Ruins of older eras and civilizations are amazing things, and not usually what those societies would have liked to be remembered for.
Oh, to time travel and see what became of a society convinced first that friendships were a problem, and then that work itself was a problem, and to have both problem-solutions turn out to be fabricated by those who stood to profit from them. Or maybe we really are living in the world the elite tech CEOs tell us about, wherein technology will turn everything into a problem that can be solved. In their narrow but loud vision, we will gradually live in a world apparently free of problems. With no problems, there can be no solutions. Nothing will be broken, and the concept of it being 'fixed' will cease to be. What a wondrous world that would be! Friends, work, the body itself 'solved' by technology. Mere brains in glass jars watching YouTube, indeed.
Unfortunately, the world is shaped not by corporate promises but by physical law. We'll do no better than the Aztecs, Egyptians, and Greeks in enforcing our visions of immortality and universal rule. I'm not sure if data centers will make attractive ruins, though perhaps we cannot predict or choose what we are remembered for.
Gradually the problems solved by new tech will be fixed by people. Cars were fixed by city planning; radio was fixed by rock and roll. Remnants of the breaches tech profited from remain: parking lots and hollow downtowns, demagogic talk radio. Most people most of the time survive despite and not because of new technology. To get clean water, we ripped up the earth; to get nice roofs, we burned down the forests. We think we imitate nature with our technology; hopefully we're getting closer. Mostly I think we settle for ersatz versions of it and call it progress because our version costs money.