Who’s Processing Whom?

Digital Commons, Digital Blinders, and a Fraught Social Future

This blog post is really too long, I admit it, at over 8,500 words… but it’s based on nine different books I read in the past months that all spoke to each other, and it all dovetails rather strongly with my own personal history, which is where this post begins. At the end I list the nine books with links, and I encourage you to read all of them!

Random graffiti on Harrison Street in San Francisco, summer 2019.


“Are You Doing the Processing, or Are You Being Processed?” —Processed World #1, April 1981

This … signals the metamorphosis of the digital infrastructure from a thing we have to a thing that has us. (Zuboff, p. 204)

…“smart” is a euphemism for rendition: intelligence that is designed to render some tiny corner of lived experience as behavioral data. Each smart object is a kind of marionette; for all its “smartness,” it remains a hapless puppet dancing to the puppet master’s hidden economic imperatives. Products, services, and applications march to the drumbeat of inevitabilism toward the promise of surveillance revenues hacked from the still-wild spaces that we call “my reality,” “my home,” “my life,” and “my body.” (Zuboff, p. 238)

Almost four decades ago I was working at an “information desk” at 4th and Mission in San Francisco in the lobby of the Downtown Community College Center (it later became part of City College of SF). From there I was able to take a class learning a new skill: word processing! Upstairs we were taught to use magnetic cards in the shape of old IBM punch cards to record our typing. When we wanted to edit or fix errors on our document we reloaded the magnetic card next to the IBM Selectric typewriter, and by counting down lines and across words and letters, we arrived at what we hoped was the spot where the change was to be made, and inserted it. When we later printed the document again we could see if we were correct.

From this training, I was suddenly eligible to work for any of the many multinationals in downtown San Francisco that were hungrily seeking newly skilled modern office workers, and the starting wages were twice what I’d been getting at the info desk, $12 vs. $6 in 1980. Before long I was on a long-term “temp” job at Bank of America at 1455 Market (a building that, weirdly, is now home to Uber’s HQ and San Francisco’s Department of the Environment), where I worked at a glowing green CRT terminal connected to a DEC minicomputer, on a team of word processors producing manuals to train bank tellers in Florida to use BofA’s computer systems (this was still several years before interstate banking was deregulated, and in 1980 BofA did not yet formally own any banks in Florida).

When that gig ended, I hopped around among Arthur Andersen accountants, the T. Rowe Price brokerage, and other forgettable corporate offices. When my friends in Berkeley asked if I would be interested in working at their hippie computer collective (Community Memory), I said yes, provided it was a four-day week. I became the third employee of their for-profit marketing company, Pacific Software, and spent the next year and a half producing marketing literature for their two state-of-the-art software programs: a relational database system and a packet-switching program that facilitated communications across the early internet. I typed and printed an endless procession of nondisclosure agreements for everyone from defense contractors to banks and government agencies to other software companies, all eager to see the best software computer-loving hippies could make.

Community Memory had started in the mid-1970s as an effort to create a public-access computer network, with public terminals in places like libraries, community centers, and even Leopold’s Records. The assumption back then was that the government, the military, and IBM would never allow computers to become available to the general public, and there was little sense that a vast publicly accessible Internet could grow based on telecommunications hardware and the World Wide Web (invented in the early 1990s). Hobbyists and tinkerers around the 1970s Homebrew Computer Club (which later begat Apple and Microsoft, among many others) were all trying to invent small, accessible machines that anyone could build and play with, without really knowing what they would be used for. This was also the post-Watergate era, when Senate hearings had revealed vast spying by government agencies on citizens, sparking outrage and promises of reform. Behind the scenes, the Pentagon’s efforts to advance cybernetics, computing, and networks were proceeding rapidly, and the Arpanet connected a number of prestigious universities and research facilities—students at those universities were among the early experimenters, including the Berkeley-based Community Memory group.

By the time I became the secretary for its marketing arm, the collective had developed two very sophisticated pieces of software for their planned public network, which were also at the cutting edge of the nascent commercial computer business. When I went to work there, I had already been publishing Processed World magazine, as part of a different collective, for almost a year.

In the pages of Processed World we gave voice to our expectations that as the newly automating office developed, proletarianized white-collar workers would band together to resist, subvert, and sabotage the new organization of work, and hopefully find the collective power to short-circuit capitalism at the point of circulation. My early word processing skills helped me learn phototypesetting, and through serendipitous events we acquired a phototypesetting machine that ended up in our house. This was the foundation on which Processed World could publish, and a few years later it became the foundation of the self-employed small-business life I turned to after Community Memory, sparing me years of languishing in the dungeons of corporate America.

At Pacific Software I had an early experience of the now-familiar saga of the tech startup. All of us were given shares in the company, but one day in 1981 came the Monday morning massacre: after rapid expansion to over 25 employees and great expectations of future profitability, the company ran out of money and backers, and we went from 25 to 5 employees in one brutal wave of firings. Our stock was worthless, of course. I was supposed to remain and do the work of five of the fired workers. That sounded pretty bleak, so I waited until the next day to announce my resignation, with an offer: I’d stay for three weeks to train three people to do the work they expected of me, provided they laid me off and did not contest my unemployment benefits claim at the end of the three weeks. That bargain was struck, and I never had a “real” job again, though self-employed small-business life is certainly full of its own compromises and dissatisfactions. But I always controlled my own time, and from then on any time-saving efficiencies resulting from my skills and personality benefited me directly, without harassment from bosses or coworkers who expected me to “look busy” when I finished tasks early.

During those short years as a temporary corporate nomad (I even worked briefly in the Boston area for the big defense contractor Bolt, Beranek and Newman) I honed my bad attitude towards the stupidity of modern work. The activities that I carried out on my various jobs were nearly always pointless. It was hard to fathom how these big-name corporations could be so completely inefficient and redundant at every turn. The obsession with behavior, attitude, appearances, etc., overwhelmed any concern for the purpose of the work, or for carrying it out in a timely manner. Bosses were always dumber than the temps, and were usually sad individuals with very limited horizons, for whom bossing the temps was a brief high point where they had some authority and power. The pettiness of their reigns was the most prominent characteristic of the office environment (captured well years later in TV’s The Office). For the most part, though, their efforts to assert control and to establish their credibility as small-time tyrants rarely succeeded. It was very easy to hide both on and off the job in those days.

Since that confusing period at the dawn of neoliberalism, things have definitely gotten much worse. Processed World weathered the 1980s only to finally run out of steam around 1994 (with a couple of surprising returns to form for two issues in 2001 and 2005, after which the effort ended for good—I recently wrote a brief political history of the magazine here). During the 32 issues we published steadily from 1981 to 1994 we covered every angle we could on the reorganization of the modern workplace, as well as the occasional eruptions of dissent and organized and disorganized revolt in that period. We knew that workplace surveillance was growing with keystroke counting and automated systems of observation. We knew that government surveillance was ongoing, tracking movements against nuclear war, nuclear power, and dirty wars in Central America and the Middle East, as well as ongoing domestic policing. This earlier surveillance system depended on public and private contractors who were spying on political activists and groups. But the gross incompetence of the average corporation informed our sense of what was certainly a parallel incompetence in government and private surveillance efforts. We didn’t really fear repression in that era so much as find it ridiculous.

Sandcastle festival at the Community Park in Parksville, British Columbia… a Russian, Dimitry Klimenko, and an American, Sue McGrew, had some fun together making this.
Who took Lenin’s head on the day after the festival?

By the time the 1990s began, and the Cold War collapsed with the demise of the Soviet Union and the Warsaw Pact, it was clear that the U.S. was becoming giddily belligerent on the world stage. The first Gulf War gave the warmongers and their cheerleaders on CNN and network TV a brief sense that they had finally overcome the Vietnam Syndrome’s public aversion to war. The rise of Clinton and Blair as pseudo-progressive avatars of neoliberal hegemony reinforced the trajectory launched by Reagan and Thatcher towards hyper-individualism, a breakdown in social solidarity, and an ever more frayed sense of community and connection among atomized people who no longer knew their neighbors or coworkers very well, if at all. The enormous disruptions in formerly stable economic lives resulting from the rapid globalization of the 1990s and 2000s, with its attendant race to the bottom that predictably emerged as formerly unionized work was shipped out to low-wage regions like China and Mexico, left a much more unequal society in its wake.

Jarett Kobek’s scathing novel I Hate the Internet captures perfectly certain moments and sensibilities in San Francisco’s relationship with the tech booms, and is well worth a read, especially if you want to revisit our growing contempt for the world that was engulfing us as the 20th century came to an end.

1996 was defined by being the year during which the Internet economy exploded into the collective consciousness.
San Francisco had spent much of the Twentieth Century in decline, which meant that it was a bad place for people who like doing business but a wonderful place for people who were terrible at making money.
San Francisco had been defined by the culture of people who were terrible at making money. It had become a haven for the misfits of America, most of whom were living in the city’s fabulous old houses.
When the Internet economy exploded into the collective consciousness, these people proved that resisting social change was the only thing at which they were less adept than earning money. (p. 27)

By the mid-1990s a so-called New Economy based on the Internet was becoming visible. A Gold Rush mentality quickly took over, with a frenzy of frothing investments in vaporware and cyber-fantasies of all sorts. A few got very, very rich before the storied bust of 2001. Public policy further exacerbated the concentration of wealth and the rise of dire poverty. The “miracle” of computer riches hovered over the Bay Area even while the vast majority of the population struggled on in the same jobs with the same wages, if they had work at all. But the inflation of housing costs, thanks to the tidal wave of new wealth that poured into real estate, radically disrupted the daily lives of millions.

During the early years of the century few of us knew that a whole new model of wealth accumulation was being developed behind the shiny noise of the New Economy. Surveillance capitalism was born in the advertising trenches, primarily at Google, but soon expanded upon by the likes of Facebook, Microsoft, Twitter, and other behemoths. As Shoshana Zuboff aptly characterizes it:

Google had discovered a way to translate its nonmarket interactions with users into surplus raw material for the fabrication of products aimed at genuine market transactions with its real customers: advertisers. The translation of behavioral surplus from outside to inside the market finally enabled Google to convert investment into revenue. The corporation thus created out of thin air and at zero marginal cost an asset class of vital raw materials derived from users’ nonmarket online behavior… We are no longer the subjects of value realization. Nor are we, as some have insisted, the “product” of Google’s sales. Instead, we are the objects from which raw materials are extracted and expropriated for Google’s prediction factories. Predictions about our behavior are Google’s products, and they are sold to its actual customers but not to us. (p. 93-94)

Yasha Levine, in his excellent analysis of the long symbiotic relationship between the Pentagon, war-making, surveillance, and the Internet, comes to the same conclusion:

For many Internet companies, including Google and Facebook, surveillance is the business model. It is the base on which their corporate and economic power rests. Disentangle surveillance and profit, and these companies would collapse. (p. 268)

And by the way, if you use Tor as your browser, or Signal as your text message app, in the hopes of escaping the surveillance dragnet, Levine goes deep into the funding of Tor to show that it’s actually a Pentagon-sponsored project, and one that has proven incapable of hiding criminals from law enforcement (to say nothing of political dissidents). Signal, while probably created with the best of intentions, cannot block the routine surveillance built into your “smart” phone, which allows companies and spooks to read your communications before you send them via Signal.

As for the sudden marketing ubiquity of so-called “smart” technologies, Evgeny Morozov has given us a handy mnemonic: “Surveillance Marketed As Revolutionary Technology.” The spread of the Internet of Things, connected devices such as refrigerators, toasters, cars, doorbells, etc., is sold as convenience to the user, but all of these are actually devices for the collection of data. Here’s Zuboff again:

… activities that appear to be varied and even scattershot across a random selection of industries and projects are actually all the same activity guided by the same aim: behavioral surplus capture. Each is a slightly different configuration of hardware, software, algorithms, sensors, and connectivity designed to mimic a car, shirt, cell phone, book, video, robot, chip, drone, camera, cornea, tree, television, watch, nanobot, intestinal flora, or any online service, but they all share the same purpose: behavioral surplus capture. (p. 129)

In Silicon City, Cary McClelland interviews a range of characters caught in the web of this new economic imperative, though few of them understand the driving force as clearly as Zuboff and Levine do. But they’re living it and sometimes they catch glimmers of it in spite of their commitment to seeing the contemporary techno-frenzy through rose-colored glasses.

[Saul Griffith, founder of OtherLab] … a whole bunch of libertarians want to think that it’s their genius. No. What they’ve done is a really clever socioeconomic hack. In fact, it’s not even that clever. It’s cynical. They are exploiting loopholes and taking infrastructure for granted.
These are robber barons in the traditional sense.
Because it’s this weird, extractive industry of eyeballs and attention. I do think Google tries to “do no evil” and all the rest, but merely trying not to do evil is not enough. So far they’ve really failed at doing anything except advertising… I know very few assholes who work at Google. But the collective Google … is an asshole. (p. 184)

Echoing this point, Corey Pein spends most of his book hanging around San Francisco and Silicon Valley trying to figure out how people get their startups funded, how some of them become terribly rich in a short time, and why so many of them are such awful people. His blistering Live Work Work Work Die helped me reconnect to the daily lives of people in the “processed world,” something I had remained a few steps removed from for the past couple of decades. As a self-employed historian, writer, desktop publisher, tour guide, etc., I meet people all the time, but I do not work in the corporate pixel mines directly. Pein, along with Kobek’s novel, assured me that my sense of the Catch-22 absurdity and fundamental pointlessness of it all was as well grounded as ever. Here are Pein and Zuboff on the basic criminality of Silicon Valley and the uniquely awful role of Google:

Studying the example of all these successful tech companies helped me better understand the day-to-day work of startup founders and venture capitalists, stripped of jargon and euphemism: Entrepreneurs devised new ways to break the law, while investors spotted and bankrolled the most promising schemes. That was the secret of Silicon Valley. (Pein, p. 137)


To state all this in plain language, Google’s invention revealed new capabilities to infer and deduce the thoughts, feelings, intentions, and interests of individuals and groups with an automated architecture that operates as a one-way mirror irrespective of a person’s awareness, knowledge, and consent, thus enabling privileged secret access to behavioral data. (Zuboff, p. 80-81)


The troubling legacy of the crackpot eugenicist racism that defined Gold Rush California lives on in the biotech startups sprouting up in and around Silicon Valley. These companies promise a better world through applied genetics. The most famous … is Google’s 23andme, which sells mail-order genetic sequencing services to the general public…. In 2013, Google obtained a patent for “gamete donor selection based on genetic calculations”–a tool for selecting “allowable permutations” in “hypothetical offspring.” … In plain language, Google had patented a tool to create “designer babies.” (Pein, p. 261-262)

As San Francisco chokes, its streets clogged with thousands of Uber and Lyft vehicles (most of which drive in from hours away every day), dozens of luxury buses serving only specific tech workers, and now its sidewalks cluttered with e-bikes and e-scooters, all products of law-breaking disrupters, the failure of public governance is stark. From the misuse of public thoroughfares to the passive acceptance of the broken housing market as the arbiter of price and availability of a basic human right (shelter), San Francisco’s government has ducked when it should have aggressively asserted a public interest against the disrupters and fraudsters.

Jump e-bikes (now owned by Uber) and e-scooters clog the public bike racks at Battery and Market, right over the historic marker of the original shoreline…

In spite of all the extravagant claims of making the world a better place, the preponderance of wealth accumulated during this time has been built on online advertising and stock speculation. The destruction of local journalism and the demise of countless retail outlets, both direct products of this disruption, have met with no public response. Corey Pein shows how even the ostensible success is itself based on fraud:

…online advertising—the basis for the attention economy that fueled all speculative investment in digital media, from giants like Google on down to low-rent email marketers—was a racket… The mechanics of the fraud are complex and technical, but it boils down to this: Companies that place online ads think they are paying based on how many potential customers will see their messages, but the ads are ineffective in actually reaching consumers. Companies in fact frequently pay for ads that are “seen” only by automated computer programs known as bots, or by low-wage workers toiling in offshore “click farms.” (p. 114-115)

Revealing that the emperor has no clothes has done little to slow the runaway train that continues to line the pockets of the already rich. Harry Shearer, on his weekly Le Show radio hour, has repeatedly read aloud reports from the trade press about the lack of documentation provided by Facebook and Google on actual results from advertising. By never confirming any successes (and hiding what would seem to be the general failure of online advertising), they go on raking in the advertising dollars of companies too afraid and bewildered not to keep buying.

But there is a longer game here, and it’s the one Shoshana Zuboff has probably gone the furthest to unmask. The insatiable efforts to automate the collection of human activity as “behavioral surplus” create a raw-material pipeline to the growing engines of automated or artificial intelligence. This is a gross misnomer, of course, since it is not intelligence at all, but an enormous aggregation of data from which programmers have gotten better and better at extracting actionable information… and once they decide what they want to do, it becomes easy to automate the same analysis and actions going forward. According to Zuboff and most of the writers of the books I’m referencing in this essay, the larger goal of all this is to engineer society through behavior modification. Not in the old cartoonish Big Brother way, where a totalitarian government controls all thoughts, but in the early 21st-century way, where giant corporations are able to nudge, corral, and herd people toward predictable behaviors that produce profits for specific companies. That the government is along for the ride, and, as Ed Snowden’s revelations showed, is able to piggyback on the same private data gathering to build its own surveillance and policing operations, is just further reason to object.

B.F. Skinner Never Dies, He Keeps Getting Recycled

Given the invisibility of the data gathering and herding being carried out all the time by algorithmic manipulations, we have to ask ourselves whether we are already in a world where free will has been fatally eroded. If not, how can we tell? Several writers go back to behaviorist B.F. Skinner and his infamous “utopian” writings, which promoted a smoothly engineered future in which conflict and hardship were eliminated by a technocratic order that also eliminated dissent.

Jenny Odell has written a marvelous book, How to Do Nothing, which belies its provocative title by advocating for engaged subjectivities that reject the addictive pull of the attention economy. She spends a lot of time in gardens and bird-watching, activities that to software engineers probably look like “doing nothing” but that, to anyone who is fully alive and in the moment, can be the most enriching and soul-nourishing of activities. The personal subjectivity and free will required for any meaningful political process are at the heart of her concerns.

Politics necessarily exist between even two individuals with free will; any attempt to reduce politics to design (Peter Thiel’s “machinery of freedom”) is also an attempt to reduce people to machines or mechanical beings. So when Thiel writes of “new technologies that may create a new space for freedom,” I hear only an echo of Frazier [fictional founder of the commune in B.F. Skinner’s Walden Two]: “Their behavior is determined, yet they’re free.” (p. 52)

Yasha Levine’s careful reconstruction of the military history of the Internet brings him face to face with the engineers who designed some of the earliest components. What he discovered is that many of them were inspired by their fantasies of well-oiled and smoothly running machines to imagine that society too could emulate the perfect functioning of their ideal machinery, a place “where computers and people [are] integrated into a cohesive whole, managed and controlled to ensure security and prosperity.” At MIT and other early computer labs, these brilliant technicians imagined that engineering could solve the gnarly political conflicts that were erupting outside of their 1960s laboratories. A substantial subset of them were employed developing the technologies of counterinsurgency that were heavily implemented in Vietnam, Cambodia, Thailand, and Laos—an incredible real-life laboratory to try out their best efforts to engineer behavior. For others, the key was to avoid politics altogether and apply scientific principles based on mathematical proofs to solve social problems.

J.C.R. Licklider, Ithiel de Sola Pool, and other ARPA and military engineers were deploying cybernetic ideas to build computer networks, while dreaming of building prediction technology to run the world and manage political strife out of existence. The hippies were doing the same thing with their cybernetic communes. Except, where ARPA and the military were industrial and global, communes were small-scale, boutique. (p. 111)

Levine’s conflation of hippie communes and military research echoes Fred Turner’s important book From Counterculture to Cyberculture, and he, like Turner, sees Stewart Brand and his Whole Earth Catalog as a key influence bridging the two seemingly disparate cultural impulses. Levine gets even closer when he briefly examines the late-1970s failure of some communes, quoting a fleeing member as saying he felt like there was “spyware running in the background.” If that’s what it felt like in a 1970s cybernetic utopia gone bad, Levine notes, “it is also an accurate description of the world Google and the Internet have made today.”

Corey Pein finds an uncritical adulation for B.F. Skinner among the tech bros he’s surrounded with during his forays into recent start-up culture: “Here, however, Skinner was a hero. With creative applications of the latest research in neuroscience and behavior as well as evolutionary psychology, startup marketers could make users respond as predictably as lab rats.” (p. 104) Once you delve into this a bit further you realize that Adam Greenfield’s description of the philosophy underlying much of the tech world as “unreconstructed logical positivism” is eerily accurate, and that most of these undeveloped minds imagine that they can achieve perfect knowledge.

As the chief data scientist for a much-admired Silicon Valley education company told me, “Conditioning at scale is essential to the new science of massively engineered human behavior.” He believes that smartphones, wearable devices, and the larger assembly of always-on networked nodes allow his company to modify and manage a substantial swath of its users’ behaviors. As digital signals monitor and track a person’s daily activities, the company gradually masters the schedule of reinforcements—rewards, recognition, or praise that can reliably produce the specific user behaviors that the company selects for dominance:

“The goal of everything we do is to change people’s actual behavior at scale. We want to figure out the construction of changing a person’s behavior, and then we want to change how lots of people are making their day-to-day decisions. When people use our app, we can capture their behaviors and identify good and bad [ones]. Then we develop “treatments,” or “data pellets” that select good behaviors. We can test how actionable our cues are for them and how profitable certain behaviors are for us…”

In this phase of the prediction imperative, surveillance capitalists declare their right to modify others’ behavior for profit according to methods that bypass human awareness, individual decision rights, and the entire complex of self-regulatory processes that we summarize with terms such as autonomy and self-determination. (Zuboff, p. 296-298)

Much as I appreciated the precision of Zuboff’s analysis of surveillance capitalism, I did balk at parts of her argument. She is a retired Harvard business professor, which can’t help but influence her framing of the questions she takes on. She curiously asserts that “the struggle for power and control in society is no longer associated with the hidden facts of class and its relationship to production but rather by the hidden facts of automated engineered behavior modification.” (p. 309) One can hardly claim that class and production are no longer central to the production and reproduction of our daily lives! I think in her zeal for having “discovered” this whole new paradigm of capital accumulation, on top of a lifelong antipathy to Marxist political economics (even though she uses a lot of basic Marxian concepts to explain the new raw materials, products, and markets in terms of enclosures), she overstates the success of the surveillance capitalists. I doubt very much their ability to engineer reality on the scale she suggests they are attempting. This strikes me as the same old hubris of the nerds in white lab coats, so sure that they’ve planned for everything and that their best-laid plans will come to fruition. The questionable efficacy of online advertising, documented earlier in the work of Corey Pein, also undercuts the main argument of Zuboff and others loudly warning us about the behavior modification agenda behind surveillance capitalism. I believe it’s quite likely that Google and Facebook et al. are having some tangible success in driving people to buy some things, to shop in certain ways at certain times, etc. But that’s all within the realm of a highly consumerist, advertising-saturated daily life. For the millions on whom this system of thought control and behavior modification has little effect, either because they’re too skeptical or too broke to shop, the argument frays.

In its latest incarnation, behavioral modification comes to life as a global digital market architecture unfettered by geography, independent of constitutional constraints, and formally indifferent to the risks it poses to freedom, dignity, or the sustenance of the liberal order… This contrast is even more distressing in light of the fact that in the mid-twentieth century, the means of behavioral modification were aimed at individuals and groups who were construed as “them”: military enemies, prisoners, and other captives of walled disciplinary regimes. Today’s means of behavioral modification are aimed unabashedly at “us.” (Zuboff, p. 327)

She goes on to argue that this new system, which she dubs not Big Brother but Big Other, is the backbone of an unprecedented means of behavioral modification, and that the goal of this new “instrumentarian” power is to replace the 20th-century utopian impulse to engineer souls with the streamlined goal of engineering behavior. Her book is very long, and she takes the time it provides to meander through quite a lot of the characters and workplaces where this new instrumentarian power is taking shape. It has its roots not only, as Levine argues, in the Pentagon and the Vietnam War’s counterinsurgency efforts, but also, perhaps not surprisingly, in animal labs where scientists have been employing Skinnerian techniques to induce behavioral changes among all sorts of species for decades. One such character is the director of the Human Dynamics Lab at MIT’s Media Lab, a guy named Alex Pentland, who refers to his theory of society as “social physics.” Pentland explicitly claims he is “developing the social systems that would work along the same lines as the machine systems, using behavioral data flows to judge the ‘correctness’ of action patterns and to intervene when it is necessary to change ‘bad’ action to ‘correct’ action.” (p. 426)

Betraying a seriously underdeveloped sense of politics and philosophy typical of this new priesthood, Pentland thinks that from his lofty perch as a behavioral scientist he can jettison ideas rooted in the Enlightenment and in contemporary political economics:

Pentland insists that the “old” social categories of status, class, education, race, gender, and generation are obsolete, as irrelevant as the energy, food, and water systems that he wants to replace. Those categories describe society through the lens of history, power, and politics, but Pentland prefers “populations” to societies, “statistics” to meaning, and “computation” to law. He sees the “stratification of the population” coded not by race, income, occupation, or gender but rather by “behavior patterns” that produce “behavior subgroups” and a new “behavior demographics” that can predict disease, financial risk, consumer preferences, and political views with “between 5 and 10 times the accuracy” of the standard measures.

Zuboff was once, briefly, a student of B.F. Skinner in the late 1960s, and his spirit and writing hover above much of the horror she expresses about the people and ideas she encountered during her decade-long research into the surveillance capitalist paradigm. She knows well Skinner’s hatred of the independent thinker:

The surrender of the individual to manipulation by the planners clears the way for a safe and prosperous future built on the forfeit of freedom for knowledge. Skinner was unrelenting on this point:
What is being abolished is autonomous man—the inner man, the homunculus, the possessing demon, the man defended by the literatures of freedom and dignity. His abolition has been long overdue… He has been constructed from our ignorance, and as our understanding increases, the very stuff of which he is composed vanishes…. And it must do so if it is to prevent the abolition of the human species. To man qua man we readily say good riddance. Only by dispossessing him can we turn… from the inferred to the observed, from the miraculous to the natural, from the inaccessible to the manipulable. (Beyond Freedom, 200, 205) (p. 439)

Who Is Making Our Digital World?

With such a basically misanthropic view of humans, why have we let this logic become so pervasive? It’s a complicated story, and it goes beyond the tech industry into the structures of thought and political power created during decades of neoliberalism. As Zuboff argues in her book, “[There has been a] decades-long elaboration and implementation of the neoliberal economic paradigm: its political economics, its transformation of society, and especially its aim to reverse, subdue, impede, and even destroy the individual urge toward psychological self-determination and moral agency.” (p. 31) The rejection of the social in favor of the private is a fundamental pillar of this, and the extreme transience of workers in workplaces and of residents in neighborhoods has physically reinforced atomization and a sense that everyone is in it for themselves. Having a government that revels in its lack of empathy, and in its vengeful revanchism, further pushes people just trying to survive into an individualistic approach to what is evidently a dog-eat-dog world. This empathy-deprived daily life has become the air we breathe as we step over the destitute while averting our gaze. Gaining a foothold in the corporate world looks to many like a survival imperative as much as an effort to build a career or to do something useful or meaningful. The steady recruitment of new workers into the paradisiacal world of a new corporate life has repeatedly attracted apparently enthusiastic devotees.

In Silicon City, Cary McClelland interviews Saad Khan, the venture capitalist who went on to found Change.org and several other quasi-philanthropic online ventures:

… the new generation is also super important for the economy … because people coming out of school will work 24/7. It’s the engine on which a lot of the value is built, this incoming talent cycle. So it’s important that they find it desirable to go and join these massive companies. It’s all part of the ecosystem. The whole complex feels vertically integrated: someone finishes undergrad, they have a path, they go straight to a big company, graduate school, then a firm. These institutions keep them insulated from much of the world, and the next thing you know, they’re a senior person in their field. They have resources. They have influence. But they’ve never actually worked outside of a pretty sheltered context. Google is like Stanford. They recruit a lot from Stanford, but they also re-created a college campus. (p. 105)

Corey Pein finds himself sharing a bunk-bed dorm with a series of stereotypical tech bros. These dorms have proliferated in San Francisco over the past 20 years, usually set up in former residential hotels that had provided long-term, low-cost rooms to poor people. The displaced have logically ended up on the streets of the City, much to the horror of the endless waves of arriving tech workers and tourists. To keep your foothold in any given job, no matter how shitty or short-term, the key is always attitude. As Pein learns, “In this milieu, a certain tolerance for phoniness was a prerequisite. It was not enough to have the right skills, put in your time, and get the job done—you had to be fucking pumped about your job, or else it was time to find a new one.” (p. 69)

At the top of Silicon Valley sits the royalty, Apple’s Steve Jobs and Google’s Larry Page and Sergey Brin, who ostensibly set the style and tone for the rest… and remember, these are the people who are behind the fantasy of engineering our lives to the point of a frictionless rhythm that leaves behind politics, poverty, and pain. As Ellen Ullman eloquently argues in her Life in Code (briefly looked at in this essay), the people writing the code are shaping a lot of our choices, and sometimes even our very ideas about what’s possible. So many nerdy, lonely white men with rather limited social skills (to be generous), often obsessed with video games and science fiction, are not my first choice for who should have a dominant role in re-engineering everyday life! Nor do I think the coders and entrepreneurs who happened to get rich quick have any particular claim to wisdom or expertise when it comes to the profound questions we face today. Jarett Kobek, writing his novel in 2013, had no confusion about the kind of people they are:

The defining aspect of Steve Jobs was the marriage of his innate dickishness with gauzy Bay Area entitlement… Steve Jobs grew up reading The Whole Earth Catalog, a publication dedicated to the proposition that by spending your money in the right way, you could become the right kind of person. This was the mantra of the post-WWII economy, an unspoken ideology that cut across the social classes… It was a new kind of marketing, geared towards the insecure bourgeois aspirant. Steve Jobs sucked it in and shit it out … His promise was simple: … You can die ugly and unloved, or you can buy an overpriced computer or iPod and listen to early Bob Dylan … Your fundamental uncreativity will be masked by group membership. People will think you are interesting and beautiful and enlightened… Nothing says individuality like 500 million consumer electronics built by slaves. Welcome to Hell.


Larry Page was considered a good CEO because Google’s core business of advertising made so much money that no one noticed that Larry Page was bad at his job and operated off the principle that unexamined growth was a successful strategy for the future. Sergey Brin, the other co-founder… had rebranded himself as the head of Google X, Google’s nonsense experimental lab which developed faddish technologies like wearable computers and cars that could drive themselves and dogs that didn’t need to clean their genitals. These technologies would amount to nothing. They were banal visions of the future as imagined by the fans of Science Fiction. Google was an advertising company.

From “stars” like these, the culture that has descended on the Bay Area, and has had altogether too much influence globally, is easier to grasp. It is a fundamentally solipsistic worldview, where there is no elsewhere, no one else, just you in the moment in the infinitude of the hungry grasp… for power, for money, for love, for connection. Pein was bunked in Hacker Condo, an ideal environment, pre-ripened for behavior modification:

This was a world where scoring points on social media mattered more than getting to know the people you shared a bathroom with, where fulfillment in life was seen as the culmination of a simple, replicable process, like the instructions on the back of a box of macaroni and cheese. We were grown men who lived like captive gerbils, pressing one lever to make food appear and another for some fleeting entertainment—everything on demand. Airbnb and Foodpanda served the flesh, Netflix and Lifehacker nourished the soul. (p. 18)

The bullies were binge-drinking gym rats who, regardless of age, seemed perpetually twenty-five years old. Most were white, but Raj, a Desi, was Hacker Condo’s resident bully. By emulating the performative, coked-up machismo of their overlords in the finance sector, the bullies were determined to avoid the old stigma of the computer nerd as a simpering eunuch. (p. 27)

On Valencia Street, summer 2019.

Resistance is Life

Fortunately, there are many voices emerging to contest the inevitabilism of our predetermined future. Jenny Odell is committed to holding and nurturing her inner self, her free will, and her abiding dignity. She “stands apart” from this frenzy of marketing even while regularly plumbing the depths of social media and making daily journeys across the Internet. From this somewhat askance vantage point, she develops her critical analysis of the world as it is, and gives herself room to imagine a very different everyday life. She recognizes the steady manipulations that many people take for granted, the ways we accept being reduced to our saleability, to a personal brand:

…at its most successful, an algorithmic “honing in” would seem to incrementally entomb me as an ever-more stable image of what I like and why. It certainly makes sense from a business point of view. When the language of advertising and personal branding enjoins you to “be yourself,” what it really means is “be more yourself,” where “yourself” is a consistent and recognizable pattern of habits, desires, and drives that can be more easily advertised to and appropriated, like units of capital. (p. 137)

It’s been my hope since the early 1980s days of publishing Processed World that there was, and would be, a growing number of office workers, tech workers, etc., who see through the propaganda and endless boosterism to the banal truth of a desperately lonely work life that is blindly producing a collapsing global ecology. We enjoyed a robust correspondence in the 1980s with fellow dissidents who saw through the stupidity of our daily lives, and who recognized the ongoing barbarism that the United States was projecting under the rhetoric of freedom and liberty. But our ideas, and the community that shared our sensibility, didn’t really grow much. Perhaps our role was to carry a radical thread from the decades that preceded us to the decades that would follow. Perhaps this essay is part of that trajectory. But Zuboff cites the MIT Human Dynamics Lab director Alex Pentland for his disdainful claim that dissent is basically a “statistical blip.”

As Pentland sees it, the problem is not that “independent thought” is omitted from the [behaviorist] picture but rather that “internal, unobservable” thought processes are just friction that “will occasionally emerge to defeat our best social physics models.” Fortunately, the models are not really in danger because “the data tell us that deviations from our regular social patterns occur only a few percent of the time.” The autonomous individual is but a statistical blip, a slip of the pen that is easily overridden in the march toward confluent action and someone’s greater good. (Zuboff, p. 441)

This kind of self-congratulatory arrogance is perhaps typical of the tech culture. What remains invisible to it is largely undocumented. Even their own workforces are surprising them with acts of collective resistance. Some 20,000 Google employees walked out across the world in November 2018 to protest the sexist culture in the top ranks of the company (and especially its $90 million payoff to an executive who left under a cloud of sexual harassment charges). Other internal campaigns have challenged the corporation’s pursuit of contracts with the Pentagon and the Chinese government. Facebook has its own culture problems too, thanks to the ease with which its troves of private data have been used to manipulate elections and erode civic culture. The rising discontent in both tech giants’ workforces was covered in long features in recent issues of Wired (“Three Years of Misery Inside Silicon Valley’s Happiest Company” and “Fifteen Months of Fresh Hell Inside Facebook”). A New Yorker piece in August 2019, “Trouble in Paradise,” ignores the organizing percolating inside the companies to focus on a few wealthy escapees who are trying to prick the conscience of their former co-religionists in the industry. But even in this piece, the malaise with what has been wrought is palpable. The bubble of self-satisfaction has definitely been popped. But where will it go from here? Will the technically adept who keep these companies going ever develop an independent politics, or will they continue to slavishly follow the dictates of their corporate overlords? The early episodes of resistance are encouraging, but there is far to go. A more thorough overview of worker organizing trends in tech is provided by R.K. Upadhya in Notes From Below’s June 2019 special issue, Logout!

Jenny Odell reminds us that the problems are not reducible to worker organizing at tech companies. Perhaps some of the people in those workplaces are involved in deepening their own thinking, and sharpening their political aspirations. But Odell is spot on when she argues that our very survival depends on

an understanding of complexity, interrelationships, and nuance… Looking both to the troubling present and to successful actions in the past suggests that we will require new kinds of alliances and formations, which will further require periods both of solitude and of intense connection and communication. But how can we do that when our platforms for “connection” and expression detract from the attention to place and time that we need, simultaneously eroding the contexts that would allow new strategies to sharpen and flourish? (p. 166) … One of the main points I’ve tried to make in this book—about how thought and dialogue rely on physical time and space—means that the politics of technology are stubbornly entangled with the politics of public space and of the environment. This knot will only come loose if we start thinking not only about the effects of the attention economy, but also about the ways in which these effects play out across other fields of inequality. (p. 199)

Lizzie O’Shea’s excellent book Future Histories provides a different kind of history for the digital age, drawing on such luminaries as Ada Lovelace and Frantz Fanon to find “new” pasts for what seems to have erupted from nowhere with the rise of the Internet. The actual history of the Internet as detailed in Yasha Levine’s book is one important thread for making sense of this world, but O’Shea finds many different pasts to bring forward.

It is critically important to disorganize the society currently built around digital doppelgangers and segregated marketing. This is unlikely to happen of its own accord. “We do not expect this colonialism to commit suicide,” wrote [Frantz] Fanon. “It is altogether logical for it to defend itself fanatically.” In such circumstances, relying on the benevolence of state and capital to restructure digital society is a mistake. As Fanon concluded: “It is the colonial peoples who must liberate themselves from colonialist domination.” (O’Shea, p. 208)

Valencia Street, summer 2019.

O’Shea wants to find a path forward for the Internet, one that breaks with the capitalist paradigm that has shaped it thus far. This is not so far-fetched if you recall that it was one bureaucrat in the U.S. government who single-handedly privatized one of the biggest publicly financed inventions in history. As she argues, “It is the backbone of a huge amount of collaborative human effort and some of the most exciting developments in multiple fields of human endeavor. It should be returned to the domain of public ownership, through socialization and investment of public funds. It ought to be governed by rules agreed on transparently and in the interests of all users.” (p. 237) O’Shea, an Australian, turns to an indigenous perspective to frame a different way of seeing the Digital Commons of the future:

In the twenty-first century, we need to create a digital environment that is not owned by anyone or any entity but is preserved and protected for a shared future, based on a culture of mutual respect. We need to start thinking about the Internet as a landscape that creates the conditions in which we live, as a shared responsibility that we contribute to and draw from. (p. 223)


The digital commons therefore serves two purposes. First, it facilitates the effective distribution of certain goods. This is especially true when the marginal cost of producing that good is zero. Second, it facilitates production, specifically efficient, collaborative labor. A common body of information avoids duplication and creates economies of scale. It allows open source or peer-to-peer ways of working, the likes of which we have seen deployed so effectively in the free software movement. … Intellectual property laws hinder both these purposes of the digital commons from being realized…. In a world in which digital technology has great potential for helping us organize human activity efficiently and sustainably, the limits placed on it by capitalist modes of production are worth examining. (O’Shea, p. 250, 129)

Parksville, BC beach

This last point is made too by Aaron Bastani in his provocative book Fully Automated Luxury Communism. “There is more than enough technology for everyone on Earth to live healthy, happy, fulfilling lives. What stands in the way isn’t the inevitable scarcity of nature, but the artificial scarcity of market rationing and ensuring that everything, at all costs, is produced for profit.” (Bastani, p. 156) Bastani’s argument might find traction among some of the Silicon Valley workforce, as he adopts an accelerationist sensibility towards ideas that mostly percolate among the true believers of the Singularity. But unlike the self-satisfied libertarians and the delusional acolytes of Ray Kurzweil and his minions, Bastani argues that we are entering a world where most basic goods, especially energy, food, and labor, are rapidly moving towards low or no cost, i.e., “they want to be free.” And unlike the cheerleaders of capitalist utopias, he argues for a plan of Universal Basic Services, an expanded common wealth if you will, that would allow most people to reduce their dependency on outmoded “gigs” and extract themselves from the extractive economy based on private profit. For Bastani, “fully automated luxury communism… is a map by which we escape the labyrinth of scarcity and a society built on jobs; the platform from which we can begin to answer the most difficult question of all, of what it means, as Keynes once put it, to live ‘wisely and agreeably and well’.”

Finally, Corey Pein sums up his book with the reminder that “Boredom was once possible. Idle hands made tremendous things. Today, no one is idle. Everybody’s working, even when they tell themselves they’re taking a break. It is said that ‘data is the new oil.’ But we are the data; the new oil is us.” (p. 286) Unlike oil, though, we can still think for ourselves… can’t we?

The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
by Shoshana Zuboff, Public Affairs/Hachette Book Group, New York, 2019
Fully Automated Luxury Communism: A Manifesto
by Aaron Bastani, Verso, London, 2019
Future Histories: What Ada Lovelace, Tom Paine, and the Paris Commune Can Teach Us About Digital Technology
by Lizzie O’Shea, Verso Books, London, 2019
How to Do Nothing: Resisting the Attention Economy
by Jenny Odell, Melville House, Brooklyn, NY, 2019
I Hate the Internet
by Jarett Kobek, We Heard You Like Books, Los Angeles, CA, 2016
Live Work Work Work Die: A Journey into the Savage Heart of Silicon Valley
by Corey Pein, Metropolitan Books/Henry Holt & Co., 2017
A People’s History of Silicon Valley: How the Tech Industry Exploits Workers, Erodes Privacy, and Undermines Democracy
by Keith A. Spencer, Eyewear Publishing/Squint Books, London, 2018
Silicon City: San Francisco in the Long Shadow of the Valley
by Cary McClelland, W.W. Norton & Co., New York, 2018
Surveillance Valley: The Secret Military History of the Internet
by Yasha Levine, Public Affairs/Hachette Book Group, New York, 2018

