All Your Data Are Belong To Us

In the blink of an eye, sci-fi dystopia becomes reality becomes the reality we take for granted becomes the legally enshrined status quo:

“One of our top priorities in Congress must be to promote the sharing of cyber threat data among the private sector and the federal government to defend against cyberattacks and encourage better coordination,” said Carper, ranking member of the Senate Homeland Security and Governmental Affairs Committee.

Of course, the pols are promising that data analyzed by the state will remain nameless:

The measure — known as the Cyber Threat Intelligence Sharing Act — would give companies legal liability protections when sharing cyber threat data with the DHS’s cyber info hub, known as the National Cybersecurity and Communications Integration Center (NCCIC). Companies would have to make “reasonable efforts” to remove personally identifiable information before sharing any data.

The bill also lays out a rubric for how the NCCIC can share that data with other federal agencies, requiring it to minimize identifying information and limiting government uses for the data. Transparency reports and a five-year sunset clause would attempt to ensure the program maintains its civil liberties protections and effectiveness.

Obama seems to suggest that third-party “cyber-info hubs”—some strange vivisection of private and public power—will be in charge of de-personalizing data in between Facebook and the NSA or DHS:

These industry organizations, known as Information Sharing and Analysis Organizations (ISAOs), don’t yet exist, and the White House’s legislative proposal was short on details. It left some wondering what exactly the administration was suggesting.

In the executive order coming Friday, the White House will clarify that it envisions ISAOs as membership organizations or single companies “that share information across a region or in response to a specific emerging cyber threat,” the administration said.

Already existing industry-specific cyber info hubs can qualify as ISAOs, but will be encouraged to adopt a set of voluntary security and privacy protocols that would apply to all such information-sharing centers. The executive order will direct DHS to create those protocols for all ISAOs.

These protocols will let companies “look at [an ISAO] and make judgments about whether those are good organizations and will be beneficial to them and also protect their information properly,” Daniel said.

In theory, separating powers or multiplying agencies accords with the vision of the men who wrote the Federalist Papers, the idea being to make power so diffuse that no individual, branch, or agency can do much harm on its own. However, as Yogi Berra said, “In theory there is no difference between theory and practice, but in practice there is.” Mark Zuckerberg and a few other CEOs know the difference, too. They decided not to attend Obama’s “cyber defense” summit in Silicon Valley last week.

The attacks on Target, Sony, and Home Depot (the attacks invoked by the state to prove the need for more state oversight) are criminal matters, to be sure, and since private companies can’t arrest people, the state will need to get involved somehow. But theft in the private sector is not a new thing. When a Target store is robbed, someone calls the police. No one suggests that every Target in the nation should have its own dedicated police officer monitoring the store 24/7. So why does the state need a massive data sharing program with the private sector? It’s the digital equivalent of putting police officers in every aisle of every Target store in the nation—which is likely the whole point.

Target, of course, does monitor every aisle in each of its stores 24/7. But this is a private, internal decision, and the information captured by closed-circuit cameras is shared with the state only after a crime has been committed. There is no room of men watching these tapes, no IT army paid to track shoppers’ movements on a massive scale, to determine who is a possible threat, to mark and file away even the smallest infraction on the chance that it is needed to make a case against someone at a later date.

What Obama and the DHS are suggesting is that the state should do exactly that: to enter every private digital space and erect its own closed circuit cameras, so that men in suits can monitor movement in these spaces whether a crime has been committed or not. (State agencies are already doing it, of course, but now the Obama Administration is attempting to increase the state’s reach and to enshrine the practice in law.)

“As long as you aren’t doing anything wrong, what do you care?”

In the short term, that’s a practical answer. In the future, however, a state-run system of closed circuit cameras watching digital space 24/7 may not always be used for justified criminal prosecution.

The next great technological revolution, in my view, will be the creation of an entirely new internet protocol suite that enables some semblance of truly “invisible” networking, or perhaps the widespread adoption of personal cloud computing. The idea will be to exit the glare of the watchers.


Though he doesn’t state it directly, Timothy Burke recognizes that humanistic inquiry circa 2013 is at risk of being subsumed by—enfolded into—techno-scientific discourse and scrutiny. In many venues, as he notes, it has already been subsumed:

I would not call such views inhumane: more anti-humane: they do not believe that a humane approach to the problems of a technologically advanced global society is effective or fair, that we need rules and instruments and systems of knowing that overrule intersubjective, experiential perspectives and slippery rhetorical and cultural ways of communicating what we know about the world.

The anti-humane is in play:

–When someone works to make an algorithm to grade essays

–When an IRB adopts inflexible rules derived from the governance of biomedical research and applies them to cultural anthropology

–When law enforcement and public culture work together to create a highly typified, abstracted profile of a psychological type prone to commit certain crimes and then attempt to surveil or control everyone falling within that parameter

–When quantitative social science pursues elaborate methodologies to isolate a single causal variable as having slightly more statistically significant weight than thousands of other variables rather than just craft a rhetorically persuasive interpretation of the importance of that factor

–When public officials build testing and evaluation systems intended to automate and massify the work of assessing the performance of employees or students

At these and many other moments across a wide scale of contemporary societies we set out to bracket off or excise the human element, to eliminate our reliance on intersubjective judgment. We are in these moments, as James Scott has put it of “high modernism”, working to make human beings legible and fixed for the sake of systems that require them to be so.

That humans can be quantified and their behaviors inserted into mechanistic or, more recently, statistical models is an idea as old as Comte and Spencer. Humanists of all stripes, religious and secular, have long denounced this idea, but with each passing decade, their denunciations have been met with more and more techno-scientific intrusions into the venues of humanistic inquiry. Researchers in China are currently attempting to map the genetic architecture of human intelligence itself; natural language processing researchers are attempting to teach computers to learn human languages; and researchers across a wide array of disciplines continue to produce research which suggests that all life—human life included—is essentially “a mixture of genes, environment, and the accidents of history.”

E.O. Wilson’s Sociobiology may have been problematic in tone and overconfident in its claims, but its underlying idea—that everything human and humane will eventually be describable in techno-scientific terms—is as valid as ever. Kurzweil is far too optimistic about the speed of transhuman advancement, but if history has shown us one thing, it’s that we shouldn’t bet against scientific advancement. And if we do build a sentient machine or use machines to amplify the abilities of humans (aren’t we doing this already?), then what else can we conclude but that humanity can be meaningfully reduced to techno-scientific terms?

Timothy Burke writes that “a humane knowledge accepts that human beings and their works are contingent to interpretation,” but some interpretations are more productive than others. Western science may be a reductive way-of-knowing, and techno-scientific reductionism may be a tough interpretation for humanists to find value in, but in the post-Enlightenment marketplace of ideas, anti-humane knowledge has always been and will continue to be the driver of discourse. Why? It produces. Its material applications are powerful.

Most of Burke’s essay is nuanced and generative, but he concludes with a rhetorical flourish that discounts the continuing material productivity of science:

We might, in fact, begin to argue that most academic disciplines need to move towards what I’ve described as humane because all of the problems and phenomena best described or managed in other approaches have already been understood and managed. The 20th Century picked all the low-hanging fruit. All the problems that could be solved by anti-humane thinking, all the solutions that could be achieved through technocratic management, are complete.

If by “anti-humane thinking” Burke means a purely mechanistic view of humanity, then he’s probably right. However, no one holds such a view anymore, if anyone ever did. For example, machine learning is built on statistical probability, and genetic research examines complex polygenic and epigenetic effects that are not reducible to single-gene tinkering. The techno-scientific lens is no longer mechanistic or averse to complexity, but I still get the feeling that it remains “anti-humane” in the eyes of many humanists.

The worst thing the humanities can do is to continue theorizing about how its subject matter simply cannot be subsumed by techno-scientific practice while its subject matter continues to be subsumed by techno-scientific practice. We need to stop talking about, say, “the social construction of gender and sexuality” as though representation and discourse were more important for understanding gender and sexuality than hormone therapy or the biology of same-sex reproduction. Too often, humanists confuse ethical critique with epistemology. In my opinion, instead of assuming that our areas of inquiry are by definition off-limits to the techno-scientific lens, we need to recognize that the humanities are indeed “incomplete” without recourse to the knowledge of science. We should cross the border into the Land of Techno-Science more often . . . for this sets up an encounter in which the sciences will recognize that they, too, are “incomplete” without recourse to the knowledge of humane inquiry. Every discipline has its deflections.

True incommensurability is rare. I’m confident that reading, e.g., E.O. Wilson and Donna Haraway together could be productive—so long as neither work remains unchanged in the encounter. The encounter would not be about whose knowledge gets to be the base of the other’s, or whose knowledge anchors the other’s. Rather, the point of such a humane/anti-humane encounter would be to give birth to an epistemological offspring composed of elements from both but resembling neither.

Robot Economy


Exiting the Womb is Messy

Hayek and Hazlitt assure us we needn’t worry about the loss of jobs to technological advances because said losses translate to newer jobs elsewhere, specifically in the manufacture and servicing of the advanced technology. This is true, but not the whole story.

Often, the newer jobs are scarcer and harder to obtain—Ford needs only a few engineers to oversee the computers doing the work once undertaken by dozens of assemblers. Efficiency, efficiency, efficiency. What’s more, new technologies are rarely as ‘in-demand’ as the old ones—Ford produces more cars than KUKA produces industrial robots used by Ford to replace workers. Lastly, the manufacture of new technologies rarely occurs where the old technologies were manufactured. Even if total job numbers equalize, laid-off assembly workers can’t be expected to move from Detroit to South Asia.

The result: today, there aren’t many jobs that allow high school and college graduates to build or create things of value. Luckily, however, the sheer efficiency of the nascent robot economy and the blessed cheapness of outsourced, non-Western labor means that costs are kept low on the products we love in the West. Low costs (buttressed by the welfare state) have fostered the development of the service industry—people of all classes can afford to buy things, and by God, someone has to be there to ship them, retail them, exchange them, install them, repair them, and update them. Following the death of the ‘making things’ economy, the service economy has single-handedly staved off widespread economic depression and mass revolution in the West.

What happens, then, when robots and other advanced, efficient, human-redundant technologies are introduced to the service industry? Surely, it’s cheaper and more efficient to robot-ize certain jobs in retail, wholesale, and every other node in the service network? It is. We already see it happening.

Post robot-ization of the service economy, will some other economy rise and busy the masses with labor? If not, things will get interesting. And messy.

Union busters


The service and welfare economies absorb surplus labor with ease, but not without cost. The entire point of the robot economy is to save money and increase efficiency by replacing surplus laborers with machines that don’t claim disability or need a 401(k). The new rich in this scenario will be those with high IQs, the designers and programmers of the machines, people who can complete a Computer Sci degree while everyone else bails for Communication Studies; the new poor will be the erstwhile service laborers and middle managers, people with low to middling IQs who never even learned HTML and now can’t find a job managing a rental car agency or teaching community college because—Yeah, There’s A Robot For That™.

This scenario bodes well for high achievement, high IQ populations—who, however, will begin to gate themselves off in high security neighborhoods as the Rest of Us slowly realize that the Employers of the Next Economy are only looking for advanced robots and the nerds who can make them. (Attacking the robots only has so much symbolic value.)

But it doesn’t stop there, even after the messy revolts of the redundant laborers. If Kurzweil is even half right, the IQ of AI will advance to the point at which robots can replace more than service workers. What happens, then, when even high-skilled, high IQ positions are taken over by more efficient and more perfect machines?

Out of the Womb: Deep Space Employment

When Apollo 11 reached lunar orbit, only three men were aboard the spacecraft. But the safety—indeed, the possibility—of the mission relied on a well-staffed control center. Even today, the comparatively dull, low-earth-orbit labors of the symbolic sapiens aboard the International Space Station are enabled and monitored by dozens of engineers and scientists on the ground.

However, when the U.S.S. Enterprise exits low-earth orbit, it does so on the assumption that a mission control center is no longer needed. Technology has advanced. The pink slips have been sent out in Florida and Houston. Now the only person on the NASA payroll in those old places is the nice lady at the front desk of the Mission Control Center Museum. Everyone required for a safe, successful mission is on board the Enterprise. Granted, in Roddenberry’s optimistic vision, the numbers on board are quite large: depending on the series, the Enterprise boasts anywhere from several hundred to over a thousand crew members. The mission control centers experienced some attrition, but in the end, many of the geeks in suits simply became astronauts.

Humans need not apply


Ridley Scott provides a less sanguine vision of the robot economy as seen in deep space. The Nostromo needed only seven people aboard—no, forgive me, six. Ash, the science officer, was an android, and if Order 937 was any indication, he was really the only necessary crew member. The humans were expendable. An android and an intelligent Auto Pilot are all you need to explore and mine the stars.

The Nostromo, of course, was a cargo ship, hauling not only hundreds of thousands of tons of raw materials bound for earth but also a refinery for processing those materials en route. How many workers were needed in the refinery? Zero.

Aboard the Prometheus is an equally minimalist crew as well as a machine that epitomizes—even more than the most advanced Auto Pilot—the success of the robot economy: the MedPod.



The MedPod diagnoses illness, treats it, and performs fine-tuned surgeries with laser-like precision. How many doctors are needed aboard the Prometheus to perform an emergency C-section? Zero. How many nurses? Zero. How many techs? None. The MedPod even sutures. And naturally, it’s self-contained and self-cleaning, so low-wage orderlies are certainly unnecessary. A single high-tech machine has entirely obviated the need for humans to fill the most human-centric of careers.

The Post-Scarcity Endgame

In this vision, the robot economy has advanced so far that even deep space missions—from the most mundane (the Nostromo) to the most profoundly important (the Prometheus)—can be undertaken with minimal human employment because the bulk of the labor is undertaken by technology, machines, and androids.

What, then, can we assume about the comparatively low-tech missions and jobs back on earth? We can assume that the surplus labor has long since died off, which probably only took a few generations given that it was quickly weaned into far-below-replacement birth rates with the help of mass welfare, entertainment, and liberal arts education. The earth is now inhabited by robots—who do the hard labor, the service labor, and even, as we have seen, some of the medical labor, at no cost to the beneficiaries of that labor—and the high-IQ individuals who design, build, and maintain the robots. Put another way: the earth is inhabited by an intelligent upper class and their laboring machines. We can assume, in other words, that the robot economy has stabilized into something like a Post-Scarcity Society. The human population has shrunk and stabilized; the robot population (probably larger than the human one) is entirely low-impact and efficient: they don’t eat, they don’t breed, they can be recycled . . . And as the MedPod demonstrates, many of them don’t even look like us, so any anxiety about Android Rights has proven to be needless.

Back to today . . .

How soon, we may ask, does all of this begin to unfold? How quickly will the robot economy rise and advance?

To paraphrase T.S. Eliot:

The robot economy arrives not with a bang but a burger-making machine.


Population Bombs, or The Rhetoric of the Eschaton


Rogue academic Nick Land has written a fascinating essay about the Death of Doomsday.

In the past, Land explains, Western society looked forward to some culminating eschaton: the winding down of the calendar: an end game that is “comprehensive, punctual, and climactic.” Of course, all these expected moments have turned out to be busts, and 2012’s Mayan doomsday was no exception. So, says Land, the West is done with Doomsday. We have ceased to look for the end or to expect the arrival of a satisfying, transcendental full stop, after which comes not a New Chapter in the book of human history but a New Book altogether.

The West is now contented to sit in its own filth ad infinitum, waiting for nothing now because it believes there’s nothing to wait for. (Or, as Nick Land would have it, hungering forever for an end that won’t come.) No moment of creative destruction. No great dividing line between Here and Hereafter. No purifying flood. Just a long fading away.

To paraphrase Adam Smith: “There’s a lot of ruin in a species.”

Enter David Attenborough, beating Paul Ehrlich’s old drum, playing the Doomsday prophet all over again, proclaiming humans to be a “plague on the earth” because when we expand we kill fauna, replace flora, and deplete resources, and if we don’t stop we’ll all die and the furry fauna along with us. Attenborough proves himself in this interview to be an interesting relic representing an Old Left eschatological view that has more recently been stuffed down the memory hole, along with all other eschatologies.

In the middle of the 20th century, the Population Bomb was just another eschatological vision, like nuclear winter or St. John’s Revelation, but it was a vision of those who, at the time, considered themselves men and women of the Left. Either humans stop breeding, they said, or we run out of arable land and water; the world ends not with a bang but a series of escalating conflicts over food, drink, and materials.

Unfortunately for this eschatology, two things happened:

1. The Simon-Ehrlich wager proved that, if the Population Bomb was ticking, it was being defused, or at least had had time added to its red digital readout.

2. It became apparent that the humans doing most of the over-breeding were poor, non-Western, and typically not white.

As the now de-canonized environmentalist Edward Abbey learned, the new, late-20th-century Left could not ultimately tolerate an eschatology that suggested the teeming third world masses (especially the developing teeming third world masses) played a significant role in the environmental degradation of their own lands and of the Western lands into which they had begun to migrate. The cries of “overpopulation!” thus began to subside, even as L.A. sprawled . . .



Inevitable, really. The vision of the Population Bomb was simply a negative reframing of eugenic eschatology, a positive Doomsday culminating in human racial purity, an end-game which had been championed much earlier in the history of the Left but which fell into disrepute after the Germans took it too far. “We need to control humanity’s demographics and characteristics” easily became, post-environmentalism, “we need to control humanity’s growth.” The latter takes care of the former by default, at a statistical level anyway.

But again, as Edward Abbey learned, the new Left wanted to divorce itself from that kind of talk. They wanted to focus on how all human problems, from economic inequality to environmental degradation, stem from racist colonialism and its handmaiden, capitalism. Utopia was blocked by racist old white men in board rooms, nothing more. The environmentalist eschatology—with its focus on population trends and the peril of furry fauna—didn’t quite mesh and was generally abandoned.

Or was it?

I think we can discover something quite surprising by listening to the two voices in the article linked above.

On one hand, we have the traditional Doomsday rhetoric of David Attenborough: “Humans are a plague . . . Either we limit our population growth or the natural world will do it for us, and the natural world is doing it for us right now.” But on the other hand, we have the politically correct rhetoric of one Jerry Karnas, director for some biology center or another. Says Karnas: “What’s needed is not population control but a real emphasis on reproductive rights, women’s empowerment, universal access to birth control and education, so more freedom for folks to make better, more informed family planning choices.”

What’s needed is not population control but a real emphasis on reproductive rights and women’s empowerment . . .

If Karnas is any indication, the preoccupation with Doomsday has not subsided but has merely been given a social justice sheen. Something tells me that Karnas means exactly what Attenborough means. However, Karnas knows that wealthy whites have been limiting their population growth for decades now and Attenborough’s charge is leveled de facto at those with under-privileged phenotypes. So, Karnas shifts focus away from the Population Bomb rhetoric of the mid-20th century and toward the rights and empowerment rhetoric of the contemporary Left.

The outcome, naturally, is the same. Karnas would wonder what went wrong if—after activists had brought feminism, education, and birth control to India, Africa, and South Asia—most of the families there still chose to breed like conservative homeschooling Lutherans. Or maybe I’m being too skeptical? Maybe Karnas wouldn’t be nonplussed if “a real emphasis on reproductive rights and women’s empowerment” didn’t lead to a decline in population growth?

Perhaps even the concern about global warming is nothing more than a proxy for the old Population Bomb eschatological rhetoric. A moratorium on Western consumption and production could lead nowhere but to a general reduction in population growth and, eventually, lower population levels altogether. (Or possibly to an increase in shanty towns, if production declines but not population. That’s a separate line of thought.)

Back to Reality . . .

I was never entirely convinced by the Population Bomb eschatology, and it seems that it was, like all Doomsday scenarios, destined not to arrive. Moreover, there are plenty of smart people who believe that the high standards of living associated with Western capitalism are absolutely dependent upon a growing population. I’m not sure I’d go that far.

Locally, however, it’s obvious that certain demographic trends have been disastrous to local environments. The recent spilling-over of Mexico was just as detrimental to certain areas of Southern California as the spilling-over of Eastern WASPs was to that land originally. However, I never saw unlimited population growth as a real threat. Sooner or later, corrections kick in, and if some species of flora and furry fauna are lost before they kick in, well . . . that’s not quite the same thing as the last days of humanity, is it?

As far as the rhetoric of Doomsday goes, I do believe that the notion of a calendrical Last Day or its secular equivalent has been generally abandoned in Western thought. Threats of worldwide nuclear or biological apocalypse have been mitigated (a lone terrorist might kill everyone in New York City, but that’s a local problem). Asteroids continue to “just miss” us, but that’s old hat, and we’re confident they’ll continue to miss us until some humans have moved Off World or we’ve built a giant flying nuke to divert any ominous Wormwoods. Global warming has never quite been framed as a Doomsday scenario. And the Population Bomb has apparently been defused.

So why Attenborough’s sudden return to the old Doomsday rhetoric? Why Karnas’s empowerment rhetoric, which is really the same thing in a more socially ethical package?

Maybe they are just concerned about those furry fauna.