The political theorist Fredric Jameson once observed that “it has become easier to imagine the end of the world than the end of capitalism.” But what if rapacious capitalism finally destroys life on earth? That’s the question posed by science fiction author Ted Chiang, who argues that in “superintelligent AI,” Silicon Valley capitalists have “unconsciously created a demon in their own image, a boogeyman whose excesses are precisely their own.”
In a new essay for BuzzFeed, part of a series about the forces shaping our lives in 2017, the acclaimed author of “Arrival” (Stories of Your Life and Others) deconstructs the fear of artificial intelligence; specifically, that of tech titans like Tesla founder Elon Musk. For Musk, the real threat is not a malignant computer program rising up against its creator, like Skynet in the Terminator films, so much as AI destroying humanity by accident. In a recent interview with Vanity Fair, Musk imagines a mechanized strawberry picker wiping out the species simply as a means of maximizing its production.
“This scenario sounds absurd to many people, yet there are a surprising number of technologists who think it illustrates a real danger. Why?” Chiang wonders. “Perhaps it’s because they’re already accustomed to entities that operate this way: Silicon Valley tech companies.”
In Musk’s hypothetical, the destruction of human civilization follows the logic of the free market.
“Consider: Who pursues their goals with monomaniacal focus, oblivious to the possibility of negative consequences? Who adopts a scorched-earth approach to increasing market share?” Chiang continues. “[The] strawberry-picking AI does what every tech startup wishes it could do—grows at an exponential rate and destroys its competitors until it’s achieved an absolute monopoly.”
Ultimately, the disaster Musk and others predict has already arrived in the form of “no-holds-barred capitalism.”
“We are already surrounded by machines that demonstrate a complete lack of insight, we just call them corporations,” Chiang continues. “Corporations don’t operate autonomously, of course, and the humans in charge of them are presumably capable of insight, but capitalism doesn’t reward them for using it. On the contrary, capitalism actively erodes this ability in people by demanding that they replace their own judgment of what ‘good’ means with ‘whatever the market decides.’”
For Chiang, the operative word is insight. Our capacity for self-reflection, or the “recognition of one’s own condition,” is what separates humans from the Googles, Facebooks and Amazons. And it is this deficiency that makes these monopolies so uniquely dangerous.
“We need for the machines to wake up, not in the sense of computers becoming self-aware, but in the sense of corporations recognizing the consequences of their behavior,” he concludes. “Just as a superintelligent AI ought to realize that covering the planet in strawberry fields isn’t actually in its or anyone else’s best interests, companies in Silicon Valley need to realize that increasing market share isn’t a good reason to ignore all other considerations.”
Read Chiang’s essay at BuzzFeed.
Jacob Sugarman is a managing editor at AlterNet.