told me, the entire universe becomes a computer or mind, as far beyond our ken as spaceships are to flatworms. Kurzweil writes that this is the universe’s destiny. Others agree, but believe that with the reckless development of advanced AI we’ll ensure our elimination as well as that of other beings that might be out there. Just as ASI may not hate us or love us, neither will it hate or love other creatures in the universe. Is our quest for AGI the start of a galaxy-wide plague?
As I left Vassar’s apartment, I wondered what could prevent this dystopian vision from coming true. What could stop the annihilating kind of AGI? Furthermore, were there holes in the dystopian hypothesis?
Well, builders of AI and AGI could make it “friendly,” so that whatever evolves from the first AGI won’t destroy us and other creatures in the universe. Or, we might be wrong about AGI’s abilities and “drives,” and fearing its conquest of the universe could be a false dilemma.
Maybe AI can never advance to AGI and beyond, or maybe there are good reasons to think it will happen in a different and more manageable way than we currently think possible. In short, I wanted to know what could put us on a safer course to the future.
I intended to ask the AI Box Experiment creator, Eliezer Yudkowsky. Besides originating that thought experiment, Yudkowsky, I’d been told, knew more about Friendly AI than anyone else in the world.
Chapter Four
The Hard Way
With the possible exception of nanotechnology being released upon the world, there is nothing in the whole catalogue of disasters that is comparable to AGI.
—Eliezer Yudkowsky, Research Fellow, Machine Intelligence Research Institute
Fourteen “official” cities make up Silicon Valley, and twenty-five math-and-engineering-focused universities and extension campuses inhabit them. They feed the software, semiconductor, and Internet firms that are the latest phase of a technology juggernaut that began here with radio in the first part of the twentieth century. Silicon Valley attracts a third of all the venture capital in the United States. It has the highest number of technology workers per capita of any U.S. metropolitan area, and they’re the best paid, too. The country’s greatest concentration of billionaires and millionaires calls Silicon Valley home.
Here, at the epicenter of global technology, with a GPS in my rental car and another in my iPhone, I drove to Eliezer Yudkowsky’s home the old-fashioned way, with written directions. To protect his privacy, Yudkowsky had e-mailed them to me and asked me not to share them or his e-mail address. He didn’t offer his phone number.
At thirty-three, Yudkowsky, cofounder and research fellow at MIRI, has written more about the dangers of AI than anyone else. When he set out on this career more than a decade ago, he was one of very few people who had made considering AI’s dangers his life’s work. And while he hasn’t taken actual vows, he forgoes activities that might take his eye off the ball. He doesn’t drink, smoke, or do drugs. He rarely socializes. He gave up reading for fun several years ago. He doesn’t like interviews, and prefers to do them on Skype with a thirty-minute time limit. He’s an atheist (the rule, not the exception, among AI experts), so he doesn’t squander hours at a temple or a church. He doesn’t have children, though he’s fond of them, and thinks people who haven’t signed their children up for cryonics are lousy parents.
But here’s the paradox. For someone who supposedly treasures his privacy, Yudkowsky has laid bare his personal life on the Internet. I found, after my first attempts to track him down, that in the corner of the Web where discussions of rationality theory and catastrophe live, he and his innermost musings are unavoidable.
His ubiquity is how I came to know that at age nineteen, in their hometown of Chicago, his younger brother, Yehuda, killed himself. Yudkowsky’s grief came