January 18, 2018

Book Review: Life 3.0: Being Human in the Age of Artificial Intelligence

Views and opinions expressed are those of the author only.

I recently had the pleasure, and the frustration, of reading Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark.1 I think I first heard about it from the recent “Future of Intelligence” episode of Sam Harris’ podcast Waking Up.2 Robin Hanson also wrote a review worth reading on his blog OvercomingBias.com.3 From hearing about it there, I inferred I was part of the target audience. Upon reading it, however, I found that I’m probably not. Indeed, I doubt that most of the rationality community is either. I previously read Bostrom’s seminal book Superintelligence and found it rather dense, dry, and a bit of a slog even for someone interested in the topic.4 Tegmark instead wrote for a general audience, and to me the book falls in the pop-science genre. It has many reader-friendly features. When appropriate, it includes helpful charts and diagrams, though these are sometimes cartoonish. Each chapter concludes with a bullet-point summary. Perhaps most importantly, the book is written in an approachable and engaging style. I found myself enjoying the book and curious where he would go next. At the same time, I was frustrated with the presentation of AI safety topics. Tegmark understands the issues and eventually states each of them. However, they often get lost among the other topics and fictionalized accounts. I’m concerned the average reader may not come away with an adequate understanding.

The prologue gives a brief fictional account of the creation of a superintelligence, starting from an idealistic, secret team at an unnamed fictional tech company and proceeding all the way to global domination. By the end, the AI and the company control what is essentially a world government that has brought prosperity and happiness to most people and all but eliminated conflict and war. Fiction can make things that might have seemed implausible feel like tangible reality. By opening this way, Tegmark both engages the reader and creates a viscerally real sense of the possibility of superintelligence. He illustrates a very plausible scenario for AI takeover and the creators’ concern for AI safety. However, I worry that he makes it appear too easy to contain and control a superintelligent AI. Being compelling, and coming as it does at the beginning of the book, the story will be very memorable. While readers will consciously remember that the message of the book was the dangers of AI, they may reason from the most salient example: the fictional evidence of an AI that was easily contained and brought enormous benefits.

In the first two chapters, Tegmark engagingly introduces the problem of AI alignment and the state of the discussion, with plenty of charts and diagrams. His approach of surveying the various positions on the problem and then listing and addressing the most common myths works well. This section also contains the first good illustrations of the problem, including the observation that “humans control tigers not because we’re stronger, but because we’re smarter. This means that if we cede our position as smartest on our planet, it’s possible we might also cede our control.”5

Next, the book tackles near-term AI advances and issues. I found this section to be one of the most interesting and useful in the book, perhaps because I haven’t read much on these topics before. I would have liked to see more detail about some areas; nevertheless, the survey was helpful. Tegmark first covers the state of the art in AI. From there he moves on to issues of software bugs and security. I was glad to see these put front and center, as they don’t get adequate concern in our society. He then considers AI’s impact on law, weapons, and employment, specifically the question of whether large portions of the population might become not only unemployed but unemployable. I particularly enjoyed the end of the chapter, about how to give people a sense of purpose if there is no work. That question often gets glossed over in discussions of technological unemployment.

At more than one hundred pages in, Life 3.0 finally comes to the dangers of human-level AGI and superintelligence. This section is far too short; there isn’t enough space to adequately cover the issues and make clear how grave the threat is. Using a fun illustration of being imprisoned by a world of five-year-olds, Tegmark points out that an AI will be motivated to circumvent our control even if its intentions are good. He also lays out the difficulties of containing or boxing a superintelligent AI, returning to the fictional story from the prologue to describe several ways the AI might escape. While this illustrates possible escape paths, I worry that it will fail to convey the message. Coming in the middle of the book, separated from the initial story, it is less likely to be remembered. Also, specific scenarios often lead people to think they can address them with a few easy fixes and the problem is solved. Tegmark tries to make clear that there are many other potential means of escape and that we, being less intelligent than the AI, may not even be able to imagine how it might escape. However, those are brief, abstract arguments compared to the compelling fictional scenarios.

From the dangers of superintelligence, the book moves on to future scenarios. It describes twelve of them, giving each two to five pages of explanation. They run the gamut from very good to very bad. Some come off not as genuine possible futures, but as if they were dreamed up in the first ten minutes of thinking about the problem by someone ideologically motivated. Even worse, the list is missing some essential scenarios. Two of the missing ones involve an AI that truly tries to create human happiness, not according to some person’s ideology-driven philosophy, but by doing what might genuinely make humans happy. In the first, the AI creates for each human a separate virtual world optimized for their happiness. Perhaps there is still a way to interact between the worlds, to visit a friend’s world, but people spend the majority of their time in their own. The second is what I would describe as a true utopia, where the AI tries to create the kind of world that would make humans happy: eliminating suffering, but also creating situations in which humans are challenged to learn and grow. The closest he comes to that is his “benevolent dictator” scenario, but that seems more like a caricature of what someone who values “diversity” in humans might imagine. Perhaps the biggest oversight is ignoring transhumanism as both a likely occurrence and a moral good. His “libertarian utopia” does include cyborgs and augmented humans, but there is no discussion of eliminating all disease, radical life extension, dramatically enhancing human happiness, enhancing human cognition, or ultimately giving humans godlike powers. This chapter is meant to be thought-provoking and to raise in the reader’s mind the question of what kind of future they would want. It concludes by directing people to AgeOfAi.org, where there is a survey covering when they think superintelligent AI might arrive and which of the twelve scenarios they would prefer.

The sixth chapter felt out of place to me. Ostensibly about how we might acquire resources for the very long-term future, i.e., the next 10,000 years, it reads like a tour of cool ideas from modern science about what might someday be possible. It describes Dyson spheres, nuclear fusion, evaporating and spinning black holes, quasars, exotic matter states, quantum computers and the limits of computation, nuclear rockets, light sails, the Kardashev scale, von Neumann probes, and wormholes. It then tackles the end of the universe and what might happen if we are not the only intelligent species in it. This chapter seemed to be there mostly for the gee-whiz factor, and I think the book would have been better off without it.

At this point, Tegmark finally tackles the subject of aligning an AI’s goals with our own. This chapter should have come immediately after the one on the dangers of AI and the difficulty of keeping it boxed. It does, however, do a good job of laying the foundation of what goals are and why it makes sense to talk about machines and AIs having goals, and then of explaining some of the challenges in goal alignment. I was pleased to see his emphasis on the fact that “the real risk with AGI isn’t malice but competence” (italics in original).6 That is to say, an AI that was excellent at optimizing something slightly different from what we wanted would be extremely dangerous even though it wasn’t intentionally programmed to harm us.

The final chapter of the book takes a strange turn into consciousness. Tegmark defines consciousness as having subjective experience, what philosophers would call qualia. As with the chapter on acquiring resources, this one seemed to be a tour of fascinating ideas, this time about the philosophy, neuroscience, and ethics of consciousness. The relevant point is that an AI’s moral worth may be determined by whether it is conscious. This part could have been omitted from the book, so the choice to use it as the official conclusion was strange.

After the concluding chapter there is an epilogue, but its length and presentation make it feel like a chapter; in my mind, it became the conclusion of the book. The epilogue describes the founding of the Future of Life Institute by Tegmark and the AI conferences it organized. It ends with an encouragement to develop positive visions of the future. One can understand why Tegmark wanted to include this material somewhere in the book, but the shift of topic and tone at the end was jarring. It further weakened the already weak ending created by the discussion of consciousness.

Altogether, Life 3.0 was engaging and readable. The chapter on the near-term impacts of AI was the most interesting to me, in part because I haven’t seen that topic discussed as much. I also came away with some illustrations that will be useful for explaining AI safety issues to others. It seems Tegmark hopes to create awareness of AI safety among the general populace by encouraging them to imagine their own potential futures. It may do that. However, an uninformed thinker’s ideas on the subject are more than simply unhelpful; they are often detrimental to launching a proper response. In trying to reach a broad audience, Tegmark has failed to convey the gravity and nuance of the problem, and the many tangentially related topics covered obscure the vital message of AI safety. I believe Tegmark’s contributions to AI safety through the Future of Life Institute are more effective than this book.

  1. Tegmark, Max. Life 3.0: Being Human in the Age of Artificial Intelligence. Knopf, 2017. 

  2. Harris, Sam. “Waking Up.” Sam Harris, 16 Jan. 2018, www.samharris.org/podcast. 

  3. Hanson, Robin. “Tegmark’s Book of Foom.” Overcoming Bias, 2 Sept. 2017, www.overcomingbias.com/2017/09/tegmarks-book-of-foom.html. 

  4. Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies. Oxford University Press, 2016. 

  5. Tegmark, Life 3.0, p. 44. 

  6. Tegmark, Life 3.0, p. 260. 
