My rating: 3 of 5 stars
According to Bostrom, the "essential task of our age" is preparing for the possibility of superintelligence (through AI or something similar) and the potential risks involved. He writes that our "principal moral priority" is "the reduction of existential risk and the attainment of a civilizational trajectory that leads to a compassionate and jubilant use of humanity's cosmic endowment."
So when America spends decades ignoring the clear rational and moral imperative to address global climate change, much less when it elects Donald Trump, it's difficult to imagine that we will develop the skills to prepare for this possibility.
I actually don't believe that strong AI (John Searle's phrase) is ontologically possible. Reading Searle's arguments in the 1990s convinced me of that. Bostrom never addresses Searle's arguments, but he clearly believes strong AI is not only possible but likely within this century.
After grasping the main points of the book in the opening chapters, I dragged through the middle half as he imagines all sorts of scenarios. Frankly, I was bored and considered putting the book down, but the second half grew more interesting as he turned to ethical issues: how to teach a superintelligence morality, and what criteria should be used in determining our long-term values. I still skimmed heartily.
Frankly, this is among the strangest books I've ever read, but there are nuggets of interest, and the overarching thesis is provocative. Bostrom thinks all the best people should begin working on this problem and not waste their time on less important issues. Well, I don't plan to do that.
View all my reviews