Bostrom, Nick. Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press, 2017. xvi, 415 p. ISBN: 978-0-19-873983-8. Paperback £9.99/$15.95
When the author's acknowledgements begin with the sentence, 'The membrane that has surrounded the writing process has been fairly permeable', you can probably guess that this book is not an easy read. The author is a philosopher and director of the Future of Humanity Institute at the University of Oxford, and he has not produced anything that could be described as 'popular science'.
However, when it was first published, in 2014, it aroused a good deal of interest because of the rather horrendous scenario attributed to it by popular journalism: a future dominated by intelligent robots that had taken over the world as a result of being able to design and produce ever more intelligent versions of themselves. As a wake-up call to the potential risks of new developments in artificial intelligence, it probably served a useful purpose, and the call was taken up later by such luminaries as Stephen Hawking, who, in an interview with the BBC, proclaimed that, 'The development of full artificial intelligence could spell the end of the human race'.
Naturally enough, these prognostications by eminent scholars led to the doom-mongers having a field day; however, no one seems to have paused to ask, for example, how likely it is that 'full artificial intelligence' (by which Hawking presumably means what the computer scientists call 'strong AI') will actually come about. Ever since my first contact with computers in the late 1950s I have seen forecasts of the coming of strong AI 'in the near future', and we are still waiting.
What passes for artificial intelligence consists of computational methods, from big data processing to neural nets, that simulate but do not constitute intelligence. Certainly, systems can be built to play Go or to beat international grand masters at chess, but emotional intelligence is still beyond the computer, which has no emotional response to winning or losing a game.
In his Afterword to this paperback edition, the author emphasises the need to be aware of the risks of AI, and no one can disagree with that: we must be aware of all the risks associated with any technology and, given the potential for AI to take over tasks previously carried out by humans, we have to ensure that such systems undertake those tasks more safely and securely than humans. We also have to be aware of the potential for rogue hackers to infiltrate robotic systems and have them run amok - no doubt there are already those who are dreaming up such hacks, and the 'Internet of things' will provide them with many opportunities.
However, we must also ensure that our risk analysis is based on firm data on the operation of AI systems, and one of the problems with Bostrom's book is his tendency to project consequences from assumptions that have little evidence at their base.
This is an interesting, if difficult, read, and will be of interest to anyone concerned with AI and the future of robotics. The casual reader, however, might do better to read a few of the reviews that summarise the author's arguments.
Professor T.D. Wilson
How to cite this review
Wilson, T.D. (2017). Review of: Bostrom, Nick. Superintelligence: paths, dangers, strategies. Oxford: Oxford University Press, 2017. Information Research, 22(3), review no. R611 [Retrieved from http://informationr.net/ir/reviews/revs611.html]
Information Research is published four times a year by the University of Borås, Allégatan 1, 501 90 Borås, Sweden.