vol. 26 no. 3, September, 2021

Book Reviews


Crawford, Kate. Atlas of AI. Power, politics, and the planetary cost of artificial intelligence. New Haven, CT: Yale University Press, 2021. [8], 327 p. ISBN 978-0-300-20957-0 £20.00

This is not an "atlas" in any generally understood meaning of the word; indeed, it is not an atlas in any specialised use of the word recorded in the Oxford English Dictionary. It is a monograph. The author explains in the introduction how the idea of an atlas guided her thinking in making the book, but I did not find the analogy persuasive. The sub-title is a much better indicator of the nature of the book.

The author is a kind of outsider (her education is not in computer science or artificial intelligence, and she is a musician, as well as an academic), who has found her way inside, to the extent of being a senior researcher at Microsoft Research, and co-founder and director of the AI Now Institute at New York University. Consequently, her perspective on artificial intelligence is not that of the computer scientist, or of the specialist in the field, and her view of artificial intelligence is quite clearly expressed:

AI is neither artificial nor intelligent. Rather, artificial intelligence is both embodied and material... AI systems are not autonomous, rational, or able to discern anything without extensive, computationally intensive training with large data sets or predefined rules and rewards. (p. 8)

Later, the author characterises the book as 'an expanded view of artificial intelligence as an extractive industry' (p. 15), and the structure of the book is based on this proposition, beginning with an account of the truly extractive industry that mines the precious metals required for the computers used in artificial intelligence. In places, the devastation wrought by the mining process is horrendous; for example, in Inner Mongolia there is a lake of 'toxic black mud', the result of mining for rare metals in the world's largest deposit of such metals. Small islands off the coast of Malaysia have been ruined by mining for tin, killing fish stocks and coral reefs and making the islands unattractive to tourists. We imagine, sitting before our desktop computers or laptops, that we are dealing with a 'clean' technology, but the technology's need for rare metals shows that this is far from the case.

The author then moves on to how labour is treated in the technology industries. I imagine that we are well aware of the labour practices of firms like Amazon, where the adherence of workers to the demands of the clock has been the subject of numerous news accounts. We may not be aware of the extent to which artificial intelligence depends upon workers: Amazon's Mechanical Turk is supposedly an AI system, but even Jeff Bezos refers to it as 'artificial, artificial intelligence', since it depends upon thousands of workers 'who bid against one another for the opportunity to work on a series of microtasks' to complete work that the AI fails to perform. We are seeing, in fact, a modern version of the Victorian factory system: then, people became the slaves of the steam-driven machines; now, they have become slaves of computer systems that time every action. As the author says: 'The structures of time are never completely inhumane, but they are maintained right at the outer limit of what most people can tolerate', and the cost in both physical and mental distress grows.

Data (Chapter 3) are now regarded as assets, and mined like the precious metals, without regard to the privacy of the individuals to whom the data relate. It seems that, at least in the USA, but possibly elsewhere, computer science research was regarded as not requiring the approval of universities' ethics committees, since it was not expected that such research would have an impact on individual rights. This led to such ethically dubious activity as collecting the images of students and faculty on the campus of the University of Colorado, without the approval of either the university or the individuals concerned, in order to develop a facial recognition application.

It appears that the data in virtually every training dataset for machine learning have been obtained in this way or scraped from applications such as Flickr, or indeed the entire Web, an activity described by the author as 'a pillaging of public space'. A mindset has developed in the artificial intelligence community that sees data 'as a resource to be consumed, a flow to be controlled, or an investment to be harnessed' and, consequently, as impersonal and not subject to ownership by the person the data describe. As the author concludes:

By looking at the layers of training data that shape and inform AI models and algorithms, we can see that gathering and labeling data about the world is a social and political intervention, even as it masquerades as a purely technical one. (p. 121)

Of course, platforms such as Facebook and Twitter are aware of the potential biases built into machine learning datasets and try, to some extent, to counteract these biases. Very recently, for example, Twitter announced an internal project to discover the sources of bias in their datasets, noting that:

Leading this work is their ML Ethics, Transparency, and Accountability (META) team: a dedicated group of engineers, researchers, and data scientists collaborating across the company to assess downstream or current unintentional harms in the algorithms used and to help Twitter prioritize which issues to tackle first. (Lim, 2021)

Chapters 4 (Classification) and 5 (Affect) are both about classification. Chapter 4 deals with the general problem of the inevitable bias that exists in any classification scheme, and the biases that exist when the scheme has been produced by machine learning. The machine can only learn from the data it receives, and when the data are biased (as in the case, for example, of criminal records being more likely to relate to young, black people than, say, middle-aged white people), the machine is likely to identify a black person as more likely to commit a crime. Chapter 5 takes this further by examining the systems that have attempted facial identification of emotions by image classification. The story of the scientific dispute over the theories of Paul Ekman, who claimed that the facial expression of emotions was universal and, therefore, could guide the AI determination of emotions, is probably worth a chapter in itself.

The relationship between the state (mainly in the shape of its intelligence agencies) and the citizen is the subject of Chapter 6. This relationship is most damaging in those societies where the control of the citizen is the principal objective of the state, or, more likely, of the dictator or cabal running the state institutions. However, even the political leaders of supposedly benevolent Western democracies may be tempted by the snake-oil salesmen of AI. The author tells this story mainly through reference to Project Maven, which involved Google in one of the contracts, and to the company Palantir, which not only serves the intelligence services but also advises companies on potentially disgruntled workers or others who might defraud the company. In all cases this power of the AI provider is based on the accretion of massive amounts of data, mostly without the permission of those the data concern.

Chapter 6 leads on to the final, unnumbered chapter, the Conclusion, subtitled "Power". A key point, and well worth repeating, is that, 'To understand AI for what it is, we need to see the structures of power it serves'. The end-product of AI is not some idealised technological solution to social problems but the exploitation of human beings who are cheaper than robots. Governments and their intelligence agencies collect data in the expectation that some possible future attack on the state might be frustrated, while corporations like Amazon, Uber and Deliveroo micromanage people through AI-powered surveillance systems, which reduce the person to an "asset" to be manipulated.

Then there is a coda, "Space", dealing with the efforts of Bezos and Musk to escape the constraints of the Earth and carry on mining in space. These are fundamentally sad individuals who cannot bear the thought of death, hoping against hope that their space exploration efforts will somehow bring them immortality.

Title aside, this is an excellent book, well worth a place on your personal reading list, or on the reading lists of any number of related courses to do with information technology in general, and artificial intelligence in particular, but also courses on society and technology, social media, the politics of technology and more.

Reference

Lim, J. (2021, August 5). Twitter ramping up efforts to stamp out misinformation. Techwire Asia. https://techwireasia.com/2021/08/twitter-ramping-up-efforts-to-stomp-out-misinformation/

Professor T.D. Wilson
Editor in Chief
July, 2021


How to cite this review

Wilson, T.D. (2021). Review of: Crawford, Kate. Atlas of AI. Power, politics, and the planetary cost of artificial intelligence. New Haven, CT: Yale University Press, 2021. Information Research, 26(3), review no. R720. http://www.informationr.net/ir/reviews/revs720.html


Information Research is published four times a year by the University of Borås, Allégatan 1, 501 90 Borås, Sweden.