Artificial Intelligence, Inequality, and Power

Over the past several years, news media, including SWE Magazine, have reported on a series of controversies involving technology companies and the development and use of artificial intelligence (AI). Among the most prominent were organized employee protests against tech companies' work supplying AI to the military and to immigration authorities. Criticism has also been leveled at AI technologies that incorporate racial, gender, and other forms of bias. Technology companies have tried to respond by developing ethical codes governing AI and its uses, but the tensions have not been eliminated. For example, last year, SWE Magazine (2021) reported on the departure of a leading member of Google's AI ethics team, Timnit Gebru, Ph.D. Although Google denied it, Dr. Gebru claimed that she was fired because of her desire to publish a paper on how AI mimics language that could hurt marginalized groups. This year, another Google AI ethics researcher, Margaret Mitchell, Ph.D., was fired for what the company said were violations of its code of conduct and security protocols. But Mitchell claims that it was actually her critique of the company's AI work that motivated her termination (The Guardian 2021).

The controversy over AI, gender, diversity, and ethics is placed in a much broader context by Atlas of AI, a book by the prominent AI researcher Kate Crawford, Ph.D. (2021). Dr. Crawford is research professor of communication and science and technology studies at the University of Southern California and a senior principal researcher at Microsoft Research. She also served as the inaugural visiting chair for AI and justice at the École normale supérieure in Paris.

Kate Crawford, research professor of communication and science and technology studies, USC; senior principal researcher, Microsoft Research; and author of Atlas of AI. CREDIT: Cath Muscat

More than just a bug to be fixed

Dr. Crawford describes how much of the contemporary criticism of AI focuses on concerns that it contains built-in social biases, so that its use reproduces or even intensifies the unequal treatment of disadvantaged groups. She shows, in her book, that these biases derive from the way AI "learns." Typically, AI systems are developed using training data harvested from publicly available online sources. These data sets generally are not fully representative of the overall population; indeed, they often are skewed, include biased language, or contain gaps that omit whole groups of people or aspects of human experience. Making matters worse, the data must then be classified, which requires both a classification system (itself another vector through which bias can enter) and a huge amount of manual labor to sort the data into the categories chosen. More often than not, this work is performed by poorly paid, crowdsourced workers hired through portals such as Amazon's Mechanical Turk, which means both that the work of classifying involves a hidden form of exploitation and that it is vulnerable to inaccuracies as workers hurry to complete the "piecework" tasks they have taken on. Finally, since newer AI systems often build on the ones that came before, biases incorporated in early systems tend to become part of subsequent technologies. Users are often unaware of these biases in the systems they employ.
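To make the mechanism concrete, here is a minimal sketch, not drawn from the book; the data, group labels, and "hiring" scenario are all invented for illustration. It trains a simple model on skewed historical records and shows that the model reproduces the skew when scoring new, equally qualified candidates:

```python
# Illustrative sketch only: synthetic data and a hypothetical "hiring" label.
# Shows how a model trained on skewed historical data reproduces the skew.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_history(n_a=900, n_b=100, favorable_rate_b=0.2):
    """Historical data: group B is underrepresented and rarely labeled 'hire'."""
    n = n_a + n_b
    group = np.array([0] * n_a + [1] * n_b)   # 0 = group A, 1 = group B
    skill = rng.normal(0, 1, n)               # true qualification, same distribution
    # Historical labels: group A hired on skill; group B mostly rejected regardless.
    hired = np.where(group == 0,
                     (skill > 0).astype(int),
                     (rng.random(n) < favorable_rate_b).astype(int))
    X = np.column_stack([skill, group])       # group membership is visible to the model
    return X, hired

X, y = make_history()
model = LogisticRegression().fit(X, y)

# Score a pool of new candidates with identical qualifications for each group.
skill_new = rng.normal(0.5, 1, 2000)
for g, name in [(0, "group A"), (1, "group B")]:
    X_new = np.column_stack([skill_new, np.full(2000, g)])
    print(f"{name}: predicted hire rate = {model.predict(X_new).mean():.2f}")
# Typical result: group A is recommended far more often than equally
# qualified group B candidates -- the historical skew became model behavior.
```

The point of the sketch is that nothing in the training step is malicious; the model simply learns the historical pattern, which is why audits of training data matter as much as audits of code.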

According to Dr. Crawford, "the AI industry has traditionally understood the problem of bias as though it is a bug to be fixed…" (130). Recent corporate attempts to deal with bias in AI used in hiring illustrate this approach well. Many organizations use AI to screen résumés, but the technology involved has been criticized for unintentionally incorporating a range of racial, gender, and other biases. Dr. Crawford, for example, notes how Amazon developed an AI-based hiring system that rarely recommended female candidates, largely because the system "learned" from data on previous searches that had hired mostly men. Awareness of this problem has led to efforts to detect bias in AI and find ways to eliminate it. The New York Times recently reported (Lohr 2021) that a group of major corporations (including CVS Health, Deloitte, GM, Humana, IBM, Mastercard, Meta, Nike, and Walmart) has joined the Data & Trust Alliance, an organization that has developed an evaluation and scoring system for AI software. The hope is that this system will allow employers to detect bias in their AI systems, combat it, and develop more effective AI tools for evaluating job applications.
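The Times article does not describe the alliance's scoring criteria in detail, but a common building block of such audits is a group fairness metric computed over a model's decisions. The sketch below is a generic illustration, not the Data & Trust Alliance's actual system, and the predictions and group labels are hypothetical. It computes per-group selection rates, the demographic parity difference, and the disparate impact ratio behind the "four-fifths rule" used in U.S. employment guidance:

```python
# Generic bias-audit sketch: two common group fairness metrics for a
# binary screening model. All inputs are hypothetical example data.
import numpy as np

def selection_rates(preds, groups):
    """Fraction of candidates the model advances, computed per group."""
    return {str(g): float(preds[groups == g].mean()) for g in np.unique(groups)}

def demographic_parity_difference(preds, groups):
    """Largest gap in selection rates between any two groups (0 is best)."""
    rates = list(selection_rates(preds, groups).values())
    return max(rates) - min(rates)

def disparate_impact_ratio(preds, groups):
    """Lowest group selection rate divided by the highest. Values below
    0.8 fail the 'four-fifths rule' used in U.S. hiring guidance."""
    rates = list(selection_rates(preds, groups).values())
    return min(rates) / max(rates)

# Hypothetical audit: model predictions (1 = advance) for 10 candidates.
preds = np.array([1, 1, 0, 1, 1, 0, 1, 0, 0, 0])
groups = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

print(selection_rates(preds, groups))                # {'A': 0.8, 'B': 0.2}
print(demographic_parity_difference(preds, groups))  # 0.6
print(disparate_impact_ratio(preds, groups))         # 0.25, fails four-fifths rule
```

In practice, auditors track several such metrics together, since a system can look fair on one measure while failing another.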

Dr. Crawford sees steps such as this one as largely positive, but she feels the problems go deeper. She contends that:

“AI systems are not autonomous, rational, or able to discern anything without extensive computationally intensive training with large data sets or predefined rules and rewards. In fact, artificial intelligence as we know it depends entirely on a much wider set of political and social structures. And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense artificial intelligence is a registry of power” (8).

AI’s construction of race and gender

She argues that AI inevitably involves classification, the imposition of a set of predetermined categories on a complex reality, and that this makes it effectively an exercise of power. Any classification system involves a socially influenced oversimplification of reality, or, as Dr. Crawford puts it: "Machine learning systems are, in a very real way, constructing race and gender: they are defining the world within the terms they have set, and this has long-lasting ramifications for the people who are classified" (146). Dr. Crawford expresses particular concern about how AI is increasingly used in ways that affect ordinary people in civilian life: in support of law enforcement, in hiring, even in attempts to detect ineligible welfare recipients and unemployment insurance abuse.

In her view, AI embodies power in another sense. Those with the resources to obtain the training data on which AI learning depends, to organize the labor of classification and checking it requires, and to build and deploy these complex technological systems tend to be large organizations: governments, corporations, and the like. The "objects" of AI, i.e., ordinary people, generally do not know they are part of the training data, nor are they consulted or given an opportunity to influence the classification systems AI employs. When those systems are proprietary, as is the case for some of the most powerful, ubiquitous ones (Google, Facebook, etc.), it is even less likely that they will be subject to the scrutiny of anyone other than the systems' owners.

So, while Dr. Crawford agrees that AI systems can and should be examined to expose the biases they may embody, she feels that a wider range of voices needs to be included in that examination so that those affected by the work of AI systems can be heard (Corbyn 2021). She also emphasizes that it is not enough just to apply technical fixes to existing AI systems; the power structure those systems serve needs to be understood and problematized. Finally, she calls for a “renewed politics of refusal” (Corbyn 2021), for a willingness to say that not all AI systems are beneficial or needed: “Rather than asking where AI will be applied, merely because it can, the emphasis should be on why it ought to be applied” (226).

References
Corbyn, Z. (2021). Interview: Microsoft's Kate Crawford: "AI Is Neither Artificial Nor Intelligent." The Observer, June 6.

Crawford, K. (2021). Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. New Haven, Conn.: Yale University Press.

Lohr, S. (2021). Group Backed by Top Companies Moves to Combat A.I. Bias in Hiring. The New York Times, Dec. 8.

SWE Magazine (2021). Gender Equity and Social Justice at Uber and Google. State of Women in Engineering 2021: 40–42.

The Guardian (2021). Google Fires Margaret Mitchell, Another Top Researcher on Its AI Team. Feb. 19.

COPYRIGHT 2023 SWE MAGAZINE. ALL RIGHTS RESERVED.