CHALLENGES AND OPPORTUNITIES OF AI, PART I: BIAS IN AI

The promise of artificial intelligence is compelling: Faster computations, improved data analytics, and solutions to previously intractable global problems. But serious challenges with these technologies remain, including possible built-in biases against women and people of color.

By J. Cavanaugh Simpson, SWE Contributor

In the mid-1800s, computer science pioneer and mathematician Ada Lovelace foresaw problems with computer-based technologies that remain amid today’s boom in artificial intelligence.

Lovelace, daughter of British poet Lord Byron, predicted that algorithmic machines would offer tremendous potential, yet troublesome complexity. And she warned against “exaggerated ideas” of the power of such computing, emphasizing the endurance of human intelligence.

“The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform,” Lovelace wrote in Note G of her annotated translation of a paper describing Charles Babbage’s Analytical Engine, an early “programmable computing engine” intended, among other goals, to improve the accuracy of nautical tables, though it was never completed.

Today, AI systems have tackled far more complex goals, including performance of high-volume computing tasks and analysis of large datasets to recognize and learn from patterns and predict outcomes. And so far, at least, even impressive generative AI creates text or images that only resemble — or, in some way, reassemble — the language and art of humans. (See the glossary below.)

Yet a Lovelace-style humanistic dilemma remains: As AI expands in various sectors — from health care to consumer products to warfare — experts increasingly wonder about the impacts on humans. How accurate, fair, or transparent is AI? Who will be helped, and who could be harmed?

Meanwhile, professionals in science, technology, engineering, and mathematics are evaluating whether bias is inherent in AI, and whether these systems will perpetuate or worsen inequalities in STEM education and practice, negatively affecting the diverse communities served by engineering solutions.

“Every single thing is going to get touched by AI. And we have so many developers who are working to build the solutions right,” said Neethu Elizabeth Simon, a senior software engineer at Intel Corp. With the rapid expansion of AI models and tools, developers “should be aware of the dangers of how AI can be misused,” especially regarding diversity, added Simon, an Institute of Electrical and Electronics Engineers senior member who was named a Society of Women Engineers Distinguished New Engineer in 2020.

“Every single thing is going to get touched by AI. And we have so many developers who are working to build the solutions right.”

– Neethu Elizabeth Simon

In many ways, AI can mirror flawed societies, automating historical biases. Examples range from an AI recruiting tool biased against women applicants for Amazon warehouse jobs¹ to facial recognition technologies that are less accurate for people of color,² which has led to false arrests,³ to an AI tool that double-books medical appointments, predominantly for low-income or minority patients, potentially blocking access to quality health care.

Yet all engineers, experts say, can bring their lived experiences and awareness to bear on these issues and the potential outcomes of emerging AI. A primary way to accomplish this is to create or be part of diverse teams throughout the AI engineering process, including design, development, implementation, and testing. Mrinal Karvir, a senior AI software engineering manager and ethical AI champion at Intel, said there’s “a call to action for more women to join this movement.” Karvir, who has taught a course called Practical Approaches to Responsible AI⁴ in SWE’s Advance Learning Center, recognizes the progress women have been making in STEM and urges an “ongoing trend of reskilling and upskilling for us to keep that momentum.” (Listen to Karvir’s interview about bias in AI on Diverse, a SWE Podcast, episode 258.)

“Not only is this a benefit for women and the next generation,” but it also brings “that diverse perspective to help build AI products that are adopted globally,” said Karvir, a contributor to IEEE’s AIS (autonomous intelligent systems) ethics standards review groups.

Diversity in the AI workplace

Contributing to the development of AI means first working on such a project, usually as part of a team, yet gaining a place on those teams can itself be problematic for women. A 2023 report by researchers at The Alan Turing Institute in London looked closely at gender within elite high-tech careers. “Mind the gender gap: Inequalities in the emergent professions of artificial intelligence (AI) and data science”⁵ indicated that there is “structured gender inequality in the career trajectories of professionals in the data science and AI fields. Women are more likely than men to occupy a job associated with less status and pay in these fields,” authors Erin Young, Ph.D., Judy Wajcman, and Laila Sprejer wrote. “Paying attention to these nascent fields is crucial if we are to ensure that women take their rightful place at the forefront of technological innovation.”

“AI systems are fundamentally sociotechnical, so that both technical and social skills are essential to ensuring that algorithmic systems are designed to benefit all citizens.”

– Judy Wajcman

Various studies estimate that women constitute about a third of AI workers worldwide, and only about 12% of AI researchers.⁶ As the “Mind the gender gap” researchers noted, “Inequality in these fields is not only about ethical issues of social justice and economic opportunity, but also crucially about how technologies such as AI are designed.”

Men appear to predominate in skills related to machine learning, databases, and big data, according to the Turing Institute study, which analyzed five years of data from various sources, including the popular job platform LinkedIn. At the same time, the study found that women in AI fields report stronger skill sets in analytics and research, as well as professional skills such as communications and curiosity.

Without greater diversity and wider representation among engineers and computer scientists, “responsible, ethical AI will not be possible,” Wajcman explained via email. “AI systems are fundamentally sociotechnical, so that both technical and social skills are essential to ensuring that algorithmic systems are designed to benefit all citizens.”

Cary Funk, Ph.D., who co-authored various reports on diversity in STEM⁷ as a former director of science and society research at Pew Research Center, pointed out ongoing underrepresentation “for women, but especially for racial and ethnic groups and sometimes the intersectionality of those two.”

There are various ways women and diverse engineers can work to fix bias in AI, according to experts:

  • Tackle bias in the AI workplace: Women engineers, engineers of color, and LGBTQIA+ engineers can acquire AI skills and knowledge via various training and educational venues, employing these technologies hands-on in projects while advocating and working on diverse teams in AI, among other efforts.
  • Mitigate bias in engineering solutions: Engineers can prevent or test for bias in AI tools and systems to identify and mitigate unfair outcomes (see the sketch after this list). Sometimes this means using AI-enabled technologies to seek out bias. Overall, from design to implementation, technologists can leverage their own expertise with diversity to better meet the needs of underrepresented communities globally.
  • Advocate for ethics and bias guidelines, standards, and tips: AI is developing with few guardrails. Yet there are tips, templates, and rules on bias and ethics that computer scientists and engineers can keep in mind. These are offered by government agencies, think tanks, universities, nonprofits, companies, and individual STEM professionals with day-to-day frontline experience.
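
To make the “test for bias” step concrete, here is a minimal sketch in Python of one common check: comparing a model’s selection rates across groups, the calculation behind the “four-fifths rule” used in U.S. employment contexts. The predictions, group labels, and cutoff below are invented for illustration; a real audit would use production outputs, validated group labels, and multiple fairness metrics.

def selection_rate(predictions, groups, group):
    """Fraction of `group` members the model selects (predicts 1)."""
    selected = [p for p, g in zip(predictions, groups) if g == group]
    return sum(selected) / len(selected)

def disparate_impact(predictions, groups, protected, reference):
    """Ratio of the protected group's selection rate to the reference
    group's; values below ~0.8 are a common red flag."""
    return (selection_rate(predictions, groups, protected) /
            selection_rate(predictions, groups, reference))

# Hypothetical screening-model outputs: 1 = advance candidate, 0 = reject.
preds  = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1, 0, 0]
groups = ["women", "women", "men", "men", "women", "men",
          "women", "women", "men", "men", "women", "men"]

ratio = disparate_impact(preds, groups, protected="women", reference="men")
print(f"Disparate impact ratio: {ratio:.2f}")  # ~0.20 here: a strong warning sign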

Bias in the AI workplace

One way bias shows up in AI is through the images AI systems generate in response to certain prompts. Much like the traditional media visualizations on which many AI systems were trained, AI-generated images tend to depict scientists, doctors, and engineers predominantly as white and male. Such systems have proven difficult to retrain or invert, according to research, including an article published late last year in The Lancet.⁸

SWE member Lisa M.P. Munoz, author of the book Women in Science Now: Stories and Strategies for Achieving Equity (Columbia University Press, 2023), recently reviewed the Lancet study, and prompted the Stable Diffusion AI image generation model to create 100 images of engineers. “Of the 100 images showing engineers, one appeared to be female,” Munoz wrote of her experience for the SWE blog, All Together.⁹ “Like for the AI-generated scientists, most appeared older and male. It was again like being back in the 1960s.”
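
For readers who want to try replicating such an experiment, the sketch below generates a batch of images from an open-source Stable Diffusion checkpoint via Hugging Face’s diffusers library. The model ID and prompt wording are assumptions, since Munoz’s exact setup isn’t described; tallying who appears in the saved images still requires human (or classifier) review.

import torch
from diffusers import StableDiffusionPipeline

# Load an assumed open-source checkpoint; a GPU is needed at this size.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# Generate 100 images from a simple prompt and save them for review.
for i in range(100):
    image = pipe("a photo of an engineer").images[0]
    image.save(f"engineer_{i:03d}.png")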

“I’m trying to create that awareness of looking at this holistically, not just that this is something technically cool.”

– Mrinal Karvir

Such images, when overused in advertising, conference presentations, or other public venues, can affect engineers from underrepresented groups, lessening their desire to apply for jobs or impacting workplace confidence, various studies indicate. One such study published in Frontiers in Psychology, “Double Trouble: How Being Outnumbered and Negatively Stereotyped Threatens Career Outcomes of Women in STEM,”¹⁰ noted, “Women STEM students who watched a video about an engineering conference reported lower belonging and lower desire to participate in the conference when the men in the video were overrepresented compared to when the gender composition was equal.”

Gerlinde Weger, an AI ethics consultant with nearly 30 years of experience in advanced technologies globally, serves as the technology ethicist for the IEEE Standards Association. Weger said now is the time to be vigilant, including by challenging possible AI-generated images, such as those shown at conferences, that perpetuate biases. “If someone shows me something that is highly biased, I will ask them ‘Why?’” she said. Overall, image viewers and creators can “be aware and critically discern if what is shown makes sense.”

Nikita Tiwari, a senior AI enabling engineer at Intel who was born in Mumbai, India, understands how viewing such images can have a deleterious effect. “If you can’t see it, you can’t be it,” said Tiwari, who is an IEEE senior member specializing in AI bias and ethics. “There’s a type of ripple effect when you see someone like you in a position and then unconsciously feel, ‘OK, maybe that’s right.’”

Tiwari emphasized that when someone experiences bias in the workplace, including in AI-related projects, it’s essential to “be confident enough to speak up. And when there’s a room full of men, and you’re the only woman, I can understand that it takes a lot of courage.”

Such scenarios can lead to what are called social identity threats, which women and minority engineers might feel when outnumbered in a workplace, noted the “Double Trouble” study. Diminished career confidence and sense of belonging can also contribute to long-standing retention issues for women engineers, according to SWE research.¹¹

Overall, a “historical association of masculinity with computing, in particular, frontier technical skills” continues to fuel gender gaps, the Turing Institute study noted. Wajcman, a fellow of the institute and an emerita professor at the London School of Economics, has studied these issues for decades. “Technology has changed so radically, but somehow women remain marginalized in these kinds of occupations, and consequently miss out on their fair share of interesting and well-paid jobs,” she noted.

“What role will engineers have in thinking through the societal implications of these developments? I see a lot of people talking about this.”

– Cary Funk

These wider disparities reflect what’s become known as an “AI diversity crisis,” as described in an IEEE Software article.¹² “The breakthrough of modern AI has not yet been able to improve on existing diversity challenges regarding gender, race, geography, and other factors,” authors Bram Adams and Foutse Khomh wrote in their abstract.

Like any threshold technology, however, AI could offer opportunities for women and diverse technologists to take the lead: to learn and employ frontier tech, pursue AI ethics, and create diverse research and development teams. (Look for part 2 of this series, on the opportunities provided by AI, in the SWE Magazine Summer issue.) Another key: Make sure people know about your AI or other new tech skills and projects at work, at conferences, and via networking. Noted Tiwari: “Making yourself visible is still the biggest gap.”

Mitigating AI bias in the world

In a March 2023 panel, “Lightning Session: Women in AI,”¹³ co-hosted by IEEE Women In Engineering (WIE) and streamed via YouTube, Karvir pointed to some AI fallout cases, including a secretive algorithm that apparently determines ride-share drivers’ pay; generative AI products such as ChatGPT and other generative pre-trained transformers that could aid malware authors; and a planned AI recruitment tool for warehouse jobs that was reportedly dropped by Amazon¹⁴ because no one could “stop the tool from discriminating against women.” Such cases are cited in the ongoing AI Incident Database,¹⁵ which continues to accept reports for its records.

These problems have sparked a broader and pivotal trend in the engineering field: a deeper focus on how AI-engineered products or solutions will actually impact society.

Dr. Funk, currently a science and society senior advisor for the Aspen Institute, asked, “What role will engineers have in thinking through the societal implications of these developments? I see a lot of people talking about this.”

“There’s the potential for discrimination, biases, and unfairness when it comes to the models and resulting output.”

– Samantha Coyle

For example, she pointed to higher education programs such as the University of Virginia’s department of engineering and society. According to its website, the department aims to illuminate the social consequences of science, technology, and engineering, and conversely, how STEM fields are shaped by social conditions.

There is also an influx of new undergraduate engineering degrees in AI. (Read how a student is helping develop the Ivy League’s first AI degree at the University of Pennsylvania in “Career Pathways.”) The University of California, Santa Cruz, has an ethics-based certificate program on humanizing AI. The certificate is intended to foster students’ “critical thinking about social and cultural systems that inform these technologies,” the program website notes. And Carnegie Mellon University, with its Center for Human Rights Science, has launched an AI Institute for Societal Decision Making, funded by $20 million from the National Science Foundation. It is one of seven new NSF-funded National Artificial Intelligence Research Institutes.¹⁶

Additionally, women engineers and technologists can pursue responsible approaches by learning how to apply AI to their own projects and urging or adhering to company AI principles, such as those instituted at Intel,¹⁷ Microsoft,¹⁸ and Google.¹⁹ Among the overall goals are building AI datasets representative of all users, not just a dominant group; being vigilant for biases; and testing for such issues throughout the engineering process. Women engineers can also seek roles that empower them to audit AI-enabled devices, software, platforms, or other products or systems for privacy, fairness, accuracy, and transparency.
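
As one small illustration of the “representative datasets” goal, the sketch below compares each group’s share of a training set with its share of the population a product is meant to serve. The group labels, counts, target shares, and five-point flag are all invented; real checks depend heavily on how groups are defined and measured.

from collections import Counter

def representation_gaps(train_labels, population_shares):
    """Each group's share of the training data minus its share of the
    population the product serves (negative = underrepresented)."""
    counts = Counter(train_labels)
    total = sum(counts.values())
    return {g: counts.get(g, 0) / total - share
            for g, share in population_shares.items()}

train_labels = ["A"] * 700 + ["B"] * 250 + ["C"] * 50   # hypothetical dataset
population   = {"A": 0.50, "B": 0.30, "C": 0.20}        # hypothetical user base

for group, gap in representation_gaps(train_labels, population).items():
    flag = "  <-- underrepresented" if gap < -0.05 else ""
    print(f"{group}: {gap:+.2f}{flag}")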

Overall, there’s a growing movement for humans to remain essential in the development and responsible use of AI, overseeing its decision-making. For technologists pursuing AI, Karvir said, “I’m trying to create that awareness of looking at this holistically, not just that this is something technically cool.”

Samantha Coyle — a software engineer at the startup Diagrid Inc., and recipient of the SWE Rising Technical Contributor Award in 2023 — sees both sides of the AI boom in engineering. “With everything, you get the good with the bad, and there’s a lot of exciting opportunities that will arise,” Coyle said. “Specifically for women, there are new roles, new opportunities.”

“Everyone wants to be treated fairly, which is nondiscrimination; everyone wants to feel safe.”

– Gerlinde Weger

Machine learning jobs, in particular, can pay well. Engineers pursuing them are advised to develop strong programming abilities along with advanced data analysis and research skills.

With respect to the bad side of the AI boom, “there’s the potential for discrimination, biases, and unfairness when it comes to the models and resulting output,” Coyle said, emphasizing that much depends on the data on which each system was trained.

A few years ago, while working for Intel, Coyle joined a computer vision project that took a deep dive into AI. The project recorded the shopping habits of people in retail spaces, tracking what they looked at while protecting privacy. Coyle noticed that women with strollers perusing the freezer aisles were not in the dataset: their shape did not fit the subject profile, so the shopping habits of moms with young children were not being incorporated into the results. AI can work “better as a tool because of what women bring to the table,” said Coyle, who added that diverse profiles should be included in such images for data training and collection.

“Like all brand-new technologies, we have to figure out how to make it; how to build it in a way that allows it to perform at the highest level and minimizes any dangers.”

– Christina Silcox

Even broader impacts can be found in potentially higher-risk areas such as health care, one of AI’s top markets globally. (The AI health care market is expected to reach nearly $21 billion this year and to surpass a mind-boggling $148 billion by 2029, according to the market research platform MarketsandMarkets.²⁰ And that is likely true even though digital health funding ebbs and flows.²¹)

Christina Silcox, Ph.D., research director for digital health at the Duke-Margolis Institute for Health Policy at Duke University, is closely monitoring AI’s potential pitfalls and benefits. AI could, for example, assist health care providers who serve an expanding older population, she said. Such systems could offer faster triage and diagnosis, improve treatment option recommendations, and help with administrative tasks, such as charting patients’ progress.

“We have a health care workforce that’s not going to be able to keep up,” Dr. Silcox said. “AI can make the job less stressful, make the job more efficient,” which in turn can “help combat burnout.”

“Like all brand-new technologies, we have to figure out how to make it; how to build it in a way that allows it to perform at the highest level and minimizes any dangers,” Dr. Silcox said. Those dangers might include AI software tools that don’t work as expected, especially for certain subgroups, or not as intended overall.

Dr. Silcox co-authored “Preventing Bias and Inequities in AI-Enabled Health Tools,”²² which lists concerns such as “inequitable framing,” or programming tools that solve a surface issue yet unintentionally worsen health inequities. One revealing example is “no-show” algorithms, which can predict which patients might not keep appointments. To avoid lost revenue, health care offices might double-book those patients.

“The problem with that is Black, Latino, and American Indian or Alaskan Native patients disproportionately experience systemic barriers to care, such as lack of reliable transportation and limited access to paid sick leave or affordable health insurance that may prevent them from getting to the doctor’s office,” Dr. Silcox noted in an interview with The Pew Charitable Trusts, which funded the study.²³

Dr. Silcox later explained, “If we double-booked people that we think are going to not show up, [that] ends up with some people getting not as good care. If both people showed up, we aren’t able to care for two people in the time that we allotted for one person.” A better-framed AI model or tool could focus on supportive service options, such as transportation, reminder calls, or a video consultation.
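
To show what that reframing can look like in practice, here is a deliberately simplified sketch: the same hypothetical risk score drives two different responses, overbooking versus supportive outreach. The threshold, field names, and actions are invented; a real system would sit inside scheduling and electronic health record workflows.

THRESHOLD = 0.6  # assumed cutoff for "high no-show risk"

def plan_double_booking(patient, no_show_risk):
    """Original framing: protect revenue by overbooking high-risk slots,
    which shortchanges both patients when they do show up."""
    return ["double_book_slot"] if no_show_risk > THRESHOLD else []

def plan_supportive(patient, no_show_risk):
    """Reframed: treat a high score as a signal of barriers to care and
    respond with support rather than a penalty."""
    if no_show_risk <= THRESHOLD:
        return []
    plan = ["reminder_call"]
    if patient.get("transport_barrier"):
        plan.append("arrange_transportation")
    if patient.get("telehealth_ok"):
        plan.append("offer_video_visit")
    return plan

patient = {"transport_barrier": True, "telehealth_ok": True}  # hypothetical record
print(plan_double_booking(patient, 0.8))  # ['double_book_slot']
print(plan_supportive(patient, 0.8))      # supportive actions instead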

To start addressing concerns in health care AI, various hospitals have created oversight systems. Duke, for example, evaluates clinical algorithms created for use at Duke Health via its Algorithm-Based Clinical Decision Support framework.

At the same time, AI can also be used to detect destructive biases in health care and other sectors. In a recent study,²⁴ New Zealand researchers used machine learning to detect systemic bias against indigenous Māori people in health care, a step “toward developing a model of socially responsible and fair AI.”

Guidelines, standards, and tips

Much like the society-changing automobile, transformative technologies such as AI roll out into the real world first; safety laws, standards, and guidelines filter in afterward to address problems. While various AI standards have been in the works for several years, public leaders, industry organizations, international nonprofits, and companies are now making a concerted push to build frameworks addressing privacy, diversity, bias, transparency, and other parameters that lessen AI risk. The McKinsey Global Institute, for example, in its report “Tackling bias in artificial intelligence (and in humans),”²⁵ identified areas in which “more human vigilance is necessary to critically analyze the unfair biases that can become baked in and scaled by AI systems.” Those areas include the judicial system and résumé evaluation, the paper states.

In Europe, for example, the EU AI Act²⁶ has set limits on AI — including strict requirements on sectors considered higher risk, such as health care. In the United States, President Joe Biden expanded his AI policies in October 2023 with the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,²⁷ which included a requirement that “developers of the most powerful AI systems share their safety test results and other critical information with the U.S. government.”

For new projects, STEM professionals can find templates and tools for mitigating bias and setting ethical goals for their AI systems through UNESCO, which outlines recommendations that include ensuring diversity and preventing harm.²⁸ And the U.S. National Institute of Standards and Technology issued technical insights via its “AI Risk Management Framework.”²⁹ Other nonprofit groups focus on equity, accountability, and civil rights, such as the Algorithmic Justice League, founded by Joy Buolamwini, Ph.D., author of the book Unmasking AI: My Mission to Protect What Is Human in a World of Machines (Random House, October 31, 2023). (Read a review of the book in Media.)

And leading within the engineering field is IEEE, with IEEE CertifAIEd,³⁰ an ethics certification program applicable to various sectors. Through this program, a panel of experts can review an AI system for ethical values and risks, including algorithmic bias.

In the end, experts say, the backbone of any guidelines should match the essential tenets of humanity. As AI ethics consultant Weger noted, “Everyone wants to be treated fairly, which is nondiscrimination; everyone wants to feel safe,” and to have agency.

Coyle echoed those thoughts for addressing ethics and biases in AI, including machine learning and related technologies. “Getting it right is really hard,” Coyle said. “I think we just have to try our best as humans.”

Glossary of AI Terms

Artificial intelligence (AI)
The simulation of human intelligence processes by machines or computer systems. AI can mimic human capabilities such as communication, learning, and decision-making.

AI ethics
The issues that AI stakeholders such as engineers and government officials must consider to ensure that the technology is developed and used responsibly. This means adopting and implementing systems that support a safe, secure, unbiased, and environmentally friendly approach to artificial intelligence.

Algorithm
A sequence of rules given to an AI to perform a task or solve a problem. Common algorithms include classification, regression, and clustering.

Big data
The large datasets that can be studied to reveal patterns and trends to support business decisions. It’s called “big” data because organizations can now gather massive amounts of complex data using data collection tools and systems. Big data can be collected very quickly and stored in a variety of formats.

Chatbot
A software application that is designed to imitate human conversation through text or voice commands.

Data mining
The process of sorting through large datasets to identify patterns that can improve models or solve problems.

Data science
An interdisciplinary field of technology that uses algorithms and processes to gather and analyze large amounts of data to uncover patterns and insights that inform business decisions.

Generative AI
A type of technology that uses AI to create content, including text, video, code, and images. A generative AI system is trained using large amounts of data, so that it can find patterns for generating new content.

Guardrails
Restrictions and rules placed on AI systems to make sure that they handle data appropriately and do not generate unethical content.

Hallucination
An incorrect response from an AI system, or false information in an output that is presented as factual information.

Large language model (LLM)
An AI model that has been trained on large amounts of text so that it can understand language and generate human-like text.

Machine learning
A subset of AI that incorporates aspects of computer science, mathematics, and coding. Machine learning focuses on developing algorithms and models that help machines learn from data and predict trends and behaviors, without human assistance.

Prompt
An input that a user feeds to an AI system to get a desired result or output.

Turing test
A test created by computer scientist Alan Turing to evaluate a machine’s ability to exhibit intelligence equal to humans, especially in language and behavior. When facilitating the test, a human evaluator judges conversations between a human and machine. If the evaluator cannot distinguish between responses, then the machine passes the Turing test.

Voice recognition/speech recognition
A method of human–computer interaction in which computers listen to and interpret human dictation (speech) and produce written or spoken outputs. Examples include Apple’s Siri and Amazon’s Alexa, voice assistants that enable hands-free requests and tasks.


Source: Abbreviated from Coursera, www.coursera.org/articles/ai-terms, accessed 4/22/24.


1. “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018.
2. Project Gender Shades, MIT Media Lab, January 2017 to August 2020.
3. “When Artificial Intelligence Gets It Wrong,” Innocence Project, September 19, 2023.
4. “Practical Approaches to Responsible AI,” SWE Advance Learning Center.
5. “Mind the gender gap: Inequalities in the emergent professions of artificial intelligence (AI) and data science,” Erin Young, Judy Wajcman, and Laila Sprejer, The Alan Turing Institute, August 27, 2023.
6. “AI Is the Future—But Where Are the Women?” Wired, August 17, 2017.
7. “Diversity in the STEM workforce varies widely across jobs,” Cary Funk and Kim Parker, Pew Research Center, January 9, 2018.
8. “Reflections before the storm: the AI reproduction of biased imagery in global health visuals,” Arsenii Alenichev, Patricia Kingori, and Koen Peeters Grietens, The Lancet, August 9, 2023.
9. “Computer, Draw a Scientist: Do AI Images Reject or Reflect Gender Stereotypes?” All Together, a SWE blog, October 13, 2023.
10. “Double Trouble: How Being Outnumbered and Negatively Stereotyped Threatens Career Outcomes of Women in STEM,” Ruth van Veelen, Belle Derks, and Maaike Dorine Endedijk, Frontiers in Psychology, February 18, 2019.
11. “Retention in the Engineering Workforce,” Society of Women Engineers research, 2021.
12. “The diversity crisis of software engineering for artificial intelligence,” Bram Adams and Foutse Khomh, IEEE Software, August 21, 2020.
13. “March Lightning Session: Women in AI,” Hacker Dojo on YouTube, March 2023.
14. “Amazon scraps secret AI recruiting tool that showed bias against women,” Reuters, October 10, 2018.
15. AI Incident Database, https://incidentdatabase.ai/
16. “NSF announces 7 new National Artificial Intelligence Research Institutes,” U.S. National Science Foundation, May 4, 2023.
17. Responsible AI Principles, Intel.
18. Principles and Approach, Microsoft.
19. Our Principles, Google.
20. “Artificial Intelligence (AI) in Healthcare Market — Global Forecast to 2029,” MarketsandMarkets, January 2024.
21. “2023 year-end digital health funding: Break on through to the other side,” Insight, Rock Health Capital, January 8, 2024.
22. “Preventing Bias and Inequities in AI-Enabled Health Tools,” Duke-Margolis Institute for Health Policy, July 6, 2022.
23. “How to Understand and Fix Bias in Artificial Intelligence-Enabled Health Tools,” Pew, August 24, 2022.
24. “Data and model bias in artificial intelligence for healthcare applications in New Zealand,” Frontiers in Computer Science, December 22, 2022.
25. “Tackling bias in artificial intelligence (and in humans),” McKinsey Global Institute, June 6, 2019.
26. “Artificial Intelligence Act: MEPs adopt landmark law,” European Parliament News, March 13, 2024.
27. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, The White House, October 30, 2023.
28. “Ethics of Artificial Intelligence,” UNESCO, unesco.org, accessed April 9, 2024.
29. “AI Risk Management Framework,” NIST Information Technology Laboratory, January 26, 2023.
30. IEEE CertifAIEd, IEEE Standards Association.

COPYRIGHT 2024 SWE MAGAZINE. ALL RIGHTS RESERVED.