NLM Science, Technology, and Society Lecture - Confronting Race, Gender, & Ability Bias in Tech

National Library of Medicine
11 Mar 2024 · 62:24
Educational · Learning

TL;DR: The transcript from the Fourth Annual NLM Lecture, hosted by the Office of Strategic Initiatives, delves into the societal and ethical implications of artificial intelligence (AI), particularly bias in AI systems. Guest lecturer Meredith Broussard, an associate professor at New York University and author, discusses misconceptions about AI, emphasizing that it is not magic and does not operate independently of human biases. She highlights the importance of considering the social context and historical data that train AI systems, which often reflect past discrimination and inequality. Broussard shares her personal experience with breast cancer and how it shaped her understanding of AI in medicine. She stresses the need for collaboration across disciplines to audit and improve AI systems, advocating for a more nuanced and contextual approach to AI development and use, especially in high-stakes areas like healthcare and elections. The lecture underscores the critical role of human involvement in creating equitable and trustworthy AI technologies.

Takeaways
  • 📚 The importance of understanding societal and ethical implications in research, particularly in the development of Artificial Intelligence (AI) technologies.
  • 🧑‍🏫 Introducing Meredith Broussard, an author and expert in AI ethics and investigative reporting, who provides insights into bias in AI and its impact on society.
  • 🚫 Debunking the myth of AI as magic or Hollywood fantasy, emphasizing that AI is a complex reality based on mathematical patterns and data.
  • 🔍 Discussing the issue of algorithmic bias, particularly how past societal problems like discrimination and inequality are reflected in the data used to train AI systems.
  • 🏦 Highlighting real-world examples of AI bias, such as in mortgage-approval algorithms and facial recognition technology, and their societal consequences.
  • 🤖 The need for careful consideration of potential biases when developing AI technologies to avoid perpetuating harmful stereotypes and discrimination.
  • 📉 Addressing the challenges of racial disparities in medical diagnosis and AI-read scans, and questioning the use of race as a biological determinant in medical algorithms.
  • 📈 The complexity of changing AI systems and pipelines once discriminatory assumptions are embedded, emphasizing the resource-intensive nature of such efforts.
  • 🤔 Encouraging a critical perspective on AI, considering it better suited for low-stakes, mundane tasks rather than high-stakes or general-purpose applications.
  • 🤝 The value of interdisciplinary collaboration, including humanists, social scientists, and technologists, to audit algorithms and ensure ethical AI development.
  • 📝 The necessity of algorithmic accountability, with recommendations for resources and methodologies to evaluate AI systems in context and assess their impact on society.
Q & A
  • What is the main focus of the Fourth Annual NLM Lecture?

    -The main focus of the Fourth Annual NLM Lecture is to raise awareness around societal and ethical implications in the conduct of research, particularly concerning the development of artificial intelligence technologies.

  • Who is the guest lecturer for the event?

    -The guest lecturer is Meredith Broussard, an Associate Professor at New York University's Arthur L. Carter Journalism Institute and author of 'More Than a Glitch' and 'Artificial Unintelligence.'

  • What is the significance of discussing bias in AI?

    -Discussing bias in AI is significant because it helps to develop strategies to mitigate bias, which is crucial for creating trustworthy AI systems that do not perpetuate or exacerbate existing social inequalities and discrimination.

  • What are some of the key documents guiding the development of trustworthy AI?

    -Key documents include the Blueprint for an AI Bill of Rights from the White House Office of Science and Technology Policy and the AI Risk Management Framework published by the National Institute of Standards and Technology (NIST).

  • How does the National Library of Medicine (NLM) contribute to the advancement of equitable health outcomes?

    -The NLM contributes by housing and making accessible valuable data resources, leading in health data standards development, and driving data science through its extramural and intramural research programs.

  • What is the role of the audience in the lecture?

    -The audience is expected to engage in important conversations about AI ethics and bias, and to participate in a question-and-answer portion following the lecture.

  • Why is it important to challenge the idea that AI systems are unbiased?

    -It is important to challenge this idea because AI systems can inadvertently perpetuate historical patterns of discrimination, racism, sexism, and structural inequality present in the data they are trained on, leading to biased outcomes.

  • What is the significance of the term 'Technochauvinism' in the context of AI?

    -Technochauvinism refers to the belief that AI or computational solutions are inherently objective, unbiased, or superior. Challenging this perspective is important to recognize the potential for bias and the need for careful consideration when applying AI.

  • How can algorithmic bias manifest in real-world systems?

    -Algorithmic bias can manifest in systems like mortgage-approval algorithms, which have been found to deny borrowers of color more frequently than white counterparts, or in facial recognition systems that are less accurate for women and people with darker skin tones.
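    As a rough illustration of how such a disparity is detected (hypothetical data and group labels here, not The Markup's actual dataset or methodology), one can compare denial rates across groups and take their ratio:

    ```python
    # Illustrative sketch with invented data: measuring the kind of
    # denial-rate disparity described for mortgage-approval algorithms.
    from collections import Counter

    def denial_rates(decisions):
        """decisions: list of (group, approved) pairs -> denial rate per group."""
        totals, denials = Counter(), Counter()
        for group, approved in decisions:
            totals[group] += 1
            if not approved:
                denials[group] += 1
        return {g: denials[g] / totals[g] for g in totals}

    def disparity_ratio(rates, group_a, group_b):
        """How many times more often group_a is denied than group_b."""
        return rates[group_a] / rates[group_b]

    # Hypothetical outcomes: 30% of group_a denied vs. 15% of group_b.
    decisions = (
        [("group_a", False)] * 30 + [("group_a", True)] * 70
        + [("group_b", False)] * 15 + [("group_b", True)] * 85
    )
    rates = denial_rates(decisions)
    print(rates)                                         # {'group_a': 0.3, 'group_b': 0.15}
    print(disparity_ratio(rates, "group_a", "group_b"))  # 2.0
    ```

    A real audit, of course, must also control for legitimate underwriting factors (income, debt-to-income ratio, loan size) before attributing the residual gap to the algorithm.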

  • What is the role of the Algorithmic Accountability Act?

    -The Algorithmic Accountability Act, although not explicitly detailed in the transcript, generally aims to ensure that automated decision-making systems are fair and transparent, and that their outcomes are free from discrimination or bias.

  • How does the speaker suggest we approach the development of AI systems to ensure they are used for social good?

    -The speaker suggests a multidisciplinary approach involving collaboration between humanists, social scientists, and technologists. This includes algorithmic auditing, benchmarking, and considering the social implications and potential biases before implementing AI systems.

Outlines
00:00
📚 Introduction to the NLM Lecture and Guest Speaker

The Fourth Annual NLM Lecture is organized to raise awareness about societal and ethical implications in research. The guest lecturer, Meredith Broussard, an Associate Professor at NYU and author, is introduced. Her expertise lies in AI ethics, investigative reporting, and using data analysis for social good. The lecture aims to discuss strategies to mitigate bias in AI technologies and is set against the backdrop of increasing discussions around trustworthy AI and recent guiding documents from the U.S. government.

05:02
🧠 Understanding AI and Algorithmic Bias

Meredith Broussard emphasizes that AI is not magic and dispels Hollywood's portrayal of AI. She discusses the importance of understanding AI's limitations and potential biases. Using the concept of 'Technochauvinism,' she explains that AI systems can default to discrimination because they learn from historical data, which contains patterns of past discrimination. The Markup's investigation into mortgage-approval algorithms is highlighted as an example of this bias.

10:02
πŸ–ΌοΈ Addressing Bias in Automated Systems

The paragraph discusses the need to adjust algorithms to ensure fairness, particularly in lending practices. It also touches on the issues with facial recognition systems, which often exhibit bias based on gender and skin color. The importance of diverse training data is stressed, and the use of AI in high-stakes contexts, such as policing, is questioned. The potential regulation of such technologies is suggested to prevent misuse and discrimination.

15:09
🤔 Personal Encounters with AI in Medicine

Meredith shares her personal experience with breast cancer during the pandemic and her subsequent exploration of AI's role in medical diagnosis. She recounts her attempt to use an open-source AI to analyze her own medical scans, which led to a series of challenges and misconceptions about AI capabilities in medical imaging and diagnosis.

20:13
📊 Misconceptions and Realities of AI in Cancer Detection

The speaker clarifies misconceptions about AI in cancer detection, explaining that AI typically identifies areas of concern in scans and assigns a score rather than providing a definitive diagnosis. She also discusses the complexities of racial disparities in medical statistics and the limitations of AI in accounting for these factors.

25:19
πŸ” The Role of Race in Medical and AI Systems

Meredith Broussard addresses the issue of race in medical algorithms and AI systems, noting that race is a social construct rather than a biological reality. She provides examples where race has been incorrectly used in medical calculations, such as kidney disease diagnosis and concussion assessments, and the recent changes made to remove race as a variable in these algorithms.

30:22
🚫 The Problematic Embedding of Race in AI

The paragraph discusses the issues with large language models (LLMs) and their propagation of race-based medicine. It highlights the biases in ML systems and the need for more thoughtful and less discriminatory AI applications in healthcare and other fields.

35:24
🤔 Reflecting on AI's Role and Development

Broussard expresses skepticism about the timeline for improving AI, suggesting that many easy problems have already been solved by computers and what remains are complex sociotechnical problems. She references the AI Democracy Projects by Proof News, which benchmarked leading AI models for their potential to generate misinformation, especially in the context of U.S. elections.

40:28
🤝 The Importance of Collaboration in AI

The speaker advocates for a collaborative approach to AI development, involving humanists, social scientists, and biomedical researchers. She emphasizes the need for algorithmic auditing and benchmarking to ensure AI technologies are accessible and unbiased. Broussard suggests that AI is better suited for low-stakes, mundane tasks rather than high-stakes decisions.

45:29
📚 Resources for Further Learning

Meredith provides a list of resources for further learning about racial disparities in medicine and AI, including books, social media accounts, and organizations focused on algorithmic accountability and responsible AI practices.

50:31
🤔 Audience Q&A and Final Thoughts

The session concludes with a Q&A where the audience asks about the specifics of harm in the context of election information, the role of algorithmic auditing, and the importance of considering social determinants of health when evaluating AI systems. The speaker emphasizes the need for granular, context-specific evaluations of AI and the importance of consulting with affected communities in technology design.

Keywords
💡Algorithmic Bias
Algorithmic bias refers to the systemic errors that can occur in machine learning and AI systems due to the use of biased or unrepresentative data. It is a central theme in the video, where the speaker discusses how past patterns of discrimination are reflected in the data used to train AI systems, leading to unfair outcomes. An example from the script is the discussion about mortgage-approval algorithms that are more likely to deny borrowers of color.
💡Artificial Intelligence (AI)
Artificial Intelligence (AI) is the simulation of human intelligence processes by machines, especially computer systems. In the video, the speaker emphasizes that AI is not magic and does not operate as depicted in Hollywood. Instead, it is a complex field that involves math and computation. The video discusses the ethical and societal implications of AI, particularly concerning bias.
💡Data Analysis
Data analysis involves examining raw data with statistical algorithms to gain insights and draw conclusions. The video highlights the importance of data analysis in AI ethics, as the data used to train AI systems can contain biases that affect the systems' outcomes. The speaker uses the example of facial recognition systems that work better for certain demographics, illustrating the need for careful data analysis to mitigate bias.
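The facial-recognition example can be made concrete: a single aggregate accuracy figure can hide large per-group gaps. A minimal sketch (group labels and numbers invented for illustration only):

```python
# Hypothetical sketch: overall accuracy looks fine while one subgroup
# fares much worse, as in the facial-recognition audits discussed above.
def accuracy(pairs):
    """pairs: list of (prediction, ground_truth)."""
    return sum(pred == truth for pred, truth in pairs) / len(pairs)

def per_group_accuracy(records):
    """records: list of (group, prediction, ground_truth)."""
    by_group = {}
    for group, pred, truth in records:
        by_group.setdefault(group, []).append((pred, truth))
    return {g: accuracy(pairs) for g, pairs in by_group.items()}

# Invented results: 95/100 correct for one group, 65/100 for the other.
records = (
    [("lighter_skin", 1, 1)] * 95 + [("lighter_skin", 0, 1)] * 5
    + [("darker_skin", 1, 1)] * 65 + [("darker_skin", 0, 1)] * 35
)
overall = accuracy([(p, t) for _, p, t in records])
print(overall)                      # 0.8
print(per_group_accuracy(records))  # {'lighter_skin': 0.95, 'darker_skin': 0.65}
```

Reporting only the 0.8 aggregate would hide the 30-point gap, which is why disaggregated evaluation is a standard step in algorithmic auditing.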
💡Health Disparities
Health disparities refer to differences in the quality of health care or health status across different population groups. The video addresses health disparities in the context of AI in medicine, particularly regarding racial differences in cancer detection and treatment outcomes. The speaker shares a personal story about being diagnosed with breast cancer and the subsequent questions about racial bias in medical statistics and AI diagnostics.
💡Machine Learning
Machine learning is a subset of AI in which algorithms learn patterns from data and use those patterns to make predictions or decisions. In the video, the process of machine learning is described as creating a model from data, which can then be used for various tasks. The issue of algorithmic bias is directly tied to the quality and representativeness of the data used to train these models.
💡Misinformation
Misinformation refers to false or misleading information that is spread, often unintentionally. The video discusses the role of AI in generating misinformation, particularly in the context of elections. The speaker mentions a study that benchmarked leading AI models to evaluate the misinformation they generated regarding U.S. elections.
💡Racial Disparities
Racial disparities highlight the differences in outcomes and experiences across different racial groups. In the video, the speaker talks about racial disparities in the context of medical diagnosis and AI-read scans, noting that certain AI models were differentially accurate based on race, which is a significant concern as race is a social construct and not a biological determinant.
💡Social Constructs
Social constructs are concepts that are created and developed through social interaction. The video emphasizes that race is a social construct rather than a biological reality. The speaker criticizes the embedding of race in medical systems and algorithms as if it were a biological reality, which can lead to discriminatory practices and outcomes.
💡Technochauvinism
Technochauvinism is the over-reliance on or overestimation of technology's ability to solve problems. The video criticizes this perspective, arguing that not all tasks are best suited to computational solutions. The speaker advocates for a more nuanced approach where the right tool is chosen for the task, whether it be a computer or something else.
💡Transparency
Transparency in the context of the video refers to the openness and clarity with which AI systems operate and make decisions. The speaker discusses the need for transparency in AI, particularly regarding how data is used and how decisions are made. This is linked to the concept of algorithmic auditing, where the inner workings of algorithms are examined to understand potential biases.
💡Algorithmic Accountability
Algorithmic accountability involves holding AI systems responsible for their outcomes and ensuring they operate fairly and without discrimination. The video stresses the importance of algorithmic accountability, with the speaker describing efforts to audit and evaluate AI systems to ensure they are not perpetuating biases or causing harm. The speaker also mentions the creation of tools to facilitate this auditing process.
Highlights

The lecture emphasizes the societal and ethical implications of research in artificial intelligence (AI), aiming to spark important conversations within the biomedical research community.

Meredith Broussard, the guest lecturer, is an associate professor at New York University and author of 'More Than a Glitch' and 'Artificial Unintelligence', discussing AI ethics and its impact on society.

AI is not magic and does not operate on Hollywood's vision, contrary to public misconceptions; it's a complex and beautiful field of study based on math and data.

AI systems can inadvertently discriminate by default due to historical patterns of discrimination reflected in the training data.

The Markup's investigation revealed mortgage-approval algorithms were biased, denying borrowers of color at a higher rate than white counterparts.

Facial recognition systems were found to have accuracy disparities based on gender and skin tone, with the worst performance on women with dark skin.

The use of facial recognition in high-risk contexts, such as policing, needs careful consideration due to its potential for misidentification and discrimination.

Broussard shares her personal experience with breast cancer and her investigation into AI's role in medical diagnosis, uncovering her own misconceptions about AI in medicine.

AI in cancer detection identifies areas of concern in scans and assigns scores, but these scores are not definitive predictions of cancer.

There are racial disparities in medical diagnosis and AI-read scans, with some models showing different accuracies based on race.

Race is a social construct, not a biological reality, yet it is sometimes erroneously embedded in medical algorithms and AI systems.

The kidney disease diagnosis algorithm previously included a race correction, which has now been removed due to its discriminatory nature.
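This race correction likely refers to eGFR estimation; as one published example, the 2009 CKD-EPI creatinine equation multiplied the estimate by 1.159 for Black patients, a term removed in the 2021 refit. A sketch using the 2009 coefficients (illustration only, not clinical code):

```python
# 2009 CKD-EPI creatinine equation, including the "race correction"
# later removed. Coefficients as published in 2009; illustrative only.
def egfr_2009(scr_mg_dl, age, female, black):
    """Estimated GFR (mL/min/1.73 m^2) from serum creatinine (mg/dL)."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    gfr = (141
           * min(scr_mg_dl / kappa, 1.0) ** alpha
           * max(scr_mg_dl / kappa, 1.0) ** -1.209
           * 0.993 ** age)
    if female:
        gfr *= 1.018
    if black:
        gfr *= 1.159  # the race term removed from the 2021 equation
    return gfr

# Identical labs, identical patient; only the race flag differs,
# shifting the estimate (and hence diagnosis thresholds) by 15.9%.
a = egfr_2009(1.2, age=50, female=False, black=True)
b = egfr_2009(1.2, age=50, female=False, black=False)
print(round(a / b, 3))  # 1.159
```

Because clinical thresholds (e.g., for referral or transplant listing) are cut points on this score, a fixed multiplicative race term systematically shifted Black patients relative to those thresholds.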

Large language models (LLMs) can propagate harmful, race-based medicine because they cannot evaluate the quality of their source material, effectively treating popularity as a proxy for quality.

Broussard questions the expectation that AI will eventually overcome its biases, given the persistence of social problems throughout history.

The AI Democracy Projects by Proof News assessed leading AI models for their potential to generate misinformation in the context of U.S. elections.

A four-dimensional approach to evaluating AI performance includes assessing for inaccuracy, harm, incompleteness, and bias.
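As an illustrative sketch of how such a four-dimensional rubric might be tallied (the data structure is assumed for illustration, not the project's actual schema), each model response gets a rating on every dimension and failure modes are counted across responses:

```python
# Hypothetical rubric sketch: rate each response along the four
# dimensions named above, then tally how often each failure appears.
from collections import Counter
from dataclasses import dataclass

@dataclass
class Rating:
    inaccurate: bool
    harmful: bool
    incomplete: bool
    biased: bool

def failure_counts(ratings):
    """Count how many rated responses exhibit each failure mode."""
    counts = Counter()
    for r in ratings:
        for dim in ("inaccurate", "harmful", "incomplete", "biased"):
            if getattr(r, dim):
                counts[dim] += 1
    return counts

# Invented ratings for three model responses.
ratings = [
    Rating(inaccurate=True,  harmful=False, incomplete=True,  biased=False),
    Rating(inaccurate=False, harmful=False, incomplete=True,  biased=False),
    Rating(inaccurate=True,  harmful=True,  incomplete=False, biased=True),
]
print(dict(failure_counts(ratings)))
# {'inaccurate': 2, 'incomplete': 2, 'harmful': 1, 'biased': 1}
```

Treating the dimensions separately matters: a response can be perfectly accurate yet still harmful or incomplete, so a single "correctness" score would miss it.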

Broussard emphasizes the importance of looking for human problems within AI systems and the need for collaboration between technologists, humanists, and social scientists.

AI is better suited for low-stakes, mundane tasks rather than high-stakes or general-purpose use, where the risks of bias and inaccuracy are higher.
