Fighting Bias in AI


Sponsored by Drexel University’s College of Computing & Informatics (CCI) and CCI’s Diversity, Equity & Inclusion Council, join us for a conversation about fighting bias in artificial intelligence (AI). Mathematical models are often viewed as fair and objective. One might think that algorithms do not “see” race and therefore cannot be prejudiced; they base their decisions on big-data patterns and statistical correlations. However, in an experiment conducted by the American Civil Liberties Union (ACLU), Amazon’s face recognition system falsely matched 28 members of the U.S. Congress with mugshots, and the false matches were disproportionately of people of color. In the past few years, studies have shown that algorithms can exhibit racial and gender bias, discriminate within computer-vision facial recognition systems, and encode gendered bias in natural language processing. As AI becomes more pervasive in consumer-facing technology, it is important that steps be taken to prevent bias in algorithmic decision making. Panelists will share their knowledge of this developing topic and discuss current projects.