Instructors: Kai-Wei Chang, Vicente Ordonez, Margaret Mitchell, Vinodkumar Prabhakaran

Venue: EMNLP 2019

Abstract

Natural language processing techniques play important roles in our daily lives. Although these methods have been successful in various applications, they run the risk of exploiting and reinforcing the societal biases (e.g., gender bias) present in the underlying data. For instance, an automatic resume filtering system may inadvertently select candidates based on their gender and race because of implicit associations between applicant names and job titles, potentially causing the system to perpetuate unfairness. In this talk, I will describe a collection of results that quantify and control implicit societal biases in a wide spectrum of vision and language tasks, including word embeddings, coreference resolution, and visual semantic role labeling. These results provide greater control over NLP systems, helping make them socially responsible and accountable.
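The abstract refers to quantifying implicit bias in word embeddings. As a purely illustrative sketch (not necessarily the method presented in the tutorial), one common approach projects occupation words onto a "gender direction" derived from gendered word pairs; the toy vectors and word list below are hypothetical.

```python
# Illustrative sketch only: measuring gender association in word embeddings by
# projecting occupation words onto a gender direction (in the spirit of work on
# debiasing word embeddings). All vectors below are made-up toy values.
import numpy as np

# Hypothetical 4-dimensional embeddings; real embeddings (e.g., GloVe or
# word2vec) would have hundreds of dimensions and be loaded from a file.
emb = {
    "he":       np.array([ 0.9, 0.1, 0.3, 0.0]),
    "she":      np.array([-0.9, 0.1, 0.3, 0.0]),
    "engineer": np.array([ 0.4, 0.8, 0.1, 0.2]),
    "nurse":    np.array([-0.5, 0.7, 0.2, 0.1]),
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# A simple gender direction: the difference of a gendered word pair.
gender_direction = emb["he"] - emb["she"]

# Bias score: cosine of each occupation word with the gender direction.
# Positive values lean toward "he", negative toward "she".
for word in ["engineer", "nurse"]:
    print(f"{word}: gender projection = {cosine(emb[word], gender_direction):+.3f}")
```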

Slides

Video

Instructors’ bios:

  • Kai-Wei Chang is an assistant professor in the Department of Computer Science at the University of California, Los Angeles. His research interests include designing robust machine learning methods for large and complex data and building language processing models for social good applications. His awards include the EMNLP Best Long Paper Award (2017), the KDD Best Paper Award (2010), and the Okawa Research Grant Award (2018). Kai-Wei has given tutorials on different research topics at NAACL 15 and AAAI 16, and a tutorial on gender stereotypes in word embeddings at FAT 18. Additional information is available at http://kwchang.net.

  • Vicente Ordonez is an assistant professor in the Department of Computer Science at the University of Virginia. His research interests lie at the intersection of computer vision, natural language processing, and machine learning. His focus is on building efficient visual recognition models that can perform high-level perceptual tasks for applications in social media, urban computing, and everyday activities that leverage both images and text. He is a recipient of best paper awards at the Conference on Empirical Methods in Natural Language Processing (2017) and the International Conference on Computer Vision (2013), an IBM Faculty Award (2017), and a Google Faculty Research Award (2017).

  • Margaret Mitchell is a Senior Research Scientist and leads the Ethical AI team within Google Research. Her research is interdisciplinary, combining computer vision, natural language processing, statistical methods, deep learning, and cognitive science, and she applies her work in clinical and assistive domains. She has published over 40 papers, including in top-tier conferences in NLP, computer vision, and cognitive science. She is also the co-founder of the annual workshops on Computational Linguistics and Clinical Psychology, Ethics in Natural Language Processing, and Women and Underrepresented Minorities in Natural Language Processing. Her TED talk on evolving artificial intelligence towards positive goals has over one million views, and Seeing AI, the system she co-developed using her first-place image-captioning system, has won the Helen Keller Achievement Award and the Fast Company Innovation by Design Award.

  • Vinodkumar Prabhakaran is a computational social scientist doing research at the intersection of AI and society. He is currently a research scientist at Google, working on issues around ethics and fairness in AI. Prior to this, he was a postdoctoral fellow in the Computer Science Department at Stanford University, and he obtained his Master's and PhD in computer science from Columbia University in 2015. His research brings together NLP techniques, machine learning algorithms, and social science methods to identify and address large-scale societal issues such as gender bias, racial disparities, workplace incivility, and abusive behavior online. His work has been published in top-tier NLP conferences such as ACL, NAACL, and EMNLP, as well as in multidisciplinary journals such as the Proceedings of the National Academy of Sciences (PNAS).