Communicative Robots, TA

Graduate course, Vrije Universiteit Amsterdam, CLTL, 2020

Designed the "Ethical considerations for application design in NLP" student project, held a weekly paper discussion, and supervised the final project.

Ethical considerations for application design in NLP

When we design applications for our robot, we need to consider the stakeholders and the potential societal impact. The overarching goal of the projects in this course is to design and implement modules that detect friendship and kinship relations from multimodal signals and store the results in the robot's brain. In this sub-project, we use the example of binary gender classification to explore ethical problems that can arise when designing NLP technologies. Students will be provided with relevant readings, which we will discuss during the group meetings. Subsequently, we will design a module that takes our findings from the readings and discussions into account, makes use of the robot's communicative capabilities, and moves away from binary classification toward a more inclusive approach.

Readings

General introduction

Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., & Vertesi, J. (2019). Fairness and Abstraction in Sociotechnical Systems. Proceedings of the Conference on Fairness, Accountability, and Transparency, 59–68. https://doi.org/10.1145/3287560.3287598

Hovy, D., & Spruit, S. L. (2016). The Social Impact of Natural Language Processing. Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 591–598. https://doi.org/10.18653/v1/P16-2096

Leidner, J. L., & Plachouras, V. (2017). Ethical by Design: Ethics Best Practices for Natural Language Processing. Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 30–40. https://doi.org/10.18653/v1/W17-1604

Gender as a variable

Larson, B. (2017). Gender as a Variable in Natural-Language Processing: Ethical Considerations. Proceedings of the First ACL Workshop on Ethics in Natural Language Processing, 1–11. https://doi.org/10.18653/v1/W17-1601

Krishnan, A., Almadan, A., & Rattani, A. (2020). Understanding Fairness of Gender Classification Algorithms Across Gender-Race Groups. arXiv:2009.11491 [cs]. http://arxiv.org/abs/2009.11491

Wu, W., Protopapas, P., Yang, Z., & Michalatos, P. (2020). Gender Classification and Bias Mitigation in Facial Images. 12th ACM Conference on Web Science, 106–114. https://doi.org/10.1145/3394231.3397900

Perspectives from non-binary and trans people

Hamidi, F., Scheuerman, M. K., & Branham, S. M. (2018). Gender Recognition or Gender Reductionism? The Social Implications of Embedded Gender Recognition Systems. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems, 1–13. https://doi.org/10.1145/3173574.3173582

Keyes, O. (2018). The Misgendering Machines: Trans/HCI Implications of Automatic Gender Recognition. Proceedings of the ACM on Human-Computer Interaction, 2(CSCW), 88:1–88:22. https://doi.org/10.1145/3274357

Keyes, O. (2020, July 14). _Gender classification and bias mitigation: A post-publication review_. https://ironholds.org/debiasing/