Queer in AI
Arjun Subramonian, Ashwin, Hetvi, Sarthak, Sharvani Jha
Trustworthy artificial intelligence (AI) has become an important topic because trust in AI systems and their creators has been lost. Researchers, corporations, and governments have long and painful histories of excluding marginalized groups from technology development, deployment, and oversight. As a result, these technologies are less useful and even harmful to minoritized communities. We argue that any AI development, deployment, and monitoring framework that aspires to trustworthiness must incorporate both feminist, non-exploitative participatory design principles and strong, external, and continual monitoring and testing. We additionally explain the importance of considering aspects of trustworthiness beyond transparency, fairness, and accountability; in particular, we argue for treating justice and shifting power to the disempowered as core values of any trustworthy AI system. Creating trustworthy AI begins with funding, supporting, and empowering grassroots organizations like Queer in AI, so that the field of AI has the diversity and inclusion needed to credibly and effectively develop trustworthy AI. We describe what Queer in AI is, the organization's mission, and the initiatives it runs, from community education and empowerment to policymaking and advocacy. We then leverage the expert knowledge Queer in AI has developed through its years of work and advocacy to discuss whether and how gender, sexuality, and other aspects of queer identity should be used in datasets and AI systems, and how harms along these lines should be mitigated.
Arjun Subramonian (they/them) is a first-year PhD student and Cota-Robles fellow at the University of California, Los Angeles. Their research focuses on trustworthy graph machine learning and natural language processing, including fairness, biases, and ethics. They are a core organizer of Queer in AI, a NAACL 2022 DEI chair, and a co-founder of QWER Hacks. They are an avid hiker and birder. Twitter: @arjunsubgraph
Ashwin (they/them) is a bahujan non-binary researcher and currently a research associate at the Center for Computational Science, IIIT Hyderabad, India. Their research focuses on using mixed methods to diagnose inequalities in socio-technical systems and to identify the pitfalls in these systems' design processes that lead to those inequalities. They are also an organizer with Queer in AI, where they assist with drafting RFIs and host talks and panels to raise awareness of how AI systems affect people marginalized on the basis of caste and queerness. Twitter: @_ashwxn
Hetvi is a final-year B.Tech. Mathematics and Computing student at IIT Delhi. Hetvi has worked on research at the intersections of biology, machine learning, and theoretical computer science. Hetvi is a member of IIT Delhi's queer collective and an organizer with Queer in AI. Hetvi also loves art and literature, and co-runs a zine-letter called hariandhetu. Twitter: @vuisnotabot
Sarthak (he/him) is a Statistics graduate of Ramjas College, University of Delhi. Since graduating, he has worked as a Data Analyst at multiple companies, including Disney+ and Groupon India, where he worked on decision analytics. Sarthak's interests lie primarily in applying data science to otherwise little-explored avenues, such as ethics, the environment, politics, and art, to create intuitive and impactful models of automation.
Sharvani Jha (she/her) is a software engineer at Microsoft (UCLA Computer Science BS 2021). She is interested in AI ethics and advocacy efforts in tech communities. She is a core organizer for Queer in AI and focuses on undergraduate initiatives, organizing workshops at machine learning conferences, and graphic design & publicity efforts. Twitter: @sharvanilla