Computing has embraced principles of neutrality, objectivity, and fairness. However, societies are neither neutral, objective, nor fair, and prescribing these values in computing can inadvertently mirror, or even amplify, widespread societal injustices. This talk will explore how justice theories, including restorative justice, racial justice, and social justice, can reveal alternative approaches to the design and analysis of computational systems. The talk will identify synergies between human-computer interaction and machine learning, using online harassment as a case study. In doing so, it will examine which aspects of online harassment, such as experiences of harm, can or should be formalized in algorithms, and which should not. It will conclude with reflections on how to embed justice in research, teaching, and service activities, as well as in institutional priorities.
Free and open to the public; RSVP required. Please visit the event webpage for additional information. Hosted by the Computational Social Science Working Group at Columbia University's Data Science Institute.