In the fall of 2019, I enrolled in the PhD course titled “Introduction to Human-centered AI.” The course is delivered and managed by Cecilia Ovesdotter Alm from the Rochester Institute of Technology (RIT).
Human-centered AI is essentially a perspective on AI and ML which holds that algorithms must be designed with the awareness that they are part of a larger system consisting of human stakeholders. According to Mark O. Riedl, the main requirements of human-centered AI can be broken into two aspects: (a) AI systems that have an understanding of human sociocultural norms as part of a theory of mind about people, and (b) AI systems that are capable of producing explanations that non-experts in AI or computer science can understand.

Course introduction lecture held at Malmö University (2019).
One of the course learning outcomes is the ability to demonstrate critical thinking concerning bias and fairness in data analysis, including but not limited to gender aspects. To that end, I have put together a ten-minute presentation of the article “50 Years of Test (Un)fairness: Lessons for Machine Learning” by Ben Hutchinson and Margaret Mitchell.