Assistant Professor, Northeastern University, Massachusetts, United States
The Artificial Intelligence (A.I.) industry has created new jobs that are essential to the real-world deployment of intelligent systems. Many of these new jobs involve labeling data for machine learning models or completing tasks that A.I. alone cannot do. This human labor behind our A.I. has powered a futuristic reality of self-driving cars, voice-based virtual assistants, and search results with minimal hate speech. However, the workers powering the A.I. industry are often invisible to end-users. Their invisibility has led to power dynamics in which workers are often paid below minimum wage and have limited career growth. Part of the problem is that the platforms mediating this labor are also black boxes, offering little information about the labor conditions inside.
I will present how we can start to address these problems through my proposed "A.I. For Good" framework. My framework uses value sensitive design to understand people's values and rectify harm. I will present case studies in which I use this framework to design A.I. systems that improve the labor conditions of the workers operating behind the scenes in our A.I. industry, as well as to audit digital labor platforms and hold them accountable for the conditions they provide to workers. I conclude by presenting a research agenda for studying the impact of A.I. on society and for developing effective socio-technical solutions that support the future of work, especially within the context of biotechnology.