Yang Liu
Assistant Professor, Computer Science and Engineering, University of California, Santa Cruz
Machine Learning (ML) is increasingly used in domains that have a profound effect on people's opportunities and well-being, including healthcare, law enforcement, and consumer finance. The effectiveness of ML depends on reliable datasets, training methods, and deployment, and additional challenges arise when each of these components interacts with human agents. In this talk, I will detail some of my recent efforts on building responsible and robust ML systems with humans in the loop. I will start by presenting a cohort of results on developing a new family of approaches to protect the training of ML models from potentially corrupted and biased training labels generated by humans. I will then go beyond this static setting and discuss how the design and deployment of ML can offer human agents improvable actions to better develop their profiles and qualifications. My long-term goal is to build robust ML systems that promote healthy dynamics in human-ML interaction.
Yang Liu is currently an Assistant Professor of Computer Science and Engineering at the University of California, Santa Cruz. He was previously a postdoctoral fellow at Harvard University. He obtained his PhD from the Department of EECS, University of Michigan, Ann Arbor in 2015. His research aims to build responsible machine learning tools with humans in the loop, including robust training methods for handling noisy human inputs, fair machine learning treatments for deployment in service of society, and incentive-compatible data collection mechanisms. His work has seen applications in high-profile projects, such as the Hybrid Forecasting Competition organized by IARPA and the Systematizing Confidence in Open Research and Evidence (SCORE) program organized by DARPA. His recourse classifier work is included in the IBM AI Fairness 360 toolkit. His work has also been covered by WIRED and The Wall Street Journal.