6548 Forest Park Pkwy, St. Louis, MO 63112, USA

Dr. David J. Crandall
Professor of Computer Science, Indiana University Bloomington

While early work in computer vision was inspired by studies of human perception, most recent work has focused on techniques that work well in practice but probably have little biological basis. But low-cost, lightweight wearable cameras and gaze trackers can now record people's actual fields of view as they go about their everyday lives. Such first-person, "egocentric" video contains rich information about how people see and interact with the world around them, potentially helping us better understand human perception and behavior while also yielding insights that could improve computer vision. For example, studying how young children interact with unfamiliar toys could help computer vision researchers design better techniques for learning computational object models. In this talk, I'll describe several recent interdisciplinary projects in which we have used computer vision to study and model people's behavior from a first-person perspective, and then used these insights to try to improve computer vision. I'll also talk about a project to collect a large-scale dataset of egocentric video data to push forward work in this area.

Finally, I'll talk about recent work in which we studied the people in the computer vision community itself, trying to understand how researchers and practitioners feel about the trajectory of the field and how to improve it.

Register in advance for this meeting.

After registering, you will receive a confirmation email containing information about joining the meeting.
