Speaker: Ayşegül Dündar, NVIDIA
Title: Unsupervised feature learning and adaptation for a diverse visual world
Date/Time: January 15, 2020 / 12:40–13:30
Place: FENS G032
Abstract: Deep learning models have seen tremendous success in recent years, advancing the state of the art on various computer vision benchmarks. However, they rely on thousands to millions of human-annotated images, and even then they fail to generalize to our diverse visual world. To scale deep learning models to the vast space of possible visual domains under real-world, diverse settings, we need to develop unsupervised feature learning and unsupervised domain adaptation algorithms. In this talk, I will present our recent work on the unsupervised disentanglement of pose, appearance, and background from videos. I will demonstrate that the proposed disentanglement enables a video modeling method that can predict 100 frames into the future while maintaining the structure of the moving foreground object and high fidelity to the static background. In the second part of the talk, after briefly introducing the domain adaptation problem, I will discuss some of our efforts to transfer information between different visual environments in an unsupervised way.
Bio: Ayşegül Dündar is a research scientist in the Applied Deep Learning Research group at NVIDIA. She received her Ph.D. from the Weldon School of Biomedical Engineering at Purdue University, under the supervision of Professor Eugenio Culurciello. Her research there focused on embedded vision systems and was featured in popular technology outlets such as MIT Technology Review and the BBC. She received a B.Sc. degree in Electrical and Electronic Engineering from Boğaziçi University, Turkey, in 2011. At CVPR 2018, she led the team that won first place in the Domain Adaptation for Semantic Segmentation competition of the Workshop on Autonomous Vehicles challenge. Her current research focuses on domain adaptation, unsupervised feature learning, and generative models for image synthesis and manipulation.
Contact: Öznur Taştan & Murat Kaya Yapıcı