
Tatjana Chavdarova

Postdoctoral Researcher
MLO, EPFL
Lausanne, Switzerland

Google Scholar
GitHub
Twitter


About me

I am a postdoctoral researcher in the Machine Learning and Optimization (MLO) lab at EPFL, led by Prof. Martin Jaggi, and an organizer of the Smooth Games reading group at EPFL. For part of my time, I participate in the intelligent Global Health (iGH) sub-group of MLO, led by Mary-Anne Hartley, advising on the machine learning aspects of its ongoing projects. I obtained my Ph.D. from EPFL and Idiap, and during my Ph.D. studies I did internships at Mila and DeepMind. I was recently awarded a postdoctoral fellowship by the Swiss National Science Foundation, and in 2021 I will join Prof. Michael Jordan's lab as a postdoctoral researcher.

Research interests

My main interests lie at the intersection of game theory and machine learning; in particular, I aim to understand the training dynamics of multi-player games and saddle-point optimization. More broadly, my focus includes unsupervised learning, generative modeling, generalization and robustness, and, most recently, Hamiltonian ML.

Recent publications

  • Taming GANs with Lookahead-Minmax. With M. Pagliardini, S. Stich, F. Fleuret, M. Jaggi. ICLR 2021.
  • Abstract. Generative Adversarial Networks are notoriously challenging to train. The underlying minmax optimization is highly susceptible to the variance of the stochastic gradient and the rotational component of the associated game vector field. To tackle these challenges, we propose the Lookahead algorithm for minmax optimization, originally developed for single-objective minimization only. The backtracking step of our Lookahead-minmax naturally handles the rotational game dynamics, a property which was identified to be key for enabling gradient ascent-descent methods to converge on challenging examples often analyzed in the literature. Moreover, it implicitly handles high variance without using large mini-batches, known to be essential for reaching state-of-the-art performance. Experimental results on MNIST, SVHN, CIFAR-10, and ImageNet demonstrate a clear advantage of combining Lookahead-minmax with Adam or extragradient, in terms of performance and improved stability, for negligible memory and computational cost. Using 30-fold fewer parameters and 16-fold smaller mini-batches, we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining an FID of 12.19 without using the class labels, bringing state-of-the-art GAN training within reach of common computational resources. (See the code sketch after the bibtex below.)

    Source Code

    Link to paper

    bibtex

    @inproceedings{chavdarova2021lagan,
      title     = {{Taming GANs with Lookahead-Minmax}},
      author    = {Tatjana Chavdarova and Matteo Pagliardini and Sebastian U Stich and Fran{\c{c}}ois Fleuret and Martin Jaggi},
      booktitle = {International Conference on Learning Representations},
      year      = {2021},
      url       = {https://openreview.net/forum?id=ZW0yXJyNmoG}
    }
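
    Code sketch

    A minimal Python sketch of the joint Lookahead-minmax update, assuming PyTorch. The names gen, disc, and gan_step are hypothetical placeholders: gan_step performs one "fast" update of both players with any base optimizer (e.g. Adam or extragradient). This is an illustration of the backtracking step, not the paper's reference implementation.

    import torch

    def lookahead_minmax(gen, disc, gan_step, n_iters, k=5, alpha=0.5):
        # Keep "slow" copies of both players' weights.
        slow = [[p.detach().clone() for p in m.parameters()] for m in (gen, disc)]
        for t in range(1, n_iters + 1):
            gan_step(gen, disc)  # one fast step of the base optimizer
            if t % k == 0:
                # Backtracking step, applied jointly to both players:
                # theta <- theta_slow + alpha * (theta_fast - theta_slow)
                with torch.no_grad():
                    for model, slow_params in zip((gen, disc), slow):
                        for p, s in zip(model.parameters(), slow_params):
                            p.mul_(alpha).add_(s, alpha=1.0 - alpha)
                            s.copy_(p)  # the result becomes the new slow point

    Interpolating the two players jointly every k steps is what damps the rotational component of the game vector field, at the negligible cost of one extra copy of the weights.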
  • Ph.D. thesis: Deep Generative Models and Applications. Jury: F. Fleuret, P. Frossard, L. Denoyer, S. Lacoste-Julien, M. Jaggi. July 2020.
  • Link to thesis

    bibtex

    @article{Chavdarova:278463,
      title       = {Deep Generative Models and Applications},
      author      = {Chavdarova, Tatjana},
      institution = {IEL},
      publisher   = {EPFL},
      address     = {Lausanne},
      pages       = {169},
      year        = {2020},
      url         = {http://infoscience.epfl.ch/record/278463},
      doi         = {10.5075/epfl-thesis-10257},
    }
  • Reducing Noise in GAN Training with Variance Reduced Extragradient. With G. Gidel, F. Fleuret, S. Lacoste-Julien. NeurIPS 2019.
  • Abstract. We study the effect of stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with a novel stochastic variance-reduced extragradient (SVRE) optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature. We observe empirically that SVRE performs similarly to a batch method on MNIST while being computationally cheaper, and that SVRE yields more stable GAN training on standard datasets. (See the code sketch after the bibtex below.)

    Source Code

    Link to paper

    bibtex

    @inproceedings{chavdarova2019,
      author    = {Tatjana Chavdarova and Gauthier Gidel and François Fleuret and Simon Lacoste-Julien},
      title     = {Reducing Noise in {GAN} Training with Variance Reduced Extragradient},
      booktitle = {NeurIPS},
      year      = {2019}
    }
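
    Code sketch

    A toy NumPy sketch of SVRE on the finite-sum bilinear saddle point min_x max_y (1/n) Σ_i xᵀA_i y. The function names and the epoch/snapshot schedule are simplifying assumptions, not the paper's exact algorithm; the sketch only illustrates how the SVRG-style correction plugs into the two extragradient steps.

    import numpy as np

    def svre_bilinear(A_list, eta=0.1, epochs=50, seed=0):
        rng = np.random.default_rng(seed)
        n, d = len(A_list), A_list[0].shape[0]
        x, y = rng.standard_normal(d), rng.standard_normal(d)

        def field(A, x_, y_):
            # Game vector field (grad_x f, -grad_y f) for f = x^T A y.
            return A @ y_, -(A.T @ x_)

        for _ in range(epochs):
            # Snapshot: full-batch field at the current point (the variance-reduction anchor).
            xs, ys = x.copy(), y.copy()
            mu_x = np.mean([A @ ys for A in A_list], axis=0)
            mu_y = np.mean([-(A.T @ xs) for A in A_list], axis=0)

            def vr_field(x_, y_):
                # Variance-reduced estimate: v_i(w) - v_i(snapshot) + full-batch mean.
                i = rng.integers(n)
                gx, gy = field(A_list[i], x_, y_)
                sx, sy = field(A_list[i], xs, ys)
                return gx - sx + mu_x, gy - sy + mu_y

            for _ in range(n):
                vx, vy = vr_field(x, y)            # extrapolation step
                xh, yh = x - eta * vx, y - eta * vy
                vx, vy = vr_field(xh, yh)          # update from the extrapolated point
                x, y = x - eta * vx, y - eta * vy
        return x, y

    On such bilinear examples, plain stochastic extragradient can fail to converge because of gradient noise, which is the regime the variance-reduced field is meant to fix.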
  • SGAN: An Alternative Training of Generative Adversarial Networks. With F. Fleuret. CVPR 2018.
  • Abstract. Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis, and are now used in a wide range of computer vision tasks. In spite of this success, they have gained a reputation for being difficult to train, which results in a time-consuming, human-involved development process. We consider an alternative training process, named SGAN, in which several local adversarial pairs of networks are trained independently so that a global supervising pair of networks can be trained against them. The goal is to train the global pair with the corresponding ensemble opponent for improved performance in terms of mode coverage. This approach aims at increasing the chances that learning will not stop for the global pair, preventing both networks from being trapped in an unsatisfactory local minimum and from the oscillations often observed in practice. To guarantee the latter, the global pair never affects the local ones. The rules of SGAN training are thus as follows: the global generator and discriminator are trained using the local discriminators and generators, respectively, whereas the local networks are trained with their fixed local opponents. Experimental results on both toy and real-world problems demonstrate that this approach outperforms standard training in terms of better mitigating mode collapse and stability while converging, and that, surprisingly, it also increases the convergence speed. (See the code sketch after the bibtex below.)

    Link to paper

    bibtex

    @inproceedings{chavdarova-fleuret-2018,
      author    = {Chavdarova, T. and Fleuret, F.},
      title     = {{SGAN}: An Alternative Training of Generative Adversarial Networks},
      booktitle = {CVPR},
      year      = {2018},
    }
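
    Code sketch

    A structural Python sketch of one SGAN training round, following the rules stated in the abstract. All names (local_pairs, update_G, update_D, ...) are hypothetical; the two update functions stand in for ordinary GAN generator and discriminator steps.

    import random

    def sgan_round(local_pairs, global_G, global_D, update_G, update_D):
        # 1) Train every local adversarial pair independently, each network
        #    against its own fixed local opponent only.
        for G_i, D_i in local_pairs:
            update_G(G_i, D_i)
            update_D(D_i, G_i)
        # 2) Train the global pair against the local ensemble: the global
        #    generator is trained using a local discriminator and the global
        #    discriminator using a local generator. Information flows one way,
        #    so the global pair never affects the local ones.
        _, D_k = random.choice(local_pairs)
        G_j, _ = random.choice(local_pairs)
        update_G(global_G, D_k)
        update_D(global_D, G_j)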
  • WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection. With P. Baqué, S. Bouquet, A. Maksai, C. Jose, T. Bagautdinov, L. Lettry, P. Fua, L. Van Gool, F. Fleuret. CVPR 2018.
  • Abstract. People detection methods are highly sensitive to occlusions between pedestrians, which are extremely frequent in many situations where cameras have to be mounted at a limited height. The reduction of camera prices allows for the generalization of static multi-camera set-ups. Using joint visual information from multiple synchronized cameras gives the opportunity to improve detection performance. In this paper, we present a new large-scale and high-resolution dataset. It has been captured with seven static cameras in a public open area, and unscripted dense groups of pedestrians standing and walking. Together with the camera frames, we provide an accurate joint (extrinsic and intrinsic) calibration, as well as 7 series of 400 annotated frames for detection at a rate of 2 frames per second. This results in over 40,000 bounding boxes delimiting every person present in the area of interest, for a total of more than 300 individuals. We provide a series of benchmark results using baseline algorithms published over the recent months for multi-view detection with deep neural networks, and trajectory estimation using a non-Markovian model.

    Source Code

    Link to paper

    Download dataset

    bibtex

    @inproceedings{chavdarova-et-al-2018,
      author    = {Chavdarova, T. and Baqué, P. and Bouquet, S. and Maksai, A. and Jose, C. and Bagautdinov, T. and Lettry, L. and Fua, P. and Van Gool, L. and Fleuret, F.},
      title     = {{WILDTRACK}: A Multi-camera {HD} Dataset for Dense Unscripted Pedestrian Detection},
      booktitle = {CVPR},
      year      = {2018},
      pages     = {5030-5039},
    }

Invited Talks

Students & Teaching

  • Yatin Dandi (theoretical research project), On the Effect of Noise Induced by Gradient Stochasticity on Optimizing 2-player Differentiable Games, 2021, EPFL.
  • Gilberto Manunza (MSc thesis), On the Effect of Variance Reduced Gradient and Momentum for Optimizing Deep Generative Adversarial Networks, six-month project, 2021, EPFL.
  • Alexander Apostolov (CS-498 Semester Project), On the Effect of Variance Reduced Gradient and Momentum for Optimizing Deep Neural Networks, autumn semester 2020, EPFL.
  • Oğuz Kaan Yüksel, co-supervision with S. Stich (CS-498 Semester Project), Normalizing Flows for Generalization and Robustness, autumn semester 2020, EPFL.
  • Co-supervision with Mary-Anne Hartley:
    • Deeksha M. Shama, Deep Learning Approaches for Covid-19 Diagnosis via Digital Lung Auscultation, autumn semester 2020.
    • Pablo Cañas, On Uncertainty Estimation of Global COVID Policy Simulator, autumn semester 2020.
  • Teaching Assistant, Deep Learning Course (EE-559) at EPFL, for MSc students, 2018 & 2020.
  • Teaching Assistant for one week, An Introduction to Deep Learning, for MSc students at the African Master’s in Machine Intelligence, Kigali, Rwanda, 2018.

Other services

Reviewing

Workshops

Reading Group

Misc

  • Board of Directors, Women in Machine Learning (WiML), 2021.