Tatjana Chavdarova

Postdoctoral Researcher
EECS, UC Berkeley

Google Scholar
GitHub
Twitter


About me

I am a Postdoctoral Researcher at the Department of Electrical Engineering and Computer Science (EECS) at the University of California, Berkeley, working with Michael Jordan. My research is supported by the Swiss National Science Foundation.

Prior to my current position, I was a Postdoctoral Research Scientist at the Machine Learning and Optimization (MLO) lab at EPFL, led by Martin Jaggi. While at EPFL, I organized the Smooth Games reading group. For part of my time, I participated in the intelligent Global Health (iGH) sub-group of MLO, led by Mary-Anne Hartley, advising on the machine learning aspects of its ongoing projects.

I obtained my Ph.D. from EPFL and Idiap, supervised by François Fleuret. During my Ph.D. studies I did two internships: (i) at Mila, supervised by Yoshua Bengio and Simon Lacoste-Julien, and (ii) at DeepMind, supervised by Irina Higgins.

Research interests

My main interests lie at the intersection of game theory and machine learning. I would like to understand the training dynamics of multi-player games and saddle point optimization. In addition, my interests include unsupervised learning, generative modeling, generalization, and robustness.
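
As a toy illustration of why these dynamics are interesting: on the bilinear saddle-point problem min_x max_y xy, simultaneous gradient descent-ascent rotates around the equilibrium and diverges, while the extragradient method converges. A minimal numpy sketch (step size and iteration counts are illustrative):

    import numpy as np

    def gda(x, y, lr=0.1, steps=1000):
        # simultaneous gradient descent-ascent on f(x, y) = x * y
        for _ in range(steps):
            x, y = x - lr * y, y + lr * x    # grad_x = y, grad_y = x
        return x, y

    def extragradient(x, y, lr=0.1, steps=1000):
        for _ in range(steps):
            x_, y_ = x - lr * y, y + lr * x  # extrapolation step
            x, y = x - lr * y_, y + lr * x_  # update at the extrapolated point
        return x, y

    print(gda(1.0, 1.0))            # spirals outwards: the iterates diverge
    print(extragradient(1.0, 1.0))  # contracts towards the equilibrium (0, 0)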

Recent works

  • (preprint) Improved Adversarial Robustness via Uncertainty Targeted Attacks. With G. Manunza, M. Pagliardini, M. Jaggi. Preprint presented at the UDL workshop at ICML, 2021.
  • Abstract. Deploying deep learning models in real-world applications often raises security and reliability concerns, due to their sensitivity to small input perturbations. While adversarial training methods aim at training more robust models, these techniques, including the most widely used Projected Gradient Descent (PGD) method, often result in lower unperturbed (clean) test accuracy. Furthermore, fast adversarial training methods often overfit the specific perturbation used during training.
    In this work, we propose uncertainty-targeted attacks (UTA), where the perturbations are obtained by maximizing the model's estimated uncertainty. We demonstrate on MNIST and CIFAR-10 that this approach, when implemented both in image and latent space, does not drastically deteriorate the clean test accuracy relative to PGD, that its fast variant does not suffer from catastrophic overfitting, and that it is robust to PGD attacks.
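    The attack itself admits a compact description. A minimal PyTorch sketch of an image-space variant, assuming predictive entropy as the uncertainty estimate and a standard L-infinity projection (the paper's exact uncertainty measure, step sizes, and schedules may differ):

    import torch
    import torch.nn.functional as F

    def uncertainty_targeted_attack(model, x, eps=8/255, step=2/255, n_steps=10):
        # ascend the model's predictive entropy instead of the classification loss
        delta = torch.zeros_like(x, requires_grad=True)
        for _ in range(n_steps):
            probs = F.softmax(model(x + delta), dim=1)
            entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1).mean()
            entropy.backward()
            with torch.no_grad():
                delta += step * delta.grad.sign()         # gradient ascent step
                delta.clamp_(-eps, eps)                   # project onto the L-inf ball
                delta.copy_((x + delta).clamp(0, 1) - x)  # keep the image in [0, 1]
            delta.grad.zero_()
        return (x + delta).detach()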

  • Semantic Perturbations with Normalizing Flows for Improved Generalization. With O. K. Yüksel, S. Stich, M. Jaggi. ICCV 2021 (to appear); preprint version presented at the INFF+ ICML workshop, 2021.
  • Abstract. Data augmentation is a widely adopted technique for avoiding overfitting when training deep neural networks. However, this approach requires domain-specific knowledge and is often limited to a fixed set of hard-coded transformations. Recently, several works proposed to use generative models for generating semantically-meaningful perturbations to train a classifier. However, because accurate encoding and decoding is critical, these methods, which use architectures that approximate the latent-variable inference, remained limited to pilot studies on small datasets.
    We propose to use recently improved normalizing flows to define fully unsupervised data augmentations. We exploit the exactly reversible encoder-decoder structure of normalizing flows to perform perturbations in the latent space. We demonstrate that simple on-manifold perturbations match the performance of advanced data-augmentation techniques (reaching 96.6% test accuracy for CIFAR-10 using ResNet-18) and outperform existing methods, particularly in low-data regimes (yielding 10-25% relative improvement in test accuracy over classical training). We find that our latent adversarial perturbations, adaptive to the classifier throughout its training, are most effective, yielding the first test-accuracy improvements via latent-space perturbations on real-world datasets (CIFAR-10/100).
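    Because the flow is exactly reversible, a latent perturbation is just encode-perturb-decode, with no reconstruction error. A minimal PyTorch sketch, where flow.forward / flow.inverse is a hypothetical interface for any trained normalizing flow and the random direction stands in for the paper's classifier-adaptive (adversarial) one:

    import torch

    def latent_perturbation(flow, x, sigma=0.1):
        with torch.no_grad():
            z = flow.forward(x)                  # exact encoding: no reconstruction error
            z = z + sigma * torch.randn_like(z)  # perturb in the latent space
            return flow.inverse(z)               # decode back to an on-manifold sample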

  • Taming GANs with Lookahead-Minmax. With M. Pagliardini, S. Stich, F. Fleuret, M. Jaggi. ICLR, 2021.
  • Abstract. Generative Adversarial Networks are notoriously challenging to train. The underlying minmax optimization is highly susceptible to the variance of the stochastic gradient and to the rotational component of the associated game vector field. To tackle these challenges, we propose the Lookahead algorithm for minmax optimization, originally developed for single-objective minimization only. The backtracking step of our Lookahead-minmax naturally handles the rotational game dynamics, a property which was identified as key for enabling gradient ascent-descent methods to converge on challenging examples often analyzed in the literature. Moreover, it implicitly handles high variance without using large mini-batches, known to be essential for reaching state-of-the-art performance. Experimental results on MNIST, SVHN, CIFAR-10, and ImageNet demonstrate a clear advantage of combining Lookahead-minmax with Adam or extragradient, in terms of performance and improved stability, for negligible memory and computational cost. Using 30-fold fewer parameters and 16-fold smaller mini-batches, we outperform the reported performance of the class-dependent BigGAN on CIFAR-10 by obtaining an FID of 12.19 without using the class labels, bringing state-of-the-art GAN training within reach of common computational resources.

    Source Code

    Link to paper

    Short video

    bibtex

    @inproceedings{chavdarova2021lagan,
      title     = {{Taming GANs with Lookahead-Minmax}},
      author    = {Tatjana Chavdarova and Matteo Pagliardini and Sebastian U Stich and Fran{\c{c}}ois Fleuret and Martin Jaggi},
      booktitle = {International Conference on Learning Representations},
      year      = {2021},
      url       = {https://openreview.net/forum?id=ZW0yXJyNmoG}
    }
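
    A minimal sketch of the core loop, assuming a hypothetical gan_step callback that performs one base-optimizer update (e.g., Adam or extragradient) of both players; the snapshot interval k and alpha below are illustrative:

    import torch

    def lookahead_minmax(G, D, gan_step, k=5, alpha=0.5, n_rounds=1000):
        params = list(G.parameters()) + list(D.parameters())
        for _ in range(n_rounds):
            slow = [p.detach().clone() for p in params]  # snapshot both players
            for _ in range(k):
                gan_step(G, D)                           # k fast base-optimizer steps
            with torch.no_grad():                        # backtracking step:
                for p, s in zip(params, slow):           # slow <- slow + alpha * (fast - slow)
                    p.copy_(s + alpha * (p - s))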
  • Ph.D. thesis: Deep Generative Models and Applications. Jury: F. Fleuret, P. Frossard, L. Denoyer, S. Lacoste-Julien, M. Jaggi. July 2020.
  • Link to thesis

    bibtex

    @article{Chavdarova:278463,
      title       = {Deep Generative Models and Applications},
      author      = {Chavdarova, Tatjana},
      institution = {IEL},
      publisher   = {EPFL},
      address     = {Lausanne},
      pages       = {169},
      year        = {2020},
      url         = {http://infoscience.epfl.ch/record/278463},
      doi         = {10.5075/epfl-thesis-10257}
    }
  • Reducing Noise in GAN Training with Variance Reduced Extragradient. With G. Gidel, F. Fleuret, S. Lacoste-Julien. NeurIPS 2019.
  • Abstract. We study the effect of the stochastic gradient noise on the training of generative adversarial networks (GANs) and show that it can prevent the convergence of standard game optimization methods, while the batch version converges. We address this issue with a novel stochastic variance-reduced extragradient (SVRE) optimization algorithm, which for a large class of games improves upon the previous convergence rates proposed in the literature. We observe empirically that SVRE performs similarly to a batch method on MNIST while being computationally cheaper, and that SVRE yields more stable GAN training on standard datasets.

    Source Code

    Link to paper

    bibtex

    @inproceedings{chavdarova2019,
      author    = {Tatjana Chavdarova and Gauthier Gidel and François Fleuret and Simon Lacoste-Julien},
      title     = {Reducing Noise in {GAN} Training with Variance Reduced Extragradient},
      booktitle = {NeurIPS},
      year      = {2019}
    }
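
    A minimal numpy sketch of the idea on a toy finite-sum bilinear game (1/n) sum_i a_i x y: an SVRG-style correction (per-sample gradient at the current point, minus the same sample's gradient at a snapshot, plus the snapshot's full-batch gradient) is plugged into both extragradient steps. Epoch length and step size are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    a = 1.0 + 0.5 * rng.standard_normal(100)  # per-sample payoff coefficients

    def svre(x=1.0, y=1.0, lr=0.1, epochs=50, m=100):
        for _ in range(epochs):
            xs, ys = x, y                              # snapshot point
            mu_x, mu_y = a.mean() * ys, a.mean() * xs  # full-batch gradients at the snapshot
            for _ in range(m):
                i = rng.integers(len(a))
                gx = a[i] * y - a[i] * ys + mu_x       # variance-reduced gradient estimates
                gy = a[i] * x - a[i] * xs + mu_y
                x_, y_ = x - lr * gx, y + lr * gy      # extrapolation step
                gx_ = a[i] * y_ - a[i] * ys + mu_x
                gy_ = a[i] * x_ - a[i] * xs + mu_y
                x, y = x - lr * gx_, y + lr * gy_      # update at the extrapolated point
        return x, y

    print(svre())  # approaches the equilibrium (0, 0)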
  • SGAN: An Alternative Training of Generative Adversarial Networks. With F. Fleuret. CVPR 2018.
  • Abstract. Generative Adversarial Networks (GANs) have demonstrated impressive performance for data synthesis, and are now used in a wide range of computer vision tasks. In spite of this success, they gained a reputation for being difficult to train, which results in a time-consuming and human-involved development process. We consider an alternative training process, named SGAN, in which several adversarial local pairs of networks are trained independently so that a global supervising pair of networks can be trained against them. The goal is to train the global pair with the corresponding ensemble opponent for improved mode coverage. This approach aims at increasing the chances that learning will not stop for the global pair, preventing it from being trapped in an unsatisfactory local minimum and from the oscillations often observed in practice. To guarantee the latter, the global pair never affects the local ones. The rules of SGAN training are thus as follows: the global generator and discriminator are trained using the local discriminators and generators, respectively, whereas the local networks are trained with their fixed local opponents. Experimental results on both toy and real-world problems demonstrate that this approach outperforms standard training in terms of mitigating mode collapse and stability while converging, and that, surprisingly, it increases the convergence speed as well.

    Link to paper

    bibtex

    @inproceedings{chavdarova-fleuret-2018,
      author    = {Chavdarova, T. and Fleuret, F.},
      title     = {{SGAN}: An Alternative Training of Generative Adversarial Networks},
      booktitle = {CVPR},
      year      = {2018}
    }
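
    A minimal sketch of these training rules, where train_pair, update_generator, and update_discriminator are hypothetical one-step training helpers:

    def sgan_round(local_pairs, G, D, train_pair, update_generator, update_discriminator):
        # local pairs are trained independently, against their fixed local opponent only
        for G_i, D_i in local_pairs:
            train_pair(G_i, D_i)
        # the global generator is trained against the local discriminators ...
        for _, D_i in local_pairs:
            update_generator(G, D_i)
        # ... and the global discriminator against the local generators;
        # the global pair never affects the local ones
        for G_i, _ in local_pairs:
            update_discriminator(D, G_i)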
  • WILDTRACK: A Multi-camera HD Dataset for Dense Unscripted Pedestrian Detection. With P. Baqué, S. Bouquet, A. Maksai, C. Jose, T. Bagautdinov, L. Lettry, P. Fua, L. Van Gool, F. Fleuret. CVPR 2018.
  • Abstract. People detection methods are highly sensitive to occlusions between pedestrians, which are extremely frequent in many situations where cameras have to be mounted at a limited height. The reduction of camera prices allows for the generalization of static multi-camera set-ups. Using joint visual information from multiple synchronized cameras gives the opportunity to improve detection performance. In this paper, we present a new large-scale and high-resolution dataset. It has been captured with seven static cameras in a public open area, and unscripted dense groups of pedestrians standing and walking. Together with the camera frames, we provide an accurate joint (extrinsic and intrinsic) calibration, as well as 7 series of 400 annotated frames for detection at a rate of 2 frames per second. This results in over 40,000 bounding boxes delimiting every person present in the area of interest, for a total of more than 300 individuals. We provide a series of benchmark results using baseline algorithms published over the recent months for multi-view detection with deep neural networks, and trajectory estimation using a non-Markovian model.

    Source Code

    Link to paper

    Download dataset

    bibtex

    @inproceedings{chavdarova-et-al-2018,
      author    = {Chavdarova, T. and Baqué, P. and Bouquet, S. and Maksai, A. and Jose, C. and Bagautdinov, T. and Lettry, L. and Fua, P. and Van Gool, L. and Fleuret, F.},
      title     = {{WILDTRACK}: A Multi-camera {HD} Dataset for Dense Unscripted Pedestrian Detection},
      booktitle = {CVPR},
      year      = {2018},
      pages     = {5030-5039}
    }
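
    A minimal OpenCV sketch of what the joint calibration enables: projecting a 3D ground-plane point into a camera view, the geometric primitive behind multi-view detection. All calibration values below are placeholders; the real per-camera intrinsics and extrinsics ship with the dataset:

    import numpy as np
    import cv2

    K = np.array([[1700., 0., 960.],  # placeholder intrinsic matrix
                  [0., 1700., 540.],
                  [0., 0., 1.]])
    rvec = np.zeros(3)                # placeholder rotation (Rodrigues vector)
    tvec = np.array([0., 0., 5.])     # placeholder translation
    dist = np.zeros(5)                # placeholder distortion coefficients

    point_3d = np.array([[1.0, 2.0, 0.0]])  # a point on the ground plane
    pixel, _ = cv2.projectPoints(point_3d, rvec, tvec, K, dist)
    print(pixel.ravel())              # its pixel coordinates in this camera view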

Students & Teaching

  • Gilberto Manunza (MSc thesis), On the Effect of Variance Reduced Gradient and Momentum for Optimizing Deep Generative Adversarial Networks, six-month project, 2021, EPFL.
  • Yatin Dandi (theoretical research project), On the Effect of Noise induced by Gradient Stochasticity on Optimizing 2-player Differentiable Games, 2021, EPFL.
  • Alexander Apostolov (CS-498 Semester Project), On the Effect of Variance Reduced Gradient and Momentum for Optimizing Deep Neural Networks, autumn semester, 2020, EPFL.
  • Co-supervision with S. Stich:
    • Oğuz Kaan Yüksel, Normalizing Flows for Generalization and Robustness (CS-498 Semester Project), autumn semester, 2020, EPFL.
    • Yehao Liu, On the Drawbacks of Popular Deep Learning Uncertainty Estimation Methods, spring semester & summer internship, 2021, EPFL.
  • Co-supervision with Mary-Anne Hartley:
    • Deeksha M. Shama, Deep Learning Approaches for COVID-19 Diagnosis via Digital Lung Auscultation, autumn semester, 2020.
    • Pablo Cañas, On Uncertainty Estimation of Global COVID Policy Simulator, autumn semester, 2020.
  • Teaching Assistant, Deep Learning Course (EE-559) at EPFL, for MSc students, 2018 & 2020.
  • Teaching Assistant for one week, An Introduction to Deep Learning, for MSc students at the African Master's in Machine Intelligence, Kigali, Rwanda, 2018.
