Games in Machine Learning Course

Winter semester 2023/2024

Abstract

A typical deep learning (DL) pipeline involves defining a utility or loss function and training a set of parameters to optimize it. However, this approach has limitations. The specified objective might not capture the desired behavior of the model, leading to high-performing models that nevertheless make inaccurate predictions on slightly modified inputs. These models may also learn spurious correlations, resulting in subpar performance. Such issues are often addressed through a technique known as robustification, which employs a min-max objective: two models, or players, are trained jointly, and each player has its own real-valued objective that depends on the parameters of both players. Consequently, it is increasingly important in machine learning to understand how to solve two-player games with iterative, gradient-based methods and find a Nash equilibrium. Moreover, many problems in machine learning inherently involve multiple players; examples include Generative Adversarial Networks, multi-agent reinforcement learning, collaborative robots, and competing drones.
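To see why two-player games call for methods beyond plain gradient descent, consider a minimal sketch (not part of the course materials) of simultaneous gradient descent-ascent on the toy bilinear game min_x max_y f(x, y) = x·y, whose unique Nash equilibrium is (0, 0):

```python
# Toy two-player game: min_x max_y f(x, y) = x * y.
# The unique Nash equilibrium is (x, y) = (0, 0).
# Simultaneous gradient descent-ascent (GDA): each player
# follows the gradient of its own objective at the same time.

def gda(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        gx, gy = y, x                     # df/dx = y, df/dy = x
        x, y = x - lr * gx, y + lr * gy   # descent in x, ascent in y
    return x, y

x, y = gda(1.0, 1.0)
# One step multiplies the squared norm x^2 + y^2 by (1 + lr^2),
# so the iterates spiral *away* from the equilibrium: plain GDA
# fails even on this simple bilinear game.
```

This divergence is exactly the kind of failure that motivates the dedicated game-solving methods studied in this course.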

This course will use a framework encompassing all these problems, called Variational Inequalities (VIs). Due to its generality, our studies also apply to standard minimization (single-player games). The course provides a brief introduction to game theory and then focuses on solving VIs with iterative, gradient-based methods.
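As a hedged illustration of the VI viewpoint (a toy sketch, not course material): a VI asks for a point z* with ⟨F(z*), z − z*⟩ ≥ 0 for all feasible z, where F collects the players' gradients. For the bilinear game min_x max_y x·y, the operator is F(x, y) = (y, −x), and the classic extragradient method, which takes a lookahead step before the actual update, converges where plain gradient descent-ascent diverges:

```python
# Extragradient method on the VI with operator F(x, y) = (y, -x),
# arising from min_x max_y f(x, y) = x * y. Solution: (0, 0).

def extragradient(x, y, lr=0.1, steps=100):
    for _ in range(steps):
        # extrapolation (lookahead) step using F at the current point
        gx, gy = y, -x
        xe, ye = x - lr * gx, y - lr * gy
        # actual update using F evaluated at the extrapolated point
        gx, gy = ye, -xe
        x, y = x - lr * gx, y - lr * gy
    return x, y

x, y = extragradient(1.0, 1.0)
# Each step multiplies the squared norm by (1 - lr^2)^2 + lr^2 < 1,
# so the iterates contract toward the equilibrium (0, 0).
```

The only change from naive gradient play is evaluating the operator at the lookahead point, yet it flips divergence into convergence; this sensitivity to the update rule is a recurring theme of the course.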

Lecturers:

UdS students: visit the CMS course website.

Course Materials

Guest Lecture: The Complexity of Constrained Min-Max Optimization — Manolis Zampetakis

February 6th, 2024

» Slides

Resources & References

  • Large-Scale Convex Optimization: Algorithms & Analyses via Monotone Operators

    by Ernest Ryu and Wotao Yin; Cambridge University Press 2023

  • Contact

    For typos, remarks, or if you'd like to use the LaTeX code of the slides, feel free to reach out at: tatjana.chavdarova[at]berkeley[dot]edu.

    Thanks!