Program for FeigeFest

Program for Tuesday, January 14


9:00-9:30 Coffee

9:30-10:10 Noga Alon (Princeton)

Uri and Hat Guessing Games

10:10-10:50 Robi Krauthgamer (Weizmann Institute of Science)

Coresets for Clustering in Graphs of Bounded Treewidth

10:50-11:10 Coffee

11:10-11:50 Mario Szegedy (Alibaba)

How to find an approximate max cut of the cycle?

11:50-12:30 Michael Langberg (SUNY Buffalo)

Can one network edge make a difference: The edge removal problem

12:30-13:10 Sigal Oren (Ben Gurion University)

Combinatorial Auctions with Endowment Effect

13:10-14:30 Lunch

14:30-15:10 Moshe Tennenholtz (Technion)

Data Science with Game Theory Flavor

15:10-15:50 Inbal Talgam-Cohen (Technion)

Beyond Worst Case Analysis in Algorithmic Game Theory

15:50-16:10 Coffee

16:10-16:50 Gideon Schechtman (Weizmann Institute of Science)

No good dimension reduction in the trace class norm

16:50-17:30 Adi Shamir (Weizmann Institute of Science)

Security Challenges in Machine Learning

Dinner in honor of Uri Feige


Program for Wednesday, January 15

9:00-9:30 Coffee

9:30-10:10 Michel Goemans (MIT)

25 Years of MAX CUT

10:10-10:50 Nati Linial (Hebrew University)

Hypertrees

10:50-11:10 Coffee

11:10-11:50 Nikhil Bansal (CWI and TU Eindhoven)

Online vector balancing

11:50-12:30 Moshe Babaioff (Microsoft Research)

How to maximize profit when selling multiple items?

12:30-13:10 Artur Czumaj (Warwick)

Estimating basic graph parameters: Random sampling and beyond

13:10-14:30 Lunch

14:30-15:10 Moni Naor (Weizmann Institute of Science)

Instance Complexity and Unlabeled Certificates in the Decision Tree Model

15:10-15:50 Amin Coja-Oghlan (Goethe University)

Optimal group testing

15:50-16:10 Coffee

16:10-16:50 Yuval Filmus (Technion)

Foresight in submodular optimization

16:50-17:30 James Lee (University of Washington)

Vertex-weighted metrics: Separators, flows, strings, and gravity

Abstracts

Noga Alon: Uri and Hat Guessing Games

Can Uri guess the color of his hat by looking at the color of yours? I will mention his results and conjectures on several variants of this problem, and will discuss a more recent variant, obtained jointly with Ben-Eliezer, Shangguan and Tamo, which settles a problem of Butler, Hajiaghayi, Kleinberg and Leighton.
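The flavor of such games can be illustrated by the classical modular-sum strategy for n players and n possible colors, under which exactly one player always guesses correctly. This is a standard textbook example, not a result from the talk; a minimal sketch:

```python
from itertools import product

def guesses(hats):
    """Player i sees every hat except their own and guesses the color
    that would make the total sum of all hats congruent to i (mod n)."""
    n = len(hats)
    return [(i - (sum(hats) - hats[i])) % n for i in range(n)]

# The true total is congruent to some s (mod n), so player s is right
# and everyone else is wrong: exactly one correct guess, always.
n = 3
for hats in product(range(n), repeat=n):
    assert sum(g == h for g, h in zip(guesses(list(hats)), hats)) == 1
```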

Robi Krauthgamer: Coresets for Clustering in Graphs of Bounded Treewidth

We initiate the study of coresets for clustering in graph metrics, i.e., the shortest-path metric of edge-weighted graphs, which is relevant, for example, for road networks and data visualization. A coreset is a small summary of the data that maintains a (1+epsilon)-approximation to the clustering objective, and it leads to significantly improved running time, storage, or communication in various settings.

I will discuss how every graph G of treewidth tw(G) admits a coreset of size O(tw(G)) for k-Median clustering (omitting dependence on k and epsilon). Moreover, this coreset can be constructed in near-linear time, and we have a matching lower bound Omega(tw(G)), which also justifies restricting the graph's topology.

Joint work with Vladimir Braverman, Lingxiao Huang, Shaofeng H.-C. Jiang, and Xuan Wu.

Artur Czumaj: Estimating basic graph parameters: Random sampling and beyond

In 2004, Feige presented a cute analysis of a simple sampling algorithm to estimate the number of edges in a graph, given oracle access to vertex degrees. We discuss various related results for estimating basic graph parameters; some based on simple random sampling, and some using more sophisticated sampling techniques.
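The naive estimator underlying this line of work is easy to state: sample vertices uniformly, query their degrees, and rescale, since the number of edges is n times the average degree divided by two. A sketch on an illustrative toy graph (the interesting question, analyzed by Feige, is how few samples suffice for a good approximation):

```python
import random

def estimate_edges(degree, n, samples):
    """Estimate the edge count of an n-vertex graph from uniform vertex
    samples using only degree queries: m = n * (average degree) / 2."""
    total = sum(degree(random.randrange(n)) for _ in range(samples))
    return n * (total / samples) / 2

# Toy example: a path on 5 vertices (4 edges), degree sequence 1,2,2,2,1.
deg = [1, 2, 2, 2, 1]
est = estimate_edges(lambda v: deg[v], n=5, samples=10_000)
# The estimator is unbiased, so with many samples est concentrates near 4.
```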

Michael Langberg: Can one network edge make a difference: The edge removal problem

The edge removal problem addresses the loss in communication rate when a single edge is removed from a given network. One might expect this loss to be bounded from above by the capacity of the removed edge; however, this need not be the case. This talk will survey a line of recent studies (and open questions) on the edge removal problem, including an overview of intriguing reductive connections that have recently emerged between the edge removal problem and seemingly unrelated problems in network communication.

Moshe Tennenholtz: Data Science with Game Theory Flavor

Design of data science algorithms and techniques, central to the Internet and on-line media, needs to be revolutionized. Current designs ignore participants’ strategic incentives. We are establishing an entirely new repertoire of incentive-compatible data science algorithms and techniques, with major applications in search and information retrieval, recommendation systems, regression, on-line learning, clustering and segmentation, and social network analysis. In this talk I will introduce our research agenda and discuss a couple of concrete contributions in more detail.

Michel Goemans: 25 Years of MAX CUT

I will recount the developments on the maximum cut problem and its semidefinite programming relaxation over the last 25 years. Along the way, I will discuss some of Uri Feige's numerous contributions.

Nati Linial: Hypertrees

A finite connected acyclic graph is called a tree. Both properties, connectivity and acyclicity, make very good sense in higher dimensions as well. This led Gil Kalai (1983) to define the notion of a d-dimensional hypertree for d>1. The study of hypertrees is an exciting area of research, and I will try to give you a taste of the many open questions and of what we know (and do not know) about them. No specific prior background is assumed.

The talk is based on several papers. The list of coauthors on these papers includes Roy Meshulam, Mishael Rosenthal, Yuval Peled, Ilan Newman and Yuri Rabinovich.

Inbal Talgam-Cohen: Beyond Worst Case Analysis in Algorithmic Game Theory

Algorithmic game theory is a natural meeting place for the worst-case approach of algorithm design and the average-case approach of economic mechanism design. The Beyond Worst-Case Analysis (BWCA) paradigm bridges these two approaches, leading to new insights on simplicity and robustness. We’ll show two examples of simple practical economic mechanisms (selling items separately and linear contracts), which are far from optimal in both the worst-case and average-case senses, but are near-optimal in an appropriate BWCA sense.

Based on joint works with Paul Duetting, Alon Eden, Michal Feldman, Ophir Friedler, Tim Roughgarden and Matt Weinberg.

Gideon Schechtman: No good dimension reduction in the trace class norm

I'll present a result of Assaf Naor, Gilles Pisier and myself: Let $S_1$ denote the Schatten--von Neumann trace class, i.e., the Banach space of all compact operators $T:\ell_2\to \ell_2$ whose trace class norm $\|T\|_{S_1}=\sum_{j=1}^\infty\sigma_j(T)$ is finite, where $\{\sigma_j(T)\}_{j=1}^\infty$ are the singular values of $T$. We prove that for arbitrarily large $n\in \mathbb{N}$ there exists a subset $S\subset S_1$ with $|S|=n$ that cannot be embedded with bi-Lipschitz distortion $O(1)$ into any $n^{o(1)}$-dimensional linear subspace of $S_1$. This extends a well-known result of Brinkman and Charikar (2003), who proved a similar result with $\ell_1$ replacing $S_1$. It stands in sharp contrast with the Johnson--Lindenstrauss lemma (1984), which says that the situation in $\ell_2$ is very different.
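For contrast, the ℓ2 phenomenon behind the Johnson--Lindenstrauss lemma is easy to demonstrate empirically: a scaled random Gaussian projection approximately preserves all pairwise distances. A small illustrative sketch (the points and dimensions are arbitrary, not from the talk):

```python
import math, random

random.seed(0)

def gaussian_project(points, k):
    """Project points from R^d to R^k using a random Gaussian matrix
    scaled by 1/sqrt(k), so squared norms are preserved in expectation."""
    d = len(points[0])
    R = [[random.gauss(0, 1) / math.sqrt(k) for _ in range(d)]
         for _ in range(k)]
    return [[sum(row[j] * p[j] for j in range(d)) for row in R] for p in points]

def dist(p, q):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# 20 random points in R^100 projected to R^500: every pairwise distance
# changes only by a small relative factor.
pts = [[random.gauss(0, 1) for _ in range(100)] for _ in range(20)]
proj = gaussian_project(pts, k=500)
worst = max(abs(dist(proj[i], proj[j]) / dist(pts[i], pts[j]) - 1)
            for i in range(20) for j in range(i + 1, 20))
```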

James Lee: Online algorithms, competitive analysis, and the shadows of gradient descent

Projected gradient descent of a convex function on a convex body is a remarkably powerful algorithmic tool, both for continuous and discrete optimization. Algorithms derived in this framework are often (automatically) robust, efficient, and able to operate effectively even in the presence of limited, noisy information about the underlying objective.

The projection of such algorithms to lower-dimensional spaces yields dynamics whose convergence can look quite complex (and perhaps unlike one's intuitive notion of "gradient descent"). Consider, for instance, training a deep neural network. Here one expects that, in the most interesting settings, the "formation" of low-level features informs the formation of higher-level features, which in turn refines the lower-level features, and so on, up and down the network. This class of dynamics is not well-understood, as often the natural formulations appear non-convex, and convergence is unclear.

I will present recently-developed algorithms for some classical problems in online algorithms and competitive analysis, where convergence of "local dynamics" on a convolutional network appears analogously complex. Recasting these algorithms as the "shadows" of gradient descent on a higher-dimensional space yields a powerful framework for analyzing their competitive ratio.

Amin Coja-Oghlan: Optimal group testing

In group testing the aim is to identify a small set of infected individuals within a large population. At our disposal we have a test procedure that can be applied to a group of individuals, with the test returning a positive result if at least one individual in the group is infected. All tests are conducted in parallel. The aim is to devise a (possibly randomised) test design with as few tests as possible that identifies the infected individuals with high probability. We show that there occurs a sharp information-theoretic/algorithmic phase transition as the number of tests passes an explicit threshold. The talk is based on joint work with Oliver Gebhard, Max Hahn-Klimroth and Philipp Loick.
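A simple baseline design illustrates the setting: pool individuals at random and decode with the classical COMP rule, which declares healthy exactly those who appear in some negative test. The parameters below are illustrative; the talk concerns designs with provably optimal test counts.

```python
import random

random.seed(1)

def comp_group_testing(infected, n, tests, p):
    """Random Bernoulli design: each pool contains each individual
    independently with probability p; a pool is positive iff it meets
    the infected set.  COMP declares infected whoever never appears
    in a negative pool."""
    pools = [{i for i in range(n) if random.random() < p} for _ in range(tests)]
    healthy = set()
    for pool in pools:
        if not (pool & infected):   # a negative test clears its whole pool
            healthy |= pool
    return set(range(n)) - healthy

infected = {3, 17, 42}
declared = comp_group_testing(infected, n=100, tests=60, p=0.3)
# COMP can over-report but never misses: an infected individual appears
# only in positive pools, hence is always declared.
assert infected <= declared
```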

Nikhil Bansal: Online vector balancing

Consider the setting where vectors in [-1,1]^n arrive online, and upon the arrival of vector v_t at time t, it must be immediately and irrevocably assigned a sign +1 or -1. The goal is to pick these signs to ensure that the norm of the signed sum of these vectors stays small. Such online discrepancy games were originally considered by Spencer in the 70's, but have received renewed attention due to applications in online algorithms. I will give an overview of the results and techniques in the area and describe several new recent results that obtain close to optimal bounds in many settings. I will also discuss several basic questions that remain completely open.

Based on joint works with Joel Spencer, Haotian Jiang, Sahil Singla and Makrand Sinha.
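The most naive online strategy already illustrates the problem: greedily give each arriving vector the sign that keeps the running signed sum shortest. A sketch (the near-optimal algorithms of the talk are considerably more refined):

```python
import math

def greedy_signs(vectors):
    """Assign each arriving vector the sign that minimizes the Euclidean
    norm of the running signed sum."""
    total = [0.0] * len(vectors[0])
    signs = []
    for v in vectors:
        plus = [t + x for t, x in zip(total, v)]
        minus = [t - x for t, x in zip(total, v)]
        if sum(x * x for x in plus) <= sum(x * x for x in minus):
            total, s = plus, +1
        else:
            total, s = minus, -1
        signs.append(s)
    return signs, math.sqrt(sum(x * x for x in total))

# In one dimension with unit vectors the greedy sum just oscillates
# between 0 and 1, so the final discrepancy is at most 1.
signs, norm = greedy_signs([[1.0]] * 7)
```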

Moni Naor: Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Suppose that you want to check whether a graph satisfies some (graph) property. The goal is to minimize the number of queries to the adjacency matrix. You are given as an “untrusted hint” an isomorphic copy of the graph. You must always be correct, but the number of queries made is only measured when the hint is correct.

Do such unlabeled certificates help? In the worst case? Per instance?

In this talk I will discuss labeled and unlabeled certificates, in particular in the context of "instance optimality". This is a per-input measure of the quality of an algorithm: its performance is compared to that of other algorithms on each individual input. This is in sharp contrast to worst-case and average-case complexity measures, where performance is compared either on the worst input or on an average one, respectively.

Joint work with Tomer Grossman and Ilan Komargodski


Moshe Babaioff: How to maximize profit when selling multiple items?

A seller of goods often does not know how much the buyer would be willing to pay for each of her goods. How should she therefore sell her items when aiming to maximize profit?

When there is only one item for sale, a simple deterministic mechanism maximizes the seller’s profit. In contrast, recent results show that profit-maximizing mechanisms for selling multiple items are very complex: they must use lotteries, and might even have to present infinitely many lotteries for the buyer to choose from. The talk will present a simple deterministic mechanism whose profit is not significantly degraded compared to the complex profit-maximizing mechanism, and then present results regarding the complexity of mechanisms that are almost optimal.

The talk is based on joint papers with Yannai Gonczarowski, Nicole Immorlica, Brendan Lucier, Noam Nisan and S. Matthew Weinberg.
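One concrete benchmark comparison behind such results: for an additive buyer, one can compute the revenue of selling the items separately at their optimal posted prices versus selling only the grand bundle. The distribution below is a standard illustrative example, not one from the talk:

```python
from itertools import product

def best_posted_price_revenue(values_probs):
    """Optimal posted-price revenue for one good with a discrete value
    distribution [(value, probability), ...]: maximize p * Pr[v >= p]."""
    return max(price * sum(q for v, q in values_probs if v >= price)
               for price, _ in values_probs)

# Two items; the buyer's value for each is independently 1 or 2, each w.p. 1/2.
item = [(1, 0.5), (2, 0.5)]
srev = 2 * best_posted_price_revenue(item)          # sell separately: 2.0

# The grand bundle is worth the sum of the item values: 2, 3 or 4.
bundle = [(v1 + v2, p1 * p2) for (v1, p1), (v2, p2) in product(item, item)]
brev = best_posted_price_revenue(bundle)            # bundle at price 3: 2.25
```

In this classic example bundling strictly beats selling separately, one reason single-item intuition fails for multiple items.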

Yuval Filmus: Foresight in submodular optimization

Feige showed in 1998 that the greedy algorithm is optimal for Set Cover and for its dual Max Cover. The goal of Max Cover is to choose a given number of sets which together cover the maximum number of elements. Partition Max Cover is the variant in which sets have a color, and we need to choose one set of each color; this generalizes Max SAT. Greedy is no longer optimal, and standard approaches use an LP or other relaxation followed by rounding. We describe an optimal combinatorial algorithm, which uses local search with foresight. Our algorithm can also be adapted to the more general case of monotone submodular optimization.

Joint work with Justin Ward (QMUL) from 2014.
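For reference, the greedy algorithm whose optimality Feige established simply picks, k times, the set covering the most yet-uncovered elements; its (1 - 1/e) guarantee is best possible for polynomial-time algorithms under standard complexity assumptions. A minimal sketch:

```python
def greedy_max_cover(sets, k):
    """Choose k sets, each time the one covering the most uncovered elements."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3}, {1, 2}, {3, 4}, {4, 5}]
chosen, covered = greedy_max_cover(sets, k=2)
# Greedy picks {1, 2, 3} first, then {4, 5}, covering all five elements.
```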

Mario Szegedy: How to find an approximate max cut of the cycle?

We survey local quantum and classical algorithms for finding an approximate MAXCUT of d-regular graphs. Then we zoom into the d=2 case, where we are (almost) able to give a precise analysis, at least in the classical case. Our investigation reveals a surprisingly rich combinatorial structure even in this simple case. Joint with Cupjin Huang, Yaoyun Shi, Fang Zhang, Ronald de Wolf and Jianxin Chen.
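For the cycle itself the exact optimum is elementary: an even cycle is bipartite, so all n edges can be cut, while an odd cycle admits at most n - 1. A brute-force check of this closed form (illustrative; the talk is about what local algorithms can achieve):

```python
from itertools import product

def max_cut_cycle(n):
    """Brute-force maximum cut of the n-cycle: try every 2-coloring of the
    vertices and count edges whose endpoints receive different colors."""
    return max(sum(c[i] != c[(i + 1) % n] for i in range(n))
               for c in product([0, 1], repeat=n))

# Matches the closed form: n for even cycles, n - 1 for odd ones.
for n in range(3, 9):
    assert max_cut_cycle(n) == (n if n % 2 == 0 else n - 1)
```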


Sigal Oren: Combinatorial Auctions with Endowment Effect

We consider combinatorial auctions with bidders that exhibit the endowment effect. Most previous work on cognitive biases in algorithmic game theory (e.g., [Kleinberg and Oren, EC’14] and its follow-ups) focused on analyzing their implications and on mitigating their negative consequences. In contrast, in this talk we will see that in some cases cognitive biases can be harnessed to obtain better outcomes. Specifically, we discuss Walrasian equilibria in combinatorial markets. It is well known that a Walrasian equilibrium is guaranteed to exist only in limited settings, e.g., when all valuations are gross substitutes, and can fail to exist in more general settings, e.g., when the valuations are submodular.

We consider combinatorial settings in which bidders exhibit the endowment effect, that is, their value for items increases with ownership. Our main result shows that when the valuations are submodular, even a mild degree of endowment effect is sufficient to guarantee the existence of Walrasian equilibria. In fact, we show that, in contrast to Walrasian equilibria with standard utility-maximizing bidders – in which the equilibrium allocation must be efficient – when bidders exhibit the endowment effect any local optimum can be an equilibrium allocation. We also provide lower bounds on the intensity of the endowment effect that the bidders must have in order to guarantee the existence of a Walrasian equilibrium in various settings.

Joint work with Moshe Babaioff and Shahar Dobzinski.

Adi Shamir: Security Challenges in Machine Learning

Machine learning has advanced at an amazing speed in the last ten years, but there are still many unsolved and poorly understood security problems in this field, ranging from adversarial examples to deepfakes. In this talk I will give a high-level survey of some of these problems, and describe several novel solutions that have recently been proposed.