# Bonn, matroids, and Farkas’ Lemma

## Bonn

The Hausdorff Institute in Bonn is hosting an excellent trimester program in combinatorial optimization, and last week was devoted to a workshop on rigidity, submodularity and discrete convexity. The organizers of the workshop, Satoru Iwata, Tibor Jordán, and Jan Vondrák, assembled an impressive lineup of speakers, so we had a great overview of the state of the art in these fields. I am not going into details about the talks here (you will soon be able to watch the videos on the YouTube channel of the Hausdorff Institute); let me just mention some quick impressions.

• Clearly, one of the hot topics of recent years is submodular maximization. There were plenty of interesting talks on questions like parallelization, derandomization, restricted queries, and online problems, each with a different spin on the greedy algorithm.
• Great to see that research on discrete convexity is also going strong. Kazuo Murota's seminal work on these concepts led to several applications, and recently new connections have been discovered to auction theory [1] and to multiflow theory [2]. Another framework that keeps popping up in various graph theoretic applications is the Frank-Jordán theorem; András Frank showed us how it leads to good characterizations for the existence of degree-restricted simple graphs with given properties.
• More and more rigidity concepts turn out to be analyzable using combinatorial methods. Although the central question of the area, the characterization of 3-dimensional rigidity, is still open, the search for a proof led to several new concepts that turned out to be interesting in their own right.
• A surprising new result by Lee, Sidford, and Wong [3], that they dubbed the “revenge of the ellipsoid method”, may turn out to be a game-changer in the way we think about efficient algorithms in combinatorial optimization. In particular, their result implies $\tilde{O}(n^3)$ running time for submodular flows, which is far better than anything we can currently hope to achieve by combinatorial methods.

These workshops are also ideal for conversations about different approaches to understanding and teaching basic concepts in combinatorial optimization. A topic that came up during breakfast is the dilemma we face when teaching Farkas’ Lemma. There are essentially three different ways to do it:

• Start with the simplex method, and prove Farkas’ Lemma using the finite termination of the algorithm.
• Introduce convex separation, and prove Farkas’ Lemma using a combination of convex separation and the existence of basic solutions.
• Give a self-contained combinatorial proof of Farkas’ Lemma using induction.

All three methods have their merits, and the choice should probably depend on the skills and predilections of the students. Instead of discussing my real-world teaching preferences here, I will describe a purely hypothetical situation: how would I teach Farkas’ Lemma to students who already know (and love) matroid theory but who are unaware of linear programming duality? I would of course teach them the oriented matroid version!

## Warm-up: oriented matroids

The easiest way to define oriented matroids for our purpose is by the cycle axioms. As we know from matroid theory, the family of cycles of a matroid is a clutter ${\mathcal C}$ that has the following property (we use the stronger version of the cycle axiom):

If $C_1$ and $C_2$ are in ${\mathcal C}$, $e \in C_1 \cap C_2$, and $f \in C_1 \setminus C_2$, then there is a cycle $C_3$ with $f \in C_3 \subseteq C_1 \cup C_2 - e$.
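As a quick sanity check, the strong cycle axiom can be verified by brute force on a small example. The following sketch (a toy instance of my own choosing, not taken from any of the talks) enumerates the cycles of the graphic matroid of $K_4$ and tests the axiom for every admissible choice of $C_1, C_2, e, f$:

```python
from itertools import combinations

# Edges of K4 on vertices {0, 1, 2, 3}; cycles of the graphic matroid
# are exactly the edge sets of the (undirected) cycles of the graph.
EDGES = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]

def is_single_cycle(edge_set):
    """True iff the edges form one cycle: every incident vertex has
    degree 2 and the edge set is connected."""
    deg, adj = {}, {}
    for u, v in edge_set:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        adj.setdefault(u, []).append(v)
        adj.setdefault(v, []).append(u)
    if any(d != 2 for d in deg.values()):
        return False
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen == set(adj)

circuits = [frozenset(c) for k in range(3, len(EDGES) + 1)
            for c in combinations(EDGES, k) if is_single_cycle(c)]

def strong_elimination_holds():
    """For all cycles C1 != C2, e in C1 ∩ C2 and f in C1 \\ C2,
    some cycle C3 satisfies f in C3 ⊆ (C1 ∪ C2) - e."""
    return all(any(f in C3 and C3 <= (C1 | C2) - {e} for C3 in circuits)
               for C1 in circuits for C2 in circuits if C1 != C2
               for e in C1 & C2 for f in C1 - C2)

print(len(circuits), strong_elimination_holds())  # prints: 7 True
```

The 7 cycles are the 4 triangles and the 3 Hamiltonian cycles of $K_4$.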

The oriented cycle axioms are very similar, the difference being that cycles are not sets but unordered pairs of disjoint sets. The first axiom is that if we take the union of the two sets in each pair, the set family we obtain is a clutter (in particular, we cannot have two different orientations of the same cycle in the oriented matroid). The second axiom is the following.

If $(C^+_1,C^-_1)$ and $(C^+_2,C^-_2)$ are in ${\mathcal C}$, $e \in C^+_1 \cap C^+_2$, and $f \in (C^+_1 \setminus C^+_2) \cup (C^-_1 \setminus C^-_2)$, then there is a cycle $(C^+_3,C^-_3)$ with $C^+_3 \subseteq C^+_1 \cup C^-_2 - e$, $C^-_3 \subseteq C^-_1 \cup C^+_2 - e$, and $f \in C^+_3 \cup C^-_3$.

To have some intuition about this axiom, consider the case of cycles in a directed graph, with the partitioning corresponding to the two directions of the arcs along the cycle. You can convince yourself easily that the oriented cycle axioms hold.
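Here is a small brute-force check of the second axiom in the directed-graph case, on a toy digraph of my own choosing with arcs $a=(0,1)$, $b=(1,2)$, $c=(0,2)$, $d=(2,3)$, $e=(0,3)$. Each cycle is stored as a pair of arc sets relative to one traversal direction, and the checker also considers the reversed representatives, since the pairs are unordered:

```python
# Signed cycles of the toy digraph, one representative orientation each.
circuits = [
    (frozenset("ab"), frozenset("c")),   # 0->1->2->0: a, b forward, c backward
    (frozenset("cd"), frozenset("e")),   # 0->2->3->0: c, d forward, e backward
    (frozenset("abd"), frozenset("e")),  # 0->1->2->3->0
]

def signed_elimination(circuits):
    """Check: for e in C1+ ∩ C2+ and f in (C1+ \\ C2+) ∪ (C1- \\ C2-) there
    is a cycle C3 with C3+ ⊆ C1+ ∪ C2- - e, C3- ⊆ C1- ∪ C2+ - e, f in C3."""
    versions = circuits + [(m, p) for p, m in circuits]  # both orientations
    for p1, m1 in versions:
        for p2, m2 in versions:
            for e in p1 & p2:
                for f in (p1 - p2) | (m1 - m2):
                    if not any(p3 <= (p1 | m2) - {e} and
                               m3 <= (m1 | p2) - {e} and
                               f in (p3 | m3)
                               for p3, m3 in versions):
                        return False
    return True

print(signed_elimination(circuits))  # True
```

For instance, eliminating $c$ from the reoriented triangle $(\{c\},\{a,b\})$ and $(\{c,d\},\{e\})$ forces the third cycle in the orientation $(\{e\},\{a,b,d\})$, exactly as the axiom predicts.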

It is also useful to understand the case of linear oriented matroids represented over ${\mathbb R}$. Here cycles represent minimal dependent sets of vectors. Each such minimal dependent set has a unique partition into two subsets corresponding to the positive and negative coefficients in the linear dependency (note that the positive and negative parts can be switched; this is why the pair is unordered). Let $C_1$ and $C_2$ be cycles and $e$ a vector in their intersection. The vector $e$ can be uniquely written as a linear combination $\lambda$ of the vectors in $C_1-e$, and also as a linear combination $\mu$ of the vectors in $C_2-e$. Let us choose an arbitrary vector $f \in C^+_1 \setminus C^+_2$; comparing the sign patterns shows that $\lambda_f \neq \mu_f$. Now observe that $\lambda-\mu$ is a non-trivial linear dependency on $C_1 \cup C_2-e$ whose support contains $f$, and its sign pattern satisfies the second oriented cycle axiom. Taking a dependency with this sign pattern and inclusionwise minimal support gives the desired oriented cycle.
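The $\lambda-\mu$ argument can be replayed numerically. Below is a sketch on a hypothetical configuration of four vectors in ${\mathbb R}^2$, any two of which are independent, so that $C_1 = \{v_0,v_1,v_2\}$ and $C_2 = \{v_0,v_1,v_3\}$ are cycles sharing $e = v_0$:

```python
import numpy as np

# A hypothetical configuration: any two of the four vectors are independent.
v = [np.array([1.0, 0.0]), np.array([0.0, 1.0]),
     np.array([1.0, 1.0]), np.array([1.0, -1.0])]

# Write e = v0 as a linear combination of C1 - e and of C2 - e:
lam = np.linalg.solve(np.column_stack([v[1], v[2]]), v[0])  # v0 = lam[0]*v1 + lam[1]*v2
mu = np.linalg.solve(np.column_stack([v[1], v[3]]), v[0])   # v0 = mu[0]*v1 + mu[1]*v3

# lam - mu, read over the ground set {v1, v2, v3}, is a linear dependency
# on C1 ∪ C2 - e whose support contains f = v2:
coeff = {1: lam[0] - mu[0], 2: lam[1], 3: -mu[1]}
residual = sum(c * v[i] for i, c in coeff.items())

assert np.allclose(residual, 0)                 # it really is a dependency
assert coeff[1] < 0 < coeff[2] and coeff[3] < 0  # sign pattern ({v2}, {v1, v3})
```

The resulting sign pattern $(\{v_2\},\{v_1,v_3\})$ is itself a cycle here, since the dependency already has minimal support.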

Minors of an oriented matroid $M$ can be obtained similarly to matroid minors. Let $E$ be the ground set of $M$. Deletion of a subset $F\subseteq E$, denoted by $M\setminus F$, means that only cycles disjoint from $F$ are preserved. Contraction of $F$ ($M/F$) consists of the members of $\{(C^+\setminus F,C^-\setminus F): (C^+,C^-) \in {\mathcal C}\}$ with inclusionwise minimal non-empty union. It is a bit tricky to show that this satisfies the first axiom, so let me give a short proof by contradiction. Suppose that $(C^+_1\setminus F,C^-_1 \setminus F)$ and $(C^+_2\setminus F,C^-_2\setminus F)$ are distinct cycles in $M/F$ such that $(C^+_1 \cup C^-_1)\setminus F = (C^+_2 \cup C^-_2)\setminus F$. Then there are elements $e \in ((C^+_1 \cap C^+_2) \cup (C^-_1 \cap C^-_2))\setminus F$ and $f \in ((C^+_1 \setminus C^+_2) \cup (C^-_1 \setminus C^-_2))\setminus F$: otherwise the two restricted cycles would either coincide or be reorientations of each other, contradicting their distinctness. By the second axiom, there is a cycle $(C^+_3,C^-_3)$ such that $f \in C^+_3 \cup C^-_3 \subseteq (C^+_1 \cup C^-_1 \cup C^+_2 \cup C^-_2)-e$, which contradicts the minimality of $(C^+_1 \cup C^-_1)\setminus F$.
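Both operations are easy to implement directly on a list of signed cycles. Here is a sketch on a toy digraph of my own choosing, with arcs $a=(0,1)$, $b=(1,2)$, $c=(0,2)$, $d=(2,3)$, $e=(0,3)$:

```python
# Signed cycles of the toy digraph, one representative orientation each.
circuits = [
    (frozenset("ab"), frozenset("c")),
    (frozenset("cd"), frozenset("e")),
    (frozenset("abd"), frozenset("e")),
]

def delete(circuits, F):
    """M \\ F: keep only the cycles disjoint from F."""
    return {(p, m) for p, m in circuits if not ((p | m) & F)}

def contract(circuits, F):
    """M / F: restrict every cycle to the complement of F and keep the
    pairs with inclusionwise minimal non-empty union."""
    restricted = {(p - F, m - F) for p, m in circuits}
    restricted = {(p, m) for p, m in restricted if p | m}
    return {(p, m) for p, m in restricted
            if not any((q | l) < (p | m) for q, l in restricted)}

# Contracting arc c merges its endpoints: {a, b} becomes a directed cycle,
# while d and e become oppositely oriented parallel arcs.
assert contract(circuits, {"c"}) == {(frozenset("ab"), frozenset()),
                                     (frozenset("d"), frozenset("e"))}
assert delete(circuits, {"c"}) == {(frozenset("abd"), frozenset("e"))}
```

Note how the restriction of the third cycle, $(\{a,b,d\},\{e\})$, is discarded in the contraction because its union strictly contains that of $(\{d\},\{e\})$.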

Not all matroids can be oriented; however, we have the nice property that an orientation of a matroid gives rise to an orientation of the dual. Take a cycle $D$ of the dual matroid, that is, a co-cycle of $M$, and consider two elements $e,f$ of $D$. Since $D$ is a co-cycle, there must be a cycle $C$ such that $C \cap D =\{e,f\}$. Furthermore, by the first oriented cycle axiom and the properties of contraction, either every such cycle $C$ has $e$ and $f$ in the same class, or all of them have $e$ and $f$ in different classes. Define $e$ and $f$ to be equivalent if they are in different classes in such cycles. It can be seen using the second axiom that this is an equivalence relation with at most two equivalence classes. This defines the partitioning of $D$, and the orientation of the dual matroid obtained this way satisfies the oriented cycle axioms. It is also easy to see that the minor operations are swapped for the dual, i.e. deletion in the dual is the same as taking the dual of the contraction. The following important property can also be derived from the axioms.

(*) If $(C^+,C^-)$ is a cycle, $(D^+,D^-)$ is a co-cycle, and $f \in C^+ \cap D^+$, then $(C^+ \cap D^-) \cup (C^- \cap D^+)$ is non-empty.

Proof. By the definition of the orientation of co-cycles, the statement is true if $|(C^+ \cup C^-) \cap (D^+ \cup D^-)|=2$. Suppose we have a counterexample with smallest intersection, and let $e$ be an element of the intersection different from $f$. There is a cycle $(C^+_2,C^-_2)$ such that $(C^+_2 \cup C^-_2) \cap (D^+ \cup D^-)=\{e,f\}$. The second axiom applied to this cycle and $(C^+,C^-)$ gives a cycle that is a smaller counterexample, a contradiction.

What does the orientation of the dual mean in the two cases that we examined earlier? In a graphic matroid, the same digraph that defined the orientation of the cycles also defines an orientation of the cuts, and this is precisely the orientation of the dual matroid. The situation is a bit less obvious for matroids represented over ${\mathbb R}$. If $A$ is the matrix of the representation, $L$ is the subspace spanned by the rows of $A$, and $L^\bot$ is its orthogonal complement, then the oriented cycles correspond to the signed minimal supports of vectors in $L^\bot$, while the oriented co-cycles correspond to signed minimal supports of vectors in $L$.
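This correspondence between $L$ and $L^\bot$ can be checked numerically. The sketch below uses a hypothetical rank-2 matrix with four columns in general position, so every cycle and co-cycle has support of size 3; property (*) then reduces to orthogonality of the sign patterns:

```python
import numpy as np
from itertools import combinations

# A hypothetical rank-2 representation with columns in general position.
A = np.array([[1.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 1.0, -1.0]])

def sign_pattern(y, tol=1e-9):
    return tuple(0 if abs(t) < tol else (1 if t > 0 else -1) for t in y)

# Co-cycles: minimal-support sign patterns in the row space L, obtained
# here by the combination of the two rows that vanishes at coordinate i.
cocircuits = []
for i in range(4):
    alpha = np.array([-A[1, i], A[0, i]])  # alpha @ A[:, i] == 0
    cocircuits.append(sign_pattern(alpha @ A))

# Cycles: minimal-support sign patterns in L-perp = ker A, one for each
# 3-subset of columns (the unique dependency on that subset).
circuits = []
for S in combinations(range(4), 3):
    cols = list(S)
    x3 = np.linalg.svd(A[:, cols])[2][-1]  # kernel vector of the 2x3 submatrix
    x = np.zeros(4)
    x[cols] = x3
    circuits.append(sign_pattern(x))

def star_property(c, d):
    """Property (*): if some coordinate has the same sign in c and d,
    another coordinate must carry opposite signs."""
    agree = any(ci * di > 0 for ci, di in zip(c, d))
    oppose = any(ci * di < 0 for ci, di in zip(c, d))
    return (not agree) or oppose

print(all(star_property(c, d) for c in circuits for d in cocircuits))  # True
```

Property (*) holds automatically here because each cycle vector is orthogonal to each co-cycle vector, so their coordinatewise products cannot all have the same sign.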

## Finale: Farkas' Lemma

A directed cycle is an oriented cycle in which one class is empty; a directed co-cycle is defined analogously. Farkas’ Lemma for oriented matroids is simply the following.

Theorem. Every element of an oriented matroid is either in a directed cycle or in a directed co-cycle.

Proof. The statement that an element cannot be in both is implied by property (*). We prove the other direction for a given element $e$ by induction on the size of the matroid. The statement is trivial if the size is 1. Otherwise let $f$ be another element. If $e$ is in a directed cycle in $M\setminus f$ or in a directed co-cycle in $M/f$, then we are done, as these are also a directed cycle and a directed co-cycle of $M$. Thus we can assume by induction that $e$ is in a directed co-cycle $(\emptyset, D^-)$ in $M\setminus f$ and in a directed cycle $(\emptyset,C^-)$ in $M/f$; furthermore, we may assume that $(\{f\},D^-)$ is a co-cycle and $(\{f\},C^-)$ is a cycle in $M$, since in the remaining cases $e$ would already be in a directed co-cycle or directed cycle of $M$. However, this contradicts property (*) applied to the element $f$.

How does this relate to the original Farkas’ Lemma for a linear system $\{Ax=b, x \geq 0\}$? Let $M$ be the oriented matroid defined by the columns of $(A, -b)$. The system $\{Ax=b, x \geq 0\}$ is solvable if and only if $-b$ is in a directed cycle. Otherwise $-b$ is in a directed co-cycle, which means that the rows of $(A,-b)$ span a non-negative vector whose last coordinate is positive. This is equivalent to the existence of a vector $y$ such that $yA \geq 0$ and $yb < 0$, as required by Farkas’ Lemma.
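To make the translation concrete, here is a small self-contained sketch (the $2\times 3$ matrix and both right-hand sides are hypothetical). It searches for a directed cycle containing $-b$ by enumerating cycles of the columns of $(A,-b)$, extracts a non-negative solution when one exists, and verifies an explicit dual certificate $y$ in the infeasible case:

```python
import numpy as np
from itertools import combinations

def circuit_sign(B, tol=1e-9):
    """If the columns of B form a cycle (minimal dependent set), return the
    unique (up to scaling) dependency x with B @ x == 0, else None."""
    k = B.shape[1]
    _, s, vh = np.linalg.svd(B)
    if int(np.sum(s > tol)) != k - 1:
        return None          # columns independent, or kernel dimension > 1
    x = vh[-1]
    if np.any(np.abs(x) < tol):
        return None          # a proper subset is already dependent
    return x

def directed_cycle_with_last(M, tol=1e-9):
    """Search for a directed cycle of the column oriented matroid of M
    containing the last column; return (columns, coefficients) or None."""
    n = M.shape[1]
    for k in range(1, n + 1):
        for S in combinations(range(n - 1), k - 1):
            cols = list(S) + [n - 1]
            x = circuit_sign(M[:, cols], tol)
            if x is not None and (np.all(x > tol) or np.all(x < -tol)):
                return cols, x
    return None

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# Feasible instance: -b1 lies in a directed cycle, and normalizing the
# coefficient of -b1 turns the dependency into a solution of Ax = b1, x >= 0.
b1 = np.array([2.0, 3.0])
cols, x = directed_cycle_with_last(np.column_stack([A, -b1]))
sol = np.zeros(3)
sol[cols[:-1]] = x[:-1] / x[-1]
assert np.allclose(A @ sol, b1) and np.all(sol >= -1e-9)

# Infeasible instance: no directed cycle contains -b2; the certificate is a
# directed co-cycle, i.e. a vector y with yA >= 0 and y.b2 < 0.
b2 = np.array([1.0, -1.0])
assert directed_cycle_with_last(np.column_stack([A, -b2])) is None
y = np.array([0.0, 1.0])
assert np.all(y @ A >= 0) and y @ b2 < 0
```

Of course, this brute-force enumeration is exponential in the number of columns; it is only meant to illustrate the dictionary between the two statements, not to compete with linear programming.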

I think the upshot of all this is that the oriented matroid version of Farkas’ Lemma is a purely combinatorial statement that, to my knowledge, cannot be proved using convex separation or a perturbed version of the simplex method. In fact, Bland [4] invented his famous pivoting rule precisely because he wanted a finite simplex method for oriented matroids, and other known rules, like the lexicographic rule, did not seem to generalize to this setting.

The computational complexity of the problem is also interesting. In the cycle oracle model, there is no polynomial algorithm for Farkas’ Lemma, because an exponential number of oracle calls may be needed [5]. A more powerful oracle might do the trick, but for any oracle that is strongly polynomial time computable in the linear case, a polynomial-time algorithm for the oriented matroid Farkas Lemma would mean a strongly polynomial algorithm for the linear Farkas Lemma, which is one of the great open problems in optimization.

To finish the post after this long detour, here is a link to the wiki of the trimester program, which collects the open questions that were raised by the participants.

1. K. Murota, A. Shioura, Z. Yang, Time Bounds for Iterative Auctions: A Unified Approach by Discrete Convex Analysis, Technical Report link
2. H. Hirai, A dual descent algorithm for node-capacitated multiflow problems and its applications, arXiv link
3. Y.T. Lee, A. Sidford, S.C. Wong, A Faster Cutting Plane Method and its Implications for Combinatorial and Convex Optimization, arXiv link
4. R.G. Bland, New finite pivoting rules for the simplex method, DOI link, PDF link
5. A. Bachem, A. Reinhold, On the Complexity of the Farkas-Property of Oriented Matroids, Technical Report