Let *P* be a convex polyhedral cone. It is well known that the nonnegative rank of the slack matrix of *P* equals the minimum number of facets of a convex polyhedral cone *Q* which can be mapped onto *P* by a linear mapping. This number is called the *linear extension complexity* of *P*. Gouveia, Parrilo, and Thomas have generalized the connection between the minimum "size" of *Q* and a suitable factorization rank of the slack matrix to closed convex sets *Q* which need not be polyhedral. The most prominent case for applications is undoubtedly that of so-called spectrahedra, where *Q* is required to be the intersection of the (non-polyhedral) convex cone of (real) positive semidefinite matrices with a linear subspace. The situations where one takes the doubly nonnegative matrices or the copositive matrices instead of the positive semidefinite matrices are equally important.

In this post, I will write out some details of arguments in the context of these constructions.

##### **The setup**

We will be concerned with the situation in which there are two convex polyhedral cones $P_0 \subset P_1$. The case $P := P_0 = P_1$ corresponds to the situation above. We then ask for the minimum "complexity" of some type of convex cone *Q* for which there is a linear mapping $\pi$ such that

$P_0 \subset \pi(Q) \subset P_1. \ \ \ \ \ \ \ \ \ \ \ (*)$

The more general setup is of pertinence in applications in *Combinatorial Optimization*. There, one often has a convex polyhedral cone $P_0$ generated by the (finitely many) feasible solutions to the combinatorial optimization problem, and a convex polyhedral cone $P_1$ which is a *relaxation* of $P_0$: the cone $P_1$ strictly contains $P_0$. This "gap" is usually a consequence of the enormous complexity of the polyhedra considered in Combinatorial Optimization.

As an example, consider the famous Traveling Salesman Problem. In this case, $P_0$ is the homogenization of the *Traveling Salesman Polytope* (the latter is the convex hull of the edge sets of salesman tours). A large number of facet-defining inequalities is known for $P_0$, but, for more than 10 cities, describing $P_0$ completely in terms of linear inequalities is elusive. Some of the better known inequalities are the subtour inequalities and the 2-matching inequalities; both are exponentially large families of inequalities. The following question is now very interesting:

*Is there a linear/semidefinite/etc. (extended) formulation with polynomial "complexity" which is at least as good as the subtour+2-matching relaxation?*

Letting $P_1$ be the homogenization of the polytope defined by the subtour and 2-matching inequalities, this question is equivalent to asking for a low-complexity convex set *Q* as in (*). The Max-Cut problem is another example. Here, one may ask, for example, whether there exist small semidefinite (extended) formulations which improve on, say, the triangle inequalities. (For more on this question, see our abstract arXiv:1203.3961.)

The rigorous definition is the following. For the rest of this post, let $\mathcal C$ be a set of closed convex cones, each $C \in \mathcal C$ contained in a linear space $V_C$, and let $h\colon \mathcal C \to \mathbf N$ be a mapping ($h(C)$ will be the "complexity" of *C*). For simplicity, we require that the $V_C$ are finite dimensional. For convenience, we also assume that each *C* is full-dimensional in $V_C$, i.e., the interior of *C* is non-empty.

**Definition.** A **$\mathcal C$-extension** of a pair of convex polyhedral cones $P_0\subset P_1$ in a linear space $W$ is a $C\in \mathcal C$ together with a linear subspace *L* of $V_C$ and a linear mapping $\pi\colon V_C \to W$, such that $Q := C\cap L$ satisfies (*). The **$\mathcal C$-extension complexity** of $P_0\subset P_1$ is the infimum of $h(C)$ over all $\mathcal C$-extensions of $P_0\subset P_1$.

**Remark.** The extension complexity may be infinite.

**Remark.** For the definition to make sense in applications in combinatorial optimization, where one is typically interested in polyhedra $P_0\subset P_1$ which are not cones, the family $\mathcal C$ will have to be closed with respect to homogenization. This is the case in the examples mentioned above.

##### **Connection with factorization rank**

We now define the factorization rank of a matrix over a system of convex cones. For a closed convex cone *C* contained in a linear space *V*, we denote its *dual cone* by

$C^* := \bigl\{ \alpha \bigm| \left( \alpha \mid \xi \right) \ge 0 \text{ for all } \xi \in C \bigr\} \quad \subset V^*$,

where $V^*$ stands for the dual linear space. Recall that

$(C\cap D)^* = C^*+D^* \ \ \ \ \ \ \ \ \ \ \ \ \ \ (**)$

(in general, a closure must be taken on the right-hand side; the identity holds as stated, e.g., when the relative interiors of *C* and *D* intersect).
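As a quick numerical sanity check of (**), here is a toy computation of my own (not from the post), using SciPy's `linprog`: take $C = \mathbf R^2_+$ and the halfspace $D = \{x \mid x_1 - x_2 \ge 0\}$, so that $C^*$ is generated by the standard basis vectors and $D^*$ by $(1,-1)$. Every functional that is nonnegative on $C\cap D$ should then decompose as an element of $C^*$ plus an element of $D^*$.

```python
import numpy as np
from scipy.optimize import linprog

# C = R^2_+ (its dual is generated by e1, e2);
# D = {x | x1 - x2 >= 0} (its dual is the ray through (1, -1)).
# By (**), a functional lies in (C ∩ D)* iff it is a nonnegative
# combination of the generators of C* and D*.
Cdual = np.array([[1.0, 0.0], [0.0, 1.0]])  # generators of C*, as rows
Ddual = np.array([[1.0, -1.0]])             # generator of D*

def decomposes(a):
    """Feasibility LP: a = G @ lam with lam >= 0 (linprog's default bounds)."""
    G = np.vstack([Cdual, Ddual]).T         # 2 x 3 generator matrix
    res = linprog(c=np.zeros(G.shape[1]), A_eq=G, b_eq=a)
    return res.status == 0                  # status 0: feasible, 2: infeasible

# (2, -1) is nonnegative on C ∩ D = cone{(1,0),(1,1)} -> decomposes;
# (-1, 3) is negative on (1,0)                        -> does not.
print(decomposes(np.array([2.0, -1.0])), decomposes(np.array([-1.0, 3.0])))
```

The membership test is phrased as a feasibility LP over the (assumed known) generators of $C^*$ and $D^*$; for non-polyhedral cones one would need a conic solver instead.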

**Definition.** A **$\mathcal C$-factorization** of an $m\times n$-matrix *S* is a $C\in \mathcal C$ together with $\alpha_1,\dots,\alpha_m \in C^*$ and $\xi_1,\dots,\xi_n \in C$ such that for all $k,\ell$

$S_{k,\ell} = \left( \alpha_k \mid \xi_\ell \right)$.

The **$\mathcal C$-factorization rank** of *S* is the infimum of $h(C)$ over all $\mathcal C$-factorizations of *S*.
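For the family of nonnegative orthants — $\mathcal C = \{\mathbf R^q_+ \mid q \in \mathbf N\}$ with $h(\mathbf R^q_+) := q$ — a $\mathcal C$-factorization is precisely a nonnegative matrix factorization, and the $\mathcal C$-factorization rank is the nonnegative rank mentioned at the beginning. A minimal sketch, with a toy matrix of my own choosing:

```python
import numpy as np

# For C = R^q_+ (self-dual), a C-factorization S[k,l] = (alpha_k | xi_l)
# with alpha_k in C* and xi_l in C is exactly a nonnegative matrix
# factorization S = B @ Y with B, Y >= 0 and inner dimension q = h(C).
B = np.array([[1.0, 0.0, 0.0],   # rows of B are the functionals alpha_k
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
Y = np.array([[2.0, 0.0, 1.0],   # columns of Y are the cone elements xi_l
              [0.0, 3.0, 1.0],
              [1.0, 1.0, 0.0]])
S = B @ Y                        # every entry is a pairing (alpha_k | xi_l)
assert (B >= 0).all() and (Y >= 0).all()
assert np.allclose(S, [[2, 0, 1], [0, 3, 1], [3, 4, 2]])
```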

**Remark.** The factorization rank may be infinite.

The connection between extension complexity and factorization rank is via the slack matrix of $P_0$ "with respect to" $P_1$. For this, let *A* be an $m\times r$-matrix and *X* an $r\times n$-matrix, and suppose that $S := AX \ge 0$. Moreover, let $P_0$ be the convex cone generated by the columns of *X*, and $P_1 := \{x \mid Ax \ge 0\}$. We assume that the rank of *S* is *r*, i.e., both *A* and *X* are of rank *r*. In this case, we say that *S* is the **slack matrix** of $P_0 \subset P_1$.
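To make the definition concrete, here is a small computation of my own (not from the post) for the homogenization of the square $[-1,1]^2$: the columns of *X* are the vertices with homogenizing coordinate $1$, and the rows of *A* are the homogenized facet inequalities.

```python
import numpy as np

# Homogenization of the square [-1,1]^2: P0 is generated by the columns
# of X (vertices with homogenizing coordinate z = 1), and P1 = {x | Ax >= 0}
# with the homogenized facet inequalities z - x, z + x, z - y, z + y >= 0.
X = np.array([[ 1,  1, -1, -1],
              [ 1, -1, -1,  1],
              [ 1,  1,  1,  1]], dtype=float)
A = np.array([[-1,  0,  1],
              [ 1,  0,  1],
              [ 0, -1,  1],
              [ 0,  1,  1]], dtype=float)
S = A @ X                              # slack matrix: S[k, l] = (a_k | x_l)
assert (S >= 0).all()                  # S = AX >= 0, as required
assert np.linalg.matrix_rank(S) == 3   # rank of S is r = 3
print(S)
```

Each zero of $S$ records a vertex lying on a facet; the rank condition says that *A* and *X* both have full rank $r = 3$.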

**Theorem.** The $\mathcal C$-extension complexity of $P_0\subset P_1$ equals the $\mathcal C$-factorization rank of the slack matrix *S*.

**Proof.** For simplicity, denote by $x_1,\dots,x_n$ the columns of *X*, and by $a_1,\dots,a_m$ the rows of *A*, so that $S_{k,\ell} = \left(a_k\mid x_\ell\right)$.

We first show that the $\mathcal C$-extension complexity is **at least** the $\mathcal C$-factorization rank, by constructing a factorization of *S* from an extension of $P_0\subset P_1$. Let $(C,L,\pi)$ be such an extension, and $Q:= C\cap L$. For every $\ell$, since $x_\ell \in P_0 \subset \pi(Q)$, there exists $\xi_\ell \in Q$ such that $\pi(\xi_\ell) = x_\ell$. For every *k*, it follows from $\left(\pi^*(a_k)\mid \xi\right) = \left(a_k \mid \pi(\xi) \right) \ge 0$ for all $\xi \in Q$ (recall $\pi(Q)\subset P_1$) that $\pi^*(a_k) \in Q^*$. Thus, by (**), there exist $\alpha_k \in C^*$ and $\gamma_k\in V^*$ with $\left(\gamma_k\mid \lambda\right) = 0$ for all $\lambda\in L$, such that $\pi^*(a_k) = \alpha_k + \gamma_k$. Since $\xi_\ell \in L$, we conclude

$S_{k,\ell} = \left( a_k \mid x_\ell \right) = \left( a_k \mid \pi(\xi_\ell) \right) = \left( \pi^*(a_k) \mid \xi_\ell \right) = \left( \alpha_k + \gamma_k \mid \xi_\ell \right) = \left( \alpha_k \mid \xi_\ell \right)$,

i.e., we have a factorization.
Second, we show that the $\mathcal C$-extension complexity is **at most** the $\mathcal C$-factorization rank, by constructing an extension of $P_0\subset P_1$ from a factorization of *S*. Let $(C,\alpha_1,\dots,\alpha_m,\xi_1,\dots,\xi_n)$ be such a factorization. We define

$Q' := \Bigl\{ \left(\begin{smallmatrix}x\\\xi\end{smallmatrix}\right) \Bigm| \left(a_k\mid x\right) = \left(\alpha_k \mid \xi\right) \ \forall k, \ \xi \in C \Bigr\}$.

In a moment, we will show that the projection of $Q'$ onto the *x*-coordinate lies between $P_0$ and $P_1$. This is sufficient to prove the theorem, even though, ostensibly, $Q'$ is not the intersection of $C$ with a linear subspace. But in fact it is: since *A* has full column rank, the projection of $Q'$ onto the $\xi$-coordinate is a linear bijection between $Q'$ and

$Q := \Bigl\{ \xi \Bigm| \bigl( \left(\alpha_k \mid \xi\right) \bigr)_k \in \mbox{im}\,A \Bigr\} \cap C$,

which has the desired form.
Now, to show that the projection of $Q'$ onto the *x*-coordinate lies between $P_0$ and $P_1$, we first argue that it is contained in $P_1$: if $\left(\begin{smallmatrix}x\\\xi\end{smallmatrix}\right) \in Q'$, then $(Ax)_k = (\alpha_k\mid \xi) \ge 0$ because $\alpha_k \in C^*$ and $\xi \in C$, and hence $x \in P_1$. For the other inclusion, we only have to check that the columns of *X* are contained in the image under the projection. But for the column $x_\ell$, we have $\left(\begin{smallmatrix}x_\ell\\\xi_\ell\end{smallmatrix}\right) \in Q'$ since $(a_k\mid x_\ell) = S_{k,\ell} = (\alpha_k\mid\xi_\ell)$ holds. This concludes the proof of the theorem.
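As a sanity check, one can run the second half of the proof on a toy instance of my own: the square cone from before, with the trivial factorization of its slack matrix over $C = \mathbf R^4_+$ given by $\alpha_k = e_k$ and $\xi_\ell$ the $\ell$-th column of *S*. The constraints defining $Q'$ then read $Ax = \xi$, and the vertex columns lift into $Q'$ exactly as in the proof.

```python
import numpy as np

# Square cone as before; trivial C-factorization of S over C = R^4_+
# with alpha_k = e_k (rows of the identity) and xi_l = l-th column of S.
X = np.array([[1, 1, -1, -1], [1, -1, -1, 1], [1, 1, 1, 1]], dtype=float)
A = np.array([[-1, 0, 1], [1, 0, 1], [0, -1, 1], [0, 1, 1]], dtype=float)
S = A @ X
alphas = np.eye(4)   # alpha_k in C* = R^4_+
xis = S.copy()       # xi_l in C = R^4_+ (columns of S are nonnegative)

# Q' = {(x, xi) : (a_k | x) = (alpha_k | xi) for all k, xi in C};
# check that each column x_l of X lifts to (x_l, xi_l) in Q':
for l in range(4):
    xi = xis[:, l]
    assert (xi >= 0).all()                        # xi_l lies in C
    assert np.allclose(A @ X[:, l], alphas @ xi)  # (a_k|x_l) = (alpha_k|xi_l)

# Any (x, xi) in Q' has Ax = xi >= 0, i.e. x in P1 — e.g. a nonnegative
# combination of the lifted vertices stays feasible:
t = np.array([0.3, 0.1, 0.4, 0.2])
x, xi = X @ t, xis @ t
assert (A @ x >= 0).all() and np.allclose(A @ x, alphas @ xi)
```

The trivial factorization has inner dimension $4$, matching the number of facets of the square; a smaller $\mathcal C$-factorization would yield a smaller extension via the same construction.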