FOCS 2011
1A

Min-Max Graph Partitioning and Small-Set Expansion
We study graph partitioning problems from a min-max perspective, in which an input graph on $n$ vertices should be partitioned into $k$ parts, and the objective is to minimize the maximum number of edges leaving a single part. The two main versions we consider are where the $k$ parts need to be of equal size, and where they must separate a set of $k$ given terminals. We consider a common generalization of these two problems, and design for it an $O(\sqrt{\log n\log k})$-approximation algorithm. This improves over an $O(\log^2 n)$ approximation for the second version due to Svitkina and Tardos \cite{ST04}, and roughly $O(k\log n)$ approximation for the first version that follows from other previous work. We also give an improved $O(1)$-approximation algorithm for graphs that exclude any fixed minor. Along the way, we study the $\rho$-Unbalanced Cut problem, whose goal is to find, in an input graph $G=(V,E)$, a set $S\subseteq V$ of size $|S|=\rho n$ that minimizes the number of edges leaving $S$. We provide a bicriteria approximation of $O(\sqrt{\log{n}\log{(1/\rho)}})$; when the input graph excludes a fixed minor we improve this guarantee to $O(1)$. Note that the special case $\rho = 1/2$ is the well-known Minimum Bisection problem, and indeed our bounds generalize those of Arora, Rao and Vazirani \cite{ARV08} and of Klein, Plotkin, and Rao~\cite{KPR93}. Our algorithms work also for the closely related Small Set Expansion (SSE) problem, which asks for a set $S\subseteq V$ of size $0<|S| \leq \rho n$ with minimum edge-expansion, and was suggested recently by Raghavendra and Steurer~\cite{RS10}. In fact, our algorithm handles more general, weighted versions of both problems. Previously, an $O(\log n)$ true approximation for both $\rho$-Unbalanced Cut and Small Set Expansion follows from R\"acke~\cite{Racke08}.
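To make the $\rho$-Unbalanced Cut objective from the abstract concrete, here is a toy brute-force sketch (our own illustration; the function name and the example graph are ours, and this exhaustive search has nothing to do with the paper's approximation algorithms, which would be needed for any non-trivial instance):

```python
from itertools import combinations

def unbalanced_cut(edges, n, rho):
    """Exhaustively find S with |S| = rho*n minimizing the number of
    edges leaving S (the rho-Unbalanced Cut objective). Exponential
    time; purely to illustrate the objective on a tiny graph."""
    size = round(rho * n)
    best_S, best_cut = None, float("inf")
    for S in combinations(range(n), size):
        S = set(S)
        cut = sum(1 for u, v in edges if (u in S) != (v in S))
        if cut < best_cut:
            best_S, best_cut = S, cut
    return best_S, best_cut

# Two triangles joined by a single bridge edge; with rho = 1/2 (the
# Minimum Bisection special case) the optimum cuts only the bridge.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
S, cut = unbalanced_cut(edges, 6, 0.5)
print(S, cut)  # one triangle's vertex set, cut value 1
```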

The Graph Minor Algorithm with Parity Conditions
We generalize the seminal Graph Minor algorithm of Robertson and Seymour to the parity version. We give polynomial time algorithms for the following problems: \begin{enumerate} \item the parity $H$-minor (Odd $K_k$-minor) containment problem, and \item the disjoint paths problem with $k$ terminals and the parity condition for each path, \end{enumerate} as well as several other related problems. We present an $O(m \alpha(m,n) n)$ time algorithm for these problems for any fixed $k$, where $n,m$ are the number of vertices and the number of edges, respectively, and the function $\alpha(m,n)$ is the inverse of the Ackermann function (see Tarjan \cite{tarjan}). Note that the first problem includes the problem of testing whether or not a given graph contains $k$ disjoint odd cycles (which was recently solved in \cite{tony,oddstoc}), if $H$ consists of $k$ disjoint triangles. The algorithm for the second problem generalizes the Robertson-Seymour algorithm for the $k$-disjoint paths problem. As with the Robertson-Seymour algorithm for the $k$-disjoint paths problem for any fixed $k$, in each iteration, we would like to either use the presence of a huge clique minor, or alternatively exploit the structure of graphs in which we cannot find such a minor. Here, however, we must take care of the parity of the paths and can only use an ``odd clique minor''. This requires new techniques to describe the structure of the graph when we cannot find such a minor. We emphasize that our proof of the correctness of the above algorithms does not depend on the full power of the Graph Minor structure theorem \cite{RS16}. Although the original Graph Minor algorithm of Robertson and Seymour does depend on it, and our proof does have similarities to their arguments, we can avoid the structure theorem by building on the shorter proof of the correctness of the graph minor algorithm in \cite{kw}. Consequently, we are able to avoid much of the heavy machinery of the Graph Minor structure theory. Our proof is less than 50 pages.

Separator Theorems for Minor-Free and Shallow Minor-Free Graphs with Applications
Alon, Seymour, and Thomas generalized Lipton and Tarjan's planar separator theorem and showed that a $K_h$-minor-free graph with $n$ vertices has a separator of size at most $h^{3/2}\sqrt n$. They gave an algorithm that, given a graph $G$ with $m$ edges and $n$ vertices and given an integer $h\geq 1$, outputs in $O(\sqrt{hn}m)$ time such a separator or a $K_h$-minor of $G$. Plotkin, Rao, and Smith gave an $O(hm\sqrt{n\log n})$ time algorithm to find a separator of size $O(h\sqrt{n\log n})$. Kawarabayashi and Reed improved the bound on the size of the separator to $h\sqrt n$ and gave an algorithm that finds such a separator in $O(n^{1 + \epsilon})$ time for any constant $\epsilon > 0$, assuming $h$ is constant. This algorithm has an extremely large dependency on $h$ in the running time (some power tower of $h$ whose height is itself a function of $h$), making it impractical even for small $h$. We are interested in a small polynomial time dependency on $h$, and we show how to find an $O(h\sqrt{n\log n})$-size separator or report that $G$ has a $K_h$-minor in $O(\poly(h)n^{5/4 + \epsilon})$ time for any constant $\epsilon > 0$. We also present the first $O(\poly(h)n)$ time algorithm to find a separator of size $O(n^c)$ for a constant $c < 1$. As corollaries of our results, we get improved algorithms for shortest paths and maximum matching. Furthermore, for integers $\ell$ and $h$, we give an $O(m + n^{2 + \epsilon}/\ell)$ time algorithm that either produces a $K_h$-minor of depth $O(\ell\log n)$ or a separator of size at most $O(n/\ell + \ell h^2\log n)$. This improves the shallow minor algorithm of Plotkin, Rao, and Smith when $m = \Omega(n^{1 + \epsilon})$. We get a similar running time improvement for an approximation algorithm for the problem of finding a largest $K_h$-minor in a given graph.

A constant factor approximation algorithm for unsplittable flow on paths
In this paper, we present the first constant-factor approximation algorithm for the unsplittable flow problem on a path. This represents a large improvement over the previous best polynomial time approximation algorithm for this problem in its full generality, which was an $O(\log n)$-approximation algorithm; it also answers an open question by Bansal et al. [SODA '09]. The approximation ratio of our algorithm is $7+\epsilon$ for any $\epsilon>0$. In the unsplittable flow problem on a path, we are given a capacitated path $P$ and $n$ tasks, each task having a demand, a profit, and start and end vertices. The goal is to compute a maximum profit set of tasks such that the total demand of the selected tasks does not exceed the capacity of any edge on $P$. This is a well-studied problem that occurs naturally in various settings, and therefore it has been studied under alternative names, such as resource allocation, bandwidth allocation, resource constrained scheduling and temporal knapsack. Polynomial time constant factor approximation algorithms for the problem were previously known only under the no-bottleneck assumption (in which the maximum task demand must be no greater than the minimum edge capacity). We introduce several novel algorithmic techniques, which might be of independent interest: a framework which reduces the problem to instances with a bounded range of capacities, and a new geometrically inspired dynamic program which solves a special case of the maximum weight independent set of rectangles problem to optimality. We also give the first proof that the problem is strongly NP-hard; we show that this is true even if all edge capacities are equal and all demands are either 1, 2, or 3.
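The problem statement above is easy to pin down in code. The sketch below (our own illustration; names and the tiny instance are ours) computes the exact optimum by brute force over task subsets, which is exponential time and serves only to make the feasibility constraint concrete; the paper's contribution is a polynomial-time $(7+\epsilon)$-approximation:

```python
from itertools import combinations

def ufp_opt(tasks, capacities):
    """Exact optimum of unsplittable flow on a path by exhaustive
    search.  A task is (start, end, demand, profit) and occupies the
    path edges start..end-1; a subset is feasible iff the summed
    demand on every edge stays within that edge's capacity."""
    m = len(capacities)
    best = 0
    for r in range(len(tasks) + 1):
        for subset in combinations(tasks, r):
            load = [0] * m
            for s, e, d, _ in subset:
                for i in range(s, e):
                    load[i] += d
            if all(load[i] <= capacities[i] for i in range(m)):
                best = max(best, sum(p for *_, p in subset))
    return best

# Path with 3 edges and capacities [2, 1, 2]; the middle edge is the
# bottleneck, so the two overlapping unit-demand tasks exclude each
# other and the demand-2 task does not fit at all.
tasks = [(0, 2, 1, 3),   # uses edges 0,1
         (1, 3, 1, 4),   # uses edges 1,2
         (0, 3, 2, 5)]   # uses all edges, demand 2
print(ufp_opt(tasks, [2, 1, 2]))  # best choice is the profit-4 task
```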
1B

How Bad is Forming Your Own Opinion?
A longstanding line of work in economic theory has studied models by which a group of people in a social network, each holding a numerical opinion, can arrive at a shared opinion through repeated averaging with their neighbors in the network. Motivated by the observation that consensus is rarely reached in real opinion dynamics, we study a related sociological model in which individuals' intrinsic beliefs counterbalance the averaging process and yield a diversity of opinions. By interpreting the repeated averaging as best-response dynamics in an underlying game with natural payoffs, and the limit of the process as an equilibrium, we are able to study the cost of disagreement in these models relative to a social optimum. We provide a tight bound on the cost at equilibrium relative to the optimum; our analysis draws a connection between these agreement models and extremal problems for generalized eigenvalues. We also consider a natural network design problem in this setting, where adding links to the underlying network can reduce the cost of disagreement at equilibrium.
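One standard way to instantiate "intrinsic beliefs counterbalancing repeated averaging" is with quadratic costs: player $i$ pays $(z_i - s_i)^2 + \sum_j w_{ij}(z_i - z_j)^2$, where $s_i$ is the intrinsic belief. The sketch below (the specific cost function and weights are our illustrative assumption, not taken verbatim from the abstract) iterates the resulting best responses and shows that, unlike plain averaging, the process stops short of consensus:

```python
def best_response_dynamics(s, w, iters=1000):
    """Best-response dynamics for the quadratic opinion game: player
    i's best response is a weighted average of their intrinsic belief
    s[i] and the neighbors' current opinions.  Iterating (Jacobi-style
    simultaneous updates) converges to the game's equilibrium."""
    n = len(s)
    z = list(s)                      # start from intrinsic beliefs
    for _ in range(iters):
        z = [(s[i] + sum(w[i][j] * z[j] for j in range(n)))
             / (1 + sum(w[i][j] for j in range(n)))
             for i in range(n)]
    return z

# Two players with opposite beliefs joined by a unit-weight edge:
# opinions move toward each other (to 1/3 and 2/3) but never merge.
print(best_response_dynamics([0.0, 1.0], [[0, 1], [1, 0]]))
```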

The Complexity of the Homotopy Method, Equilibrium Selection, and Lemke-Howson Solutions
We show that the widely used homotopy method for solving fixpoint problems, as well as the Harsanyi-Selten equilibrium selection process for games, are PSPACE-complete to implement. Extending our result for the Harsanyi-Selten process, we show that several other homotopy-based algorithms for solving games are also PSPACE-complete to implement. A further application of our techniques yields the result that it is PSPACE-complete to compute any of the equilibria that could be found via the classical Lemke-Howson algorithm, a complexity-theoretic strengthening of the result in [24]. These results show that our techniques can be widely applied and suggest that the PSPACE-completeness of implementing homotopy methods is a general principle.

Welfare and Profit Maximization with Production Costs
Combinatorial Auctions are a central problem in Algorithmic Mechanism Design: pricing and allocating goods to buyers with complex preferences in order to maximize some desired objective (e.g., social welfare, revenue, or profit). The problem has been well-studied in the case of limited supply (one copy of each item), and in the case of digital goods (the seller can produce additional copies at no cost). Yet in the case of resources (think oil, labor, computing cycles, etc.), neither of these abstractions is just right: additional supplies of these resources can be found, but only at a cost (where the marginal cost is an increasing function). In this work, we initiate the study of the algorithmic mechanism design problem of combinatorial pricing under increasing marginal cost. The goal is to sell these goods to buyers with unknown and arbitrary combinatorial valuation functions to maximize either the social welfare, or our own profit; specifically we focus on the setting of \emph{posted item prices} with buyers arriving online. We give algorithms that achieve constant factor approximations for a class of natural cost functions (linear, low-degree polynomial, logarithmic), and that give logarithmic approximations for all convex marginal cost functions (along with a necessary additive loss). We show that these bounds are essentially best possible for these settings.
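The posted-price setting with increasing marginal cost is easy to simulate. The toy sketch below is purely our illustration of the model, not the paper's mechanism: the pricing rule (charge twice the marginal cost of the next copy) is a hypothetical placeholder, and buyers simply accept any take-it-or-leave-it price below their value as they arrive online:

```python
def run_posted_prices(values, marginal_cost, price_rule):
    """Single-good online posted pricing with increasing marginal
    cost.  Each arriving buyer sees the posted price for the next
    copy and buys iff their value meets it; the seller's profit is
    revenue minus total production cost."""
    k, revenue, cost = 0, 0.0, 0.0
    for v in values:                      # buyers arrive online
        p = price_rule(marginal_cost(k + 1))
        if v >= p:                        # buyer accepts the posted price
            k += 1
            revenue += p
            cost += marginal_cost(k)
    return revenue - cost

marginal_cost = lambda k: k               # linear: k-th copy costs k
price_rule = lambda c: 2 * c              # hypothetical markup rule
print(run_posted_prices([5.0, 1.0, 9.0, 4.0], marginal_cost, price_rule))
```

As copies are produced, the posted price climbs with the marginal cost, so low-value buyers arriving later are priced out.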

The Second-Belief Mechanism
In settings of incomplete information, we put forward a very conservative (indeed, purely set-theoretic) model of the knowledge that the players may have about the types of their opponents. Yet, we prove that such knowledge can be successfully and robustly leveraged by means of a solution concept relying on very weak assumptions: in essence, via extensive-form mechanisms under ``mutual'' knowledge of rationality. We demonstrate the potential of our approach in auctions of a single good by 1. considering a new revenue benchmark, always lying between the highest and second-highest valuation, 2. proving that no classical mechanism can even slightly approximate it in any robust way, and 3. providing a new mechanism that perfectly and robustly achieves it, with the extra property that the good will always be sold out at the end of the auction. Our impossibility result for robustly implementing our revenue benchmark applies not only to implementation in dominant strategies, but also to any implementation ``at equilibrium'', as well as to implementation in undominated strategies.
2A

Efficient Fully Homomorphic Encryption from (Standard) LWE
We present a fully homomorphic encryption scheme that is based solely on the (standard) learning with errors (LWE) assumption. Applying known results on LWE, the security of our scheme is based on the worst-case hardness of short vector problems on arbitrary lattices. As icing on the cake, our scheme is quite efficient, and has very short ciphertexts. Our construction improves upon previous works in two aspects: 1. We show that ``somewhat homomorphic'' encryption can be based on LWE, using a new {\em relinearization} technique. In contrast, all previous schemes relied on complexity assumptions related to ideals in various rings. 2. More importantly, we deviate from the ``squashing paradigm'' used in all previous works. We introduce a new {\em dimension reduction} technique, which shortens the ciphertexts and reduces the decryption complexity of our scheme, without introducing additional assumptions. In contrast, all previous works required an additional, very strong assumption (namely, the sparse subset sum assumption). Since our scheme has very short ciphertexts, we use it to construct an asymptotically efficient LWE-based single-server private information retrieval (PIR) protocol. The communication complexity of our protocol (in the public-key model) is $k \cdot \polylog\,k+\log DB$ bits per single-bit query, which is better than any known scheme. Previously, it was not known how to achieve a communication complexity of even $\poly(k, \log DB)$ based on LWE.
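For readers new to LWE-based encryption, the toy sketch below shows the basic shape of an LWE ciphertext, $(\mathbf{a},\, b = \langle\mathbf{a},\mathbf{s}\rangle + e + m\lfloor q/2\rfloor)$, and why such ciphertexts add homomorphically. This is a textbook Regev-style symmetric scheme with insecurely tiny parameters, shown only to illustrate the mechanism; it is not the paper's scheme and omits everything the paper is actually about (relinearization, dimension reduction):

```python
import random

random.seed(0)
q, n = 2**15, 8
s = [random.randrange(q) for _ in range(n)]          # secret key

def enc(m):                                          # m in {0, 1}
    a = [random.randrange(q) for _ in range(n)]
    e = random.choice([-1, 0, 1])                    # small noise
    b = (sum(ai * si for ai, si in zip(a, s)) + e + m * (q // 2)) % q
    return a, b

def dec(ct):
    a, b = ct
    d = (b - sum(ai * si for ai, si in zip(a, s))) % q
    return 1 if q // 4 <= d < 3 * q // 4 else 0      # round to 0 or q/2

def add(ct1, ct2):                                   # adds plaintext bits mod 2
    (a1, b1), (a2, b2) = ct1, ct2
    return [(x + y) % q for x, y in zip(a1, a2)], (b1 + b2) % q

print(dec(add(enc(1), enc(1))))                      # 1 + 1 = 2, i.e. 0 mod 2
```

Each homomorphic addition also adds the noise terms, which is why a real scheme must manage noise growth (and why multiplication, which squares the ciphertext dimension, needs the relinearization idea).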

Fully Homomorphic Encryption without Squashing Using Depth-3 Arithmetic Circuits
All currently known fully homomorphic encryption (FHE) schemes use the same blueprint from [Gentry 2009]: first construct a somewhat homomorphic encryption (SWHE) scheme, next "squash" the decryption circuit until it is simple enough to be handled within the homomorphic capacity of the SWHE scheme, and finally "bootstrap" to get an FHE scheme. In all existing schemes, the squashing technique induces an additional assumption: that the sparse subset sum problem (SSSP) is hard. We describe a \emph{new approach} that constructs FHE as a hybrid of a SWHE scheme and a multiplicatively homomorphic encryption (MHE) scheme, such as Elgamal. Our construction eliminates the need for the squashing step, and thereby also removes the need to assume the SSSP is hard. We describe a few concrete instantiations of the new method, obtaining the following results: 1. A "simple" FHE scheme where we replace SSSP with Decision Diffie-Hellman. 2. The first FHE scheme based entirely on worst-case hardness: specifically, we describe a "leveled" FHE scheme whose security can be quantumly reduced to the approximate shortest independent vector problem over ideal lattices (ideal-SIVP). 3. Some efficiency improvements for FHE: while at present our new method does not improve computational efficiency, we do provide an optimization that reduces the ciphertext length. For example, at one point, the entire FHE ciphertext may consist of a single Elgamal ciphertext! Our new method does not eliminate the bootstrapping step; whether this can be done remains an intriguing open problem. As in the previous blueprint, we can get "pure" (non-leveled) FHE by assuming circular security. Our main technique is to express the decryption function of SWHE schemes as a depth-3 arithmetic circuit of a particular form.
When evaluating this circuit homomorphically, as needed for bootstrapping, we temporarily switch to a MHE scheme, such as Elgamal, to handle the product part of the circuit. We then translate the result back to the SWHE scheme by homomorphically evaluating the decryption function of the MHE scheme. (Due to the special form of the circuit, switching to the MHE scheme can be done without having to evaluate anything homomorphically.) Using our method, the SWHE scheme only needs to be capable of evaluating the MHE scheme's decryption function, not its own decryption function. We thereby avoid the circularity that necessitated squashing in the original blueprint.
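The multiplicative homomorphism of Elgamal that the hybrid relies on is simple to demonstrate: multiplying two ciphertexts componentwise yields an encryption of the product of the plaintexts. The sketch below uses insecurely tiny parameters (and glosses over subgroup choices) purely to show that property; it is of course not the paper's hybrid construction:

```python
import random

random.seed(1)
p = 467                               # toy prime modulus (insecure)
g = 2
x = random.randrange(1, p - 1)        # secret key
h = pow(g, x, p)                      # public key h = g^x

def enc(m):
    r = random.randrange(1, p - 1)
    return pow(g, r, p), (m * pow(h, r, p)) % p

def dec(ct):
    c1, c2 = ct
    return (c2 * pow(c1, p - 1 - x, p)) % p    # c2 / c1^x mod p (Fermat)

def mult(ct1, ct2):                   # homomorphic multiplication
    return (ct1[0] * ct2[0] % p, ct1[1] * ct2[1] % p)

print(dec(mult(enc(3), enc(5))))      # decrypts to 3 * 5 = 15
```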

Coin Flipping with Constant Bias Implies One-Way Functions
It is well known (cf. Impagliazzo and Luby [FOCS '89]) that the existence of almost all ``interesting'' cryptographic applications, i.e., ones that cannot hold information theoretically, implies one-way functions. An important exception where the above implication is not known, however, is the case of coin-flipping protocols. Such protocols allow honest parties to mutually flip an unbiased coin, while guaranteeing that even a cheating (efficient) party cannot bias the output of the protocol by much. Impagliazzo and Luby proved that coin-flipping protocols that are safe against negligible bias do imply one-way functions, and, very recently, Maji, Prabhakaran, Sahai and Schreiber [FOCS '10] proved the same for constant-round protocols (with any non-trivial bias); for the general case, however, no such implication was known. We make a significant step towards answering the above fundamental question, showing that coin-flipping protocols safe against a constant bias (concretely, $\frac{\sqrt{2}-1}{2}$) imply one-way functions.

How to Garble Arithmetic Circuits
Yao's garbled circuit construction transforms a boolean circuit $C:\{0,1\}^n\to\{0,1\}^m$ into a ``garbled circuit'' $\hC$ along with $n$ pairs of $k$-bit keys, one for each input bit, such that $\hC$ together with the $n$ keys corresponding to an input $x$ reveal $C(x)$ and no additional information about $x$. The garbled circuit construction is a central tool for constant-round secure computation and has several other applications. Motivated by these applications, we suggest an efficient arithmetic variant of Yao's original construction. Our construction transforms an arithmetic circuit $C : \Z^n\to\Z^m$ over integers from a bounded (but possibly exponential) range into a garbled circuit $\hC$ along with $n$ affine functions $L_i : \Z\to \Z^k$ such that $\hC$ together with the $n$ integer vectors $L_i(x_i)$ reveal $C(x)$ and no additional information about $x$. The security of our construction relies on the intractability of the decisional variant of the learning with errors (LWE) problem.
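The classical boolean construction the abstract builds on can be illustrated with a single garbled AND gate: each wire gets two random keys, and a four-row table lets an evaluator holding exactly one key per input wire recover only the output-wire key for AND of those bits. This is a bare-bones teaching sketch of Yao's idea (real schemes shuffle the table rows and use point-and-permute, free-XOR, etc.), not the paper's arithmetic variant:

```python
import os, hashlib

def H(*keys):
    return hashlib.sha256(b"".join(keys)).digest()

def xor(x, y):
    return bytes(a ^ b for a, b in zip(x, y))

def garble_and():
    """Garble one AND gate: wires a, b (inputs) and c (output) each
    get a pair of random 16-byte keys, one per bit value.  Row (a,b)
    of the table encrypts the output key for a AND b under the two
    matching input keys.  (Rows left unshuffled for brevity.)"""
    k = {w: (os.urandom(16), os.urandom(16)) for w in "abc"}
    table = [xor(H(k["a"][a], k["b"][b]),
                 k["c"][a & b] + b"\x00" * 16)   # zero pad marks the row
             for a in (0, 1) for b in (0, 1)]
    return k, table

def evaluate(table, ka, kb):
    """Given one key per input wire, try each row; only the matching
    row decrypts to a value with the all-zero padding intact."""
    for row in table:
        out = xor(H(ka, kb), row)
        if out[16:] == b"\x00" * 16:
            return out[:16]
    raise ValueError("no row matched")

keys, table = garble_and()
out = evaluate(table, keys["a"][1], keys["b"][1])
print(out == keys["c"][1])   # evaluator learned the key for AND(1,1) = 1
```

The evaluator never learns the keys for the other rows, which is the source of the "reveals $C(x)$ and nothing else" guarantee; the paper's contribution is replacing the per-bit key pairs with affine functions $L_i$ over the integers.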