FOCS 2009 50th Annual IEEE Symposium on Foundations of Computer Science October 24-27, 2009 Renaissance Atlanta Hotel Downtown, Atlanta GA


List of Accepted Papers

The following is a preliminary list of the papers accepted to FOCS 2009, with abstracts.  A plain text list without abstracts is available here.

Zeev Nutov. Approximating minimum cost connectivity problems via uncrossable bifamilies and spider-cover decompositions
Abstract: We consider finding a minimum cost edge-cover of setpair families arising from instances of the Generalized Steiner Network problem, where the costs can be on the edges or on the nodes. Let k denote the maximum connectivity requirement. Our main results are: 1. A very simple method to decompose such families into O(k) uncrossable setpair families. 2. An O(k log n)-approximation for the element-connectivity version with node costs, matching the best known ratio for the edge-connectivity version. This implies improved approximation ratios for the node-connectivity variant with rooted requirements: O(k^2) for edge costs and O(k^2 log n) for node costs, improving the previous ratios O(k^2 log n) and O(k^8 log^2 n), respectively.

David Doty. Randomized Self-Assembly for Exact Shapes
Abstract: Working in Winfree's abstract tile assembly model, we show that a constant-size tile assembly system can be programmed through relative tile concentrations to build an n x n square with high probability, for any sufficiently large n. This answers an open question of Kao and Schweller (Randomized Self-Assembly for Approximate Shapes, ICALP 2008), who showed how to build an *approximately* n x n square using tile concentration programming, and asked whether the approximation could be made *exact* with high probability.

Jan Vondrak. Symmetry and approximability of submodular maximization problems
Abstract: A number of recent results on optimization problems involving submodular functions have made use of the "multilinear relaxation" of the problem. We present a general approach to deriving inapproximability results in the value oracle model, based on the notion of "symmetry gap".
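For reference (standard background, not part of the abstract): the multilinear relaxation mentioned above replaces a submodular function f : 2^X -> R by its multilinear extension F : [0,1]^X -> R, where R(x) includes each element i independently with probability x_i:

```latex
F(x) \;=\; \mathbb{E}\bigl[f(R(x))\bigr]
     \;=\; \sum_{S \subseteq X} f(S) \prod_{i \in S} x_i \prod_{j \notin S} (1 - x_j).
```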
Our main result is that for any fixed instance that exhibits a certain "symmetry gap" in its multilinear relaxation, there is a naturally related class of instances for which a better approximation factor than the symmetry gap is impossible. This unifies several known hardness results for submodular maximization, e.g. the impossibility of (1/2+\epsilon)-approximation for unconstrained (non-monotone) submodular maximization, and the optimality of (1-1/e)-approximation for monotone submodular maximization under a cardinality or matroid constraint. As a new application, we consider the problem of maximizing a non-monotone submodular function over the bases of a matroid. A (1/6-o(1))-approximation has been developed for this problem, assuming that the matroid contains two disjoint bases. We show that the best approximation one can achieve is indeed related to packings of bases in the matroid. Specifically, for any k>=2, there is a class of matroids of fractional base packing number \nu = k/(k-1), such that any algorithm achieving a better than (1-1/\nu)-approximation for this problem would require exponentially many value queries. On the positive side, we present a (1-1/\nu-o(1))/2-approximation algorithm for the same problem.

Aaron Bernstein. Fully Dynamic $(2 + \eps)$ Approximate All-Pairs Shortest Paths with $O(\log \log n)$ Query and Close to Linear Update Time
Abstract: For any fixed 0 < epsilon < 1, we present a fully dynamic algorithm for maintaining (2 + epsilon)-approximate all-pairs shortest paths in undirected graphs with positive edge weights. The query and update times are somewhat faster for unweighted graphs than they are for weighted ones. We use a randomized (Las Vegas) update algorithm (but a deterministic query procedure), so the time given is the *expected* amortized update time. In unweighted graphs, the query time is O(log log n). The update time, up to log factors, is O(mn^{O(1/sqrt(log n))}).
Unfortunately, the update time does have the drawback of a super-polynomial dependence on epsilon. In weighted graphs, the query time is O(log log (nR)), where R is the ratio between the heaviest and the lightest edge weight in the graph. The update time, up to log factors, is O(mn^{O(1/sqrt(log n))} (log (nR))^2). Our algorithm has a significantly faster update time than any other known algorithm with sub-polynomial query time.

Ilias Diakonikolas, Parikshit Gopalan, Ragesh Jaiswal, Rocco Servedio and Emanuele Viola. Bounded Independence Fools Halfspaces
Abstract: We show that any distribution on {-1, 1}^n that is k-wise independent fools any halfspace h : {-1,1}^n --> {-1,1}, i.e., any function of the form h(x) = sgn(sum_{i = 1}^n w_i x_i - theta), where w_1,...,w_n, theta are arbitrary real numbers, with error epsilon for k = O(epsilon^{-2}*log^2 (1/epsilon)). Our result is tight up to logarithmic factors. Using standard constructions of k-wise independent distributions, we obtain the first explicit pseudorandom generators G : {-1,1}^s --> {-1,1}^n that fool halfspaces. Specifically, we fool halfspaces with error epsilon and seed length s = k * log n = O(log n * epsilon^{-2} log^2(1/epsilon)). Our approach combines classical tools from real approximation theory with structural results on halfspaces by Servedio (Comput. Complexity 2007).

Iftach Haitner. A Parallel Repetition Theorem for Any Interactive Argument
Abstract: The question of whether or not parallel repetition reduces the soundness error is a fundamental question in the theory of protocols. While parallel repetition reduces the error in interactive proofs (at an exponential rate) and in special cases of interactive arguments (at a weak exponential rate; e.g., 3-message protocols - Bellare, Impagliazzo and Naor [FOCS '97], and public-coin protocols - Haastad, Pass, Pietrzak and Wikstrom [Manuscript '08]), Bellare et al. gave an example of interactive arguments for which parallel repetition does not reduce the soundness error at all. We show that by slightly modifying any interactive argument, in a way that preserves its completeness and only slightly deteriorates its soundness, we get a protocol for which parallel repetition does reduce the error at a weak exponential rate. In this modified version, the verifier flips at the beginning of each round a (1 - 1/4m, 1/4m)-biased coin (i.e., 1 is tossed with probability 1/4m), where m is the round complexity of the (original) protocol. If the coin is one, the verifier halts the interaction and accepts; otherwise it sends the same message that the original verifier would. At the end of the protocol (if reached), the verifier accepts if and only if the original verifier would.

Rahul Jain, Sarvagya Upadhyay and John Watrous. Two-message quantum interactive proofs are in PSPACE
Abstract: We prove that QIP(2), the class of problems having two-message quantum interactive proof systems, is a subset of PSPACE. This relationship is obtained by means of an efficient parallel algorithm, based on the multiplicative weights update method, for approximately solving a certain class of semidefinite programs.

Zeev Dvir, Swastik Kopparty, Shubhangi Saraf and Madhu Sudan. Extensions to the Method of Multiplicities, with applications to Kakeya sets and Mergers
Abstract: We extend the "method of multiplicities" to get the following results, of interest in combinatorics and randomness extraction. (A) We show that every Kakeya set (a set of points that contains a line in every direction) in F_q^n must be of size at least q^n/2^n. This bound is tight to within a 2 + o(1) factor for every n as q tends to infinity, compared to previous bounds that were off by exponential factors in n. (B) We give improved explicit randomness extractors and randomness mergers using shorter seeds than were previously required.
In particular, we construct explicit extractors with logarithmic seed that extract a 1 - o(1) fraction of the min-entropy of the source. The "method of multiplicities", as used in prior work, analyzed subsets of vector spaces over finite fields by constructing somewhat low degree interpolating polynomials that vanish on every point in the subset with high multiplicity. The typical use of this method involved showing that the interpolating polynomial also vanished on some points outside the subset, and then using simple bounds on the number of zeroes to complete the analysis. Our augmentation to this technique is that we prove, under appropriate conditions, that the interpolating polynomial vanishes with high multiplicity outside the set. This novelty leads to significantly tighter analyses.

Nikhil Bansal and Subhash Khot. Optimal Long Code Test with One Free Bit
Abstract: For arbitrarily small constants $\eps, \delta > 0$, we present a long code test with one free bit, completeness $1-\eps$ and soundness $\delta$. Using the test, we prove the following two inapproximability results: 1. Assuming the Unique Games Conjecture of Khot, given an $n$-vertex graph that has two disjoint independent sets of size $(\frac{1}{2}-\eps)n$ each, it is NP-hard to find an independent set of size $\delta n$. 2. Assuming a (new) stronger version of the Unique Games Conjecture, the scheduling problem of {\it minimizing weighted completion time with precedence constraints} is inapproximable within factor $2-\eps$.

Or Meir. Combinatorial PCPs with efficient verifiers
Abstract: The PCP theorem asserts the existence of proofs that can be verified by a verifier that reads only a very small part of the proof. The theorem was originally proved by Arora and Safra (J. ACM 45(1)) and Arora et al. (J. ACM 45(3)) using sophisticated algebraic tools. More than a decade later, Dinur (J. ACM 54(3)) gave a simpler and arguably more intuitive proof using alternative combinatorial techniques.
One disadvantage of Dinur's proof when compared to the previous algebraic proof is that it yields less efficient verifiers. In this work, we provide a combinatorial construction of PCPs with verifiers that are as efficient as the ones obtained by the algebraic methods. The result is the first combinatorial proof of the PCP theorem for NEXP (originally proved by combining the proof of Arora et al. of the PCP theorem for NP with the ideas of Babai et al. from STOC 1991), and a combinatorial construction of super-fast PCPs of Proximity for NP (first constructed by Ben-Sasson et al., CCC 2005).

Jeff Cheeger, Bruce Kleiner and Assaf Naor. A $(\log n)^{\Omega(1)}$ integrality gap for the Sparsest Cut SDP
Abstract: We show that the Goemans-Linial semidefinite relaxation of the Sparsest Cut problem with general demands has integrality gap (\log n)^{\Omega(1)}. This is achieved by exhibiting n-point metric spaces of negative type whose L_1 distortion is (\log n)^{\Omega(1)}. Our result is based on quantitative bounds on the rate of degeneration of Lipschitz maps from the Heisenberg group to L_1 when restricted to cosets of the center.

Ben Reichardt. Span programs and quantum query complexity: The general adversary bound is nearly tight for every boolean function
Abstract: The general adversary bound is a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. We turn this lower bound into an upper bound, by giving a quantum walk algorithm based on the dual SDP that has query complexity at most the general adversary bound, up to a logarithmic factor. In more detail, the proof has two steps, each based on "span programs," a certain linear-algebraic model of computation. First, we give an SDP that outputs for any boolean function a span program computing it that has optimal "witness size." The optimal witness size is shown to coincide with the general adversary lower bound.
Second, we give a quantum algorithm for evaluating span programs with only a logarithmic query overhead on the witness size. The first result is motivated by a quantum algorithm for evaluating composed span programs. The algorithm is known to be optimal for evaluating a large class of formulas. The allowed gates include all constant-size functions for which there is an optimal span program. So far, good span programs have been found in an ad hoc manner, and the SDP automates this procedure. Surprisingly, the SDP's value equals the general adversary bound. A corollary is an optimal quantum algorithm for evaluating "balanced" formulas over any finite boolean gate set. The second result extends span programs' applicability beyond the formula evaluation problem. A strong universality result for span programs follows: a good quantum query algorithm for a problem implies a good span program, and vice versa. Although nearly tight, this equivalence is nontrivial. Span programs are a promising model for developing more quantum algorithms. The extended abstract and a full version of the paper are available at arXiv:0904.2759.

Dang-Trinh Huynh-Ngoc and Paul Beame. Multiparty Communication Complexity and Threshold Circuit Complexity of AC^0
Abstract: We prove an n^Omega(1)/4^k lower bound on the randomized k-party communication complexity of depth-4 AC^0 functions in the number-on-forehead (NOF) model for up to Theta(log n) players. These are the first non-trivial lower bounds for general NOF multiparty communication complexity for any AC^0 function for omega(log log n) players. For non-constant k the bounds are larger than all previous lower bounds for any AC^0 function even for simultaneous communication complexity.
Using an approach outlined by Viola (2007), our lower bounds imply the first superpolynomial lower bounds for the simulation of AC^0 by MAJ-SYMM-AND circuits, showing that the well-known quasipolynomial simulations of AC^0 by such circuits due to Allender (1989) and Yao (1990) are qualitatively optimal. We also exhibit a depth-5 read-once formula in NP^{cc}_k \ BPP^{cc}_k for k up to Theta(log n), and derive an Omega(2^{sqrt{log n}/sqrt{k}}) lower bound on the randomized k-party NOF communication complexity of Set-Disjointness for up to Theta(log^{1/3} n) players, which is significantly larger than the O(log log n) players allowed in the best previous lower bounds for this problem. We also prove other strong results for depth-3 and depth-4 AC^0 functions.

Aaron Archer, MohammadHossein Bateni, MohammadTaghi Hajiaghayi and Howard Karloff. Improved Approximation Algorithms for Prize-Collecting Steiner Tree and TSP
Abstract: We study the prize-collecting versions of the Steiner tree (PCST) and traveling salesman (PCTSP) problems: given a graph (V,E) with costs on each edge and a penalty (a.k.a. prize) on each node, the goal is to find a tree (for PCST) or cycle (for PCTSP) that minimizes the sum of the edge costs in the tree/cycle and the penalties of the nodes not spanned by it. In addition to being a useful theoretical tool for helping to solve other optimization problems, PCST has been applied fruitfully by AT&T to the optimization of real-world telecommunications networks. The most recent improvements for these problems, giving a 2-approximation algorithm for each, first appeared in 1992. Moreover, the natural linear programming (LP) relaxation of PCST has an integrality ratio of 2, which has been a barrier to further improvements for this problem. We present (2-epsilon)-approximation algorithms for both problems, connected by a unified technique for improving prize-collecting algorithms that allows us to circumvent the integrality ratio barrier.

Kevin Buchin and Wolfgang Mulzer.
Delaunay Triangulations in O(sort(n)) and Other Transdichotomous and Hereditary Algorithms in Computational Geometry
Abstract: We present several results about Delaunay triangulations (DTs) and convex hulls in transdichotomous and hereditary settings: (i) the DT of a planar point set can be computed in time O(sort(n)) on a word RAM, where sort(n) is the time to sort n numbers; (ii) if we know the ordering of a planar point set in x- and in y-direction, its DT can be found by a randomized algebraic computation tree of expected linear depth; (iii) given a universe U of points in the plane, we construct a data structure D for Delaunay queries: for any subset P of U, D can find the DT of P in time O(|P| log log |U|); (iv) given a universe U of three-dimensional points in general convex position, there is a data structure D for convex hull queries: for any subset P of U, D can find the convex hull of P in time O(|P| (log log |U|)^2); (v) given a convex polytope in 3-space with n vertices which are colored with k > 2 colors, we can split it into the convex hulls of the individual color classes in time O(n (log log n)^2). The results (i)-(iii) generalize to higher dimensions. We need a wide range of techniques. Most prominently, we describe a reduction from nearest-neighbor graphs to DTs that relies on a new variant of randomized incremental constructions which allows dependent sampling.

Saugata Basu and Thierry Zell. Polynomial hierarchy, Betti numbers and a real analogue of Toda's Theorem
Abstract: Toda proved in 1989 that the (discrete) polynomial time hierarchy is contained in the class of languages that can be decided by a Turing machine in polynomial time given access to an oracle with the power to compute a function in the counting complexity class #P. This result, which illustrates the power of counting, is considered to be a seminal result in computational complexity theory.
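In symbols, Toda's theorem states that the polynomial hierarchy lies in polynomial time relative to a #P oracle:

```latex
\mathrm{PH} \;\subseteq\; \mathrm{P}^{\#\mathrm{P}}.
```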
An analogous result in the complexity theory over the reals (in the sense of Blum-Shub-Smale real Turing machines) has been missing so far. In this paper we formulate and prove a real analogue of Toda's theorem. Unlike Toda's proof in the discrete case, which relied on sophisticated combinatorial arguments, our proof is topological in nature. As a consequence of our techniques we are also able to relate the computational hardness of two extremely well-studied problems in algorithmic semi-algebraic geometry -- namely, the problem of deciding sentences in the first order theory of the reals with a constant number of quantifier alternations, and that of computing the Betti numbers of semi-algebraic sets. We obtain a polynomial time reduction of the compact version of the first problem to the second. This latter result might be of independent interest to researchers in algorithmic semi-algebraic geometry.

Adam Kalai, Shang-Hua Teng and Alex Samorodnitsky. Learning Decision Trees From Random Examples: a Smoothed Analysis
Abstract: We consider a new model of learning from random examples, motivated by smoothed analysis. Like previous work, we assume that the distribution over unlabeled examples is a product distribution -- the individual bits are chosen independently. However, we consider "typical" product distributions. Formally, we assume that the parameters are chosen with some (constant) amount of randomness, for example from the cube [0.49,0.51]^n. In this model we show how to agnostically learn polynomially-sized decision trees from random examples alone. We also consider a second model, which is a specific distribution over unlabeled data from which we efficiently learn all O(log n)-depth decision trees. In particular, we consider a distribution where the data exhibits even a small amount of diversity, e.g., symmetrically-distributed data where the fraction of 1's in the attributes spans a constant range such as [.49,.51].
In recent work, Arpe and Mossel (2008) efficiently learn juntas of size O(log(n) log log(n)) in a related model.

Vitaly Feldman. A Complete Characterization of Statistical Query Learning with Applications to Evolvability
Abstract: The statistical query (SQ) learning model of Kearns (1993) is a natural restriction of the PAC learning model in which a learning algorithm is allowed to obtain estimates of statistical properties of the examples but cannot see the examples themselves. We describe a new and simple characterization of the query complexity of learning in the SQ learning model. Unlike the previously known bounds on SQ learning (Blum et al., 1994; Bshouty and Feldman, 2002; Yang, 2005; Balcazar et al., 2007; Simon, 2007), our characterization preserves the accuracy and the efficiency of learning. The preservation of accuracy implies that our characterization gives the first characterization of SQ learning in the agnostic learning framework of Haussler (1992) and Kearns et al. (1992). The preservation of efficiency allows us to derive a new technique for the design of evolutionary algorithms in Valiant's model of evolvability (2006). We use this technique to demonstrate the existence of a large class of monotone evolutionary learning algorithms based on square loss fitness estimation. These results differ significantly from the few known evolutionary algorithms and give evidence that evolvability in Valiant's model is a more versatile phenomenon than there had been previous reason to suspect.

André Chailloux and Iordanis Kerenidis. Optimal quantum strong coin flipping
Abstract: Coin flipping is a fundamental cryptographic primitive that enables two distrustful and far apart parties to create a uniformly random bit. Quantum information allows for protocols in the information theoretic setting where no dishonest party can perfectly cheat. The previously best-known quantum protocol by Ambainis achieved a cheating probability of at most 3/4.
On the other hand, Kitaev showed that no quantum protocol can have cheating probability less than 1/sqrt{2}. Closing this gap has been one of the important open questions in quantum cryptography. In this paper, we resolve this question by presenting a quantum strong coin flipping protocol with cheating probability arbitrarily close to 1/sqrt{2}. More precisely, we show how to use any weak coin flipping protocol with cheating probability 1/2+epsilon in order to achieve a strong coin flipping protocol with cheating probability 1/sqrt{2}+O(epsilon). The optimal quantum strong coin flipping protocol follows from our construction and the optimal quantum weak coin flipping protocol described by Mochon.

Ankur Moitra. Approximation Algorithms for Multicommodity-Type Problems with Guarantees Independent of the Graph Size
Abstract: The seminal paper of Leighton and Rao established an approximate min-cut max-flow theorem for uniform multicommodity flows. Linial, London and Rabinovich and Aumann and Rabani solved a major open question and proved that the min-cut max-flow ratio for general maximum concurrent flow problems (when there are k commodities) is O(log k). Here we attempt to derive a more general theory of Steiner cut and flow problems, and we prove bounds that are poly-logarithmic in k for a much broader class of multicommodity flow and cut problems. Our structural results are motivated by the meta question: suppose we are given a poly(log n) approximation algorithm for a flow or cut problem -- when can we give a poly(log k) approximation algorithm for a generalization of this problem to a Steiner cut or flow problem? Thus we require that these approximation guarantees be independent of the size of the graph, and only depend on the number of commodities (or the number of terminal nodes in a Steiner cut problem). For many natural applications (when k = n^o(1)) this yields much stronger guarantees.
We construct vertex-sparsifiers that approximately preserve the congestion of all multicommodity flows (when demands are restricted to a set K). We prove such sparsifiers exist through zero-sum games and metric geometry, and we construct them through oblivious routing guarantees. These results let us reduce a broad class of multicommodity-type problems to a uniform case (on k nodes) at the cost of a poly(log k) loss in the approximation guarantee. We cannot concisely define this class, but we can use our results to give poly(log k) approximation algorithms for a number of problems for which such results were previously unknown, such as requirement cut, l-multicut, and natural generalizations of oblivious routing, min-cut linear arrangement and minimum linear arrangement.

Satoru Iwata and Kiyohito Nagano. Submodular Function Minimization under Covering Constraints
Abstract: This paper addresses the problems of minimizing nonnegative submodular functions under covering constraints, which include the vertex cover, edge cover, and set cover constraints. We give approximation algorithms for these problems exploiting the discrete convexity of submodular functions. We first present a rounding 2-approximation algorithm for the submodular vertex cover problem based on the half-integrality of the continuous relaxation problem, and show that the rounding algorithm can be performed by one application of submodular function minimization on a ring family. We also show that a rounding algorithm and a primal-dual algorithm for the submodular cost set cover problem are both constant factor approximation algorithms if the maximum frequency is fixed. In addition, we give an essentially tight lower bound on the approximability of the submodular edge cover problem.

Julia Chuzhoy and Sanjeev Khanna.
An O(k^3 log n)-Approximation Algorithm for Vertex-Connectivity Survivable Network Design
Abstract: In the Survivable Network Design problem (SNDP), we are given an undirected graph G(V,E) with costs on edges, along with a connectivity requirement r(u,v) for each pair u,v of vertices. The goal is to find a minimum-cost subset E' of edges satisfying the given set of pairwise connectivity requirements. In the edge-connectivity version we need to ensure that there are r(u,v) edge-disjoint paths for every pair u,v of vertices, while in the vertex-connectivity version the paths are required to be vertex-disjoint. The edge-connectivity version of SNDP is known to have a 2-approximation. However, no non-trivial approximation algorithms have been known so far for the vertex version of SNDP, except for special cases of the problem. We present an extremely simple O(k^3 log |T|)-approximation algorithm for this problem, where k denotes the maximum connectivity requirement and T is the set of terminals (vertices appearing in one or more pairs with positive connectivity requirements). We also give a simple proof of the recently discovered O(k^2 log |T|)-approximation algorithm for the single-source version of vertex-connectivity SNDP. Our results establish a natural connection between vertex-connectivity and a well-understood generalization of edge-connectivity, namely element-connectivity, in that any instance of vertex-connectivity can be represented by a small number of instances of the element-connectivity problem.

Neeraj Kayal and Shubhangi Saraf. Blackbox Polynomial Identity Testing for Depth 3 Circuits
Abstract: We study depth three arithmetic circuits with bounded top fanin. We give the first deterministic polynomial time blackbox identity test for depth three circuits with bounded top fanin over the field of rational numbers, thus resolving a question posed by Klivans and Spielman (STOC 2001).
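For context (standard notation in this literature, not specific to the paper): a depth-3 circuit with top fanin k computes a polynomial of the form

```latex
C(x_1,\dots,x_n) \;=\; \sum_{i=1}^{k} \prod_{j=1}^{d_i} L_{ij}(x_1,\dots,x_n),
```

where each L_{ij} is a linear form; the rank of C is the dimension of the span of the linear forms L_{ij}.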
Our main technical result is a structural theorem for depth three circuits with bounded top fanin that compute the zero polynomial. In particular we show that if a circuit C with real coefficients is simple, minimal and computes the zero polynomial, then the rank of C can be upper bounded by a function only of the top fanin. This proves a weak form of a conjecture of Dvir and Shpilka (STOC 2005) on the structure of identically zero depth three arithmetic circuits. Our blackbox identity test follows from this structural theorem by combining it with a construction of Karnin and Shpilka (CCC 2008). Our proof of the structure theorem exploits the geometry of finite point sets in R^n. We identify the linear forms appearing in the circuit C with points in R^n. We then show how to apply high dimensional versions of the Sylvester-Gallai Theorem, a theorem from incidence geometry, to identify a special linear form appearing in C, such that on the subspace where the linear form vanishes, C restricts to a simpler circuit computing the zero polynomial. This allows us to build an inductive argument bounding the rank of our circuit. While the utility of such theorems from incidence geometry for identity testing has been hinted at before, our proof is the first to develop the connection fully and utilize it effectively.

Shankar Kalyanaraman and Christopher Umans. The Complexity of Rationalizing Network Formation
Abstract: We study the complexity of rationalizing network formation. In this problem we fix an underlying model describing how selfish parties (the vertices) produce a graph by making individual decisions to form or not form incident edges. The model is equipped with a notion of stability (or equilibrium), and we observe a set of "snapshots" of graphs that are assumed to be stable. From this we would like to infer some unobserved data about the system: edge prices, or how much each vertex values short paths to each other vertex.
We study two rationalization problems arising from the network formation model of Jackson and Wolinsky [JW96]. When the goal is to infer edge prices, we observe that the rationalization problem is easy. The problem remains easy even when rationalizing prices do not exist and we instead wish to find prices that maximize the stability of the system. The latter situation may arise from noisy data or inadequacies in the model. In contrast, when the edge prices are given and the goal is instead to infer valuations of each vertex by each other vertex, we prove that the rationalization problem becomes NP-hard. Our proof exposes a close connection between rationalization problems and the Inequality-SAT (I-SAT) problem. Finally and most significantly, we prove that an approximation version of this NP-complete rationalization problem is NP-hard to approximate to within better than a 1/2 ratio. This shows that the trivial algorithm of setting everyone's valuations to infinity (which rationalizes all the edges present in the input graphs) or to zero (which rationalizes all the non-edges present in the input graphs) is best possible, assuming P is not equal to NP. To do this we prove (1/2 + delta)-approximation hardness for a variant of I-SAT in which all coefficients are non-negative. This in turn follows from a tight hardness result for MAX-LIN(R+) (linear equations over the reals, with non-negative coefficients), which we prove by a (non-trivial) modification of the recent result of Guruswami and Raghavendra [GR07], which achieved tight hardness for this problem without the non-negativity constraint. Our technical contributions regarding the hardness of I-SAT and MAX-LIN(R+) may be of independent interest, given the generality of these problems.

Ely Porat and Benny Porat. Exact And Approximate Pattern Matching In The Streaming Model
Abstract: We present a fully online randomized algorithm for the classical pattern matching problem that uses merely O(log m) space.
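The abstract does not describe the technique, but for intuition here is a textbook Karp-Rabin sliding-window matcher over a character stream. Note that it still buffers the last m characters, i.e., uses O(m) space; removing exactly that buffer is the point of the paper's O(log m)-space algorithm. The function name and parameters are illustrative, not from the paper.

```python
from collections import deque

def stream_match(pattern, stream, base=256, mod=(1 << 61) - 1):
    """Report every starting index where `pattern` occurs in `stream`,
    reading the stream one character at a time and comparing Karp-Rabin
    fingerprints (collisions are possible but unlikely for a large prime mod)."""
    m = len(pattern)
    p_hash = 0
    for ch in pattern:                      # fingerprint of the pattern
        p_hash = (p_hash * base + ord(ch)) % mod
    pow_m1 = pow(base, m - 1, mod)          # weight of the oldest window character
    window = deque()                        # last <= m characters: the O(m) buffer
    w_hash = 0
    matches = []
    for i, ch in enumerate(stream):
        if len(window) == m:                # slide: remove the oldest character
            w_hash = (w_hash - ord(window.popleft()) * pow_m1) % mod
        window.append(ch)
        w_hash = (w_hash * base + ord(ch)) % mod
        if len(window) == m and w_hash == p_hash:
            matches.append(i - m + 1)       # occurrence ending at position i
    return matches
```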
This problem is very interesting from a theoretical point of view, since for a long time it seemed impossible to break the O(m) barrier. In addition, it can be used as a tool in many practical applications, including monitoring Internet traffic and firewall applications. In this online model we first receive the pattern P of size m and preprocess it. After the preprocessing phase, the characters of the text T of size n arrive one by one in online fashion. For each index of the text we wish to indicate whether the pattern matches the text at that index or not. Clearly, for index i, an indication can only be given after all characters from index i till index i+m-1 have arrived. Our goal is to provide such answers using little space, and spending a minimal amount of time on each character (both time and space in O(poly(log n))). We present an algorithm which guarantees that whenever a positive answer is given - i.e., index i is reported as a match - then a match of the pattern indeed exists at index i. However, a false negative at index i is possible, with probability at most 1/n^3. Thus, overall, the correct answer for all positions is returned with probability at least 1 - 1/n^2. The time which our algorithm spends on each input character is bounded by O(log m), and the space complexity is O(log m) words. In addition, we present a solution in the same model for the pattern matching with k mismatches problem. In this problem, a match means allowing up to k symbol mismatches between the pattern and the subtext beginning at index i. We provide an algorithm in which the time spent on each character is bounded by O(k^2*poly(log m)), and the space complexity is O(k^3*poly(log m)) words.

Ravindran Kannan. A new probability inequality using typical moments and concentration results
Abstract: We present two probability inequalities substantially strengthening the classical Hoeffding-Azuma (H-A) result.
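For reference, the classical Hoeffding-Azuma inequality being strengthened here states: if X_0, X_1, ..., X_n is a martingale with |X_i - X_{i-1}| <= c_i for all i, then for every t > 0,

```latex
\Pr\bigl[\,|X_n - X_0| \ge t\,\bigr] \;\le\; 2\exp\!\left(-\frac{t^2}{2\sum_{i=1}^{n} c_i^2}\right).
```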
The simpler first inequality replaces the absolute bound requirement in H-A by a bound on the moments; it also weakens the Martingale requirement. Both H-A and Chernoff bounds are very special cases of this inequality. Also, several hard concentration results made easy by Talagrand's celebrated result, such as the (geometric) Travelling Salesperson problem, minimum weight spanning trees, the Longest Increasing Subsequence problem, and bin packing, are also simply tackled by it. In the stronger second inequality, we use information on conditional moments of each variable conditioned on typical as well as on worst-case values of previous variables. Using this, we settle the (discrete case of the) well-studied stochastic bin-packing problem and give several new combinatorial and other concentration results. Peyman Afshani, Lars Arge and Kasper Dalgaard Larsen. Orthogonal Range Reporting in Three and Higher Dimensions Abstract: In orthogonal range reporting we are to preprocess N points in d dimensions such that the points inside an axis-aligned query box can be found efficiently. This is a fundamental problem in various fields, including spatial databases and computational geometry. In this paper we provide a number of improvements for three and higher dimensional orthogonal range reporting: In the pointer machine model, we improve all the best previous results, some of which have not seen any improvements in almost two decades. In the I/O-model, we give the first polylogarithmic query bounds that exactly match the bounds in the pointer machine model but with logarithms taken to base B; no such results were known in more than three dimensions. We also prove a space lower bound that shows our new structures are space-optimal in the I/O-model. Our main pointer machine data structure uses optimal O(N (log N / loglog N)^{d-1}) space and answers queries in O(log^{d-1}N / (loglog N)^{d-2} + K) time; here K is the size of the output.
Our main I/O-efficient structure answers queries in O(Log^{d-1}N / (LogLog N)^{d-2} + K/B) I/Os and uses optimal O(N (Log N / LogLog N)^{d-1}) space, where Log N denotes the logarithm taken to base B. Yossi Azar, Benjamin Birnbaum, L. Elisa Celis, Nikhil R. Devanur and Yuval Peres. Convergence of Local Dynamics to Balanced Outcomes in Exchange Networks Abstract: Bargaining games on exchange networks have been studied by both economists and sociologists. A Balanced Outcome [9, 15] for such a game is an equilibrium concept that combines notions of stability and fairness. In a recent paper, Kleinberg and Tardos [13] introduced balanced outcomes to the computer science community and provided a polynomial time algorithm to compute the set of such outcomes. Their work left open a pertinent question: are there natural, local dynamics that converge to a balanced outcome? In this paper, we answer this question in the affirmative by describing such a process and showing that it converges to a balanced outcome whenever one exists. Subhash Khot and Rishi Saket. SDP Integrality Gaps with Local $\ell_1$-Embeddability Abstract: We construct integrality gap instances for the SDP relaxation of the Maximum Cut and the Sparsest Cut problems. It is well-known that if the triangle inequality constraints were added to the SDP, then the SDP vectors naturally define an n-point negative type (or ell_2^2) metric, where n is the number of vertices in the problem instance. Our gap instances have the property that every sub-metric on t = O((log log log n)^{1/6}) points is isometrically embeddable into ell_1. The local ell_1-embeddability constraints are implied when the basic SDP relaxation is augmented with t rounds of the Sherali-Adams LP-relaxation. For the Maximum Cut problem, we obtain an optimal gap of alpha_{GW}^{-1} - eps, where alpha_{GW} is the Goemans-Williamson constant and eps > 0 is an arbitrarily small constant.
For the Sparsest Cut problem, we obtain a gap of Omega((log log log n)^{1/13}). The latter result can be rephrased as a construction of an n-point negative type metric such that every t-point sub-metric is isometrically ell_1-embeddable, but embedding the whole metric into ell_1 incurs distortion Omega((log log log n)^{1/13}). Shahar Dobzinski and Shaddin Dughmi. On the Power of Randomization in Algorithmic Mechanism Design Abstract: In many settings the power of truthful mechanisms is severely bounded. In this paper we use randomization to overcome this problem. In particular, we construct an FPTAS for multi-unit auctions that is truthful in expectation, whereas there is evidence that no polynomial-time truthful deterministic mechanism provides an approximation ratio better than 2. We also show for the first time that truthful-in-expectation polynomial-time mechanisms are provably stronger than polynomial-time universally truthful mechanisms. Specifically, we show that there is a setting in which: (1) there is a non-polynomial-time truthful mechanism that always outputs the optimal solution, (2) no universally truthful randomized mechanism can provide an approximation ratio better than 2 in polynomial time, but (3) an FPTAS that is truthful in expectation exists. Libor Barto and Marcin Kozik. Constraint Satisfaction Problems of Bounded Width Abstract: We provide a full characterization of the applicability of the Local Consistency Checking algorithm to solving non-uniform Constraint Satisfaction Problems. This settles the conjecture of Larose and Zadori. Deeparnab Chakrabarty, Julia Chuzhoy and Sanjeev Khanna. On Allocating Goods to Maximize Fairness Abstract: We consider the Max-Min Allocation problem: given a set of m agents and a set of n items, where agent A has utility u(A,i) for item i, our goal is to allocate items to agents so as to maximize fairness.
Specifically, the utility of an agent is the sum of its utilities for the items it receives, and we seek to maximize the minimum utility of any agent. While this problem has received much attention recently, its approximability has not been well-understood thus far. The best known approximation algorithm achieves a roughly O(\sqrt{m})-approximation, and in contrast, the best known hardness of approximation stands at 2. Our main result is an algorithm that achieves a \tilde{O}(n^{\eps})-approximation in time n^{O(1/\eps)} for any \eps = \Omega(log log n / log n). In particular, we obtain a poly-logarithmic approximation in quasi-polynomial time, and for every constant \eps > 0, we obtain an n^{\eps}-approximation in polynomial time. Our algorithm also yields a quasi-polynomial time m^{\eps}-approximation algorithm for any constant \eps > 0. An interesting technical aspect of our algorithm is that we use as a building block a linear program whose integrality gap is \Omega(\sqrt{m}). We bypass this obstacle by iteratively using the solutions produced by the LP to construct new instances with significantly smaller integrality gaps, eventually obtaining the desired approximation. We also investigate a special case of the problem, where every item has a non-zero utility for at most two agents. Even in this restricted setting, the problem is hard to approximate to within any factor better than 2, and we show an algorithm that matches this hardness threshold. Nikhil Bansal and Ryan Williams. Regularity Lemmas and Combinatorial Algorithms Abstract: We present new combinatorial algorithms for the basic problems of Boolean matrix multiplication (BMM) and preprocessing a graph to answer independent set queries. We give the first improvements on combinatorial algorithms for dense BMM in many years, improving on the old "Four Russians" O(n^3/(w log n)) bound for machine models with wordsize w. (For a pointer machine, we can set w = log n.)
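The role of the wordsize w in such bounds comes from packing Boolean rows into machine words, so that a bulk row operation costs O(n/w) word operations rather than O(n). A minimal sketch of this standard word trick, using Python integers as bitsets (this is the baseline idea only, not the regularity-based algorithms of the paper):

```python
# Boolean matrix multiplication with rows packed into Python integers
# (bitsets). OR-ing two packed rows handles w bits per word operation,
# which is the wordsize speedup combinatorial BMM algorithms build on.
def bmm(A, B):
    n = len(A)
    # Pack each row of B into one integer: bit j is set iff B[i][j] == 1.
    packed_B = [sum(bit << j for j, bit in enumerate(row)) for row in B]
    C = []
    for row in A:
        acc = 0
        for k, bit in enumerate(row):
            if bit:
                acc |= packed_B[k]  # OR in row k of B, one pass of word ops
        C.append([(acc >> j) & 1 for j in range(n)])  # unpack the result row
    return C
```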
The algorithms utilize notions from Regularity Lemmas for graphs in a novel way. As our best algorithms rely on the Weak Regularity Lemma, we believe they have the potential to be useful in practice. -- We give two randomized combinatorial algorithms for BMM. The first algorithm is essentially a reduction from BMM to the Triangle Removal Lemma. The currently known bounds for the Triangle Removal Lemma only imply an O((n^3 \log b)/(b w log n)) algorithm for BMM where b = (log* n)^d for some d > 0, but improvements on the Triangle Removal Lemma will imply an improvement in the running time for BMM. The second algorithm applies the Weak Regularity Lemma of Frieze and Kannan and runs in O(n^3 (log log n)^2/(w (log n)^{5/4})) time with probability exponentially close to 1. The algorithms also use standard word tricks and the graph compression technique of Feder and Motwani based on Ramsey theory. Our result immediately implies improved combinatorial methods for other problems such as CFG parsing, detecting triangle-freeness, recognizing median graphs, and computing transitive closure. -- We also give a deterministic algorithm for answering graph queries of the form: is a subset S of V an independent set? Improving on prior work, we show how to preprocess a graph in n^{2+e} time so that independent set queries can be answered in O(n^2 (log log n)^2/(log^{5/4} n)) time, for all e > 0. This has several applications, including a deterministic combinatorial O(n^3 (log log n)^2/(w log^{5/4} n)) algorithm for triangle-freeness. The algorithm uses the Weak Regularity Lemma along with other combinatorial arguments. In addition to its applications, this case is interesting in that it is not known how to do better than O(n^2) using "algebraic" methods. Steven Myers and abhi shelat. One bit encryption is complete Abstract: Under CPA and CCA-1 attacks, a secure one-bit encryption scheme can be applied bit-by-bit to construct a secure many-bit encryption scheme.
The same construction fails, however, under a CCA2 attack. Since the notion of CCA2 security was introduced by Rackoff and Simon [rs92], it has been an open question whether single-bit CCA2-secure encryption implies the existence of many-bit CCA2 security. We positively resolve this long-standing question and establish that bit encryption is complete. Our construction is black-box, and thus requires novel techniques to avoid known impossibility results concerning trapdoor predicates [gmr01]. Our work is also an example of a shielding reduction discussed in [gmm07], which, to our knowledge, is the first one in the standard (i.e., not random-oracle) model. Irit Dinur and Prahladh Harsha. Composition of low-error 2-query PCPs using decodable PCPs Abstract: The main result of this paper is a simple, yet generic, composition theorem for low error two-query probabilistically checkable proofs (PCPs). Prior to this work, composition of PCPs was well-understood only in the constant error regime. Furthermore, until recently, composition in the low error regime suffered from incurring an extra "consistency" query, resulting in PCPs that are not "two-query" and hence, much less useful for hardness-of-approximation reductions. In a recent breakthrough, Moshkovitz and Raz (FOCS, 2008) constructed almost linear-sized low-error 2-query PCPs for every language in NP. Indeed, the main technical component of their construction is a novel composition of certain specific PCPs. We give an alternate, modular and, considerably simpler proof of their result by repeatedly applying the new composition theorem to known PCP components. To facilitate the new modular composition, we introduce a new variant of PCP, which we call a "decodable PCP (dPCP)". A dPCP is an encoding of an NP witness that is both locally checkable and locally decodable. The dPCP verifier, in addition to verifying the validity of the given proof like a standard PCP verifier, also locally decodes the original NP witness.
Our composition is generic in the sense that it works regardless of the way the component PCPs are constructed. Heiko Roeglin and Shang-Hua Teng. Smoothed Analysis of Multiobjective Optimization Abstract: We prove that the number of Pareto-optimal solutions in any multiobjective binary optimization problem with a finite number of linear objective functions is polynomial in the model of smoothed analysis. This resolves a conjecture of Rene Beier. Moreover, we give polynomial bounds on all finite moments of the number of Pareto-optimal solutions, which yields the first non-trivial concentration bound for this quantity. Using our new technique, we give a complete characterization of polynomial smoothed complexity for binary optimization problems, which strengthens an earlier result due to Beier and Voecking. David Arthur, Bodo Manthey and Heiko Roeglin. k-Means has Polynomial Smoothed Complexity Abstract: The k-means method is one of the most widely used clustering algorithms, drawing its popularity from its speed in practice. Recently, however, it was shown to have exponential worst-case running time. In this paper, we settle the smoothed running time of the k-means method. We show that the smoothed number of iterations is bounded by a polynomial in the number of data points and the reciprocal of the standard deviation of the Gaussian perturbations. This means that if an arbitrary input data set is randomly perturbed, then the k-means method will run in expected polynomial time on that input set. Shiva Kintali, Laura Poplawski, Rajmohan Rajaraman, Ravi Sundaram and Shang-Hua Teng. Reducibility Among Fractional Stability Problems Abstract: In a landmark paper, Papadimitriou introduced a number of syntactic subclasses of TFNP based on proof styles that (unlike TFNP) admit complete problems. A recent series of results has shown that finding Nash equilibria is complete for PPAD, a particularly notable subclass of TFNP.
A major goal of this work is to expand the universe of known PPAD-complete problems. We resolve the computational complexity of a number of outstanding open problems with practical applications. Here is the list of problems we show to be PPAD-complete, along with the domains of practical significance: Fractional Stable Paths Problem (FSPP) [Haxell and Wilfong, 2008] - Internet routing; Core of Balanced Games [Scarf, 1967] - Economics and Game theory; Scarf's Lemma [Scarf, 1967] - Combinatorics; Hypergraph Matching [Aharoni and Fleiner, 2003] - Social Choice and Preference Systems; Fractional Bounded Budget Connection Games (FBBC) [Laoutaris et al, 2008] - Social networks; and Strong Fractional Kernel [Aharoni and Holzman, 1998] - Graph Theory. In fact, no fully polynomial-time approximation schemes exist (unless PPAD is in FP). This paper is entirely a series of reductions that build in nontrivial ways on the framework established in previous work. In the course of deriving these reductions, we created two new concepts: preference games and personalized equilibria. The entire set of new reductions can be presented as a lattice with the above problems sandwiched between preference games (at the "easy" end) and personalized equilibria (at the "hard" end). Our completeness results extend to natural approximate versions of most of these problems. On a technical note, we wish to highlight our novel "continuous-to-discrete" reduction from exact personalized equilibria to approximate personalized equilibria using a linear program augmented with an exponential number of "min" constraints of a specific form. In addition to enhancing our repertoire of PPAD-complete problems, we expect the concepts and techniques in this paper to find future use in algorithmic game theory. Jon Feldman, Aranyak Mehta, Vahab Mirrokni and S. Muthukrishnan.
Online Stochastic Matching: Beating 1-1/e Abstract: We study the online stochastic bipartite matching problem, in a form motivated by display ad allocation on the Internet. In the online, but adversarial case, the celebrated result of Karp, Vazirani and Vazirani gives an approximation ratio of 1-1/e = 0.632..., a very familiar bound that holds for many online problems; further, the bound is tight in this case. In the online, stochastic case when nodes are drawn repeatedly from a known distribution, the greedy algorithm matches this approximation ratio, but still, no algorithm is known that beats the 1-1/e bound. Our main result is a 0.67-approximation online algorithm for stochastic bipartite matching, breaking this 1-1/e barrier. Furthermore, we show that no online algorithm can produce a 1-epsilon approximation for an arbitrarily small epsilon for this problem. Our algorithms are based on computing an optimal offline solution to the expected instance, and using this solution as a guideline in the process of online allocation. We employ a novel application of the idea of the power of two choices from load balancing: we compute two disjoint solutions to the expected instance, and use both of them in the online algorithm in a prescribed preference order. To identify these two disjoint solutions, we solve a max flow problem in a boosted flow graph, and then carefully decompose this maximum flow into two edge-disjoint (near-)matchings. In addition to guiding the online decision making, these two offline solutions are used to characterize an upper bound for the optimum in any scenario. This is done by identifying a cut whose value we can bound under the arrival distribution. At the end, we discuss extensions of our results to more general bipartite allocations that are important in a display ad application. Sandy Irani and Daniel Gottesman.
The Quantum and Classical Complexity of Translationally Invariant Tiling and Hamiltonian Problems Abstract: We study the complexity of a class of problems involving satisfying constraints which remain the same under translations in one or more spatial directions.  In this paper, we show hardness of a classical tiling problem on an (N x N) 2-dimensional grid and a quantum problem involving finding the ground state energy of a 1-dimensional quantum system of N particles.  In both cases, the only input is N, provided in binary.  We show that the classical problem is NEXP-complete and the quantum problem is QMAEXP-complete.  Thus, an algorithm for these problems that runs in time polynomial in N (exponential in the input size) would imply EXP = NEXP or BQEXP = QMAEXP, respectively.  Although tiling in general is already known to be NEXP-complete, to our knowledge, all previous reductions require that either the set of tiles and their constraints or some varying boundary conditions be given as part of the input. In the problem considered here, these are fixed, constant-sized parameters of the problem. Instead, the problem instance is encoded solely in the size of the system. Xi Chen, Decheng Dai, Ye Du and Shang-Hua Teng. Settling the Complexity of Arrow-Debreu Equilibria in Markets with Additively Separable Utilities Abstract: We prove that the problem of computing an Arrow-Debreu market equilibrium is PPAD-complete even when all traders use additively separable, piecewise-linear and concave utility functions. In fact, our proof shows that this market-equilibrium problem does not have a fully polynomial-time approximation scheme unless every problem in PPAD is solvable in polynomial time. Yi Deng, Vipul Goyal and Amit Sahai. 
Resolving the Simultaneous Resettability Conjecture and a New Non-Black-Box Simulation Strategy Abstract: Canetti, Goldreich, Goldwasser, and Micali (STOC 2000) introduced the notion of resettable zero-knowledge proofs, where the protocol must be zero-knowledge even if a cheating verifier can reset the prover and have several interactions in which the prover uses the same random tape. Soon afterwards, Barak, Goldreich, Goldwasser, and Lindell (FOCS 2001) studied the closely related notion of resettable soundness, where the soundness condition of the protocol must hold even if the cheating prover can reset the verifier to have multiple interactions with the same verifier's random tape. The major problem left open by this work was whether it is possible to have a single protocol that is simultaneously resettable zero knowledge and resettably sound. We resolve this question by constructing such a protocol. At the heart of our construction is a new non-black-box simulation strategy, which we believe to be of independent interest. This new strategy allows for simulators which "marry" recursive rewinding techniques (common in the context of concurrent simulation) with non-black-box simulation. Previous non-black-box strategies led to exponential blowups in computational complexity in such circumstances, which our new strategy is able to avoid. Wing Kai Hon, Rahul Shah and Jeffrey Scott Vitter. Space-Efficient Framework for Top-k String Retrieval Problems Abstract: Given a set of D strings (or documents) {d1, d2, ..., dD} of total length n, our task is to report the "most relevant" documents for a given query pattern P. Our notion of relevance is given by an arbitrary score function and it automatically captures metrics like frequency, proximity, and PageRank. Our solutions are a blend between suffix tree and inverted index based approaches.
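A naive baseline for such top-k queries, scoring each document by pattern frequency (one of the relevance metrics just mentioned), can be written in a few lines; this brute-force sketch is what space-efficient index structures improve upon asymptotically:

```python
import heapq

# Naive top-k most-relevant-documents query: score each document by the
# number of occurrences of pattern P (the "frequency" relevance metric)
# and return the k highest-scoring document indices, lowest index first
# on ties.
def top_k_docs(docs, pattern, k):
    def freq(d):
        return sum(d.startswith(pattern, i) for i in range(len(d)))
    scored = [(freq(d), -i) for i, d in enumerate(docs)]
    return [-i for _, i in heapq.nlargest(k, scored)]
```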
Based on this, we give linear-space data structures that answer top-k most relevant document queries in O(P + k log k) time, and that report all documents with score above a given threshold in optimal O(P + output) time. We also show succinct solutions taking space equivalent to the entropy-compressed text while taking O(P + k polylog n) time. This improves the previously known data structures for some specific relevance metrics like frequency and proximity, which took O(n log n) words of space. Hans Bodlaender, Fedor Fomin, Daniel Lokshtanov, Eelko Penninkx, Saket Saurabh and Dimitrios Thilikos. (Meta) Kernelization Abstract: Polynomial time preprocessing to reduce instance size is one of the most commonly deployed heuristics to tackle computationally hard problems. In a parameterized problem, every instance I comes with a positive integer k. The problem is said to admit a polynomial kernel if, in polynomial time, we can reduce the size of the instance I to a polynomial in k, while preserving the answer. In this paper, we show that all problems expressible in Counting Monadic Second Order Logic and satisfying a compactness property admit a polynomial kernel on graphs of bounded genus. Our second result is that all problems that have finite integer index and satisfy a weaker compactness condition admit a linear kernel on graphs of bounded genus. The study of kernels on planar graphs was initiated by a seminal paper of Alber, Fellows, and Niedermeier [J. ACM, 2004] who showed that Planar Dominating Set admits a linear kernel. Following this result, a multitude of problems have been shown to admit linear kernels on planar graphs by combining the ideas of Alber et al. with problem specific reduction rules. Our theorems unify and extend all previously known kernelization results for planar graph problems. Combining our theorems with the Erdos-Posa property, we obtain various new results on linear kernels for a number of packing and covering problems.
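A concrete classical example of a kernel (not from this paper) is the Buss kernelization for Vertex Cover: any vertex of degree greater than k must belong to every solution of size at most k, and once no such vertex remains, a yes-instance has at most k^2 edges. A sketch:

```python
# Buss kernelization for Vertex Cover. A vertex of degree > k is forced
# into every cover of size <= k, so take it and decrement the budget;
# afterwards, a yes-instance has at most k^2 edges (each cover vertex
# covers at most k of them). Returns (kernel edge set, remaining budget),
# or None if the instance is a provable no-instance.
def buss_kernel(edges, k):
    edges = set(edges)
    changed = True
    while changed:
        changed = False
        deg = {}
        for u, v in edges:
            deg[u] = deg.get(u, 0) + 1
            deg[v] = deg.get(v, 0) + 1
        for v, d in deg.items():
            if d > k:  # v is forced into the cover
                edges = {e for e in edges if v not in e}
                k -= 1
                changed = True
                break
        if k < 0:
            return None
    if len(edges) > k * k:  # kernel size bound exceeded: no-instance
        return None
    return edges, k
```

For example, a star with five leaves and k = 1 reduces to the empty kernel (take the center), while a triangle with k = 0 is rejected outright.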
Noga Alon, Ori Gurel-Gurevich and Eyal Lubetzky. Choice-memory tradeoff in allocations Abstract: In the classical balls-and-bins setup, n balls are thrown independently and uniformly into n bins. The most loaded bin then has about log n/log log n balls with high probability. A famous result of Azar, Broder, Karlin and Upfal states that, when given k uniformly and independently selected bins to choose from for the location of each ball, one can achieve a maximal load of log_k log n, simply by putting each ball in the least loaded of the k bins. To implement this simple algorithm, one needs to keep track of the status of the entire array of n bins, which requires about n bits of memory. In this work, we find a tradeoff between the number of choices, k, and the number of memory bits available, m. This tradeoff has a sharp threshold governing the performance: if km >> n then one can achieve a constant maximal load, while for km << n a constant maximal load cannot be achieved. Alexander A. Sherstov. The intersection of two halfspaces has high threshold degree Abstract: The threshold degree of a Boolean function f: {0,1}^n -> {+1,-1} is the least degree of a real polynomial p such that f(x) = sgn p(x). We construct two halfspaces on {0,1}^n whose intersection has threshold degree Theta(sqrt n), an exponential improvement on previous lower bounds. This solves an open problem due to Klivans (2002) and rules out the use of perceptron-based techniques for PAC-learning the intersection of two halfspaces, a central unresolved challenge in computational learning. We also prove that the intersection of two majority functions has threshold degree Omega(log n), which is tight and settles a conjecture of O'Donnell and Servedio (2003). Our proof consists of two parts. First, we show that for any Boolean functions f and g, the intersection f(x)^g(y) has threshold degree d if and only if there exist rational functions F, G of degree Theta(d) with ||f-F||_infty + ||g-G||_infty < 1. Second, we determine the least degree required for approximating a halfspace and a majority function to any given accuracy by rational functions.
Our technique further allows us to make considerable progress on Aaronson's recent challenge (FOCS'08 tutorial) and prove strong direct product theorems for the sign-representation and approximation of composed Boolean functions of the form F(f_1,...,f_n). Christian Sommer, Elad Verbin and Wei Yu. Distance Oracles for Sparse Graphs Abstract: Thorup and Zwick, in their seminal work, introduced the distance oracle, which is a data structure that answers distance queries for a graph. For any integer k, they provide an efficient algorithm to construct an approximate distance oracle of size O(kn^{1+1/k}) that can answer queries in time O(k) with a distance estimate that is at most 2k-1 times larger than the actual shortest distance (this last term is called the stretch). They proved that their data structure is optimal in terms of space; however, the proof only holds for dense graphs, and the best bound it can prove is that the size of the data structure is lower bounded by the number of edges of the graph. We give a new tight (up to constants) space lower bound for approximate distance oracles in the cell-probe model. The lower bound follows by a new reduction from lopsided set disjointness to distance oracles, based on recent work by Patrascu. Contrary to existing lower bounds for distance oracles, our bound also holds for sparse graphs. We show that when the query time is t and the stretch is a, then the space has to be at least n^{1+1/\Omega(ta)}. Anne Broadbent, Joseph Fitzsimons and Elham Kashefi. Universal Blind Quantum Computation Abstract: We present a protocol which allows a client to have a server carry out a quantum computation for her such that the client's inputs, outputs and computation remain perfectly private, and where she does not require any quantum computational power or memory.
The client only needs to be able to prepare single qubits randomly chosen from a finite set and send them to the server, who has the balance of the required quantum computational resources. We give an authentication protocol that allows the client to detect an interfering server; our scheme can also be made fault-tolerant. We also generalize our result to the setting of a purely classical client who communicates classically with two non-communicating entangled servers, in order to perform a blind quantum computation. By incorporating the authentication protocol, we show that any problem in BQP has an entangled two-prover interactive proof with a purely classical verifier. The novelty of our approach is in using the unique features of measurement-based quantum computing, which allows us to clearly distinguish between the quantum and classical aspects of a quantum computation. Tanmoy Chakraborty, Zhiyi Huang and Sanjeev Khanna. Dynamic and Non-Uniform Pricing Strategies for Revenue Maximization Abstract: We study the Item Pricing problem for revenue maximization in the limited supply setting, where a single seller with n distinct items caters to m buyers with unknown subadditive valuation functions who arrive in a sequence. The seller sets the prices on individual items. Each buyer buys a subset of yet unsold items that maximizes her utility. Our goal is to design pricing strategies that guarantee an expected revenue that is within a small factor of the optimal social welfare -- an upper bound on the maximum revenue possible. Most earlier work has focused on the unlimited supply setting, where selling an item to a buyer does not affect the availability of the item to future buyers. Recently, Balcan et al. studied the limited supply setting, giving a randomized pricing strategy that achieves a super-poly-logarithmic approximation; their strategy assigns a single price to all items (uniform pricing), and never changes it (static pricing).
They also showed that no pricing strategy that is both static and uniform can give a poly-logarithmic approximation. We design dynamic uniform pricing strategies (all items are identically priced but item prices can change over time), as well as static non-uniform pricing strategies (different items can have different prices but prices do not change over time), that give poly-logarithmic approximation for revenue maximization. Thus in the limited supply setting, our results highlight a strong separation between the power of dynamic and non-uniform pricing strategies versus static uniform pricing strategy. Matt Gibson and Kasturi Varadarajan. Decomposing Coverings and the Planar Sensor Cover Problem Abstract: We show that a k-fold covering using translates of an arbitrary convex polygon can be decomposed into Omega(k) covers (using an efficient algorithm).  We generalize this result to obtain a constant factor approximation to the sensor cover problem where the ranges of the sensors are translates of a given convex polygon.  The crucial ingredient in this generalization is a constant factor approximation for a one-dimensional version of the sensor cover problem, called the Restricted Strip Cover (RSC) problem, where sensors are intervals of possibly different lengths.  Our algorithm for RSC improves on the previous O(log log log n) approximation. Yuval Ishai, Eyal Kushilevitz, Rafail Ostrovsky and Amit Sahai. Extracting Correlations Abstract: Randomness extractors convert dirty sources of randomness into clean sources of randomness. We consider an extension of randomness extraction and the related notion of privacy amplification to the case of two correlated sources. Our main result is an efficient interactive two-party protocol which extracts m clean independent instances of a given joint distribution (X,Y) from n=O(m) dirty (or "leaky") instances of the same distribution. The classical case corresponds to X and Y being identical random bits. 
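The classical special case, where X and Y are identical random bits, connects to the oldest extraction trick: the von Neumann extractor turns i.i.d. biased coin flips into perfectly unbiased bits. It is a non-interactive, single-source baseline, far weaker than the interactive correlation extractors of the paper:

```python
# Von Neumann extractor: scan i.i.d. biased bits in disjoint pairs and
# map 01 -> 0, 10 -> 1, discarding 00 and 11. Since P(01) = P(10) for
# independent identically biased flips, the output bits are unbiased.
def von_neumann(bits):
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out
```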
We present several cryptographic applications of correlation extractors and our main result. These include protecting certain cryptographic protocols against information leakage, basing communication-efficient secure computation on a number theoretic intractability assumption, and efficient reductions between different communication channels. T.S. Jayram and David Woodruff. The Data Stream Space Complexity of Cascaded Norms Abstract: We consider the problem of estimating cascaded aggregates over a matrix presented as a sequence of updates and deletions in a data stream. A cascaded aggregate P \circ Q is defined by evaluating aggregate Q repeatedly over each row of the matrix, and then evaluating aggregate P over the resulting vector of values. Such aggregates have applications in the analysis of scientific data, stock market transactions, credit card fraud, and IP traffic. The problem was introduced by Cormode and Muthukrishnan, PODS 2005. We analyze the space complexity of estimating cascaded norms on an n x d matrix to within a small relative error. Let L_p denote the p-th norm, where p is a non-negative integer. We abbreviate the cascaded norm L_k \circ L_p by L_{k,p}. -- For any constant k >= p >= 2, we obtain a 1-pass O~(n^{1-2/k} d^{1-2/p})-space algorithm for estimating L_{k,p}. This is optimal up to polylogarithmic factors. In particular, we resolve an open question of Cormode and Muthukrishnan regarding the space complexity of estimating L_{4,2}. We also obtain 1-pass space-optimal algorithms for estimating L_{\infty,k} and L_{k,\infty}. -- We prove a space lower bound of \Omega(n^{1-1/k}) on estimating L_{k,0} and L_{k,1}. This resolves an open question due to Piotr Indyk, IITK Workshop on Algorithms for Data Streams (Problem 8), 2006. -- For any k >= 0, we obtain a 1-pass space-optimal algorithm for estimating L_{k,2}. Our techniques also solve the "heavy hitters" problem for rows of the matrix weighted by the L_2 norm.
These resolve two more open questions of Cormode and Muthukrishnan. Previously, Ganguly, Bansal, and Dube, 2008 claimed an O~(1)-space algorithm for estimating L_{k,p} for any k,p in [0,2]. A simple reduction from multiparty set disjointness shows this claim is incorrect for any k,p for which kp > 2. Jonathan Kelner and Aleksander Madry. Faster generation of random spanning trees Abstract: In this paper, we set forth a new algorithm for generating approximately uniformly random spanning trees in undirected graphs. We show how to sample from a distribution that is within a multiplicative factor (1+delta) of uniform in expected time O(m sqrt{n} log 1/delta). This improves upon the previous best worst-case bound of O(min(mn, n^{2.376})), which has stood for twenty years. To achieve this goal, we exploit the connection between random walks on graphs and electrical networks, and we use this to introduce a new approach to the problem that integrates discrete random-walk-based techniques with continuous linear algebraic methods. We believe that our use of electrical networks and sparse linear system solvers in conjunction with random walks and combinatorial partitioning techniques is a useful paradigm that will find further applications in algorithmic graph theory. Prasad Raghavendra and David Steurer. Integrality gaps for Strong SDP Relaxations of Unique Games Abstract: The Unique Games Conjecture (UGC) has led to hardness of approximation results (often optimal) for several fundamental optimization problems such as MaxCut and Vertex Cover. The hardness results obtained using UGC assert that simple semidefinite programs yield the best approximation ratios for a variety of problems. Yet, there is little evidence supporting the claim that stronger semidefinite programming relaxations do not yield better approximations to these problems. In this work, we obtain SDP integrality gaps for certain strong relaxations of Unique Games.
Specifically, we exhibit a Unique Games integrality gap for the natural semidefinite program, along with all valid constraints on at most R = O(2^{(log log n)^{1/4}}) vectors. For a stronger relaxation consisting of the semidefinite program along with an R-round Sherali-Adams hierarchy, we prove a Unique Games integrality gap for R = O((log log n)^{1/4}). By composing these SDP gaps through known reductions, the above results imply corresponding integrality gaps for every problem for which a UGC-based hardness is known. Prasad Raghavendra and David Steurer. How to Round Any CSP Abstract: A large number of interesting combinatorial optimization problems, like Max3SAT and MaxCut, fall under the class of constraint satisfaction problems. Recent work by one of the authors identifies a semidefinite programming relaxation that yields the optimal approximation ratio for every CSP, under the Unique Games Conjecture. In this work, we present an efficient rounding scheme that achieves the integrality gap of this SDP, unconditionally for every CSP. The SDP relaxation under consideration is stronger than or equivalent to any relaxation used in the literature to approximate a CSP. Thus, irrespective of the truth of the UGC, our work yields an efficient generic algorithm that, for every CSP, achieves an approximation at least as good as the best known algorithm in the literature. Ryan O'Donnell and Karl Wimmer. KKL, Kruskal-Katona, and monotone nets Abstract: We generalize the Kahn-Kalai-Linial (KKL) Theorem to random walks on Cayley and Schreier graphs, making progress on an open problem of Hoory, Linial, and Wigderson. In our generalization, the underlying group need not be abelian so long as the generating set is closed under conjugation. An example corollary is that if f is a boolean function on the set of n-bit strings of Hamming weight k, with E[f] and k/n bounded away from 0 and 1, then there is a pair of indices i, j such that Inf_{ij}(f) is at least Omega(log n / n).
Here Inf_{ij}(f) denotes the "influence" on f of swapping the ith and jth coordinates. Using this corollary we obtain a "robust" version of the Kruskal-Katona Theorem: Given a constant-density subset A of a middle slice of the Hamming n-cube, the density of A's "shadow" is greater by at least Omega(log n / n), unless A is noticeably correlated with a single coordinate. As an application of these results, we show that the set of functions {0, 1, x_1, ..., x_n, Majority} is a (1/2 - gamma)-net for the set of all n-bit monotone boolean functions, where gamma = Omega(log n / sqrt{n}). This distance is optimal for polynomial-size nets and gives an optimal weak-learning algorithm for monotone functions under the uniform distribution, solving a problem of Blum, Burch and Langford. Jonathan Kelner, James Lee, Gregory Price and Shanghua Teng. Higher eigenvalues of graphs Abstract: We present a general method for proving upper bounds on the eigenvalues of the graph Laplacian. In particular, we show that for any positive integer k, the kth smallest eigenvalue of the Laplacian on a bounded-degree planar graph is O(k/n). This bound is asymptotically tight for every k, as it is easily seen to be achieved for planar grids. We also extend this spectral result to graphs with bounded genus, graphs which forbid fixed minors, and other natural families. Previously, such spectral upper bounds were only known for k=2, i.e. for the Fiedler value of these graphs. In addition, our result yields a new, combinatorial proof of the celebrated result of Korevaar in differential geometry. Alexandr Andoni, Khanh Do Ba, Piotr Indyk and David Woodruff. Efficient sketches for Earth-Mover Distance, with applications Abstract: We provide the first sub-linear sketching algorithm for estimating the planar Earth-Mover Distance with a constant approximation.
For sets living in the two-dimensional grid [\Delta]^2, we achieve space \Delta^{\epsilon} for approximation O(1/\epsilon), for any desired 0 < \epsilon < 1. Our sketch has immediate applications to the streaming and nearest neighbor search problems. Vitaly Feldman, Venkatesan Guruswami, Prasad Raghavendra and Yi Wu. Agnostic Learning of Monomials by Halfspaces is Hard Abstract: We prove the following strong hardness result for learning: Given a distribution on labeled examples from the hypercube such that there exists a monomial (or conjunction) consistent with a (1-e)-fraction of the examples, it is NP-hard to find a halfspace that is correct on a (1/2+e)-fraction of the examples, for an arbitrary constant e > 0. In learning theory terms, weak agnostic learning of monomials by halfspaces is NP-hard. This hardness result bridges and subsumes two previous results, which showed similar hardness for the proper learning of monomials and of halfspaces. As immediate corollaries of our result, we give the first optimal hardness results for weak agnostic learning of decision lists and majorities. Our techniques are quite different from previous hardness proofs for learning. We use an invariance principle and sparse approximation of halfspaces from recent work on fooling halfspaces to give a new natural list decoding of a halfspace in the context of dictatorship tests/label cover reductions. In addition, unlike previous invariance-principle-based proofs, which are only known to give Unique Games hardness, we give a reduction from a smooth version of Label Cover that is known to be NP-hard. Ashish Goel and Ian Post. An Oblivious O(1)-Approximation for Single Source Buy-at-Bulk Abstract: We consider the single-source (or single-sink) buy-at-bulk problem with an unknown concave cost function.
We want to route a set of demands through a graph to or from a designated root node, where the cost of routing x units of flow along an edge is proportional to some concave, non-decreasing function f such that f(0) = 0. We present a polynomial time algorithm that finds a distribution over trees such that the expected cost of a tree for any f is within an O(1) factor of the optimum cost for that f. The previous best simultaneous approximation for this problem, even ignoring computation time, was O(log |D|), where D is the multi-set of demand nodes. We design a simple algorithmic framework using the ellipsoid method that finds an O(1)-approximation if one exists, and then construct a separation oracle, using a novel adaptation of the Guha, Meyerson, and Munagala algorithm for the single-sink buy-at-bulk problem, that proves an O(1)-approximation is possible for all f. The number of trees in the support of the distribution constructed by our algorithm is at most 1 + log |D|. Gagan Goel, Chinmay Karande, Pushkar Tripathi and Lei Wang. Approximability of Combinatorial Problems with Multi-agent Submodular Cost Functions Abstract: In this paper, we introduce an algorithmic framework for studying combinatorial problems in the presence of multiple agents with submodular cost functions. We study several fundamental "covering"-type problems (like Vertex Cover, Spanning Tree, and Perfect Matching) in this setting and establish tight upper and lower bounds on their approximability. For instance, consider the network design problem of constructing a spanning tree of some graph, assuming there are many agents willing to construct different parts of the tree. The cost of each agent for constructing a particular set of edges could be a complex function. Agents might provide discounts depending on how many and which edges they construct. The algorithmic question that we consider is to find a min-cost spanning tree when the agents' cost functions satisfy submodularity.
Note that such an algorithm must simultaneously find a spanning tree and partition its edges among the agents. Avinatan Hassidim, Jonathan Kelner, Huy Nguyen and Krzysztof Onak. Local Graph Partitions for Approximation and Testing Abstract: We introduce a new tool for approximation and testing algorithms, called local partitioning oracles, and develop techniques for constructing them for graph families with an excluded minor. These oracles use only local computation to consistently answer queries about a global partition that breaks the graph into small connected components by removing only a small fraction of the edges. We illustrate the power of this technique by using it to unify, extend, and simplify previous testing results about minor-free graphs, and to provide several new results that were unachievable with existing techniques. 1. For any class of graphs with an excluded minor, we provide algorithms for a wide range of problems, including minimum vertex cover, minimum dominating set, and maximum independent set, that approximate the size of the optimal solution to within an additive epsilon*n in time 2^poly(1/epsilon). It is known that for general graphs of bounded degree this is not possible in time independent of the graph size. 2. We give a simpler proof that any minor-closed graph property is testable in constant time in the bounded degree model. Benjamini, Schramm, and Shapira (STOC 2008) gave a tester for this problem that runs in time triple exponential in 1/epsilon; our algorithm solves the problem in single exponential time. 3. Not only do we reprove the result of Czumaj, Shapira, and Sohler (SICOMP 2009) on the testability of hereditary properties of "non-expanding" graphs in the bounded degree model, but we also show that it is in fact possible to approximate the distance to a hereditary property in this setting. Hereditary properties include all minor-closed properties, bipartiteness, k-colorability, and perfectness.
Arkadev Chattopadhyay and Avi Wigderson. Linear systems over composite moduli Abstract: We study solution sets to systems of 'generalized' linear equations of the form ell_i (x_1, x_2,...,x_n) in A_i (mod m) where ell_1,...,ell_t are linear forms over Z_m in n Boolean variables,  each A_i is an arbitrary subset of Z_m, and m is a composite integer that is a product of two distinct primes, like 6. Our main technical result is that such solution sets have exponentially small correlation with the boolean function MOD_q, when m and q are relatively prime. This bound is independent of the number t of equations. Using this, we derive the first exponential lower bound on the size of depth-three boolean circuits having a MAJORITY gate at the top, AND/OR gates at the middle layer and generalized MOD_m gates at the base, computing the function MOD_q. This settles a decade-old open problem posed by Beigel and Maciel for the case of such modulus m. Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, Alessandro Panconesi and Prabhakar Raghavan. Models for the compressible Web Abstract: Graphs resulting from human behavior (the web graph, friendship graphs, etc.) have hitherto been viewed as a monolithic class of graphs with similar characteristics; for instance, their degree distributions are markedly heavy-tailed.  In this paper we take our understanding of behavioral graphs a step further, by showing that an intriguing empirical property of web graphs --- their compressibility --- cannot be exhibited by well-known graph models for the web and for social networks. We then develop a more nuanced model for web graphs and show that it does exhibit compressibility, in addition to previously modeled web graph properties. Jonah Sherman. 
Breaking the Multicommodity Flow Barrier for O(sqrt(log n))-approximations to Sparsest Cut Abstract: This paper ties together the line of work on algorithms that approximate sparsest cut to within an O(sqrt(log n)) factor and the line of work on algorithms that run in sub-quadratic time. We present an algorithm that simultaneously achieves the approximation factor of the former and the nearly max-flow running time of the latter. Our main technical contribution is a stronger, algorithmic version of the structure theorem central to the analysis of Arora et al.'s SDP rounding algorithm. In particular, we show that the matching-chaining argument at the heart of their proof can be viewed as an algorithm that finds good augmenting paths in a multicommodity flow problem related to the SDP's dual. By using this specialized algorithm in place of a black-box solver, we are able to solve these instances much more efficiently. We also show that the recent cut-matching game framework cannot achieve an approximation better than Omega(log n / log log n), implying that our algorithm cannot be expressed in that framework. Falk Unger. A Probabilistic Inequality with Applications to Threshold Direct Product Theorems Abstract: Threshold direct product theorems are statements of the following form: If one instance of a problem can be solved with probability at most p, then for large k the probability of solving significantly more than p*k out of k independent instances becomes negligible. Results of this kind are crucial when distinguishing whether a process succeeds with probability s or c, for 0 < s < c < 1. For this we need at least u(\delta) honest players, where u(\delta) is a constant depending on \delta, and we need the one-way permutation to be hard to invert even on sources of polynomially small min-entropy.