# Theoretical Computer Science @ IIT Bombay

## Theory Seminar and Reading Group

Talks are announced on the theory-seminar mailing list.
• Some Closure Results for Polynomial Factorization and Applications [Jan 09 2019]

Date: January 9, 2019
Time: 11:00 am - 12:00 noon
Venue: CC 109, New CSE Building

In a sequence of seminal results in the 1980s, Kaltofen showed that if an n-variate polynomial of degree poly(n) can be computed by an arithmetic circuit of size poly(n), then each of its factors can also be computed by an arithmetic circuit of size poly(n). In other words, the complexity class VP of polynomials (the algebraic analog of P) is closed under taking factors.

A fundamental question in this line of research, which has largely remained open, is whether other natural classes of multivariate polynomials (for instance, arithmetic formulas, algebraic branching programs, constant-depth arithmetic circuits, or the complexity class VNP of polynomials, the algebraic analog of NP) are closed under taking factors. In addition to being fundamental questions on their own, such 'closure results' for polynomial factorization play a crucial role in understanding hardness-randomness tradeoffs for algebraic computation.

I will talk about the following two results, whose study was motivated by these questions.

1. The class VNP is closed under taking factors. This proves a conjecture of Bürgisser.
2. All factors of degree at most poly(log n) of polynomials with constant-depth circuits of size poly(n) themselves have constant-depth arithmetic circuits of size poly(n), for a slightly larger depth constant. This partially answers a question of Shpilka and Yehudayoff and has applications to hardness-randomness tradeoffs for constant-depth arithmetic circuits.

Based on joint work with Chi-Ning Chou and Noam Solomon.

• Enabling Secure Computation with Minimal Interaction [Nov 26 2018]

Date: November 26, 2018
Time: 11:00 am - 12:00 noon
Venue: CC 109, New CSE Building

Cryptography, originally the art of facilitating secret communication, has now evolved to enable a variety of secure tasks. Some fundamental challenges that modern cryptography addresses are: Can we prevent adversaries from tampering with encrypted communication? Can we verify that computation is performed correctly while preserving the privacy of data on which computation occurs? Can we enable mutually distrusting participants to jointly compute on distributed private data?

In this talk, I will discuss new research that builds minimally-interactive secure protocols to accomplish these tasks, based on widely believed cryptographic hardness assumptions. I will also present new techniques that overcome known barriers and enable new kinds of secure protocols. These techniques have made it possible to achieve non-malleability, strong variants of zero-knowledge, and other security properties while requiring only a single message from each participant.

This is based on joint works with Abhishek Jain, Yael Kalai, Ron Rothblum and Amit Sahai.

• Algorithms from Physics [Oct 25 2018]

Date: October 25, 2018
Time: 11:30 am - 12:30 pm
Venue: CC 109, New CSE Building

In understanding physical systems over hundreds of years, Physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when abstracted appropriately, could lead to new algorithmic techniques with applications to machine learning and TCS. I will present a couple of recent examples from my own research on such interactions between Physics and Algorithms – a Hamiltonian Dynamics inspired algorithm for sampling from continuous distributions and a Boltzmann's equation based algorithm for estimating the partition function for discrete distributions.

• Fairness and Diversity in Online Social Systems [Oct 25 2018]

Date: October 25, 2018
Time: 10:30 am - 11:30 am
Venue: CC 109, New CSE Building

Social systems are now fueled by algorithms that facilitate and control connections and information. Simultaneously, computational systems are now fueled by people - their interactions, data, and behavior. Consequently, there is a pressing need to design new algorithms that are socially responsible in how they learn, and socially optimal in the manner in which they use information. Recently, we have made initial progress in addressing such problems at this interface of social and computational systems. In this talk, we will first understand the emergence of bias in data and algorithmic decision making and present first steps towards developing a systematic framework to control biases in classical problems such as data summarization and personalization. This work leads to new algorithms that have the ability to alleviate bias and increase diversity while often simultaneously maintaining their theoretical or empirical performance with respect to the original metrics.

• A Short List of Equalities Induces Large Sign Rank [Sept 18 2018]

Date: September 18, 2018
Time: 2:00 pm - 4:00 pm
Venue: CC 105, New CSE Building

We exhibit a natural function $$F_n$$ on $$n$$ variables, computable by a linear-size decision list of "Equalities", but whose sign rank is $$\exp(\Omega(n^{1/4}))$$. The simplicity of $$F_n$$ yields the following two consequences.

1) $$F_n$$ can be computed by linear sized depth-2 threshold formulas when the weights of the threshold gates are unrestricted (THR o THR), but any THR o MAJ circuit computing it requires size $$\exp(\Omega(n^{1/4}))$$. This provides the first exponential separation between the boolean circuit complexity classes THR o MAJ and THR o THR, answering a question of Amano and Maruoka [MFCS'05] and Hansen and Podolskii [CCC'10]. In contrast, Goldmann, Hastad and Razborov [Computational Complexity'92] had shown more than 25 years ago that functions efficiently computable by MAJ o THR circuits can also be efficiently computed by MAJ o MAJ circuits. Thus, while weights at the bottom layer do not matter when the top is light, we discover that they do when the top is heavy.

2) The function $$F_n$$ lies in the communication complexity class PMA under the natural partition of the inputs. Since $$F_n$$ has large sign rank, this implies PMA is not a subset of UPP, strongly resolving a conjecture of Goos, Pitassi and Watson [ICALP'16].

In order to prove our main result, we view $$F_n$$ as an XOR function, and develop a technique to lower bound the sign rank of such functions. This requires novel arguments that allow us to conclude that polynomials of low Fourier weight cannot approximate certain functions, even if they are allowed unrestricted degree.

Based on joint work with Arkadev Chattopadhyay (TIFR), to appear in FOCS 2018.
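To make the computational model concrete, here is a minimal Python sketch of a generic decision list of Equalities in the two-party setting. The rule structure below is a hypothetical illustration; the abstract does not specify the concrete $$F_n$$.

```python
def eval_equality_decision_list(rules, default, x, y):
    """Evaluate a decision list of Equalities on an input split between
    two parties as block tuples x (Alice) and y (Bob): scan the rules
    in order and output the bit attached to the first rule whose
    equality query [x_i == y_j] fires; output `default` otherwise."""
    for i, j, out in rules:
        if x[i] == y[j]:
            return out
    return default

# A hypothetical three-rule list over three-block inputs:
rules = [(0, 0, 1), (1, 1, 0), (2, 2, 1)]
```

A linear-size list of such equality queries is all it takes to define a function in this model, yet the talk shows the sign rank of $$F_n$$ is exponential.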

• Bootstrapping Identity Tests for Algebraic Circuits [Aug 1 2018]

Date: August 1, 2018
Time: 2:00 pm - 3:00 pm
Venue: CC 109, New CSE Building

The classical Ore-DeMillo-Lipton-Schwartz-Zippel lemma [O22,DL78,Z79,S80] states that any nonzero polynomial of degree at most $$s$$ over $$n$$ variables will evaluate to a nonzero value at some point on a grid $$S^n$$ with $$|S| > s$$. This leads to a deterministic polynomial identity test (PIT) for all degree-$$s$$ size-$$s$$ algebraic circuits over $$n$$ variables that runs in time $$poly(s) \cdot (s+1)^n$$.
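The lemma also yields the standard randomized blackbox PIT, in which the exhaustive grid search is replaced by a few random evaluations; a minimal sketch over the integers (the example polynomials are our own illustration):

```python
import random

def randomized_pit(poly, n, degree, trials=20):
    """Randomized blackbox PIT via the Ore-DeMillo-Lipton-Schwartz-Zippel
    lemma: a nonzero n-variate polynomial of degree <= `degree` vanishes
    at a uniform point of S^n with probability at most degree/|S|, so
    seeing only zeros over independent trials is strong evidence that
    `poly` is identically zero."""
    S = range(2 * degree + 1)             # any |S| > degree works
    for _ in range(trials):
        point = [random.choice(S) for _ in range(n)]
        if poly(point) != 0:
            return False                  # witness found: poly is nonzero
    return True                           # probably identically zero

# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; xy - y is not.
zero_poly = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
nonzero_poly = lambda v: v[0]*v[1] - v[1]
```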

Agrawal, Ghosh and Saxena recently introduced the technique of bootstrapping that can be used to asymptotically improve the running time of blackbox identity tests. They showed that a running time as bad as $$(s+1)^{n^{0.5 - \delta}} H(n)$$ for any $$\delta > 0$$ and arbitrary $$H$$ can be improved to a running time of $$s^{\exp \circ \exp(\log^{\ast} s)}$$, which is slightly super-polynomial. Building on their work we show that the hypothesis can be weakened further to a running time of $$(s+1)^{o(n)} H(n)$$ (slightly better than the trivial $$s^{O(n)}$$) to get essentially the same conclusion.

This is a joint work with Mrinal Kumar (Harvard, Cambridge, MA) and Ramprasad Saptharishi (TIFR, Mumbai).

• A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas [Jul 4 2018]

Date: July 4, 2018
Time: 12:05 pm - 1:00 pm
Venue: CC 109, New CSE Building

Arithmetic formulas are directed trees whose leaves are labelled by variables or field elements, and whose internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs, making formulas a natural model for computing polynomials syntactically (symbolically). The size of an arithmetic formula is the number of + and x operations used to compute the polynomial.
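A minimal Python sketch of this model (tuples for internal vertices, strings and numbers for leaves; the representation is our own illustration, not from the talk):

```python
def evaluate(node, assignment):
    """Evaluate an arithmetic formula given as a tree: internal vertices
    are ('+', left, right) or ('*', left, right); leaves are variable
    names or field constants."""
    if isinstance(node, tuple):
        op, left, right = node
        l, r = evaluate(left, assignment), evaluate(right, assignment)
        return l + r if op == '+' else l * r
    return assignment.get(node, node)   # variable lookup, else constant

def size(node):
    """Formula size: the number of + and x operations."""
    if isinstance(node, tuple):
        return 1 + size(node[1]) + size(node[2])
    return 0

# (x + y) * (x + 3): three operations, hence size 3.
f = ('*', ('+', 'x', 'y'), ('+', 'x', 3))
```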

An important question in the study of algebraic computational complexity is to understand the amount of computational resources (here, the size of the formula) needed to compute a given polynomial.

In this talk, we will restrict ourselves to arithmetic formulas in which every vertex computes a multilinear polynomial, and in this scenario we show explicit separations between the expressive powers of small-depth multilinear formulas of different polynomial sizes. The multilinear restriction is a reasonable one, since most polynomials of interest are indeed multilinear: the Determinant, the Permanent, Iterated Matrix Multiplication, the Elementary Symmetric polynomials, etc.

Formally, we show that for any $$s = \mathrm{poly}(n)$$ and $$\delta > 0$$, there are explicit polynomial families that have multilinear formulas of size at most $$s$$ but no 'small'-depth multilinear formulas of size less than $$s^{0.5 - \delta}$$.

Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using $$\epsilon$$-biased spaces.

(Joint work with Nutan Limaye and Srikanth Srinivasan)

• Lower Bound Techniques for QBF Proof Systems [Apr 17 2018]

Date: April 17, 2018
Time: 11:30 am - 12:30 pm
Venue: CC 109, New CSE Building

How do we prove that a false QBF is indeed false? How big a proof is needed? The special case in which all quantifiers are existential is the well-studied setting of propositional proof complexity. Unsurprisingly, universal quantifiers change the game significantly. Several proof systems have been designed over the last couple of decades to handle QBFs. Lower bound paradigms from propositional proof complexity cannot always be extended: in most cases feasible interpolation and the consequent transfer of circuit lower bounds works, but obtaining lower bounds on size via lower bounds on width fails dramatically. A new paradigm with no analogue in the propositional world has emerged in the form of strategy extraction, allowing both the transfer of circuit lower bounds and independent, genuinely QBF lower bounds based on a semantic cost measure.

This talk will provide a broad overview of some of these developments.

• Finding Euler Tours in One Pass in the W-Streaming Model with O(n log n) Memory [Mar 13 2018]

Date: March 13, 2018
Time: 11:30 am - 12:30 pm
Venue: CC 101, New CSE Building

We study the problem of finding an Euler tour of an undirected graph G in the W-Streaming model with O(n polylog(n)) RAM, where n and m are the numbers of nodes and edges of G. Our main result is the first one-pass W-Streaming algorithm that computes an Euler tour of G, in the form of an edge successor function, using only O(n log n) RAM, which is optimal for this setting (e.g., Sun and Woodruff (2015)). The previously best-known result in this model is implicitly given by Demetrescu et al. (2010) combined with the parallel algorithm of Atallah and Vishkin (1984), using O(m/n) passes under the same RAM limitation; for graphs with $$\omega(n)$$ edges this is non-constant.

Our overall approach is to partition the edges into edge-disjoint cycles and to merge the cycles until a single Euler tour is achieved. Note that in the W-Streaming model such a merging is far from obvious, as the limited RAM allows processing only a constant number of cycles at once. This forces us to merge cycles that are partially no longer present in RAM. Furthermore, the successor of an edge cannot be changed after the edge has left RAM, so we steadily have to output edges and their designated successors without knowing the edges and cycles yet to come. We solve this problem with a special edge-swapping technique, for which two designated edges per node suffice to merge tours without having all of their edges in RAM. Mathematically, this is controlled by structural results on the space of certain equivalence classes corresponding to cycles and by a characterization of the associated successor functions. For example, we give conditions under which swapping edge successors merges equivalence classes. The mathematical methods of our analysis are new and might be of independent interest for other routing problems in streaming models.

(Joint work with Christian Glazik, Jan Schiemann.)
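For contrast with the streaming setting, the classical in-memory solution is Hierholzer's algorithm, which freely revisits cycles held in RAM; a standard Python sketch (our own illustration, not the W-Streaming algorithm of the talk):

```python
from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's algorithm: return an Euler tour (as a vertex
    sequence) of a connected undirected graph in which every vertex
    has even degree.  Edges are given as (u, v) pairs."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                 # discard already-traversed edges
        if adj[v]:
            w, i = adj[v].pop()
            used[i] = True
            stack.append(w)              # extend the current trail
        else:
            tour.append(stack.pop())     # backtrack: v is finished
    return tour[::-1]
```

The streaming algorithm must achieve the effect of these cycle merges while holding only O(n log n) bits and committing to each edge's successor irrevocably.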

• Homomorphic Secret Sharing [Feb 27 2018]

Date: February 27, 2018
Time: 2:00 pm to 3:00 pm
Venue: CC 109, New CSE Building

A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports a compact local evaluation of functions of the shared secret. For the purpose of applications of HSS, it is useful to make the stronger requirement that the output can be reconstructed via addition. That is, if a secret $$s$$ is split into shares $$s_1,\ldots,s_m,$$ then $$f(s)$$ can be written as $$F(s_1)+\ldots+F(s_m)$$ for some efficiently computable $$F$$, where addition is over a finite abelian group.
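A toy Python sketch of this output format for the simplest possible case: a linear function under 2-out-of-2 additive sharing, where the local evaluator F can be taken to be f itself. The schemes in the talk achieve this for far richer classes (branching programs) under cryptographic assumptions; the modulus and function below are our own illustrative choices.

```python
import random

P = 2**61 - 1          # a prime modulus; secrets and shares live in Z_P

def share(s):
    """2-out-of-2 additive sharing: s = s1 + s2 (mod P); each share on
    its own is uniformly distributed and reveals nothing about s."""
    s1 = random.randrange(P)
    return s1, (s - s1) % P

def F(share_i):
    # Local evaluator for the linear function f(s) = 3*s; for linear f,
    # F = f already satisfies f(s) = F(s1) + F(s2) over the group Z_P.
    return (3 * share_i) % P

s1, s2 = share(42)
reconstructed = (F(s1) + F(s2)) % P     # equals f(42) = 126
```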

The talk will survey the state of the art on HSS and its applications, focusing mainly on: (1) computationally secure 2-out-of-2 HSS schemes for simple function classes using any one-way function, and (2) similar HSS schemes for branching programs from assumptions related to the hardness of discrete logarithms. These can provide alternatives to fully homomorphic encryption in the context of minimizing the communication complexity and round complexity of secure computation.

The talk is based mostly on joint works with Elette Boyle and Niv Gilboa.

• On Expressing Majority as a Majority of Majorities [Jan 25 2018]

Date: January 25, 2018
Time: 10:30 am to 11:30 am
Venue: CC 109, New CSE Building

If $$k < n$$, can one express the majority of $$n$$ bits as the majority of at most $$k$$ majorities, each of at most $$k$$ bits? We prove that such an expression is possible only if $$k=\Omega(n^{4/5})$$. This improves on a bound proved by Kulikov and Podolskii, who showed that $$k=\Omega(n^{0.7-\delta})$$. Our proof is based on ideas originating in discrepancy theory, as well as a strong concentration bound for sums of independent Bernoulli random variables and a strong anticoncentration bound for the hypergeometric distribution.
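To see why the question is delicate, note that the naive composition, the majority of the majorities of disjoint blocks, already fails to compute majority; a brute-force Python check for n = 9, k = 3 (our own illustration):

```python
from itertools import product

def maj(bits):
    """Strict majority of an odd number of bits."""
    return int(2 * sum(bits) > len(bits))

def blockwise(bits):
    """MAJ_3 of the MAJ_3's of three disjoint blocks of nine bits."""
    return maj([maj(bits[0:3]), maj(bits[3:6]), maj(bits[6:9])])

# Exhaustive search over all 2^9 inputs: the disjoint-block composition
# disagrees with MAJ_9, e.g. when four 1s are placed to win two blocks.
counterexamples = [b for b in product([0, 1], repeat=9)
                   if blockwise(b) != maj(b)]
```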

• The Chasm at Depth Four, and Tensor Rank : Old results, new insights [Oct 25 2017]

Date: October 25, 2017
Time: 11:00 am to 12:00 pm
Venue: Conference room, 1st Floor, CSE Kanwal Rekhi Bldg

Proving "good" lower bounds on the size of the arithmetic formulas computing an explicit polynomial is one of the central themes in the study of algebraic complexity. But the best known lower bound on the size of an arithmetic formula computing an explicit polynomial is quadratic in the number of variables, due to Kalorkoti [SIAM J. Comp. 1985]. There hasn’t been any progress on that front in the last 32 years.

But in a surprising result, Raz [J. ACM 2013] showed that for any n and d where d is sub-logarithmic in n, constructing explicit tensors of high enough rank would imply superpolynomial lower bounds for general arithmetic formulas. We first give a new and arguably simpler proof of this connection, by showing a structured depth reduction to depth four for arithmetic formulas. This is a simpler proof of the depth reduction results of Agrawal and Vinay [FOCS 2008], Koiran [TCS 2012], and Tavenas [MFCS 2013]. We also extend the result of Raz to homogeneous formulas; in this setting, we show that such a connection holds for any d that is sub-polynomial in n. As a corollary of our extension, we show that for any n and d where d is O(log n) (as opposed to sub-logarithmic in n, as in Raz's result), constructing explicit tensors of high enough rank would imply superpolynomial lower bounds for general arithmetic formulas.

Joint work with Mrinal Kumar, Ramprasad Saptharishi and V. Vinay. Available at https://eccc.weizmann.ac.il/report/2016/096/.

• An approach to Distributed Synthesis [Oct 18 2017]

Date: October 18, 2017
Time: 11:00 am to 12:00 pm
Venue: Conference room, 1st Floor, CSE Kanwal Rekhi Bldg

The Distributed Synthesis Problem (DSP) refers to the question of algorithmically synthesizing distributed systems from specifications. We consider some specification classes known to have a decidable DSP and show that these decidability results can be explained alternatively, in terms of the "bounded-model property" of a particular class of structures. Using this approach, we are able to show that DSP is decidable on a larger class of specifications than previously known.

• A finitary analogue of the downward Löwenheim-Skolem property [Sep 20 2017]

Date: September 20, 2017
Time: 11:00 am to 12:00 pm
Venue: Room 105, New CSE Building

We present a model-theoretic property of finite structures that can be seen as a finitary analogue of the well-studied downward Löwenheim-Skolem property from classical model theory. We call this property the *equivalent bounded substructure property*, denoted EBSP. Intuitively, EBSP states that a large finite structure contains a small "logically similar" substructure, where logical similarity means indistinguishability with respect to sentences of FO/MSO having a given quantifier nesting depth. It turns out that this simply stated property is enjoyed by a variety of classes of interest in computer science: examples include regular languages of words, trees (unordered, ordered, ranked or partially ranked) and nested words, and various classes of graphs, such as cographs, graph classes of bounded tree-depth, those of bounded shrub-depth and m-partite cographs. Further, EBSP remains preserved in the classes generated from the above using various well-studied operations, such as complementation, transpose, the line-graph operation, disjoint union, join, and various products including the Cartesian and tensor products. Furthermore, by considering relations other than "substructure" in the EBSP definition, many more computationally interesting classes turn out to satisfy (the corresponding variant of) EBSP: these include graph classes of bounded tree-width, those of bounded clique-width, and those of bounded rank-width.

All of the aforementioned classes admit natural tree representations for their structures. We use this fact to show that the small and logically similar substructure of a large structure in any of these classes, can be computed in time linear in the size of the tree representation of the structure, giving linear time fixed parameter tractable (f.p.t.) algorithms for checking FO/MSO definable properties of the large structure (algorithmic meta-theorems). We conclude by presenting a strengthening of EBSP, that asserts "logical self-similarity at all scales" for a suitable notion of scale. We call this the *logical fractal* property and show that most of the classes mentioned above are indeed, logical fractals.

• Approximating Geometric Knapsack via L-packings [Aug 23 2017]

Date: August 23, 2017
Time: 11:30 am to 12:30 pm
Venue: Conference room, KReSIT, first floor

We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem we are given a set of axis-aligned rectangular items, each one with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum profit subset of items inside the knapsack without rotating items. This is a very well-studied optimization problem and finds applications in scheduling, memory allocation, advertisement placement etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is $$2+\epsilon$$ [Jansen and Zhang, SODA 2004].

After more than a decade, in this paper we break the $$2$$-approximation barrier, achieving a polynomial-time $$17/9+\epsilon$$ approximation (a factor below $$1.89$$), which improves to $$558/325+\epsilon$$ (approximately $$1.72$$) in the cardinality case.

We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by 90 degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is $$2+\epsilon$$ [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time $$(3/2+\epsilon)$$-approximation for 2DKR, which improves to $$4/3+\epsilon$$ in the cardinality case.

This is a joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. This result will appear in FOCS 2017.

• General Strong-Polarization [Aug 11 2017]

Date: August 11, 2017
Time: 11:30 am to 12:30 pm
Venue: 105, New CC, first floor

A martingale is a sequence of random variables in which the expected value of the next variable, conditioned on the past, equals the current one. A $$[0,1]$$-bounded martingale is said to polarize if it converges in the limit to either $$0$$ or $$1$$ with probability $$1$$. It is said to polarize strongly if in $$t$$ steps it is sub-exponentially close to its limit with all but exponentially small probability. In 2008, Arikan built a powerful class of error-correcting codes called Polar codes. His theory associates a martingale with every invertible square matrix over a field (and a channel), and he showed that polarization of the martingale leads to a construction of codes that converge to Shannon capacity. In 2013, Guruswami and Xia, and independently Hassani et al., showed that strong polarization of the Arikan martingale leads to codes that converge to Shannon capacity at finite block lengths, specifically at lengths that are inverse polynomial in the gap to capacity, thereby resolving a major mathematical challenge associated with the attainment of Shannon capacity.
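A quick simulation illustrates both the martingale property and polarization; this is our own sketch for the erasure-channel case, where the one-step evolution has a simple closed form:

```python
import random

def polarize(z0=0.5, steps=40, rng=random):
    """One trajectory of an Arikan-style martingale for the 2x2
    Hadamard kernel over an erasure channel: the channel parameter
    evolves as Z -> Z^2 or Z -> 2Z - Z^2, each with probability 1/2.
    E[Z] is preserved (it is a [0,1]-bounded martingale), yet almost
    every trajectory is driven to 0 or 1."""
    z = z0
    for _ in range(steps):
        z = z * z if rng.random() < 0.5 else 2 * z - z * z
    return z

random.seed(1)
samples = [polarize() for _ in range(2000)]
mean = sum(samples) / len(samples)                          # stays near z0
extreme = sum(z < 1e-4 or z > 1 - 1e-4 for z in samples)    # most samples
```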

We show that a simple necessary condition for an invertible matrix to polarize over any non-trivial channel is also sufficient for strong polarization over all symmetric channels over all prime fields. Previously, the only matrix known to polarize strongly was the $$2\times 2$$ Hadamard matrix. In addition to its generality, our result also leads to arguably simpler proofs. The essence of our proof is a 'local' definition of polarization that only restricts the evolution of the martingale in a single step, and a general theorem showing that local polarization suffices for strong polarization.

In this talk I will introduce polarization and polar codes and, time permitting, present a full proof of our main theorem. No prior background on polar codes will be assumed.

Based on joint work with Jaroslaw Blasiok, Venkatesan Guruswami, Preetum Nakkiran and Atri Rudra.

• Derandomizing Isolation Lemma: A geometric approach [Jul 31 2017]

Date: July 31, 2017.
Time: 2pm to 3pm
Venue: Conference Room, Kanwal Rekhi Building, first floor

We present a geometric approach towards derandomizing the Isolation lemma for a given family of subsets, i.e., deterministically constructing a weight assignment which ensures a unique minimum weight set in the family. The idea is to work with a polytope corresponding to the family of sets. In this talk, we present the approach in terms of general polytopes and describe a sufficient condition on the polytope for this approach to work. The approach gives a quasi-polynomially bounded weight assignment. Finally, we show that two specific families - perfect matchings in bipartite graphs and common base sets of two matroids - satisfy the required condition and thus, we get an isolating weight assignment for these cases. This also puts the two problems in quasi-NC.

Based on joint works with Stephen Fenner and Thomas Thierauf.
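The randomized guarantee being derandomized here is the classical Isolation Lemma of Mulmuley, Vazirani and Vazirani: for a family of subsets of an n-element universe, weights chosen uniformly from {1, ..., 2n} yield a unique minimum-weight set with probability at least 1/2. A quick empirical check on a toy family (our own illustration):

```python
import random
from itertools import combinations

def isolates(weights, family):
    """True iff `weights` induces a unique minimum-weight set in `family`."""
    totals = sorted(sum(weights[e] for e in S) for S in family)
    return len(totals) == 1 or totals[0] < totals[1]

n = 8
universe = range(n)
family = list(combinations(universe, 3))    # all 3-element subsets

random.seed(0)
trials = 500
hits = sum(isolates({e: random.randint(1, 2 * n) for e in universe}, family)
           for _ in range(trials))
# The lemma guarantees a success rate of at least 1/2; empirically it
# is typically higher, since the n/(2n) failure bound is loose.
```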