# Theoretical Computer Science @ IIT Bombay

Apart from the seminar talks, we also have a reading group that periodically meets. Please visit the Spring 2020 reading group page for more details.

## Theory Seminar

The talks are announced in the theory-seminar mailing list.

- Approximating Profile Maximum Likelihood Efficiently: New Bounds on the Bethe Permanent
Date: Jan 08, 2020

Time: 2:00 pm - 3:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: Symmetric properties of distributions arise in multiple settings. For each of these, separate estimators and analysis techniques have been developed. Recently, Orlitsky et al. showed that a single estimator that maximizes profile maximum likelihood (PML) is sample competitive for all symmetric properties. Further, they showed that even a 2^{n^{1-delta}}-approximate maximizer of the PML objective can serve as such a universal plug-in estimator (here n is the size of the sample). Unfortunately, no polynomial-time computable PML estimator with such an approximation guarantee was known. We provide the first such estimator and show how to compute it in time nearly linear in n. The PML objective is related to the permanent of a certain Vandermonde matrix. A surprising connection between the convex relaxation we introduce and the Bethe free energy approximation originating in statistical physics leads to new bounds on the Bethe permanent of non-negative matrices. This is joint work with Moses Charikar and Aaron Sidford.
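For intuition, the profile of a sample is the multiset of multiplicities of the observed symbols, with the symbol identities discarded; a symmetric property depends on the sample only through this profile. A minimal sketch (the helper name `profile` is our own):

```python
from collections import Counter

def profile(sample):
    """Profile of a sample: the sorted multiset of symbol multiplicities,
    with symbol identities discarded."""
    return sorted(Counter(sample).values())

# Two samples over different alphabets with the same profile: any
# symmetric property (entropy, support size, ...) treats them alike.
print(profile("abracadabra"))  # [1, 1, 2, 2, 5]
print(profile("xxxxxyyzzwv"))  # [1, 1, 2, 2, 5]
```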

- Junta Correlation is Testable
Date: Jan 08, 2020

Time: 11:00 am - 12:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: A Boolean function f on the n-dimensional hypercube is said to be a k-junta if it depends only on some k coordinates of the input. These functions have been widely studied in the context of PCPs, learning theory and property testing. In particular, a flagship result in property testing is that k-juntas can be tested with \tilde{O}(k) queries -- i.e., there is an algorithm which, given black-box access to f, makes only \tilde{O}(k) queries and decides between the following two cases: 1. f is a k-junta. 2. f is at least 0.01-far from every k-junta g in Hamming distance. Surprisingly, the query complexity is completely independent of the ambient dimension n. In this work, we achieve the same qualitative guarantee for the noise-tolerant version of this problem. In particular, we give a 2^k-query algorithm to distinguish between the following two cases: 1. f is 0.48-close to some k-junta. 2. f is at least 0.49-far from every k-junta. The algorithm and its proof of correctness are simple and modular, employing some classical tools like “random restrictions” along with elementary properties of the noise operator on the cube. Joint work with Elchanan Mossel and Joe Neeman.
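For very small n, the junta condition itself can be checked by brute force over all coordinate subsets, which makes the definition concrete (the helper `is_k_junta` and the exhaustive search are our own illustration; the actual tester is query-efficient and works very differently):

```python
from itertools import combinations, product

def is_k_junta(f, n, k):
    """Brute-force check: does f on {0,1}^n depend on at most k coordinates?"""
    for S in combinations(range(n), k):
        table = {}
        # f is a junta on S iff its value is determined by the bits in S.
        if all(table.setdefault(tuple(x[i] for i in S), f(x)) == f(x)
               for x in product((0, 1), repeat=n)):
            return True
    return False

f = lambda x: x[0] ^ x[2]   # depends on coordinates 0 and 2 only
print(is_k_junta(f, 4, 2))  # True: a 2-junta on 4 variables
print(is_k_junta(f, 4, 1))  # False: no single coordinate determines f
```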

- Overview of Quantum Algorithms and Complexity
Date: Jan 02, 2020

Time: 2:15 pm - 3:15 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: In this informal talk, I'll give a brief overview of quantum computing and its intersection with theoretical computer science. I'm happy to talk in more depth about whatever the audience is interested in. Please come with your questions about quantum computing (about the future of quantum computing, its impact on society, recent breakthroughs, research themes, the job market, or whatever you want to know more about).

- Can we learn algorithms?
Date: Dec 30, 2019

Time: 10:30 am - 11:30 am

Venue: CC109, 1st Floor, New CSE Building

Abstract: Deep Learning has been applied successfully to many basic human tasks such as object recognition and speech recognition, and increasingly to the more complex task of language understanding. In addition, deep learning has been extremely successful in the context of planning tasks in constrained environments (e.g., game playing). We ask: Can deep learning be applied to design algorithms? We formulate this question precisely, and show how to apply deep reinforcement learning to design, from scratch, online algorithms for a number of classic optimization problems such as the AdWords problem (choosing which advertisers to include in a keyword auction), the online knapsack problem, and optimal stopping problems. Our results indicate that the models have learned behaviours that are consistent with the traditional optimal algorithms for these problems. Based on joint work with William Kong, Chris Liaw, and D. Sivakumar (ICLR 2019), and with D. Sivakumar, Di Wang, and Goran Zuzic (ongoing work).
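As a point of reference for the optimal stopping problems mentioned above, the classical baseline is the 1/e stopping rule for the secretary problem; a quick simulation (our own illustrative sketch, not the paper's learned model) recovers its familiar success rate of roughly 37%:

```python
import math
import random

def secretary(values):
    """Classic 1/e rule: observe the first n/e candidates, then accept the
    first candidate better than all of them (or the last one, if forced)."""
    cutoff = int(len(values) / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

random.seed(0)
trials, wins = 10000, 0
for _ in range(trials):
    vals = random.sample(range(10**6), 50)
    wins += secretary(vals) == max(vals)
print(wins / trials)  # close to 1/e ~ 0.368
```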

- Non-malleable Encodings
Date: Oct 24, 2019

Time: 11:00 am - 12:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: Non-malleable Encodings provide the following protection against tampering: a decoded message is either the same or completely independent of the underlying message. They have many applications to cryptography with connections to pseudo-random objects such as extractors, expanders and so on. In this talk, I will survey some of our recent results in this area.

- Two new results about quantum exact learning
Date: Sep 26, 2019

Time: 2:00 pm - 3:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: We present two new results about exact learning by quantum computers. First, we show how to exactly learn a $k$-Fourier-sparse $n$-bit Boolean function from $O(k^{1.5}(\log k)^2)$ uniform quantum examples for that function. This improves over the bound of $\widetilde{\Theta}(kn)$ uniformly random \emph{classical} examples (Haviv and Regev, CCC'15). Our main tool is an improvement of Chang's lemma for the special case of sparse functions. Second, we show that if a concept class $\Cc$ can be exactly learned using $Q$ quantum membership queries, then it can also be learned using $O\left(\frac{Q^2}{\log Q}\log|\Cc|\right)$ \emph{classical} membership queries. This improves the previous-best simulation result (Servedio and Gortler, SICOMP'04) by a $\log Q$-factor.
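To see what k-Fourier-sparsity means, one can compute the Fourier spectrum of a small Boolean function directly from the definition, viewing f as the ±1-valued function (-1)^f; parity, for instance, has a single nonzero coefficient. A brute-force sketch (the helper names are our own):

```python
from itertools import product

def fourier_spectrum(f, n):
    """All Fourier coefficients of f:{0,1}^n -> {0,1}, viewed as the
    +-1-valued function (-1)^f, over characters chi_S(x) = (-1)^{x . S}."""
    coeffs = {}
    for S in product((0, 1), repeat=n):
        total = sum((-1) ** (f(x) + sum(s * xi for s, xi in zip(S, x)))
                    for x in product((0, 1), repeat=n))
        coeffs[S] = total / 2 ** n
    return coeffs

parity = lambda x: sum(x) % 2
spec = fourier_spectrum(parity, 3)
# Parity is 1-Fourier-sparse: a single nonzero coefficient, on S = (1,1,1).
print({S: c for S, c in spec.items() if abs(c) > 1e-9})  # {(1, 1, 1): 1.0}
```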

- Low variate polynomials: hitting-sets and bootstrapping
Date: Aug 12, 2019

Time: 10:35 am - 11:30 am

Venue: CC109, 1st Floor, New CSE Building

Abstract: One central concept in complexity theory is derandomization. The Polynomial Identity Testing (PIT) problem is a good test case for that purpose: it asks to decide whether a given arithmetic circuit computes the zero polynomial. It has a polynomial-time randomized algorithm; however, designing a *deterministic* polynomial-time algorithm for PIT is a long-standing open question in complexity theory. In our work, we study the PIT problem in the blackbox setting, where we are not allowed to see the internal structure of the circuit and only evaluation at points is allowed. Designing a blackbox PIT algorithm is equivalent to finding a set of points, called a hitting set, such that every non-zero circuit evaluates to a non-zero value at some point of the set. In joint work with Manindra Agrawal and Nitin Saxena, we study the phenomenon of bootstrapping in PIT: converting a hitting set for size-s degree-s n(s)-variate circuits into a hitting set for size-s degree-s s-variate circuits, where n(s) << s. We study the bootstrapping phenomenon in the following cases:

Perfect Bootstrapping: when n(s) = log^{\circ c} s, for example log^{\circ 2} s = log log s.

Constant Bootstrapping: when n(s)=constant.

Shallow Bootstrapping: when n(s)=constant with depth-4 circuits.

Our previous work inspires us to study PIT for circuits with few variables, e.g., logarithmic in the size s. In joint work with Michael Forbes and Nitin Saxena, we give a poly(s)-time blackbox identity test for n = O(log s)-variate size-s circuits that have a poly(s)-dimensional partial-derivative space; e.g., depth-3 diagonal circuits, which compute sums of powers of linear polynomials. The former model is well studied, but no poly(s·2^n)-time identity test was known before our work.
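The polynomial-time randomized algorithm mentioned above is random evaluation via the Schwartz-Zippel lemma: a nonzero n-variate polynomial of total degree d vanishes at a uniformly random point of Z_p^n with probability at most d/p. A hedged sketch with the circuit modelled as a Python callable (the function names and the choice of prime are our own):

```python
import random

def randomized_pit(blackbox, nvars, degree, trials=20, p=10**9 + 7):
    """Schwartz-Zippel blackbox PIT: repeatedly evaluate the circuit at
    random points; a nonzero polynomial of low degree rarely vanishes."""
    for _ in range(trials):
        point = [random.randrange(p) for _ in range(nvars)]
        if blackbox(point) % p != 0:
            return "nonzero"
    return "probably zero"

random.seed(0)
# (x+y)^2 - x^2 - 2xy - y^2 is identically zero; x*y - 1 is not.
zero_poly = lambda q: (q[0] + q[1]) ** 2 - q[0] ** 2 - 2 * q[0] * q[1] - q[1] ** 2
print(randomized_pit(zero_poly, 2, 2))                  # probably zero
print(randomized_pit(lambda q: q[0] * q[1] - 1, 2, 2))  # nonzero (w.h.p.)
```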

- Derandomization from Algebraic Hardness: Treading the borders
Date: July 01, 2019

Time: 2:15 pm - 3:15 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: A common phenomenon in complexity theory is that "hardness" and "derandomization" are closely connected: often, the existence of one implies the other. In algebraic complexity, this phenomenon manifests itself via the construction of "pseudorandom/hitting set generators" from a "hard function". [Kabanets and Impagliazzo] gave such a construction using "combinatorial designs", and more recent results have shown that this construction can also be `bootstrapped'. In these results, the "harder" the function is, the better the derandomization is. However, no matter how hard the function is, these results do not yield a complete derandomization. In this work, we primarily address the following question: Q: Is there any algebraic hardness assumption that yields a generator achieving complete derandomization? We answer this question in the affirmative by constructing a suitable generator from a suitably hard polynomial. With some additional help from the "border", we also obtain applications that improve the recent `bootstrapping' results. This is joint work with Zeyu Guo, Mrinal Kumar and Noam Solomon.

- Unique Games with Value Half and its Applications
Date: Apr 01, 2019

Time: 10:35 am - 11:35 am

Venue: CC109, 1st Floor, New CSE Building

Abstract: The PCP theorem characterizes the class NP and gives hardness of approximation for many optimization problems. Despite this, several tight hardness of approximation results remain elusive via the PCP theorem. In 2002, Khot proposed the Unique Games Conjecture. Since the formulation of the conjecture, it has found interesting connections to tight hardness of approximation results for many optimization problems. In this talk, I will discuss recent developments on the Unique Games Conjecture. The 2-to-2 Games Theorem implies that it is NP-hard to distinguish between Unique Games instances with an assignment satisfying at least a 1/2 fraction of the constraints vs. no assignment satisfying more than an \eps fraction of the constraints, for every constant \eps > 0. We show that the reduction can be transformed in a non-trivial way to give a stronger guarantee on the Unique Games instance, and use this guarantee to convert the known Unique Games-hardness results to NP-hardness for several optimization problems.
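To make the objects concrete: a Unique Games instance places a permutation constraint on each edge, and its value is the best fraction of constraints satisfiable by a labeling. A toy example (helper names are our own):

```python
def satisfied_fraction(constraints, labeling):
    """Unique Games instance: each constraint (u, v, pi) is a permutation
    pi of the label set on edge (u, v), satisfied when
    pi[labeling[u]] == labeling[v]."""
    good = sum(1 for (u, v, pi) in constraints
               if pi[labeling[u]] == labeling[v])
    return good / len(constraints)

# Three labels {0, 1, 2} and a triangle of constraints; each pi is a tuple.
constraints = [
    (0, 1, (1, 2, 0)),  # wants label(1) = label(0) + 1 (mod 3)
    (1, 2, (1, 2, 0)),  # wants label(2) = label(1) + 1 (mod 3)
    (2, 0, (0, 1, 2)),  # wants label(0) = label(2): no labeling meets all 3
]
print(satisfied_fraction(constraints, {0: 0, 1: 1, 2: 2}))  # 0.666... (2 of 3)
```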

- Conditional Disclosure of Secrets: Lower Bounds and Perspectives
Date: March 22, 2019

Time: 11:00 am - 12:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: Conditional Disclosure of Secrets (CDS), introduced by Gertner et al. in 2000, is a minimal model of secure multi-party computation. Alice and Bob, who hold n-bit inputs x and y respectively, wish to release a common secret z to Carol (who knows both x and y) if and only if the input (x, y) satisfies some predefined predicate f. Alice and Bob are allowed to send a single message to Carol, which may depend on their inputs and some shared randomness, and the goal is to minimize the communication complexity while providing information-theoretic security.
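As a toy illustration of the model, here is the folklore one-message CDS protocol for the equality predicate over a prime field: with shared randomness (r, w), Alice sends r·x + w and Bob sends r·y + w + z, so Carol's difference equals z exactly when x = y, and is masked by the uniform value r·(y − x) otherwise. (Variable names and the field are our choices; this is a sketch, not one of the constructions from the talk.)

```python
import random

P = 2**61 - 1  # a Mersenne prime; inputs and the secret live in Z_P

def cds_equality(x, y, z, rng):
    """Folklore one-message CDS for the predicate f(x, y) = [x == y]:
    Carol's difference is r*(y - x) + z, which equals z when x == y and
    is uniformly random (hiding z) when x != y."""
    r = rng.randrange(P)
    w = rng.randrange(P)
    msg_alice = (r * x + w) % P
    msg_bob = (r * y + w + z) % P
    return (msg_bob - msg_alice) % P  # Carol's reconstruction

rng = random.Random(42)
print(cds_equality(7, 7, 123, rng))  # equal inputs: Carol recovers 123
print(cds_equality(7, 8, 123, rng))  # unequal inputs: a uniform mask
```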

CDS protocols have applications to numerous cryptographic tasks, such as Private Information Retrieval and constructing Attribute-Based Encryption schemes, and are behind recent breakthrough constructions of secret sharing schemes with sub-exponential-sized shares for general access structures.

In this talk, I will present what we have learnt so far about the complexity of CDS protocols, focusing on lower bounds and connections to communication complexity and the study of statistical zero-knowledge proofs.

Based on joint work with Benny Applebaum, Barak Arkis, and Pavel Raykov.

- Algebraic independence testing, approximate polynomials satisfiability, and the GCT chasm
Date: March 06, 2019

Time: 4:00 pm - 5:00 pm

Venue: CC109, 1st Floor, New CSE Building

Abstract: Algebraic independence testing of polynomials over a field F is a basic problem with several applications in theoretical computer science. When char(F) = 0, this problem has an efficient randomized algorithm thanks to the classical Jacobian criterion. Over finite fields, however, no general subexponential-time algorithm is known due to the failure of the Jacobian criterion in positive characteristic. In this talk, I will show that algebraic independence testing over finite fields is in the intersection of AM and coAM. In particular, it is unlikely to be NP-hard, and it joins the league of problems of "intermediate" complexity, e.g., graph isomorphism and integer factoring. Our proof is based on a novel "point counting" approach that avoids the use of the Jacobian criterion.
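The Jacobian criterion mentioned above yields a simple randomized test in characteristic 0: f_1, ..., f_m are algebraically independent iff their Jacobian matrix (df_i/dx_j) has rank m, and a random evaluation point generically attains the maximal rank. A sketch with hard-coded partial derivatives (all helper names are our own):

```python
import random
from fractions import Fraction

def rank(M):
    """Matrix rank via Gaussian elimination over the rationals."""
    M = [row[:] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                factor = M[i][c] / M[r][c]
                M[i] = [a - factor * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

def jacobian_rank_at_random_point(partials, nvars, rng, trials=5):
    """Randomized Jacobian criterion (char 0): partials[i][j] is a callable
    evaluating df_i/dx_j; the max rank over random points certifies
    algebraic independence when it equals the number of polynomials."""
    best = 0
    for _ in range(trials):
        pt = [Fraction(rng.randrange(1, 1000)) for _ in range(nvars)]
        best = max(best, rank([[Fraction(p(pt)) for p in row]
                               for row in partials]))
    return best

# f1 = x + y and f2 = (x + y)^2 are dependent: the Jacobian rows are parallel.
dep = [[lambda p: 1, lambda p: 1],
       [lambda p: 2 * (p[0] + p[1]), lambda p: 2 * (p[0] + p[1])]]
# g1 = x and g2 = x*y are independent.
ind = [[lambda p: 1, lambda p: 0],
       [lambda p: p[1], lambda p: p[0]]]
rng = random.Random(0)
print(jacobian_rank_at_random_point(dep, 2, rng))  # 1  (< 2: dependent)
print(jacobian_rank_at_random_point(ind, 2, rng))  # 2  (independent)
```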

Next, I will talk about another problem called approximate polynomials satisfiability, which asks if a given system of polynomial equations has an approximate solution over an algebraically closed field. I will show that this problem can be solved in PSPACE using properties of projective spaces. As an unexpected application to approximative complexity theory, we prove that hitting-sets for the class "border VP" can be designed in PSPACE. This solves an open problem posed in (Mulmuley, FOCS'12, J.AMS 2017), greatly mitigating the GCT Chasm (exponentially in terms of space complexity).

This is joint work with Nitin Saxena and Amit Sinhababu.

- Two Unsolved Problems: Birkhoff-von Neumann Graphs and PM-compact Graphs
Date: January 30, 2019

Time: 2:30 pm - 3:30 pm

Venue: SIC 201, 2nd Floor, KReSIT Building

Abstract: A geometric object of great interest in combinatorial optimization is the perfect matching polytope of a graph \(G\). In any investigation concerning the perfect matching polytope, one may assume that \(G\) is matching covered --- that is, it is a connected graph (of order at least two) and each edge lies in some perfect matching.
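For small graphs, the matching covered condition can be checked directly by enumerating perfect matchings; a brute-force sketch (exponential time; helper names are our own):

```python
def perfect_matchings(vertices, edges):
    """Enumerate all perfect matchings of a small graph by brute force."""
    if not vertices:
        yield frozenset()
        return
    v = min(vertices)
    for (a, b) in edges:
        if v in (a, b):
            rest = vertices - {a, b}
            sub = [e for e in edges if e[0] in rest and e[1] in rest]
            for m in perfect_matchings(rest, sub):
                yield m | {(a, b)}

def connected(vertices, edges):
    seen, stack = set(), [min(vertices)]
    while stack:
        v = stack.pop()
        if v not in seen:
            seen.add(v)
            stack += [b if a == v else a for (a, b) in edges if v in (a, b)]
    return seen == vertices

def is_matching_covered(vertices, edges):
    """Connected, order >= 2, and every edge lies in some perfect matching."""
    if len(vertices) < 2 or not connected(vertices, edges):
        return False
    covered = set()
    for m in perfect_matchings(vertices, edges):
        covered |= m
    return covered == set(edges)

C4 = ({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3), (0, 3)])
print(is_matching_covered(*C4))  # True: every edge of the 4-cycle is in a PM
P4 = ({0, 1, 2, 3}, [(0, 1), (1, 2), (2, 3)])
print(is_matching_covered(*P4))  # False: edge (1, 2) lies in no PM
```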

A graph \(G\) is Birkhoff-von Neumann if its perfect matching polytope is characterized solely by non-negativity and degree constraints. A result of Balas (1981) implies that \(G\) is Birkhoff-von Neumann if and only if \(G\) does not contain a pair of vertex-disjoint odd cycles \((C_1,C_2)\) such that \(G-V(C_1)-V(C_2)\) has a perfect matching. It follows immediately that the corresponding decision problem is in co-NP. However, it is not known to be in NP. The problem is in P if the input graph is planar --- due to a result of Carvalho, Lucchesi and Murty (2004). More recently, in a joint work with these three authors, we have shown that this problem is equivalent to the seemingly unrelated problem of deciding whether a given graph is \(\overline{C_6}\)-free.

The combinatorial diameter of a polytope is the diameter of its \(1\)-skeleton graph. A graph \(G\) is PM-compact if the combinatorial diameter of its perfect matching polytope equals one. A result of Chvátal (1975) implies that \(G\) is PM-compact if and only if \(G\) does not contain a pair of vertex-disjoint even cycles \((C_1,C_2)\) such that \(G-V(C_1)-V(C_2)\) has a perfect matching. Once again the corresponding decision problem is in co-NP, but it is not known to be in NP. The problem is in P if the input graph is bipartite or is near-bipartite --- due to a result of Wang, Lin, Carvalho, Lucchesi, Sanjith and Little (2013).

Recently, in a joint work with Marcelo H. de Carvalho, Xiumei Wang and Yixun Lin, we considered the "intersection" of the aforementioned problems. We give a complete characterization of matching covered graphs that are Birkhoff-von Neumann as well as PM-compact. (Thus the corresponding decision problem is in P.)

- Some Closure Results for Polynomial Factorization and Applications
Date: January 9, 2019

Time: 11:00 am - 12:00 noon

Venue: CC 109, New CSE Building

Abstract: In a sequence of seminal results in the 80's, Kaltofen showed that if an n-variate polynomial of degree poly(n) can be computed by an arithmetic circuit of size poly(n), then each of its factors can also be computed by an arithmetic circuit of size poly(n). In other words, the complexity class VP (the algebraic analog of P) of polynomials is closed under taking factors.

A fundamental question in this line of research, which has largely remained open, is to understand whether other natural classes of multivariate polynomials, for instance, arithmetic formulas, algebraic branching programs, constant-depth arithmetic circuits or the complexity class VNP (the algebraic analog of NP) of polynomials, are closed under taking factors. In addition to being fundamental questions on their own, such 'closure results' for polynomial factorization play a crucial role in the understanding of hardness-randomness tradeoffs for algebraic computation.

I will talk about the following two results, whose study was motivated by these questions.

- The class VNP is closed under taking factors. This proves a conjecture of Bürgisser.
- All factors of degree at most poly(log n) of polynomials with constant depth circuits of size poly(n) have constant (a slightly larger constant) depth arithmetic circuits of size poly(n). This partially answers a question of Shpilka and Yehudayoff and has applications to hardness-randomness tradeoffs for constant depth arithmetic circuits.

Based on joint work with Chi-Ning Chou and Noam Solomon.

- Enabling Secure Computation with Minimal Interaction
Date: November 26, 2018

Time: 11:00 am - 12:00 noon

Venue: CC 109, New CSE Building

Abstract: Cryptography, originally the art of facilitating secret communication, has now evolved to enable a variety of secure tasks. Some fundamental challenges that modern cryptography addresses are: Can we prevent adversaries from tampering with encrypted communication? Can we verify that computation is performed correctly while preserving the privacy of data on which computation occurs? Can we enable mutually distrusting participants to jointly compute on distributed private data?

In this talk, I will discuss new research that builds minimally-interactive secure protocols to accomplish these tasks, based on widely believed cryptographic hardness assumptions. I will also present new techniques that overcome known barriers and enable new kinds of secure protocols. These techniques have made it possible to achieve non-malleability, strong variants of zero-knowledge, and other security properties while requiring only a single message from each participant.

This is based on joint works with Abhishek Jain, Yael Kalai, Ron Rothblum and Amit Sahai.

- Algorithms from Physics
Date: October 25, 2018

Time: 11:30 am - 12:30 pm

Venue: CC 109, New CSE Building

Abstract: In understanding physical systems over hundreds of years, physicists have developed a wealth of dynamics and viewpoints. Some of these methods, when abstracted appropriately, could lead to new algorithmic techniques with applications to machine learning and TCS. I will present a couple of recent examples from my own research on such interactions between physics and algorithms: a Hamiltonian-dynamics-inspired algorithm for sampling from continuous distributions, and a Boltzmann-equation-based algorithm for estimating the partition function of discrete distributions.
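The Hamiltonian-dynamics approach alluded to above is in the spirit of Hamiltonian Monte Carlo; a minimal leapfrog-based sketch targeting a standard Gaussian (our own illustration with assumed step-size parameters, not the algorithm from the talk):

```python
import math
import random

def hmc_step(x, rng, eps=0.2, steps=10):
    """One Hamiltonian Monte Carlo step targeting the standard Gaussian,
    i.e. potential U(x) = x^2/2 with gradient U'(x) = x: resample the
    momentum, run a leapfrog integrator, then Metropolis accept/reject."""
    p = rng.gauss(0.0, 1.0)
    x_new, p_new = x, p
    p_new -= 0.5 * eps * x_new           # leapfrog: initial half step
    for i in range(steps):
        x_new += eps * p_new             # full position step
        if i < steps - 1:
            p_new -= eps * x_new         # full momentum step
    p_new -= 0.5 * eps * x_new           # final half step
    h_old = 0.5 * (x * x + p * p)        # Hamiltonian = potential + kinetic
    h_new = 0.5 * (x_new * x_new + p_new * p_new)
    return x_new if rng.random() < math.exp(min(0.0, h_old - h_new)) else x

rng = random.Random(0)
x, samples = 3.0, []
for _ in range(5000):
    x = hmc_step(x, rng)
    samples.append(x)
mean = sum(samples) / len(samples)
var = sum(s * s for s in samples) / len(samples)
print(round(mean, 2), round(var, 2))  # should be near 0 and 1
```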

- Fairness and Diversity in Online Social Systems
Date: October 25, 2018

Time: 10:30 am - 11:30 am

Venue: CC 109, New CSE Building

Abstract: Social systems are now fueled by algorithms that facilitate and control connections and information. Simultaneously, computational systems are now fueled by people - their interactions, data, and behavior. Consequently, there is a pressing need to design new algorithms that are socially responsible in how they learn, and socially optimal in the manner in which they use information. Recently, we have made initial progress in addressing such problems at this interface of social and computational systems. In this talk, we will first understand the emergence of bias in data and algorithmic decision making, and present first steps towards developing a systematic framework to control biases in classical problems such as data summarization and personalization. This work leads to new algorithms that have the ability to alleviate bias and increase diversity while often simultaneously maintaining their theoretical or empirical performance with respect to the original metrics.

- A Short List of Equalities Induces Large Sign Rank
Date: September 18, 2018

Time: 2:00 pm - 4:00 pm

Venue: CC 105, New CSE Building

Abstract: We exhibit a natural function \(F_n\) on \(n\) variables, computable by a linear-sized decision list of "Equalities", but whose sign rank is \(\exp(\Omega(n^{1/4}))\). The simplicity of \(F_n\) yields the following two consequences.

1) \(F_n\) can be computed by linear sized depth-2 threshold formulas when the weights of the threshold gates are unrestricted (THR o THR), but any THR o MAJ circuit computing it requires size \(\exp(\Omega(n^{1/4}))\). This provides the first exponential separation between the boolean circuit complexity classes THR o MAJ and THR o THR, answering a question of Amano and Maruoka [MFCS'05] and Hansen and Podolskii [CCC'10]. In contrast, Goldmann, Hastad and Razborov [Computational Complexity'92] had shown more than 25 years ago that functions efficiently computable by MAJ o THR circuits can also be efficiently computed by MAJ o MAJ circuits. Thus, while weights at the bottom layer do not matter when the top is light, we discover that they do when the top is heavy.

2) The function \(F_n\) lies in the communication complexity class P^{MA} under the natural partition of the inputs. Since \(F_n\) has large sign rank, this implies that P^{MA} is not a subset of UPP, strongly resolving a conjecture of Goos, Pitassi and Watson [ICALP'16].

In order to prove our main result, we view \(F_n\) as an XOR function, and develop a technique to lower bound the sign rank of such functions. This requires novel arguments that allow us to conclude that polynomials of low Fourier weight cannot approximate certain functions, even if they are allowed unrestricted degree.

Based on joint work with Arkadev Chattopadhyay (TIFR), to appear in FOCS 2018.

- Bootstrapping Identity Tests for Algebraic Circuits
Date: August 1, 2018

Time: 2:00 pm - 3:00 pm

Venue: CC 109, New CSE Building

Abstract: The classical Ore-DeMillo-Lipton-Schwartz-Zippel lemma [O22, DL78, Z79, S80] states that any nonzero polynomial of degree at most \(s\) over \(n\) variables will evaluate to a nonzero value at some point on a grid \(S^n\) with \(|S| > s\). This leads to a deterministic polynomial identity test (PIT) for all degree-\(s\) size-\(s\) algebraic circuits over \(n\) variables that runs in time \(poly(s) \cdot (s+1)^n\).

Agrawal, Ghosh and Saxena recently introduced the technique of *bootstrapping*, which can be used to asymptotically improve the running time of *blackbox* identity tests. They showed that a running time as bad as \((s+1)^{n^{0.5 - \delta}} H(n)\), for any \(\delta > 0\) and arbitrary \(H\), can be improved to a running time of \(s^{\exp \circ \exp(\log^{\ast} s)}\), which is only slightly super-polynomial. Building on their work, we show that the hypothesis can be weakened further to a running time of \((s+1)^{o(n)} H(n)\) (slightly better than the trivial \(s^{O(n)}\)) to obtain essentially the same conclusion.

This is joint work with Mrinal Kumar (Harvard, Cambridge, MA) and Ramprasad Saptharishi (TIFR, Mumbai).

- A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas
Date: July 4, 2018

Time: 12:05 pm - 1:00 pm

Venue: CC 109, New CSE Building

Abstract: Arithmetic formulas are directed trees whose leaves are labelled by variables and field elements, and whose internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs. It is easy to see that they are a natural model of syntactic (symbolic) computation of polynomials. The size of an arithmetic formula refers to the number of + and x operations needed to compute the polynomial.

It is an important question in the study of algebraic computational complexity to understand the amount of computational resources (here, the size of the formula) needed to compute a given polynomial.

In this talk, we will restrict ourselves to arithmetic formulas where the computation at every vertex is a multilinear polynomial, and in this scenario we show explicit separations between the expressive powers of small-depth multilinear formulas of all polynomial sizes. The multilinear restriction is a reasonable one, since most polynomials of interest are indeed multilinear: the Determinant, the Permanent, Iterated Matrix Multiplication, the Elementary Symmetric polynomials, etc.

Formally, we show that for any \(s = \mathrm{poly}(n)\) and \(\delta > 0\), there are explicit polynomial families that have multilinear formulas of size at most \(s\) but no ‘small’-depth multilinear formulas of size less than \(s^{0.5 - \delta}\).

Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using \(\epsilon\)-biased spaces.

(Joint work with Nutan Limaye and Srikanth Srinivasan)

- Lower Bound Techniques for QBF Proof Systems
Date: April 17, 2018

Time: 11:30 am - 12:30 pm

Venue: CC 109, New CSE Building

Abstract: How do we prove that a false QBF is indeed false? How big a proof is needed? The special case when all quantifiers are existential is the well-studied setting of propositional proof complexity. Expectedly, universal quantifiers change the game significantly. Several proof systems have been designed in the last couple of decades to handle QBFs. Lower bound paradigms from propositional proof complexity cannot always be extended - in most cases feasible interpolation and consequent transfer of circuit lower bounds works, but obtaining lower bounds on size by providing lower bounds on width fails dramatically. A new paradigm with no analogue in the propositional world has emerged in the form of strategy extraction, allowing for transfer of circuit lower bounds, as well as obtaining independent genuine QBF lower bounds based on a semantic cost measure.

This talk will provide a broad overview of some of these developments.

- Finding Euler Tours in One Pass in the W-Streaming Model with O(n log n) Memory
Date: March 13, 2018

Time: 11:30 am - 12:30 pm

Venue: CC 101, New CSE Building

Abstract: We study the problem of finding an Euler tour in an undirected graph G in the W-Streaming model with O(n polylog(n)) RAM, where n and m are the numbers of nodes and edges of G. Our main result is the first one-pass W-Streaming algorithm computing an Euler tour of G, in the form of an edge successor function, with only O(n log(n)) RAM, which is optimal for this setting (e.g., Sun and Woodruff (2015)). The previously best-known result in this model is implicitly given by Demetrescu et al. (2010) together with the parallel algorithm of Atallah and Vishkin (1984), using O(m/n) passes under the same RAM limitation. For graphs with ω(n) edges this is non-constant.

Our overall approach is to partition the edges into edge-disjoint cycles and to merge the cycles until a single Euler tour is achieved. Note that in the W-Streaming model such a merging is far from obvious, as the limited RAM allows the processing of only a constant number of cycles at once. This forces us to merge cycles that are partially no longer present in RAM. Furthermore, the successor of an edge cannot be changed after the edge has left RAM. So, we steadily have to output edges and their designated successors, without knowing the edges and cycles yet to come. We solve this problem with a special edge swapping technique, for which two carefully chosen edges per node are sufficient to merge tours without having all of their edges in RAM. Mathematically, this is controlled by structural results on the space of certain equivalence classes corresponding to cycles and the characterization of associated successor functions. For example, we give conditions under which the swapping of edge successors leads to a merging of equivalence classes. The mathematical methods of our analysis are new and might be of independent interest for other routing problems in streaming models.
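For contrast with the streaming setting, the classical offline solution is Hierholzer's algorithm, which freely revisits edges held in RAM and splices cycles as it goes; a sketch (helper names are our own):

```python
from collections import defaultdict

def euler_tour(edges):
    """Hierholzer's offline algorithm: follow unused edges, splicing the
    cycles found along the way into a single Euler tour."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used = [False] * len(edges)
    stack, tour = [edges[0][0]], []
    while stack:
        v = stack[-1]
        while adj[v] and used[adj[v][-1][1]]:
            adj[v].pop()                  # discard already-traversed edges
        if adj[v]:
            u, i = adj[v].pop()
            used[i] = True
            stack.append(u)               # extend the current trail
        else:
            tour.append(stack.pop())      # dead end: emit and backtrack
    return tour[::-1]

# Two triangles sharing vertex 0: all degrees are even, so a tour exists.
edges = [(0, 1), (1, 2), (2, 0), (0, 3), (3, 4), (4, 0)]
tour = euler_tour(edges)
print(tour)  # a closed walk traversing each edge exactly once
```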

(Joint work with Christian Glazik, Jan Schiemann.)

- Homomorphic Secret Sharing
Date: February 27, 2018

Time: 2:00 pm to 3:00 pm

Venue: CC 109, New CSE Building

Abstract: A homomorphic secret sharing (HSS) scheme is a secret sharing scheme that supports a compact local evaluation of functions of the shared secret. For the purpose of applications of HSS, it is useful to make the stronger requirement that the output can be reconstructed via addition. That is, if a secret \(s\) is split into shares \(s_1,\ldots,s_m,\) then \(f(s)\) can be written as \(F(s_1)+\ldots+F(s_m)\) for some efficiently computable \(F\), where addition is over a finite abelian group.
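The simplest instance of this definition is additive 2-out-of-2 sharing, which is already an HSS scheme for linear functions f(s) = c·s, since local scalar multiplication commutes with additive reconstruction. A toy sketch (the names and the modulus are our choices):

```python
import random

M = 2**32  # share arithmetic over the group Z_M (our choice)
c = 77     # the linear function f(s) = c*s to evaluate

def share(s, modulus, rng):
    """2-out-of-2 additive secret sharing: s = s1 + s2 (mod modulus)."""
    s1 = rng.randrange(modulus)
    return s1, (s - s1) % modulus

# Local evaluation F(si) = c*si: the outputs reconstruct by addition,
# exactly the f(s) = F(s1) + F(s2) requirement from the definition.
F = lambda si: (c * si) % M
s = 1234
s1, s2 = share(s, M, random.Random(1))
print((F(s1) + F(s2)) % M == (c * s) % M)  # True
```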

The talk will survey the state of the art on HSS and its applications, focusing mainly on: (1) computationally secure 2-out-of-2 HSS schemes for simple function classes using any one-way function, and (2) similar HSS schemes for branching programs from assumptions related to the hardness of discrete logarithms. These can provide alternatives to fully homomorphic encryption in the context of minimizing the communication complexity and round complexity of secure computation.

The talk is based mostly on joint works with Elette Boyle and Niv Gilboa.

- On Expressing Majority as a Majority of Majorities
Date: January 25, 2018

Time: 10:30 am to 11:30 am

Venue: CC 109, New CSE Building

Abstract: If \(k < n\), can one express the majority of \(n\) bits as the majority of at most \(k\) majorities, each of at most \(k\) bits? We prove that such an expression is possible only if \(k=\Omega(n^{4/5})\). This improves on a bound proved by Kulikov and Podolskii, who showed that \(k=\Omega(n^{0.7-\delta})\). Our proof is based on ideas originating in discrepancy theory, as well as a strong concentration bound for sums of independent Bernoulli random variables and a strong anticoncentration bound for the hypergeometric distribution.
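A quick sanity check of why the question is delicate: even the naive construction that partitions the n bits into disjoint blocks and takes the majority of the block majorities fails to compute majority (the theorem, of course, handles arbitrary, possibly overlapping, majorities of at most k bits):

```python
def maj(bits):
    """Majority of a list of 0/1 values."""
    return 1 if 2 * sum(bits) > len(bits) else 0

# Naive attempt for n = 9, k = 3: split into three disjoint blocks of 3
# and take the majority of the block majorities.
x = (1, 1, 1, 1, 0, 0, 1, 0, 0)                   # five ones: MAJ_9(x) = 1
block_maj = [maj(x[i:i + 3]) for i in (0, 3, 6)]  # block majorities 1, 0, 0
print(maj(x), maj(block_maj))  # 1 0 -- the composition gets x wrong
```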

- The Chasm at Depth Four, and Tensor Rank : Old results, new insights
Date: October 25, 2017

Time: 11:00 am to 12:00 pm

Venue: Conference room, 1st Floor, CSE Kanwal Rekhi Bldg

Abstract: Proving "good" lower bounds on the size of arithmetic formulas computing an explicit polynomial is one of the central themes in the study of algebraic complexity. But the best known lower bound on the size of an arithmetic formula computing an explicit polynomial is quadratic in the number of variables, due to Kalorkoti [SIAM J. Comp. 1985]. There hasn’t been any progress on that front in the last 32 years.

But in a surprising result, Raz [J. ACM 2013] showed that for any n and d, where d is sub-logarithmic in n, constructing explicit tensors of high enough rank would imply superpolynomial lower bounds for general arithmetic formulas. We first give a new and arguably simpler proof of this connection. We do this by showing a structured depth reduction to depth four for arithmetic formulas, which also yields a simpler proof of the depth reduction results of Agrawal and Vinay [FOCS 2008], Koiran [TCS 2012], and Tavenas [MFCS 2013]. We also extend the result of Raz to homogeneous formulas. In this setting, we show that such a connection holds for any d that is sub-polynomial in n. As a corollary of our extension, we show that for any n and d where d is O(log n) (as opposed to the almost-logarithmic in n, as in Raz's result), constructing explicit tensors of high enough rank would imply superpolynomial lower bounds for general arithmetic formulas.

Joint work with Mrinal Kumar, Ramprasad Saptharishi and V. Vinay. Available at https://eccc.weizmann.ac.il/report/2016/096/.

- An approach to Distributed Synthesis
Date: October 18, 2017

Time: 11:00 am to 12:00 pm

Venue: Conference room, 1st Floor, CSE Kanwal Rekhi Building

Abstract: The Distributed Synthesis Problem (DSP) refers to the question of algorithmically synthesizing distributed systems from specifications. We consider some specification classes known to have a decidable DSP and show that these decidability results can be explained alternatively, in terms of the "bounded-model property" of a particular class of structures. Using this approach, we are able to show that DSP is decidable on a larger class of specifications than previously known.

- A finitary analogue of the downward Lowenheim-Skolem property
[Sep 20 2017 +]
Date: September 20, 2017

Time: 11:00 am to 12:00 pm

Venue: Room 105, New CSE Building

Abstract: We present a model-theoretic property of finite structures that can be seen as a finitary analogue of the well-studied downward Lowenheim-Skolem property from classical model theory. We call this property the *equivalent bounded substructure property*, denoted EBSP. Intuitively, EBSP states that a large finite structure contains a small "logically similar" substructure, where logical similarity means indistinguishability with respect to sentences of FO/MSO having a given quantifier nesting depth. It turns out that this simply stated property is enjoyed by a variety of classes of interest in computer science: examples include regular languages of words, trees (unordered, ordered, ranked or partially ranked) and nested words, and various classes of graphs, such as cographs, graph classes of bounded tree-depth, those of bounded shrub-depth and m-partite cographs. Further, EBSP is preserved by the classes generated from the above using various well-studied operations, such as complementation, transpose, the line-graph operation, disjoint union, join, and various products including the Cartesian and tensor products. Furthermore, by considering relations other than "substructure" in the EBSP definition, many more computationally interesting classes turn out to satisfy (the corresponding variant of) EBSP: these include graph classes of bounded tree-width, those of bounded clique-width, and those of bounded rank-width.

All of the aforementioned classes admit natural tree representations for their structures. We use this fact to show that the small and logically similar substructure of a large structure in any of these classes can be computed in time linear in the size of the tree representation of the structure, giving linear-time fixed-parameter tractable (f.p.t.) algorithms for checking FO/MSO-definable properties of the large structure (algorithmic meta-theorems). We conclude by presenting a strengthening of EBSP that asserts "logical self-similarity at all scales" for a suitable notion of scale. We call this the *logical fractal* property and show that most of the classes mentioned above are indeed logical fractals.

- Approximating Geometric Knapsack via L-packings
[Aug 23 2017 +]
Date: August 23, 2017

Time: 11:30 am to 12:30 pm

Venue: Conference room, KReSIT, first floor

Abstract: We study the two-dimensional geometric knapsack problem (2DK), a geometric variant of the classical knapsack problem. In this problem we are given a set of axis-aligned rectangular items, each with an associated profit, and an axis-aligned square knapsack. The goal is to find a (non-overlapping) packing of a maximum-profit subset of items inside the knapsack, without rotating items. This is a very well-studied optimization problem with applications in scheduling, memory allocation, advertisement placement, etc. The best-known polynomial-time approximation factor for this problem (even just in the cardinality case) is \(2+\epsilon\) [Jansen and Zhang, SODA 2004].

After more than a decade, in this paper we break the \(2\)-approximation barrier, achieving a polynomial-time \(17/9+\epsilon\) (which is less than \(1.89\)) approximation, which improves to \(558/325+\epsilon\) (which is less than \(1.72\)) in the cardinality case.

We also consider the variant of the problem with rotations (2DKR), where the items can be rotated by 90 degrees. In this case too, the best-known polynomial-time approximation factor (even for the cardinality case) is \(2+\epsilon\) [Jansen and Zhang, SODA 2004]. Exploiting part of the machinery developed for 2DK plus a few additional ideas, we obtain a polynomial-time \(3/2+\epsilon\)-approximation for 2DKR, which improves to \(4/3+\epsilon\) in the cardinality case.
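To make the problem statement concrete, here is a minimal feasibility checker for 2DK packings, assuming each placed item is represented as (x, y, width, height, profit) inside an N x N knapsack; the representation and function names are our own illustrative choices, not from the paper.

```python
from itertools import combinations

def overlaps(a, b):
    # Open-interior overlap test for axis-aligned rectangles (x, y, w, h):
    # touching edges are allowed, interior intersection is not.
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def packing_profit(placed, N):
    # placed: list of (x, y, w, h, profit) placed inside an N x N knapsack.
    # Returns the total profit if the packing is feasible, else None.
    for x, y, w, h, _ in placed:
        if x < 0 or y < 0 or x + w > N or y + h > N:
            return None
    for a, b in combinations(placed, 2):
        if overlaps(a[:4], b[:4]):
            return None
    return sum(p for *_, p in placed)

# Two items placed side by side in a 5 x 5 knapsack.
print(packing_profit([(0, 0, 3, 2, 5), (3, 0, 2, 2, 4)], 5))  # → 9
```

The hard part of 2DK is of course the search over placements, not this check; the approximation algorithms in the talk bound how close the best efficiently findable packing gets to the optimum profit.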

This is joint work with Waldo Galvez, Fabrizio Grandoni, Sandy Heydrich, Salvatore Ingala and Andreas Wiese. This result will appear in FOCS 2017.

- General Strong-Polarization
[Aug 11 2017 +]
Date: August 11, 2017

Time: 11:30 am to 12:30 pm

Venue: 105, New CC, first floor

Abstract: A martingale is a sequence of random variables that maintain their future expected value conditioned on the past. A \([0,1]\)-bounded martingale is said to polarize if it converges in the limit to either \(0\) or \(1\) with probability \(1\). A martingale is said to polarize strongly if, in \(t\) steps, it is sub-exponentially close to its limit with all but exponentially small probability. In 2008, Arikan built a powerful class of error-correcting codes called polar codes. The essence of his theory associates a martingale with every invertible square matrix over a field (and a channel), and he showed that polarization of the martingale leads to a construction of codes that converge to Shannon capacity. In 2013, Guruswami and Xia, and independently Hassani et al., showed that strong polarization of the Arikan martingale leads to codes that converge to Shannon capacity at finite block lengths, specifically at lengths that are inverse polynomial in the gap to capacity, thereby resolving a major mathematical challenge associated with the attainment of Shannon capacity.

We show that a simple necessary condition for an invertible matrix to polarize over any non-trivial channel is also sufficient for strong polarization over all symmetric channels over all prime fields. Previously, the only matrix known to polarize strongly was the \(2\times 2\) Hadamard matrix. In addition to its generality, our result also leads to arguably simpler proofs. The essence of our proof is a "local definition" of polarization, which only restricts the evolution of the martingale in a single step, and a general theorem showing that local polarization suffices for strong polarization.
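For intuition about the polarizing martingale (a standard fact about the \(2\times 2\) Hadamard kernel over the binary erasure channel, not the general result of the talk): one Arikan step turns a channel with erasure probability z into two synthesized channels with erasure probabilities z^2 and 2z - z^2, so following a uniformly random branch gives a \([0,1]\)-bounded martingale, since (z^2 + (2z - z^2))/2 = z. The short simulation below, with illustrative parameter choices of our own, shows almost all trajectories being driven toward 0 or 1.

```python
import random

def polarize_step(z):
    # One Arikan/Hadamard transform step for the binary erasure channel
    # with erasure probability z: the two synthesized channels have
    # erasure probabilities z^2 and 2z - z^2, each chosen with
    # probability 1/2. The expectation of the next value is exactly z,
    # so the process is a martingale.
    return z * z if random.random() < 0.5 else 2 * z - z * z

def run(t=20, trials=10000, start=0.5, seed=1):
    # Simulate `trials` independent trajectories for t steps each.
    random.seed(seed)
    finals = []
    for _ in range(trials):
        z = start
        for _ in range(t):
            z = polarize_step(z)
        finals.append(z)
    return finals

finals = run()
mean = sum(finals) / len(finals)
frac = sum(z < 1e-3 or z > 1 - 1e-3 for z in finals) / len(finals)
print(f"mean {mean:.3f} (martingale keeps it near 0.5), "
      f"fraction within 1e-3 of an endpoint: {frac:.3f}")
```

The mean stays near the starting value (the martingale property), while the mass concentrates near 0 and 1 (polarization); strong polarization quantifies how fast that concentration happens.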

In this talk I will introduce polarization and polar codes and, time permitting, present a full proof of our main theorem. No prior background on polar codes will be assumed.

Based on joint work with Jaroslaw Blasiok, Venkatesan Guruswami, Preetum Nakkiran and Atri Rudra.

- Derandomizing Isolation Lemma: A geometric approach
[Jul 31 2017 +]
Date: July 31, 2017

Time: 2:00 pm to 3:00 pm

Venue: Conference Room, Kanwal Rekhi Building, first floor

Abstract: We present a geometric approach towards derandomizing the Isolation Lemma for a given family of subsets, i.e., deterministically constructing a weight assignment which ensures a unique minimum weight set in the family. The idea is to work with a polytope corresponding to the family of sets. In this talk, we present the approach in terms of general polytopes and describe a sufficient condition on the polytope for this approach to work. The approach gives a quasi-polynomially bounded weight assignment. Finally, we show that two specific families - perfect matchings in bipartite graphs and common base sets of two matroids - satisfy the required condition and thus, we get an isolating weight assignment for these cases. This also puts the two problems in quasi-NC.
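As background, the randomized Isolation Lemma of Mulmuley, Vazirani and Vazirani states that assigning each of the n ground elements an independent uniform weight from {1, ..., 2n} leaves a unique minimum-weight set in any fixed family with probability at least 1/2; the talk's contribution is replacing these random weights with a deterministic construction. A small empirical check of the randomized statement, with our own illustrative family and function names:

```python
import random
from itertools import combinations

def min_weight_sets(family, weights):
    # Return all sets in the family achieving the minimum total weight.
    totals = [sum(weights[i] for i in s) for s in family]
    m = min(totals)
    return [s for s, t in zip(family, totals) if t == m]

def isolation_success_rate(family, n, trials=2000, seed=0):
    # Empirically estimate how often uniformly random weights in
    # {1, ..., 2n} isolate a unique minimum-weight set in the family.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        w = [rng.randint(1, 2 * n) for _ in range(n)]
        hits += len(min_weight_sets(family, w)) == 1
    return hits / trials

# Example family: all 3-element subsets of {0, ..., 7}. The lemma
# guarantees success probability at least 1 - n / (2n) = 1/2.
n = 8
family = [set(s) for s in combinations(range(n), 3)]
print(isolation_success_rate(family, n))
```

Note that the family here has 56 sets while the weights use only 8 values; the point of the lemma is that the weight range depends on n, not on the (possibly exponential) size of the family, and the derandomization in the talk must preserve this while being deterministic.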

Based on joint works with Stephen Fenner and Thomas Thierauf.