The weekly Theory seminar provides an opportunity to get exposed to
cutting-edge research in theoretical computer science, to learn to
understand technical material from talks, and to give technical
presentations. Graduate students interested in doing research in
algorithms, complexity theory, cryptography, quantum
computing and related areas are strongly encouraged to attend.

All talks will be at 2:45 pm on Fridays in 116 Earth and Engineering Science Building (unless they
are part of the departmental colloquium). Any unusual date/time/location is highlighted in the schedule.

If you would like to speak,
please email us at: theory [at] cse [dot] psu [dot] edu.

If you want to receive announcements of upcoming talks, please
join the theory
mailing list.

Near Optimal LP Rounding for Correlation Clustering on Complete Graphs

Introduced about 10 years ago by Bansal, Blum and Chawla, correlation clustering has become one of the standard techniques in machine learning and data mining.
This is due to several advantages of correlation clustering compared to other standard clustering methods (e.g., k-means):
- Correlation clustering only requires qualitative information about similarities between objects. This makes it applicable in scenarios such as crowdsourced duplicate finding, where information about similarities between objects is generated by humans.
- Correlation clustering doesn't require the number of clusters to be specified in advance; it produces the number of clusters that best fits the data.
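
As a concrete illustration (my own sketch, not part of the talk, with hypothetical names), the objective that correlation clustering minimizes on a complete graph is the number of "disagreements": similar pairs that end up separated plus dissimilar pairs that end up together.

```python
def disagreements(similar, clustering):
    """Count correlation-clustering disagreements on a complete graph.

    similar: dict mapping a pair (u, v) -> True ('+' edge) / False ('-' edge)
    clustering: dict mapping each vertex to a cluster id
    A '+' edge disagrees when its endpoints are separated;
    a '-' edge disagrees when its endpoints share a cluster.
    """
    cost = 0
    for (u, v), is_similar in similar.items():
        same_cluster = clustering[u] == clustering[v]
        if is_similar != same_cluster:
            cost += 1
    return cost

# Toy instance on 3 objects: a and b are similar, both dissimilar to c.
labels = {("a", "b"): True, ("a", "c"): False, ("b", "c"): False}
print(disagreements(labels, {"a": 0, "b": 0, "c": 1}))  # perfect clustering -> 0
print(disagreements(labels, {"a": 0, "b": 1, "c": 1}))  # separates a,b and merges b,c -> 2
```

Note that the number of clusters (here 2) is an output of the optimization, not an input, which is the second advantage listed above.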

We give new rounding schemes for the standard linear programming relaxation of the correlation clustering problem, achieving approximation factors almost matching the integrality gaps:
- For complete graphs our approximation is 2.06-ε for a fixed constant ε, which almost matches the previously known integrality gap of 2.
- For complete k-partite graphs our approximation is 3. We also show a matching integrality gap.
- For complete graphs with edge weights satisfying triangle inequalities and probability constraints, our approximation is 1.5, and we show an integrality gap of 1.2.
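
For context, the "standard linear programming relaxation" referred to above is usually written with a variable x_{uv} in [0,1] for each pair, interpreted as the extent to which u and v are separated (a textbook formulation stated here for the reader's convenience, not quoted from the talk):

```latex
\begin{aligned}
\min \quad & \sum_{(u,v) \in E^{+}} x_{uv} \;+\; \sum_{(u,v) \in E^{-}} \bigl(1 - x_{uv}\bigr) \\
\text{s.t.} \quad & x_{uw} \le x_{uv} + x_{vw} && \text{for all } u, v, w \quad \text{(triangle inequality)} \\
& x_{uv} \in [0,1] && \text{for all pairs } u, v
\end{aligned}
```

A rounding scheme turns a fractional solution x into an actual clustering; the approximation factors and integrality gaps above measure how much can be lost in that step.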

Our results improve a long line of work on approximation algorithms for correlation clustering in complete graphs, previously culminating in a ratio of 2.5 for the complete case by
Ailon, Charikar and Newman (JACM'08). In the weighted complete case satisfying triangle inequalities and probability constraints, the same authors give a 2-approximation; for the
bipartite case, Ailon, Avigdor-Elgrabli, Liberty and van Zuylen give a 4-approximation (SICOMP'12).

Joint work with Shuchi Chawla, Konstantin Makarychev and Tselil Schramm.

Adaptive Multiparty Non-Interactive Key Exchange without Setup in the Standard Model

Non-interactive key exchange (NIKE) is a fundamental notion in cryptography, introduced by Diffie and Hellman in 1976. They proposed the celebrated 2-party NIKE
protocol and left open the fascinating question of whether NIKE could be realized in the multiparty setting. NIKE has since been an active area of research, with the ultimate goal
of obtaining the best possible security in the multiparty setting. Although this goal has evaded researchers for decades, progress has been made through relaxations in multiple
directions, such as restricting to 3 parties, the static/semi-static model (where the adversary must commit ahead of time to the set of parties it wishes to be challenged upon),
the random-oracle model, allowing an initial setup, etc.
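
As background, the original Diffie-Hellman 2-party protocol can be sketched as follows. Each party publishes a single public value, and any two parties can then derive a shared key with no further interaction — the defining feature of NIKE. (This is my own illustrative sketch with toy parameters; it is not secure and is not the protocol from the talk.)

```python
import secrets

# Toy parameters: a tiny prime p and generator g. Real instantiations use
# ~2048-bit groups or elliptic curves; these values offer no security.
p, g = 23, 5

def keygen():
    """Each party publishes g^x mod p once; no interaction is needed later."""
    x = secrets.randbelow(p - 2) + 1   # secret exponent in [1, p-2]
    return x, pow(g, x, p)             # (secret key, public key)

def shared_key(my_secret, their_public):
    """Both parties derive the same value g^(xy) mod p."""
    return pow(their_public, my_secret, p)

a_sk, a_pk = keygen()
b_sk, b_pk = keygen()
assert shared_key(a_sk, b_pk) == shared_key(b_sk, a_pk)
```

The open question discussed in the talk is how to get this same "publish once, agree silently" functionality for groups of more than two parties, with adaptive security and without setup.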

In this talk, we shall see the first multiparty NIKE protocol that is adaptively secure in the standard model. (Time permitting, we shall also see how to remove the setup.)

Our construction is based on indistinguishability obfuscation and obliviously-patchable puncturable pseudorandom functions, a new notion that we introduce.

We employ novel techniques for using indistinguishability obfuscation, which are interesting in their own right and which I believe will find wider applications in other settings. One such technique pertains to overcoming the somewhat inherent drawback of non-adaptivity in the puncturing technique introduced by Sahai and Waters [STOC'14]. Central to this technique is our new notion of obliviously-patchable puncturable pseudorandom functions.

The Hardness of Inverting "One-Way Functions"

One-way functions are functions that are easy to compute but hard to invert in the average case. Almost all cryptosystems are based on the assumption that there exists a function which
we can compute in polynomial time but which we cannot invert in polynomial time on a classical (or quantum) computer. However, whether one-way functions exist remains a mystery.
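
To make the easy-to-compute/hard-to-invert asymmetry concrete, here is the classic candidate (my own illustration, not from the talk): multiplying two primes is fast, while inverting the product — factoring — is believed to be hard for classical computers (though Shor's quantum algorithm breaks it). Note that a true one-way function must be hard to invert on average, which this sketch only hints at.

```python
def f(p, q):
    """Easy direction: multiplying two primes takes polynomial time."""
    return p * q

def invert_by_trial_division(n):
    """Naive inversion: runs in time exponential in the bit-length of n."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None  # n is prime

n = f(10007, 10009)
print(invert_by_trial_division(n))  # (10007, 10009); feasible only because n is tiny
```

For cryptographically sized inputs (hundreds of digits), the forward direction stays instantaneous while no known classical algorithm inverts in polynomial time — exactly the gap the one-way-function assumption postulates.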

In this talk, I am going to discuss whether there exists a polynomial-time computable function such that inverting it is NP-hard. First, I will briefly
introduce the definition of one-way functions, some complexity classes (NP, AM and IP) and the notion of average-case complexity. Then, I will present results by Akavia et al. and
Bogdanov et al., who try to answer the following question: is it possible to reduce a worst-case NP-hard problem to the task of inverting a polynomial-time computable function in the
average case? Akavia et al. show that the existence of a non-adaptive randomized reduction implies that NP is contained in AM intersect coAM, which in turn implies the collapse of the
polynomial hierarchy. Bogdanov et al. also show that if the function is size-verifiable, the existence of an adaptive randomized reduction implies that NP is contained in AM intersect
coAM. Their results can be regarded as negative evidence for the existence of one-way functions. Due to time constraints, I will focus on just one of these results. Finally, I will
discuss whether quantum computing makes any difference to this question.

An Optimal Nonadaptive Tester for Monotonicity on the Boolean Hypercube

A function f is said to be monotone if for all points x, y in the domain, x ≤ y implies f(x) ≤ f(y); on the Boolean hypercube, ≤ is the coordinate-wise partial order. Testing the
monotonicity of functions is a fundamental problem in the field of property testing. Given oracle access to a function f, the problem is to determine whether the function is monotone or "far from being monotone".
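
To see what a nonadaptive tester looks like, here is the simple "edge tester" (my own sketch of a standard baseline, not the optimal algorithm from the talk): sample random hypercube edges and reject if any edge is violated. It always accepts monotone functions, and the classical analysis shows O(n/ε) edge queries suffice to reject functions that are ε-far from monotone; the optimal nonadaptive tester of Khot et al. achieves roughly √n/ε² queries.

```python
import random

def edge_tester(f, n, queries=100, rng=random):
    """Nonadaptive edge tester for monotonicity of f: {0,1}^n -> {0,1}.

    Samples random hypercube edges (lo, hi) differing in one coordinate
    (0 in lo, 1 in hi) and rejects if it ever sees f(lo) > f(hi).
    """
    for _ in range(queries):
        x = [rng.randrange(2) for _ in range(n)]
        i = rng.randrange(n)
        lo, hi = list(x), list(x)
        lo[i], hi[i] = 0, 1
        if f(tuple(lo)) > f(tuple(hi)):
            return False  # a violated edge certifies f is not monotone
    return True

# Majority is monotone; parity of the first two bits is far from monotone.
majority = lambda x: int(2 * sum(x) > len(x))
parity = lambda x: (x[0] + x[1]) % 2
print(edge_tester(majority, 5))  # True (monotone functions are always accepted)
print(edge_tester(parity, 5))    # False, with high probability
```

The tester is nonadaptive because all its queries could be fixed in advance of seeing any answers — the setting in which the Khot et al. result is optimal.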

One of the most studied questions in monotonicity testing is that of testing the property for Boolean functions defined on {0,1}^n. Coming up with an optimal testing algorithm
for this problem is an important open question in the area. In a recent breakthrough, Khot et al. gave an optimal nonadaptive testing algorithm for the problem. In the talk,
I will formalize the notion of monotonicity testing, list some of the important results, and give a high-level description of the result due to Khot et al.