\documentclass[12pt]{article}
\usepackage{amsmath,epsfig}
\newcommand{\ti}{\hbox{TIME}}
\newcommand{\p}{\hbox{P}}
\newcommand{\pspace}{\hbox{PSPACE}}
\newcommand{\np}{\hbox{NP}}
\newcommand{\conp}{\hbox{coNP}}
\newcommand{\exptime}{\hbox{EXPTIME}}
\newcommand{\ntime}{\hbox{NTIME}}
\renewcommand {\or} {\hbox{OR}}
\renewcommand {\and} {\hbox{AND}}
\newcommand {\fp} {\hbox{FP}}
\newcommand{\clique}{\hbox{CLIQUE}}
%%\newcommand{\iff} {if and only if}
\begin{document}
\input{preamble.tex}
\lecture{14}{April 3, 1997}{Daniel A. Spielman}{Xiangwei Liu}
In today's lecture, we will discuss the circuit lower bounds problem
and prove the first part of Razborov's theorem.
\section{Circuit Lower Bounds}
The {\em circuit lower bounds problem} is to study the difficulty of
computing a class of functions (perhaps defined by other means, such as
$\fp$ or $\sharp\p$) with circuits, in the hope of gaining a better
understanding of the mysterious complexity hierarchy.
For general circuits,
i.e., circuits with all three kinds of gates, the model may be too powerful
for us to obtain a nice result. In fact, the best known lower bound for
computing $\np$ functions with general circuits is only $4n$. (Note that
linear lower bounds for circuits,
or, more generally, {\em linear bounds of any kind}, are
not very useful in complexity theory.) So, to make our task of
proving a lower bound easier,
we will restrict the power of our circuits. Specifically,
we will study circuit lower bounds for computing the {\bf CLIQUE functions}
with {\bf monotone circuits}.
First, let's define the $\clique$ functions and monotone circuits.
\begin{definition}
A function $f$ is {\bf monotone} if
$f(\bar{x})=1\Rightarrow f(\bar{y})=1$
for all $\bar{y}$ such that every bit that is 1
in $\bar{x}$ is also 1 in $\bar{y}$. (Equivalently, $f(\bar{y})=0
\Rightarrow f(\bar{x})=0$ for all $\bar{x}$ such that every bit that
is 0 in $\bar{y}$ is also 0 in $\bar{x}$.)
\end{definition}
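To make the definition concrete, here is a minimal Python sketch (the function names and bit-vector encoding are my own, not from the lecture) that checks monotonicity by brute force over all comparable pairs of inputs:

```python
from itertools import product

def is_monotone(f, n):
    """Brute-force monotonicity check: for every pair of n-bit inputs
    x <= y (bitwise), f(x) = 1 must imply f(y) = 1."""
    points = list(product([0, 1], repeat=n))
    for x in points:
        for y in points:
            if all(xi <= yi for xi, yi in zip(x, y)):
                if f(x) == 1 and f(y) == 0:
                    return False
    return True

# OR and AND are monotone; parity (XOR) is not.
print(is_monotone(lambda x: int(any(x)), 3))  # True
print(is_monotone(lambda x: sum(x) % 2, 3))   # False
```

The check is exponential in $n$ and is meant only to mirror the definition, not to be efficient.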
\begin{definition}
A circuit C is {\bf monotone}
if it only has $\vee$ and $\wedge$
gates, with no $\neg$ gates.
\end{definition}
A simple observation shows that monotone circuits compute only
monotone functions, and vice versa.
\begin{proposition}
A monotone circuit computes a monotone function, and every monotone function
can be computed by a monotone circuit.
\end{proposition}
\begin{definition}
Define the $\clique$ function $\clique_{k,n}$ to be the function
on $n$-node graphs that outputs 1 if and only if the input graph
contains a $k$-clique.
\end{definition}
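For illustration, a brute-force evaluation of $\clique_{k,n}$ can be sketched as follows (the encoding of a graph as a set of edge pairs is my own choice; the search is exponential in $k$):

```python
from itertools import combinations

def clique_kn(k, n, edges):
    """CLIQUE_{k,n}: returns 1 iff the n-vertex graph given by `edges`
    (pairs of vertices) contains a clique on some k vertices."""
    E = {frozenset(e) for e in edges}
    for S in combinations(range(n), k):
        if all(frozenset((u, v)) in E for u, v in combinations(S, 2)):
            return 1
    return 0

# A triangle on {0, 1, 2} inside a 4-vertex graph:
tri = {(0, 1), (1, 2), (0, 2)}
print(clique_kn(3, 4, tri))               # 1
print(clique_kn(3, 4, {(0, 1), (1, 2)}))  # 0
# Monotonicity: adding edges never destroys a clique.
print(clique_kn(3, 4, tri | {(2, 3)}))    # 1
```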
From the definition, we conclude that $\clique$ functions are monotone:
if $\clique_{k,n}(G)=1$, then $G$ has a $k$-clique, so $\clique_{k,n}(G')$
will be 1 for every $G'$ that contains all the edges of $G$.\newline
Now we will present the main theorem of today's lecture, {\em
Razborov's Theorem}. Let us make one more definition first:
\begin{definition}
We define the {\bf Size} function to be the function that, on input a
function $h:\{0,1\}^*\rightarrow\{0,1\}$, outputs the circuit complexity
of $h$, i.e., the minimum size of a circuit computing $h$. (Since we have
restricted ourselves to monotone circuits, Size here measures monotone
circuit size.)
\end{definition}
\begin{theorem}[Razborov]
$Size(\clique_{\log{n},n})\geq{}n^{\Omega(\log{n})}$
\end{theorem}
A similar case was studied by Alon and Boppana:
\begin{theorem}[Alon-Boppana]
$Size(\clique_{n^{\frac{2}{3}},n})>
2^{\frac{n^{\frac{1}{3}}}{\log{n}}}$
\end{theorem}
The rest of today's lecture will be devoted to the first part of the
proof of {\em Razborov's theorem}.\newline
The variables $k$, $l$, and $m$ will appear in the proof.
While we will not settle on the best values for these until
the end of the proof, we state approximate values now
so that we have something in mind as we go through the proof:
\[
k\sim n^{\frac{1}{4}},\qquad l\sim\sqrt{k},\qquad m\sim{}l^l
\]
First, let us see how to begin. Considering all cases
simultaneously seems hard, so we must somehow simplify the problem
(without loss of generality). To do so,
we will only look at the behavior of circuits for $\clique$ on restricted
test graphs; more specifically, we will only study the behavior of
$\clique$ on minimal positive instances and maximal negative
instances, and then
argue that this restriction loses no generality.
The minimal positive graphs are those graphs that contain a
clique on some set of $k$ vertices, and no other edges.
The maximal negative graphs are the maximal
$(k-1)$-colorable graphs.
These are obtained by coloring each vertex in the graph with
one of $k-1$ colors, and placing edges between each pair
of differently-colored vertices.
There should be no edges between vertices of the same color.
(Maximality here means that adding any further edge would create a
$k$-clique.)
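The two families of test graphs can be generated directly (a sketch; the function names and the seeded random choice of the clique's vertex set are my own):

```python
from itertools import combinations
import random

def minimal_positive(n, k, seed=0):
    """A minimal positive instance: a k-clique on some set of k
    vertices, and no other edges."""
    rng = random.Random(seed)
    S = rng.sample(range(n), k)
    return {frozenset(e) for e in combinations(S, 2)}

def maximal_negative(coloring):
    """A maximal negative instance from a (k-1)-coloring: an edge
    between every pair of differently-colored vertices, and no edge
    within a color class."""
    n = len(coloring)
    return {frozenset((u, v)) for u, v in combinations(range(n), 2)
            if coloring[u] != coloring[v]}

# With k-1 = 2 colors on 4 vertices we get the complete bipartite
# graph K_{2,2}, which contains no triangle (3-clique).
print(len(maximal_negative([0, 0, 1, 1])))  # 4
```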
\begin{definition}
A {\bf clique indicator} is a function that is 1 if the input
graph contains a clique on some particular set of vertices, and 0 otherwise.
The {\em size} of a clique indicator is the number of
vertices in the clique that it looks for.
\end{definition}
{\bf Implementation}: Take the $l$ vertices, and compute the $\wedge$ of all
the edge variables between them to check that every pair of vertices in this
group is joined by an edge.
\begin{definition}
A {\bf clean function} is an \or \ of at most $m$ clique indicators
of size less than $l$.
\end{definition}
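The two definitions above can be sketched together in Python (the names \texttt{clique\_indicator} and \texttt{clean\_function} are mine; an indicator is the $\wedge$ of its edge variables, and a clean function is an $\vee$ of at most $m$ indicators of size below $l$):

```python
from itertools import combinations

def clique_indicator(S):
    """Indicator for vertex set S: the AND over all pairs in S of the
    corresponding edge variables."""
    pairs = [frozenset(p) for p in combinations(sorted(S), 2)]
    return lambda edges: int(all(p in edges for p in pairs))

def clean_function(vertex_sets, m, l):
    """A clean function: an OR of at most m clique indicators,
    each of size less than l."""
    assert len(vertex_sets) <= m and all(len(S) < l for S in vertex_sets)
    inds = [clique_indicator(S) for S in vertex_sets]
    return lambda edges: int(any(ind(edges) for ind in inds))

tri = {frozenset(e) for e in [(0, 1), (1, 2), (0, 2)]}
f = clean_function([{0, 1, 2}, {3, 4}], m=5, l=4)
print(f(tri))  # 1: the triangle satisfies the first indicator
```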
%
Now comes the proof.\newline
{\bf Idea}: Every small monotone circuit can be approximated by
a clean function.\newline
We will prove this by induction on the circuit, beginning
with the inputs.
Observe that the inputs to the circuit are clique indicators
of size 2.
We will show that the \and \ and \or \ of two functions that
are approximated by clean functions can also be
approximated by a clean function.
But there is one problem: even if $f$ and $g$ are both clean functions,
neither $f\wedge{}g$ nor $f\vee{}g$ need be clean.
How do we resolve this dilemma? Here is one natural approach.\newline
{\bf Fix}: we are going to define $\sqcup$ and $\sqcap$ such that
$f\sqcup{}g$ and $f\sqcap{}g$ are clean if $f$ and $g$ are clean, and
$f\sqcup{}g$ and $f\sqcap{}g$ approximate $f\vee{}g$ and $f\wedge{}g$
on the test graphs.
Note that if $f$ and $g$ are clean, then $f\vee{}g$ is the $\or$ of at most
$2m$ clique indicators. So to make $f\sqcup{}g$ clean, we must somehow reduce
the number of indicators. To reduce the number of indicators, we will need
a result from combinatorics: {\em the Sunflower Theorem}.
\begin{definition}
A {\bf sunflower} with $p$ petals is a collection of sets
$S_1,S_2,\ldots,S_p$ such that
$$
S_i\cap{}S_j=C \quad\hbox{for all $i\neq j$,}
$$
where the common intersection $C$ is called the {\em core} and the sets
$S_i\setminus{}C$ are called the {\em petals}.
\end{definition}
\begin{theorem}[Erd\H{o}s--Rado]
If ${\cal F}$ is a collection of more than $(p-1)^ll!$ sets, each of size
at most $l$,
then ${\cal F}$ contains at least one sunflower with $p$ petals.
\end{theorem}
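A brute-force search for a sunflower in a small set system illustrates the definition (exponentially slow, for illustration only; the function name is mine):

```python
from itertools import combinations

def find_sunflower(sets, p):
    """Search for a sunflower with p petals: p sets whose pairwise
    intersections all equal one common core."""
    sets = [frozenset(s) for s in sets]
    for group in combinations(sets, p):
        core = frozenset.intersection(*group)
        if all(a & b == core for a, b in combinations(group, 2)):
            return list(group), core
    return None

# Three petals sharing the core {0}:
petals, core = find_sunflower([{0, 1}, {0, 2}, {0, 3}], 3)
print(core)  # frozenset({0})
```

Note that pairwise-disjoint sets form a sunflower with an empty core, which the code correctly accepts.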
To define our $\sqcap$ and $\sqcup$, we first define some auxiliary
operations:
\begin{description}
\item[pluck($f$)]: Let $f$ be an $\or$ of at least $m$ clique indicators.
pluck($f$) is obtained by finding a sunflower among the vertex sets of the
clique indicators, discarding those indicators, and adding an indicator
for their core.
\item[weed($f$)]: Apply pluck to $f$ repeatedly until no more plucks are
possible.
\item[cover($f\wedge{}g$)]: Replace each
$indicator(S_i)\wedge{}indicator(T_j)$ with $indicator(S_i\cup{}T_j)$.
\item[trim($f$)]: Discard all indicators of size $\geq{}l$.
\end{description}
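The pluck and weed operations can be sketched on the indicators' vertex sets directly (a simplification: the sunflower search here is brute force, and I stop weeding once fewer than $m$ indicators remain; all names are mine):

```python
from itertools import combinations

def _sunflower(sets, p):
    # Brute-force search for p sets whose pairwise intersections
    # all equal one common core.
    for group in combinations(list(sets), p):
        core = frozenset.intersection(*group)
        if all(a & b == core for a, b in combinations(group, 2)):
            return set(group), core
    return None

def pluck(indicators, p):
    """Replace one sunflower of indicator vertex-sets by its core."""
    found = _sunflower(indicators, p)
    if found is None:
        return indicators, False
    petals, core = found
    return (indicators - petals) | {core}, True

def weed(indicators, m, p):
    """Pluck while at least m indicators remain and a sunflower exists."""
    while len(indicators) >= m:
        indicators, changed = pluck(indicators, p)
        if not changed:
            break
    return indicators

fam = {frozenset(s) for s in [{0, 1}, {0, 2}, {0, 3}, {4, 5}]}
# The three petals {0,1}, {0,2}, {0,3} collapse to their core {0}.
print(sorted(sorted(s) for s in weed(fam, 3, 3)))  # [[0], [4, 5]]
```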
Now we can define the $\sqcap$ and $\sqcup$ operations:
\begin{definition}
We define
$$
f\sqcup{}g=weed(f\vee{}g)
$$
\end{definition}
If $f = \underset{i}{OR}(indicator(S_{i}))$
and $g = \underset{j}{OR}(indicator(T_{j}))$,
then
$$ f \wedge g = \underset{i,j}{OR}(indicator(S_i)\wedge{}indicator(T_j)).$$
So, we define
$$
f\sqcap{}g=
weed(trim(\underset{i,j}{OR\ }cover(indicator(S_i)\wedge{}indicator(T_j))))
$$
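The cover and trim steps in this definition reduce to simple set operations on the indicators' vertex sets (a sketch with my own representation; weed is then applied to the result, per the definition of $\sqcap$):

```python
def cover(f_sets, g_sets):
    """cover: replace each indicator(S_i) AND indicator(T_j) by the
    single indicator on S_i | T_j (an OR over all pairs i, j)."""
    return {s | t for s in f_sets for t in g_sets}

def trim(indicators, l):
    """trim: discard indicators on l or more vertices."""
    return {s for s in indicators if len(s) < l}

f = {frozenset({0, 1}), frozenset({2, 3})}
g = {frozenset({1, 2})}
print(sorted(sorted(s) for s in trim(cover(f, g), 4)))
# [[0, 1, 2], [1, 2, 3]]
```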
In the next lecture, we will prove the second part of Razborov's theorem.
\end{document}