\documentstyle{article}
\input{preamble.tex}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% BEGINNING OF PREAMBLE %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% If you need a new command add it here and put your name next to it.
% Before you start scribing, you should look at the macros below and
% use them. We are trying to make the notes consistent.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\newcommand{\mbf}[1]{\mbox{\bf{#1}}}
% Big O, small o (Andrej)
%\newcommand{\bigO}{{\mathop{\rm O}}}
\newcommand{\smallo}{{\mathop{\rm o}}}
% Complexity classes
\newcommand{\logspace}{\mbf{L}} % logarithmic space (Andrej)
\newcommand{\polytime}{\mbf{P}} % polynomial time (Andrej)
\newcommand{\EXP}{\mbf{EXP}} % exponential time (Andrej)
\newcommand{\PP}{\mbf{PP}}
\newcommand{\PSPACE}{\mbf{PSPACE}}
\newcommand{\SPACE}{\mbf{SPACE}}
\newcommand{\FSPACE}{\mbf{FSPACE}}
\newcommand{\TIME}{\mbf{TIME}}
\newcommand{\FTIME}{\mbf{FTIME}}
\newcommand{\ACZ}{\mbf{AC}^0}
\newcommand{\RL}{\mbf{RL}} % randomized logspace (Sasha)
\newcommand{\SC}{\mbf{SC}} % Steve's class (Sasha)
\newcommand{\ZPP}{\mbf{ZPP}}
\newcommand{\IP}{\mbf{IP}} % interactive proofs (Goran)
\newcommand{\AM}{\mbf{AM}} % Arthur-Merlin proofs
\newcommand{\NPpoly}{\mbf{NP/poly}} % NP/poly
\newcommand{\PH}{\mbf{PH}} % the Hierarchy
\newcommand{\NP}{\mbf{NP}} % NP
\newcommand{\coNP}{\mbf{coNP}}
\newcommand{\IPMACH}{\mbf{IP-Machine}}
\newcommand{\BPP}{\mbf{BPP}}
\newcommand{\RP}{\mbf{RP}}
% Communication complexity (Andrej)
\newcommand{\cost}{\mbf{COST}}
% Turing Machines
\newcommand{\qstart}{\mbox{$q_{start}$}}
\newcommand{\qaccept}{\mbox{$q_{accept}$}}
\newcommand{\qreject}{\mbox{$q_{reject}$}}
\newcommand{\blank}{\mbox{$\Box$}}
% New & Improved number sets (Andrej).
\font\sf=cmss10
\newcommand{\Nats}{{\hbox{\sf I\kern-.13em\hbox{N}}}} % Natural numbers
\newcommand{\Reals}{{\hbox{\sf I\kern-.14em\hbox{R}}}} % Real numbers
\newcommand{\Ints}{{\hbox{\sf Z\kern-.43emZ}}} % Integers
\newcommand{\CC}{{\hbox{\sf C\kern -.48emC}}} % Complex numbers
\newcommand{\QQ}{{\hbox{\sf C\kern -.48emQ}}} % Rational numbers
%Macros for describing sets and functions (Andrej)
%
% It is *bad* to write $f: A \to B$, because that ':' doesn't come out right.
% Use $f \cc A \to B$ instead. (Andrej)
%
\newcommand{\cc}{\colon\thinspace}
%
% When you describe a set, like {f(x) | x < 10}, you shouldn't
% write $\{ f(x) | x < 10 \}$ because that won't put enough space around |.
% Use $\{f(x) \such x < 10 \}$ instead. (Andrej)
%
\newcommand{\such}{\;|\;}
%Logical operators (Andrej)
%\newcommand{\AND}{\land}
\newcommand{\OR}{\lor}
\newcommand{\NOT}{\neg}
\newcommand{\EQUIV}{\;\Longleftrightarrow\;}
\newcommand{\IMPLY}{\;\Longrightarrow\;}
%more logical operators (Goran)
\newcommand{\XOR}{\oplus}
\newcommand{\MAX} {\mbf {Max}}
\newcommand {\COIN} {\mbf {Coin Flip}}
%Other operators
\newcommand{\mod}{\;{\rm mod}\;}
% Sasha
\newcommand{\GF}{{\mathop{\rm GF}}}
\newcommand{\sub}[1]{\mbox{$_{#1}$}}
\newcommand{\cat}{\circ}
\newcommand{\trans}[1]{\stackrel{_{#1}}{\rightarrow}}
\newcommand{\Lone}{$L_1\,$}
% Goran
\newcommand{\E}{{\mathop{\rm E}}}
\newcommand{\NONISO}{{\mbf{NONISO}}}
\newcommand{\ISOg}{{\mbf{ISO}}}
\newcommand{\AUTOg}{{\rm AUTO}}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Theorems are environments with numbering schemes
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%\newtheorem{theorem}{Theorem}[section]
%\newtheorem{lemma}[theorem]{Lemma}
%\newtheorem{corollary}[theorem]{Corollary}
%\newtheorem{definition}{Definition}[section]
%\newtheorem{fact}{Fact}
%\newtheorem{assumption}{Assumption}
%\newenvironment{proof}{\QuadSpace\par\noindent{\bf Proof}:}{\EndProof\HalfSpace}
%\newenvironment{proofsketch}{\QuadSpace\par\noindent{\bf Proof sketch}:}{\EndProof\HalfSpace}
%\newenvironment{notation}{\QuadSpace\par\noindent{\bf Notation}:}{\HalfSpace}
%\newenvironment{intuition}{\begin{quote}\par\noindent{\bf Intuition}:}{\end{quote}}
%\newenvironment{note}{\QuadSpace\par\noindent{\bf Note}:}{\HalfSpace}
%\newenvironment{convention}{\QuadSpace\par\noindent{\bf Convention}:}{\HalfSpace}
%\newenvironment{example}{\QuadSpace\par\noindent{\bf Example}:}{\HalfSpace}
%\newenvironment{question}{\QuadSpace\par\noindent{\bf Question}:}{\HalfSpace}
%\newenvironment{remark}{\QuadSpace\par\noindent{\bf Remark}:}{\HalfSpace}
%\newenvironment{observation}{\QuadSpace\par\noindent{\bf Observation}:}{\HalfSpace}
%\newenvironment{proposition}{\QuadSpace\par\noindent{\bf Proposition}:}{\HalfSpace}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% DO NOT EVEN THINK OF CHANGING THE PREAMBLE BELOW HERE
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\begin{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% UPDATE the header and synopsis below %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%The \lecture command takes 5 arguments as follows:
% \lecture{lecture number}{date of lecture}{lecturer (me usually)}{name of scribe (YOU)}{title of lecture}
\lecture{7}{March 1, 1998}{Madhu Sudan}{Adam R. Klivans}
% Each lecture should have a synopsis.
%\begin{quote} {\bf Synopsis:}
%$\BPP, \RP,$ Read Once Branching Programs, Schwarz's lemma
%\end{quote}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Place body of SCRIBE notes after here %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Brilliant and clear notes go here
\section{Randomness in Computation}
From a complexity theoretic perspective, a computational problem is thought to be tractable if it can be solved in deterministic polynomial time. One can wonder, however, what class of problems can be solved in polynomial time if we are allowed access to a source of randomness. Since many researchers feel that we have random (or pseudorandom) sources at our disposal, this class may more appropriately capture our notion of tractability. Before we state any definitions, we provide an illustrative example of the usefulness of randomness.
\subsection{Branching Programs}
\begin{definition}
A {\bf Branching Program} is a directed acyclic graph with the following properties:
\begin{enumerate}
\item[1] There is a start node of in-degree 0.
\item[2] There is a reject node and an accept node, both of out-degree 0.
\item[3] Every other (interior) node has out-degree 2 and carries a label $x_{i}$.
\end{enumerate}
\end{definition}
We can describe the set of strings of length $n$ accepted by a branching program in the following manner: Fix some string $s$. Each interior node of our branching program contains a label $x_{i}$, where $i$ ranges from $1$ to $n$; the node ``branches'' on the $i$th symbol of $s$. We begin at the start node of our program and look at its label; without loss of generality, assume it is $x_{1}$. If $x_{1} = 1$, we move to the left child of this node; if $x_{1} = 0$, we move to the right child. We continue this process until we reach either the accept or the reject node. If we reach the accept node, we say that the string $s$ is accepted by the program.
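As an aside, the acceptance procedure just described fits in a few lines of Python. The dictionary encoding of nodes below is an illustrative assumption, not notation from the lecture:

```python
# A branching program encoded as a dict: node -> (i, child_on_0, child_on_1),
# where i is the index of the variable x_i the node branches on, and the
# special terminal nodes are "accept" and "reject".  This encoding is an
# illustrative assumption, not notation from the lecture.

def bp_accepts(bp, start, x):
    """Walk from the start node, branching on x[i] at each interior node."""
    node = start
    while node not in ("accept", "reject"):
        i, child0, child1 = bp[node]
        node = child1 if x[i] == 1 else child0
    return node == "accept"

# Example: a two-node program computing x_0 AND x_1.
and_bp = {
    "s": (0, "reject", "t"),       # branch on x_0
    "t": (1, "reject", "accept"),  # branch on x_1
}
```

Since the graph is acyclic, the walk always terminates after at most one step per node on the path.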
Notice that this program completely defines a boolean function $f$ on $n$ variables: to evaluate the function we simply follow a path through the graph according to the input. Let
\[ {\bf NEQ_{BP}} = \{ (A,B) \such f_{A} \not \equiv f_{B} \} \] be the set of all pairs of branching programs $A$ and $B$ whose induced boolean functions are not equal.
This set is known to be $\NP$-complete.
\begin{definition}
A {\bf Read Once Branching Program} is a branching program where for any path from the start node to the accept/reject node, any variable $x_{i}$ appears at most once.
\end{definition}
We will prove the following theorem:
\begin{theorem} Let
\[ {\bf NEQ_{ROBP}} = \{ (A,B) \such \mbox{$A$ and $B$ are ROBPs and $f_{A} \not \equiv f_{B}$} \} \]
Membership in this set can be decided in randomized polynomial time.
\end{theorem}
The proof will use algebraic techniques. With every branching program we associate a multivariate polynomial; on boolean settings of the variables, this polynomial computes the same boolean function that the branching program computes. We will show that if two branching programs compute different functions, then the difference of the two associated polynomials is not identically zero. Finally, we show how to test whether a multivariate polynomial is identically zero in randomized polynomial time.
First, we need to show how to construct a polynomial from a branching program. We give a recursive construction as follows (working bottom up):
\begin{itemize}
\item The accept node is assigned 1. The reject node is assigned 0.
\item Assume we are at some interior node labeled $x_{i}$ and its two children get assigned, inductively, polynomials $p_{1}$ and $p_{0}$. Then the polynomial $p(x_1,\ldots,x_n)$ for this node equals $x_{i}p_{1} + (1-x_{i})p_{0}$.
\end{itemize}
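This recursive construction doubles as an evaluation procedure: to evaluate the node polynomial at any point, boolean or not, apply the rule $x_{i}p_{1} + (1-x_{i})p_{0}$ bottom up. A minimal Python sketch follows; the dictionary encoding of the program is an illustrative assumption:

```python
# Evaluate the polynomial of a branching program at an arbitrary point by
# applying p = x_i * p1 + (1 - x_i) * p0 bottom up.  Nodes are encoded as
# node -> (i, child_on_0, child_on_1); this encoding is an assumption made
# for illustration.  (For large shared DAGs one would memoize per node.)

def poly_eval(bp, node, point):
    if node == "accept":
        return 1
    if node == "reject":
        return 0
    i, child0, child1 = bp[node]
    p0 = poly_eval(bp, child0, point)
    p1 = poly_eval(bp, child1, point)
    return point[i] * p1 + (1 - point[i]) * p0

# For a two-node AND program, the node polynomial works out to x_0 * x_1,
# so evaluating at a non-boolean point such as (2, 3) gives 6.
and_bp = {"s": (0, "reject", "t"), "t": (1, "reject", "accept")}
```

On boolean inputs this reproduces exactly the accept/reject behavior of the program, since at each node one of the two terms vanishes.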
\begin{lemma}
Two read once branching programs $A$ and $B$ compute different functions if and only if the polynomials constructed as above, $p_{A}$ and $p_{B}$, are not equal (as formal polynomials).
\end{lemma}
\begin{proof}
One direction is obvious: if the polynomials $p_{A}$ and $p_{B}$ are equal, then in particular they agree on all boolean assignments to the variables, so the two branching programs compute the same function (since $p_{A}$ and $p_{B}$ represent the branching programs' behavior on boolean inputs). Conversely, if the branching programs $A$ and $B$ compute the same function, we need to show that the polynomials $p_{A}$ and $p_{B}$ are equal everywhere, not just on boolean inputs. Notice, however, that because the branching program $A$ is read once, $p_{A}$ is simply a sum of multilinear monomials corresponding to accepting paths. That is to say, if the assignment $x_{1} = 1, x_{5} = 1, x_{6} = 1$ (all other variables zero) results in an accepting path, then the monomial $x_{1}x_{5}x_{6}$ appears as one of the ``minterms'' of $p_{A}$. But $A \equiv B$, so $p_{A}$ and $p_{B}$ have identical ``minterm'' expansions (they are sums of the same multilinear monomials). Hence $p_{A}$ and $p_{B}$ are equal everywhere.
\end{proof}
Now we need the following lemma, due to Schwartz (often called the Schwartz--Zippel lemma):
\begin{lemma} Assume $p(x_1, \ldots,x_n)$ is a non-zero polynomial
in which each variable $x_i$ has degree at most $d>0$. Let $I$ be any domain
of size $\ge nd$. Then $$\Pr_{(x_1,\ldots,x_n)\in I^n}[p(x_1,\ldots,x_n)=0]\le\frac{dn}{|I|}$$
\end{lemma}
{\bf Proof:} We use induction on $n$. For the base case $n=1$: a non-zero polynomial of
degree $\le d$ has at most $d$ roots, so when we choose $x_1$ uniformly from
a domain of size $|I|$, the chance of picking a root is at most $\frac{d}{|I|}$,
exactly as the inequality says.
Now, the inductive step. Let $d_1 \le d$ be the highest degree of $x_1$ in $p$,
and let $p_2(x_2,\ldots,x_n)$ be the coefficient of $x_1^{d_1}$ in $p$. Since $p$ is
non-zero, so is $p_2$, and the inductive hypothesis gives $$\Pr_{(x_2,\ldots,x_n)\in I^{n-1}}[p_2(x_2,\ldots,x_n)=0]\le\frac{d(n-1)}{|I|}$$
We have:
\begin{eqnarray*}
\lefteqn{\Pr_{(x_1,\ldots,x_n)\in I^n}[p(x_1,\ldots,x_n)=0]}\\
&=&\Pr[p(x_1,\ldots,x_n)=0 \mid p_2(x_2,\ldots, x_n) = 0] \times
\Pr[p_2(x_2,\ldots,x_n)=0] \\ &+&
\Pr[p(x_1,\ldots,x_n)=0 \mid p_2(x_2,\ldots, x_n) \ne 0] \times
\Pr[p_2(x_2,\ldots,x_n)\ne 0] \\ &\le&
1\times \frac{d(n-1)}{|I|} + \frac{d}{|I|}\times 1 = \frac{dn}{|I|}.
\end{eqnarray*}
The first bound $\frac{d(n-1)}{|I|}$ is the inductive hypothesis applied to $p_2$.
The second bound $\frac{d}{|I|}$ follows by observing that once we know
$p_2(x_2,\ldots,x_n)\ne 0$, then no matter which values $x_2,\ldots ,x_n$ take,
$p$ becomes a non-zero polynomial in $x_1$ alone of degree $d_1 \le d$, so at most
$d_1 \le d$ of the $|I|$ possible values of $x_1$ can make
$p(x_1,\ldots,x_n)=0$.
\vskip 0.1in
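As a quick sanity check of the bound (an illustration, not part of the lecture), one can sample points for a concrete non-zero polynomial and compare the empirical zero rate against $\frac{dn}{|I|}$:

```python
import random

# Empirical check of the Schwartz bound for p(x, y) = x*y - 1 over
# I = {0, ..., 15}: here n = 2 and each variable has degree d = 1, so the
# lemma promises Pr[p = 0] <= d*n/|I| = 2/16.  (Over this domain p vanishes
# only at x = y = 1, so the true probability is 1/256, well under the bound.)
random.seed(0)
I = range(16)
trials = 100_000
zeros = sum(1 for _ in range(trials)
            if random.choice(I) * random.choice(I) - 1 == 0)
print(zeros / trials)  # an estimate near 1/256, below the bound 2/16 = 0.125
```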
As a corollary we have the following: let $|I|\ge 2nd$, and let $p(x_1,\ldots ,x_n)$ and $q(x_1,\ldots ,x_n)$
be two polynomials in which every variable has degree at most
$d$. If we set $r=p-q$ then Schwartz's lemma gives us the following:
\begin{enumerate}
\item If $r\equiv 0$ (i.e.\ $p\equiv q$), then
$$\Pr_{(x_1,\ldots,x_n)\in I^n}[r(x_1,\ldots,x_n)\ne 0] = 0$$
\item If $r\not\equiv 0$ (i.e.\ $p\not\equiv q$), then
$$\Pr_{(x_1,\ldots,x_n)\in I^n}[r(x_1,\ldots,x_n)\ne 0] \ge \frac{1}{2}$$
\end{enumerate}
Now our algorithm for testing equivalence of two branching programs is almost complete. Fix a field $F$ of size greater than $2n^{2}$ (since each variable has degree at most $1$ in these polynomials, any field of size at least $2n$ would in fact do). We convert our programs into polynomials and pick a random point $(x_{1},\ldots,x_{n}) \in F^{n}$. Then we evaluate the difference of our polynomials at this random assignment to the variables. If the polynomials are not equal, then with probability at least $\frac{1}{2}$ we will discover this. Otherwise, the difference will always evaluate to 0. We can repeat this process polynomially many times to strengthen our confidence that the two polynomials are indeed identical.
We do have to be careful about evaluating these polynomials: written out as a sum of monomials, a polynomial can have size exponential in the size of the branching program. We can, however, {\bf evaluate} the polynomial at a given assignment by working bottom up on the branching program, following the polynomial construction given above and substituting in the appropriate values.
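Putting the pieces together, the whole test fits in a short Python sketch. The dictionary encoding of programs, the prime modulus $q$, and the repetition count are illustrative assumptions:

```python
import random

# Randomized non-equivalence test for read once branching programs, following
# the algorithm in the notes: evaluate each program's polynomial at a random
# point modulo a prime q and compare.  The encoding
# node -> (i, child_on_0, child_on_1) and the parameters q, reps are
# illustrative choices, not prescribed by the lecture.

def poly_eval(bp, node, point, q):
    """Apply p = x_i * p1 + (1 - x_i) * p0 bottom up, mod the prime q."""
    if node == "accept":
        return 1
    if node == "reject":
        return 0
    i, child0, child1 = bp[node]
    p0 = poly_eval(bp, child0, point, q)
    p1 = poly_eval(bp, child1, point, q)
    return (point[i] * p1 + (1 - point[i]) * p0) % q

def probably_equivalent(bp_a, start_a, bp_b, start_b, n, q=101, reps=50):
    """A False answer is certain; a True answer errs with probability at most
    2**-reps, since each round (for q >= 2n) detects a difference with
    probability >= 1/2 by the Schwartz bound."""
    for _ in range(reps):
        point = [random.randrange(q) for _ in range(n)]
        if poly_eval(bp_a, start_a, point, q) != poly_eval(bp_b, start_b, point, q):
            return False  # found a witness point: definitely not equivalent
    return True

# Two programs for x_0 AND x_1 reading the variables in opposite orders,
# and one program for x_0 OR x_1.
and_a = {"s": (0, "reject", "t"), "t": (1, "reject", "accept")}
and_b = {"s": (1, "reject", "t"), "t": (0, "reject", "accept")}
or_c = {"s": (0, "t", "accept"), "t": (1, "reject", "accept")}
```

The two AND programs induce the same polynomial $x_{0}x_{1}$ and always compare equal, while the OR program's polynomial $x_{0} + x_{1} - x_{0}x_{1}$ differs and is separated at almost every random point.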
\subsection{Randomized Complexity Classes}
We have seen that picking random field elements has allowed us to make a seemingly difficult problem relatively easy to solve (with a negligible trade-off in confidence). In fact, it is not known whether a deterministic polynomial time algorithm exists for determining if two read once branching programs are equivalent. It is an important and well studied problem to determine if all randomized algorithms have deterministic counterparts. We formally define the most well known randomized complexity classes below.
\begin{definition}
$\BPP$ is the class of all languages $L$ for which there is a
nondeterministic polynomial time TM $M$, whose computation
branches all have the same length, with the following
properties:
\begin{itemize}
\item If $x \in L$ then at least $2/3$ of the branches accept.
\item If $x \not\in L$ then at most $1/3$ of the branches accept.
\end{itemize}
\end{definition}
\noindent
Alternatively, using probability, we could say that
\vspace{5mm}
When $x \in L$ then $\Pr[ M(x)$ accepts $] \geq 2/3$.\\
When $x \notin L$ then $\Pr[ M(x)$ accepts $] \leq 1/3$.
\vspace{5mm}
\noindent
Note that by both definitions, $M$ is guaranteed to halt in polynomial
time, with all branches halting together.
For the many probabilistic algorithms which have only a one-sided
error, we define another class.
\begin{definition}
$\RP$ is the class of all languages $L$ for which there is a
nondeterministic polynomial time TM $M$, whose computation
branches all have the same length, with the following
properties:
\begin{itemize}
\item If $x \in L$ then $\Pr[ M(x)$ accepts $] \geq 2/3$.
\item If $x \notin L$ then $\Pr[ M(x)$ accepts $] = 0$.
\end{itemize}
\end{definition}
Actually, most natural languages known to be in $\BPP$ are also known to be in
$\RP$ or co-$\RP$.
For example, COMPOSITES is in both $\RP$ and co-$\RP$.
The intersection of $\RP$ and co-$\RP$ is an interesting class in its own right,
and has its own name.
\begin{definition}
$\ZPP$ is the class $\RP \cap \mbox{co-}\RP$.
\end{definition}
The numbers 1/3 and 2/3 in the above definitions may seem somewhat
arbitrary, and in fact they are. We can reduce the error probability
of an algorithm almost as much as we like by simple repetition. We will discuss the robust nature of these classes more in the next lecture.
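To make the repetition argument concrete, here is a small illustrative computation (not from the lecture) of how the error of a majority vote falls with the number of runs:

```python
from math import comb

# Majority-vote amplification for a two-sided-error (BPP-style) algorithm:
# if each independent run errs with probability p = 1/3, then for odd k the
# majority of k runs is wrong exactly when more than half of the runs err,
# which is a binomial tail probability.
def majority_error(k, p=1/3):
    return sum(comb(k, j) * p**j * (1 - p) ** (k - j)
               for j in range(k // 2 + 1, k + 1))
```

A single run errs with probability $1/3$; three runs already improve this to $7/27$, and on the order of a hundred runs push the error below $1/100$, so polynomially many repetitions make the error negligible.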
\end{document}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
% Your SCRIBE notes should end here %
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%