\documentstyle{article}
\input{preamble.tex}
%% these are from mac90, but are here now. - Thanks Dan
\newcommand{\floor}[1]{\lfloor#1\rfloor}
\newcommand{\Floor}[1]{\left\lfloor#1\right\rfloor}
\newcommand{\ceiling}[1]{\lceil#1\rceil}
\newcommand{\Ceiling}[1]{\left\lceil#1\right\rceil}
\begin{document}
\lecture{3}{February 11, 1997}{Daniel A. Spielman}{John Dunagan}
%%\subsection*{18.405/6.841 Lecture Notes from Thursday, February 13, 2:30-4:00pm}
\section*{Alternating Turing Machines\footnote{ I would first like to
ask anyone who can show that NP can be solved by a P quantum computer to
talk to me after class.}}
ATMs (Alternating Turing Machines) are an extension of Turing Machines
that may pass through both $\exists$ states and $\forall$
states. The $\exists$ states capture non-determinism, while the
$\forall$ states capture co-non-determinism. Alternating Turing Machines
yield a different but equivalent characterization of the polynomial-time hierarchy.
Define $\Sigma_{k}P$ to be those languages accepted by a polynomial-time ATM
that begins with $\exists$ and has at most $k$ levels (i.e., alternates between
$\exists$ and $\forall$ at most $k-1$ times). We define $\Pi_{k}P$
similarly:
\begin{eqnarray*}
\hbox{$\Sigma_{k}P$} = \{\hbox{languages } L : L \hbox{ is accepted by a
polynomial-time ATM }\\
\hbox{that begins with $\exists$ and has at most $k$ levels}\}
\end{eqnarray*}
\begin{eqnarray*}
\hbox{$\Pi_{k}P$} = \{\hbox{languages } L : L \hbox{ is accepted by a
polynomial-time ATM }\\
\hbox{that begins with $\forall$ and has at most $k$ levels}\}
\end{eqnarray*}
This is exactly equivalent to our previous definition of the polynomial
time hierarchy.
We similarly define $QBF_{k}$ to be the Quantified Boolean Formulas
beginning with $\exists$ and alternating quantifiers at most $k-1$
times. It can easily be shown that $QBF_{k}$ is complete for
$\Sigma_{k}P$. If $\Sigma_{k}P = \Pi_{k}P$, then the polynomial time
hierarchy collapses, and $\Sigma_{k}P = \Sigma_{n}P$ for all
$n\ge k$. This, as we have previously remarked, is widely
conjectured not to happen.
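As a small illustrative example, the formula
\[
\exists x \,\forall y \;\left[(x \vee y) \wedge (x \vee \bar{y})\right]
\]
lies in $QBF_{2}$: it begins with $\exists$ and alternates quantifiers
once. Setting $x$ to true satisfies both clauses for every $y$, so the
formula is true.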
\subsection*{Alternating Space = Time, Alternating Time = Space}
The gist of today's lecture is that alternating space corresponds to
deterministic time, and alternating time to deterministic space
(ASPACE = TIME, ATIME = SPACE). We
will also derive sharp bounds on general QBF. For convenience, we repeat
several definitions here:
\begin{eqnarray*}
\hbox{$SPACE(f(n))$} = \{\hbox{languages } L : L \hbox{ is accepted by a
TM that uses at most $f(n)$ space}\}
\end{eqnarray*}
\begin{eqnarray*}
\hbox{$NSPACE(f(n))$} = \{\hbox{languages } L : L \hbox{ is accepted by an
NTM that uses at most $f(n)$ space}\}
\end{eqnarray*}
\begin{eqnarray*}
\hbox{$ASPACE(f(n))$} = \{\hbox{languages } L : L \hbox{ is accepted by an
ATM that uses at most $f(n)$ space}\}
\end{eqnarray*}
\begin{eqnarray*}
\hbox{$PSPACE$} = \{\hbox{languages } L : L \hbox{ $\in SPACE(n^{k})$
for some $k$,}\\
\hbox{ where $n$ is the length of the input}\}
\end{eqnarray*}
$NPSPACE$ and $APSPACE$ are defined in the obvious manner.
We pause to note that $SPACE(f(n))$ $\subset$ $NSPACE(f(n))$ $\subset$
$ASPACE(f(n))$. We also note once again a reason to exclude
constant factors and stick to big-O notation: the space/time speedup
theorems. By doubling the alphabet size and squaring the number of states, we
can convert any Turing Machine into one which uses half as much space
and half as much time.
\subsection*{LSPACE and More Theory Background}
Define L = LSPACE = log-space = $\bigcup_{c\ge 0} SPACE(c\cdot \log n)$ =
SPACE($\log n$). Note that there is some subtlety in defining
logarithmic space requirements; we must require our theoretical Turing Machine
model to have a read-only input tape, a work tape, and a write-only
output tape. Only the work tape is bounded to logarithmic space.
L $\subset$ P. Is L $\not=$ P? This is a standard conjecture. It is known
that L $\not=$ PSPACE; this is easy to show using the space hierarchy
theorem. We pause now to state both the space and time hierarchy theorems.
\begin{theorem}[Time Hierarchy Theorem]
$ $ \\If $f(n) = o(g(n)/\log g(n))$, then $TIME (f(n)) \ne TIME (g(n))$.
\end{theorem}
\begin{theorem}[Space Hierarchy Theorem]
$ $ \\If $f(n) = o(g(n))$, then $SPACE(f(n)) \ne SPACE(g(n))$.
\end{theorem}
It should be noted that there are P-complete problems under log-space
reductions, just as there are NP-complete problems under poly-time
reductions (what we think of when we think of NP-complete problems). As
an interesting side note, every known NP-complete problem is known to be
NP-complete under log-space reductions.
We now proceed to two important theorems, Savitch's Theorem and the
Immerman-Szelepcsenyi Theorem.
\begin{theorem}[Savitch's Theorem]
$ $ \\ {$NSPACE(f(n)) \subset SPACE(f^{2}(n))$ if $f(n) \ge \log n$.}
\end{theorem}
\begin{theorem}[Immerman-Szelepcsenyi Theorem]
$ $ \\ {$NSPACE(f(n)) = coNSPACE(f(n))$ if $f(n) \ge \log n$.}
\end{theorem}
We can easily see from the second that there is no space hierarchy
analogous to the polynomial time hierarchy: the alternating levels
collapse. (As a reminder, $coNSPACE(f(n))$ can be defined either as the
class of complements of $NSPACE(f(n))$ languages, or as the class of
languages accepted in space $f(n)$ by NTMs whose guessing states are all
universal. These definitions are equivalent.) We now move on to proving two
important theorems about alternating time and space.
\begin{theorem}[Well-Known Theorem A]
$ $ \\ {$ATIME(f(n)) \subset SPACE(f(n)) \subset ATIME(f^{2}(n))$ \\
if $f(n) \ge n$.}
\end{theorem}
\begin{theorem}[Well-Known Theorem B]
$ $ \\ {$ASPACE(f(n)) = TIME(2^{\bigO(f(n))})$ \\
if $f(n) \ge \log n$.}
\end{theorem}
\subsection*{Proof of Well-Known Theorem A}
We first show $ATIME(f(n)) \subset SPACE(f(n))$.
Represent the computation of the machine we are going to simulate (the
$f(n)$-time ATM) as a tree of $\forall$ and $\exists$ states (or normal
deterministic transition states), each branch of which
eventually ends in an accept or reject state. We note that
\begin{enumerate}
\item one can represent each node with $f(n)$ space
\item the tree has height at most $f(n)$
\end{enumerate}
We traverse the tree using DFS\footnote{Depth First Search} to keep as little state around as
possible. A first (that is, naive) implementation would store every
configuration on the path through the tree so far. This takes space $f^{2}(n)$.
A better implementation is to store only the branch choices made so far
(e.g., LRRLLRLLLLR) and to recompute the previous configuration when
traversing the tree upwards, instead of storing it. This takes space $f(n)$.\\
An alternative analysis which also works is to
note that the change between any two successive configurations is of
constant size, so recording the movement from one node to the next takes
at most a constant amount of space. \\
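The path-storing idea can be sketched as follows. This is only an
illustrative evaluator (the function names are our own, and Python's call
stack is of course not space-optimal); its point is that only the branch
choices are retained, while configurations are recomputed on demand:

```python
def evaluate(root, kind, step, val):
    """Evaluate an ATM computation tree given accessor functions:
    kind(node) -> 'exists', 'forall', or 'leaf'
    step(node, i) -> the i-th (0 or 1) successor configuration
    val(node) -> True/False (accept/reject) for leaf nodes."""

    def replay(path):
        # Recompute the configuration reached by a list of branch
        # choices, starting from the root; the configurations on the
        # path are never stored, only the choices are.
        node = root
        for i in path:
            node = step(node, i)
        return node

    def go(path):
        node = replay(path)
        k = kind(node)
        if k == 'leaf':
            return val(node)
        left = go(path + [0])
        right = go(path + [1])
        # An exists node needs one accepting child; a forall node needs both.
        return (left or right) if k == 'exists' else (left and right)

    return go([])
```

For instance, a tree whose root is a $\forall$ node over an $\exists$
subtree and an accepting leaf evaluates to accept exactly when the
$\exists$ subtree has at least one accepting leaf.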
We now show $SPACE(f(n)) \subset ATIME(f^{2}(n))$. Consider the
configuration graph of the machine we are going to simulate (the
$f(n)$-space machine), where every possible configuration is a node in the
graph. Define the predicate FromTo(x,y,k) to be true iff one
can get from $x$ to $y$ in at most $k$ steps. Then FromTo(x,y,k) = $\exists$ z :
(FromTo(x,z, $\Ceiling{\frac{k}{2}}$) $\wedge$ FromTo(z,y, $\Floor{\frac{k}{2}}$)).\footnote{Note
that $\wedge$ is the hidden universal quantifier in this expression;
we could not do the same thing with an NTM, since both $\exists$ and
$\forall$ are necessary.} There are $2^{\bigO(f(n))}$ configurations, so the
initial $k$ needed to get from the start node to the accept node in our
graph is $2^{\bigO(f(n))}$. Guessing an $x$ or $y$
or $z$ takes $f(n)$ time. The recursion on the predicate FromTo(x,y,k)
has depth $\log k$, for a total time of
$\log(2^{\bigO(f(n))})\cdot f(n)$ = $\bigO(f^{2}(n))$ to evaluate our
initial predicate FromTo(start, accept, $2^{\bigO(f(n))}$). \\
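A deterministic rendering of the divide-and-conquer predicate may make the
recursion concrete. In this illustrative sketch (function and variable
names are our own), the $\exists z$ is replaced by a loop over all nodes,
which makes visible the blow-up that an alternating machine's guessing
avoids:

```python
def from_to(adj, x, y, k):
    """True iff y is reachable from x in at most k steps, where
    adj maps each node to the set of its successors."""
    if k == 0:
        return x == y
    if k == 1:
        return x == y or y in adj[x]
    half = (k + 1) // 2          # ceiling(k/2)
    # Guess a midpoint z: an ATM does this with one exists branch,
    # then verifies both halves in parallel with a forall branch.
    return any(from_to(adj, x, z, half) and from_to(adj, z, y, k - half)
               for z in adj)
```

Each level of the recursion halves $k$, giving the $\log k$ depth used in
the time bound above.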
As a caveat, $f(n)$ must be time-constructible in time $f^{2}(n)$, and
we won't worry about cases in which it isn't. This concludes our proof.
\subsection*{Proof of Well-Known Theorem B}
We first show $ASPACE(f(n)) \subset TIME(2^{\bigO(f(n))})$. Consider a graph of all
the configurations in the Alternating Turing Machine's computation history,
with up to two arrows from any given $\forall$ or $\exists$ vertex to
other vertices. Given a particular input, let our
non-alternating Turing Machine write out the entire graph in $2^{\bigO(f(n))}$
time. Second, we do not do DFS from the start state; making
the proof work in that direction requires more hacking than we
need. Instead, begin at the final accept state and work backwards: mark
an $\exists$ state as accepting if at least one of its successors is
marked, and a $\forall$ state as accepting if all of its successors are
marked. (States left unmarked count as rejecting.) Repeat this process
until no new states are marked. If the start state is marked, accept;
otherwise, reject. This takes constant time per node since we mark each
state at most once.
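This backward-marking procedure can be sketched as follows, using an
assumed toy encoding of the configuration graph (the names and the
encoding are our own, not part of the original construction):

```python
def atm_accepts(nodes, start):
    """nodes maps each configuration name to ('accept',), ('reject',),
    ('exists', successors), or ('forall', successors).  Returns True
    iff the start configuration is eventually marked accepting."""
    marked = {v for v, spec in nodes.items() if spec[0] == 'accept'}
    changed = True
    while changed:                       # iterate to a fixpoint
        changed = False
        for v, spec in nodes.items():
            if v in marked or spec[0] in ('accept', 'reject'):
                continue
            kind, succs = spec
            # An exists node needs one marked successor; a forall
            # node needs all of its successors marked.
            ok = (any(s in marked for s in succs) if kind == 'exists'
                  else all(s in marked for s in succs))
            if ok:
                marked.add(v)
                changed = True
    return start in marked
```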
We now show $TIME(2^{f(n)}) \subset ASPACE(f(n))$. Consider a tableau
representing the computation history of the Turing Machine, and require
a unique accept state, as always. Define the function $cell(i,t)$ to be
the contents of cell $i$ at time $t$.
Since $cell(i,t)$ is determined completely by $cell(i-1, t-1)$,
$cell(i, t-1)$, and
$cell(i+1, t-1)$, we can verify an assertion of the form
``$cell(i,t) = x$'' by using existential quantification to
guess the values of $cell(i-1, t-1)$,
$cell(i, t-1)$, and
$cell(i+1, t-1)$, checking that they lead to $cell(i,t)=x$,
and then using universal quantification to verify each of
the three guesses.%
\footnote{Note that every pass
through the $\forall$ quantifier decreases the remaining time by 1.}
To check the contents of $cell(i,0)$, we just look at the
input tape.
We use this method to verify that the machine is in the
accept state after $2^{f(n)}$ steps.
Since $0 \le t \le 2^{f(n)}$, we need $\log(2^{f(n)}) = f(n)$
space to write out $i$ and $t$.
Moreover, the machine does not need to store any other information.
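The recursive check can be illustrated deterministically on a toy
tableau. In this sketch (the names are our own, and a
cellular-automaton-style local rule stands in for the Turing Machine's
transition function), the loop over candidate triples plays the role of
$\exists$ and the three recursive calls play the role of $\forall$:

```python
from itertools import product

def verify_cell(inp, rule, i, t, x):
    """True iff cell i holds symbol x at time t, where row 0 is the
    input inp (cells outside it are blank, encoded 0) and each later
    cell equals rule(left, center, right) of its three predecessors."""
    if t == 0:
        actual = inp[i] if 0 <= i < len(inp) else 0
        return actual == x
    # Existentially guess the three predecessor symbols ...
    for a, b, c in product([0, 1], repeat=3):
        if rule(a, b, c) == x \
           and verify_cell(inp, rule, i - 1, t - 1, a) \
           and verify_cell(inp, rule, i, t - 1, b) \
           and verify_cell(inp, rule, i + 1, t - 1, c):
            # ... and universally verify each guess one row earlier.
            return True
    return False
```

Each recursive level moves one time step back, matching the footnote's
observation that every pass through the $\forall$ quantifier decreases
the remaining time by 1.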
This concludes our proof that $TIME(2^{\bigO(f(n))}) = ASPACE(f(n))$. We note
that corollaries of this are $AL=P$ and $AP = PSPACE$. The statement
$AP = PSPACE$ amounts to the fact that QBF is complete for $PSPACE$.
\end{document}