
CUUS063 main CUUS063 Goldreich 978 0 521 88473 0 March 31, 2008 18:49
SPACE COMPLEXITY
f_2. That is, we do not start by computing f_1(x), but rather start by computing f_2(x, f_1(x)) although we do not have some of the bits of the relevant input (i.e., the bits of f_1(x)). The missing bits will be computed (and recomputed) whenever we need them in the computation of f_2(x, f_1(x)). Details follow.
Let A_1 and A_2 be the algorithms (for computing f_1 and f_2, respectively) guaranteed in the hypothesis.⁶ Then, on input x ∈ {0,1}^n, we invoke algorithm A_2 (for computing f_2). Algorithm A_2 is invoked on a virtual input, and so when emulating each of its steps we should provide it with the relevant bit. Thus, we should also keep track of the location of A_2 on the imaginary (virtual) input-tape. Whenever A_2 seeks to read the i-th bit of its input, where i ∈ [n + ℓ(n)], we provide A_2 with this bit by reading it from x if i ≤ n and by invoking A_1(x) otherwise. When invoking A_1(x) we provide it with a virtual output-tape, which means that we get the bits of its output one by one and do not record them anywhere. Instead, we count until reaching the (i − n)-th output-bit, which we then pass to A_2 (as the i-th bit of (x, f_1(x))).
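The on-demand recomputation just described can be sketched in a few lines. The following is a toy illustration only: the concrete choices of f_1 (bitwise complement, so that ℓ(n) = n) and f_2 (parity of the whole input) are assumptions made purely so that the sketch runs, and the names A1_stream, input_bit, and A2_on_virtual_input are hypothetical.

```python
# Toy sketch: run A2 on a *virtual* input (x, f1(x)) without ever
# storing f1(x).  Whenever A2 asks for bit i, we either read it from
# x directly (i < n) or re-run A1 from scratch, streaming its output
# bits and counting until we reach bit i - n.

def A1_stream(x):
    """A1 with a virtual output-tape: yields the bits of f1(x) one by
    one; here f1 is the bitwise complement, so ell(n) = n."""
    for bit in x:
        yield 1 - bit

def input_bit(x, i):
    """Provide A2 with the i-th bit (0-indexed) of the virtual input
    (x, f1(x)), recomputing f1(x) from scratch when i >= n."""
    n = len(x)
    if i < n:                                  # first part: read from x
        return x[i]
    for j, bit in enumerate(A1_stream(x)):     # recompute f1(x), ...
        if j == i - n:                         # ... counting to bit i - n
            return bit
    raise IndexError(i)

def A2_on_virtual_input(x, ell):
    """A2 computes f2 = parity of its (n + ell(n))-bit input, reading
    each bit via input_bit instead of a real input-tape."""
    n = len(x)
    parity = 0
    for i in range(n + ell(n)):
        parity ^= input_bit(x, i)
    return parity

x = [1, 0, 1]                                  # n = 3
print(A2_on_virtual_input(x, lambda n: n))     # parity of (x, ~x) = n mod 2 = 1
```

Note that input_bit holds only two counters (i and j) beyond the space used by the suspended emulations, mirroring the accounting in the text.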
Note that while invoking A_1(x), we suspend the execution of A_2 but keep its current configuration such that we can resume the execution (of A_2) once we get the desired bit. Thus, we need to allocate separate space for the computation of A_2 and for the computation of A_1. In addition, we need to allocate separate storage for maintaining the aforementioned counters (i.e., we use log_2(n + ℓ(n)) bits to hold the location of the input-bit currently read by A_2, and log_2 ℓ(n) bits to hold the index of the output-bit currently produced in the current invocation of A_1).
A final (and tedious) issue is that our description of the composed algorithm refers to two storage devices, one for emulating the computation of A_1 and the other for emulating the computation of A_2. The issue is not the fact that the storage (of the composed algorithm) is partitioned between two devices, but rather that our algorithm uses two pointers (one per each of the two storage devices). In contrast, a ("fair") composition result should yield an algorithm (like A_1 and A_2) that uses a single storage device with a single pointer to locations on this device. Indeed, such an algorithm can be obtained by holding the two original pointers in memory; the additional δ(n) term accounts for this additional storage.
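Summing the contributions described in the foregoing paragraphs, the space usage of the composed algorithm can be tallied roughly as follows (with s_1 and s_2 denoting the space complexities of A_1 and A_2; this is a back-of-the-envelope reading of the construction, not a verbatim restatement of Lemma 5.2):

    s(n) = s_2(n + ℓ(n)) + s_1(n) + ⌈log_2(n + ℓ(n))⌉ + ⌈log_2 ℓ(n)⌉ + δ(n)

where the two logarithmic terms are the input-location and output-location counters, and δ(n) accounts for merging the two storage pointers into one.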
Reflection. The algorithm presented in the proof of Lemma 5.2 is wasteful in terms of time: it recomputes f_1(x) again and again (i.e., once per each access of A_2 to the second part of its input). Indeed, our aim was economizing on space and not on time (and the two goals may be conflicting (see, e.g., [59, Sec. 4.3])).
5.1.3.2. An Obvious Bound
The time complexity of an algorithm is essentially upper bounded by an exponential function in its space complexity. This is due to an upper bound on the number of possible instantaneous "configurations" of the algorithm (as formulated in the proof of Theorem 5.3), and to the fact that if the computation passes through the same configuration twice then it must loop forever.
Theorem 5.3: If an algorithm A has binary space complexity s and halts on every input then it has time complexity t such that t(n) ≤ n · 2^(s(n) + log_2 s(n)).
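The counting behind this bound can be made concrete in a few lines. The sketch below is illustrative (the name time_bound and the sample parameters are assumptions, not from the text): a configuration is determined by the input-head position, the work-tape head position, and the work-tape contents, and a halting computation never repeats a configuration.

```python
def time_bound(n, s):
    """Upper bound of Theorem 5.3: the number of distinct configurations
    is at most (input-head position) x (work-tape head position) x
    (work-tape contents), i.e. n * s * 2**s, which equals
    n * 2**(s + log2(s)); a halting run cannot exceed this length."""
    return n * s * 2 ** s

# e.g. logarithmic space on a 1024-bit input:
n = 1024
s = 10                      # s(n) = log2(n)
print(time_bound(n, s))     # 1024 * 10 * 2**10 = 10485760
```

So, for instance, any halting log-space algorithm runs in polynomial time.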
⁶ We assume, for simplicity, that algorithm A_1 never rewrites on (the same location of) its write-only output-tape. As shown in Exercise 5.2, this assumption can be justified at an additive cost of O(log ℓ(n)). Alternatively, the idea presented in Exercise 5.2 can be incorporated directly in the current proof.