
CUUS063 main CUUS063 Goldreich 978 0 521 88473 0 March 31, 2008 18:49
PSEUDORANDOM GENERATORS
In order to prove that $\mathrm{E}[|\Delta_{A,T}(F(1^n))|]$ (rather than $\mathrm{E}[\Delta_{A,T}(F(1^n))]$) is negligible, we need to modify $D$ a little. Note that the source of trouble is that $\Delta_{A,T}(\cdot)$ may be positive on some $x$'s and negative on others, and thus it may be the case that $\mathrm{E}[\Delta_{A,T}(F(1^n))]$ is small (due to cancellations) even if $\mathrm{E}[|\Delta_{A,T}(F(1^n))|]$ is large. This difficulty can be overcome by determining the sign of $\Delta_{A,T}(\cdot)$ on $x = F(1^n)$ and changing the outcome of $D$ accordingly; that is, the modified $D$ will output $T(x, A(x, r'))$ if $\Delta_{A,T}(x) > 0$ and $1 - T(x, A(x, r'))$ otherwise. Thus, in each case, the contribution of $x$ to the distinguishing gap of the modified $D$ will be $|\Delta_{A,T}(x)|$. We further note that if $|\Delta_{A,T}(x)|$ is small then it does not matter much whether we act as in the case of $\Delta_{A,T}(x) > 0$ or in the case of $\Delta_{A,T}(x) \leq 0$. Thus, it suffices to correctly determine the sign of $\Delta_{A,T}(x)$ in the case that $|\Delta_{A,T}(x)|$ is large, which is certainly a feasible (approximation) task. Details follow.
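The cancellation phenomenon described above can be seen in a toy numerical example (the per-$x$ gap values below are made up for illustration): the signed gaps average to zero, while their absolute values do not.

```python
# Made-up values of the per-x gap Delta_{A,T}(x): positive on some x's,
# negative on others, so the signed average cancels out.
deltas = {"x1": 0.3, "x2": -0.3, "x3": 0.29, "x4": -0.29}

signed_mean = sum(deltas.values()) / len(deltas)
abs_mean = sum(abs(d) for d in deltas.values()) / len(deltas)

print(signed_mean)  # 0.0
print(abs_mean)     # ~0.295
```

This is exactly why the modified $D$ must flip its output on the $x$'s where the gap is negative: flipping turns every contribution into $|\Delta_{A,T}(x)|$.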
We start by assuming, toward the contradiction, that $\mathrm{E}[|\Delta_{A,T}(F(1^n))|] > \varepsilon(n)$ for some non-negligible function $\varepsilon$. On input $r$ (taken from either $U_{\ell(k(n))}$ or $G(U_{k(n)})$), the modified algorithm $D$ first obtains $x \leftarrow F(1^n)$, just as the basic version. Next, using a sample of size $\mathrm{poly}(n/\varepsilon(n))$, it approximates $p_U(x) \stackrel{\rm def}{=} \Pr[T(x, A(x, U_{\rho(n)})) = 1]$ and $p_G(x) \stackrel{\rm def}{=} \Pr[T(x, A(x, G'(U_{k(n)}))) = 1]$ such that each probability is approximated to within a deviation of $\varepsilon(n)/8$ with negligible error probability (say, $\exp(-n)$). (Note that, so far, the actions of $D$ only depend on the length of its input $r$, which determines $n$.)$^7$ If these approximations indicate that $p_U(x) \geq p_G(x)$ (equiv., that $\Delta_{A,T}(x) \geq 0$) then $D$ outputs $T(x, A(x, r'))$ else it outputs $1 - T(x, A(x, r'))$, where $r'$ is the $\rho(|x|)$-bit long prefix of $r$ and we assume without loss of generality that the output of $T$ is in $\{0, 1\}$.
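As a rough sketch of the modified $D$, the following Python simulation treats a single fixed $x$ with made-up values of $p_U(x)$ and $p_G(x)$ (the function names, the constant in $t$, and all numeric values are illustrative, not from the text). It estimates the two probabilities by sampling and, since the estimated sign of $\Delta_{A,T}(x)$ is negative here, flips the output, recovering a gap of $|p_U(x) - p_G(x)|$:

```python
import random

random.seed(0)

def estimate(p, t):
    """Empirical frequency of 1 among t independent p-biased coin flips."""
    return sum(random.random() < p for _ in range(t)) / t

def modified_D(outcome, p_U_hat, p_G_hat):
    """One invocation of the modified D on a fixed x.

    `outcome` plays the role of T(x, A(x, r')) in {0, 1}; the two
    estimates decide whether that bit is kept or flipped.
    """
    if p_U_hat >= p_G_hat:     # estimated sign: Delta_{A,T}(x) >= 0
        return outcome
    return 1 - outcome         # flip when the estimated sign is negative

# Made-up per-x probabilities: here Delta(x) = p_U - p_G = -0.25 < 0.
p_U, p_G = 0.30, 0.55
eps, n = 0.1, 50
t = int(16 * n / eps ** 2)     # t = O(n / eps^2); the constant 16 is made up

p_U_hat, p_G_hat = estimate(p_U, t), estimate(p_G, t)

trials = 100_000
out_U = sum(modified_D(int(random.random() < p_U), p_U_hat, p_G_hat)
            for _ in range(trials)) / trials
out_G = sum(modified_D(int(random.random() < p_G), p_U_hat, p_G_hat)
            for _ in range(trials)) / trials

# The distinguishing gap on this x is close to |p_U - p_G| = 0.25,
# whereas the unmodified D would have contributed -0.25.
print(out_U - out_G)
```

Note that the estimation step uses fresh samples and never touches the input $r$ itself, matching the remark that these actions depend only on $|r|$.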
The analysis of the modified distinguisher $D$ is based on the fact that if the approximations yield a correct decision regarding the relation between $p_U(x)$ and $p_G(x)$, then the contribution of $x$ to the distinguishing gap of $D$ is $|p_U(x) - p_G(x)|$.$^8$ We also note that if $|p_U(x) - p_G(x)| > \varepsilon(n)/4$, then with overwhelmingly high probability (i.e., $1 - \exp(-n)$), the approximation of $p_U(x) - p_G(x)$ maintains the sign of $p_U(x) - p_G(x)$ (because each of the two quantities is approximated to within an additive error of $\varepsilon(n)/8$). Finally, we note that if $|p_U(x) - p_G(x)| \leq \varepsilon(n)/4$ then we may often err regarding the sign of $p_U(x) - p_G(x)$, but the damage caused (to the distinguishing gap of $D$) by this error is at most $2|p_U(x) - p_G(x)| \leq \varepsilon(n)/2$. Combining all these observations, we get
$$\Pr[D(U_{\ell(k(n))}) = 1 \mid F(1^n) = x] - \Pr[D(G(U_{k(n)})) = 1 \mid F(1^n) = x] \;\geq\; |p_U(x) - p_G(x)| - \eta(x), \qquad (8.5)$$
where $\eta(x) = \varepsilon(n)/2$ if $|p_U(x) - p_G(x)| \leq \varepsilon(n)/4$ and $\eta(x) = \exp(-n)$ otherwise. (Indeed, $\eta(x)$ represents the expected damage due to an error in determining the sign of $p_U(x) - p_G(x)$, where $\varepsilon(n)/2$ upper-bounds the damage caused (by a wrong decision) in the case that $|p_U(x) - p_G(x)| \leq \varepsilon(n)/4$ and $\exp(-n)$ upper-bounds the probability of a wrong decision in the case that $|p_U(x) - p_G(x)| > \varepsilon(n)/4$.) Thus,
$^7$ Specifically, the approximation to $p_U(x)$ (resp., $p_G(x)$) is obtained by generating a sample of $U_{\rho(n)}$ (resp., $G'(U_{k(n)})$) and invoking the algorithms $A$ and $T$; that is, given a sample $r_1, \ldots, r_t$ of $U_{\rho(n)}$ (resp., $G'(U_{k(n)})$), where $t = O(n/\varepsilon(n)^2)$, we approximate $p_U(x)$ (resp., $p_G(x)$) by $|\{i \in [t] : T(x, A(x, r_i)) = 1\}|/t$.
$^8$ Indeed, if $p_U(x) \geq p_G(x)$ then the contribution is $p_U(x) - p_G(x) = |p_U(x) - p_G(x)|$, whereas if $p_U(x) < p_G(x)$ then the contribution is $(1 - p_U(x)) - (1 - p_G(x)) = -(p_U(x) - p_G(x))$, which also equals $|p_U(x) - p_G(x)|$.
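The algebra in footnote 8 can be verified directly on sample values (the numbers below are arbitrary):

```python
# Case p_U(x) >= p_G(x): D keeps its output, contributing p_U - p_G.
p_U, p_G = 0.7, 0.2
assert (p_U - p_G) == abs(p_U - p_G)

# Case p_U(x) < p_G(x): D flips its output, contributing
# (1 - p_U) - (1 - p_G) = -(p_U - p_G) = |p_U - p_G|.
p_U, p_G = 0.2, 0.7
contrib = (1 - p_U) - (1 - p_G)
assert abs(contrib - (-(p_U - p_G))) < 1e-12
assert abs(contrib - abs(p_U - p_G)) < 1e-12
print("ok")
```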