
236 • The Derivative and Graphs
Since we have assumed that f′ is always equal to 0, the quantity f′(c)
must be 0. So the above equation says that

(f(x) − f(S)) / (x − S) = 0,

which means that f(x) = f(S). If we now let C = f(S), we have
shown that f(x) = C for all x in the interval (a, b), so f is constant! In
summary,

if f′(x) = 0 for all x in (a, b), then f is constant on (a, b).
Actually, we’ve already used this fact in Section 10.2.2 of the previous
chapter. There we saw that if f(x) = sin⁻¹(x) + cos⁻¹(x), then f′(x) = 0
for all x in the interval (−1, 1). We concluded that f is constant on that
interval, and in fact since f(0) = π/2, we have sin⁻¹(x) + cos⁻¹(x) = π/2
for all x in (−1, 1).
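The identity is easy to check numerically. Here is a quick sketch (not from the text) using Python's built-in inverse trig functions:

```python
import math

# sin^(-1)(x) + cos^(-1)(x) should equal pi/2 for every x in (-1, 1)
for x in [-0.99, -0.5, 0.0, 0.3, 0.99]:
    total = math.asin(x) + math.acos(x)
    assert abs(total - math.pi / 2) < 1e-12
```

Of course, a numerical check at a few points is no substitute for the proof, but it is a nice way to catch a mistake in the algebra.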
2. Suppose that two differentiable functions have exactly the same deriva-
tive. Are they the same function? Not necessarily. They could differ
by a constant; for example, f(x) = x² and g(x) = x² + 1 have the
same derivative, 2x, but f and g are clearly not the same function. Is
there any other way that two functions could have the same derivative
everywhere? The answer is no. Differing by a constant is the only way:
if f′(x) = g′(x) for all x, then f(x) = g(x) + C for some constant C.
It turns out to be quite easy to show this using #1 above. Suppose
that f′(x) = g′(x) for all x. Now set h(x) = f(x) − g(x). Then we
can differentiate to get h′(x) = f′(x) − g′(x) = 0 for all x, so h is
constant. That is, h(x) = C for some constant C. This means that
f(x) − g(x) = C, or f(x) = g(x) + C. The functions f and g do indeed
differ by a constant. This fact will be very useful when we look at
integration in a few chapters’ time.
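The example pair f(x) = x² and g(x) = x² + 1 can also be checked numerically; the sketch below (my own, not the book's) approximates the derivatives with a central difference and confirms that they agree while the functions differ by a constant:

```python
def f(x):
    return x ** 2

def g(x):
    return x ** 2 + 1

def derivative(func, x, h=1e-6):
    # central-difference approximation to func'(x)
    return (func(x + h) - func(x - h)) / (2 * h)

for x in [-2.0, 0.0, 1.5, 3.0]:
    # same derivative at every sample point...
    assert abs(derivative(f, x) - derivative(g, x)) < 1e-6
    # ...but the functions differ by the constant C = -1
    assert f(x) - g(x) == -1.0
```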
3. If a function f has a derivative that’s always positive, then it must be
increasing. This means that if a < b, then f(a) < f(b). In other words,
take two points on the curve; the one on the left is lower than the one
on the right. The curve is getting higher as you look from left to right.
Why is it so? Well, suppose f′(x) > 0 for all x, and also suppose that
a < b. By the Mean Value Theorem, there’s a c in the interval (a, b)
such that

f′(c) = (f(b) − f(a)) / (b − a).

This means that f(b) − f(a) = f′(c)(b − a). Now f′(c) > 0, and b − a > 0
since b > a, so the right-hand side of this equation is positive. So we
have f(b) − f(a) > 0, hence f(b) > f(a), and the function is indeed
increasing. On the other hand, if f′(x) < 0 for all x, the function is
always decreasing; this means that if a < b then f(b) < f(a). The proof
is basically the same.
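The argument can be illustrated with a concrete function; the following sketch (my own choice of example, not the book's) uses f(x) = x³ + x, whose derivative 3x² + 1 is positive everywhere, and verifies both the Mean Value Theorem equation and the increasing behavior on sample points:

```python
import math

def f(x):
    return x ** 3 + x

def f_prime(x):
    return 3 * x ** 2 + 1   # always positive, so f should be increasing

a, b = -2.0, 3.0
slope = (f(b) - f(a)) / (b - a)   # the Mean Value Theorem slope
assert slope > 0                  # positive, exactly as the argument predicts

# find the c promised by the theorem: solve 3c^2 + 1 = slope for c > 0
c = math.sqrt((slope - 1) / 3)
assert a < c < b
assert abs(f_prime(c) - slope) < 1e-9

# sampled points on the curve rise from left to right
xs = [a + k * (b - a) / 10 for k in range(11)]
ys = [f(x) for x in xs]
assert all(y1 < y2 for y1, y2 in zip(ys, ys[1:]))
```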