So far we’ve seen that a
computation in the abstract world will produce less
precise results than its concrete counterpart, because an abstraction
throws away detail: a route we plan on a map will tell us which way to
turn but not which lane to drive in, and a multiplication between two Sign
objects will tell us which side
of zero the answer lies on but not its actual value.
A lot of the time, it’s fine for a result to be imprecise, but for an abstraction to be
useful, it’s important that this imprecision is safe. Safety means
that the abstraction always tells the truth: the result of an abstract computation must
agree with the result of its concrete counterpart. If not, the abstraction is giving us
unreliable information and is probably worse than useless.
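The examples that follow assume the Sign class, the Numeric#sign conversion and the calculate method from earlier in the chapter. As a rough reconstruction rather than the exact earlier code, those definitions look something like this (calculate is omitted here, but any method built purely from multiplications will do for the check below):

class Sign < Struct.new(:name)
  NEGATIVE, ZERO, POSITIVE = [:negative, :zero, :positive].map { |name| new(name) }

  # Multiplication of signs, before unknown values enter the picture.
  def *(other_sign)
    if [self, other_sign].include?(ZERO)
      ZERO
    elsif self == other_sign
      POSITIVE
    else
      NEGATIVE
    end
  end

  def inspect
    "#<Sign #{name}>"
  end
end

class Numeric
  # Convert a concrete number into its abstraction.
  def sign
    if self < 0
      Sign::NEGATIVE
    elsif zero?
      Sign::ZERO
    else
      Sign::POSITIVE
    end
  end
end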
Our Sign abstraction is safe because converting numbers into Signs and
multiplying them together always gives us the same result as
calculating with the numbers themselves and just converting the final
result into a Sign:
>> (6 * -9).sign == (6.sign * -9.sign)
=> true
>> (100 * 0).sign == (100.sign * 0.sign)
=> true
>> calculate(1, -2, -3).sign == calculate(1.sign, -2.sign, -3.sign)
=> true
In this respect, our Sign abstraction is actually
quite precise. It retains exactly the right amount of information and preserves it perfectly
throughout abstract computations. The safety issue becomes more significant when the
abstraction doesn’t match up quite so perfectly with the computations we want to perform, as
we can see by experimenting with abstract addition.
There are
some
rules about how the signs of two numbers can
determine the sign of the number we get when we add them together, but they don’t work for
all possible combinations of signs. We know that the sum of two positive numbers must be
positive, and that the sum of a negative number and zero must be negative, but what about
when we add a negative and a positive number? In that case, the sign of the result depends
on the relationship between the two numbers’ absolute values: if the positive number has a
larger absolute value than the negative number, we’ll get a positive answer (−20 + 30 = 10),
if the negative number’s absolute value is larger, then we’ll get a negative answer (−30 +
20 = −10), and if they are exactly equal, we’ll get zero. But of course the absolute value
of each number is precisely the information that our abstraction has discarded, so we can’t
make this sort of decision in the abstract world.
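We can see this concretely with the Numeric#sign conversion sketched above: each pair of numbers below abstracts to the same two values, Sign::NEGATIVE and Sign::POSITIVE, yet their concrete sums land on all three sides of zero, so we would expect something like:

>> (-20 + 30).sign
=> #<Sign positive>
>> (-30 + 20).sign
=> #<Sign negative>
>> (-20 + 20).sign
=> #<Sign zero>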
This is a problem for our abstraction because it’s
too
abstract to be able to compute addition
accurately in every situation. How do we handle this? We could botch the
definition of abstract addition just to get it to return some
result (return Sign::ZERO, say, whenever we don’t know what the right
answer is), but that would be unsafe, because it would mean the abstract
computation was giving an answer that might actively disagree with the
one we’d get by doing the concrete computation instead.
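To make the danger concrete, here is a hypothetical, deliberately unsafe version of abstract addition that papers over the problem by answering Sign::ZERO whenever a positive and a negative sign meet. This is a sketch of what not to do, not a definition we’ll keep:

class Sign
  def +(other_sign)
    if self == other_sign || other_sign == ZERO
      self
    elsif self == ZERO
      other_sign
    else
      ZERO # a lie: the concrete sum could be negative, zero or positive
    end
  end
end

With this definition, 6.sign + -9.sign would claim #<Sign zero> even though (6 + -9).sign is #<Sign negative>; the abstract computation would be making a promise that the concrete one breaks.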
The solution is to expand the abstraction to accommodate this
uncertainty. Just as we have Sign values that mean “any positive number”
and “any negative number,” we can introduce a new one that simply means
“any number.” This is really the only honest answer that we can give
when we’re asked a question that we don’t have enough detail to answer:
the result could be negative, zero, or positive, no guarantees either
way. Let’s call this new value Sign::UNKNOWN:
class Sign
  UNKNOWN = new(:unknown)
end
This gives us what we need to implement abstract addition safely.
The rules for calculating the sign of the sum of two numbers x and y are:

- If x and y have the same sign (both positive, both negative, or both zero), then that’s also the sign of their sum.
- If x is zero, their sum has the same sign as y, and vice versa.
- Otherwise, the sign of their sum is unknown.

We can turn that into an implementation of Sign#+ easily enough:
class Sign
  def +(other_sign)
    if self == other_sign || other_sign == ZERO
      self
    elsif self == ZERO
      other_sign
    else
      UNKNOWN
    end
  end
end
This gives us the behavior we want:
>> Sign::POSITIVE + Sign::POSITIVE
=> #<Sign positive>
>> Sign::NEGATIVE + Sign::ZERO
=> #<Sign negative>
>> Sign::NEGATIVE + Sign::POSITIVE
=> #<Sign unknown>
In fact, this implementation happens to do the right thing when
the sign of one of the
inputs
is unknown:
>> Sign::POSITIVE + Sign::UNKNOWN
=> #<Sign unknown>
>> Sign::UNKNOWN + Sign::ZERO
=> #<Sign unknown>
>> Sign::POSITIVE + Sign::NEGATIVE + Sign::NEGATIVE
=> #<Sign unknown>
We do need to go back and fix up our implementation of Sign#*, though,
so that it handles Sign::UNKNOWN correctly:
class Sign
  def *(other_sign)
    if [self, other_sign].include?(ZERO)
      ZERO
    elsif [self, other_sign].include?(UNKNOWN)
      UNKNOWN
    elsif self == other_sign
      POSITIVE
    else
      NEGATIVE
    end
  end
end
This gives us two abstract operations to play around with. Notice
that Sign::UNKNOWN isn’t totally
contagious; even an unknown number multiplied by zero is still zero, so
any uncertainty that creeps in partway through a computation may get
swallowed up by the time it finishes:
>> (Sign::POSITIVE + Sign::NEGATIVE) * Sign::ZERO + Sign::POSITIVE
=> #<Sign positive>
We also need to adjust our idea of correctness to deal with the
imprecision introduced by Sign::UNKNOWN. Because our abstraction
sometimes doesn’t have enough information to give a precise answer, it’s
no longer true that the abstract and concrete versions of a computation
always give exactly matching results:
>> (10 + 3).sign == (10.sign + 3.sign)
=> true
>> (-5 + 0).sign == (-5.sign + 0.sign)
=> true
>> (6 + -9).sign == (6.sign + -9.sign)
=> false
>> (6 + -9).sign
=> #<Sign negative>
>> 6.sign + -9.sign
=> #<Sign unknown>
So what’s going on here? Is our abstraction still safe? Well, yes,
because in the cases where it loses precision and returns
Sign::UNKNOWN, the abstract computation is
still telling us something true: “the result is a negative number, zero,
or a positive number.” It’s not as useful as the answer we can get by
doing the concrete computation, but it’s not
wrong
,
and it’s as good as we’re going to get without adding more information
to our abstract values and making abstract computations more
complex.
We can express this in code by having a better way of comparing
Signs than #==, which is now too unforgiving for the
safety check. What we want to know is: does the result of the concrete
computation
fall within
the result predicted by the
abstract one? If the abstract computation says that a few different
results are possible, does the concrete computation actually produce one
of those results, or something else entirely?
Let’s define an operation on Signs that can tell us whether two abstract
values relate to each other in this way. Since what we’re testing is
whether one Sign value “fits inside” another, let’s make it the #<=
method:
class Sign
  def <=(other_sign)
    self == other_sign || other_sign == UNKNOWN
  end
end
This gives us the test we want:
>> Sign::POSITIVE <= Sign::POSITIVE
=> true
>> Sign::POSITIVE <= Sign::UNKNOWN
=> true
>> Sign::POSITIVE <= Sign::NEGATIVE
=> false
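With #<= in hand, we can presumably rerun the earlier safety checks so that they pass whenever the concrete result fits inside the abstract one; in particular, the imprecise addition from before should now check out:

>> (-5 + 0).sign <= (-5.sign + 0.sign)
=> true
>> (6 + -9).sign <= (6.sign + -9.sign)
=> true

The precise cases still agree exactly, and the imprecise case now counts as safe because the concrete answer is one of the possibilities the abstract answer allows.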