
fibST :: Int -> ST s Integer
fibST n = do {a <- newSTRef 0;
              b <- newSTRef 1;
              repeatFor n
                (do {x <- readSTRef a; y <- readSTRef b;
                     writeSTRef a y; writeSTRef b $! (x+y)});
              readSTRef a}

Note the use of the strict application operator ($!) to force evaluation of the sum. The action repeatFor repeats an action a given number of times:

repeatFor :: Monad m => Int -> m a -> m ()
repeatFor n = foldr (>>) done . replicate n

All well and good, but we end up with an action ST s Integer when what we really want is an integer. How do we escape from the monad back into the world of Haskell values?

The answer is to provide a function similar to runState for the state monad. Here it is, with its type:

runST :: (forall s. ST s a) -> a

This type is unlike any other Haskell type we have met so far. It is what is called a rank 2 polymorphic type, while all previous polymorphic types have had rank 1. What it says is that the argument of runST must be universal in s, so it can't depend on any information about s apart from its name. In particular, every STRef declared in the action has to carry the same thread name s.
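Before looking more closely at rank 2 types, here is a tiny illustration of escaping from the monad (a toy sketch, not from the book; sumST is a made-up name, and the STRef operations are assumed to be in scope):

sumST :: [Int] -> Int
sumST xs = runST $
  do {r <- newSTRef 0;
      mapM_ (\x -> do {v <- readSTRef r;
                       writeSTRef r (v+x)}) xs;
      readSTRef r}

The action allocates and repeatedly updates a reference, but the result type Int mentions neither ST nor s, so runST can safely release it.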

To amplify a little on rank 2 types, consider the difference between the two lists

list1 :: forall a. [a -> a]
list2 :: [forall a. a -> a]

The type of list1 is just what we would have previously written as [a -> a], because in ordinary rank 1 types universal quantification at the outermost level is assumed. For example, [sin,cos,tan] is a possible value of list1 with the instantiation Float for a. But there are only two functions that can be elements of list2, namely id and the undefined function undefined, because these are the only two functions with type forall a. a -> a. If you give me an element x of a type a about which absolutely nothing is known, the only thing I can do, if I have to give you back an element of a, is to give you either x or ⊥.
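The same reasoning explains what a rank 2 argument buys us. Here is a small sketch (applyBoth is a made-up name, and in GHC the type requires the RankNTypes extension):

applyBoth :: (forall a. a -> a) -> (Int,Bool) -> (Int,Bool)
applyBoth f (n,b) = (f n, f b)

Because f has type forall a. a -> a, it can be applied to an Int and to a Bool in the same body; by the argument above it can only be id or undefined. The argument of runST is constrained in exactly the same way, with s in place of a.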

Why have a rank 2 type for runST? Well, it prevents us from defining things like

let v = runST (newSTRef True)
in  runST (readSTRef v)

This code is not well-typed because

newSTRef True :: ST s (STRef s Bool)

and in the expression runST (newSTRef True) the Haskell type checker cannot match STRef s a with a, the expected result type of runST. Values of type STRef s a cannot be exported from ST s, but only entities whose types do not depend on s. If the code were allowed, then the reference allocated in the first runST would be usable inside the second runST. That would enable reads in one thread to be used in another, and hence the result would depend on the evaluation order used to execute the threads, leading to mayhem and confusion. It is just the same problem that we prevented from occurring in the IO monad.

But we can safely define

fib :: Int -> Integer
fib n = runST (fibST n)

This version of fib runs in constant space.
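To try these fragments as they stand, the only missing pieces are the imports and the do-nothing action done used by repeatFor. A minimal preamble, following the standard Control.Monad.ST and Data.STRef modules, is:

import Control.Monad.ST (ST, runST)
import Data.STRef (STRef, newSTRef, readSTRef, writeSTRef)

done :: Monad m => m ()
done = return ()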

For our purposes the main use of the ST monad resides in its ability to handle mutable arrays. The whole question of arrays deserves a section to itself.

10.5 Mutable arrays

It sometimes surprises imperative programmers who meet functional programming for the first time that the emphasis is on lists as the fundamental data structure rather than arrays. The reason is that most uses of arrays (though not all) depend for their efficiency on the fact that updates are destructive. Once you update the value of an array at a particular index, the old array is lost. But in functional programming, data structures are persistent and any named structure continues to exist. For instance, insert x t may insert a new element x into a tree t, but t continues to refer to the original tree, so it had better not be overwritten.

In Haskell a mutable array is an entity of type STArray s i e. The s names the state thread, i the index type and e the element type. Not every type can be an index; legitimate indices are members of the type class Ix. Instances of this class include Int and Char, things that can be mapped into a contiguous range of integers.
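For example, the methods of Ix, exported by the standard Data.Ix module, make this mapping explicit:

range (0,3)           -- [0,1,2,3]
index ('a','z') 'd'   -- 3, the offset of 'd' within the range
inRange (1,9) 5       -- True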

Like STRefs, there are operations to create, read from and write to arrays. Without more ado we consider an example, explaining the actions as we go along. Recall the Quicksort algorithm from Section 7.7:

qsort :: (Ord a) => [a] -> [a]
qsort []     = []
qsort (x:xs) = qsort [y | y <- xs, y < x] ++ [x] ++
               qsort [y | y <- xs, x <= y]

There we said that when Quicksort is implemented in terms of arrays rather than lists, the partitioning phase can be performed in place without using any additional space. We now have the tools to write just such an algorithm. We begin with

qsort :: (Ord a) => [a] -> [a]
qsort xs = runST $
  do {xa <- newListArray (0,n-1) xs;
      qsortST xa (0,n);
      getElems xa}
  where n = length xs

First we create a mutable array with bounds (0,n-1) and fill it with the elements of xs. Sorting the array is done with the action qsortST xa (0,n). At the end, the list of elements of the sorted array is returned. In the code above, the action newListArray has type

Ix i => (i, i) -> [e] -> ST s (STArray s i e)

and getElems has type

Ix i => STArray s i e -> ST s [e]

The first constructs a mutable array from a list of elements, and the second returns a list of the elements in a mutable array.
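As an aside, these operations come from the standard array package, so a minimal preamble for the array code in this section would be:

import Control.Monad.ST (ST, runST)
import Data.Array.ST (STArray, newListArray, getElems,
                      readArray, writeArray)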

The purpose of qsortST xa (a,b) is to sort the elements in the sub-array of xa in the interval (a,b), where by definition such an interval includes the lower bound but excludes the upper bound; in other words, [a .. b-1]. Choosing intervals that are closed on the left but open on the right is almost always the best policy when processing arrays. Here is the definition of qsortST:

qsortST :: Ord a => STArray s Int a -> (Int,Int) -> ST s ()
qsortST xa (a,b)
  | a == b    = return ()
  | otherwise = do {m <- partition xa (a,b);
                    qsortST xa (a,m);
                    qsortST xa (m+1,b)}

If a == b we have an empty interval and there is nothing to do. Otherwise we rearrange the array so that, for some suitable element x in the array, all elements in the interval (a,m) are less than x, and all elements in the interval (m+1,b) are at least x. The element x itself is placed in the array at position m. Sorting is then completed by sorting both sub-intervals.

It remains to define partition. The only way to find a suitable definition is by formal development using pre- and postconditions and loop invariants. But this is a book on functional programming, not on the formal development of imperative programs, so we are going to cop out and just record one version:

partition xa (a,b)
  = do {x <- readArray xa a;
        let loop (j,k)
              = if j == k
                then do {swap xa a (k-1);
                         return (k-1)}
                else do {y <- readArray xa j;
                         if y < x then loop (j+1,k)
                         else do {swap xa j (k-1);
                                  loop (j,k-1)}}
        in loop (a+1,b)}

The action swap is defined by

swap :: STArray s Int a -> Int -> Int -> ST s ()
swap xa i j = do {v <- readArray xa i;
                  w <- readArray xa j;
                  writeArray xa i w;
                  writeArray xa j v}

Here is a brief and certainly inadequate explanation of how partition works. We begin by taking the first element x in the interval (a,b) as pivot. We then enter a loop that processes the remaining interval (a+1,b), stopping when the interval becomes empty. We pass over elements that are less than x, shrinking the interval from the left. Encountering a y not less than x, we swap it with the element at the rightmost position in the interval, shrinking the interval from the right. When the interval becomes empty, we place the pivot in its final position, returning that position as a result.
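For example, partitioning the array [3,1,4,1,5] over the interval (0,5) takes x = 3 as pivot and proceeds as follows, each line showing the array together with the interval (j,k) still to be processed:

[3,1,4,1,5]  (1,5)  -- 1 < 3: shrink from the left
[3,1,4,1,5]  (2,5)  -- 4 >= 3: swap positions 2 and 4
[3,1,5,1,4]  (2,4)  -- 5 >= 3: swap positions 2 and 3
[3,1,1,5,4]  (2,3)  -- 1 < 3: shrink from the left
[3,1,1,5,4]  (3,3)  -- empty: swap pivot into position 2
[1,1,3,5,4]         -- return 2

Everything to the left of position 2 is now less than the pivot, and everything to its right is at least the pivot.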

Note that loop is defined as a local procedure within the monad. We could have defined it as a global procedure, though we would have had to add three extra parameters, namely the array xa, the pivot x and the starting position a.

Hash tables

A purely functional Quicksort has the same asymptotic time efficiency as one based on mutable arrays, but there are one or two places where mutable arrays seem to play a crucial role in achieving an asymptotically faster algorithm. One such place is the use of hash tables for an efficient representation of sets.

But let us approach the use of hash tables in the context of a particular problem. Consider a typical puzzle defined in terms of two finite sets, a set of positions and a set of moves. Given are the following functions:

moves  :: Position -> [Move]
move   :: Position -> Move -> Position
solved :: Position -> Bool

The function moves describes the set of possible moves that can be made in a given position, move makes a move, and solved determines those positions that are a solution to the puzzle. Solving the puzzle means finding some sequence of moves, preferably a shortest such sequence, that leads from a given starting position to a solved position:

solve :: Position -> Maybe [Move]

The value solve p is Nothing if there is no sequence of moves starting in position p that leads to a solved position, and Just ms otherwise, where

solved (foldl move p ms)
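To make the interface concrete, here is a toy instance, made up purely for illustration: positions are integers, a move either doubles the position or increments it, and the puzzle is solved on reaching 10.

type Position = Int
data Move     = Double | Increment deriving Show

moves :: Position -> [Move]
moves p = [Double, Increment]

move :: Position -> Move -> Position
move p Double    = 2*p
move p Increment = p+1

solved :: Position -> Bool
solved p = (p == 10)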

We are going to implement solve by carrying out a breadth-first search. What this means is that we examine all positions one move away from the starting position to see if there is a solution, then all positions two moves away, and so on. Breadth-first search will therefore find a shortest solution if one exists. To implement the search we need

type Path     = ([Move],Position)
type Frontier = [Path]

A path consists of a sequence of moves made from the starting position (in reverse order), and the position that results after making the moves. A frontier is a list of paths waiting to be extended into longer paths. A breadth-first search is then implemented by

solve p = bfs [] [([],p)]

bfs :: [Position] -> Frontier -> Maybe [Move]
bfs ps [] = Nothing
bfs ps ((ms,p):mps)
  | solved p    = Just (reverse ms)
  | p `elem` ps = bfs ps mps
  | otherwise   = bfs (p:ps) (mps ++ succs (ms,p))

succs :: Path -> [Path]
succs (ms,p) = [(m:ms, move p m) | m <- moves p]

The first argument ps of bfs represents the set of positions that have already been explored. The second argument is the frontier, which is managed in a queue-like fashion to ensure that paths of the same length are inspected before their successors. Inspecting a path means accepting it if the final position is a solution, rejecting it if the end position has already been explored, and otherwise adding its successors to the end of the current frontier for future exploration. The moves in a successful path are reversed before being returned as the final result of bfs simply because, for efficiency, succs adds a new move to the front of the list rather than at the end.
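With the toy puzzle above, for instance,

solve 1 == Just [Double,Double,Increment,Double]

a shortest solution, taking 1 to 2, 4, 5 and finally 10.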

There are two major sources of inefficiency with bfs, one concerning the use of (++) and the other concerning elem. Firstly, the size of a frontier can grow exponentially, so concatenating successors to the end of the frontier is slow. Better is the following alternative to bfs:

bfs :: [Position] -> Frontier -> Frontier -> Maybe [Move]
bfs ps [] []  = Nothing
bfs ps [] mqs = bfs ps mqs []
bfs ps ((ms,p):mps) mqs
  | solved p    = Just (reverse ms)
  | p `elem` ps = bfs ps mps mqs
  | otherwise   = bfs (p:ps) mps (succs (ms,p) ++ mqs)

Successors are now added, cheaply, to the front of a second frontier, which is brought into play only when the first is exhausted. Starting from solve p = bfs [] [([],p)] [], each pass through the first frontier processes all paths of one length before any longer path is inspected, so a shortest solution is still found.
