Mind Hacks™: Tips & Tools for Using Your Brain
Tom Stafford & Matt Webb

Neural Noise Isn’t a Bug; It’s a Feature
Neural signals are innately noisy, which might just be a good thing.

Neural signals are always noisy: the timing of when neurons fire, and even whether they fire at all, is subject to random variation. We make generalizations at the psychological level, such as saying that the speed of response is related to stimulus intensity by a certain formula, Piéron’s Law [Why People Don’t Work Like Elevator Buttons]. We also say that cells in the visual cortex respond to different specific motions [See Movement When All Is Still]. But both of these are true only on average. For any single cell, or any single test of reaction time, there is variation each time it is measured. Not all the cells in the motion-sensitive parts of the visual cortex will respond to a movement, and those that do won’t respond in exactly the same way each time we experience that particular movement.

In the real world, we take averages to make sense of noisy data, and somehow the brain
must be doing this too. We know that the brain is pretty accurate, despite the noisiness of
our neural signals. A prime mechanism for compensating for neural noise is the use of lots
of neurons so that the average response can be taken, canceling out the noise.

But it may also be the case that noise has some useful functions in the nervous system.
Noise could be a feature, rather than just an inconvenient bug.

In Action

To see how noise can be useful, visit Visual Perception of Stochastic Resonance (http://www.umsl.edu/~neurodyn/projects/vsr.html), a Java applet designed by Enrico Simonotto.1

A grayscale picture has noise added, and the result is filtered through a threshold. The process is repeated and the resulting frames are played back like a video. Compare the picture with various
levels of noise included. With a small amount of noise, you see some of the gross features
of the picture — these are the parts with high light values so they always cross the
threshold, whatever the noise, and produce white pixels — but the details don’t show up
often enough for you to make them out. With lots of noise, most of the pixels of the
picture are frequently active and it’s hard to make out any distinction between true parts
of the picture and pixels randomly activated by noise.

But with the right amount of noise, you can clearly see what the picture is and all
the details. The gross features are always there (white pixels), the fine features are
there consistently enough (with time smoothing they look gray), and the pixels that are
supposed to be black aren’t activated enough to distract you.
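
If you can’t run the Java applet, here is a minimal Python sketch of the same idea; the synthetic picture, the threshold, and the noise levels are illustrative choices of ours, not Simonotto’s settings. Fresh noise is added to a faint image on every frame, each frame is thresholded to black and white, and the frames are averaged to stand in for the time smoothing your visual system does.

    import numpy as np

    def noisy_threshold_frames(image, noise_sd, threshold=0.5, n_frames=30, seed=0):
        """Add fresh Gaussian noise to the image on each frame, threshold to
        black/white, and return the average of the binary frames."""
        rng = np.random.default_rng(seed)
        frames = [(image + rng.normal(0.0, noise_sd, image.shape)) > threshold
                  for _ in range(n_frames)]
        return np.mean(frames, axis=0)

    # A stand-in "picture": a faint left-to-right ramp, everywhere below threshold.
    picture = np.tile(np.linspace(0.0, 0.45, 64), (64, 1))

    for sd in (0.02, 0.15, 1.0):   # too little, roughly right, too much noise
        averaged = noisy_threshold_frames(picture, noise_sd=sd)
        r = np.corrcoef(averaged.ravel(), picture.ravel())[0, 1]
        print(f"noise sd {sd:4.2f}: correlation with the original = {r:.2f}")

Typically the middle noise level recovers the picture best: the time-averaged frames correlate most strongly with the original there, and the match falls away when there is too little noise to push anything over the threshold or so much that the output is mostly random.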

How It Works

Having evolved to cope with noisy internal signals gives you a more robust
system. The brain has developed to handle the odd anomalous data point, to account for
random inputs thrown its way by the environment. We can make sense of the whole even if
one of the parts doesn’t entirely fit (you can see this in our ability to simultaneously
process information [Robust Processing Using Parallelism], as well). “Happy
Birthday” sung down a crackly phone line is still “Happy Birthday.” Compare this with your
precision-designed PC; the wrong instruction at the wrong time and the whole thing
crashes. The ubiquity of noise in neural processing means your brain is more of a
statistical machine than a mechanistic one.

That’s just a view of noise as something to be worked around, however. There’s another function that noise in neural systems might be performing: a phenomenon from control theory called stochastic resonance. The idea is that when noise is added to a weak signal, the peaks of the combined signal reach higher than the signal alone ever would. Counterintuitively, this means that adding the right amount of noise to a weak signal can lift it above the threshold for detection, making it easier to detect, not harder. Figure 2-32 shows this in graphical form. The smooth curve is the varying signal, but it never quite reaches the activation threshold. Adding noise to the signal produces the jagged line, which, messy as it is, still has the same average value and yet rises over the threshold for detection at certain points.

Figure 2-32. Adding noise to a signal brings it above threshold, without changing the mean value of the signal

Just adding noise doesn’t always improve things, of course: you might now find the detection threshold being crossed even when there is no signal at all. Stochastic resonance works best when there is another dimension, such as time, across which you can compare signals. Since noise changes over time, you can also make use of how often the detection threshold is crossed.
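
Here is a rough Python sketch of that time-based trick; the sine-wave signal, threshold value, noise levels, and smoothing window are all illustrative choices of ours rather than a model of any real neuron. A signal that never reaches the threshold on its own becomes detectable once a moderate amount of noise makes the rate of threshold crossings rise and fall along with it.

    import numpy as np

    rng = np.random.default_rng(1)
    t = np.linspace(0.0, 10.0, 10_000)
    signal = 0.8 * np.sin(2 * np.pi * t)   # peaks at 0.8, never reaching the threshold
    threshold = 1.0

    for noise_sd in (0.05, 0.5, 10.0):     # too little, roughly right, too much noise
        noisy = signal + rng.normal(0.0, noise_sd, t.shape)
        crossings = (noisy > threshold).astype(float)
        if crossings.sum() == 0:
            print(f"noise sd {noise_sd:5.2f}: threshold never crossed, nothing detected")
            continue
        # Smooth the crossings into a local rate and compare it to the signal.
        window = np.ones(100) / 100
        rate = np.convolve(crossings, window, mode="same")
        r = np.corrcoef(rate, signal)[0, 1]
        print(f"noise sd {noise_sd:5.2f}: crossing rate tracks the signal, r = {r:.2f}")

With too little noise, the threshold is rarely or never crossed and nothing gets through; with far too much, crossings are so frequent and so random that their rate only weakly reflects the signal; in between, the crossing rate tracks the hidden signal closely. That middle regime is the resonance.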

In Simonotto’s applet, white pixels correspond to where the detection threshold has
been crossed, and a flickering white pixel averages to gray over time. In this example,
you are using time and space to constrain your judgment of whether you think a pixel has
been correctly activated, and you’re working in cooperation with the noise being added
inside the applet, but this is exactly what your brain can do too.

End Note
  1. Simonotto, E., Riani, M., Seife, C., Roberts, M., Twitty, J., & Moss, F. (1997). Visual perception of stochastic resonance. Physical Review Letters, 78(6), 1186–1189.
See Also
  • A practical application of stochastic resonance, used to improve vowel coding in cochlear implants: Morse, R. P., & Evans, E. F. (1996). Enhancement of vowel coding for cochlear implants by addition of noise. Nature Medicine, 2(8), 928–932.
Chapter 3. Attention: Hacks 34–43

It’s a busy world out there, and we take in a lot of input, continuously. Raw
sense data floods in through our eyes, ears, skin, and more, supplemented by memories and
associations both simple and complex. This makes for quite a barrage of information; we simply
haven’t the ability to consider all of it at once.

How, then, do we decide what to attend to and what to ignore (at least for now)?

Attention is what it feels like to give more resources over to some perception or set of
perceptions than to others. When we talk about attention here, we don’t mean the kind of
concentration you give to a difficult book or at school. It’s the momentary extra importance
you give to whatever’s just caught your eye, so to speak. Look around the room briefly. What
did you see? Whatever you recall seeing — a picture, a friend, the radio, a bird landing on the
windowsill — you just allocated attention to it, however briefly.

Or perhaps attention isn’t a way of allocating the brain’s scarce processing resources.
Perhaps the limiting factor isn’t our computational capacity at all, but, instead, a physical
limit on action. However much we can perceive simultaneously, we’re able to act in only one
way at any one time. Attention may be a way of throwing away information, of narrowing down
all the possibilities, to leave us with a single conscious experience to respond to, instead
of millions.

It’s hard to come up with a precise definition of attention. Psychologist William James,1 in his 1890 The Principles of Psychology, wrote: “Everyone knows what attention is.” Some would say that a more accurate and useful definition has yet to be found.

That said, we can throw a little light on attention to see how it operates and feels. The hacks in this chapter look at how you can voluntarily focus your visual attention [Detail and the Limits of Attention], what it feels like when you do (and when you remove it again) [Feel the Presence and Loss of Attention], and what is capable of overriding your voluntary behavior and grabbing attention [Grab Attention] automatically. We’ll do a little counting [Count Faster with Subitizing] too. We’ll also test the limits of shifting attention [Don’t Look Back! and Avoid Holes in Attention] and run across some situations in which attention lets you down [Blind to Change and Make Things Invisible Simply by Concentrating (on Something Else)]. Finally, we’ll look at a way your visual attention capacity can be improved [Improve Visual Attention Through Video Games].

End Note
  1. The Stanford Encyclopedia of Philosophy has a good biography of William James (http://plato.stanford.edu/entries/james).
Detail and the Limits of Attention
Focusing on detail is limited by both the construction of the eye and the attention
systems of the brain.

What’s the finest detail you can see? If you’re looking at a computer screen from about
3 meters away, 2 pixels have to be separated by about a millimeter or more for them not to
blur into one. That’s the highest your eye’s resolution goes.
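
The arithmetic behind that figure works out if you assume the textbook value of roughly one arcminute of visual angle for normal acuity; the one-arcminute figure is our assumption here, since the text gives only the distance and the separation.

    import math

    distance_m = 3.0                 # viewing distance from the screen
    acuity_deg = 1.0 / 60.0          # ~1 arcminute, a standard estimate of acuity
    separation_m = distance_m * math.tan(math.radians(acuity_deg))
    print(f"{separation_m * 1000:.2f} mm")   # prints roughly 0.9 mm at 3 meters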

But making out detail in real life isn’t just a matter of discerning the difference
between 1 and 2 pixels. It’s a matter of being able to focus on fine-grain detail among
enormously crowded patterns, and that’s more to do with the limits of the brain’s visual
processing than what the eye can do. What you’re able to see and what you’re able to look at
aren’t the same.

In Action

Figure 3-1 shows two sets of bars. One set of bars is within the resolution of attention, allowing you to make out details. The other is so crowded that you can’t differentiate the bars particularly well.

Figure 3-1. One set of bars is within the resolution of attention (right); the other is too detailed (left)1

Hold this book up and fix your gaze on the cross in the middle of Figure 3-1. To notice the difference, you have to
to be able to move your focus around without moving your eyes — it does come naturally, but
it can feel odd doing it deliberately for the first time. Be sure not to shift your eyes
at all, and notice that you can count how many bars are on the righthand side easily.
Practice moving your attention from bar to bar while keeping your eyes fixed on the cross
in the center. It’s easy to focus your attention on, for example, the middle bar in that
set.

Now, again without removing your gaze from the cross, shift your attention to the bars
on the lefthand side. You can easily tell that there are a number of bars there — the basic
resolution of your eyes is more than good enough to tell them apart. But can you count
them or selectively move your attention from the third to the fourth bar from the left?
Most likely not; they’re just too crowded.

How It Works

The difference between the two sets of bars is that the one on the right is within the resolution of visual selective attention because it’s spread out, while the one on the left is too crowded with detail.

“Attention” in this context doesn’t mean the sustained concentration you give (or
don’t give) the speaker at a lecture. Rather, it’s the prioritization of some objects at
the expense of others. Capacity for processing is limited in the brain, and attention is
the mechanism that allocates it. Put another way, you make out more detail in objects you’re paying attention to than in those you aren’t. Selective attention is
being able to apply that processing to a particular individual object voluntarily. While
it feels as if we should be able to select anything we can see for closer inspection, the
diagram with the bars shows that there’s a limit on what can be picked out, and the limit
is based on how fine the detail is.

We can draw a parallel with the resolution of the eye. In the same way that the eye’s resolution is highest in the center [See the Limits of Your Vision] and decreases toward the periphery, it’s easier for attention to select and focus on detail in the center of vision than further out. Figure 3-2 illustrates this limit.

On the left, all the dots are within the resolution required to select any one for
individual attention. Fix your gaze on the central cross, and you can move your attention
to any dot in the pattern. Notice how the dots have to be larger the further out from the
center they are in order to still be made out. Away from the center of your gaze, your
ability to select a dot deteriorates, and so the pattern has to be much coarser.

Figure 3-2. Comparing a pattern within the resolution of attention (left) with one that is too
fine (right)

The pattern on the right shows what happens when the pattern isn’t made that much coarser toward the edges. The dots are crowded together just a little too much for attention to cope, and if you keep your eyes on the central cross, you can’t voluntarily focus your attention on any particular dot any more. (This is similar to the lefthand set of bars in the first diagram, Figure 3-1.)

Also notice, in Figure 3-2, left, that the dots are closer together at the bottom of the pattern than at the top. They’re able to sit tighter because we’re better at making out detail in the lower half of vision; the resolution of attention is higher there. Given that eye level and below is where most of the action takes place, compared to the boring sky in the upper visual field, it makes sense to be optimized that way round. But precisely where this optimization arises in the structure of the brain, and how the limit on attentional resolution arises in general, isn’t yet known.

Why is selective attention important, anyway? Attention is used to figure out what to
look at next. In the dot pattern on the left, you can select a given dot before you move
your eyes, so it’s a fast process. But in the other diagram, on the right, moving your
eyes to look directly at a dot involves more hunting. It’s a hard pattern to examine, and
that makes examination a slow process.

In Real Life

Consider attentional resolution when presenting someone with a screen full of
information, like a spreadsheet. Does he have to examine each cell laboriously to find his way around it, like the crowded Figure 3-2, right? Or, like the one on the left, is it broken up into large areas, perhaps using
color and contrast to make it comprehensible away from the exact center of the gaze and to
help the eyes move around?

End Note
  1. Figures reprinted from Trends in Cognitive Sciences, 1(3), He, S., Cavanagh, P., & Intriligator, J., Attentional resolution, 115–121, copyright (1997), with permission from Elsevier.
Count Faster with Subitizing
You don’t need counting if a group is small enough; subitizing will do the job, and
it’s almost instant.

The brain has two methods for counting, and only one is officially called counting.
That’s the regular way — when you look at a set of items and check them off, one by one. You
have some system of remembering which have already been counted — you count from the top,
perhaps — and then increment: 7, 8, 9...

The other way is faster, up to five times faster per item. It’s called subitizing. The catch: subitizing works only for really small numbers, up to about 4. But it’s fast! So fast that until recently it was believed to be instantaneous.

In Action

See how many stars there are in the two sets in Figure 3-3. You can tell how many are in set A just by looking (there are three), whereas it takes a little longer to see there are six in set B.

Figure 3-3. The set of stars on the left can be subitized; the one on the right cannot

I know this feels obvious: it takes longer to see how many stars there are in the larger set. After all, there are more of them. But that’s exactly the point. If you can tell, seemingly immediately, how many stars there are when there are three of them, why not when there are six? Why not when there are 100?

How It Works

Subitizing and counting do seem like different processes. In studies of how long it takes a person to look at some shapes on a screen and report how many there are, response time grows by 40–80 milliseconds per item up to four items, then by 250–350 milliseconds per item beyond that.1 To put it another way, assessing the first four items takes only about a quarter of a second, and then roughly another second for every four items after that. That’s a big jump.
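
As a back-of-the-envelope model, that pattern amounts to a single change of per-item rate at four items. The 60 ms and 300 ms constants below are illustrative mid-range picks from the ranges quoted above, not figures taken from the study itself.

    def enumeration_time_ms(n_items, subitize_ms=60, count_ms=300):
        """Rough estimate of the time to report how many items are in view:
        the first four are subitized quickly, the rest are counted serially."""
        subitized = min(n_items, 4)
        counted = max(n_items - 4, 0)
        return subitized * subitize_ms + counted * count_ms

    for n in (3, 4, 6, 10):
        print(f"{n} items: about {enumeration_time_ms(n)} ms")

With these made-up constants, four items come in at roughly a quarter of a second, and each further block of four adds on the order of another second, which is the jump described above.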

The difference between the two is borne out by the subjective experience. Counting feels like a very deliberate act. You must direct your attention to each item. Your eyes
move from star to star. Subitizing, on the other hand, feels preattentive. Your eyes don’t
need to move from star to star at all. There’s no deliberate act required; you just know
that there are four coffee mugs on the table or three people in the lobby, without having
to check. You just look. It’s this that leads some researchers to believe that subitizing
isn’t an act in itself, but rather a side effect of visual processing.

We know that we are able to keep track of a limited number of objects automatically and follow them as they move around and otherwise change. Like looking at shadows to figure out the shape of the environment [Fool Yourself into Seeing 3D], object tracking seems to be a built-in feature of visual processing, an almost involuntary ability to keep persistent files open for objects in vision [Feel the Presence and Loss of Attention]. The limit on how many objects can be tracked and how many items can be subitized is curiously similar. Perhaps, say some, the reason subitizing is so quick is that the items to be “counted” have already been tagged by the visual processing system, and so there’s no further work required to figure out how many there are.2

In this view, counting is an entirely separate process that occurs only when the
object tracking capacity is reached. Counting then has to remember which items have been
enumerated and proceed in a serial way from item to item to see how many there are.
Unfortunately, there’s no confirmation of this view when looking at which parts of the
brain are active while each of the two mechanisms is in use.1 Subitizing doesn’t appear to use any separate part of the brain that isn’t
also used when counting is employed. That’s not to say the viewpoint of fast subitizing as
a side effect is incorrect, only that it’s still a conjecture.

Regardless of the neural mechanism, this does give us a hint as to why it’s quicker to
count in small clusters rather than one by one. Say you have 30 items on the table. It’s
faster to mentally group them into clusters of 3 each
(using the speedy subitizing method to cluster) and slowly count the 10
clusters, than it is to use no subitizing and count every one of the 30 individually. And
indeed, counting in clusters is what adults do.
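
Using the same illustrative per-item times (again our own mid-range picks, not measurements), a quick comparison shows why clustering 30 items pays off:

    SUBITIZE_MS, COUNT_MS = 60, 300   # illustrative per-item times, not measured data

    # One by one: the first four items are subitized, the remaining 26 are counted.
    one_by_one = 4 * SUBITIZE_MS + 26 * COUNT_MS

    # In clusters of three: subitize each group of three at a glance, plus one
    # counting step per group to keep the running total.
    clustered = 10 * (3 * SUBITIZE_MS) + 10 * COUNT_MS

    print(f"one by one: {one_by_one} ms, in clusters of three: {clustered} ms")

With these numbers the clustered strategy comes out well ahead, although the exact ratio depends entirely on the per-item constants you pick.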

In Real Life

You don’t have to look far to see the real-life impact of the speed difference between
sensing the quantity of items and having to count them.

Some abaci have 10 beads on a row. These would be hard — and slow — to use if it weren’t
for the Russian design of coloring the two central beads. This visual differentiation
divides a row into three groups of four, two, and four beads, each small enough to subitize instantly with no need for actual counting. It’s a little design assistance to work
around a numerical limitation of the brain.

We also subitize crowds of opponents in fast-moving, first-person shooter video games
to rapidly assess what we’re up against (and back off if necessary). The importance of
sizing up the opposition as fast as possible in these types of games has the nice side
effect of training our subitizing routines [Improve Visual Attention Through Video Games].

End Notes
  1. Piazza, M., Mechelli, A., Butterworth, B., & Price, C. J. (2002). Are subitizing and counting implemented as separate or functionally overlapping processes? NeuroImage, 15, 435–446.
  2. Trick, L. M., & Pylyshyn, Z. W. (1994). Why are small and large numbers enumerated differently? A limited-capacity preattentive stage in vision. Psychological Review, 101(1), 80–102.
