BOOK: Statistics for Dummies
Author: Deborah Jean Rumsey

Behind the Scenes: The Ins and Outs of Surveys

Surveys and their results are a part of your daily life, and you use these results to make decisions that affect your life. (Some decisions may even be life changing.) Looking at surveys with a critical eye is important. Before taking action or making decisions based on survey results, you must determine whether those results are credible, reliable, and believable. A good way to begin developing
these detective skills is to go behind the scenes and see how surveys are designed, developed, implemented, and analyzed.

The survey process can be broken down into a series of ten steps:

  1. State the purpose of the survey.

  2. Define the target population.

  3. Choose the type of survey.

  4. Design the questions.

  5. Consider the timing of the survey.

  6. Select the sample.

  7. Collect the data.

  8. Follow up, follow up, and follow up.

  9. Organize and analyze the data.

  10. Draw conclusions.

Each step presents its own set of special issues and challenges, but each step is critical in terms of producing survey results that are fair and accurate. This sequence of steps helps you design, plan, and implement a survey, but it can also be used to critique someone else's survey, if those results are important to you.

Planning and designing a survey

The purpose of a survey is to answer questions about a target population. The target population is the entire group of individuals that you're interested in drawing conclusions about. In most situations, surveying the entire target population is impossible because researchers would have to spend too much time or money to do so. (When you do a survey of the entire target population, that's called a census.) Usually, the best you can do is to select a sample of individuals from the target population, survey those individuals, and then draw conclusions about the target population based on the data from that sample.

Sounds easy, right? Wrong. Many potential problems arise after you realize that you can't survey everyone in the entire target population. Unfortunately, many surveys are conducted without taking the time needed to think through these issues, which leads to errors, misleading results, and wrong conclusions.

Stating the purpose of the survey

This sounds like it should just be common sense, but in reality, many surveys have been designed and carried out that never met their purpose, or that met only some of their objectives. Getting lost in the questions and forgetting what you're really trying to find out is easy to do. In stating the purpose of a survey, be as specific as possible. Think about the types of conclusions you would want to make if you were to write a report, and let that help you determine your goals for the survey.

Tip 

The more specific you can be about the purpose of the survey, the more easily you can design questions that meet your objectives and the better off you'll be when you need to write your report.

Defining the target population

Suppose, for example, that you want to conduct a survey to determine the extent to which people engage in personal e-mail usage in the workplace. You may think that the target population is e-mail users in the workplace. However, you want to determine the extent to which e-mail is used in the workplace, so you can't just ask e-mail users, or your results would be biased against those who don't use e-mail in the workplace. But should you also include those who don't even have access to a computer during their workday? (See how fast surveys can get tricky?)

The target population that probably makes the most sense here is all of the people who use Internet-connected computers in the workplace. Everyone in this group at least has access to e-mail, though only some of those with access to e-mail in the workplace actually use it, and of those who use it, only some use it for personal e-mail. (And that's what you want to find out—how much they do use e-mail for that purpose.)

REMEMBER 

You need to be clear in your definition of the target population. Your definition is what helps you select the proper sample, and it also guides you in your conclusions, so that you don't over-generalize your results. If a researcher hasn't clearly defined the target population, that can be a sign of other problems with the survey.

Choosing the type of survey

The next step in designing your survey is to choose what type of survey is most appropriate for the situation at hand. Surveys can be done over the phone, through the mail, with door-to-door interviews, or over the Internet. However, not every type of survey is appropriate for every situation. For example, suppose you want to determine some of the factors that relate to illiteracy in the United States. You wouldn't want to send a survey through the mail, because people who can't read won't be able to take the survey. In that case, a telephone interview is more appropriate.

REMEMBER 

Choose the type of survey that's most appropriate for the target population, in terms of getting the most truthful and informative data possible. When examining the results of a survey, be sure to look at whether the type of survey used is most appropriate for the situation.

Designing the questions

After the purpose of the survey has been clearly outlined and you've chosen the type of survey you're going to use, the next step is to design the questions. The way that the questions are asked can make a huge difference in the quality of the data that will be collected. One of the most common sources of bias in surveys is the wording of the questions. Leading questions (questions that are designed to favor a certain response over another) can greatly affect how people answer, and these responses may not accurately reflect how the respondents truly feel about an issue. For example, here are two ways that you could word a survey question about a proposed school bond issue (both of which are leading questions):

Don't you agree that a tiny percentage increase in sales tax is a worthwhile investment in improving the quality of the education of our children?

Don't you think we should stop increasing the burden on the taxpayers and stop asking for yet another sales tax hike to fund the wasteful school system?

From the wording of each of these leading questions, you can easily see how the pollster wants you to respond. Research shows that the wording of the question does affect the outcome of surveys. The best way to word a question is in a neutral way. For this example, the question should be worded like this:

The school district is proposing a 0.01% increase in sales tax to provide funds for a new high school to be built in the district. What's your opinion on the proposed sales tax? (Possible responses: strongly in favor, in favor, neutral, against, strongly against.)

In a good survey, the questions are always worded in a neutral way in order to avoid bias. The best way to assess the neutrality of a question is to ask yourself whether you can tell how the person wants you to respond by reading the question. If the answer is yes, that question is a leading question and can give misleading results.

Tip 

If the results of a survey are important to you, ask the researcher for a copy of the questions used on the survey, so you can assess the quality of the questions.

Timing the survey

In a survey, as in life, timing is everything. Current events shape people's opinions all the time, and whereas some pollsters try to determine how people feel about those events, others take advantage of events, especially negative ones, and use them as political platforms or as fodder for headlines and controversy. The timing of any survey can also cause bias, regardless of the subject matter. For example, suppose your target population for a survey
is people who work full time. If you conduct a telephone survey to get office workers' opinions on personal e-mail use at work, and you call them at home between the hours of 9 a.m. and 5 p.m., you're going to have bias in your results, because those are the hours when the majority of office workers are at work.

HEADS UP 

Check out when a survey was conducted (time and date) and see whether you can determine any relevant events that occurred at that time that may have influenced the results. Also verify that the survey was conducted during a time of the day that's most convenient for the target population to respond.

Selecting the sample

After the survey has been designed, the next step is to select the people who will participate in the survey. Because you typically don't have the time or money to conduct a census (a survey of the entire target population), you need to select a subset of the population, called a sample. How this sample is selected can make all the difference in terms of the accuracy and the quality of the results.

Three criteria are important in selecting a good sample:

  • A good sample represents the target population.
    To represent the target population, the sample must be selected from the target population, the whole target population, and nothing but the target population. Suppose you want to find out how many hours of TV Americans watch in a day, on average. Asking students in a dorm at a local university to record their TV viewing habits isn't going to cut it. Students represent only a portion of the target population. Asking people to call in their opinions on a radio show is not going to give you a sample that represents the target population; the results will represent only the people who were listening to the show, were able to call at the appropriate time, and felt strongly enough about the issue to make the effort to call in. Likewise, a Web survey will represent only the people who have access to the Internet and who logged on to the site where the survey was posted.

    HEADS UP 

    Unfortunately, many people who conduct surveys don't take the time or spend the money to select a representative sample of people to participate in the study, and this leads to biased survey results. When presented with survey results, find out how the sample was selected before examining the results of the survey.

  • A good sample is selected randomly.
    A random sample is one in which every member of the target population has an equal chance of being selected. The easiest example to visualize here is that of a hat (or bucket) containing individual slips of paper, each with the name of a person written on it; if the slips are thoroughly mixed before each slip of paper is drawn out, the result will be a random sample of the target population (in this case, the population of people whose names are in the hat). A random sample eliminates bias in the sampling process.

    Reputable polling organizations, such as The Gallup Organization, use a random digit dialing procedure to telephone the members of their sample. Of course, this excludes people without telephones, but because most American households today have at least one telephone, the bias involved in excluding people without telephones is relatively small.

    HEADS UP 

    Beware of surveys that have a large sample size but where that sample is not randomly selected. Internet surveys are the biggest culprit. Someone can say that 50,000 people logged on to a Web site to answer a survey, and that means that the Webmaster of this site has a lot of information. But that information is biased, because it doesn't represent the opinions of anyone except those who knew about the survey, chose to participate in it, and had access to the Internet. In a case like this, less would have been more: This survey designer should have sampled fewer people but done so randomly.

  • A good sample is large enough for the results to be accurate.
    If you have a large sample size, and if the sample is representative of the target population and is selected at random, you can count on that information being pretty accurate. How accurate depends on the sample size: the bigger the sample size, the more accurate the information will be (as long as that information is good information). The accuracy of most survey questions is measured in terms of a percentage. This percentage is called the margin of error, and it represents how much the researcher expects the results to vary if he or she were to repeat the survey many times using different samples of the same size. Read more about this in Chapter 10.

    Tip 

    A quick-and-dirty formula to estimate the accuracy of a survey is to take 1 divided by the square root of the sample size. For example, a survey of 1,000 (randomly selected) people is accurate to within 1 ÷ √1,000 ≈ 0.032, or about 3.2 percentage points. (Note that in cases where not everyone responded, you should replace the sample size with the number of respondents. See the "Following up, following up, and following up" section later in this chapter.)
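The names-in-a-hat idea of random sampling and the 1-over-the-square-root-of-n accuracy rule from the tips above can be sketched in a few lines of Python. (The population list and function name here are illustrative, not from the book.)

```python
import math
import random

def quick_margin_of_error(sample_size):
    """Quick-and-dirty accuracy estimate: 1 divided by the square root of n."""
    return 1 / math.sqrt(sample_size)

# A survey of 1,000 randomly selected people:
moe = quick_margin_of_error(1000)
print(f"Margin of error: {moe:.3f} (about {moe * 100:.1f} percentage points)")
# prints: Margin of error: 0.032 (about 3.2 percentage points)

# The "names in a hat" idea: draw 1,000 names from a hypothetical
# target population of 10,000, each with an equal chance of selection.
population = [f"person_{i}" for i in range(10_000)]
sample = random.sample(population, 1000)  # sampling without replacement
```

Note that `random.sample` draws without replacement, just like pulling slips of paper out of a hat without putting them back, so no one ends up in the sample twice.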
