
Authors: Marianne Franklin


BOOK: Understanding Research
  • Data-gathering techniques review
  • Surveys and questionnaires
  • Interviews and focus groups
  • Ethnographic fieldwork and participant-observation
PREAMBLE: INTRODUCTION TO PART 2

It is a good feeling to get a research inquiry off the drawing board and into motion. As I noted in Chapter 2, being on the road in a research project generates its own pay-off. There is a certain satisfaction in finally being underway; moreover, once underway, an inquiry starts to generate its own ‘critical mass’ as we learn more and become more confident about the direction we are taking.

The second part of this book (Chapters 6 to 8) focuses on the nitty-gritty of doing research; some of which we need to address when planning and designing, and some of which present their own conundrums and decisions to make as we are in the middle of things. In that respect, with the overview of both the main elements and the main stages of a project (Chapter 2) and the survival guide from Chapter 3 in mind, these two chapters walk through selected ways by which researchers generate, gather, and analyse their raw data, or empirical material. These specific techniques, along with various intradisciplinary and cross-cutting schools of thought about their respective ‘rights’ and ‘wrongs’, are foundational for much research undertaken by student- and faculty-level researchers in the humanities and social sciences.

For the sake of argument, the act of gathering and that of analysing our core material have been split. First, this chapter takes a look at established ‘rules and procedures’ (viz. methods) for generating and gathering material directly from social actors; in quantifiable or qualitative form, as existing empirical material (for example, public databases) or as data generated by the researchers themselves. Chapter 7 looks at approaches that put the stress on ways to develop and then apply analytical procedures to the activities and outcomes of social interaction. Here the larger object and specific units of analysis are observed – and conceived – as ‘content’, ‘text’, or ‘discourse’.

In this respect these two chapters work in tandem. Given the range – from highly specialized to more generic methods – available today for achieving all levels of research projects, three distinctions govern the way the material is covered in these chapters:

  1. Ways of generating and gathering data on the one hand, and ways of analysing this material on the other, are distinct but work in tandem, in that the researcher generates and communicates the findings, inferences drawn, and arguments made as knowledge.
  2. For methodologies that place analysis centre-stage, the material under investigation could be generated and gathered by the researcher. It can also be preexisting, pre-sorted publicly available material; for example, a historical archive, set of policy documents, television programme or series.
  3. In both cases we see that approaches to certain ways of conducting analysis and how the data is gathered are both interconnected and distinct; for example, notes or transcripts of interviews we conduct can be analysed in several ways. Conversely, different sorts of specialized techniques of visual analysis apply variously to respective media – still photographs, audio-visual material, film (digital and acetate).

In light of our ongoing goal to understand research as a pragmatic and intellectual endeavour, the areas covered from here on in are selective discussions of principles and techniques that may need adjusting and reappraising according to their role in your inquiry, the wider context in which you are carrying it out, and the various rights and obligations that these all bring with them.

CHAPTER AIMS AND ORGANIZATION

The aim of this first chapter of Part 2 is to engage with a cluster of techniques that focus on researching human subjects and ways of generating and gathering data particular to, but also shared by, quantitative and qualitative research sensibilities. These are (1) surveys and questionnaires; (2) interviews; (3) focus groups; and (4) participant-observation and ethnographic fieldwork. This latter category encompasses the first three as well as denoting a locale for conducting the work by way of varying degrees and levels of (partially or fully immersed) participant-observation.

In this chapter and the next, the discussion is along three axes: general principles, practicalities (how-to), and wider methodological considerations. Any ‘rules and procedures’ presented here are not intended as one-size-fits-all models. Moreover, they are not always compatible in that they encompass both overlapping and divergent ways of assembling evidence; textual or numerical, experiential or performed, spontaneous or produced.

TIP: Each aspect touches on vibrant literatures and ongoing debates, pertaining to both general and highly specialized points of agreement and contradiction within and across even sub-disciplines; you need to become more conversant with these details before committing.

  • Some further reading is provided at the end of each section.
  • Touch base with mentors and supervisors as need be.
  • At some point though, you need to get on with the thing; deal with the outcomes as part of your findings, discussion or argument.
  • Trying out some of these techniques as part of a pilot (try-out) can be productive and instructive.

As a bridge, and in light of the last two chapters, the following section reviews key distinctions between data-gathering and analysis as commonly understood by quantitative or qualitative research traditions.

DATA-GATHERING TECHNIQUES – REVIEW

First, qualitative data-gathering and analysis entails material that cannot be, or need not be, counted:

  • Using non-statistical means of collating, analysing the material, and then drawing conclusions.
  • This material encompasses ‘meanings, concepts, definitions, characteristics, metaphors, symbols, [experiences] and descriptions of things . . . [that] cannot be meaningfully expressed by numbers’ (Berg 2009: 3; see also Creswell 2009: 173–6, Gray 2009: 493 passim).

Three broad sorts of data-gathering and analysis techniques flow from here:

  1. Researching human subjects: face-to-face and/or computer-mediated contexts; interviews, surveys (small-scale), focus groups, auto-generation of contemporary archives (diaries, narrative work) in settings where the researcher encounters selected individuals or groups at set times and places.
  2. Research in a field, with/within communities or groups – human and/or virtual; fieldwork carried out by participation and/or observation (partial or full immersion, short-term or long-term) and accompanying records of observations, events, and interactions; the researcher may set up interviews, focus groups, and other sorts of spontaneous or premeditated interactions with research subjects close-up and ‘in situ’.
  3. Researching documents and other ‘social texts’ – hardcopy and/or digital; where the researcher accesses historical archives, policy documents, visual depositories, or online production to conduct forms of content/textual analysis; these texts are treated as self-contained and/or interactively generated written, visual, or multimedia items.

These preferred approaches need not rule out the incorporation of quantitative data-gathering or analysis techniques.

Quantitative data-gathering concentrates on collecting material that is countable or measurable:

  • The form the data takes is numerical and is made sense of primarily by the use of techniques of statistical analysis, i.e. making inferences on the basis of statistical probability.
  • Research designs in these cases are largely set up to test hypotheses, to make informed generalizations about past behaviours, or to generate predictive models (see Berg 2009: 342).
  • Data is collected under controlled conditions in standardized and replicable ways, on both a large and a smaller scale.
  • Automated analytical tools and the skills required to use them go hand-in-hand.
  • Many quantitative projects make use of publicly available data sets; for example, national census results, government statistics.

These sorts of techniques and their accompanying conditions need not exclude qualitative sorts of data or their application in mixed-method projects.

  1. Survey-based research on human subjects: large-scale questionnaires and/or standardized interviews comprise this work; media effects, audience research, and media uses (for example, ratings, website clicks) make use of these techniques; combinations and comparisons of existing data sets, for example elections, census results, land and survey data, apply statistical and modelling techniques as well.
  2. Content analysis: the counting, codifying, and collating of quantifiable manifest content of written text; for example, keywords, or ‘nearest neighbour’ analysis (e.g. the relationship between negative terminology and terms such as ‘teenage boys’ or ‘homeless people’). For web-based texts, this also entails the counting and coding of hyperlinks and online ‘hubs’ as visualizations of quantifiable web-uses (search engine results, tweets, web-link ‘hubs’).
  3. (Quasi-)experimental: to test behaviour under entirely controlled – laboratory – conditions (experimental); social scientists also set up semi-controlled (quasi-)experiments, for example, on the effect of violent images on subjects’ neurological or physiological readings.
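The manifest content analysis described in item 2 can be sketched computationally. Below is a minimal, hypothetical illustration using only Python's standard library; the text fragment, keyword list, and window size are invented for demonstration, not drawn from this book:

```python
from collections import Counter
import re

# Hypothetical corpus fragment; in practice this would be a policy
# document, news archive, or interview transcript.
text = """Local press coverage linked teenage boys to vandalism,
while homeless people were described as a nuisance by officials."""

# Tokenize into lowercase words.
tokens = re.findall(r"[a-z]+", text.lower())

# Manifest content: simple counts of predefined keywords.
keywords = {"teenage", "homeless", "vandalism", "nuisance"}
counts = Counter(t for t in tokens if t in keywords)

# 'Nearest neighbour' analysis: words appearing within a 3-word
# window either side of a target term (here, 'homeless').
window = 3
neighbours = Counter()
for i, t in enumerate(tokens):
    if t == "homeless":
        for n in tokens[max(0, i - window): i + window + 1]:
            if n != t:
                neighbours[n] += 1

print(counts)
print(neighbours.most_common(3))
```

Real content-analysis projects add a coding scheme, inter-coder reliability checks, and far larger corpora; this sketch only shows the counting logic itself.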

As noted above, these are not necessarily mutually exclusive approaches. For instance:

  • some initial survey work can isolate potential interviewees for more in-depth interviews, or prepare the ground for focus-group work;
  • a discourse analysis of the sub-text of a particular sub-set or genus of government policy-making could be accompanied by a comprehensive content-analysis of recurring or significant keywords or phrases;
  • policy-makers could be interviewed to provide the ‘back-story’ of the drafting process.

This is not to suggest that all combinations are equally feasible, or all data-gathering techniques viable for all research questions in an unproblematic way. It is up to you to consider the implications of each approach for your inquiry; some data and the ways to collect them suit certain research questions better than others, some are more adaptable, some require more set-up time, whilst others may be easier to carry out but produce copious amounts of data to process and analyse. It should be apparent by now just how mobile and intransigent some of these distinctions can be within and across academic disciplines and geographies.

Let’s start with a mainstay of much social research and, arguably, an approach that exemplifies the quantitative modes of gathering and analysing large amounts of data: conducting surveys based on standardized questionnaires.

SURVEYS AND QUESTIONNAIRES

First, surveys are suitable when a researcher is looking to capture attitudes or opinions, or to gain insight into how people behave (based on what people tell you). When the aim is to gather this information from a large group of people, survey instruments – based on questionnaires and the statistical collation of the findings – are a well-established practice.

Second, surveys are used to garner detailed information in order to describe a particular population; national census surveys are one example. Survey work presumes that the population it looks to study is accessible and available for surveying; for example, by phone, on foot (door-to-door polling for instance), or online via the web or emails.

General principles

The optimal survey instrument you use or adapt for your inquiry is one that can generate as accurate a representation of people’s opinions, preferences, or behaviours as possible. The information yielded comes in three broad categories:

  1. Reports of fact: self-disclosure of basic information (demographic for instance) such as age, gender, education level, income, or behaviour (for example, which candidate they voted for).
  2. Ratings of people’s opinions or preferences: responses gathered here are evaluative in response to a statement; for example, levels of satisfaction, agreement, or dislikes (e.g. of television programmes, or university course evaluations).
  3. Reports of intended behaviour: here the questions would be asked in such a way as to get people to disclose their motivations, or intentions about some action (for example, likeliness or willingness to buy or use a product if it were offered in a certain way).

Most of us have probably been asked to participate in a survey at some point in our lives, agreeing or refusing as the case may be. So, recall how you yourself may have chosen to respond, or not respond, to any of the above sorts of surveys by lecturers, marketers, or pollsters, and remember this when compiling your own survey questions or approaching others. In everyday life, journalism, marketing, and political surveys such as public opinion polls generate important data sets for use in the social sciences. Large or small, the promises held out by survey work come with a number of trade-offs.

First the up-side. Survey work can be:

  1. An effective way to represent opinions across a population (or populations) as well as information about individuals and groups of people. Surveys provide the means by which researchers can describe populations they cannot, or need not, directly observe or personally interact with.
  2. Backed by a large literature that debates and lays out the generic and discipline-specific rules and procedures for effective data-gathering and quantitative analysis. The aim is that the results can be replicated by others; rerun and reanalysed as need be.
  3. A productive way to ascertain baseline information about a group for more in-depth interviewing or focus group work. In other words, survey work is not necessarily the antithesis of qualitative modes of analysis.

The downsides of survey work are that:

  1. Drawing causal inferences from limited survey-based observations is difficult. On a larger scale, the premise that probability sampling best proves or predicts human behaviour is an ongoing debate (see Ginsberg 1986, Lewis 2001, Lippmann 1998).
  2. A survey stands and falls, whether it is large or small-scale, on the strength of its questions; weakly formulated or inappropriate questions create unsatisfactory results and response-rates.
  3. People can misreport, deliberately or through carelessness or lack of attention; in long surveys especially.
  4. Moreover, many people do not want to do surveys. In short, getting an adequate response rate for the population you want to survey is not a given; it takes work.
  5. People can react differently to the same question; culturally ambiguous or controversial questions can generate strong emotions (see also the discussion of interviews below).

On deciding whether or not to embark upon a survey, large or small, note that for inquiries looking to explore in depth what people think or experience, or those based on how people behave or interact in groups (whether in a simulated or naturalistic setting, or over time), survey instruments will not produce the sort of data you are looking for; participant-observation, focus group work, or perhaps an experiment may well be better suited.

Practicalities

Key practicalities entail being clear about the difference between (1) the population for the survey and the sampling of that population (not all surveys study all things equally); (2) different ways of administering a survey; (3) sorts of questions and question design; and (4) the need for pre-testing before carrying out the survey proper.

Sampling

This refers to the selection made once the researcher has established the population from whom they will contact a group of respondents; the size and composition of the sample relates to the aims and objectives of the inquiry. Basically, you need to ascertain the characteristics of the target population, and how many from that population you need/want to survey.

  • This implies that the sample should be representative, i.e. that all possible permutations of the population are accounted for within the sample. Representative samples generally permit stronger generalizations in the final analysis; one reason why random sampling is indispensable for larger survey instruments.
  • Once you have drawn up a comprehensive list of identifiers for this population you can then sample; for example, one population could be international students, comprising students from different countries, doing different sorts of study, speaking different languages or coming from one area/language group. You may need to include the age-range, academic background, gender, and income-support relevant to this larger population, and so on.

In short, populations need to be clarified in terms of the research question. Moreover, the criteria by which you designate your sample are not always self-explanatory.

  • You need to define your population clearly before setting out; then consider how to sample, by random or other means.
  • A sample can be arrived at from a population systematically as well as randomly; for example, by putting the names of all international students enrolled in your programme in a hat and picking out the first ten, or more.
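The hat exercise just described amounts to a simple random sample; a systematic alternative takes every k-th name from the list instead. A minimal sketch in Python, where the student roster is entirely hypothetical:

```python
import random

# Hypothetical sampling frame: all international students enrolled
# in the programme (40 invented names).
population = [f"student_{i:02d}" for i in range(1, 41)]

# Simple random sample: every student has an equal chance of being
# drawn (the computational equivalent of names in a hat).
random.seed(1)  # fixed seed so the draw is reproducible here
random_sample = random.sample(population, k=10)

# Systematic sample: pick a random starting point, then take
# every k-th name down the list.
k = len(population) // 10  # sampling interval of 4
start = random.randrange(k)
systematic_sample = population[start::k]

print(random_sample)
print(systematic_sample)
```

Note that systematic sampling only behaves like random sampling when the list order is unrelated to the characteristics being studied; an alphabetical roster sorted by nationality, say, would bias the draw.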

This brings us to the various types of samples. There are several sorts of samples along the spectrum, from fully random to highly selective.

  • Samples based on statistical probability assume that every element or unit in the population has some likelihood – a non-zero probability – of being in the sample.
  • The thing to note here is that only samples based on probability can be analysed according to theories of probability, including statistical estimations of margins of error. In short, if your sample is not based on probability/forms of random selection, then applying an advanced level of statistical analysis to your findings is inappropriate.
  • This is not to say that other samples are not possible, particularly for smaller projects. Samples based on non-probability include:
    • Starting with your own friends and then their friends, in what is called the snowballing technique; a popular and useful one.
    • Using your classmates, in what is a sample-of-convenience, can also generate some interesting results.
    • As can more targeted forms like quota sampling; for example, asking students from one part of the world within a certain age group to respond up to a certain total, or
    • What some call accidental sampling (for example, on-the-street surveys that stop passers-by or whoever is sitting near you in the canteen).
    • Another sort of sample is to consciously select key actors; those figures who are important to an event or decision-making process; for example, registered participants in an international summit, cabinet members, or civil servants.
  • Often used alongside other, qualitative forms of data-gathering and analysis, non-probability samples can provide useful insights. Any conclusions drawn are done so on the understanding that these are not random samples, and so claims to representativeness are conditional.
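The snowballing technique in the list above can be pictured as following referral links outward from a few seed respondents until the target sample size is reached. A toy sketch over an invented referral network (all names hypothetical):

```python
from collections import deque

# Hypothetical referral network: each respondent names friends who
# might also take part.
referrals = {
    "ana":  ["ben", "cara"],
    "ben":  ["dev"],
    "cara": ["dev", "eli"],
    "dev":  [],
    "eli":  ["fay"],
    "fay":  [],
}

def snowball(seeds, target_size):
    """Recruit respondents breadth-first along referral links until
    the target sample size is reached (or referrals run out)."""
    sample, queue = [], deque(seeds)
    seen = set(seeds)
    while queue and len(sample) < target_size:
        person = queue.popleft()
        sample.append(person)
        for friend in referrals.get(person, []):
            if friend not in seen:
                seen.add(friend)
                queue.append(friend)
    return sample

print(snowball(["ana"], 5))  # → ['ana', 'ben', 'cara', 'dev', 'eli']
```

The sketch also makes the methodological caveat visible: who ends up in the sample depends entirely on the seeds and the shape of the network, which is exactly why snowball samples cannot support probability-based claims.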

This brings us to the next major decision (best taken before setting out, by the way).

How big should a sample be?

The first response to this question for individual researchers is that the size of your sample depends on time and cost constraints, as well as the nature of your research question and the role any survey is to play in the larger inquiry.

Size also relates to how much precision is required. For instance, larger samples selected on the basis of probability, or high response rates from a surveyed population (for example, a class of students evaluating a course), provide more bases for generalizations; the larger the sample, the closer it correlates with the population. This is why more heterogeneous populations (for example, national citizenries) require larger sample sizes to make sense. Sometimes the answer to how large your survey needs to be can be ascertained by doing a pilot survey and considering the viability of not only any results but also the response rate.
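The link between sample size and precision can be made concrete with the standard margin-of-error approximation for a sample proportion. This sketch assumes a simple random sample and a 95 per cent confidence level; the figures are illustrative only:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a sample proportion p
    drawn from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

def sample_size(e, p=0.5, z=1.96):
    """Sample size needed to achieve margin of error e
    (worst case at p = 0.5)."""
    return math.ceil((z ** 2) * p * (1 - p) / (e ** 2))

# Quadrupling the sample only halves the margin of error, which is
# why precision gets expensive quickly.
print(margin_of_error(400))   # about 0.049, i.e. +/- 5 points
print(margin_of_error(1600))  # about 0.0245
print(sample_size(0.03))      # roughly 1,068 respondents
```

This is also why national polls converge on samples of around a thousand respondents: beyond that, extra respondents buy very little additional precision. Remember that the formula only applies to probability samples, as noted in the section above.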

Figure 6.1
Surveys – a waste of time

Source
: Fran Orford:
http://www.francartoons.com

Methodological considerations

How you administer any survey, large or small, based on probability or not, requires some forethought and experimentation. Likewise for designing the questions and deciding which to ask; readily available survey tools and services on the web do not get you around this part of the work; a survey tool is not in itself a method. The main pros and cons of survey work were discussed above.

Two other points relate to broader methodological considerations when doing survey work as well as ascertaining whether this approach is best-suited to your inquiry.

Modes of administration

A survey can be administered by getting people to do it themselves or by asking them the questions directly. Although the former is more convenient, and greatly enhanced by web-based or email access to potentially ever-expanding populations out there in global cyberspace, it is worth considering the trade-offs for each approach.

  • 1 Self-administered surveys see subjects responding to ready-made questionnaires, digital being the most efficacious at the data-gathering point. Here respondents can be asked longer, more complex, or even visually-based questions they can answer at their leisure and without influence from others around them (not the case in classroom evaluation surveys!).

But there is a downside: little direct control over the administration, the need for questions to be well-designed with all contingencies covered, and for respondents who will understand the questions asked. There is no point carrying out a text-based survey with people who cannot read or write at the requisite level (very young children, non-literate people). For projects looking to ask open-ended questions that generate complex and diverse responses, reconsider, or consider how you will process this information in the eventual analysis and presentation.

Figure 6.2
Don’t have a category for that

Source
: Joseph Farris:
http://www.josephfarris.com

  • 2 Directly administered surveys – on-the-spot or phone interviews – are ways of carrying out a survey when you approach respondents personally. The advantage here is that there is a greater chance of getting full cooperation in completing the survey. You can also be sure that all questions are answered, and answered adequately or after due thought.

The downside is that this approach is time-consuming and expensive; this is why marketing and polling research projects using phone-based surveys have huge teams and big budgets. You also need the skills to deal with cold calling and negative responses from the public when approached for face-to-face surveys; not everyone appreciates being approached in this way.

  • 3 Online/digital surveys and questionnaires: It may seem self-explanatory, especially for cash-strapped students, that these days email, instant messaging, or texting are the best way to access people; email lists, address-books, or your own Facebook group providing a ready-made sample for you to use.

However, sampling issues aside (see above), many a researcher has discovered that email-based and even more user-friendly web-based survey tools do not always provide satisfactory results.

  • Email surveys: The upside is that they can be sent directly to a respondent, with an automated receipt-acknowledgement message built in. They are low-cost and easy to design (cut-and-paste the questions into the message or attach a file); easy to return as well, with a click of the reply button, and they require little technical skill to administer on that level. However, there are several catches:
    • Because of the do-it-yourself and text-based formats of emails and pasted-in questions, the design can be simplistic and unattractive to people, many of whom have overloaded inboxes already. Email surveys need to be short, very short, to get any response rate.
    • Once received, the researcher then has to copy the results into some sort of spreadsheet, create a database in order to analyse them, and then present the findings in graphic form. Straightaway, the chance of data-entry error and the escalating time taken to do this work present a major impediment to effective execution.
    • Before administering, all email addresses need to be valid, requiring preparation time. Moreover, if the results are supposed to be anonymous – a basic premise of most survey work – emails immediately undermine this principle. Some respondents may trust the researcher to treat their responses in confidence, but in many cases people respond differently to anonymous and non-anonymous surveys.
  • 4 Web-based survey instruments: These products and services (see Chapter 5) deal with many of the above problems in email surveys; layout options allow for higher production values and attractiveness for easy use. They collect, collate, and statistically analyse the data for you. Their question templates are very helpful in framing questions and ordering them in the best way for your purposes; try-outs and revisions are possible before launching the survey itself. Anonymity can be integrated into the responses, and research populations are accessible by virtue of being on the web (assuming the survey is placed in the right spot to reach them).

However, note that whilst they are very cost-effective for surveys of ten questions or fewer, and samples of 100 respondents or fewer (as is the case with Survey Monkey), graphics and extended analyses require annual subscriptions, and these can be costly.

  • If you rely on these tools to do the ‘dirty work’ for you then these additional costs may be a disadvantage. You still need to do the legwork to ensure you get enough responses to warrant this outlay.
  • As web-surveys become more common, sent to respondents via email, you may well find yourself with similarly low responses. Moreover, a web-survey product does not do the question formulation for you. Nor does it resolve any major deficiencies with your initial sample or unsuitable questions.
  • Finally, these tools are not failsafe; technical problems can arise and render them non-functioning, links inactive.

These drawbacks notwithstanding, web-based survey tools are handy research aids; a good way of teaching yourself the basics of survey design and actually quite fun to set up and carry out. Fancy formatting aside, how useful the findings are for your project in the final analysis is up to you to ascertain.

BOX 6.1
CURRENT WEB-BASED SURVEY TOOLS AND RESOURCES

Where these options come into play is covered in Chapter 5. For easy reference though, consider these useful services:

www.zipsurvey.com

www.surveymonkey.com

www.questionpro.com/web-based-survey-software.html

www.createsurvey.com/demo.htm

http://lap.umd.edu/survey_design/questionnaires.html

In doing so, be sure to check the terms and conditions of use, access, and storage, including registration fees and licensing.

Questionnaire design

The results you get depend too on the types of questions and response formats you opt for. Here there are two basic categories: open-ended and closed-ended questions. Each has its uses and disadvantages.

  • Closed-ended questions: These are recognizable as a list of predetermined, acceptable responses for respondents to select from. They create more reliable answers for quantifying purposes and so lend themselves to relatively straightforward analysis. They also generate the sorts of responses that are meaningful to the researcher and the question. However, it could be that a respondent’s choice is not among the listed alternatives, which defeats the purpose. And because these questions tend to be most closely allied to the researcher’s intentions, closed-ended questions may simply generate self-fulfilling findings: the kind of response wanted.
  • Open-ended questions: These are formulated so as to permit respondents the freedom to answer a question in their own words (without pre-specified alternatives). They allow for unanticipated rather than predictable answers. They may also reflect respondents’ thoughts and worldviews better and so provide an incentive to complete the survey. They also help when you find yourself composing a closed-ended question that has an excessively long list of possible answers. For action research project designs, these sorts of questions can also move the project forward in ways that make sense to those who will be involved in the outcome; for example, a company department, hospital ward, or community group.

Administering surveys

The above section on question design can be summarized as rules of thumb for doing survey work; these are also relevant to the more standardized formats for interviewing and focus-group work discussed below. Whilst off-the-shelf and web-based survey tools go a long way in coaching survey makers in this regard, you will save time and headaches by ensuring that when setting up (semi-)standardized questionnaires you aim to

  • ask clear, easy-to-understand questions; how you word the questions will provide incentive or discourage responses; for example, avoid double negatives or long preambles;
  • provide a clear estimate of the time needed to complete the survey (less rather than more is an incentive);
  • provide an orderly organization of the questionnaire, i.e. what logic are you following, are your questions consistent? Differences in responses are supposed to relate to differences amongst respondents, not to inconsistencies, ill-defined terminology, or objectionable or irrelevant questions;
  • begin with easier, information-based questions; move from the general to the more specific;
  • formulate questions that make sense to the population and respondents, i.e. know your population; consider the context and attitudes in which your questions may be taken;
  • keep the list short, if you can; do you really need more than ten questions? Do you really need so many sub-questions?
  • allow for expressions of variability, and include a ‘don’t know’ or ‘neutral’ response in multi-choice questions;
  • try mixing up the sorts of questions, from single answer to multiple answers, to avoid response bias. Minimize questions that generate judgmental responses (unless this is the aim, in which case a survey may not be the best approach).
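Several of these rules of thumb, such as offering a ‘don’t know’ option and keeping response categories consistent, can be sketched as a closed-ended question with a simple validity check on the responses collected. The question text, options, and responses below are invented for illustration:

```python
# Hypothetical closed-ended question with a neutral/'don't know'
# option, plus a validity check on collected responses.
QUESTION = "How satisfied are you with the course?"
OPTIONS = ["very satisfied", "satisfied", "neutral",
           "dissatisfied", "very dissatisfied", "don't know"]

def tally(responses):
    """Count valid responses against the predetermined options;
    anything off-list is flagged rather than silently dropped."""
    counts = {opt: 0 for opt in OPTIONS}
    invalid = []
    for r in responses:
        key = r.strip().lower()
        if key in counts:
            counts[key] += 1
        else:
            invalid.append(r)
    return counts, invalid

counts, invalid = tally(["Satisfied", "don't know", "meh"])
print(counts)
print(invalid)  # ['meh'] - a response the options did not anticipate
```

The flagged ‘invalid’ list is exactly the situation the closed-ended section warns about: a respondent whose answer is not among the listed alternatives. Seeing many of these in a pilot run is a signal to revise the options or switch to an open-ended format.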

BOX 6.2
OVERVIEW – MODES OF SURVEY ADMINISTRATION

  • questionnaires: standardized/non-standardized or semi-structured
  • face-to-face interviews
  • face-to-face administering of questionnaires
  • telephone interviews
  • (snail)mailing/manual distribution of printed questionnaire
  • emailing/web-based questionnaire.

When designing a questionnaire, or indeed setting up interviews and focus groups (see the section on interviews) always do a try-out, on yourself and some others, before launching the survey; at the very least have your supervisor take a look. Take a second look yourself, for the first version is seldom the final one. This includes always checking the spelling, grammar, and other factors (for example, culturally insensitive formulations). Get some feedback on the questions before going public. Your survey will be better for it.

Be prepared for a disappointing response, or no response. If responses are not what you expected, consider whether this is down to the questions or whether, in fact, these findings are results. If your project depends on survey results, then ensuring effective dissemination and working on getting responses requires time and energy. Even for limited surveys, set-up and administration time can often be in reverse proportion to the final outcome and its role in the dissertation.

On that note, while surveys can be challenging, stimulating, and fun to compile and carry out, you may need to ask yourself whether you need to generate these data yourself. Have you considered using or searching for existing survey data? There are several reasons for this, including:

  1. The quality of the data is likely to be higher, in that these surveys have been carried out by established agencies or research teams, with tested questions and larger samples.
  2. Why reinvent the wheel when you can use, adapt, or be inspired by these surveys? That said, the results usually require a level of skill in reading complex data-analysis packages, and in being able to assess the findings and discussions of published work on these surveys. Nonetheless, public-access databases are out there, with more and more coming online all the time.

BOX 6.3
CHECKLIST BEFORE TAKING OFF

  • Does survey data already exist? Can it give you ideas for questions?
  • Know your population, understand your sample.
  • Ask good and appropriate questions.
  • Be aware of alternative formulations of your questions and related responses.
  • Choose an appropriate mode of administering the survey.
  • Analyse the findings, don’t just reiterate the tables and figures.