Posts Tagged ‘Survey Tips’

In-depth interviews (IDIs) are an insightful qualitative research method that allows the researcher to tap into the minds of consumers. To keep costs down, many businesses choose to complete IDIs via telephone rather than the more costly option of conducting the interviews in person. While telephone IDIs provide a cost advantage, the researcher loses the ability to read the visual cues a respondent provides when answering questions. Because the conversation isn’t face-to-face, extra steps should be taken to ensure open-ended responses gathered by telephone are transcribed effectively.


Here are four tips for transcribing open-ended responses:

  1. Prioritize quality responses

Quality of data is more important than quantity. Ensure that the responses you are transcribing make logical sense. Also, don’t force a completed interview. If you notice that your respondent is not offering information of value to the end client, feel free to stop the interview. There is no rule that says you have to finish every interview you start!

  2. Keep the responses clean

The RMS analytics team re-reads every open-ended response, so we try to make sure all responses are free of spelling, punctuation, and grammatical errors. Inconsistent formatting and messy responses lead to poor reporting or excessive data cleaning time.

  3. Probe for additional details

Suppose your respondent replies to a satisfaction question by stating, “It was good.” From here the researcher should follow up by asking the respondent why “it was good.” To be diligent about probing, we suggest always following up when a respondent provides three words or fewer (a quick way to flag such responses is sketched after this list). Best practices in research suggest that probing on open-ended questions leads to more in-depth responses and ultimately provides the client with richer data.1

  4. Record responses verbatim

The researcher should record responses as stated by the respondents. This means the researcher should be recording responses in first person. For example, instead of, “He feels the hours should be longer,” record, “I feel the hours should be longer.” Additionally, do not try to summarize the respondent’s comments, or use your own words to make responses more concise. Try to capture as much as you can from the respondent’s own words.
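
The “three words or fewer” rule from tip 3 is easy to automate as a quality check on transcripts. Below is a minimal Python sketch (the function name and threshold constant are ours, not an RMS tool) that flags responses short enough to warrant a follow-up probe.

```python
# A minimal sketch: flag transcribed open-ended responses that are short enough
# to warrant a follow-up probe (three words or fewer, per tip 3 above).

PROBE_WORD_THRESHOLD = 3

def needs_probe(response: str, threshold: int = PROBE_WORD_THRESHOLD) -> bool:
    """Return True if the response contains `threshold` words or fewer."""
    return len(response.split()) <= threshold

responses = [
    "It was good.",
    "I feel the hours should be longer because I work until six.",
]

for text in responses:
    if needs_probe(text):
        print(f'Probe further: "{text}"')
```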

Research & Marketing Strategies (RMS) is a market research firm located in Syracuse, NY. If you are interested in learning more about our market research services, please contact the Director of Business Development, Sandy Baker at SandyB@RMSresults.com or by calling 1-866-567-5422.

1 Smith, S., & Albaum, G. (2012). Basic Marketing Research: Volume 1. Handbook for Research Professionals. Official Training Guide from Qualtrics. Retrieved from: https://www.du.edu/ir/pdf/basic_marketing_research_vol_1

Read Full Post »

We’ve referenced it before on this blog: people are “time poor,” and keeping a survey short is more critical than ever in our industry. If a survey runs too long (with that threshold being debatable: 5 minutes, 10 minutes, 15 minutes), the respondent will either become disengaged or simply hang up. We are not big fans of either. Shortening the length of all surveys, including telephone methodologies, is critical to improving engagement and data quality, improving productivity, and speeding up fieldwork.

Here are four practical tips to help shorten telephone surveys:

1. Ask birth year open-ended instead of running through a list of age ranges. This has benefits beyond just reducing the time it takes to ask. By having the survey caller ask for year of birth, he or she can simply type it into a quantity box in CATI rather than reading through a long progressive list of “Under 18 years of age, 18 to 24, 25 to 34, 35 to 44,” and so on. Capturing birth year also helps with longitudinal uses of the data. For instance, if you want to revisit the respondents and conduct a similar survey 5 years from now, you won’t need to re-ask the age question to update the ranges (as someone aged 33 would now be in the next highest age range of 35 to 44) because you’ll already have the birth year data on hand (see the first sketch after this list). This trick reduces the time to ask and keeps more ‘seasoned’ respondents happy.

[Image: Reaction from a frustrated respondent who had to wait too long for his age category to be read by a survey caller.]

2. Word associations. This is something that we are using more and more in each survey here at Research & Marketing Strategies (RMS). It is a quick and easy way to capture a word or phrase that describes a brand, company, or product, rather than typing in a long-winded response on a topic that may already be covered elsewhere in the survey through a close-ended question. Word associations work great for Wordles, which are graphical displays of word counts (see the second sketch after this list).

3. Use categories for open-ended questions. With this approach the survey caller still asks the question open-ended, but selects the category the response falls into from a provided list, with an option for ‘other.’ For example, if the respondent gives you a long response about how he or she chooses a bank or credit union based on how close the nearest branch is to home because it makes it easier to stop in on the way to and from work, you would select the ‘Location/Convenience’ category.

4. Consult with your client to determine the most critical information. As a market researcher, it is your job at the kickoff meeting to distinguish between the main objectives and secondary objectives of a study. Main objectives are the must-have answers the client needs to make the results actionable (what is the awareness of my organization, what is the image of my organization, how satisfied are my customers, etc.). Secondary objectives are the less critical questions that may need to be cut to stay within time limits or question limits in the telephone survey. Just make sure you are working closely with the client to understand what you can afford to cut.
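
To illustrate tip 1, here is a minimal Python sketch of how a birth year captured in a CATI quantity box can be bucketed back into the usual reporting ranges after fieldwork. The range labels follow the list quoted above; the brackets above 44 and the function name are our own assumptions for illustration.

```python
from datetime import date

def age_range(birth_year: int, survey_year: int = date.today().year) -> str:
    """Bucket a birth year into standard reporting age ranges
    (approximate age; exact birthday timing is ignored)."""
    age = survey_year - birth_year
    if age < 18:
        return "Under 18"
    if age <= 24:
        return "18 to 24"
    if age <= 34:
        return "25 to 34"
    if age <= 44:
        return "35 to 44"
    if age <= 54:
        return "45 to 54"   # assumed bracket; the post stops at "and so on"
    if age <= 64:
        return "55 to 64"   # assumed bracket
    return "65 or older"    # assumed bracket

# A respondent born in 1990 can be re-bucketed in a future wave
# without re-asking the age question.
print(age_range(1990))
```

The same stored birth year can be re-bucketed five years from now, which is exactly the longitudinal benefit described in tip 1.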
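And to illustrate tip 2, word-association responses reduce to simple frequency counts, which is all a Wordle-style graphic needs as input. A minimal sketch, assuming the associations have already been transcribed as a list of strings (the example phrases are invented):

```python
from collections import Counter

# One short word-association phrase per respondent (illustrative data only).
associations = ["friendly", "Friendly staff", "slow service", "convenient", "friendly"]

# Normalize case and count word frequencies to feed a word cloud (Wordle-style) graphic.
counts = Counter(
    word
    for phrase in associations
    for word in phrase.lower().split()
)

print(counts.most_common(5))
```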

Are you looking to conduct a telephone survey and need a market research consultant to assist you with the process? The RMS team can work with you to ensure your telephone survey script will maximize your return on investment. Contact our Business Development Director, Sandy Baker, at SandyB@RMSresults.com or by calling 315-635-9802.

Read Full Post »

Most people know that the wording of a question on a survey can have an impact on the way respondents answer it. A less obvious, but potentially significant source of bias is the point in the survey instrument at which a question is asked. A good example of this issue is the problem of when to ask about overall satisfaction in a survey that seeks to measure a variety of aspects of the customer experience in some detail.

Let’s say that we wanted to measure customer satisfaction with Vance’s Greasy Spoon Diner, a fine eating establishment that is looking to evaluate its place in the market after a series of Health Department-ordered shutdowns. Our client wants to know the customers’ overall satisfaction with the restaurant, as well as how they rate several sub-components. The survey could take the form of a simple rating scale with the series of items. It might look like this:

Please rate on a 5-point scale your satisfaction with the following aspects of Vance’s Greasy Spoon Diner (5=very satisfied, 1=very dissatisfied).

  1. Overall Experience
  2. Speed of Service
  3. Friendliness of Wait Staff
  4. Menu Selection
  5. Food Quality
  6. Atmosphere of the Dining Area
  7. Value

In this example, the survey seeks to capture opinion of the overall experience first, and then delves into the components that make up the overall experience. But not all survey instruments are structured that way. There is a school of thought that the order of questions should look like this:

  1. Speed of Service
  2. Friendliness of Wait Staff
  3. Menu Selection
  4. Food Quality
  5. Atmosphere of the Dining Area
  6. Value
  7. Overall Experience

In this case, the respondent is asked to think about a variety of specific issues first, and then rate their overall satisfaction. It may seem like a minor, subtle difference, but this is a case where the order of the items could potentially influence the overall satisfaction ratings. The first example would capture the respondent’s “gut” reaction (pun not originally intended, but it is very fitting for a restaurant survey, yes?) before they have given the matter much thought. In the second example, walking respondents through the various aspects of their experiences at the restaurant can sway the overall rating they ultimately assign. For example, if a person rated every aspect leading up to the overall experience item as a 4 or a 5, they might feel that it would be irrational to give the overall experience a 3 — even if that might be their initial thought.
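
For what it’s worth, a split-sample pre-test is one way to put numbers on this debate: field one version with the overall item first and one with it last, then compare the mean overall ratings. A minimal sketch of that comparison, using SciPy and made-up ratings purely for illustration:

```python
from statistics import mean
from scipy.stats import ttest_ind  # Welch's t-test for two independent samples

# Overall-experience ratings (1-5) from two pre-test versions of the survey:
# version A asked the overall item first, version B asked it last.
# Both lists are invented for illustration.
overall_first = [4, 3, 5, 2, 4, 3, 4, 5, 3, 4]
overall_last = [4, 4, 5, 4, 4, 3, 5, 5, 4, 4]

t_stat, p_value = ttest_ind(overall_first, overall_last, equal_var=False)
print(f"Mean (overall asked first): {mean(overall_first):.2f}")
print(f"Mean (overall asked last):  {mean(overall_last):.2f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```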

 

This is an issue that can be argued either way, and we have had this debate a number of times in the Bunker when writing survey scripts for various projects. My own feeling is that, in most cases, it makes more sense to ask the overall satisfaction question first. I believe a person’s initial gut reaction usually comes closest to being the one that determines their behavior as a consumer. It may be irrational, but who ever said human behavior was entirely rational?

Another reason I favor that approach is that the second approach assumes that ALL the factors that might go into a person’s overall experience have been covered in the survey instrument. That is always a dangerous assumption to make. For example, what if one of the key drivers of dissatisfaction with Vance’s Greasy Spoon is the fact that the diner is located in between Chris’s Junkyard and George’s Horse Stable? Our survey asked about the atmosphere of the dining area, but not of the broader surroundings. Or perhaps the wait staff is friendly but they have a tendency to get the orders mixed up, and we never asked about order accuracy. Both examples suffer from those omissions, but I believe it makes more of a difference in the second example, where, as survey writers, we have tacitly told the respondent “These are the only factors you should be rating us on,” before asking for their overall rating. The first example does not introduce that level of bias. So at the very least, if the overall satisfaction scores seem at odds with everything else, the analysis will suggest that there is some other key driver out there to be explored in further research.

There are some cases where it might make sense to ask the overall satisfaction question last. If the survey is about a topic that the person might struggle to form an opinion on right off the bat, then asking about the specific components first might serve as something of a warm-up and/or memory aid to get them thinking about the overall experience. Another case might be where the customer experience is highly colored by emotional factors outside of the control of the survey’s subject. An example of that would be a hospital survey where the person’s rating of their experience could be influenced by the seriousness of their medical condition and/or the prognosis after their care. In that example, it would help to keep the respondent focused on the specifics of the stay in the beginning of the survey as a way to remind them of the operational aspects they are being asked to assess (nurse care, doctor communication, pain management) as opposed to immediately reminding them of how frightened or desperate they might have felt during their time as a patient.

That said, I think in most cases it’s best to get the overall impressions first and then drill down into specifics. We in the Bunker would be interested to know what others feel about this issue. If you have any thoughts on this, please leave us a comment. We’d love to read your takes on this!

If you have any questions about writing customer satisfaction surveys or need to work with a consultant to draft up a customer satisfaction survey script for your business, contact our Director of Business Development, Sandy Baker, at SandyB@RMSresults.com or by calling 315-635-9802.

Read Full Post »

It goes without saying – some level of pre-testing is a must before launching a full-scale survey.

Survey Sampling International (SSI) defines a pre-test as a questionnaire that is tested on a statistically small sample of respondents before a full-scale study. The benefits of pre-testing cannot be overstated, whether you go to the extent of a soft launch or not. The real issue is that, in an industry where time is often of the essence and findings need to be turned around yesterday, it’s difficult to budget the time into a study to conduct a thorough pre-test. In some cases, one could argue a pre-test is not necessary, such as the renewal of a yearly survey or an online survey that is only being sent to a small population. Here are some reasons why pre-testing your online survey is vital. Keep in mind, some of these benefits will also apply to other methodologies, including phone surveys.

1. Estimating a response rate.  

In all likelihood, this is one of the most common reasons surveys are pre-tested. When you send 1,000 survey invitations out and obtain 35 completes, you know that your overall response rate will hover around 3.5%. This helps guide future sample purchases and determine how many invites you need to send to reach your quotas without overspending (see the sketch at the end of this list).

2. Fishing out wording issues or questions that are difficult to answer.

As survey writers, we’ve all been in situations where we spend so much time engulfed in a survey that we overlook or don’t catch some minor issues. By pre-testing your survey, you are getting a fresh perspective on the question language. Doing this on a smaller scale, such as emailing the survey to a few colleagues, can accomplish the same result.

[Image: Unfortunately for her, this online survey wasn’t pre-tested.]

3. Estimating the length of time to complete the survey.

This is probably the other of the two most common reasons surveys are pre-tested. By reviewing the time stamps showing how long it took respondents to complete the survey, you can determine whether you can add those two other key questions your client wanted, or whether you need to cut back on the scope (the sketch at the end of this list includes this calculation).

4. Understanding points of likely dropout.

We’ve often spoken about better engaging survey respondents in past blog posts, and this benefit speaks directly to that. You can accomplish this in a few ways. One is to look at the data to see at which questions respondents were most likely to drop out. Another, more personalized method is to follow up with select survey respondents by phone to discuss their survey experience (if you have access to their telephone numbers through an internal or client panel).

5. Improving your subject line or invite text.

Another benefit of a pre-test is to test the effectiveness of your subject line or invite text.  Spam filters are a real concern when it comes to online surveys.  Therefore, if you have time to pre-test, you might want to think about using a few different subject lines or invite text and then see which of those worked best.
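
Tips 1 and 3 above boil down to simple arithmetic on the pilot data. Here is a minimal Python sketch of that math; the invitation and complete counts come from the example in tip 1, while the quota and the time stamps are assumptions purely for illustration.

```python
from datetime import datetime
from math import ceil
from statistics import median

# --- Tip 1: estimate the response rate and the invites needed for the full launch ---
invites_sent = 1000   # pilot invitations (from the example above)
completes = 35        # pilot completes
response_rate = completes / invites_sent
print(f"Pilot response rate: {response_rate:.1%}")   # ~3.5%

quota = 400           # assumed target number of completes for the full study
invites_needed = ceil(quota / response_rate)
print(f"Invites needed for {quota} completes: about {invites_needed:,}")

# --- Tip 3: estimate survey length from start/end time stamps ---
fmt = "%Y-%m-%d %H:%M:%S"
timestamps = [  # illustrative (start, end) pairs pulled from pilot records
    ("2023-05-01 10:00:00", "2023-05-01 10:09:30"),
    ("2023-05-01 11:15:00", "2023-05-01 11:27:10"),
    ("2023-05-01 13:40:00", "2023-05-01 13:48:45"),
]
durations_min = [
    (datetime.strptime(end, fmt) - datetime.strptime(start, fmt)).total_seconds() / 60
    for start, end in timestamps
]
print(f"Median completion time: {median(durations_min):.1f} minutes")
```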

Are you thinking about conducting an online survey for your business?  Contact Sandy Baker at 315-635-9802 or SandyB@RMSresults.com.

Read Full Post »

Data weighting is a technique that is commonly used in market research. Many people reading this will already know what the concept means. If you’re not one of them, it refers to the practice of adjusting data results to either overcome sampling bias or to give more or less significance to factors based on their estimated relevance to the question at hand. Last year, the excellent Voice of Vovici blog featured this post that provided a good overview of data weighting as it applies to market research, and we recommend it to anyone who wants to learn more on the subject.

Here in the Bunker, we sometimes use weighting in survey analysis, and will continue to do so in the future. But we try to use the technique sparingly and proceed with a lot of caution when we do use it. We suspect that we probably do less weighting than a lot of our market research peers. The reason is that we feel it’s a tool that can easily be overused or misapplied.

Weighting is most effective when you have reliable, precise information about what the actual numbers should look like. A common example we run into in the Bunker is with surveys we do for school districts. There’s a built-in response bias (and to some extent there’s often a sampling bias) in favor of households with school children in such surveys. The response to a school district survey might skew as high as two-thirds parents of students and one-third non-parents, although in the districts we typically work with, the proportions are just the opposite: about a third of the households have children who are currently students in the district. In such cases, it’s appropriate to weight the data according to known demographics. But what if you don’t know exactly what the overall population looks like? Any weights you assign will be guesses — educated guesses perhaps — but still subject to the possibility that your estimates are off, which will in turn affect the accuracy of the results. The website of the National Council on Public Polls contains a nice, concise explanation (located under the heading “Bad Weighting”) of why it’s often very problematic to weight data according to what you assume the overall population looks like.

As mentioned in a previous paragraph, we think weighting is often overdone. We have seen analyses where certain subgroups are given double weight, or even more. Without getting heavily into the math involved, weighting to that extent inflates the error and undermines the statistical significance of the data, especially if the number of respondents involved is relatively small. Personally, I feel uneasy weighting anything by a factor of more than around 1.3 or 1.4.
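
To make the school district example concrete, the weight for each group is simply the known population proportion divided by the proportion observed among respondents. A minimal sketch, with a flag for weights that fall outside the rough 1.3-1.4 comfort zone mentioned above (the cutoff constant is ours):

```python
# Known population proportions vs. observed respondent proportions
# (the school district example above).
population = {"parents": 1 / 3, "non_parents": 2 / 3}   # known district demographics
respondents = {"parents": 2 / 3, "non_parents": 1 / 3}  # skewed survey response

MAX_COMFORTABLE_WEIGHT = 1.4  # rough rule of thumb discussed above

for group in population:
    weight = population[group] / respondents[group]
    flag = (
        "  <- heavier than we like; fix the sampling plan instead"
        if weight > MAX_COMFORTABLE_WEIGHT
        else ""
    )
    print(f"{group}: weight = {weight:.2f}{flag}")

# Parents come out at 0.50 and non-parents at 2.00 -- the non-parent weight is
# well past the comfort zone, which is exactly why we prefer to address the
# skew in the project design rather than "fix" it later with weighting.
```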

[Image: Your data weighting proportions probably shouldn't look like this.]

We try to address this issue by making sure that our sampling isn’t so skewed that such drastic weights are necessary. That approach doesn’t help when the data has already been collected, but it does eliminate the mindset of thinking it’s okay to be sloppy in the project design because you can “fix” the issues later with weighting.

Another issue with weighting is that, if you do it, you need to be prepared to justify and clearly communicate your assumptions to the client and potentially to any constituency they might be sharing the results with. For example, if you have conducted a public opinion survey for a community on issues of a sensitive nature (e.g., a potential school closing, allowing certain large-scale construction), people will scrutinize your methodology very closely. If they learn that you have weighted the data to amplify the opinions of certain groups relative to others, controversy is likely to follow. In such cases, it will often be hard to convince people that the weighting is a valid analysis technique. They will see it as unfairly stacking the deck in one side’s favor, and if you haven’t done it properly, they’ll be right. That’s not to say you should never use weighting in that kind of project, just that you’ll really need to make sure you can thoroughly justify it – which is ultimately a good rule of thumb to use any time the question of whether or not to weight data arises.

Have questions about weighting data in market research? Click on the ‘Have a Question’ box in the top right corner of the blog or call Research & Marketing Strategies (RMS), a market research vendor, at 315-635-9802.

Read Full Post »

At RMS, we are frequently asked by our clients what kind of response rate they can expect from their survey research, or after the fact, if the survey response they did receive was “good.” Usually, the most honest answer we can give to those questions is “It depends.” The truth is that survey response rates are devilishly hard to predict, because so many factors can impact them.

But at the same time, coming up with a reasonably close estimate of response rates going into a project is very important. It can impact issues like budget, project timeline and the upfront consideration of whether or not incentives will be offered. We would love to have a crystal ball or secret formula that allowed us to predict response rates with new projects, but we don’t. We do, however, have years of experience that gives us a general idea about how various factors will tend to improve or hinder our response rates.

If you are thinking about conducting survey research, here are seven questions you can ask yourself going into the project that will help you get a feel for any response rate challenges you might face. Knowing the answers to these questions upfront and planning accordingly can help you save money, create a realistic schedule and manage expectations.  

1. What is the nature of your organization?

In our experience at RMS, some types of clients have built-in advantages over others when it comes to survey response rates. Companies that are well-established and possess a high level of brand equity, and not-for-profits – especially well-known ones – will tend to enjoy higher response rates than lesser known companies. People are simply more comfortable providing data to organizations they recognize and feel they can trust. In this age of telephone and internet scams, many people are immediately suspicious of an organization they’ve never heard of contacting them to solicit information. If your organization is not well-known in the area you are surveying, there’s not much you can do to change that, but it is something you need to be cognizant of going in so you can have realistic expectations.

2. Do you have an existing relationship with the survey audience?

As a general rule, established customers of a company or members of an organization will be more inclined to respond to a survey than non-customers or non-members. Some of the highest response rates we have seen for surveys at RMS have been college surveys conducted among current students of the institutions. This, despite the fact that the 18 to 24 age group is notoriously hard to reach and engage for survey research. The reason was that the students were people who had a very comprehensive and deep relationship with the institution conducting the research. One caveat with this point is that organizations sometimes overestimate the strength of the relationship they have with customers/members. You can’t always assume that members of an organization will feel a strong connection. In fact, a survey among established customers or members that winds up with a low response rate is often a finding in itself.

3. How much of an obvious stake does the survey audience have in the subject at hand?

I use the qualifier “obvious” here because it might seem readily apparent to you that consumers have a vested interest in contributing to research that will enhance your delivery of products or services to them, but the consumer may not see (or believe) that connection. I know, in theory, that answering a follow-up survey about my experience at a car dealership might lead to better service next time I go to buy a car, but that’s not going to do much to motivate me to actually complete the survey. On the other hand, some survey audiences will immediately understand that their feedback could have a direct impact on issues that affect them. An example of the latter might be a school district survey that asked members of the community how they felt about closing a certain school building. That type of issue would generate strong feelings in the community and many people would be highly motivated to weigh in with their opinions. Answering this question requires you to put your normal assumptions aside and think like your survey audience for a bit (always a good practice, by the way). Ask yourself if you would honestly care about the questions being asked if you were in their shoes.

4. Are you going to let the survey audience know who is conducting the research or will it be blinded?

There are a few good reasons to withhold the identity of the organization conducting some sorts of research. For example, a competitive brand equity study could be tainted if respondents knew who was asking the questions.  That said, our recommendation to clients is that unless there’s a very compelling reason to blind the survey, they should avoid doing so. Most of that advice is tied to response rates. People are much less likely to respond to a blinded survey, for reasons already discussed in previous points. Public trust with surveys is low and anything anonymous will make them even more suspicious than normal. People prefer to put a name to someone they’re interacting with. When at all possible, you should let them know yours.

5. Will seasonality be a factor?

There are a few general rules of thumb about survey seasonality. For one thing, it’s often harder to reach potential respondents in the summer months because so many people go away on vacation. The last month and a half of the calendar year can also be problematic because of the holidays and all the activities surrounding them. Beyond that, seasonality is largely specific to the type of research you’re doing and who you’re trying to reach. Early April would be a bad time to attempt to survey tax accountants. School district surveys will get better response rates during the school year. The summer months might be a great time to survey consumers about lawn care products. There are many factors to be considered, but the basic considerations are weighing the availability of the survey audience at a specific time of the year and the extent to which they might be actively thinking about the subject at hand.

6. Could respondent confidentiality be a concern?

In some cases, people will be skittish about responding to a survey because they fear that their responses will be tied back to them and result in negative consequences. A classic example of this is an employee satisfaction survey, where employees will be nervous about offering frank criticism of top managers or an immediate supervisor. To some extent, this can be alleviated by using a third-party to conduct the research and with assurances that the results will only be reported in the aggregate. But, in those delicate situations, you should know going in that there is a larger than normal segment of the survey audience who simply won’t participate, no matter what.

7. If it’s a telephone survey, will experienced callers be used?

Telephone surveyors are not a commodity. Those who are experienced and know how to do it well will be much more successful than inexperienced callers who sound unpolished or tentative on the phone. To some extent, that ability is a function of personality, but it’s also largely the result of training and experience. The best way to make sure that a telephone survey is being administered properly, and the approach of the callers is maximizing your response rate, is to use a research firm with an established call center.

Read Full Post »

As a marketing research and marketing consultant in Syracuse, NY – Research & Marketing Strategies (RMS) speaks with a lot of clients about their needs.  Sometimes clients come to us knowing they need to do some type of marketing research, but don’t know where to start.  Other times clients come to us having already started brainstorming a marketing research project, but are caught up in one of these 7 common pitfalls:

1. Time and Cost Requirements – The amount of time and the cost needed to conduct a survey depends on the complexity and the type of survey.  The survey process itself can be very time-consuming, and as a result, clients trying to do survey research in-house often skip necessary steps or introduce bias into their results.  If you hire a dedicated firm who knows what they are doing, you will be more apt to save valuable time and money instead of dabbling with your own in-house survey that you can only attend to in your downtime (which may never occur).

RMS Tip: Even if you insist on using SurveyMonkey or Zoomerang, just make sure you are doing it right before you launch the survey. It doesn’t hurt to give a professional marketing research firm a call and see what they can recommend or scope for you. First and foremost, many marketing research firms see themselves as consultants in addition to being full-service market research providers.

2. Initial Planning – Fielding a survey without specific goals leads to results with no specific direction.  Start by asking yourself: what do I want to learn from the survey?  Oftentimes, this is one of the most rushed stages in the survey project.  The last thing you want to say after the survey has wrapped up is “I wish we would have asked…” – a fate that occurs all too often with survey writers.

3. Questionnaire Design – You have decided on a budget and a timeline, and you have your marketing research goals well-defined. Now comes arguably the most common pitfall when managing a survey. There are endless mini-pitfalls within questionnaire design that cause analysts like us in the Bunker to cringe when we see hastily written surveys. Such mistakes include (to name a few) answer categories that are not mutually exclusive and collectively exhaustive, and the scaling mistakes covered in our “what not to do” post.

RMS Tip: Survey design sounds much easier than it is in practice.  You have to be concerned with bias, multiple meanings, technical jargon, sensitive questions, usage of open-ended questions, abbreviations, and wording.  At the very least, if budget is a true concern, have a consultant review the survey script before you launch and offer improvements.

4. Sampling Procedures – Another common pitfall of SurveyMonkey and Zoomerang users. In order to get the most accurate results, you should be using a representative sample, meaning your sample’s makeup matches that of the entire population you are looking to project the results onto. Launching a survey and making the link available to anyone who wants to participate is a risky approach (although sometimes necessary). In most cases, the people electing to participate are inherently biased in some fashion. For example, you want to find an optimal price point for your product or service, so you survey past customers who purchased your product to find out how much they would be willing to pay. But how about those who did not purchase your product? How are you supposed to find out whether your price is a reason they aren’t purchasing in the first place? In most cases, sampling bias is not as obvious as that, but all samples incur some type of bias. Use a professional marketing research firm to minimize its effect through proper sampling or weighting the data on the back-end. Even if you don’t think a specific audience can be reached, it’s still worth a discussion.

5. Pretesting – Necessary, but sometimes overlooked.  It’s always good to test your survey with a limited pool of respondents before launching it to your entire sample.  This way you can eliminate the chance of mass-errors and better understand which questions might be confusing potential respondents.  It’s also a quick way to test your overall response rate.

6. Non-respondents – Unfortunately, when it comes to marketing research surveys, more people are going to say “no” to participation than “yes.” Similar to the sampling pitfalls, there may be reasons why someone did not participate in the research, and those reasons should be accounted for. This can be taken care of with some simple cross-tabulations to test for respondent versus non-respondent bias and, if needed, the creation of a weighting variable for analysis (a minimal sketch of this check appears after this list).

7. Processing the Data – Quality in means quality out.  Time needs to be set aside to review all cases returned from the fieldwork.  RMS runs through a rigorous quality control check on data as it comes in before we even begin populating a single chart for the report.  This is especially true in online survey data collection, which is more prone to quality control issues.

 

RMS Tip: Even if you managed the entire survey process in-house, there is still a major benefit to having a third-party expert analyze and report the data.  Many marketing research firms have been commissioned at the last stage of survey projects to wrap up the process.  Even if you think it’s too late to engage a firm, it’s not.
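
Pitfall 6 mentions simple cross-tabulations to test for respondent versus non-respondent bias. Here is a minimal sketch of that check: cross-tabulate response status against a demographic known for the entire sample frame (age group, say) and run a chi-square test before deciding whether a weighting variable is needed. The counts and the 0.05 cutoff are assumptions for illustration.

```python
from scipy.stats import chi2_contingency

# Cross-tabulation of age group by response status, built from a demographic
# that is known for everyone on the sample frame. Counts are illustrative only.
#                  18-34  35-54   55+
respondents =     [   40,    90,  120]
non_respondents = [  160,   210,  180]

chi2, p_value, dof, expected = chi2_contingency([respondents, non_respondents])
print(f"chi-square = {chi2:.1f} (dof = {dof}), p = {p_value:.4f}")

if p_value < 0.05:
    print("Respondents differ from non-respondents on age; consider a weighting variable.")
else:
    print("No meaningful non-response bias detected on age.")
```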

Looking to have someone work as your marketing consultant or marketing research firm in Syracuse, NY?  Or do you simply need to have an analyst review a survey script before you launch your project?  Contact our Business Development Specialists.

This article is based on the April 2003 Informational Brochure produced by Fairfax County Department of Systems Management for Human Services: www.fairfaxcounty.gov/aboutfairfax.

Read Full Post »

Halloween is just a few days away. In honor of the scariest holiday on the calendar, we here at the Bunker Blog want to provide our fellow market researchers with some spooky situations. For the most part, the market research community is a pretty even-tempered bunch. We’re not the kind of people who will jump out of our shoes at the sight of fake blood or a rubber monster mask. But there are things that are guaranteed to terrify us. That’s what this post is about.

1. When Mother Nature Hates You

It’s the night of the big focus groups. The participants have all been recruited, enticed by a lucrative honorarium. The facility has been paid for. The client has come in from out-of-town. Maybe you’ve come in from out-of-town. And then, an hour before the night’s first group, a blizzard hits and your participants stay home in droves. Looks like you’ll be rescheduling this one.  Okay, we admit it. Being located in Syracuse, NY, this one might have actually happened to us a few times. In reality, Syracusans are fortunate if it doesn’t snow on Halloween.

2. Computer Gremlins – The Online Survey That Goes Offline

You’ve launched a major online survey. Email invites have gone out to perhaps thousands of potential respondents. Imagine how ticked off they’ll be when they click on the link, only to receive an error message because a server or your service provider somewhere has gone down. Many of them won’t try a second time. If you’re lucky, they’ll start emailing and calling your office using the contact info you provided. You’ll get messages stating there’s a problem, many of which will be more irate than a group of trick-or-treaters after they visit the house of that elderly couple that gives out pennies instead of candy.   Test, test and retest your survey link before launch.  Make your IT department or provider aware of the upcoming survey before it happens. Control the things you can control.

3. Scary Silences – The In-Depth Interviewee with Nothing to Say

The great thing about B2B in-depth interviews (IDIs), as opposed to general B2C studies, is that they are usually conducted among people who are experts on a given topic. When they start talking, they have a wealth of information to offer. The problem comes when they don’t start talking. Sometimes interviewees clam up when you get them on the phone – or worse yet, when you have traveled to their office to talk to them in person. Maybe they’re having a bad day. Maybe they’re naturally shy. Whatever the cause, when the cat gets the interviewee’s tongue, it leaves you with not much other than a lot of awkward silences and unanswered questions. When you’re in the middle of a bumpy qualitative research project, nobody can hear you scream (unless they sit next to your cubicle). There are a lot of tried and true techniques an interviewer can use to probe for information, but sometimes there is nothing you can do.

4. Tiny Terrors – The Minuscule Survey Response Rate

Your call center has two weeks to get 400 completes for a telephone survey that needs to be done by the end of the month. On the first day of putting full resources on it they get…three. Discovering that people aren’t completing your survey at the rate you had budgeted for is always good for a pit in the stomach. You are faced with having to relax the screener criteria, put maximum resources on the project, or tell the client that the timeline will be delayed. Or all three. It’s like going to the mansion on the hill for trick-or-treating to get that king-size Snickers bar you get every year, but as you approach you see from a distance that the porch light is off. Again, doing some pre-tests up front will help you judge the response rate before going all-in. After you’ve done enough similar studies in particular industries, as RMS has, we can estimate the response rate for a specific audience quite accurately.

5. More Computer Gremlins – When PowerPoint Gives Up the Ghost

This is perhaps the most classic nightmare scenario on the list. Equivalent to the feeling you had when your cheap rubber band snapped on your Spiderman mask when you were 6. You’re standing in front of a board room, maybe even an auditorium. You’re there to deliver a summary of research findings to an eager audience. Minutes before the presentation, you discover that PowerPoint doesn’t work properly. Even worse is when it breaks down in the middle of the presentation. Worse still is when the breakdown results in the image of your computer desktop with all the non work-related shortcut icons being displayed for the whole room to ponder. “How much time did this guy spend playing solitaire when he should have been writing our report and figuring out how to use PowerPoint?” It’s always a good backup plan to save the presentation beforehand in various locations.  Or even have a 2nd computer available for such issues.  All in all, sometimes there’s nothing you can do about this gremlin.  Even if you don’t feed it after midnight.

If you work in market research, those situations probably gave you cold sweats just thinking about them. Perhaps even some of them have happened to you. Sleep well tonight. Bwahahahahaha!!!

Feel free to share any nightmarish stories you might have from your market research history – the Bunker welcomes some additional frights.

Read Full Post »

Scale Week is over, and much like the viewing audience at the conclusion of a very special episode of “Full House,” we are left to reflect on what we have learned, how we have grown from the experience, and what it all means. At the very least, it gives us a chance to end the week with this summary of six simple rules for using scales in survey research:

1. Keep it simple – There are many scaling techniques available to the survey researcher. Some, such as the Likert scale, are pretty straightforward, while others, such as the MaxDiff technique or constant sum scaling, are a bit more complicated. They all have their uses, but the researcher always needs to be mindful of keeping the survey instrument as simple as possible for the sake of the respondent. The researcher also needs to use a technique that the client and those who will be reading the report will be able to understand. Some scaling techniques may require a great deal of prior explanation before they are administered and/or as they are being reported.

2. Stay consistent – This was already touched upon in previous posts, but it bears repeating. It is best to keep the rating system and format used in a survey consistent throughout. Don’t switch from a five-point scale to a four-point scale and then up to seven. Also, keep the positions of the value axes the same – if you start out with “least/worst/disagree/negative” type values on the left of the scale and “most/best/agree/positive” on the right, stick with that throughout the instrument.

3. Break it up – Excessive use of rating scales can be a major cause of survey respondent fatigue. This is especially true in self-administered surveys where there are long, uninterrupted lists of rating items on a page/screen or in a telephone survey where the caller must read item after item. The survey will seem much more manageable to the respondent if you break up the rating items into small chunks of perhaps three to six items at a time. If the series can be separated by other types of questions, such as simple yes/no, multiple choice, or open ends, that is ideal. At the very least, the clusters should be broken up into distinct subject headings, which brings us to…

4. Cluster related items together in a series, separate unrelated items – This piece of advice might seem like it goes without saying, but we sometimes see surveys that ignore this principle, so we will state it. Ideally, items should be placed into groups with related items. For example, if one were doing an employee satisfaction survey, some topic areas might be “Management,” “Teamwork and Cooperation,” and “Physical Working Environment.” This type of grouping adds structure and cohesion to the instrument and reduces the extent to which the respondent has to jarringly switch mental gears from topic to topic after each question.

5. Don’t ask respondents to rate more than one item at a time – This, often called a “double-barreled question,” is a common mistake among novice survey writers. Each item in a rating scale should only consist of a single concept or attribute. Consider a Likert scale item such as, “The salesperson was courteous and professional.”

Courtesy might be a part of professionalism, but they aren’t the same thing. How should the respondent rate a salesperson who was extremely polite, but dressed inappropriately and gave them a tattered business card with a no-longer-functioning phone number printed on it? In this case, courtesy and professionalism should each be their own distinct item in the series. Always use one concept at a time, and always keep in mind that even if you think two words mean exactly the same thing, the respondent might not think that way.

6. Take advantage of new scaling tools…when appropriate – There are many ways to express the values on a scale rather than just words or numbers. Graphical slider scales can be a simple and intuitive way to represent the points on a scale, and a five- or three-point row of simple images can clearly convey meaning without needing any words or numbers.

These kinds of graphic scales can be appropriate in instances where one is surveying a younger audience, or where respondents might not have a full command of the language in which the survey is written. On the other hand, they might be a little too light-hearted or cartoon-ish for a survey about, say, banking. Along with graphic representations, online surveys present different options for the scale tool itself such as an analog-looking slider, rather than traditional check boxes. The key with these new options is to always consider the audience for which the survey is intended (not to mention the general level of traditionalism of the research client!) when deciding what is appropriate.

We here in the Bunker hope you have learned everything you ever wanted to know about scales in our First Annual Scale Week. With the knowledge you now possess, feel free to express your opinion on this or any other article in the series below on…you guessed it…the starred rating scale.

  • Click Here to view the Day 1 post on Constant Sum Scaling.
  • Click Here to view the Day 2 post on Semantic Differential Scaling.
  • Click Here to view the Day 3 post on Likert Scales.
  • Click Here to view the Day 4 post on Scaling Mistakes | What Not to Do.

Read Full Post »

It’s Day 4 of scale week here at the Bunker.  Today, we put together some scaling errors and ways in which scales should not be used.  These problems can lead to biased results, unintentional participant error and/or improper analysis.  Here are a few things that should be avoided if you, as a market researcher, are looking for accurate and representative results.

1) Biased/Weighted Scales

[Image: an example agreement scale for a new ice cream in which the agreement options cover more than half of the scale.]

As you can see in the above scale, the levels of agreement cover more than half the scale, making it weighted toward agreement. Not only does offering more options for agreement than disagreement make the scale biased, but it also creates the potential for unintentional respondent error. People typically view scales as having two extremes, with the middle being neutral. The analysis that is performed could also be misrepresentative as a result. If every respondent selected the middle of the scale, one might state that 100% of the respondents agree that the new ice cream is delicious.

2) Reversing Scale Order within a Survey

[Image: an example of two statements rated on five-point scales with the scale order reversed between them.]

Reversing scale order within a survey typically won’t be as blatantly obvious as the example above, but it does happen. As seen in the example, it’s likely that the respondent agreed with both statements but misread the scales or assumed the scales were identical. Reversing scale order within a survey can create confusion among respondents; one might compare it to a cereal box game in which you attempt to spot the difference between two pictures. Some might argue that reversing scale order can be used in an attempt to combat straight-lining of answers, but it can also create unintentional respondent error among those who are legitimately trying to answer the questions. There are various techniques that can be used to prevent straight-lining, but this is not one of them. Click here to read our post that goes into further detail about data quality/integrity.

3) Using Different Point Scales (5 pt, 7 pt, 10 pt)

This scaling issue may not have as much of an effect as the others, but it is important to keep all scales identical within a survey. Respondents put themselves in a state of mind where they are evaluating in terms of a certain point scale. Jumping between 5-, 7-, and 10-point scales makes it difficult for the respondent to rate statements or questions using consistent judgment. The other problem with using different scales within a survey is that they are difficult to compare when it comes time for analysis.

In order to gather quality data, it is important to use market research scales properly. Throughout the survey, it is important that all the scaling questions be uniform to reduce bias and other problematic errors. Also, like other questions in market research, they should be mutually exclusive and collectively exhaustive: don’t double up or overlap categories, and cover all possible answers. Avoiding these problems will only benefit your analysis in the end.

  • Click Here to view the Day 1 post on Constant Sum Scaling.
  • Click Here to view the Day 2 post on Semantic Differential Scaling.
  • Click Here to view the Day 3 post on Likert Scales.
  • Click Here to view the Day 5 post on 6 Tips for Using Rating Scales.

Read Full Post »
