Why are questionnaires always so badly designed? Why do people think it is easy to put a questionnaire together? This post considers the key problems in an honest, candid way.
I see quite a lot of ‘quantitative’ research from various sources, both scientific (positivist) and humanities (interpretivist). I also see various random market research questionnaires about things like technology, apps and websites; market research does not consider paradigms at all. Here are some things I often think about that apply to nearly all questionnaires in all these contexts.
In positivist circles, questions in questionnaires seem pretty haphazard, but there is some logic, imposed by the requirements of experimental design for later analysis and number-based conclusions: within-subject or between-subject designs, one- or two-tailed tests, control groups and so on. It’s the numbers that matter, and the setting for the experiment or test. Responses are duly analysed in SPSS, numbers are crunched, outcomes are reported, and a short discussion is given, with not very much further context beyond the setting of the test. A positivist paradigm doesn’t even consider the possibility of different kinds of realities. This is fine if we are dealing with cells, atoms or whatnot, but if we are dealing with humans, not so much.
In interpretivist circles it’s worse, though perhaps not always. Humanities research often appears to be based on a questionnaire pulled together in ten minutes off the top of the head of some well-meaning but probably fairly clueless academic. Or compiled by a group of academics who settle for a lowest-common-denominator, one-size-fits-all approach for the sake of being collegiate and working collaboratively. (Remember, a camel is a horse designed by committee.) I say clueless in the sense of questionnaire design, not in their knowledge field per se.
In market research circles it is important to differentiate between amateur and professional work. In amateur circles, the same problems arise as in both approaches above. In more professional circles, a much more rigorous design is usually applied, though not always. And in some of these contexts, positivist experimental design is applied in some form – in UX research, for example.
Student surveys are often the most disheartening, but I think this is because no one is teaching questionnaire design, especially to humanities students. This is the crux of the problem – there is not enough knowledge amongst academics about how to design questions, how to choose demographics for a given purpose, how to limit and consolidate questions, how to segment questions, and how to critique questions and questionnaires. And that is before you even think about layout, titling and presentation.
In PhD research this may also be a problem. I myself went through a long phase of doubt about my interview technique for my chosen methodology – phenomenography – but after a long period (three months) of reflecting and writing my way through it (about 13k words), I realised I probably interviewed more robustly than some of the phenomenographers I was reading (hard to judge, but I don’t say this in haste). They sometimes said things I considered rather beginner-ish about responsive semi-scripted interviewing. But that discussion is for another day, as this post isn’t about interviews, it’s about questionnaires. I did use a questionnaire, but only to set the scene for the interviews. I had no intention of analysing it, but I still designed it properly – a good layout over four pages (or the online equivalent), segmented logically, with short questions and ‘don’t know’ options and the like.
So what are the problems?
It is fair to say that much of what I note here can be applied in all situations, depending on the field of research. There are numerous glaring problems; here I list the most obvious.
- Number of questions – always far too many
- No proper understanding of sections – usually no sections at all, or just arbitrary ‘next page’ titling
- Questions very badly worded – biased, loaded prompting, simplistic, vague, meaningless
- No use of plain English (yes I’m only referring to English, as that is all I know) – making sure the respondent actually understands the question
- No adequate design for handling possible responses (e.g. asking the same question again in a different form, to test consistency)
- FAR TOO MANY options in multichoice
- Hardly ever use the ‘don’t know’ option
- Hardly ever use the ‘neither agree nor disagree’ option
- No actual contextual considerations for possible influencing factors of responses in the design of the question
- No clear idea of the purpose of the survey beyond “let’s find out stuff!”
- Apparently no understanding about security and privacy, and anonymity assurance
These are the main things, but I’m sure there are others I’ve left off. I should acknowledge here that it is a nightmare doing this when working with others who want all their questions in the pot – it is impossible to tell them ‘no’, or ‘your question is terrible, bin it!’. These are huge challenges, and hard to solve.
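The checklist above can even be treated as a small automated linting pass over a draft questionnaire. This is a minimal sketch of that idea; the thresholds, function name and data shape are my own illustrative assumptions, not established rules or any real library:

```python
# Hypothetical questionnaire "linter" checking a few of the rules above:
# cap the total question count, limit multi-choice options, and require
# an opt-out answer such as "don't know". Thresholds are assumptions.

MAX_QUESTIONS = 15   # assumed sensible upper bound on question count
MAX_OPTIONS = 6      # assumed cap on multi-choice options
OPT_OUTS = {"don't know", "neither agree nor disagree", "prefer not to say"}

def critique(questions):
    """Return a list of warnings for a draft questionnaire.

    `questions` is a list of dicts: {"text": str, "options": [str, ...]}.
    """
    warnings = []
    if len(questions) > MAX_QUESTIONS:
        warnings.append(f"Too many questions: {len(questions)} > {MAX_QUESTIONS}")
    for i, q in enumerate(questions, 1):
        opts = [o.lower() for o in q.get("options", [])]
        if len(opts) > MAX_OPTIONS:
            warnings.append(f"Q{i}: {len(opts)} options, trim to {MAX_OPTIONS}")
        if opts and not OPT_OUTS & set(opts):
            warnings.append(f"Q{i}: no 'don't know' / neutral option")
    return warnings

draft = [
    {"text": "How often do you use the app?",
     "options": ["Daily", "Weekly", "Monthly", "Never"]},
]
for w in critique(draft):
    print(w)  # prints: Q1: no 'don't know' / neutral option
```

A script like this obviously cannot catch loaded wording or missing context – those still need a human critic – but it makes the mechanical failures (too long, too many options, no opt-out) impossible to overlook.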
I have worked in teams designing numerous national surveys and specialist group surveys, and have probably interviewed hundreds if not thousands of people in scripted and semi-scripted contexts. Between around 1987 and 1993 I worked for several years in market research, mainly for Audience Selection (Mackie St, off Drury Lane, Covent Garden). This company was part of Pergamon Press, then owned by Robert Maxwell. I also worked briefly for BJM (Goswell Rd, Farringdon). Both companies had a very serious attitude towards how questionnaires were designed, who they were aimed at, what was being found out, and how to analyse and draw conclusions. BJM was very industry-orientated, and aside from regular national agricultural or transport surveys and interviews we also did technology surveys – asking about Compact Disc or HiFi components, for example. Audience Selection ‘made the news’ insofar as we conducted national surveys every week about the political and market zeitgeist in the UK, published in the Daily Mirror and sold to many other newspapers. We also did groundbreaking qualitative interviewing with industry leaders, finance directors, broadsheet publishers and the like, conducting 30–45 minute phone interviews that were recorded, with a written précis, often presented to board meetings. Recordings were sometimes played to Mr Maxwell, who liked mine 🙂 . Though this was a long time ago, our methods were exactly what we would use today, and frankly far more robust than *some* research I see going on in academia. This is the uncomfortable truth.
People do things differently, and that’s fine. But at least consider the advice of others. All these links give relevant advice for ANY situation.
- https://www.typeform.com/surveys/survey-design-101/ The marketing PoV
- https://www.imperial.ac.uk/education-research/evaluation/tools-and-resources-for-evaluation/questionnaires/best-practice-in-questionnaire-design/ The academic PoV
- https://www.nngroup.com/articles/keep-online-surveys-short/ The UX experts, Nielsen Norman Group
Image by Mirko Grisendi