Why teachers should choose multiple-choice questions

Many teachers steer clear of creating multiple-choice questions because they think they are too easy. But studies show that recognising an answer can be just as challenging as recalling one, says Andrew C Butler. He tells Irena Barker how teachers can make the most of multiple-choice questions
3rd January 2020, 12:04am

If the last time you saw a multiple-choice question (MCQ) was on Who Wants to Be a Millionaire?, rather than when you used one in a classroom, then you are unlikely to be alone among UK teachers. MCQs are often seen as an “easier” form of assessment for pupils, and a much more labour-intensive (and less useful) assessment tool for teachers.

But does the multiple-choice question deserve that reputation?

Well, teachers in other countries seem to have a more positive view of it. In the US, it has been a standard form of educational assessment since US psychologist Frederick J Kelly wrote the first multiple-choice test in 1914. Today, if you want to get into college in America, answering MCQs makes up the bulk of the SAT or ACT tests that most applicants sit to gain entry.

And the research around MCQs throws up some interesting points that teachers may wish to ponder, according to Andrew C Butler, an associate professor of education, psychology and brain sciences at Washington University in St Louis.

The first thing the evidence will tell you is that an MCQ is not automatically an easier test for students, he says. The perception is often that, because the right answer is listed as one of the options, students simply have to recognise it rather than recall it - or even just take a lucky guess (if there are four options, they have a 25 per cent chance of being right, after all).

But that logic rarely holds up on quiz shows on which people regularly get questions wrong despite having the answer staring them in the face, and it does not find any support in the research, either.
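To put a number on the lucky-guess logic: a single guess among four options succeeds 25 per cent of the time, but stringing together enough lucky guesses to reach a pass mark is another matter. Here is a minimal sketch in Python, assuming an illustrative 20-question test and a 50 per cent pass mark (both figures are invented for the arithmetic, not drawn from the research):

```python
from math import ceil, comb

def p_pass_by_guessing(n_questions=20, n_options=4, pass_mark=0.5):
    """Chance of reaching the pass mark by blind guessing alone.

    Treats each question as an independent guess with success
    probability 1/n_options and sums the binomial upper tail.
    The test length and pass mark are illustrative assumptions.
    """
    p = 1 / n_options                     # 25 per cent with four options
    need = ceil(n_questions * pass_mark)  # questions that must be right
    return sum(
        comb(n_questions, k) * p**k * (1 - p) ** (n_questions - k)
        for k in range(need, n_questions + 1)
    )

print(f"{p_pass_by_guessing():.4f}")  # ~0.0139, roughly a 1-in-72 chance
```

In other words, the 25 per cent figure flatters the guesser: across a whole paper, chance alone gets you almost nowhere.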

“In the olden days, people used to think … that if you’ve stored something at a certain threshold then you could recognise it, but it required a much higher threshold for you to actually recall it yourself,” says Butler. “But there’s a lot of reasons to suggest that that way of thinking about it is wrong and it’s not that simple.

“The science would tell you recognition is not inherently easier than recall.”

So why has this misconception gained such prominence? It’s because we often encounter very badly designed MCQs, says Butler.

He gives the example of the question stem: “What is the capital of Australia?”

If you give four possible answers, but only one is in fact a city in Australia, all the candidate has to do is identify the only city in Australia listed. They might not have known the name of the capital at all.

But what if, says Butler, you gave four major cities in Australia as possible answers, for example:

a) Sydney.

b) Canberra.

c) Melbourne.

d) Perth.

“When given the option of ‘Sydney’, many people may be lured into picking it even if it meant second-guessing their initial intuition of Canberra,” says Butler. “This is just one example of a way in which recognition can lead to lower performance than recall.”

Essentially, an MCQ can be just as tough - if not tougher - than other question types: it just depends on how well the question is constructed. Too frequently, we encounter badly constructed MCQs.

So, how do you create high-quality MCQs? The research can offer us some tips here, too. Butler studied the issue in detail for his 2018 research review, bringing together the findings of academics in the worlds of assessment and learning science.

Badly set questions, says Butler, are “common worldwide”. And one of the worst crimes is giving an “all/none of the above” option as an answer.

“I think that was an interesting invention that someone came up with, but then people were like, ‘Oh, this is a neat thing to do,’” says Butler. “But the research shows those are not great things to include.”

Various pieces of research show that including “none of the above” makes the question harder to answer but worse at distinguishing students who have learned the material from those who have not.

For example, suppose you ask: “What colour do you get if you combine red and yellow?” A pupil who incorrectly answers “blue” rather than “orange” tells you more than one who answers “none of the above”: you can begin to unpick the misconceptions that led them to “blue”, whereas “none of the above” leaves you with far less to diagnose.
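If responses are collected electronically, that diagnostic work can start with a simple tally of which option each pupil picked. A quick sketch in the same vein; the responses below are invented for illustration:

```python
from collections import Counter

# Invented responses to "What colour do you get if you combine red and yellow?"
# a) orange (correct)  b) blue  c) green  d) none of the above
responses = ["a", "b", "b", "a", "c", "b", "a", "d", "b", "a"]

for option, count in Counter(responses).most_common():
    print(f"{option}: {count}")

# A cluster on b) points at a specific misconception worth unpicking;
# a cluster on d) tells you almost nothing about what pupils were thinking.
```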

Another issue with many MCQs is that they are used only for straight recall tasks that have simple right or wrong answers. A good MCQ, Butler believes, is capable of making students think in more complex ways, too. “You can design questions that get people to do higher-order thinking, such as applying their knowledge or synthesising information,” he says. “It’s just a matter of designing the questions. Oftentimes it’s harder to write effective lures [distractors] for those things, but it can be done.”

To do this effectively, it is about picking options where simple rote learning of the facts would not give you the right answer. This is explained very well in a 2013 blog post by Daisy Christodoulou, author of Seven Myths About Education, who uses the example of a question from a British Columbia leaving exam: “How did the Soviet totalitarian system under Stalin differ from that of Hitler and Mussolini?”

a) It built up armed forces.

b) It took away human rights.

c) It made trade unions illegal.

d) It abolished private land ownership.

She writes: “It tests a finer gradation of understanding. Everyone knows the Nazis and Soviets were evil, and because they were evil, it is easy for pupils to just think that their regimes were the same. And, of course, the regimes were very similar. But they were different in interesting ways, too, and this question probes that.”

So, a key part of a good MCQ is knowing what you are trying to achieve. If it’s straight recall, your design will be very typical of quiz shows. For example, “Which of the following buildings is the tallest in the world?” followed by a list of well-known tall buildings: The Shard, the Burj Khalifa, the Shanghai Tower, One World Trade Center.

But if you want to get pupils thinking more, and synthesising information, you need to design the question differently, says Butler. A typical example might be: “Given that the patient shows symptoms X, Y, and Z, which of the following diagnoses is most likely?”

The final bit of wisdom from the research on MCQs relates to perhaps the most common thing that we get wrong: what is the optimal number of answer options?

Research suggests that giving students three answer choices, one of which is correct, results in the most effective questions.
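Three options do raise the odds of a lucky guess on any single question from 25 to about 33 per cent, but - reusing the guessing sketch from earlier, with its same hypothetical 20-question test - the chance of guessing a way to a pass remains small:

```python
print(f"{p_pass_by_guessing(n_options=4):.4f}")  # ~0.0139: four options
print(f"{p_pass_by_guessing(n_options=3):.4f}")  # ~0.0919: three options
```

On that evidence, the slightly friendlier guessing odds look like a price worth paying for questions in which every distractor is plausible.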

So, should we all jump fully on board with MCQs now that we have them sussed? Although Butler sees their benefits, he advocates always using them alongside other forms of assessment.

“It’s important to give students different opportunities to demonstrate what they’ve learned,” he says. “That’s a matter of maybe having some multiple-choice questions, and some short-answer questions and maybe a written essay, and some different things that they can do.

“In my classes, I have some short-answer and open-ended questions and some multiple-choice questions, and guess what the correlation is between performance on those two formats? Very high. People who do well on the multiple-choice also tend to do well in the short-answer.”
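The comparison Butler describes is simply a correlation between the two score columns, which is easy to check against your own mark book. A sketch with invented marks, using the statistics module from Python 3.10+:

```python
from statistics import correlation  # Pearson's r; requires Python 3.10+

# Invented per-pupil percentage scores on the two formats
mcq_scores = [72, 85, 64, 90, 58, 77, 81, 69]
short_answer_scores = [70, 88, 60, 92, 55, 80, 78, 66]

print(f"r = {correlation(mcq_scores, short_answer_scores):.2f}")
# A value near 1 mirrors Butler's experience: pupils who do well on
# one format tend to do well on the other.
```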

But are teachers equally good at creating both formats of assessment? Butler believes that, for the more complex MCQs, we could be doing a little better.

“I think that people are not necessarily trained in creating good tests, so it’s probably very true that a lot of multiple-choice tests do just measure memorisation of facts and recognition of facts,” he says.

So, teachers, for £64,000, is that:

a) Disappointing.

b) To be expected.

c) All the fault of our policy overlords in government.

Irena Barker is a freelance writer

This article originally appeared in the 3 January 2020 issue under the headline “Tes focus on...Multiple-choice assessment”
