Multiple-Choice Questions


What is a multiple-choice question?

A multiple-choice question is a type of item where students are presented with a question or instruction (a stem) and select the correct answer or response from a list of answer options. These items are sometimes referred to as selection items because students select the answer. Technically, matching items, true-false items, and a variety of other item types where correct answers are available and students select among them are all multiple-choice questions. For this discussion, though, we will only consider questions that consist of a stem followed by a small set of answer options associated with that stem. That is the format typically thought of as a multiple-choice question. 

Multiple-Choice Question

1. Who wrote The Great Gatsby? <-- Stem

  1. Faulkner    <-- Distractor
  2. Fitzgerald    <-- Correct Answer ("Keyed" Answer)
  3. Hemingway    <-- Distractor
  4. Steinbeck    <-- Distractor

Designing Multiple-Choice Questions 
Only a limited amount of empirical research has examined the characteristics of multiple-choice items and how they affect validity or reliability. There is, however, a common set of recommendations found in classroom assessment textbooks. A few of the critical guidelines from those sources (Frey, Petersen, Edwards, Pedrotti, & Peyton, 2003; Haladyna & Downing, 1989a, 1989b; Haladyna, Downing, & Rodriguez, 2002) are presented below. 

Guideline 1.

There should be 3 to 5 answer options.
Items should have enough answer options that pure guessing is difficult, but not so many that the distractors are not plausible or the item takes too long to complete.
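The arithmetic behind this guideline is straightforward: with n equally plausible options, blind guessing succeeds 1/n of the time. The small sketch below (the `guess_probability` helper and the item counts are illustrative, not from the original text) shows how quickly the guessing advantage shrinks between three and five options.

```python
def guess_probability(num_options: int) -> float:
    """Probability of answering a single item correctly by blind guessing,
    assuming all answer options are equally plausible."""
    return 1 / num_options

def expected_guessed_correct(num_items: int, num_options: int) -> float:
    """Expected number of items answered correctly by pure guessing."""
    return num_items * guess_probability(num_options)

for n in (3, 4, 5):
    # Per-item guessing chance, and the expected score on a 20-item test.
    print(f"{n} options: {guess_probability(n):.0%} per item, "
          f"~{expected_guessed_correct(20, n):.1f} of 20 correct by guessing")
```

Note that the calculation assumes every distractor is plausible; as Guideline 4 points out, an implausible distractor effectively reduces n and raises the guessing probability.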

Guideline 2.

"All of the Above" should not be an answer option.
Some students will guess this answer option frequently as part of a test-taking strategy. Other students will avoid it as part of a test-taking strategy. Either way, it does not operate fairly as a distractor. Additionally, to evaluate the possibility that "All of the Above" is correct requires analytical abilities which vary across students. Measuring this particular analytic ability is likely not the targeted goal of the test.

Guideline 3.

"None of the Above" should not be an answer option.
This guideline exists for the same reasons as Guideline 2. Additionally, for some reason, teachers do tend to create items where "None of the Above" is the correct answer, and some students know this.

Guideline 4.

All answer options should be plausible.
If an answer option is clearly not correct because it does not seem related to the other answer options, is from a content area not covered by the test, or because the teacher is obviously including it for humorous reasons, it does not operate as a distractor. Students are not considering the distractor, so a four-answer-option question is really a three-answer-option question and guessing becomes easier.

Guideline 5.

Order of answer options should be logical or random.
Some teachers develop a tendency to write items where a certain answer option (e.g. B or C) is correct. Students may either pick up on this with a given teacher or, as part of a test-taking strategy, often guess B or C. Teachers can control for any tendencies of their own by placing the answer options in an order based on some rule (e.g. shortest to longest, alphabetical, chronological). Another solution is for teachers to scroll through the first draft of the test on their word processors and attempt to randomize the order of answer options.

Guideline 6.

Negative wording should not be used.
Some students read more carefully or process words more accurately than others, and the word "not" can easily be missed. Even if the word is emphasized so no one can miss it, educational content tends not to be learned as a collection of non-facts or false statements, but, one would think, is likely stored as a collection of positively worded truths.

Guideline 7.

Answer options should all be grammatically consistent with the stem.
If the grammar used in the stem makes it clear that the right answer is a female or is plural, make sure that all answer options are female or plural.

Guideline 8.

Answer options should not be longer than the stem.
An item goes more quickly if the bulk of the reading is in the stem, followed by brief answer options. A good multiple-choice question looks like this:

1. ===============
A. ====
B. ====
C. ====
D. ====

Guideline 9.

Stems should be complete sentences.
If a stem is a complete question, ending with a question mark, or a complete instruction, ending with a period, students can begin to identify the answer before examining answer options. Students must work harder if stems end with a blank or a colon or simply as an uncompleted sentence. More processing increases chances of errors.

 

How can the use of quality multiple-choice questions benefit your students, including those with special needs? 

As a format, there are good reasons why multiple-choice items are so popular. When written well, a multiple-choice test can cover a large amount of material in a relatively brief period of time, so less instruction time is lost. More objectives can be measured, which increases both the validity and the reliability of the test and benefits all students. Additionally, students will likely face multiple-choice tests beyond their current classroom: in future classrooms, in college admissions procedures, in job placement exams, and so on. Students who get used to quality multiple-choice testing procedures are likely to have less test anxiety and greater success on future tests in situations with higher stakes than a weekly quiz.

References 

Research Articles

Frey, B. B., Petersen, S. E., Edwards, L. M., Pedrotti, J. T., & Peyton, V.
(2003, April). Toward a consensus list of item-writing rules. Presented
at the Annual Meeting of the American Educational Research Association,
Chicago.
Haladyna, T. M., & Downing, S. M. (1989a). A taxonomy of multiple-choice
item-writing rules. Applied Measurement in Education, 2(1), 37-50.
Haladyna, T. M., & Downing, S. M. (1989b). Validity of a taxonomy of
multiple-choice item-writing rules. Applied Measurement in
Education, 2(1), 51-78.
Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of
multiple-choice item-writing guidelines for classroom assessment.
Applied Measurement in Education, 15(3), 309-334.