Testwiseness and Guessing


What are testwiseness and guessing?

Testwiseness is any skill that allows a student to choose the correct answer on an item without actually knowing the correct answer. Students who are testwise look for mistakes in test construction, make guesses based on teacher tendencies, and search for any unintentional clues that can be found in a test. This is an issue of validity because the score on a test should reflect the level of the trait that the test is designed to measure (knowledge, skill, understanding), not a general ability to do well on poorly made tests.

Guessing, in this context, means random guessing, essentially flipping a coin and choosing an answer. Scores from a student who got lucky and guessed his or her way to a high score are meaningless and not valid. It is important to distinguish between this sort of guess, which good tests are designed to protect against, and an "educated guess," which is not nearly as harmful to the validity of a test. With educated guesses, students at least have some knowledge of the content, which has allowed them to narrow their answer options down to a small number of reasonable alternatives. The guidelines on this page are designed to protect against the lucky guess, not the educated guess.

 

Designing items that protect against testwiseness and guessing 

There has been only a small amount of empirical research on the characteristics of objectively scored items and how those characteristics affect validity or reliability. Those few research findings are supported, though, by a common set of recommendations found in classroom assessment textbooks. Below are the most critical guidelines related to testwiseness and guessing from these sources (Frey, Petersen, Edwards, Pedrotti, & Peyton, 2003; Haladyna & Downing, 1989a, 1989b; Haladyna, Downing, & Rodriguez, 2002). Some of these guidelines are also emphasized in the areas of this website that provide guidelines for the design of multiple-choice items and matching items.

Guideline 1.

The order of answer options should be logical or random.
Some testwise students will notice or guess predictable patterns in which answer positions (e.g., C or B) tend to hold the correct answer on a given teacher's tests. Teachers can control for any tendencies of their own by placing the answer options in an order based on some standard rule (e.g., shortest to longest, alphabetical, chronological). Another solution is for teachers to scroll through the first draft of the test in their word processor and randomize the order of answer options.
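
For teachers comfortable with a little scripting, shuffling answer options can also be automated. Below is a minimal sketch in Python; the item structure and the field names (stem, options, answer) are purely illustrative and are not tied to any particular test-authoring tool.

import random

# A purely illustrative item; the dictionary keys "stem", "options", and
# "answer" are made up for this sketch and are not tied to any tool.
item = {
    "stem": "Which city is the capital of Australia?",
    "options": ["Sydney", "Canberra", "Melbourne", "Perth"],
    "answer": "Canberra",
}

def shuffle_options(item, seed=None):
    """Return a copy of the item with its answer options in random order."""
    rng = random.Random(seed)
    shuffled = list(item["options"])
    rng.shuffle(shuffled)
    return {**item, "options": shuffled}

randomized = shuffle_options(item, seed=42)
for letter, option in zip("ABCD", randomized["options"]):
    print(f"{letter}. {option}")

Because the shuffled item keeps its original "answer" field, the teacher can still build the answer key automatically after randomizing.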

Guideline 2.

Answer options should all be grammatically consistent with the stem.
If the grammar used in the stem makes it clear that the right answer refers to a female or is plural, make sure that all answer options are female or plural. Testwise students will quickly rule out answer options that are not worded in a way consistent with the stem.

Guideline 3.

The correct answer should not be the longest answer option.
There is a common tendency among teachers to write items in which the wordiest answer option is the correct answer. This may be because the item writer wants the correct answer to be undeniably correct and so provides many details, or perhaps because it takes less effort to write a short wrong answer than a long one.

Guideline 4.

Items should be independent of each other.
Testwise students will use any clues they can to discover the right answer, and sometimes the answer to question 2 can be found in the stem of question 1. This is a very common error in teacher-made tests; many of us got through school using this trick! The solution is to review the complete exam, not each item individually, before administering a test you made yourself.

Guideline 5.

There should be as many answer options as is reasonable.
The more answer options students must choose from, the less likely it is that students will select the correct answer purely by chance. On a hundred multiple-choice items with three answer options each, a student with no knowledge of the content can expect to get about 33 correct simply by guessing blindly. A five-answer-option item cuts the chance of a blind guess succeeding to 20%. A well-written matching section with 10 answer options makes guessing even more difficult (see the arithmetic sketched below).
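
The chance-score arithmetic above follows directly from the expected value of blind guessing: expected correct answers = (number of items) x (1 / number of answer options per item). A quick Python check of the figures:

# A quick check of the chance-score arithmetic above: the expected number of
# items answered correctly by blind guessing is
# (number of items) * (1 / number of answer options per item).
def expected_chance_score(num_items, num_options):
    return num_items * (1 / num_options)

print(expected_chance_score(100, 3))   # about 33 correct on a 100-item, 3-option test
print(expected_chance_score(100, 5))   # 20 correct (20%) with five options
print(expected_chance_score(100, 10))  # 10 correct (10%) with ten options, as in matching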

Guideline 6.

All answer options should be plausible.
If students do not even consider some answer options, those options do not operate as distractors. This is true, for example, of answer options that are obviously jokes.

Guideline 7.

In matching, answer options should be usable more than once.
This is a simple way to functionally increase the number of answer options available for each stem (a quick illustration follows below).
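
To see why reuse helps, consider a hypothetical 10-stem, 10-option matching section and a student who already knows 7 of the matches. The short sketch below compares the blind-guess odds for the remaining stems under single-use and reusable options; the numbers are illustrative only, not taken from the sources cited above.

# Hypothetical 10-stem matching section with 10 answer options, where a
# student already knows 7 of the matches and must blindly guess the rest.
num_options = 10
known_matches = 7

# Options usable only once: just 3 unused options remain for the 3 unknown
# stems, so each remaining stem is a 1-in-3 blind guess.
p_single_use = 1 / (num_options - known_matches)

# Options usable more than once: all 10 options stay in play for every
# remaining stem, so each one is still a 1-in-10 blind guess.
p_reusable = 1 / num_options

print(p_single_use, p_reusable)  # roughly 0.33 versus 0.1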

How can the use of these guidelines benefit your students, including those with special needs? 

All item-writing guidelines for classroom tests are designed to increase the validity and reliability of student assessments. Controlling for testwiseness increases the validity of tests because it eliminates the effect of irrelevant skills or abilities on test scores. Measurement experts refer to the variability in test scores that is due to some ability other than the trait of interest as irrelevant variance. When irrelevant variance is eliminated, score differences reflect only what they should reflect. Differences in student performance on quality teacher-made tests should reflect differences in learning and, usually, nothing else. By controlling for the ability to do well on objectively scored tests, test scores become a fairer measure of student performance. Eliminating or lessening the chance of choosing correct answers purely by chance increases the reliability of tests. Reliable test scores are measures of typical performance, and scores affected by luck do not indicate typical performance. All students benefit when test scores reflect actual learning.

References 

Research Articles

Frey, B. B., Petersen, S. E., Edwards, L. M., Pedrotti, J. T., & Peyton, V. (2003, April). Toward a consensus list of item-writing rules. Presented at the annual meeting of the American Educational Research Association, Chicago.

Haladyna, T. M., & Downing, S. M. (1989a). A taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 37-50.

Haladyna, T. M., & Downing, S. M. (1989b). Validity of a taxonomy of multiple-choice item-writing rules. Applied Measurement in Education, 2(1), 51-78.

Haladyna, T. M., Downing, S. M., & Rodriguez, M. C. (2002). A review of multiple-choice item-writing guidelines for classroom assessment. Applied Measurement in Education, 15(3), 309-334.