Traditionally a multiple-choice question consists of a stem, the choices, and, among those choices, the correct answer. But a valid question should also have metadata (information about the question) stored with it. Why? For at least two reasons:
- As important information for any other exam author who might use the question. In a typical assessment system the item pool and the exams themselves are separate, so more than one exam author may be using the items. Even if you anticipate being the only person using the question, what happens if you leave your current position and someone else becomes responsible for maintaining and administering these questions and exams? They will find this information valuable.
- For defensibility. Exams need to be fair, valid and reliable. A key step in the process of building valid exams is beginning with valid questions. You must be able to justify that the questions adequately cover the training content (content validity) and are important for the test taker to know in order to perform his or her job.
So, what metadata should you store? Here are some suggestions:
- Rationale. Why is this question important to the job?
- Estimated difficulty. This can be quantitative (what percentage of test takers do you anticipate will answer correctly?) or categorical (e.g. easy, medium, difficult). Note: once you have real exam data for the question, this estimate can be replaced by actual difficulty data.
- Reference. Where in the training material does this question come from (e.g. module, lesson, screen/page)? For a pharmaceutical company PI exam, this might be the relevant section of the PI.
- Cognitive level. Typically a level from Bloom's Taxonomy (or the revised taxonomy).
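If your assessment system lets you attach custom fields to items, the metadata above maps naturally onto a simple record type. The sketch below is one illustrative way to model it in Python; the class and field names are my own invention, not part of any particular assessment platform, and the difficulty logic simply prefers real exam data over the author's estimate, as suggested in the note above.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional, Union

class Difficulty(Enum):
    """Categorical difficulty estimate."""
    EASY = "easy"
    MEDIUM = "medium"
    DIFFICULT = "difficult"

class BloomLevel(Enum):
    """Cognitive levels from the revised Bloom's Taxonomy."""
    REMEMBER = 1
    UNDERSTAND = 2
    APPLY = 3
    ANALYZE = 4
    EVALUATE = 5
    CREATE = 6

@dataclass
class ItemMetadata:
    rationale: str                 # why this question is important to the job
    reference: str                 # e.g. "Module 3, Lesson 2, Screen 14"
    cognitive_level: BloomLevel
    estimated_difficulty: Difficulty
    # Anticipated proportion of test takers answering correctly (0.0-1.0).
    estimated_p_value: Optional[float] = None
    # Observed proportion correct; fill in once real exam data exist.
    actual_p_value: Optional[float] = None

    def current_difficulty(self) -> Union[float, str]:
        """Prefer actual exam data, then the quantitative estimate,
        then the categorical label."""
        if self.actual_p_value is not None:
            return self.actual_p_value
        if self.estimated_p_value is not None:
            return self.estimated_p_value
        return self.estimated_difficulty.value
```

For example, an item authored with an estimated p-value of 0.70 would report that value until `actual_p_value` is recorded, after which the observed statistic takes precedence.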