What is Item Response Theory?
Item response theory (IRT), also known as latent trait theory or modern mental test theory, is a relatively new approach to psychometric test design. Whereas classical test theory focuses on the test as a whole, item response theory shifts its focus to the individual items (questions) themselves.
A number of parameters may be used when estimating the ability of a person using IRT:
- The 1 parameter logistic model (1PL), also known as the Rasch model, uses only item difficulty as a parameter for calculating a person's ability.
- The 2 parameter logistic model (2PL) uses both item difficulty and item discrimination (the extent to which the item is measuring the underlying psychological construct) as parameters.
- The 3 parameter logistic model (3PL) uses item difficulty, item discrimination and the extent to which candidates can guess the correct answer as parameters.
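The three models above are nested, so a single function can sketch all of them. The standard 3PL formula gives the probability of a correct response as P(θ) = c + (1 − c) / (1 + e^(−a(θ − b))), where θ is the person's ability, b is item difficulty, a is item discrimination and c is the guessing parameter; the parameter names and defaults below are illustrative, not from any particular library:

```python
import math

def irt_probability(theta, difficulty, discrimination=1.0, guessing=0.0):
    """Probability of a correct response under the 3PL model.

    With discrimination=1.0 and guessing=0.0 this reduces to the
    1PL (Rasch) model; with guessing=0.0 alone it is the 2PL model.
    """
    # Logistic function of ability relative to item difficulty,
    # scaled by how sharply the item discriminates.
    logistic = 1.0 / (1.0 + math.exp(-discrimination * (theta - difficulty)))
    # Guessing sets a floor: even very low-ability candidates can
    # answer correctly by chance.
    return guessing + (1.0 - guessing) * logistic

# A candidate whose ability matches the item's difficulty has a
# 50% chance of success under the Rasch model.
print(irt_probability(0.0, 0.0))  # → 0.5
```

Note how the guessing parameter raises the whole curve: the same candidate on a four-option multiple-choice item with c = 0.25 has `irt_probability(0.0, 0.0, guessing=0.25)` = 0.625.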
Regardless of the model used, IRT based tests have a number of advantages over classical test theory based tests. IRT allows item banking, which means that candidates can each be given a completely different set of items while still receiving an equally accurate estimate of ability. IRT also allows the use of adaptive testing: tests that tailor their difficulty to each individual candidate. Increasingly, psychometricians are turning to IRT based models to design research and publish psychometric tests, with more research on IRT published every year. Similarly, most of the major test publishers, such as SHL and Saville Consulting, are using IRT based methods in their test development processes.
How will IRT affect test candidates?
Because IRT allows for item banking, test publishers can minimise the effects of cheating from unscrupulous candidates. With fixed form tests, candidates could print-screen their questions and share them with other candidates, giving them unfair advantages. With item banked tests, each candidate is given a unique set of questions, increasing test security. IRT also allows for adaptive testing, in which a test gets more or less difficult depending on the performance of the candidate, tailoring the test to their ability. IRT provides myriad benefits to the test publisher, the client and the candidate, and IRT based models are becoming increasingly popular within psychometrics.
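The adaptive-testing idea described above can be sketched as a simple loop: administer the unused item closest to the current ability estimate, then move the estimate up or down depending on the answer. This is a minimal illustration only; the bisection-style update below is an assumption for brevity, whereas operational adaptive tests use maximum-likelihood or Bayesian ability estimation:

```python
def adaptive_test(bank, answer, start=0.0, step=1.0, n_items=5):
    """Minimal adaptive-test sketch (illustrative, not production logic).

    bank   -- dict mapping item ids to their difficulties
    answer -- callable returning True if the candidate gets the item right
    """
    theta = start          # running ability estimate
    administered = []
    for _ in range(n_items):
        # Select the unused item whose difficulty best matches the
        # current ability estimate -- this is what tailors the test.
        item = min((i for i in bank if i not in administered),
                   key=lambda i: abs(bank[i] - theta))
        administered.append(item)
        # Step the estimate up on a correct answer, down on an
        # incorrect one, halving the step to converge.
        theta += step if answer(item) else -step
        step /= 2.0
    return theta, administered

# Hypothetical five-item bank; a candidate who answers everything
# correctly is steered toward ever harder items.
bank = {"q1": -2.0, "q2": -1.0, "q3": 0.0, "q4": 1.0, "q5": 2.0}
estimate, order = adaptive_test(bank, answer=lambda item: True)
```

Because each candidate's path through the bank depends on their own answers, two candidates rarely see the same sequence, which is exactly the security and efficiency benefit described above.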