Upon reading the National Association of Flight Instructors’ report on how a recent change in the FAA knowledge test question banks had increased the number of failures, my initial reaction sided with the FAA. Let me explain: When I was with Flight Training magazine, the banks of test questions were still public. With the release of every new bank I compared it to the old one and shared the changes with readers.
From experience I knew that writing good multiple-guess questions isn’t easy. The question must have a clear link to the knowledge taught and, to fully assess student understanding, subtle nuances should separate the right, almost-right, and wrong answers.
After a decade of evaluating its work, I held the FAA’s question-writing ability in high regard. The agency had elevated the need to RTFQ and UTFA (that would be read the freakin’ question, understand the freakin’ answer) to an art form, which is why I initially sided with it. And then I called NAFI Executive Director Jason Blair to learn how this problem came to light and what had developed since the initial March 3 report.
Our conversation reversed my take on the situation, which came to light at several university aviation programs. As professional educators, they teach knowledge, not the test. On average, 13 percent of their students failed the Fundamentals of Instruction (FOI) test. January 2011 was an average month. In February, 58 percent failed. Since the teachers and the knowledge they’d presented were unchanged, the test was the problem.
While I was talking to NAFI, Logan Derby, a student in the aviation program at the University of Nebraska-Omaha, e-mailed JetWhine. A CFI candidate, he bought a 2011 FOI test prep book to supplement his classroom learning. He thought the knowledge he learned and studied was current. “Man was I wrong! It wasn’t till I had failed the FOI written that I come to find out the FAA decide to change a number of questions.”
Tests assess more than student knowledge; by their outcomes (the number of students who pass and their average score) they also measure the test creator’s knowledge of the source material and ability to parse it into test questions. Done skillfully, a new test will result in slight, single-digit changes in these numbers. By its own test statistics, the FAA was skillful, at least through 2009, as suggested by the overall FOI results below.
The seeds that grew into this problem were likely planted last year, and they will affect all knowledge tests, not just the FOI, ATP, and flight engineer tests, as initially reported. At a meeting with industry June 15-16, 2010, the FAA said it would be revising knowledge tests and practical test standards over the coming year, AOPA reported.
Those changes included a dramatic fivefold growth of the knowledge test question bank. Writing 80,000 new questions is no quick and easy task, and validating them to make sure each one has a direct link to the source material and appropriate answers takes time. Validation ensures accurate assessment, which is why the tests exist in the first place.
Jason Blair suggests that the FOI is the canary in the mine of knowledge test questions because the FAA totally rewrote the Aviation Instructor’s Handbook, on which the FOI test is based, in 2008. Given the multitude of new questions, NAFI suggests that all tests are affected to some degree and that CFIs and their students prepare accordingly. But here’s the question: How do you prepare for multiple-guess questions that have no right answers, just two or three almost-right answers?
Oh, and let’s not forget that students must pay $150, give or take, to take these tests. Fortunately, the errant questions should not play as significant a role on the other tests because their source data, the FAA handbooks, have not undergone revisions equal to the Aviation Instructor’s Handbook. NAFI continues to work the problem, Blair says, and he has a meeting with the FAA in early April to discuss and, one hopes, resolve it. And let us hope this problem is acute and not chronic. –Scott Spangler