Testing enthusiasm continues to grow among friends and critics of higher education. My own enthusiasm for the endless standardized testing schemes on offer tends to be limited. Nonetheless, even as some protest the effects of standardized entrance tests such as the SAT and the ACT, others extol the power of accountability based on standardized exit tests. Leaving aside for the moment the issue of whether standardized tests serve a useful purpose in sorting students into colleges, the exit-testing conversation raises some interesting questions that we should try to answer before signing up for one form of testing or another.

One problem is deciding what we want the test to tell us.

We may want to focus on an institution’s performance in producing students who, in general, meet an average standard of performance. Or we may want to focus on the individual student and certify that each graduate meets an individual standard of performance. These different perspectives require different testing strategies.

The institutional perspective requires us to define a test reminiscent of industrial quality control. We sample the output of our educational process, the students, and test it for quality, using statistical quality control to tell us how well the process works. Practical issues may limit the success of this strategy: students selected for the sample may not take the test seriously because the results carry no individual consequences, and the high variability of student talent and motivation may make the statistical validity of the sampling difficult to demonstrate. Even so, the sampling technique gives institutions a way to respond to many of their accountability critics.
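
To make the sampling idea concrete, here is a minimal sketch in Python, using entirely hypothetical scores and sample sizes rather than any real assessment instrument, of how an institution might estimate its average exit-exam score from a sample of graduates and watch the margin of error shrink as the sample grows.

```python
# Illustrative sketch only: hypothetical scores and sample sizes, not a real
# assessment. Shows how a sampled average, with its confidence interval,
# describes institutional (not individual) performance.
import math
import random
import statistics

random.seed(42)

# Hypothetical population of graduating students' exit-exam scores (0-100 scale).
population = [random.gauss(72, 15) for _ in range(5000)]

def estimate_institutional_mean(scores, sample_size):
    """Sample graduates and report the mean score with a 95% confidence interval."""
    sample = random.sample(scores, sample_size)
    mean = statistics.mean(sample)
    std_err = statistics.stdev(sample) / math.sqrt(sample_size)
    margin = 1.96 * std_err  # normal approximation
    return mean, (mean - margin, mean + margin)

for n in (50, 200, 800):
    mean, (low, high) = estimate_institutional_mean(population, n)
    print(f"sample of {n:4d}: mean = {mean:5.1f}, 95% CI = ({low:5.1f}, {high:5.1f})")
```

The wider the spread of student talent and motivation, the larger the sample needed before the interval around the average says anything useful about the institution, which is exactly the statistical difficulty noted above.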

The individual perspective requires us to define a test reminiscent of the SAT or ACT that assesses every graduate’s level of achievement and certifies the competence each one has attained. In large institutions, administering such a testing program will surely be a challenge, but the benefit of establishing a quality metric may well justify the expense.

Both systems will tell us much about the output of the institution, but the utility of either approach depends on the audience. For some, the purpose of the testing is to help parents and prospective students evaluate the expected quality of an institution before enrolling. For others, the purpose of the testing is to guarantee that the graduates can perform at an expected level.

If we want to certify institutions as having the ability to produce a reasonable product out of the student material provided, then the institutional version is useful in picking an institution. We can predict the odds of our student emerging with good skills in much the same way we can predict the chances of an airline arriving on time. There’s no guarantee that any particular student will emerge fully qualified, but if the institutional average score is good, we can expect that the odds of a good result for our student will be high.

However, an employer may well prefer a certification of the individual student, because the employer needs individuals to perform. If the institution has a good average score, the employer might take a chance that the graduate is good, but if the graduate can pass the appropriate test as an individual, the risk for the employer disappears. This form of testing is what we already do when we screen students for admission to graduate or professional programs with tests such as the LSAT, the MCAT, or the GRE, or require them to pass licensure and certification exams in fields such as nursing or accounting. Those are, in effect, exit tests from college that determine whether the student, by whatever means, has graduated with the skills required for specific employment or particular graduate study. We may want to know that they graduated from a rigorous college, but we are much more secure in our judgment about individuals’ prospects if they can directly demonstrate their own capability.

The least useful testing, of course, is the indirect measurement in which we poll our students and ask them how they feel about their education. The very popular NSSE surveys are a prime example of this kind of instrument. Students who feel good about their education may have had a good learning experience, but absent a test of their actual academic achievement, we really don’t know what they learned while they enjoyed the learning process. Asking people whether they think they studied hard or had good interactions with their professors, among other questions of this type, tells us about customer satisfaction, but it tells us little about what students learn. This is reminiscent of student evaluations of teaching, a process that has almost nothing to do with learning and much to do with enjoyment and perception. We know what students learned in a class only when we give a rigorous test on the material; the relationship between enjoyment or satisfaction and learning is tenuous at best.

Another issue for the testing discussion is what constitutes a reasonable standard of performance. Much study has gone into tests that measure baseline reading, writing, mathematical, and reasoning skills. For some institutions, meeting these baseline goals, whether measured by institutional sampling or by individual exit exams, may be a challenge. For highly selective institutions, however, such minimal standards may prove to be yet another differentiator. Selective institutions might test applicants against the minimal standard and deny admission to those who do not pass, or they might require students to pass the test by the end of the first year. Then they could introduce subject-specific certifications to give their graduates an additional edge in the job market and in graduate or professional education.

We’re probably locked into some form of exit testing to certify our graduates because our internal processes of evaluation no longer inspire confidence or speak to common standards of performance. Given this inevitability, institutions should think carefully about the competitive advantages of different forms of exit testing, because it is certain that the results of whatever exit exam systems emerge will be used as part of the intense competition among institutions for students and funding.
