My response to “New Approaches to Assessing Institutional Effectiveness” by Steven Mintz, posted December 3rd:

Why not require institutions to benchmark their performance against peer institutions?

-- Because it’s unnecessary.  If your on-time graduation rate is 25%, you shouldn’t need a peer institution to tell you that you need to improve.

-- Because it’s often counterproductive. If your graduation rate is 25% and your peers’ is only 20%, that’s a recipe for complacency. Benchmarking also carries the risk of adopting a peer’s practice when you need to refine your own, and of refining your own when you need to adopt something new. And if the problem is in leadership, “management-based” efforts won’t help.

-- Because the link between accreditation and eligibility for financial aid is already “roughly right”. If accreditation is going to gate financial aid eligibility in an all-or-nothing sense, then it should be a low bar.

Why not require every institution to conduct an academic equity audit -- a curriculum-wide analysis of disparities in DFW rates by class and course sections, access to high-demand majors, and the course registration process?

-- If our government really has a commitment to “liberty and justice for all”, then this is not a bad idea. Any institution with equity as part of its mission or values should already be doing this. The main obstacle might be lack of knowledge about how to perform such an audit, but that can be learned, and learning is surely within the reach of an institution of higher learning.

Why not require, as part of the re-accreditation process, each institution to describe, and provide data on, the steps it is taking to improve the quality of instruction and student learning in the following areas?

-- Because more prescription (more check-boxes) is not the right answer.

-- Because institutions will happily describe, and provide data on, the steps they are taking in order to check the boxes. That does not mean the steps will be effective.

The four principles leave a lot to be desired:

1. The measure of who we are is what we do.

This is so vague as to be meaningless. The underlying description is helpful but still needs clarification. If “careful” means effective, producing information that guides useful action, then it is correct. “Verify” goes beyond the concept of “measure”: to verify, literally to “make true”, is not just to check a value; it is a commitment to achieving the goal.

2. What’s measured gets done.

This is just not true enough to be called a principle. If “done” means “actions taken”, it’s true but not useful. If “done” means “goal achieved”, it’s useful but not true. Without understanding the “why”, the relationship between the action and the improvement, improvement is not certain.

3. What you don’t measure can’t be improved.

A little better. Still, it over-emphasizes being “data driven”. Many improvements should be based on principles, and there data doesn’t help: by the time you measure morale, for example, it’s too late. At least there is some area where this “principle” is applicable. With the wrong measures, though, the measured values may “improve” without any real improvement happening.

4. What gets measured gets managed.

This has the same problem as #2. Unless management is competent, that is, understands the “why”, this is not helpful.

I agree that measurement is important. Measurement itself is under attack because we have been through an era of useless measurement, the result of the concept being taken out of context. Measurement is useful when it is part of learning.

As a general comment on the article: I think the link between assessment and accreditation is too strong. Assessment should happen for its own purposes (learning and improvement), without a connection to accreditation. Within the context of accreditation as eligibility, “equity” is a valid goal, as is confirmation of mandated data. The weakness in the current system is that it does not require effective systems for institutional learning and improvement.

-- Edward P. Manning
Manning Services
