
Quick: name your five best school teachers or college professors, the individuals from primary school through graduate school who made a difference – a big difference – in your education and your life. OK, pencils down.

What made these men and women effective instructors? Great teachers? Memorable mentors?

How do you know? By what metrics do you assess their impact and effectiveness? Which teachers and professors “added value”? Why do their names linger while other names are lost?

These are no longer “academic” issues best left to faculty meetings and AERA presentations. The question of value added – for elementary and secondary schools, for colleges and universities, and for individual teachers and professors – looms large in the continuing public conversation about quality and reform across all levels of education.

Moreover, the public conversation about value-added has been elevated a few decibels, courtesy of articles in the LA Times that began on August 14th about “value-adding” teachers and schools in the Los Angeles Unified School District (LAUSD).

Go to the source and read the LA Times series: intended for civilians, it draws on multiple years of student performance data on the California Standards Tests in English and math that begin in the second grade. The special analyses of student test data referenced in LA Times articles are based on 1.5 million test scores from some 603,500 elementary school students, covering the school years 2002-03 to 2008-09.

The LA Times series asks and tries to answer questions about effective schools and teachers in the LAUSD. Rather than reporting mean test scores for individual schools as a measure of effectiveness and outcomes, the LA Times analysis was able to track student performance by individual classrooms and teachers: “The Times obtained seven years of math and English test scores from the Los Angeles Unified School District and used the information to estimate the effectiveness of L.A. teachers — something the district could do but has not.”
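The Times has not published the details of its statistical model, and the actual analysis controls for far more than a single prior score, but the core logic of a value-added estimate is straightforward to illustrate. The sketch below is a deliberately oversimplified, hypothetical version – invented column names, one prior-year score as the only predictor – in which each student's current score is predicted from the prior year's score, and a teacher's “value added” is the average amount by which his or her students beat (or fall short of) that prediction.

```python
# Hypothetical sketch of a value-added estimate (not the Times' actual model).
# Predict each student's current score from the prior year's score, then
# average the residuals (actual minus predicted) for each teacher's students.

import numpy as np
import pandas as pd

def value_added_by_teacher(df: pd.DataFrame) -> pd.Series:
    """df columns (all invented for illustration): 'teacher_id', 'prior_score', 'score'."""
    # One-variable linear prediction of this year's score from last year's score.
    slope, intercept = np.polyfit(df["prior_score"], df["score"], deg=1)
    predicted = intercept + slope * df["prior_score"]
    residual = df["score"] - predicted  # positive = student beat the prediction
    # A teacher's "value added" is the mean residual across that teacher's students.
    return residual.groupby(df["teacher_id"]).mean().sort_values(ascending=False)

if __name__ == "__main__":
    # Made-up scores for two classrooms.
    demo = pd.DataFrame({
        "teacher_id":  ["A", "A", "A", "B", "B", "B"],
        "prior_score": [300, 320, 310, 305, 315, 325],
        "score":       [330, 345, 340, 295, 310, 320],
    })
    print(value_added_by_teacher(demo))
```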

The August 14th article that opened the series begins by highlighting two teachers in the same elementary school, located in

the poorest corner of the San Fernando Valley, a Pacoima neighborhood framed by two freeways where some have lost friends to the stray bullets of rival gangs. Many are the sons and daughters of Latino immigrants who never finished high school, hard-working parents who keep a respectful distance and trust educators to do what's best. The students study the same lessons. They are often on the same chapter of the same book. Yet year after year, one fifth-grade class learns far more than the other down the hall. The difference has almost nothing to do with the size of the class, the students or their parents. It's their teachers.

Identified by name and shown in photographs, the two teachers, as presented by the LA Times, make for a compelling story: on average, students in one classroom “begin the year in the 34th percentile and end in the 61st. The gains [make this teacher] one of the most effective in the district.” In another classroom down the hall, students on average lose “14 percentile points in math during the school year, relative to their peers districtwide.” This teacher, says the LA Times, “ranks among the least effective of the district's elementary school teachers.”

Further on, the article cites a teacher described by her principal as one of the “most effective” in the building, among the first in the LAUSD to be certified by the National Board for Professional Teaching Standards, someone who routinely attends professional development workshops and helps to train future teachers. Yet the LA Times analysis found that this well-regarded third-grade teacher “ranked among the bottom 10 percent of elementary school teachers [in the district] in boosting students' test scores. On average, her students started the year at a high level — above the 80th percentile — but by the end had sunk 11 percentile points in math and 5 points in English.”

The LA Times correctly notes that value-added is not (yet?) widely accepted as a model for school and teacher effectiveness and educational outcomes: “Though controversial among teachers and others, the method has been increasingly embraced by education leaders and policymakers across the country, including the Obama administration.”

And it should come as no surprise that the LA Times articles have generated controversy. The major reporting, on August 14th and again on August 21st, provides data, graphics, maps, and photos. Voices on all sides of the key issue – is value added an appropriate assessment model for K-12 schools and teacher effectiveness? – are offering up their opinions.

Pay attention, because this really does have consequences for higher education, and not just for methodology courses and teacher training curricula in the nation’s ed schools.

One of the early advocates for “value-added” assessment in postsecondary education was Alexander W. Astin, the founding director of The Cooperative Institutional Research Program (CIRP – aka the UCLA Freshman Survey or the Astin Survey), the nation’s largest continuing empirical study of higher education, and now professor emeritus of higher education at UCLA. (Disclosure: Astin was my dissertation advisor, mentor, and boss from 1978 to 1989.) Drawing on longitudinal studies of undergraduates, Astin’s work, notably Four Critical Years (Jossey-Bass, 1977) and What Matters in College: Four Critical Years Revisited (Jossey-Bass, 1993), documented the added value – the exceptional impact – that some colleges or collegiate experiences have on student outcomes as measured by academic performance and other individual and institutional metrics.

Stated (too) simply, Astin’s research, based on multivariate analyses of large, multiple, and longitudinal cohorts of undergraduates across a wide array of colleges and universities, confirmed that the impact of the college experience at some institutions surpassed predicted outcomes (grades and other measures of academic performance, retention in specific majors, degree completion, student satisfaction with the college experience, and so on), while the collegiate experience at other institutions impeded some student outcomes.

Astin was one of the seven members of the Study Group on the Conditions of Excellence in American Higher Education, whose 1984 report, Involvement in Learning: Realizing the Potential of American Higher Education, was intended by the U.S. Department of Education to serve as a postsecondary follow-up to the widely cited A Nation at Risk report about K-12 education issued by the Department in April 1983. Strikingly different in tone and tenor from A Nation at Risk (“If an unfriendly foreign power had attempted to impose on America the mediocre educational performance that exists today, we might well have viewed it as an act of war. As it stands, we have allowed this to happen to ourselves.”), Involvement in Learning advocated for “adequate measures of educational excellence [which] must be couched in terms of student outcomes – principally such academic outcomes as knowledge, intellectual capacities, and skills” (p. 16).

In its discussion about assessment, the 1984 Study Group argued that “higher education should ensure that the mounds of data already collected on students are converted into useful information and fed back [to campus officials and faculty] in ways that enhance student learning and lead to improvement in programs, teaching practices, and the environment in which teaching and learning take place. We argue that institutions should be accountable not only for stating their expectations and standards, but [also] for assessing the degree those ends have been met. In practical terms, our colleges must value information far more than current practices imply” (p. 21).

The value-added analysis of LAUSD schools and teachers conducted by the LA Times throws down a gauntlet to the assessment efforts at many colleges and universities. As noted by the 1984 Study Group, and like the LAUSD, most postsecondary institutions collect a rich array of data about their students that remain untouched for the purposes of analyzing impacts and outcomes.

Consider one example: student placement tests. Many colleges (especially large state institutions) require their students to take placement tests and/or “rising junior” examinations: the students who do not “pass” these exams must enroll in remedial courses. When they pass the remedial course, they move on, either to college-level courses (placement tests) or to upper-class standing (rising junior exams).

What happens to the data about the student experience in these courses? Are the data – scores on midterms and finals, as well as other metrics – used to help assess the impact of the course or the effectiveness of the instructor? What about state systems that have a common (freshman placement or rising junior) exam but multiple remedial courses offered across multiple campuses? Are some courses, instructors, or institutions more effective than others? Are the data analyzed in a way that they can be used as a resource (“how do we do better?”) rather than a weapon (“your students failed!”)?
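For illustration only, here is one hypothetical shape such an analysis might take, assuming a table of placement-exam and end-of-course scores (column names invented): compute each student's gain, then summarize the gains by campus and course section, so the comparison points at programs rather than at individual students.

```python
# Hypothetical sketch: turn placement/remedial-course records into comparative
# information by campus and section (all column names invented for illustration).

import pandas as pd

def section_gains(records: pd.DataFrame) -> pd.DataFrame:
    """records columns (invented): 'campus', 'section', 'placement_score', 'exit_score'."""
    records = records.assign(gain=records["exit_score"] - records["placement_score"])
    summary = (records.groupby(["campus", "section"])["gain"]
                      .agg(["count", "mean"])
                      .rename(columns={"count": "students", "mean": "avg_gain"}))
    # Sorting by average gain frames the question as "how do we do better?"
    # rather than "your students failed!"; the unit of comparison is the
    # course section, not the individual student.
    return summary.sort_values("avg_gain", ascending=False)

if __name__ == "__main__":
    demo = pd.DataFrame({
        "campus":          ["East", "East", "West", "West"],
        "section":         ["M01", "M01", "M02", "M02"],
        "placement_score": [42, 55, 48, 51],
        "exit_score":      [68, 70, 58, 60],
    })
    print(section_gains(demo))
```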

In 1980 the Southern Association of Colleges and Schools added institutional outcomes to the accrediting criteria for colleges and universities, a policy quickly followed by the other major regional accrediting agencies. For thirty years, colleges and universities have operated under an outcomes mandate but without a consensual methodology for assessing outcomes.

The September 2006 Spellings Commission report, titled A Test of Leadership: Charting the Future of U.S. Higher Education, echoed the 1984 Study Group’s concern about data and assessment, but with sterner language: accountability suffers from “a remarkable shortage of clear, accessible information about critical aspects of American colleges and universities…. There is inadequate transparency for accountability and for measuring institutional performance, which is more and more necessary for maintaining public trust in higher education.”

Yet a quarter of a century after the 1984 Study Group’s Report, most colleges and universities have yet to “ensure that the mounds of data already collected on students are converted into useful information and fed back [to campus officials and faculty] in ways that enhance student learning and lead to improvement in programs, teaching practices, and the environment in which teaching and learning take place.”

Business analytics methodologies and technologies such as data mining and data warehousing, already widely deployed across the consumer economy, are making a slow migration to campus. Initially deployed for enrollment management, these same methodologies and technologies can generate data, information, and insight that also contribute to the improvement of academic programs, professional development efforts, and institutional services.

But let’s acknowledge that the effective use of institutional data requires campus officials and policy makers to agree that the data will be used as a resource, not as a weapon. The challenge, as articulated by the 1984 Study Group, remains: how do we do better – how can we use data to aid and enhance program improvement efforts and professional development? In this context, value-added analysis, done well and used appropriately, can be a powerful and useful resource.

Addendum: A special note to entrepreneurial researchers thinking about large grant proposals to do similar value-added work for your local school district: it might be wise to think small. The statistical analysis for the LA Times reporting was supported by a $15,000 grant from the Hechinger Foundation. The analytical work was performed by RAND Corp. senior economist Richard Buddin, who worked as an independent contractor. No doubt the LA Times invested significant dollars and staff time aggregating the LAUSD data and preparing the newspaper articles. But it might be interesting to compare the project budget at the LA Times against university grant proposals. Here, too, the LA Times series throws down a gauntlet to the education community, not just about bringing data to aid and inform the conversation, but also, perhaps, about the costs of doing so.
