Public and private funders have spent billions of dollars -- sometimes wastefully -- on education initiatives like those in the STEM (science, technology, engineering, and math) disciplines without rigorous assessment and evaluation. Not asking for documented results when so much money is on the line misses a golden opportunity to determine whether such programs are indeed helping to improve the way students learn and to enhance their engagement in their studies.

Before more money is spent, we need to listen to those well-attested success stories -- what I like to call “islands of success” -- learn from them, and put assessment to work so that student learning can be improved in all fields, not least the STEM disciplines.

A Tough Look at the STEM Disciplines

Close on the heels of the student loan scandal, the U.S. Department of Education admitted that the federal government has poured billions of dollars into STEM education with little evidence that the money produced good results. More recently, Congress passed the America Creating Opportunities to Meaningfully Promote Excellence in Technology, Education, and Science (COMPETES) Act, leaving open the possibility that billions more may be poured into the effort to improve education in these crucial areas.

State governments, businesses and corporations, foundations, and private individuals have likewise invested heavily in these disciplines with little insistence on seeing results. One would think that the bigger the appropriation, the greater the need for in-depth assessment of results, especially when progress in improving our students’ performance in science and math is so tenuous. Let’s look at some of the readily available evidence.

  • The just-released math results of the 2007 National Assessment of Educational Progress (NAEP) show gains in grade 4 performance and smaller gains in grade 8 performance. While these data are encouraging in some ways, the real test is whether the gains can be sustained over time. For example, the National Center for Education Statistics -- which administers the NAEP -- reports that from 1995 to 2006, students showed an increase in grade 4 performance in science-related subjects, stagnation at grade 8, and a decline at grade 12. Will the NAEP math results follow this pattern?
  • SAT scores are also telling. According to the College Board’s news release of August 28, 2007: “The long-term trend in mathematics scores is up, rising from 501, 20 years ago to 511, 10 years ago to 515 this year.” But the last two years show declines: “Mathematics scores hit an all-time high of 520 in 2005, before slipping in 2006 and 2007.” This year, verbal scores -- now called “Critical Reading” -- dropped one point to 502. There was also a drop in the SAT’s new writing component.

Comparison of young people’s mathematical skills with their verbal (“critical reading”) skills isn’t encouraging either. (With all the money that has gone into the STEM disciplines over the decades, one would expect to see robust growth compared to the chronically undernourished areas of reading and writing.) While the NAEP indicates some gains since 1996 for minority students and improvement in both reading and math at the elementary and middle school levels, it documents stagnation at the high school level (12th grade) in both areas.

Stagnation is also the story in both realms over the longer term. The NCES report on the NAEP Long-Term Trend Assessment, which includes both math and reading scores over the past three decades, shows that 17-year-olds were doing no better in either area in 2005 than their counterparts in 1971 and 1996.

Where Are the Islands?

None of this suggests that funding should be cut for the STEM disciplines, but it does point to the need to look more closely at those “islands of success,” both in the STEM disciplines and in more “verbal” areas. We need to find the places where careful evaluation has been done and positive results demonstrated. The report from the Department of Education mentioned above warns us not to expect to discover many such islands on the STEM side.

Programs to strengthen student learning in reading, foreign languages, history, literature, and other humanistic fields may have done little better than their more quantitative counterparts, where one would expect rigorous evaluation, but they do have some successes to report. To be sure, their evaluations are often based on interviews or opinion surveys rather than on learning outcomes, but such assessments can be very useful. For example, when evaluators from the University of Pittsburgh’s Learning Research and Development Center assessed the impact of teacher professional development seminars at the National Humanities Center (where I was director from 1989 to 2003), they gathered data on how the participants taught the subject matter of the program both before and after their time at the center. They found that teachers shifted from more passive to more active teaching techniques, and teacher after teacher reported higher levels of engagement and student learning.

Seminars such as those at the NHC invite comparison with the Advanced Placement Training and Incentive Programs™ (APTIP) and Pre-AP Training and Incentive Programs™ in Texas. To date, with funding from ExxonMobil (through the National Math and Science Initiative) and others, these programs have provided rigorous training in math, science, and English for “almost 900 AP teachers and more than 7,800 pre-AP teachers in more than 230 high schools and 350 middle schools in more than 80 Texas districts.”

The results are encouraging: “In 1996, when an AP Incentive Program was started in 10 Dallas schools, the number of students per 1,000 juniors and seniors scoring 3 or higher on mathematics, science, and English AP exams was just two-thirds of the national average. Ten years later, these schools are two-thirds above the national average.”

Such success stories are out there, on both sides of the Two Cultures divide, in public institutions and private ones, large and small. They can be instructive to all of us, not least those whose goal is to support gains in student learning.

Philanthropy needs to ferret out the programs that really work and figure out what accounts for their success when so many others are at best mediocre. Federal and state governments and corporations need to do the same, making sure that their evaluation processes do not merely accumulate data but vigorously and systematically search for what works best -- and then supporting those projects generously, bringing them up to scale and extending their impact.

Do we really know what accounts for such successes? Maybe not, but I’ll wager that the programs that succeed have often evaluated their results and used those evaluations to guide planning and make successive improvements in program design. Whether that hypothesis proves correct or not, careful assessment of results, as we have repeatedly seen in our grants at the Teagle Foundation, helps the funder improve its grant programs over time. We learn from our successes, and, yes, our failures too.
