Walk into the exhibit hall at any educational technology conference (or meetings on accreditation or academic affairs, for that matter) and your head is likely to explode. Many of the products sound alike, with vendors spouting the same buzzwords ("adaptive!" "personalized!" "predictive!" "mobile!") and making seductive -- and often unprovable -- promises.

This is the landscape in which many provosts and other campus leaders responsible for student success are operating, and it is unsurprising that the dynamic has left many academic administrators (and the faculty members they work with) wondering about, if not downright skeptical about, the value of many academic technology tools.

Various efforts are under way to try to resolve this problem by injecting data into adoption decisions that have too often been driven by anecdotes, assertions or feelings. Some are national in scope, like the Courseware in Context Framework and the Jefferson Education Accelerator and its Edtech Efficacy Academic Research Symposium; others have local origins, like Oregon State e-Campus's online database.

The state of play of these burgeoning efforts to better judge the performance of technological tools and other instructional innovations -- and, importantly, the wisdom of adopting them -- was the topic of one in a series of interviews that Doug Lederman of Inside Higher Ed and “Inside Digital Learning” conducted at the Online Learning Consortium's Accelerate conference in Orlando, Fla., in November.

The discussion featured Kristin Palmer, director of online programs at the University of Virginia, and Karl Rectanus, chief executive officer of Learn Platform, which aims to inject peer review into the educational technology analysis and buying process.

The interviews were sponsored by OLC and “Inside Digital Learning” and conducted via the Shindig video platform.

A partial, edited transcript of the conversation with Palmer and Rectanus appears below.

***

IDL: The focus of our conversation today is about the move to focus on the efficacy of technology tools in higher education. But we were chatting a little bit about maybe making this more of a conversation about education than a conversation about technology. I’d ask both of you to explain why you think that framing makes the most sense.

Palmer: There's been an evolution over the last 10 years, [from] looking [at things in a] very tool-specific way and what bright shiny object [can we use], and sort of meeting needs [without knowing] the data around learning outcomes. And now all of a sudden we have tools that we can look more at measuring outcomes … Now the dialogue's really changing to where we're more passionate about education. We know education is needed; we know it's needed broadly in many different instances. So I'm excited to see that migration of a conversation to the larger umbrella. I think we'll get a lot more participants at the table and engaging dialogue. I think that can only lead to good things.

Rectanus: For the early part of this evolution, the overall question was, what is education technology? Does ed tech work?

Palmer: It also pits people against each other: faculty, you can't teach; we can teach better with these tools.

Rectanus: I think what I’ve enjoyed watching is the belief that we can actually do more. We can actually figure out, in which situations tools or technologies can do this and be supportive of the educational practice. At Learn … our slogan is results should matter. Because when results matter, everything changes. Policies change. Practices change. Purchasing changes, because there's the ability to understand what's driving these educational outcomes for both individual students and student groups and those learners, but also for the organizations. Because it's a pretty complex endeavor right now to personalize learning at scale.

IDL: You can walk into an exhibit hall here at OLC or Educause, and there are dozens if not hundreds of companies and others pitching products or tools or platforms that promise to do certain things. And our Survey of Faculty Attitudes on Technology, which we published a couple of weeks ago, reveals a lot of skepticism about overpromising. Karl … maybe let's step back a little bit and define efficacy, if we could.

Rectanus: Let me put some numbers around what we see. The Learn Platform product library has almost 5,000 education-technology tools. It's not just who shows up on the convention exhibit hall floor. But with over 100,000 faculty sharing their experience with these tools, what we found is they're not asking, what's a tool? They're asking, which tool can answer a question or solve a problem for me, in my situation? Because now we have data on what that looks like.

So … if you ask about efficacy, the question really is not just about student outcomes. If I'm a faculty [member], it's how effective is this tool at solving the problem that I'm trying to solve? If I'm an individual or department leader, it's how are we increasing or spending budget dollars appropriately and effectively to get the best outcomes, whether that's retention and completion or learning and work-force drivers?

I think certainly outcomes are what motivate all of us, right? As educators we care, and we got into this to drive growth in student learning. We want to learn from each other. And we want to educate the overall market on what's real and what works. But ultimately, there are so many individual aspects of what's effective for which group or which situation.

Palmer: And I would say, in our personal experience at the University of Virginia, I don't think you'll find any administrator or board member anywhere who would say they're not trying to create a more engaging, successful student experience -- with retention and high-quality outcomes, where the learner feels successful -- or trying to improve the faculty experience, so that faculty feel they're being effective and engaged in what they're doing. And so we look at it through the lens of improving the student experience and improving the faculty experience. There's a difference between outcomes and outputs, is what I've heard people talk about.

And right now … there are so many tools. And there are so many anecdotal stories, and people are just having a hard time understanding what actually moves the needle, whether it's student engagement or learning outcomes or faculty engagement and support.

And so [we're] trying to understand how we can provide an environment where people feel supported but there's also some data out there that we're sharing. It's in our DNA. We're a research institution. So [we're] trying to figure out … what do we measure, how do we measure it in a robust way, and how can we disseminate that.

IDL: You were at the Leadership Network on Tuesday. A lot of conversation about the role of faculty … We had some questions about their embrace of, and questions about, instructional technology tools. Where are we on the belief among the faculty that student learning is the primary goal of the institution?

Palmer: You guys have your faculty survey that came out [and that presented one take on the opinions]. I was on the research group for the EdTech Efficacy Symposium that happened in May. Our group was specifically looking at higher ed ed-tech decision makers. And there was a lot of information in there -- perhaps we knew it anecdotally -- about for-profits and privates making [decisions] more top down: we're going to implement these tools, and we're going to standardize centrally, and there will be efficiencies. All of those buzzwords, right? Versus, in a larger nonprofit, where we're decentralized and we really want the faculty at the table. And we want them to be part of the conversation.

But [for faculty members] there are so many competing priorities. And that was one of the things that came out in the data for us. People are just inundated … They have a lot of emails in their inbox. And they have a lot of different requirements, whether it's peer-review journal reviews or sitting on advisory boards. So it's hard to figure out how to improve your teaching. We have a history of understanding how to do that residentially, face-to-face. But through different modalities, and utilizing different tools that are changing all the time, [that's] a challenge.

Rectanus: We respect academic freedom. And whether you're in the K-12 or the higher ed [sector], most of this “defining a better way” has been an individual pursuit. You know, it's, “I piloted something or I tried this thing and it worked or it didn't work.” And that data, that insight, has gone off into the ether. And at best has been an anecdote that's either widely or not widely shared, right?

At best we get a [conference] poster or a viral tweet … This is why we've worked with a bunch of institutions to define a framework. On one hand, we've had anecdotes, or feelings, that have driven these decisions. And on the other, as academia, we care deeply about the randomized controlled trial, right? And that's been sort of the tool, right?

IDL: And those take four years …

Rectanus: I spoke with a company just last week that's going to release its results from a randomized controlled trial. It concluded two years ago. They are three versions [later] on the product from when that data was collected. But that's been their option. Now, with the Learn Platform, we've built this in so that you can run a rapid structured pilot, which quantifies those qualitative insights.

But you can also then move to a correlational analysis or comparative study or controlled trial in a way that doesn't overwhelm. Most importantly, it takes advantage of the expertise in all these institutions that are already running these analyses. They're just running them alone.
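To make the distinction concrete, here is a minimal sketch of the simplest version of such a comparison: testing whether an outcome metric differs between course sections that piloted a tool and sections that didn't. The file and column names are hypothetical, and the sketch illustrates the general approach, not Learn Platform's actual implementation.

```python
# Minimal sketch of a pilot-vs.-comparison analysis. The CSV and its
# columns (used_tool: bool, completion_rate: float per section) are
# hypothetical stand-ins for whatever export an institution has.
import pandas as pd
from scipy import stats

sections = pd.read_csv("section_outcomes.csv")

pilot = sections.loc[sections["used_tool"], "completion_rate"]
control = sections.loc[~sections["used_tool"], "completion_rate"]

# Welch's t-test: does mean completion differ between the two groups?
# (equal_var=False avoids assuming equal variances across groups.)
t_stat, p_value = stats.ttest_ind(pilot, control, equal_var=False)

print(f"pilot mean:   {pilot.mean():.3f} (n={len(pilot)})")
print(f"control mean: {control.mean():.3f} (n={len(control)})")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```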

IDL: One of the structural problems in higher education … is the lack of systems thinking. At an institution like UVA, you can have 50 different experiments going on, and those dots might never be connected -- let alone connected with Virginia Commonwealth and Radford … So, figuring out how to connect experimentation and build common understandings: it's hard to spread good practices of various kinds in an enterprise and industry … Where do you see opportunity for bridging that gap?

Palmer: Within the University of Virginia, we've spent the last year and a half looking at how we're doing research as an institution, at ways to make the faculty experience better and more productive, and at how to more centrally track [research] as something that we're producing as an institution.

Then within the Commonwealth of Virginia, we have the Network Learning Collaborative of Virginia. That's 13 member institutions; we get together bimonthly, and we have an annual retreat that UVA hosts in May. We share best practices, and we're trying to work together to create more event scholarships and research scholarships for looking at what our best practices are.

And we're looking more regionally at what we can be doing to move the needle for improving education across the Commonwealth of Virginia, versus just in our particular institution.

IDL: I believe that slow change in higher ed has been a historic advantage. I think in general it remains [that way].

Rectanus: I don't disagree with that at all. I think you hit on another important challenge, which is incentives and disincentives. If you want to change a system, it happens at the edges. It happens at the connections.

So first off, getting some foundational context that is clear and concise and understandable. One way to say that: if we want to figure out what we're using that works, we should probably just know what we're using, to start.

Most of the institutions we talk with [start from] this idea of, we just want to do an inventory to get control first … Most of the institutions that work with Learn Platform start with this rapid understanding, and with just solving some headaches around procurement and communication between faculty and their administration about what's actually being used.
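That inventory step is mechanically simple. As a rough sketch -- assuming a hypothetical export of single-sign-on events, which is an illustration, not Learn Platform's method -- it can be as basic as counting which tools actually show up in use:

```python
# Toy sketch of the "know what we're using" step: tally tool logins from a
# hypothetical single-sign-on event log. Column names are assumptions.
import csv
from collections import Counter

tool_counts = Counter()
with open("sso_events.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: user_id, tool_name, timestamp
        tool_counts[row["tool_name"]] += 1

# The 20 most-used tools, which often differ from the list of tools the
# institution is paying for -- the gap is itself a finding.
for tool, logins in tool_counts.most_common(20):
    print(f"{tool:30s} {logins:>8d} logins")
```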

And then … the next opportunity is … if we could share a rapid-cycle evaluation that happened, and provide the context that this happened at a large university, in a specific domain or department, for these specific students … I just need that context to be able to say, you know what? That sounds like me. I'm going to look at that data quickly. Having that context in a unified platform … frankly, the cognitive load of getting context around every one of these things becomes the overwhelming challenge. It's not that people don't want to deal with it. It's, gosh, I don't trust it because I don't have that context.

IDL: We know that context [matters]. So many of these conversations are oversimplified. Can we help students write better? Can we do X or Y? Well, which students? At what kind of institution? Are they 27-year-old mothers? Are they 19-year-old freshmen? Who are we talking about? There is a lack of sophistication in a lot of our conversations about this. And it's hard to drill down, probably, because we haven't provided detail.

Rectanus: One of the opportunities that I think the technology paradigm provides in the education experience right now is something we didn't have with, for example, textbooks. I don't know how much the student actually read the textbook, or which pages they engaged with or didn't engage with, etc. But in this technology environment, an institution used our system to analyze a social learning platform that recommended a certain amount of engagement -- a dosage, traditionally. They analyzed a set of 30 courses, along with courses that didn't use the system, and the research question was: does this social engagement increase completion and attendance and engagement?

And what they found was that, on net, it had a slight positive effect. But for a group of those students -- not reaching the dosage but using the system a certain amount -- they saw significantly higher engagement.

And it wasn't what the product company recommended, but in their situation, for their students, they were able to identify in like a couple of hours, “hey, wait a second, this is happening for these students. This is a practice that we can share with other faculty and move forward.”
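A hedged sketch of that kind of dosage analysis might look like the following: bucket students by how much they used the tool, then compare outcomes per bucket. The cut points, file name and column names are illustrative assumptions, not the institution's actual study design.

```python
# Hypothetical dosage analysis: group students by weekly usage of a tool,
# then compare completion rates across dosage bands. All names and cut
# points are illustrative.
import pandas as pd

usage = pd.read_csv("student_usage.csv")  # assumed columns: minutes_per_week, completed (0/1)

bins = [0, 15, 30, 60, float("inf")]
labels = ["low", "moderate", "recommended dose", "above dose"]
usage["dose_band"] = pd.cut(usage["minutes_per_week"], bins=bins,
                            labels=labels, right=False)

# Completion rate and group size per band. A below-recommended band showing
# the strongest effect would mirror the finding Rectanus describes.
summary = usage.groupby("dose_band", observed=True)["completed"].agg(["mean", "count"])
print(summary)
```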

IDL: But maybe focus on those students.

Rectanus: Exactly. And making the decision -- or at least informing the informal discussions. As opposed to, the institution has X number of students, so we need X number of licenses, they're now talking about, hey, what does this look like for the budget? What can we push? So we can expand that pilot, or target a more useful value proposition.

Palmer: And I think the other area is data: we look so much at data, and now we've got all these data, and how do we visualize the data we have to make decisions on? I know, in this specific platform, you've got all sorts of dashboards that are already there. It's like, here it is. And so if you're an institution and you're looking for kind of an easy way in, there's lots of opportunity there.

IDL: A question from the OLC exhibit hall: "Are you familiar with the vendor evaluation portal that the [University of North Carolina] system put in place? … Any thoughts on this or further ideas as to how schools can mimic this or improve?"

Rectanus: I know a lot about this because it was Learn Platform. UNC launched something called the Learning Technology Commons. It was a white-label version of our platform. They invited all vendors. In their case, they started with vendors that were providing tools for under $25,000. And they said, anyone can come through, but you have to agree to North Carolina law -- essentially a pre-contract -- and provide pricing that will be consistent across our 17 institutions.

And by the way, you can provide any discounts that you'd like -- time based, bulk, you name it. Within 90 days, they had 270 products approved and through the system, visible to 20,000 faculty across the institutions with an average discount of 21 percent across all those, because it streamlines the process for everybody.

IDL: Going back to that walking-into-the-exhibit-hall thing, and the data -- the overwhelming nature of this whole environment.

Palmer: And there's a passion to want to improve education … Our board -- we've got a group of amazing people who have done phenomenal things in their lives. But they want to make a decision, and their cycle is much faster. And how do you provide a framework and deep thought on the things you're proposing -- things you can show the efficacy of -- so that it can really improve education?

Rectanus: Where I hope we're going in this market, is we have had decades of a trust gap between educators and product providers. And frankly [some] people … have benefited from a lack of transparency.

I really love this project because it was about faculty and institutions sharing what they needed, being slightly more demanding about what they needed to understand in a clearer way, and delivering on that so that we can inform decisions.

IDL: That trust gap is hard, when your head is exploding because of buzzwords … Do you think institutions in general are doing enough to focus on … starting with the problem you're trying to solve and then choosing tools?

Rectanus: I think what we see is, there's a high interest in the last question -- in the “for which students is this tool working?” But what we're seeing is an overwhelming feeling about where to start on that …

First, let's figure out what we're using. And then let's clearly understand the opportunities that we have, and incentivize the right things in engagement, and then analyze against impact.
