
I am not a cognitive psychologist, I didn't even play one on TV, and it's been a while since I stayed in a Holiday Inn Express, but even I can see many problems with the notion that artificial intelligence technology will enhance how students learn online.

Writing here at Inside Higher Ed, Ray Schroeder recently outlined the ways that AI is improving at recognizing human emotions. He cites an industry professional who, I'm certain, has no vested interest in promoting a particular agenda or hyping the market potential of a particular technology, and who says, "By 2022, your personal device will know more about your emotional state than your own family."

Schroeder tells us that, "Customer service chat bots can sense when a client is angry or upset." In my personal interactions with this scourge, this is not a particularly tough feat. I'd think even a relatively crude AI interface could tell I've reached the end of my rope when I type: "What the f--k does a person have to do to talk to a g-d human being? Is there anyone there? Anyone? Hello? Hello?"

I am an extremely even-keeled person in real life, to the point where those who know me best would occasionally like me to be a little less keeled, but I don't think I have ever been more upset than when a customer service chat bot chastised me for my use of profanity after I'd reached peak frustration.

As with other AI technologies, the conversation around emotion-detecting software tends to ignore questions about the underlying values and dynamics that surround learning. By privileging "accuracy" and "efficiency," we're privileging a kind of frictionless experience, but while frictionless may be desirable for customer service, it is not something we should attach to learning.

Friction makes heat. Heat is energy, and energy is a necessary ingredient in the learning process.

For example, sometimes a misreading of emotions between humans creates deeper engagement: a response inconsistent with what we expect prompts us to engage our empathy, adjust and improve our understanding.

Can this process be a little messy, particularly online? You betcha. This is why it's even more important to practice it within the confines of a college course where a trained and caring faculty member can help this process along, provided there is a faculty member teaching the course rather than an AI bot.

Second, the least interesting part of emotions is the "what." Far more important is the "why." Am I distressed because I'm hangry, or because I'm experiencing something more significant? Do I need an energy bar or a hug? What if what I most need is human connection, even if that connection is inefficient and imperfect? What happens when a bot can detect an emotion, but because we've outsourced the monitoring to a non-human, there's no follow-up that can intervene in real ways?

Third, once I know I'm interacting with or being monitored by AI, I will change my communication to match that feedback loop. Consider the recent reporting by Benjamin Herold at Education Week, which found a student who, knowing their communications were being monitored, tried to alert the authorities to concerns about a boy acting strangely by typing them into a Google doc they expected the algorithm to flag.

It is a very small leap to see the many ways in which students will be conditioned to game the algorithm to please the AI, the same way that Les Perelman's BABEL Generator could fool the GRE auto-scorer with literal gibberish.

Employing AI in online courses actually strikes me as very similar to the use of algorithms to grade student writing. It is a way to score and surveil -- this student gets an A, that one seems angry -- but it is nothing like genuine engagement. It claims personalization, but it is the person who must please the algorithm, not the other way around. To see these technologies as "successful" means defining down the educational experience to something they can actually do and then calling it "education." Never mind what's lost in the process.

The only way this helps prepare students for the complexities of the future is if we're simply getting a jump-start on programming them to interact with disembodied and dehumanized technology, which, come to think of it, is maybe the point of all the AI boosterism.

While big data approaches are good at showing us trends and tendencies in aggregate, they are not and never will be perfectly predictive at the individual level. The faith some place in the potential of this technology is near magical.

For example, consider this sentence from earlier in this post: He cites an industry professional who, I'm certain, has no vested interest in promoting a particular agenda or hyping the market potential of a particular technology, and who says, "By 2022, your personal device will know more about your emotional state than your own family."

My human readers almost certainly recognized the sarcasm very much intended in my framing of the quote. Would an AI bot?

Even if it could, that wouldn't be any fun, would it?
