The ubiquity of mobile computing -- and the rise of algorithms that determine what we watch, hear and experience -- is raising frequent, and perhaps well-founded, concerns about the disintermediation of humans from decision making.

Chatbots respond to our most obscure queries in seconds. Algorithms help us pick the perfect restaurant, partner or job candidate. Artificial intelligence, in particular, conjures fears of a dystopian future where Elon Musk’s “demons” come for more than just our jobs. And on college campuses, worthy questions are being asked about the incongruity between higher education’s mission and the rise of analytics that sort and filter students and faculty in ways that undermine that mission’s promise.

Of course, the future need not be so bleak. The machine-learning underpinnings of AI present not just risks but profound potential to address some of higher education’s most vexing challenges. And striking the right balance on AI should hinge less on the false choice of whether to embrace its potential than on preserving sufficient control to shape the contours of its influence.

Consider a pattern that many of us know all too well: a discussion or debate among friends prompts participants to reach for their phones and consult Google. We use technology to settle questions of fact but retain control over the dialogue and debate around those facts. Technology allows us to spend more time on perspective and nuance. The conversation advances.

But contrast that scenario with troubling examples of the balance of control tipping toward AI. Algorithmic trading has sidelined humans on Wall Street when it comes to short-term trading decisions; entire firms are now organized around the needs of the algorithms developed to buy and sell stocks. Unintended consequences abound, and regulators are struggling to articulate a framework that strikes the right balance.

Advances in AI have also allowed media outlets to automate coverage of certain news events. Kris Hammond, co-founder of Narrative Science, estimates that 90 percent of all news could be written by computers by 2030. Meanwhile, the rise of commentary bots -- posting on news articles and across social media -- has already shown the power AI has to affect discourse. Nuance is lost -- our perspectives shaped by a dialogue between machines.

But, in an ironic twist, artificial intelligence is also demonstrating potential to make human interactions more meaningful. AI requires massive amounts of data to spot patterns, and human interactions are among the most complex, data-rich interactions in existence. Social psychologists have, for decades, speculated on what makes particular interactions more successful, rewarding or fun than others. But compared with the data sets that AI systems draw upon, the social sciences have worked from precious few data points.

Perhaps there is no better example of AI’s potential to augment positive human interactions than on a college campus.

In 2016, Georgia State University began using an AI chatbot to respond to students’ questions about financial aid and enrollment issues. Using natural-language understanding, the bot can comprehend and respond automatically to 2,000 distinct queries. The university says the technology helped decrease “summer melt” by 20 percent relative to a control group of its students. At Echo360, we recently partnered with Amazon Web Services to incorporate Amazon Transcribe into our video platform. Using AI technology, Amazon Transcribe quickly creates highly accurate transcripts of videos, making course content more accessible than ever before.
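
To make that concrete, the sketch below shows the kind of API call such an integration rests on: submitting a lecture video stored in Amazon S3 to Amazon Transcribe and polling for the finished transcript. It is a minimal illustration only -- the bucket, file and job names are invented, and it is not Echo360’s actual integration.

```python
# Minimal sketch: request a transcript of a lecture video with Amazon
# Transcribe. Assumes AWS credentials are configured and the video has
# already been uploaded to S3. Names below are hypothetical.
import time

import boto3

transcribe = boto3.client("transcribe", region_name="us-east-1")

job_name = "lecture-captions-demo"  # hypothetical job name
transcribe.start_transcription_job(
    TranscriptionJobName=job_name,
    Media={"MediaFileUri": "s3://example-bucket/lecture01.mp4"},  # assumed bucket/file
    MediaFormat="mp4",
    LanguageCode="en-US",
)

# Poll until the job finishes, then print where the transcript lives.
while True:
    job = transcribe.get_transcription_job(TranscriptionJobName=job_name)
    status = job["TranscriptionJob"]["TranscriptionJobStatus"]
    if status in ("COMPLETED", "FAILED"):
        break
    time.sleep(10)

if status == "COMPLETED":
    print(job["TranscriptionJob"]["Transcript"]["TranscriptFileUri"])
```

The resulting transcript can then be attached to the video as captions or searchable text -- the machine does the transcription, while instructors keep control of the content itself.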

More dramatically, AI can also inform the perspective of educators to improve the ways in which they identify and respond to the challenges of students, unlocking the greater potential of teacher-student relationships and allowing educators to chart learning paths shaped by the personal needs of individual students. It’s a development that has fueled debate about possible unintended consequences as behavioral “nudging” becomes the norm, and sophisticated analytics flag students in need of intervention. It is here -- where AI perhaps holds the most potential to transform learning for the better -- that educators are most at risk of losing control.

How will institutions and regulators address inherent risk, as an abundance of data fuels an array of new data privacy concerns? Will the advent of personalized -- or algorithmic -- teaching and advising tools enable higher education to achieve unprecedented scale to reach an increasingly diverse student population? Or will data-driven predictions of student success be used to sort, filter and ration opportunities?

AI tools can be used in the admissions process, for example, to help identify certain keywords in essays or other admissions materials. Such tools could help sidestep inherent human biases and identify promising students whose applications would otherwise escape the notice of admissions officials. But those tools should be limited to enhancing administrators’ abilities to find particularly promising applicants -- and not to automatically shut out students who fail to grab the tool’s attention, as the sketch below illustrates.
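
As an illustration of what “enhancing, not replacing” might look like in practice, the sketch below surfaces each essay’s most distinctive terms for a human reviewer using TF-IDF weighting. The essay snippets are invented and the approach is a generic stand-in, not any institution’s actual admissions tool.

```python
# Hypothetical sketch: flag each essay's most distinctive terms with
# TF-IDF so a human reviewer can decide what merits a closer read.
# Illustrative only -- the essays are invented.
from sklearn.feature_extraction.text import TfidfVectorizer

essays = [
    "I founded a robotics club and mentored younger students after school.",
    "Working nights to support my family taught me persistence and focus.",
]

vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
weights = vectorizer.fit_transform(essays)
terms = vectorizer.get_feature_names_out()

# For each essay, list its highest-weighted terms for the reviewer.
for i, row in enumerate(weights.toarray()):
    top = sorted(zip(terms, row), key=lambda pair: -pair[1])[:5]
    print(f"Essay {i}:", [term for term, score in top if score > 0])
```

Note the design choice: the output is a reading aid that highlights applications for attention; nothing here rejects an applicant automatically.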

The future of AI doesn’t have to be scary. But as educators and entrepreneurs, we have a responsibility to ask critical questions -- and avoid easy answers that might undermine our institutions’ democratic promise. Because striking the right balance -- in education, and beyond -- should hinge less on the false choice of whether to embrace AI than on retaining the control that shapes the limits of its influence.
