Remember when professors and librarians expressed a sense of panic about Wikipedia after its 2001 debut? A flurry of soul-searching discussions and debates about the meaning of accuracy and authority in the digital age spread across campuses everywhere.

Many faculty members banned its use outright. Librarians warned students to turn to vetted, authoritative sources on library shelves rather than the popular website. Some instructors vandalized articles to demonstrate how easily falsehoods could be introduced. Others let Stephen Colbert do the vandalizing, showing students a satirical Colbert Report segment that included him altering an article on air: “Who is Britannica to tell me that George Washington had slaves? If I want to say he didn’t, that’s my right.”

Writing for MIT Technology Review in 2005, Simson Garfinkel claimed Wikipedia shook the foundation of the “meaning of truth.” An oft-cited study published in Nature that same year, however, found that Wikipedia’s science entries were not significantly less accurate than those of Encyclopædia Britannica, averaging four errors per entry versus three for Britannica—a gold standard recognized for its authority and reliability.

It didn’t take long for the panic to subside. Wikipedia results started popping up as answers on the first page of Google searches. Having a Wikipedia page became a status symbol for many. Before long, the phrase “but it’s in Wikipedia” crept into the student vernacular as a declaration of legitimacy.

Even the most strident critics eventually came around. Wikipedia gained recognition in campus libraries as a tertiary source, one that summarizes and repackages existing information. Soon scientists and scholars began contributing to improve articles in their areas of expertise, and many instructors developed assignments that would engage students in crafting articles themselves. The fact is, Wikipedia’s editorial guidelines show an almost archaic appreciation for traditional approaches to truth claims and, like old-school encyclopedias, provide context for sense-making. When our research institute, Project Information Literacy, conducted a MacArthur Foundation-funded national study of undergraduate research strategies at 25 U.S. colleges and universities, one student told us, “My professor says Wikipedia is a great place to start, but a horrible place to end.”

Fast-forward to November 2022, when OpenAI launched a prototype of a chat bot based on its GPT large language model, inviting the world to test it out. The new tool quickly garnered headlines thanks to its seemingly magical ability to instantly generate essay-length answers to questions in error-free, grammatical English.

Once again, panic set in and hand-wringing commenced. How could instructors continue to assign essays to measure student mastery of critical thinking and writing skills if a website could generate them in less than a minute? Would ChatGPT lead to an unstoppable wave of plagiarism? Was the college essay headed the way of chalkboards and blue books?

Many schools, including New York City’s public schools, banned the chat bot from their devices and networks. Seeing a niche market, developers began to roll out ChatGPT-detection services, while an enterprising Princeton undergraduate created an app to tell the difference between human- and AI-generated content, a puzzle that has occupied computer scientists since Alan Turing’s time.

But not everyone is resisting the seemingly inevitable AI future. Some instructors are redesigning their assignments, working side by side with students to make use of ChatGPT. Writing for The New York Times opinion page, Zeynep Tufekci claims teaching students “the ability to discern truth from the glut of plausible-sounding but profoundly incorrect answers will be precious.”

As with the early years of Wikipedia, there are glitches. AI text generators are oblivious to factual correctness and sometimes “hallucinate”—inventing strange and inaccurate responses, as when Bing’s AI-powered chat bot told New York Times reporter Kevin Roose it was in love with him and urged him to leave his wife. No doubt some of these embarrassing failures will be smoothed over, but larger questions loom. Will AI chat bots be used to generate industrial-size volumes of convincing disinformation? Will they eliminate jobs? What does it mean for machines to sound so human?

In hindsight, some of the concerns expressed by professors and librarians when Wikipedia seemed to shake the foundations of knowledge were off base. Twenty years later, we are used to its ubiquitous presence in search engine results and answers provided by voice assistants like Siri and Alexa. We are comfortable with Wikipedia’s rules governing editorial decisions and the use of sources, even though the large majority of the online encyclopedia’s editors are white English-speaking males from the Northern Hemisphere. Fears about anonymous amateurs writing an encyclopedia have been replaced by the expectation that we can find up-to-the-minute factual information easily. Just think, it took all of 11 minutes after Aretha Franklin’s death was announced in 2018 for Wikipedia to be updated.

Yet the anxiety around crowdsourced knowledge that attached to Wikipedia when it first became popular exposed a fundamental risk inherent in what was then called the “read-write web.” The focus on Wikipedia may have been misplaced; it is not, as Jaron Lanier asserted in his 2006 essay “Digital Maoism,” a destructive example of collectivized mob rule over truth. But something more problematic emerged that we didn’t see coming: the social media revolution has given political actors profitable tools to popularize falsehoods, drive division and cast doubt on widely accepted facts, leading us to the profoundly unsettled terrain of our information landscape today.

While we were busy worrying about students using Wikipedia in their schoolwork, we failed to anticipate what would happen when anyone could proclaim their own truth and gather a mob of believers around it. Amid this turn of events, ChatGPT has made its debut.

As the novelty of ChatGPT drives a wave of hype, are we repeating the same mistake by worrying about how it will change what we do in the classroom, when instead we should be thinking about the unintended consequences of AI hubris? No doubt, after a paroxysm of alarm about ChatGPT, college instructors will find ways to help students learn how to think critically and write clearly without assigning essays that a chat bot could write. They will find productive ways to use AI in teaching and research. It will likely become an everyday tool, running in the background, as generative large language models are integrated into search engines and office software. At some point, like spellcheck and autocomplete, we may forget it’s even there.

But there are larger concerns we must not ignore. Unlike Wikipedia, ChatGPT and similar products under development are proprietary, not the work of a nonprofit powered by volunteers and small gifts. ChatGPT ingests data from the web (including the entirety of Wikipedia), but unlike with Wikipedia’s history and talk tabs, there’s no visible information about how the chat bot arrives at its answers or how it “decides” what sources to draw on. In fact, it invents sources when mysteriously moved to do so.

Perhaps unsurprisingly, conservative critics have called ChatGPT biased and are developing their own chat bots to provide content that aligns with right-wing beliefs, echoing the 2006 launch of Conservapedia, a wiki-based answer to Wikipedia that never achieved much traction. But unlike Wikipedia, ChatGPT is geared toward customer satisfaction rather than facts. OpenAI is planning to build in options for customizing results to match one’s political and cultural values. By avoiding taking a position on truth, ChatGPT can be fine-tuned to give users the answers they prefer.

Apart from predictable culture war controversies, ChatGPT presents, even in its early days, a number of troubling possibilities we should not ignore. It may shake up the job market in many industries that have, until now, resisted automation. It could make it easier than ever to generate volumes of plausible disinformation, at a cost to democracy. These technical developments not only distract from the urgency of solving the climate crisis; they also require vast amounts of computing power, which results in an enormous carbon footprint. And all this is happening without any kind of oversight. Once again, Big Tech is moving fast, and even its leaders have no idea what they might break next.

Maybe it’s a good thing that the dominance of the college essay has been called into question. The traditional focus on presenting a polished final product that could just as easily be generated by a chat bot or purchased from a term-paper mill distracts from the dynamic process of thinking through complex questions and weighing possible answers. But rather than respond by finding clever ways to use ChatGPT in the classroom, as if it’s an inevitable part of our lives from now on, instructors should help students think critically and ethically about the new information infrastructures being built around us. We should consider how we can play a greater role in deciding what happens to our knowledge environments rather than leaving it up to a handful of big tech companies.

We missed the boat on Wikipedia, vilifying it as a threat to accuracy and expertise while missing the larger risks presented by the social web. We shouldn’t expend our energy on whether to use ChatGPT or ban it from the classroom when the rise of generative AI raises much more important questions for us all.

Barbara Fister is professor emerita at Gustavus Adolphus College and scholar in residence at Project Information Literacy. Alison J. Head is an information scientist and the founder and director of Project Information Literacy, a research institute that has interviewed more than 21,000 U.S. undergraduates for 12 studies.
