“I am like the midwife in that I cannot myself give birth to wisdom … The many admirable truths which [my students] bring to birth have been discovered by themselves from within.”
Thus Socrates described himself. To this day, he is known not for his own ideas but for his eponymous method of eliciting them from others. This “elenchus”—cross-examining—involved asking difficult, sometimes uncomfortable questions, subjecting answers to logical objections and identifying his “patients’” underlying assumptions. These all remain mainstays of philosophical education. Socrates’s image as the paradigmatic philosopher has yet to fade.
Few can use artificial intelligence without wondering whether it is conscious. Fewer still fail to question the many ethical issues it raises. These questions about mind and morality are reasons enough to justify widespread emphasis on philosophy in education. But philosophy has an additional educational benefit: It can teach students how to use AI effectively.
Whether we wish to write an email, build a website or analyze a data set, our futures will involve less creating a product ourselves and more eliciting something of quality from an AI. We’ll have to ask carefully worded questions, follow them up with logical objections and identify the assumptions these programs make. That is to say, in the age of AI, our role is more akin to that of the Socratic midwife than of the self-reliant producer. Students who will inherit this age would thus be wise to learn the elenchus.
But how specifically can philosophy help? And why should we think interacting with an AI is anything like interacting with the youth of ancient Athens or the undergraduates of contemporary America?
Compare OpenAI’s prompting recommendations to the writing advice of Jim Pryor, a philosophy professor at the University of North Carolina at Chapel Hill whose website is a well-known source of advice for students in the field. OpenAI’s recommendations for writing prompts include: “Reduce ‘fluffy’ and imprecise descriptions”; “Be specific, descriptive and as detailed as possible about the desired context, outcome, length, format, style, etc.”; “Split complex tasks into simpler subtasks”; and “Use few-shot” prompting, in which the user provides the AI with judiciously chosen examples to illustrate the kind of problem they wish it to solve.
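For the curious, here is roughly what that last recommendation looks like in practice. The sketch below uses OpenAI’s Python library; the model name and the example pairs are my own illustrative assumptions, not OpenAI’s, and the pattern of the examples is deliberately Socratic: vague questions rewritten as precise ones.

```python
# A minimal sketch of "few-shot" prompting, assuming the OpenAI Python
# library and an OPENAI_API_KEY in the environment. The model name and
# example pairs below are illustrative, not prescribed.
from openai import OpenAI

client = OpenAI()

# Judiciously chosen examples show the model the transformation we want:
# a vague question rewritten as a sharper, more answerable one.
messages = [
    {"role": "system",
     "content": "Rewrite each vague question as a precise, answerable one."},
    {"role": "user", "content": "Is AI good?"},
    {"role": "assistant",
     "content": "What measurable benefits and harms has AI produced in higher education?"},
    {"role": "user", "content": "Does morality depend on God?"},
    {"role": "assistant",
     "content": "Are right actions right because God commands them, "
                "or does God command them because they are right?"},
    # The real query comes last; the model infers the pattern from the examples.
    {"role": "user", "content": "Is philosophy useful?"},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```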
Pryor gives corresponding advice about writing philosophy: “Be concise but explain yourself fully,” “Make the structure of your paper obvious,” “Use plenty of examples and definitions.” The central virtue of philosophical writing is clarity of reasoning—a virtue of prompt engineering as well. Philosophy seems well suited, therefore, to help students provide effective inputs to generative AI.
But why would philosophy have so much in common with prompt engineering? Philosophy is many things, but it is often a search for the relevant information. Left to our own devices, we tend to ask questions that aren’t clearly relevant, are far from the heart of the issue or are too general to be effective. Philosophy teaches us how to ask better questions. Many wonder, “Does morality depend on God’s will?” To the yea-sayers, Socrates would have us ask, “Are morally right actions right because God commands them, or does God command them because they are right?” Questions like these—deep, unobvious and ones on which the original question turns—are pervasive in philosophy. And questions sharing these features are what we will want to ask of large language models to produce better outputs.
Even so, many fields involve asking the right questions to sort the relevant from the irrelevant. A programmer must write code that is as clean as possible while still working. Scientific experiments are designed precisely to figure out which factors are relevant to an outcome. What makes philosophy different?
For one, philosophy is, in essence, a kind of dialogue. A programmer learns how to give the relevant information to a (non-sentient-seeming) computer. A scientist learns how to effectively elicit answers from nature. But philosophers subject people to questioning. And regardless of whether AI is conscious, our interactions with it are more akin to our interactions with people than with a terminal or with nature. Hence the similarity between prompting recommendations and philosophical communication.
But there’s an even more fundamental reason the dialogic nature of philosophy is uniquely suited to teach AI skills, one that has more to do with our evaluation of AI’s outputs than with crafting effective inputs. Students new to philosophy are often frustrated by the fact that we never quite reach the bottom of an issue but instead continuously seek more precise, deeper answers. This iterative process is essential to leveraging the power of AI.
ChatGPT can produce a cover letter. But even with a well-crafted prompt, the first result is never quite what one wants. To use AI effectively, one must, like Socrates, find the flaws in its outputs, subject them to critical scrutiny and ask it to revise accordingly—a process that can be repeated indefinitely. This is exactly the nature of philosophy: an open-ended dialogue in which, through objection and refinement, one arrives at a view of the world that is never complete but better than what one started with. (Granted, you should decide your cover letter is finished at some point!)
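As a concrete sketch of that iterative loop, consider the snippet below, again using OpenAI’s Python library. The model name, the prompts and the fixed three rounds are my own assumptions for illustration; in practice, you decide when the letter is finished.

```python
# A minimal sketch of the draft-object-revise loop described above,
# assuming the OpenAI Python library. Model name, prompts and the fixed
# round count are illustrative assumptions.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # assumed model; substitute whichever you use

# Start the dialogue with the initial request.
messages = [{"role": "user",
             "content": "Draft a one-page cover letter for a junior data analyst role."}]

text = ""
for _ in range(3):  # three rounds here; in practice, you decide when to stop
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    text = reply.choices[0].message.content
    messages.append({"role": "assistant", "content": text})
    # Play Socrates: demand the draft's flaws and hidden assumptions, then a revision.
    messages.append({"role": "user",
                     "content": "Identify the three weakest claims or phrasings in that "
                                "draft, state the assumption behind each, then rewrite "
                                "the letter to fix them."})

print(text)  # the most recent revision
```

Appending each draft and each objection to the running conversation keeps the model’s revisions anchored to its earlier output, much as the elenchus builds on an interlocutor’s previous answers.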
This at last brings us to one of the most important skills philosophical study imparts: treating the mere appearance of expertise with suspicion. Artificial intelligence is no god. It hallucinates. It is a poor judge of quality. It is, in Harry Frankfurt’s sense of the term, a bullshitter: indifferent to whether what it says is true. To use it effectively, we must treat its outputs critically. That skeptical attitude is a great fruit of philosophy.
“Whenever someone tells us that he has met a person who knows all the crafts as well as all the other things that anyone else knows and that his knowledge of any subject is more exact than any of theirs is, we must assume that we’re talking to a simple-minded fellow who has apparently encountered some sort of magician or imitator and been deceived into thinking him omniscient and that the reason he has been deceived is that he himself can’t distinguish between knowledge, ignorance, and imitation.”
Socrates said that, too.