
Blurred image of pedestrians on a crosswalk with black and white horizontal stripes (Rawpixel)

Surveillance technology changes how people live in the world. Some computer science students want more guidance about technology’s societal impacts.

“Connectivity is my hammer,” said Robert Metcalfe, recipient of the Turing Award for inventing, standardizing and commercializing Ethernet. “Everything looks like a connectivity problem.” His reference to the First Law of the Hammer (“If you give a boy a hammer, everything looks like a nail,” from a 1963 computing book) suited his audience of computer scientists and mathematicians at this year’s Heidelberg Laureate Forum in Germany. Afterward, he fielded questions from peers and young researchers.

“You said that connectivity is your sledgehammer,” said Wiebke Hutiri, a computer science doctoral student from South Africa at Delft University of Technology in the Netherlands. Her addition of the prefix “sledge-” foreshadowed her concern. “Are there limits to how connected we ought to be? Can we be connected too much? Should certain entities not be connected?”

“It is my job to argue in favor of connectivity,” Metcalfe replied. At the forum, laureates heckle one another in good-natured ways, and one proceeded to do just that.

“Bob, think about power generation and distribution systems!” shouted an unnamed audience member—presumably another laureate, given the first-name basis—suggesting concern about cyberattacks on, for example, electrical power plants. “Should those be connected?”

“Of course,” Metcalfe said.

“Nuclear weapons?” Martin Hellman called out from the audience. Hellman received the Turing Award for co-inventing public-key cryptography—technology that supports secure online transactions.

“Yes,” Metcalfe said. “Aren’t they already?”

Hellman nodded and replied, “Unfortunately.”

“Huh, interesting,” Metcalfe said before pivoting the conversation. “I love the graph that shows the progress of poverty over time. It’s correlation, of course. But there was an unprecedented downturn following the arrival of the World Wide Web. I attribute that downturn in poverty to the connectivity that was provided to all of those economies around the world. That’s a lot of good that got done there.” With the conversation back on techno-optimist ground, Metcalfe moved on.

Technology can be an awesome force for good, as evidenced by the development of the COVID-19 vaccines at record speed. But today’s AI-powered tools and other emerging technologies can also affect humans in harmful ways—some fuel bias and discrimination, spread disinformation, disregard intellectual property rights, threaten privacy, power autonomous lethal weapons, or possibly pose existential threats to humanity.

Many colleges have embedded ethics into technology curricula to help students in dozens of tech-focused certificate and degree programs think about human impacts. But colleges are slow to change, and many students say such offerings are optional, separate from the “real” work of computing or simply nonexistent. As society reckons with the sometimes-dark side of technology, many in academe are asking how ethics is—or isn’t—integrated into college and university technology curricula.

“In most computer science departments, there’s zero mandatory classes in anything having to do with society—whether ethics or social impact,” said Yoshua Bengio, scientific director of the Mila-Quebec AI Institute and professor of computer science at the University of Montreal. Bengio, known as one of three “AI godfathers,” received the Turing Award—sometimes known as the “Nobel of computing”—for his pioneering work. “That’s a big problem, because computer scientists are becoming a major force in shaping the world and shaping society.”

Hazy Status Quo

Despite growing calls—from the public, policy makers, professional associations and others—for human-centered computing, the global status quo on university curricula is challenging to discern. Much of the research focuses on European and U.S. universities.

A 2022 survey offers some insight into the teaching of tech ethics in computer science and related programs at 61 universities across 23 European countries. Approximately two-thirds of the institutions reported teaching computer ethics, and one-third did not. Of those that taught tech ethics, most offered it as a stand-alone module.

But two-thirds of the survey’s respondents (67 percent) reported that their tech ethics modules were 10 hours or shorter. A minority (18 percent) taught more than 20 hours of tech ethics per semester. Respondents widely agreed that computing ethics was important; their top reasons for not teaching it were a lack of time and a lack of staff availability.

In the United States, the National Academies of Sciences, Engineering and Medicine called on universities in 2022 to “enhance teaching and learning in computer science and engineering, information science, and other computing-related fields to ensure that the next generation is better equipped to understand and address ethical issues and potential societal impacts of computing.” This echoes calls from other associations over decades, including a 1991 Association for Computing Machinery Code of Ethics and Professional Conduct that stated, “undergraduate programs should provide an environment in which students are exposed to the ethical and societal issues that are associated with the computing field.”

But a 2019 survey of machine learning courses at U.S. universities showed that a majority did not include ethics on the syllabus, and those that did typically offered such content as stand-alone electives. Computing’s impact on society should be taught alongside the subject matter, many argue, especially because technology is not neutral.

“There’s no naturally occurring river of technology that’s out there that we pick from,” said Afua Bruce, a leading public-interest technologist and adjunct professor at Carnegie Mellon University. “We actively create technology. We actively create the ways that we use technology and that we allow it to be used in our society.” Bruce has held senior science and technology positions at the White House, the Federal Bureau of Investigation and IBM.

Harvard University, responding to student demand for more instruction on the societal impacts of technology, has designed an Embedded EthiCS program that situates ethical challenges, and ways of reasoning about them, within the technical material. The program’s modules are available online and open access. Stanford University has launched a similar effort.

Even so, when ethics is taught, the content varies widely, according to a 2020 study that examined 115 university tech ethics courses in the United States.

‘Everybody Made a Joke’

Many college leaders are eager to demonstrate their commitment to producing the next generation of human-centered technology researchers and entrepreneurs. But colleges that implement one-and-done technology ethics training may get mixed reviews.

“We had four hours of ethics that everybody made a joke of,” Hutiri said of her institution’s efforts to prompt students to consider technology’s impact on humans. “It was considered easy. It was considered that there was no right or wrong.”

In an ideal world, designers would burden themselves with imagining the ways in which their inventions could go awry or be abused. A self-driving car that stops at stop signs on a test track, for example, may be deemed ready for the public. But once such a car is released to the streets, a stop sign with graffiti may confuse it, leading to an accident.

Hutiri, whose research focuses on trustworthy AI, is confident that computing students have ample opportunity to hone technical expertise at universities. But once they join the workforce, she questions whether they are equipped to understand possible dangers from systems they are asked to build.

“Either you have zero clue where system X is going to be used, so you have no idea whether it’s good or bad. You might ask the boss, and the boss might not know,” Hutiri said. “Or you actually know that it’s going to be used in an application that you don’t stand behind. What do you do?”

In the past decade, many universities have accelerated efforts to help students consider potential impacts of emerging technologies, according to Ruha Benjamin, professor of African American studies at Princeton University and author of Viral Justice: How We Grow the World We Want. Benjamin is currently teaching a course that’s cross-listed in engineering and African American studies—a first for her institution. Her tech-oriented students in the course speak of an ethos in their computing classes that sometimes emphasizes their potential to become billionaires, Benjamin said.

“Where is [an effort to teach about societal impacts] just lip service and cosmetic, and where is this being integrated into the curriculum?” Benjamin asked.

‘You Decide How Far to Push Models’

As the Heidelberg forum unfolded, many of the undergraduate and graduate computer science students and postdoctoral fellows spoke candidly about a perceived divide in the academic computing community. Students and faculty are either all in on considering societal impacts, many said, or they pay scant attention to such concerns.

“There was zero ethical training,” Chi-Ning Chou, who is Taiwanese, said of his undergraduate computer science program at National Taiwan University. “I started my Ph.D. at Harvard in 2017. Sometime in the middle, every course was required to have one lecture about ethics. But not every professor did that.” Chou is currently a Flatiron Research Fellow at the Simons Foundation.

“A good first step would be to make both sides appreciate the importance of this issue,” Chou said. “Because if people in computer science don’t care, how can you convince people outside?”

Jason Wu, an American who majored in computer science at the Georgia Institute of Technology and is pursuing a doctorate at Carnegie Mellon, agreed. Wu was part of a team at Apple that applied machine learning and computer vision techniques to make apps more accessible to users with disabilities. The team won a Best Paper award at the 2021 Association for Computing Machinery Computer-Human Interaction conference.

“In my department, a lot of people who specifically focus on bias and fairness advocate for being a lot more careful with this technology,” Wu said. “At the same time, there are people who are just interested in the technology itself, and they’re not thinking in the same way.”

As an undergraduate, Wu was required to complete a course that addressed well-known computing disasters, such as the Therac-25 disaster of the 1980s, in which software-controlled radiation therapy machines built by Atomic Energy of Canada Limited delivered massive overdoses of radiation to patients, several of them fatal. But not all students engaged deeply with the material, Wu said. “Mentally bypassing [such lessons] is one potential outcome.”

Nayan Saxena, from India, earned an undergraduate degree in computer science at the University of Toronto and is currently working in industry. He has used his deep learning and machine learning knowledge to help create the world’s first Nigerian Sign Language data set. Low-resource languages suffer from data shortages, though their speakers often want their communication documented, preserved and revitalized.

Saxena reported that each of his undergraduate AI courses had a “very short” lesson addressing issues of bias, explainability or fairness in AI, “either towards the beginning or end of the course.” But that made the content on human impacts feel separate from the technical lessons, Saxena suggested.

“Some think fairness and people who focus on that are completely useless, that it’s not a conversation to be had. Then there are others who care about this topic,” Saxena said. “Right now, it’s up to you to decide how far you are willing to push these models.”

Students from Canada, Estonia, Germany, Ghana, Zimbabwe and other countries spoke of self-directed opportunities to learn about computing’s social impacts. Many reported talking with their peers about such issues. But technology ethics, at some institutions, appears to be a specialization, rather than a fundamental part of the field.

Higher ed may also tolerate structural disincentives to helping students hone ethical skills in tandem with technical skills, some argue.

“We’re seeing tons and tons of money being poured into institutions by tech giants that can afford to build out tech education,” said Alec Stubbs, future of work postdoctoral fellow at the Applied Ethics Center at the University of Massachusetts at Boston. “These are very rarely people who are interested in those kinds of questions. I don’t think that [avoidance] is an accident, whether it’s conscious or unconscious.”

Big tech can provide universities with much-needed help in the face of persistent challenges, including educational costs, teaching shortages, student job readiness, student learning loss and diversity initiatives. But corporate and educational interests can also diverge.

“It’s not in Facebook’s [a Meta product] interest to think critically about whether or not they are essentially an addiction mechanism,” Stubbs said. “Even if there are ethical questions being asked, they can be very constrained questions.”

Individuals Make a Difference

Despite a general consensus among international computer science students at the forum that some universities are falling short on teaching ethical expertise alongside technical expertise, many spoke of individual professors who have made a difference.

Amreesh Phokeer, a Mauritian, earned a doctorate in computer science from the University of Cape Town in South Africa. Today, he works remotely from Mauritius for the Internet Society in Washington, D.C., on problems such as understanding the economic impact of an internet shutdown in India.

“My education was very technical up through my master’s,” Phokeer said. “But in my Ph.D., there was a shift … I started to understand the human side. There were no formal courses, but my lab and group leaders were interested in the human side.”

That insight, Phokeer reported, has helped in his work at the Internet Society.

“We collect quite a lot of [personally identifiable information],” Phokeer said. “We need to be very careful about how we treat this data.”

Iacovos Kolokasis, a Cypriot, is pursuing a Ph.D. in computer science at the University of Crete and the Foundation for Research and Technology–Hellas Institute of Computer Science. His program required a yearlong course that addressed the impact of technology on society.

“But this course was introduced in the past two or three years,” Kolokasis said. “Before, we didn’t have such a course … The course gave me a first step in thinking about ethics, because before I didn’t know anything.”

Silvia Sellán, from Spain, is working toward a Ph.D. in computer science, with a focus on computer graphics, at the University of Toronto. Data sets containing human images often report male-to-female ratios, though the scientific consensus on sex and gender has evolved beyond that binary in recent decades, Sellán wrote in a recent paper. When algorithms assume binary definitions of sex and gender, the result may be incorrect and may harm those who do not conform.

Sellán was glad for the chance to discuss ethical questions in her classes, including in those focused on theoretical math, which underlies much of modern AI.

“How can triangles have an ethical impact?” Sellán asked of a geometry course taught by her adviser, Alec Jacobson. “But every class has 10 minutes dedicated to the ethical impacts of papers we discuss.”

Mom Taught Me Ethics

Senior computer scientists, including many at the forum, came of age as professionals during an era in which technical questions were the main drivers of their work.

“I was never taught the right thing to do except by my mother,” said Raj Reddy, Turing Award recipient for pioneering the design and construction of large-scale artificial intelligence systems. Reddy is widely credited with demonstrating the practical importance and potential commercial impact of AI. “We need to evolve educational systems.”

But Reddy acknowledges that universities can be slow to change, including when curricular changes may be warranted.

“There is a committee, and they talk for the whole year,” Reddy said. “Artificial intelligence is moving so fast these days. Universities have to be able to act fast.”

Given the scale of the task and the urgency of the need, institutions may look to students, many of whom appear eager to help.

“There are ways of making these moral and ethical questions really lively, challenging and interesting,” Hutiri said. “But that wasn’t the experience I had.”

Author’s note: This article is part of a series focused on higher education’s role in helping emerging computer scientists and aspiring entrepreneurs consider the impact of technology on humans. For the first article in the series, click here. In future articles, I’ll report on some institutions’ successful efforts in this regard. If you have examples to contribute, please email me.
