So, let’s start with this: What is computing ethics?
It’s a form of applied ethics that examines how technologists—or the people who are creating current and future technologies—evaluate the impact of their work on society from multiple perspectives. My computing ethics class provides an interdisciplinary review of all of the different legal, social, and individual impacts of technology. And it does that from the perspective of the creators. It’s about teaching students to think about what kind of decisions they are making and what kind of questions they are asking about those decisions as the person actually creating, deploying, and testing technology.
How do you teach those skills?
In the course, we review different ethical frameworks, professional codes of ethics, and case studies of how technologies are continually reshaping our understanding of concepts such as privacy, security, power, knowledge, and reality. We also discuss the ethical concerns these raise for everyone. Sometimes we look at real-world narratives—scholarly articles, podcasts, documentaries, etc.—and other times fictional narratives—films, series, novels, etc.—that illustrate different perspectives on ethical concerns about current and near-future technologies.
And do the students take a deep dive on a particular ethical issue?
Yes, students also engage in a semester-long research project about a specific technology they have chosen to investigate. They explore the ethical issues around that technology, based on the ethical frameworks and codes of ethics, as well as potential solutions or tools that can address them.
Why is it important for undergraduates to learn this?
Many computer science (CS) students are going right into product manager positions or software engineering teams. They’re placed in critical professional roles and are responsible for making decisions about many aspects of technology development. And so we want them to develop what we call an “ethical sensibility.”
What exactly is that?
Ethical sensibility is the idea that technology isn’t neutral: there are decisions and values embedded in every single algorithm, application, program, and system, whether created by an individual or a team. It provides a framework for thinking through design decisions and the potential impacts of those decisions, both positive and negative.
What are the dangers of not having ethical sensibility?
We’re seeing numerous examples of individuals and corporations in tech failing to evaluate the potential harm their work could cause to individuals and society.
The rise of deepfake video technology is a good example. Researchers released an open source software technique for generating videos of people doing or saying things they never actually did.
Which can cause all sorts of problems.
Exactly. The CS research community said, “That’s interesting, except we can see all sorts of problems with that at a social, political, and personal level.” They also asked: “What have you done to make sure that there’s a signal so that we know it’s a deepfake?” The researchers who developed it had no strategy or answers for mitigating the harm these videos could inflict. They hadn’t even thought of it.
Interesting. Can you tell us a bit about the history of computing ethics?
There have been different phases of computing ethics, starting in the 1940s and ’50s when computers were massive mainframes and were used for calculating things like rocket trajectories and other complex computing tasks. People were concerned at a societal level that these giant “brains” were going to replace humans—and you can see this concern circling back around now with the increased use of AI being combined with and integrated into current technologies.
We definitely share those concerns today. What sort of concerns accompanied later stages of computing?
In the 1950s and ’60s, there were increased privacy concerns as data started to be collected and stored in computers by government agencies. In the ’80s, we had the advent of personal computers and concerns about cybersecurity and hacking into sensitive systems. In the ’90s, with the rise of the internet, additional ethical concerns developed about “who was in charge of the internet” and “who was responsible for the information shared on the internet.” Many of these concerns are still obviously relevant. However, with the increased use of AI-driven applications, the scale, scope, and speed of these ethical concerns are making it clear there’s a gap in our ability to effectively manage and address them.
Recently, a growing number of computer scientists have been leaving big tech companies as they voice ethical concerns. And lately, universities and colleges have been adding computing ethics classes to their CS curricula. But neither computer science nor computing ethics is a new field. Why do you think the two are only now coming together?
Some of it has been a persistent lack of diversity in computer science, whether in academia or industry. Whenever you have the same type of people creating technologies, often with the same training, mindset, and experiences, the result is technologies that reflect only that narrow perspective. We also have had industry tech leaders and some scientists in academia who have valued innovation at all costs to win the race of being first to market (or to publication), or who have worked from the model of releasing technologies “into the wild” and letting users find the problems to fix in the next update.
That doesn’t sound like a way to anticipate or address these problems.
When these types of approaches come together with other influences, such as venture capital firms looking for quick returns on investments, it’s not going to produce a lot of change that would ensure concerns like safety, privacy, equity, or security are adequately addressed.
Was this because there was a lack of understanding of computing ethics?
While there’s a large body of computing ethics work and it’s an active research area in the discipline, it’s rarely discussed in the context of the core technical CS curriculum. Often, it’s a stand-alone course that might be offered as an elective rather than a required course. So the goal of the Computing Ethics Narratives (CEN) project is to make it easier for faculty to discuss and teach computing ethics in their core courses, to ensure all CS students have training in how to evaluate their work in relation to the responsibilities of the profession and to society as a whole.
How has the recent emphasis on computing ethics impacted the industry?
You can see recently that tech companies are setting up ethics boards or oversight boards and hiring leading technology ethicists. Those positions didn’t exist maybe 10 years ago. Unfortunately, you can also see the conflict within these corporations when the people they hire are fired for raising concerns about the ethics related to some of the research and practices of these companies. That’s going to have a chilling effect on tech professionals raising ethical issues. And there are going to be some growing pains in this process of finding the most effective ways to hold tech companies accountable from the inside.
What do you expect your CS students to do in that environment?
Within 10 years of leaving Colby, they’re going to be managers and be in charge of some important decision-making around ethical issues raised by their innovation and development practices. If they don’t start developing this ethical sensibility, then they won’t be ready to make those really important decisions in the future when they are in a position to address these ethical issues. They should be prepared to critically examine their role in technology creation and its impact as well as listen to and work with other professionals and stakeholders to reduce harm that might be caused by their contributions to technological innovation.