In-class activities considering what consciousness is, why it matters whether an AI is conscious, and how to measure consciousness.
Course
This activity was implemented in a seminar course on computing ethics and professional communication for juniors in computer science.
Time
45 to 90 minutes
Prerequisites
A course in Machine Learning or Artificial Intelligence would help but is not required. In fact, this activity could be tweaked slightly to work for non-CS majors.
Description
These activities bring insights from philosophy of mind to bear on a very timely question for CS students. The arc of the activities is to work through three questions in order:
What is consciousness? What would it look like for an AI to be conscious?
Why does it matter whether an AI is conscious? How would that change things?
How can we come to know (or at least believe) that an AI is or isn't conscious?
Curiosity
Demonstrate constant curiosity about our changing world
Explore a contrarian view of accepted solutions
Public perception often holds that AI will become conscious; these activities challenge that view by being more specific about what "conscious" means and by introducing reasoned skepticism about our ability to measure it.
Connections
Integrate information from many sources to gain insight
The reading by Nagel comes from philosophy of mind, a field that these computer science majors do not typically engage with.
Assess and manage risk
Students will brainstorm specific examples of what could happen if AI becomes conscious.
Creating Value
Identify unexpected opportunities to create extraordinary value
Students are equipped to help friends and family evaluate whether future AI technologies are conscious, which has ethical implications for how such technologies are used (and treated).