You will be watching a TED talk by computer scientist Yejin Choi on the topic of common sense and artificial intelligence. As artificial intelligence systems like chatbots continue to advance, there is much discussion about whether they possess the kind of basic common sense that humans acquire simply by living in the world.
In this 14-minute video, Dr. Choi shares thought-provoking examples that expose gaps in today's AI common sense. She argues that while neural network models are impressive in some ways, simply scaling them up cannot impart a true understanding of everyday physical and social concepts. Her team is exploring alternative methods, such as knowledge graphs, to teach common sense to AI systems.
Watch: Why AI Is Incredibly Smart and Shockingly Stupid | Yejin Choi | TED
As you watch, consider the following questions:
- What examples stood out to you of AI systems lacking common sense? Did any make you laugh?
- Do you agree that today’s AI is missing this human-like common sense? Why or why not?
- How convinced are you by Dr. Choi’s arguments against simply “scaling up” neural networks indefinitely? Can you think of counterarguments?
Stop and Reflect:
Reflect on the concept of “common sense” that Yejin Choi discusses in her talk. Take 10-15 minutes to consider some or all of these questions. Think about how Choi’s ideas connect with your own experience with AI and computers.
- Think of times in your own life when you used basic real-world understanding, or “common sense,” to solve problems. For example, knowing not to touch a hot stove, or knowing when to bring an umbrella.
- Do you think AI systems today lack this kind of everyday common sense? Why or why not? Can you think of examples of AI systems making silly mistakes that show a lack of common sense?
- Choi says that just giving AI systems more data and computing power is not enough for true common sense. Do you find this argument convincing? Could other approaches, such as having people build knowledge graphs or diagrams to teach AI systems, be helpful?
- Should people be closely involved in developing and monitoring AI systems to make them safer and fairer, or can this be left entirely to algorithms? What do you see as the pros and cons?