[Interview] ‘AI is not really intelligent’: Ted Chiang on science and fiction in the LLM boom

Posted on: 2024-05-27 16:59 KST Modified on: 2024-05-27 16:59 KST
Chiang will deliver a keynote address at the 3rd Hankyoreh Human & Digital Forum on June 12
Science fiction writer Ted Chiang. (courtesy of Chiang, Photo credit: Alan Berner)
Ahead of his keynote speech at the Hankyoreh Human & Digital Forum next month, the Hankyoreh arranged an interview between science fiction writer Ted Chiang and Kim Beom-jun, a Sungkyunkwan University professor of physics who will be joining Chiang for a talk at the forum.
 
Chiang is known as one of the era’s preeminent American science fiction writers. He has won multiple Nebula and Hugo awards, considered the most prestigious prizes in the sci-fi world. His novella “Story of Your Life” drew widespread attention after it was adapted into the 2016 film “Arrival” by director Denis Villeneuve.
 
More recently, he has attracted notice for essays offering profound insights on artificial intelligence. A February 2023 essay in the New Yorker titled “ChatGPT Is a Blurry JPEG of the Web” was seen as elevating the level of the AI debate.
 
The third Hankyoreh Human & Digital Forum will take place on June 12 with a focus on the theme “The AI Desiring to Go Beyond Human: Can it Capture Even Human Values?” Kim and Chiang’s interview took place over email.
 
Kim Beom-jun: Could you introduce yourself to the Hankyoreh’s readers?
 
Ted Chiang: I’m a science fiction writer. I’ve published two collections of short stories, titled “Stories of Your Life and Others” and “Exhalation.”
 
Kim: You studied science and computer science in college. What is the value of studying the sciences, not only for science fiction writers like yourself but for all of us?
 
Chiang: It’s important to know how the universe works, and that is what science teaches us. The universe often doesn’t work the way we want it to; it doesn’t accommodate our preferences. For example, in the United States, many people believe that the month you were born determines your personality; in Korea, many people believe that your blood type determines your personality. Neither of these beliefs is supported by any science whatsoever. It’s comforting to have a system of classification into which you can fit people, but that is not how the universe actually works.
 
Kim: Reading your works, I’ve often felt as though you were performing thought experiments, jumping off from “what if” questions. Can you comment on the differences between thought experiments in science and in science fiction?
 
Chiang: Thought experiments in science fiction don’t need to be as rigorous as the ones in science. Reading fiction presumes a certain suspension of disbelief, which is entirely appropriate for fiction but not for science. Scientists are able to use thought experiments to make real scientific progress; Einstein developed his theory of relativity primarily by sitting in a room and thinking rather than looking through a telescope. Science fiction writers use thought experiments to dramatize philosophical questions like, “Is immortality still desirable if it means that having children is forbidden?”
 
Kim: You are now well-known as a critic of AI. How does your career as a science fiction writer relate to raising your voice on AI issues?
 
Chiang: It’s a little odd to be a science fiction writer telling technologists that they need to rein in their imaginations. I think it’s important to make a clear distinction between scenarios that would make for a good story and scenarios that might actually occur in the world we live in. A lot of people working in AI talk about the idea of “the singularity,” when computers become more intelligent than human beings and make themselves even smarter. This term was coined by a science fiction writer, Vernor Vinge. I think it’s a great idea for a story, and I’m happy to read novels built around the concept, but it’s not something we need to worry about in the real world.
 
Kim: I am very much impressed by your brilliant and catchy metaphors, like AI as a “blurry JPEG of the web” and “the new McKinsey,” from your pieces for the New Yorker last year. Why are such metaphors important? What is the power of metaphors?
 
Chiang: I think comparing AI to McKinsey is actually not that great a metaphor, because too many people haven’t heard of the consulting firm McKinsey. What I more commonly say is that AI is a knife-sharpener for the blade of capitalism, which is a more understandable way of expressing the same idea. Metaphors are useful when trying to make sense of unfamiliar concepts. In English we have the expression “get a handle on something,” which means “begin to understand.” That’s a metaphor about the utility of metaphors: they provide a handle you can grasp. However, we should always remember that metaphors are not literally true; they are just a way to get started.
 
Kim: Recursive development, not at the individual level but at the social level, has been one of the main driving forces behind our technological achievements. Can AI develop through social learning, as we do? Could a society of AIs produce AI technology better than themselves?
 
Chiang: The interactions between AI programs have nothing in common with the interactions between people. They are tools, and any improvements that result from using different tools in combination are purely a result of human ingenuity. One day it might be possible to build AI programs that are just like people, but what would be the point? We already have billions of people. If we want the benefits that arise from people working in collaboration, we know how to get them. The goal of developing AI should be to create tools that let us do things we can’t do on our own.
 
Kim: You emphasize that we must tame capitalism in the coming AI era. For the past 40 years, and even now, we’ve failed to do so, and we are facing deepening economic inequality. Tell us more about how AI Luddites can make capitalism less harmful and more beneficial for humanity.
 
Chiang: There are no easy solutions to the problem of capitalism. Stronger unions would help, and so would companies that are owned by their workers rather than by investors. Decisions should be made by the people actually doing the work instead of by executives who only see the company through a balance sheet. Being a Luddite does not mean being opposed to technology; it means caring more about economic justice for workers than about shareholder profit. If we have policies that favor economic justice, then technology can assist with that, but if we have policies that favor shareholder profit, then that’s what technology will promote.
 
Kim: Can you give us a taste of your talk at next month’s Hankyoreh Human & Digital Forum?
 
Chiang: I’m going to argue that AI is not really intelligent, and that large language models are not actually using language. I’m also going to argue that generative AI is not a tool for making art.
 
For more information on the Hankyoreh Human & Digital Forum, visit https://enhdf2024.imweb.me/ 
 
