‘No specific purpose’: Experts on how Big Tech attempts to create demand for AI

Posted on : 2024-07-05 12:49 KST Modified on : 2024-07-05 12:52 KST
Ted Chiang, Gary Marcus, Choi Yejin and Abeba Birhane took part in a roundtable on the topic of AI at the Hankyoreh Human & Digital Forum on June 12
Ted Chiang speaks in a roundtable at the 2024 Hankyoreh Human & Digital Forum with moderator Jeon Chi-hyung and fellow panelists Choi Yejin, Abeba Birhane and Gary Marcus. The forum took place under the theme “The AI Desiring to Go Beyond Human: Can it Capture Even Human Values?” at the KCCI Building in Seoul, South Korea, on June 12, 2024. (Kim Young-won/Hankyoreh)

What is AI technology at its essence? Will AI replace humans in certain professions? What standards are needed to assess the quality of AI?  

These were all questions touched on during a roundtable discussion of experts held at the Hankyoreh Human & Digital Forum on June 12, which took place under the theme “The AI Desiring to Go Beyond Human: Can it Capture Even Human Values?” Moderated by KAIST professor Jeon Chi-hyung, world-class thinkers Ted Chiang, Choi Yejin, Gary Marcus and Abeba Birhane dove into the issues at the core of artificial intelligence for a lively 90-minute debate. Given their diverse backgrounds in computer science, cognitive science and science fiction, the speakers demonstrated subtle differences in viewpoint, but provided insight into how to allay anxieties about AI.

The following has been edited for length and clarity. 

Jeon Chi-hyung: Let me pose a question about what AI is. There’s been a suggestion by some researchers that we regard chatbots based on large language models as “stochastic parrots” because they say things that they don’t really understand. 

Ted Chiang: I wrote an essay last year in which I described ChatGPT as a “blurry JPEG of the web.” What I meant by that is that it is a highly compressed version of the information found on the internet, which, in itself, is not the world. The internet is just a bunch of web pages, and ChatGPT has taken a lot of that information and compressed it. It is a lossy compression inasmuch as it does not typically reproduce the text on the internet with 100 percent accuracy. It generally produces some sort of rephrasing of the text on the internet in a way that sometimes conveys the same general impression, in the same way that a blurry JPEG bears a resemblance to a high-resolution original. But in the same way that a blurry JPEG often has what are known as “artifacts,” you’ll have glitches which sometimes can be very noticeable.

That is also true of the output of ChatGPT, but whereas JPEGs are visibly blurry, in the text output of ChatGPT, the blurriness is happening on sort of a semantic level. So the blurriness is what is sometimes referred to as “hallucinations,” “confabulations” or just outright “bullshit.” What it gives you is superficially sharp and clear because it produces grammatically correct sentences. But it is blurry at a different level, one that is not immediately perceptible to the viewer.

But I would also like to expand on what Gary was saying about ChatGPT being autocomplete on steroids. That is a very useful way to think about it, because whenever you use autocomplete on your phone, you will notice that it is never at a loss for a suggestion. It will always suggest another word. It will never say, “I’ve got nothing. I don’t know what comes next.” It will always offer you a suggestion — sometimes it is a good suggestion, sometimes it is a terrible one, but it never hesitates, and that is also absolutely the case with ChatGPT. 
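
For readers unfamiliar with the mechanism, here is a toy Python sketch of that “never at a loss” behavior. It is a hypothetical illustration, not how ChatGPT or any production autocomplete is built: it counts which word follows which in a tiny made-up corpus and always proposes a next word, even for a word it has never seen.

import random
from collections import defaultdict

# Toy bigram "autocomplete" (hypothetical illustration only): count which
# word follows which in a tiny corpus, then always propose a next word.
corpus = "the cat sat on the mat the dog sat on the rug".split()

follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def suggest(word):
    # Never answers "I've got nothing": even for an unseen word it guesses.
    candidates = follows.get(word)
    return random.choice(candidates if candidates else corpus)

print(suggest("the"))     # plausible: "cat", "mat", "dog" or "rug"
print(suggest("banana"))  # unseen word: it still offers something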

Choi Yejin: I found that if you ask someone with a high bar for creativity, none of this is good enough. But, just objectively speaking, I did notice a lot of people being blown away, and that, to me, is a bit of a concern about what sort of expectations people have around this technology. My frustration, as a scientist, is that it’s really difficult to nail down what exactly the source of the generalization was.

Jeon Chi-hyung: I wonder how we as a society should assess the quality of AI technology. When I ask this question, I’m thinking of the example of automobiles. We have assessed the quality of automobiles with various measures, including power, speed, design, comfort, safety, accessibility, fuel efficiency, carbon emissions and many others. These criteria have changed over time as our values and our environment have changed. How might it differ for AI?

Ted Chiang: To say a little more about your comparison to automobiles, one of the things that played a big role was the fact that General Motors, one of the major manufacturers in Detroit, bought up public transit companies all across the United States. They bought up streetcar companies and they shut them down. And they did this because it was a way of creating more demand for automobiles. So that played a big role in two things: the lack of public transportation in the United States and the creation of private automobile ownership as a norm in the United States. 
I think that process is at play all the time in every industry. It is not so much what consumers are looking for, it is what strategies the industries that are trying to sell these as products can devise in order to increase consumption. Those same forces are at work in the tech industry. Much of what shapes the direction of technology is not based on any criteria that consumers have. We as individuals could say, “What I want from AI is this.” But even if I convince everyone who hears this to go with my criteria, that’s not going to make any difference, because it’s like six people in Silicon Valley whose opinions make a difference. 

Abeba Birhane: There is no better example than current generative AI that showcases that these big corporations will produce AI and will put it out there, and they will force it onto the population until it becomes the norm. There is no specific purpose, there is no specific use case for generative AI. It’s just been a technology over the past couple of years that’s just floating around looking for purpose, looking for some kind of uptake.

Gary Marcus: One of the problems right now that I think faces the field is if you don’t know what data the system’s trained on, it’s very difficult to make a good benchmark. So it’s one more reason why, if we want to advance the field, we actually need transparency about what data the systems are trained on. The upper bound [of AI capabilities] is actually probably well beyond humans, but we use humans [as the reference point] right now because there are a bunch of things that humans currently do that machines don’t do very well, like reasoning in unknown circumstances and so forth. The big problem we have with AI right now is that it’s not reliable. Benchmarks aren’t really getting better at that.

Choi Yejin: I want to highlight one thing that is really important, which is that in the coming years, AI evaluation will only become harder. I find two challenges: For one, society, even scientists, have already drawn their conclusions, such that even when I present a mistake, they think that it’s no big deal. “The next version, the bigger one, will not have any problem with that.” So human bias is a huge problem even among scientists. That’s challenge No. 1.
And then there’s challenge No. 2, which is that, unlike automobiles, unlike all the other tools that we have ever created, we don’t have full control over what it’s generating. When it does fail, it fails in the most unexpected ways. It makes a weird common-sense error after doing so well on something seemingly harder by human standards. I think evaluation will only become harder as these models advance. One thing I can tell you for sure is that the next AI releases will be even more impressive, and it will become even harder to evaluate them. And I think it’s really, really important to invest deeply in benchmarking and testing these models rigorously before deploying them.

Attendees of the 2024 Hankyoreh Human & Digital Forum listen to a discussion between Abeba Birhane, a professor at Trinity College Dublin, Ireland, and Cheon Hyun-deuk, a professor at Seoul National University, on the topic of “How Does AI Development Led by Big Tech Reproduce Bias and Inequality?” (Shin So-young/Hankyoreh)

Jeon Chi-hyung: It is very common now to point out that the data used for training AI models are biased, which leads AI chatbots and other services to produce problematic results, whether they are factually wrong or unacceptable opinions. Do you think we can ever solve this problem of bias, whether it is in data or the world? How should we deal with this idea or problem of bias?

Abeba Birhane: From a philosophical point of view, we are talking about modeling the complex social world — human behavior, human action. And there is never an unbiased data set and there will never be an unbiased data set, because these are social concepts that don’t have a single objective representation. And a lot of the audit work that we are doing is actually both diagnosing this problem, because without a full understanding, without a diagnosis of the problem, solutions are impossible, and tackling it from multiple perspectives, both in terms of changing societal structures and in terms of working on numerous reliable methods to investigate and mitigate the hideous issues that are staring at you when you look at these data sets. So we can work on this from multiple perspectives.

Choi Yejin: Can this problem ever be fixed completely? I don’t think there’s such a thing, because human society is complex. Some of these biases are very nuanced, but they are real even if they are nuanced. And because of that, not everyone has the same level of understanding, which makes it even harder to address.

Jeon Chi-hyung: Is there a reason that we should be particularly worried about Big Tech companies in AI?

Abeba Birhane: Various people have made the argument that Big Tech is no different from big oil, big automotive companies, and even big tobacco. People have explicitly mapped out the similar tactics that Big Tech is following that are copied from big tobacco in terms of influencing academic research and in terms of influencing policy. But one of the major differences is that there is this public support or public framing that presents AI as societal advancement — as progress that makes it seem like it’s in everybody’s interest to advance AI. So we have a much harder problem pushing back against Big Tech as opposed to the other big corporations that have played influential roles in the past.

Gary Marcus: My small footnote to that fantastic answer is that they’re not even actually making money. It’s all based on the promise that they will make money. And they’ve convinced everybody with this current technology that doesn’t really work well that they’re going to radically change the world. And so they’re getting power immediately. Altman went on a world tour and met all of the world leaders. He’s never made a nickel. Maybe let’s wait; let’s see if he makes a nickel. Why are we giving him all of this power? It’s because of this fantasy. It’s not something that is actually based in reality — at least not so far.

Ted Chiang: In the past, the US government had played a much bigger role in breaking up monopolies. There were times in history when the US government was very invested in breaking up giant monopolies. Taxation rates were very high, both corporate taxes and individual taxes on high-earning individuals. But since the 1980s with Reagan and Thatcher, we’ve moved into a sort of different idea of what government should do. So this combination of circumstances, I think, is why giant tech companies face less opposition than giant oil companies did in the past, or giant steel companies, or any sort of monopolistic enterprise. We are starting to see a little more pushback. There are now more lawsuits by the government to try and break up companies or at least restrain them in some fashion. So hopefully that is just the beginning of an ongoing trend.

Jeon Chi-hyung: What are the important and meaningful tasks in human society for which AI is really the right tool?

Gary Marcus: It’s in its element for fraud, for creating disinformation and so forth. The real place where AI may help most is in scientific discovery — materials science. Generative AI is good for brainstorming.

Abeba Birhane: Most of the time its good uses are, again, within biological systems, physical systems. DeepMind has shown promise in protein folding and weather-related forecasting, or even mapping and building predictive models of the natural world — rainforests and so on. But the more we build toward the social, using it within the social space — in hiring, in law enforcement, in fraud detection, in welfare benefits — the less accurate and the less helpful it becomes. In some places, we should entirely avoid AI.

Choi Yejin: In India, there are a lot of people who just cannot afford lawyers, so even if something unjustly happens to them, there’s nothing they can do. But if the cost of these lawyers can be lowered with some AI assistance in the pipeline, even if it’s imperfect, it can still be very helpful for those people who have zero resources. I also met with someone who wanted to do a startup in Africa. In his town, people don’t see a doctor because it’s too far and they cannot afford it, but he’s working on an AI phone app that can help with some information, as opposed to zero access.

By Han Gui-young, research fellow at Human & Digital Research Lab 

Please direct questions or comments to english@hani.co.kr.
