An A.I. Pioneer on What We Should Really Fear

Perhaps more than any other technology, artificial intelligence inspires both our highest ambitions and our deepest fears. At its best, it is the promise of Promethean machines that can perform with a speed and skill we can only dream of; at its worst, it is a nightmare of human displacement and obsolescence. But despite recent AI breakthroughs in fields once thought to be uniquely human (the writing of the GPT-3 language model and the image generation of the DALL-E 2 system have attracted enormous, often breathless attention), both our grandest hopes and our gravest concerns may be overblown. At least that is the view of the computer scientist Yejin Choi, a 2022 recipient of the prestigious MacArthur "genius" grant, who has been doing groundbreaking research on developing common sense and ethical reasoning in AI. "There is a bit of hype around AI's potential, as well as around AI fear," said Choi, 45, who thinks neither the promise nor the threat should be taken at face value. For her, the research feels like an adventure: "You are exploring this unknown land. You see something unexpected, and then you feel like, I want to find out what else is there!"


What is the biggest misconception that people still have about AI? They make hasty generalizations: "Oh, GPT-3 can write this great blog post. Maybe GPT-4 will be the editor of The New York Times." [Laughs.] I don't think it could replace anyone there, because it has no real understanding of the political context and therefore can't really write something relevant for readers. Then there are the concerns about AI sentience. There are always people who believe in the irrational. People believe in tarot cards. People believe in conspiracy theories. So of course there are people who believe AI is sentient.



I know this is probably an annoyingly philosophical question, but I'll ask it anyway: Can humans ever develop a truly sentient AI? I may change my mind, but for now I have my doubts. I can see why some people might have that impression, but when you work up close with AI, you see all its limitations. That's the problem: from a distance it looks like, oh my God! Up close, I see all the flaws. Whenever there are a lot of patterns, a lot of data, AI is very good at that, at specific things like Go or chess. But people tend to believe that if AI can do something smart like translation or chess, then it must be really good at the easy things too. The truth is that what is easy for machines can be difficult for humans, and vice versa. You would be amazed at how AI struggles with basic common sense. It's crazy.



Can you explain what "common sense" means in the context of teaching it to AI? One way to describe it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. For a long time we thought that was all that existed in the physical world, just that. It turns out that's only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, which are invisible and cannot be measured directly. We know they exist, because without them, ordinary matter doesn't make sense. So we know they're there, and we know there's a lot of them. We are coming to the same realization about common sense. It is the unspoken knowledge that you and I share, so obvious that we usually don't talk about it. For example, how many eyes does a horse have? Two. We never talk about this, but everyone knows it. We don't know what fraction of the knowledge you and I have has never been spoken aloud, but my guess is that it's a lot. Let me give you another example: You and I know that birds can fly, and we know that penguins generally cannot. So AI researchers thought, we can just code this up: birds fly, except penguins. But in fact, exceptions are the challenge for common-sense rules. Newborn birds cannot fly, oil-covered birds cannot fly, injured birds cannot fly, caged birds cannot fly. The point is that exceptions are not exceptional, and you and I can think of them even though nobody ever told us. It's a fascinating capability, and it's not so easy for AI.
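As an illustration of Choi's point, here is a minimal sketch of the rule-based approach she describes; the species set, field names and conditions below are invented for this sketch, not taken from any real system.

```python
# Naive rule: "birds fly, except penguins" -- plus a hand-written
# exception list that, as Choi notes, never ends.

def can_fly(bird: dict) -> bool:
    """Rule-based guess at whether a bird can fly."""
    flightless_species = {"penguin", "ostrich", "kiwi"}  # never complete
    if bird["species"] in flightless_species:
        return False
    # Each condition below had to be discovered and added by hand.
    if bird.get("newborn") or bird.get("oil_covered"):
        return False
    if bird.get("injured") or bird.get("caged"):
        return False
    return True

print(can_fly({"species": "sparrow"}))                 # True
print(can_fly({"species": "penguin"}))                 # False
print(can_fly({"species": "sparrow", "caged": True}))  # False
# Still wrong for cases nobody listed: clipped wings, a bird egg,
# a dead bird... exceptions are not exceptional.
```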



You have expressed skepticism about GPT-3 before. Do you not find it interesting? I am a big fan of GPT-3, but at the same time I feel that some people make it out to be bigger than it is. Some say that maybe the Turing test has already been passed. I disagree, because, yes, maybe it looks as if it has been passed if you go by GPT-3's best performances. But if you look at its average performance, it is far from robust human intelligence. And we should look at the average case, because when you cherry-pick the best output, that reflects the intelligence of the humans who did the cherry-picking. On the other hand, despite the very exciting progress on many fronts, there is so much it cannot do well. But people make hasty generalizations: because it can do something, sometimes really well, maybe AGI is around the corner. There is no reason to believe so.





Yejin Choi leads a research workshop in September at the Paul G. Allen School of Computer Science & Engineering at the University of Washington.
John D. and Catherine T. MacArthur Foundation



So what's most exciting to you right now in your AI career? I am excited about value pluralism, the fact that values are not singular. Another way to put it is that there is no universal truth. A lot of people feel uncomfortable about this. As scientists, we are trained to be precise and to strive for the one truth. Now I'm thinking, well, there is no universal truth: can a bird fly or not? Or take social and cultural norms: Is it OK to leave a closet door open? A tidy person might think you should always close it. I'm not tidy, so I might leave it open. But if the closet is temperature-controlled for some reason, I will close it; and if the closet is in someone else's house, I'll probably behave. These rules basically cannot be written down as universal truths, because when applied in your context versus my context, the truth has to be bent. The same goes for moral rules: there must be some moral truths, you know? Don't kill people, for example. But what if it's a mercy killing? Then what?


Yes, this is something I don't understand. How can you teach AI to make moral decisions when almost every rule or truth has exceptions? AI should learn from nuance: there are cases that are clearer-cut and cases that are more debatable. It should learn from ambiguity and from the distribution of opinions. Let me ease your discomfort a bit by drawing a parallel with language models. The way we train AI there is to have it guess which word comes next: given the previous context, which word follows? There is no universal truth about which word comes next. Sometimes only one word can follow, but almost always there are many candidates. There is this uncertainty, and yet the training turns out to be powerful, because by looking at things globally, AI learns from the statistical distribution of words which word is best to use, the distribution over possible next words. I think moral decision-making can work the same way. Instead of making binary, clean-cut decisions, sometimes the judgment might be: this looks really bad. Or: you have your position, but it seems half the country thinks otherwise.
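Here is a toy sketch of the training signal Choi describes: given the previous context, predict a distribution over next words rather than one "true" answer. The tiny corpus and the bigram model below are invented for this illustration; real language models such as GPT-3 learn neural networks over vastly more text.

```python
from collections import Counter, defaultdict

# A miniature corpus, tokenized by whitespace.
corpus = (
    "birds can fly . birds can sing . penguins can swim . "
    "birds can fly south ."
).split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_word_distribution(prev: str) -> dict:
    """Probability of each candidate next word after `prev`."""
    counts = following[prev]
    total = sum(counts.values())
    return {word: count / total for word, count in counts.items()}

# After "can" there is no single correct continuation, only a
# distribution -- the uncertainty Choi says the training embraces.
print(next_word_distribution("can"))
# {'fly': 0.5, 'sing': 0.25, 'swim': 0.25}
```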


What is the ultimate hope: that AI could one day make ethical decisions that are neutral, or even contrary to the unethical goals of its designers, say an AI built for a social media company that could decide not to exploit children's privacy? Or is there always going to be some individual or private interest behind it with a thumb on the moral scale? The former is what we aspire to achieve. The latter is what inevitably happens, actually. In fact, Delphi, our research prototype for modeling moral judgments, leans left, because the crowd workers who annotated data for us leaned a little left. Both the left and the right can be unhappy about this: for people on the left, Delphi is not progressive enough, and for people on the right, it is too progressive. But Delphi was just a first shot. There is a lot of room to improve, and I believe that if we can solve the pluralism problem for AI, that would be really exciting. To be truly valuable, AI should not be a single, monolithic system but something as multifaceted as a group of humans.


What would it look like to "solve" value pluralism? I'm thinking about this as we speak, and I don't have a clear answer. I don't know what a "solution" should look like, but what I can say for the purposes of this conversation is that AI should respect the pluralism and diversity of human values, as opposed to enforcing one moral framework that is supposed to be common to everybody.


Could it be that if we're ever in a position where we're relying on AI to make moral decisions, we've already gone wrong? Isn't morality something we shouldn't outsource in the first place? You're touching on a common misconception, sorry to be blunt, that people seem to have about the Delphi model we created. It was a Q. and A. demo, and we made it clear that we don't think people should take moral advice from it. It was one more research step to test what AI can and cannot do. My main motivation is that AI needs to learn to make ethical judgments in order to interact with humans in a safer and more respectful way. So, for example, AI should not encourage people to do dangerous things, especially children; AI should not make racist or sexist remarks; and when someone says the Holocaust never happened, AI should not agree. That requires understanding human values broadly, rather than just knowing whether a particular keyword tends to be racist. AI should not be a universal moral authority on anything, but it should be aware of the diverse perspectives people hold, understand where they disagree, and then be able to avoid the clearly bad cases.


Like Nick Bostrom's paper-clip example, which I know is deliberately extreme. But is that kind of example relevant? No, but that is why I'm working on research like Delphi and social norms: because it is a worry if you have a dumb AI optimizing for one thing. That's more human error than AI error. And that's why human norms and values become so important as background knowledge for AI. Some people naively think that if we teach AI "don't kill people while maximizing paper-clip production," that will take care of it. But the machine might kill all the plants instead. That's why it also needs common sense. It is common sense not to kill all the plants in order to preserve human lives; it is common sense not to go with extreme, degenerate solutions.


What about lighter examples, like AI and comedy? Humor relies a lot on the unexpected, and if AI mostly learns by analyzing, say, what came before, does that mean humor will be especially difficult for it to understand? Some humor is formulaic, and AI does get that. But what about New Yorker cartoon captions? We have a new paper about this. Generally speaking, even the most sophisticated AI today cannot really interpret what is going on in a New Yorker caption.


To be fair, a lot of humans can't either. [Laughs.] Yes, that is true. We found that sometimes even the researchers don't understand the jokes in the New Yorker captions. They are very subtle. But we will keep at the research.


Opening photograph: John D. and Catherine T. MacArthur Foundation


This interview has been edited and condensed from two conversations.

David Marchese is a staff writer for the magazine and writes the Talk column. He recently interviewed Lynda Barry about the value of thinking like a child, Father Mike Schmitz about religion and Jerrod Carmichael about comedy and honesty.
