
Interview with Sam Altman: Startups shouldn't bet against AI continuing to improve

Deep Learning and NLP 2024/09/01 00:32


Image courtesy of The Logan Bartlett Show


Demystifying GPT-4o Multimodality: Technology, Use Cases and Behind the Scenes

Logan: You made a new announcement earlier today: the new multimodal model GPT-4o, where the O stands for omni. It can work across text, speech, and vision. Can you explain why this is important?

Sam: Because I think it's an incredible way to use a computer. We've had voice-controlled computers before, like Siri, but they never felt natural to me. This time, thanks to GPT-4o's many capabilities, its responsiveness, the added modalities, its intonation, its naturalness, it can do something new. For example, users can say "Hey, speak faster" or "Speak in a different voice." Its fluidity and flexibility are astonishing. I can't believe how much I enjoy using it.

Logan: Spike Jonze would be proud. (ZP note: Spike Jonze directed the movie Her, which envisions a highly developed artificial intelligence system that interacts naturally with humans and forms emotional connections with them.) Are there any use cases you particularly enjoy?


Image source: Unsplash

Sam: Well, I've only been using 4o for about a week. I put my phone on the table while I work, and then, without switching windows or changing what I'm doing, I can use it as another channel. Before, I usually had to stop what I was doing, switch to another tab like a Google search, and click around. With 4o, I can ask a question and get an immediate response without changing what's on my screen, all while continuing whatever I have at hand. It's cool, and it has pleasantly surprised me.

Logan: What makes this possible? Is it a shift in architecture or more computing?

Sam: This brings together all the things we've learned over the last few years. We've been looking at audio models, visual models, and how to combine them. We've been working on more efficient ways to train models. This model isn't the result of unlocking one crazy new thing all at once; it brings a lot of pieces together.

Logan: Do you think you'll need to develop on-device models to reduce latency enough for this to be usable?

Sam: You mean for video? It's true that at some point network latency will be hard to deal with. One thing I've always thought would be amazing is that one day we could put on AR glasses, talk, and see the glasses' display change in real time; network latency could make that difficult. But for our GPT-4o model, a delay of two or three hundred milliseconds feels very good; in many situations it feels faster than a person responding.

Logan: Does video mean an image in this context?

Sam: Oh, sorry. I am referring to the generated video, not the input video.

Logan: Understood. So at the moment it processes existing video directly.

Sam: Well, it works frame by frame.

Logan: I feel like you've taken an iterative approach to model development. Should we understand it this way: there won't be one big GPT-5 release next, but rather incremental updates?

Sam: To be honest, we don't know yet. One thing I've learned is that AI and surprises don't go well together. We release products differently from traditional tech companies. We could call the current model GPT-5 and release it in a different way, or call it something else; we haven't figured out how to name and brand these things yet. The releases and names from GPT-1 to GPT-4 made sense to me, and now GPT-4 has simply gotten better. One idea we have is that there may be a single underlying virtual brain that can think harder in some cases than in others. Or new releases may be different models, but users may not care whether they're different. We honestly don't know how to market these products yet.

Logan: Does incremental progress on the model take less computing power than it used to?

Sam: We've been using as much computing power as we can get. Now we're finding incredible efficiency gains, and that's very important. The most obvious thing we're putting out right now is voice mode, but maybe more important is that we've made the model efficient enough to offer it to users for free. It's the best model in the world right now, and the gap is quite noticeable; it represents a significant efficiency gain over GPT-4 and GPT-4 Turbo. Of course, we still have a lot to improve.

Logan: I've heard you say that ChatGPT itself isn't changing the world, but it may just change people's expectations of the world.

Sam: Yes. You can't find much evidence that ChatGPT actually boosts productivity on some economic metric.

Logan: Maybe it helps in terms of customer service.

Sam: Yes, there are some areas of improvement. But if you just look at global GDP, can you spot the launch of ChatGPT? Probably not.

Logan: Do you think we'll see an increase in GDP one day?

Sam: I don't know whether you could attribute it to a particular model. But I think if we look at the statistics a couple of decades from now, we'll see that, well, something changed.

Logan: What applications or areas do you see as most promising in the next 12 months?

Sam: What I work on probably gives me a certain bias, but I think AI programming is a very important area.

Logan: Tell me about The Bitter Lesson. (ZP note: The Bitter Lesson is a well-known idea in AI research proposed by the researcher Rich Sutton. It asserts that the most significant AI advances of the past few decades have come not from complex, hand-designed algorithms but from simple algorithms that leverage vast amounts of data and computing resources; complex domain-specific knowledge and hand-designed features tend to be less effective than large-scale computation and simple methods.) You recently talked about the difference between a deeply specialized model trained on specific data and a general-purpose model with real reasoning capabilities.

Sam: I'd bet the general-purpose model will be more important.

Logan: So what does that mean for people who are focused on a particular dataset and all the relevant integrations in a narrow domain?

Sam: If a model can do general-purpose reasoning, then when it needs to know how to handle a new data type, you feed it the data and it can do it. It doesn't work the other way around: a bunch of specialized models doesn't add up to general-purpose reasoning.

Logan: So, what does this mean for the development of specific models?

Sam: I think the most important thing is to crack real reasoning ability. Then we can hook that reasoning ability up to all sorts of things.

Looking to the future of AI in communication and creativity

Logan: What do you think will be the main ways of communication between humans and AI in two years?

Sam: Natural language seems pretty good. I'm interested in the idea of creating a future world that humans and AI can use together, in the same way. That's why I'm more interested in humanoid robots than other forms of robots: the world is configured for humans right now, and I don't want it reconfigured for something more efficient. I like the idea of communicating with AI in human language, and even of AIs communicating with each other that way. It's an interesting direction.

Logan: You recently mentioned that models may eventually be commoditized over time, but that what may matter most is how a model is personalized to each person. Am I right?

Sam: I'm not sure, but it sounds reasonable.

Logan: So apart from personalization, do you think UI and ease of use are the key points on which models will ultimately stand out to the end user?

Sam: These are definitely important; they always are. I can imagine that in the future there will be, say, some kind of marketplace or network where our agents communicate with each other, something like an app store spanning different companies. I think the usual rules of business will still apply. Whenever a new technology arrives, people feel like those rules don't apply, but that's usually wrong. All the traditional ways of creating lasting value remain relevant here.

Commercialization of AI: Monetization, Open Source, and Future Directions

Logan: What was your reaction when you saw open source models catching up with GPT on benchmarks?

Sam: I think that's great. Like many other types of technology, open source models have their place, and hosted models have their place, and that's fine.

Logan: I won't ask anything too specific, but there are news reports that you're looking to raise a lot of money. The Wall Street Journal said it was to invest in chip fabrication plants. In the semiconductor industry, TSMC and Nvidia have been aggressively expanding to meet AI infrastructure needs, and you recently mentioned that the world needs more AI infrastructure.

Sam: Yes, I think so.

Logan: What do you see on the demand side that makes you feel we need more AI infrastructure than we currently get from TSMC and NVIDIA?

Sam: First, I believe we'll find ways to dramatically reduce the cost of delivering current systems. Second, I believe that as we do, demand will increase substantially. And third, I believe that building bigger and better systems will create even more demand. We should want a world where intelligence is unmetered. People will do all sorts of things with it. You shouldn't have to choose between "do I want this thing to help me read and reply to all my emails" and "do I want this thing to work on curing cancer." Of course you'd choose the latter, but the better answer is that you want both at once. I want to make sure everyone has enough resources.

Logan: I don't need you to comment on your personal efforts in this area, though of course you're welcome to. But there are various physical AI assistant devices, like those from Humane and Limitless. (ZP note: Humane Inc.'s main product is the Humane AI Pin, a wearable assistant that uses AI to provide convenient, privacy-preserving services such as real-time translation, schedule management, and a voice assistant; it does not require a smartphone or screen and is operated through voice and gestures. Limitless is a company focused on AR and AI; its main product is Limitless AR Glasses, augmented reality glasses that incorporate AI to provide an immersive experience for education, entertainment, productivity, and other fields.) What do you think they did wrong? Why hasn't adoption met expectations?

Sam: I think it's just because they're still early. I've been an early adopter of many computing devices. I owned and really liked the Compaq TC1000 as a freshman in college. It was pretty cool, but it was a long way from the iPad, even if it pointed in the right direction. Then I bought a Treo. (ZP note: The Palm Treo was an early line of smartphones made by Palm that combined a mobile phone with a PDA, one of the early representatives of smartphone development.) That was very uncool in college; an old Palm Treo wasn't popular with students then. It was far from the iPhone, but we eventually got to the iPhone. These devices feel like a very promising direction that needs a few more iterations.

Logan: You recently mentioned that many businesses built on GPT-4 will be "crushed" by future GPTs; "crushed" is your word. Can you elaborate? What characteristics will allow AI-centric companies to survive GPT's advancement?

Sam: The only framework that works is this: you either build a business that the next-generation model will make obsolete, or you build one that benefits from GPT getting better. If you put a lot of effort into making a use case work that's just beyond GPT-4's capabilities, and then GPT-5 comes out and simply does the task better, you'll feel bad. But if you build something that works well and that users actually use, then when GPT-5 or whatever version comes out and everything gets stronger, you get all the benefits of the rising tide. In most cases, you're not building an AI business; you're building a business, and AI is just the technology you use. In the early days of the App Store there were a lot of gaps to fill, but eventually Apple fixed them. You no longer need a flashlight app because it's part of the operating system; it's a given. In contrast, companies like Uber were enabled by smartphones but built long-term defensible businesses. I think you should pursue the latter model.

Logan: I can think of a lot of existing businesses that leverage your technology and fit that framework. In that case, what would a novel concept look like, a real company or an interesting idea, like Uber?

Sam: I'd actually bet on a new company. A common example is trying to build AI doctors, AI diagnosticians. People say they don't want to start a business in that space because the Mayo Clinic or some other institution will do it. But I'd actually bet that it will be a new company that does something like this.

Logan: What advice do you have for CEOs who want to be proactive in preparing for these disruptions?

Sam: You need to bet on intelligence as a service that gets better and cheaper every year. That's a necessary but not sufficient condition for success. The big companies that have spent years embedding the current technology, you can beat them. But every startup focused on this space will do the same, so you still need to figure out how to build your company's moat for the long run. The competitive environment is more open than it used to be and there's a lot of new ground, but you can't skip the hard work just because there are more ways to deliver value.

Logan: Are there any jobs that you think will exist or become mainstream because of AI in five years, and that may be niche or non-existent now?

Sam: That's a good question; I haven't been asked it before. People always ask which jobs will go away, but the new jobs are actually more interesting. I think the new work will be in new forms of art and entertainment, new forms of human connection. I don't know exactly what the positions will be, but I think it will be a very big new category. I think there will be a premium on in-person experiences between people.

Analyzing AGI: The Continuous Journey to Advanced AI

Logan: OpenAI's recently publicized valuation is about $90 billion. Setting AGI aside, what do you think are the key milestones for OpenAI to become a trillion-dollar company in the near term?

Sam: I don't know the exact number. But I think if we can keep improving the technology at our current speed, keep launching good products, and keep growing revenue, we can get to trillions.

Logan: Is the current business profitability model capable of creating trillions of dollars in equity value?

Sam: I think ChatGPT's subscription model has worked very well for us. I didn't expect it to be this successful, but it has done a great job.

Logan: Whatever AGI turns out to mean, do you think that after AGI we could simply ask it whether there's a better commercialization model?

Sam: Yes, I guess so.

Logan: We may have seen some of the shortcomings of the existing OpenAI structure in November. You've mentioned that you'll make changes in the future. What do you think a more appropriate structure would look like?

Sam: We're almost ready to talk about that. We've been having all kinds of conversations and brainstorming sessions, and I hope we'll be ready to discuss it this year.

Logan: When Larry Summers and Bret Taylor were appointed to the board, I was waiting for a call, but my phone never rang.

Logan: There are a lot of interesting perspectives on the business model of AI and everything around it. You've mentioned the expectation that AI would replace manual labor first, then white-collar work, and finally creative work, but in some ways it has proven to be the opposite. Has anything else surprised you?

Sam: The one you mentioned was the biggest surprise for me. I didn't expect it to be able to do legal work so early, because I thought that was very precise, complex work. But the biggest surprise has been how the order of physical, cognitive, and creative labor actually played out.

Logan: For those who haven't heard you talk about why you don't like the term AGI, can you elaborate?

Sam: Because I no longer think of it as a moment. What I initially envisioned was that there would be a moment before which we didn't have AGI and after which we did, a real jump. Now I think it's going to be more of a continuous exponential curve, and what counts is the rate of progress from year to year. You and I may not agree on whether a given month or year marks AGI, and while we could come up with tests we'd all accept, that's harder than it sounds. GPT-4 clearly didn't hit a threshold that almost anyone would call AGI, and I don't expect our next big model to either. But I can imagine that we're only one or two small ideas, plus some more scale, away from something that feels different. So we need to stay vigilant.

Logan: Is there a more modern Turing test, call it the Bartlett test, for whether it crosses that threshold?

Sam: I think when it can do research better than all of OpenAI's researchers, that will be a very important moment, and it may well be discontinuous.

Logan: Does that feel close?

Sam: It's probably not close, but I wouldn't rule it out.

Logan: What do you think is the biggest obstacle to getting to AGI? It sounds like you think the scaling laws have room to keep working for years to come.

Sam: I think the biggest hurdle is new research. One thing I've learned moving from internet software to AI is that research doesn't run on a schedule the way engineering does. Research often takes longer than expected, and sometimes it moves much faster than anyone expected.

Logan: Can you elaborate on the fact that research progress is not as linear as engineering?

Sam: The best way to explain this is with historical examples. I may be misremembering the numbers.

Logan: I'm sure no one is going to correct you.

Sam: Somebody will. I think the neutron was first theorized in the early 20th century and probably first detected in the 1910s or '20s. Work on the atomic bomb began in the '30s, and the bomb itself was built in the '40s. From not knowing the neutron existed to building an atomic bomb and breaking all our intuitions in physics, things moved very quickly. There are other examples, like the Wright brothers. I may be misremembering the numbers, but say in 1906 they thought flight was another 50 years away, and in 1908 they did it. There are many examples like this in the history of science and engineering. There are many more things we theorized that never happened, or that took decades or centuries longer than expected. But sometimes things go very fast.

The importance of AI explainability

Logan: Where are we on this path of explainability, and how important is that for AI in the long run?

Sam: There are different types of explainability. The first is whether I understand what's happening at each layer of the network. The second is whether I can find logical flaws in the output. Both count as explainability. I'm excited about the work OpenAI is doing in this direction, and I think explainability as a broader field is exciting and promising.

Logan: I won't force you to give a specific answer; I assume you'll have a nice announcement when you're ready. But do you think explainability will be a prerequisite for enterprises to adopt AI?

Sam: GPT-4 is now widely adopted.

AI ethics, regulation, and security

Logan: Maybe calling it an accusation is too strong, but there is real concern that, alongside the exciting progress toward AGI, you personally control OpenAI and can make unilateral decisions, and that has sparked discussion. Some argue there should be a governance structure with elected leaders overseeing OpenAI, rather than letting you make all the decisions.

Sam: Yes. I think it's a mistake to tightly regulate models at current capability levels. But when models can pose significant catastrophic risk to the world, some sort of oversight is probably good. Setting those thresholds and testing methods is somewhat complicated, of course. Something like international rules for nuclear weapons would be a good thing.

Logan: VCs are critical of regulation because they think it's about protecting vested interests. What do you think they're missing about the risks inherent in AI?

Sam: I don't think they're thinking seriously about AGI. Many of the people with strong opinions about AI regulation were, not long ago, denying the possibility of AGI altogether. But I do understand the position that regulation is bad for technology; looking at what has happened in Europe's tech sector, I really understand their concerns. Still, I think we're moving toward a threshold that might make us feel differently.

Logan: Do you think open-source models carry inherent risks?

Sam: Not at the moment. But I can imagine it's possible.

Logan: I've heard you say that safety is in some ways a false framing, because it concerns risks we've already explicitly accepted, as with airlines.

Sam: Yes, safety is not a binary thing. You're willing to fly because you think airlines are fairly safe, even though you know planes sometimes crash. How safe airlines need to be is debatable, and people hold different views. It's a contested topic.

Logan: Airlines have become very safe overall, but safety doesn't mean no one ever dies in a plane crash. Similarly, with medications we weigh side effects, and some people have adverse reactions. There are also hidden safety issues, like the negative effects of social media.

Logan: When it comes to safety, is there anything that would cause you to act differently?

Sam: Yes, we have something called a preparedness framework that is designed for exactly that. It specifies the different actions we'll take at different risk levels.

Logan: I interviewed Eliezer on the podcast. (ZP note: Eliezer Yudkowsky is a well-known figure in AI safety. He is a researcher at the Machine Intelligence Research Institute (MIRI), which focuses on developing safe and beneficial AI systems, and he advocates rigorous research to ensure that future AI development aligns with human values and safety standards.)

Sam: How's that going?

Logan: Very good. It's the longest podcast I've ever done, and I think we talked for four hours.

Sam: I'm grateful for his presence.

Logan: It was a lot of fun to sit down with him and talk for four hours.

The Future of AI: Fast Takeoff Scenarios and Social Change

Logan: We talked a lot about different directions, but as a friend of the show, I have to ask about fast takeoff. I'm curious; there are a lot of different fast-takeoff scenarios.

Logan: One of the constraints we face today is the lack of AI infrastructure. If researchers developed an improved Transformer architecture that drastically reduced data and hardware requirements and worked more like the human brain, could we see a fast-takeoff scenario?

Sam: Of course it's possible, and maybe it wouldn't even require an architectural change. I still don't think it's the most likely path, but I don't rule it out, and I think it's important to consider it within the space of possibilities. I think things will be more continuous, even if they're accelerating. I don't think we'll go to sleep one night with an AI that's merely okay and wake up the next day to true superintelligence. Though even if the takeoff happens over a year or a few years, that's still fast.

Logan: Even once you get to this very powerful AGI, does it change society the next day, a year later, or a decade later?

Sam: My guess is that for the most part it's not the next day or even a year later, but that in ten years the world will be very different. I think societal inertia is a useful thing here.

Tackling personal and professional challenges

Logan: People are also suspicious about certain things, and I'd guess you don't like the questions that keep being asked, about Elon, about equity, about the November board events. Which do you hate the most?

Sam: I don't hate any of them, there's just nothing new to say.

Logan: Well, I won't ask the specific equity questions, because you've answered them many times, although people don't seem to buy the argument that you already have enough money.

Sam: Yes, if I made trillions of dollars and then donated it, that would match expectations and the usual practice.

Logan: Another Sam tried to do that. (ZP note: "Another Sam" refers to Sam Bankman-Fried, founder and CEO of the cryptocurrency exchange FTX, known for his professed philanthropic commitment and philosophy of "effective altruism"; he publicly stated that he planned to donate most of his wealth to charity.)

Logan: By contrast, what motivates you to pursue AGI? I think most people would feel that even with a higher mission, they could still get paid for it. So what motivates you to work every day now? What gives you the greatest satisfaction?

Sam: I often tell people that I'm willing to make a lot of compromises and sacrifices in my life right now, because this is the most exciting, important, and best thing I'll ever be part of. It's a crazy time, and I'm glad it won't last forever. One day I'll retire to the farm and look back on it as stressful but also very cool. I can't believe this is happening to me; it's amazing.

Logan: Was there a moment when things felt the most unreal? You've done podcasts, you've talked to Bill Gates, and you've probably had a lot of interesting people call your phone. Has there been a particularly surreal moment in the past few years?

Sam: There are things I find incredible every day. If I had more space to actually reflect on it, it would feel crazy.

Sam: But after that incident in November (ZP note: OpenAI experienced a leadership upheaval in November 2023), on that day or the next, I received about 10 to 20 text messages from leaders of various countries. The strange thing wasn't the messages themselves; it was that they felt normal. I was very busy for those four and a half days, barely sleeping, running on high energy, very clear and very focused. It all happened the week before Thanksgiving, and it was a genuinely crazy stretch until things were resolved on Tuesday night.

Logan: You canceled our podcast.

Sam: Yes, and I don't usually cancel. That Wednesday, Ollie and I drove to Napa and stopped at a restaurant. I realized I hadn't eaten in days, so I ordered four entrées and two milkshakes, just for myself. It was a very satisfying moment. While I was eating, the president of a country texted again to congratulate me on resolving the situation. And I realized then that none of it felt strange anymore.

Logan: Yes, that's interesting.

Sam: My conclusion is that humans are far more capable of adapting to almost anything than we realize. For better or worse, you quickly adjust to a new normal. I've learned this many times over the years. It reflects something extraordinary about human beings, and it serves us well.

Logan: I remember after 9/11, I was in the town of Summit, New Jersey, where a lot of people had died. How quickly the whole town settled back into a sense of normalcy after the attack struck me as very unusual. Now I have friends in Israel; you talk to them and they say things are normal. I say, there's a war there. They say, well, you go about your daily life, you buy food, you talk to friends. The psychology of it is really interesting. We still need to eat and talk to friends. People are incredibly adaptable.

Sam: Really, that has been my biggest surprise, and a profound one.

Logan: Models are getting smarter, and you also mentioned the creative element. As models gain more capabilities, what do you think will remain unique to humans?

Sam: Many years from now, humans will still care about other people. I've seen a lot of people online saying that everyone will fall in love with ChatGPT, that everyone will have a ChatGPT girlfriend or something. I bet not. I think our long-term focus on other human beings will remain, and that holds in ways big and small.

Logan: You've probably heard a lot of conspiracy theories about yourself, and plenty about AI too; maybe you don't care. After all, watching robots play football probably won't become our main hobby. You set a lot of rules and frameworks for running companies at YC, and you've also broken a lot of them. In building OpenAI, are you hiring different types of people than you would for a consumer internet company or a B2B software company? Are there different types of people on the top management team?

Sam: Researchers are very different from product engineers.

Logan: Brad, Mira, and the other executives, as well as the researchers, are distinctive. Does OpenAI bring in different types of executives, or do you recruit for specific traits?

Sam: I basically haven't done that. I sometimes recruit executives externally because I strongly believe that promoting only from within can entrench a monoculture; you need to bring in some new senior talent. But mostly we like to develop people in-house, which I think is the right approach given how different our work is from work elsewhere.

Logan: What was the most important decision you made at OpenAI? How did you come up with this decision?

Sam: It's hard for me to name one. Our decision to do what we call iterative deployment was very important: we were not going to build AGI in secret and then release it all at once, which was once the mainstream plan. I think that was a very important decision. Another important decision was to bet on language models.

Logan: I don't really know the story of betting on language models. How did this start?

Sam: We had other projects at the time, like robotics and video games. One person started working on language modeling, and Ilya believed very strongly that this direction would pay off. We did GPT-1 and GPT-2, looked at the scaling laws, scaled up to GPT-3, and then we placed our bets. In hindsight the direction of these decisions seems obvious, but it really didn't feel that way at the time.

The role of AI in creativity and personal identity

Logan: You recently mentioned that there are two ways to use AI: one is as a copy of yourself, and the other is as your smartest employee.

Sam: Oh, it's not about the AI itself, but about how you want to use it. When you imagine using a personal AI, do you think of it as entirely you, or as a standalone assistant?

Logan: There's a subtle difference between the two. Can you elaborate?

Sam: If you text me in five years, I think you'll want to know whether you're texting me or my AI assistant. If it's my AI assistant, it handles the message and you get a response. If it can easily do the sorts of things you might have asked my human assistant to do, that's fine. I think there's value in keeping these things separate, rather than treating the AI as just an extension of Sam. I don't want this thing to feel like a strange extension of me; I want it to be a separate entity that I communicate with across a boundary.

Logan: In music or other creative fields, it's fairly easy to copy a Drake or a Taylor Swift. We may need some form of verification, perhaps centralized, to confirm that something is genuinely someone's creative work. You might want something similar at the personal level too.

Sam: Yes, but it's like this: the way I think of OpenAI is that there are different people here, and I ask them to do things and they do them, or they ask me and I do them. It's not one single entity, and I think that's something we're all comfortable with.

Logan: So what does that mean in practice? Separate entities you can contact? A decentralized setup where individual agents act freely?

Sam: What I'm getting at is: what is my personal conception of AI? There are two ways to think about it. In the first, the AI is me: it takes over my computer and does what's best because it is me. Does it reply to messages on my behalf, gradually taking over my control? In the second, it's just a great colleague I can say to, "Hey, can you help me with this? Tell me when you're done." I lean toward the second way, where I don't think of the AI as me.

Adapting the education system for the AI era

Logan: What specific changes do you think the education system should make to prepare for the students of the future? For example, college students in the class of 2030 or 2035.

Sam: The most important point is that people should not only be allowed to use these AI tools; they should be required to use them. Of course, there are situations where we want people to do things the old way, because it helps them understand.

Logan: Like I remember sometimes in math class, you couldn't use a calculator.

Sam: Yes, but in real life you can use a calculator. So you need to understand the math, but you also need to be proficient with the calculator. If you had never been allowed to use a calculator in math class, you might struggle later in life. OpenAI probably wouldn't exist if its researchers had never used calculators, or at least computers. We're not going to stop people from using calculators and computers, so we shouldn't train people not to use AI, because AI is going to be an essential part of doing valuable work in the future.

Logan: One last question. You wrote in "Planning for AGI and Beyond" that the first AGI will be just a point along the continuum of intelligence, that continued progress from there is likely, and that it may sustain the rate of progress we've seen over the past decade for a long time. Have you ever stopped to imagine what that future will look like, or is it too abstract?

Sam: I don't picture it as flying cars and futuristic cities out of Star Wars, but I certainly imagine: what will it be like when one person can do the work of hundreds or thousands of well-coordinated people? What will it look like when we can discover all of scientific knowledge? It's going to be a cool world.

Original podcast: EP 104: Sam Altman (CEO, OpenAI) talks GPT-4o and Predicts the Future of AI

https://open.spotify.com/episode/6NZ1P4F7b1GaAo7LDMgmV6

Compilers: Chris, Xu Jiaying
