# Andrej Karpathy — “We’re summoning ghosts, not building animals”

https://www.youtube.com/watch?v=lXUZvyajciY
Translation: zh-CN

[00:00] Reinforcement learning is terrible.
  强化学习很糟糕。

[00:04] It just so happens that everything that we had before is much worse.
  恰好我们之前拥有的一切都糟糕得多。

[00:06] I'm actually optimistic.
  我实际上很乐观。

[00:08] I think this will work.
  我认为这会奏效。

[00:10] I think it's tractable.
  我认为它是可行的。

[00:11] I'm only sounding pessimistic because when I go on my Twitter timeline, I see all this stuff that makes no sense to me.
  我之所以听起来很悲观，是因为当我浏览我的Twitter时间线时，我看到所有这些对我来说毫无意义的东西。

[00:15] A lot of it is, I think, honestly just uh fundraising.
  我认为，老实说，其中很多只是融资。

[00:16] We're not actually building animals.
  我们实际上并没有在建造动物。

[00:18] We're building ghosts.
  我们正在建造鬼魂。

[00:20] These like sort of ethereal spirit entities because they're fully digital and they're kind of like mimicking humans.
  这些就像是飘渺的精神实体，因为它们是完全数字化的，并且有点像在模仿人类。

[00:23] And it's a different kind of intelligence.
  这是一种不同类型的智能。

[00:25] It's business as usual because we're in an intelligence explosion already and have been for decades.
  这是照常营业，因为我们已经处于智能爆炸之中，并且已经持续了几十年。

[00:30] Everything is gradually being automated.
  一切都在逐渐被自动化。

[00:32] Has been for hundreds of years.
  已经持续了数百年。

[00:34] Don't write blog posts.
  不要写博客文章。

[00:36] Don't do slides.
  不要做幻灯片。

[00:37] Don't do any of that.
  不要做任何那些事情。

[00:39] Like build the code, arrange it, get it to work.
  就像编写代码，组织它，让它工作。

[00:40] It's the only way to go.
  这是唯一的方法。

[00:42] Otherwise, you're missing knowledge.
  否则，你就会失去知识。

[00:43] If you have a perfect AI tutor, maybe you can get extremely far.
  如果你有一个完美的AI导师，也许你可以走得很远。

[00:45] The geniuses of today are barely scratching the surface of what a human mind can do.
  今天的那些天才几乎没有触及人类大脑潜能的皮毛。

[00:48] Today, I'm speaking with Andrej Karpathy.
  今天，我与Andrej Karpathy对谈。

[00:51] Andrej, why do you say that this will be the decade of agents and not the year of agents?
  Andrej，你为什么说这将是代理的十年，而不是代理的一年？

[00:55] Mhm. Uh well first of all uh thank you for uh having me here.
  嗯。嗯，首先，谢谢你邀请我来这里。

[00:56] I'm excited to be here.
  我很高兴来到这里。

[00:59] So the quote that you've just mentioned, "the decade of agents," that's actually a reaction to a pre-existing quote, I should say.
  所以你刚才提到的那句话，"代理的十年"，实际上是对一句已有说法的回应，我应该这么说。

[01:05] Where I think some of the labs, I'm not actually sure who said this, but they were alluding to this being the year of agents.
  我认为一些实验室，我不确定具体是谁说的，但他们在暗示今年将是"代理之年"。

[01:11] Uh, with respect to LLMs and how they were going to evolve.
  呃，是关于大型语言模型以及它们将如何发展的。

[01:16] And I think um I was triggered by that because I feel like there's some overpredictions going on in the industry.
  我想，嗯，我被那句话触动了，因为我觉得行业里有些预测过头了。

[01:19] And uh in my mind this is really a lot more accurately described as the decade of agents.
  呃，在我看来，这更准确地说是“代理的十年”。

[01:23] And we have some very early agents that are actually, like, extremely impressive and that I use daily, you know, Claude and Codex and so on.
  我们有一些非常早期的代理，它们实际上非常令人印象深刻，而且我每天都在使用，你知道，比如Claude和Codex等等。

[01:30] But I still feel like there's so much work to be done, and so I think my reaction is that we'll be working with these things for a decade.
  但我仍然觉得还有很多工作要做，所以我想我的反应是：我们将与这些东西共事十年。

[01:36] They're going to get better, and it's going to be wonderful, but I think I was just reacting to the timelines, I suppose, of the implication.
  它们会变得更好，而且会很棒，但我想我只是在对那种说法所暗示的时间线做出反应。

[01:46] And what do you think will take a decade to accomplish? What are the bottlenecks?
  你认为需要十年才能完成什么？瓶颈是什么？

[01:50] Well, um, actually making it work. So in my mind, when you're talking about an agent, or what the labs have in mind, and what maybe I have in mind as well,
  嗯，呃，实际上是让它真正可用。所以在我看来，当你谈论一个代理时，或者说实验室心目中的代理，也许也是我心目中的代理，

[01:57] it's, uh, you should think of it almost like an employee, or like an intern that you would hire to work with you.
  你应该把它想象成一个员工，或者一个你会雇来与你共事的实习生。

[02:03] Uh so for example, you work with some employees here.
  呃，所以，举个例子，你在这里和一些员工一起工作。

[02:05] Um, when would you prefer to have an agent like Claude or Codex do that work?
  嗯，你什么时候会更倾向于让像Claude或Codex这样的代理来做这项工作？

[02:07] Like currently of course they can't.
  比如现在，当然他们做不到。

[02:08] What would it take for them to be able to do that?
  需要具备什么条件才能让他们做到这一点？

[02:10] Why don't you do it today?
  为什么今天不做呢？

[02:11] And the reason you don't do it today is because they just don't work.
  你今天不做的原因是因为它们根本不起作用。

[02:13] So like they don't have enough intelligence.
  所以，比如它们不够智能。

[02:14] They're not multimodal enough.
  它们不够多模态。

[02:16] They can't do computer use and all this kind of stuff.
  它们无法进行计算机操作以及所有这类事情。

[02:17] And they don't do a lot of things.
  而且它们做不了很多事情。

[02:18] You know, they don't have continual learning.
  你知道，它们没有持续学习的能力。

[02:20] You can't just tell them something and they'll remember it.
  你不能只告诉它们一件事，它们就会记住。

[02:23] And they're just cognitively lacking.
  而且它们在认知上有所欠缺。

[02:24] And it's just not working.
  而且这根本行不通。

[02:25] And I just think that it will take about a decade to work through all of those issues.
  我只是认为需要大约十年时间来解决所有这些问题。

[02:27] >> Interesting.
  >> 有趣。

[02:28] So, um, as a professional podcaster and a viewer of AI from afar, it's sort of easy to identify for me like, oh, here's what's lacking.
  所以，嗯，作为一个专业的播客制作者，以及一个远距离观察人工智能的人，对我来说很容易就能识别出，哦，这里缺少了什么。

[02:30] Continual learning is lacking or multimodality is lacking.
  持续学习是缺失的，或者是多模态能力是缺失的。

[02:31] But I don't really have a good um way of trying to put a timeline on it.
  但我真的没有一个很好的方法来给它设定一个时间表。

[02:32] Like if somebody's like, how long will continual learning take?
  比如有人问，持续学习需要多长时间？

[02:35] >> There's no prior I have about whether this is a project that should take 5 years, 10 years, or 50 years.
  >> 我没有什么先验判断来说这是一个该花5年、10年还是50年的项目。

[02:37] >> Why a decade?
  >> 为什么是十年？

[02:40] Why not one year?
  为什么不是一年？

[02:43] Why not 50 years?
  为什么不是五十年？

[03:04] Um, yeah, I guess this is where you get into a bit of my own intuition, and also just doing a bit of an extrapolation with respect to my own experience in the field, right?
  嗯，是的，我想这就要涉及一点我自己的直觉了，同时也是基于我自己在这个领域的经验所做的一点外推，对吧？

[03:13] So, I guess I've been in AI for almost two decades.
  所以，我想我在人工智能领域已经快二十年了。

[03:16] I mean, it's going to be maybe 15 years or so, not that long.
  我的意思是，可能十五年左右，不算太长。

[03:19] Um, you had Richard Sutton here, who of course was around for much longer. But I do have about 15 years of experience of people making predictions and seeing how they actually turned out. And I was in research, and I've worked in the industry for a while, so I guess I have a general intuition left over from that. And I feel like the problems are tractable, they're surmountable, but they're still difficult. And if I just average it out, that just kind of feels like a decade to me, I guess.
  嗯，你请来过Richard Sutton，他当然在这个领域待得久得多。但我确实有大约十五年的经验，见过人们做出预测，也看到这些预测最终如何兑现。我做过研究，也在业界工作过一段时间，所以我想我从中留下了一种总体的直觉。我觉得这些问题是可解决的、可以克服的，但仍然很难。如果我把这些平均一下，对我来说，感觉大概就是十年。

[03:41] This is actually quite interesting. I want to hear not only the history, but what people in the room felt was about to happen at various breakthrough moments. What were the ways in which their feelings were either overly pessimistic or overly optimistic?
  这实际上很有趣。我不仅想听历史，还想听在各个突破性时刻，身处其中的人们觉得即将发生什么。他们的预感在哪些方面过于悲观，又在哪些方面过于乐观？

[04:03] Yeah.
  是的。

[04:04] Should we just go through each of them one by one?
  我们是否应该逐一过一遍？

[04:05] Oh yeah.
  哦，是的。

[04:06] I mean that's a giant question because of course you're talking about 15 years of stuff that happened.
  我的意思是这是一个宏大的问题，因为你当然是在谈论过去15年发生的事情。

[04:08] I mean AI is actually like so wonderful because there have been a number of I would say seismic shifts
  我的意思是人工智能实际上非常棒，因为我可以说已经发生了一系列地震般的转变

[04:13] where it's like the entire field suddenly looked a different way,
  就好像整个领域突然换了一副样子，

[04:16] right?
  对吧？

[04:17] And I guess I've maybe lived through two or three of those.
  我想我可能经历过两三次这样的情况。

[04:19] And I still think there will continue to be some, because they come with some kind of almost surprising regularity.
  我仍然认为还会继续出现一些，因为它们的出现带着某种几乎令人惊讶的规律性。

[04:24] Well, when my career began, of course, when I started to work on deep learning, when I became interested in deep learning, this was just kind of by chance of being right next to Geoff Hinton at the University of Toronto.
  嗯，我的职业生涯开始时，当然，当我开始研究深度学习、开始对深度学习感兴趣时，这只是碰巧因为我就在多伦多大学的杰夫·辛顿身边。

[04:33] And Geoff Hinton, of course, is kind of like the godfather figure of AI, and he was training all these neural networks, and I thought it was incredible and interesting. But this was by far not the main thing that everyone in AI was doing.
  当然，杰夫·辛顿可以说是人工智能的教父级人物，他当时在训练各种神经网络，我觉得这不可思议而且很有趣。但这在当时远不是人工智能领域大家在做的主流。

[04:42] Yeah, this was a niche subject on the side.
  是的，这是一个边缘化的课题。

[04:46] That's kind of maybe the first dramatic, sort of seismic shift, which came with AlexNet and so on.
  这可以说是第一次戏剧性的、地震般的转变，它随着AlexNet等的出现而到来。

[04:51] I would say AlexNet sort of reoriented everyone, and everyone started to train neural networks.
  我想说AlexNet让所有人重新调整了方向，每个人都开始训练神经网络。

[04:56] Uh, but it was still very much per task, per specific task.
  呃，但它仍然是非常针对单个任务、针对特定任务的。

[05:00] So maybe I have an image classifier or I have a neural machine translator or something like that.
  所以也许我有一个图像分类器，或者我有一个神经机器翻译器之类的东西。

[05:05] And people actually became interested, very slowly, in basically agents, I would say.
  我想说，人们实际上开始非常缓慢地对基本上可以称为"代理"的东西产生兴趣。

[05:07] Uh, um, and people started to think, okay, well maybe we have a check mark next to the visual cortex or something like that, but what about the other parts of the brain? And how can we get an actual full agent, a full entity that can actually interact in the world? And I would say the Atari deep reinforcement learning shift, in 2013 or so, was part of that early effort at agents in my mind, because it was an attempt to try to get agents that not just perceive the world but also take actions, interact, and get rewards from environments. And at the time, this was Atari games.
  呃，嗯，人们开始想：好吧，也许我们可以在视觉皮层旁边打个勾了，但大脑的其他部分呢？我们怎样才能得到一个真正完整的代理，一个能够真正与世界互动的完整实体？我想说，2013年左右的雅达利深度强化学习转变，在我看来是早期代理尝试的一部分，因为它试图得到不仅能感知世界、还能采取行动、互动并从环境中获得奖励的代理。而在当时，这就是雅达利游戏。

[05:35] right
  对

[05:36] and I kind of feel like that was a misstep actually
  我有点觉得那实际上是一个失误

[05:38] And it was a misstep that even the early OpenAI, which I was of course a part of, kind of adopted, because at that time the zeitgeist was reinforcement learning environments and games: playing and beating lots of different types of games. And OpenAI was doing a lot of that.
  而且这个失误连我当然也身处其中的早期OpenAI也在某种程度上采纳了，因为当时的时代思潮是强化学习环境和游戏：玩并通关许多不同类型的游戏。OpenAI做了很多这方面的工作。

[05:54] So that was maybe another prominent phase of AI, I would say, where for maybe two or three or four years everyone was doing reinforcement learning on games.
  所以那可能是AI的另一个突出阶段，我想说，在大约两三年或四年的时间里，每个人都在游戏上做强化学习。

[06:03] And, uh, basically that was a little bit of a misstep.
  呃，基本上那算是一个小小的失误。

[06:08] And what I was trying to do at OpenAI, actually, is, like, I was always a little bit suspicious of games as being this thing that would actually lead to AGI.
  而我当时在OpenAI试图做的事情是：实际上，我一直有点怀疑游戏是否真的能通向AGI。

[06:13] because in my mind you want something like an accountant or uh like something that's actually interacting with the real world and I just didn't see how games kind of like add up to it
  因为在我看来，你想要的是像会计那样，或者说，能够真正与现实世界互动的东西，而我只是不明白游戏如何能做到这一点

[06:20] And so my project at OpenAI, for example, was within the scope of the Universe project: an agent that was using keyboard and mouse to operate web pages.
  所以我在OpenAI的项目，举例来说，属于Universe项目的范围：一个使用键盘和鼠标来操作网页的代理。

[06:30] And I really wanted to have something that like interacts with, you know, the actual digital world that can do knowledge work.
  我真的很想拥有一些能够与，你知道的，实际的数字世界互动的东西，能够做知识工作。

[06:35] And it just so turns out that um this was extremely early, way too early.
  结果发现，嗯，这非常早期，太早期了。

[06:39] So early that we shouldn't have been working on that, you know. Because if you're just stumbling your way around, keyboard-mashing and mouse-clicking and trying to get rewards in these environments, your reward is too sparse. You just won't learn, you're going to burn a forest of compute, and you're never actually going to get something off the ground.
  早到我们根本不该在那上面下功夫，你知道的。因为如果你只是四处乱撞，乱敲键盘、乱点鼠标，试图在这些环境中获得奖励，你的奖励信号就太稀疏了。你根本学不到东西，你会烧掉一片森林般的算力，而且你永远无法真正把事情做起来。
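
The sparse-reward argument above can be put in back-of-the-envelope form. A minimal sketch, with illustrative numbers that are not from the talk: if a task only pays out after a long sequence of correct low-level actions, the chance that random exploration ever sees the reward shrinks exponentially with the task length.

```python
# Sketch of the sparse-reward problem under random exploration.
# Illustrative assumption: a web task needs `horizon` correct low-level
# actions in a row, and the untrained agent picks uniformly at random
# among `n_actions` choices at each step.

def p_success(n_actions: int, horizon: int) -> float:
    """Probability that a single random rollout earns the sparse reward."""
    return (1.0 / n_actions) ** horizon

def expected_rollouts(n_actions: int, horizon: int) -> float:
    """Expected number of rollouts before the first rewarded one."""
    return 1.0 / p_success(n_actions, horizon)

# Even a small action space and a short task starve the learner of signal:
print(f"{expected_rollouts(10, 5):.0f}")   # roughly 1e5 rollouts per reward bit
print(f"{expected_rollouts(10, 20):.0e}")  # roughly 1e20: "burning a forest" of compute
```

This is why representations learned first (for example, by pre-training) matter: they make the first rewarded trajectory reachable at all.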

[06:57] And so what you're missing is this uh power of representation in the neural network.
  所以你缺少的是，嗯，神经网络中的表征能力。

[07:00] And so, for example, today people are training those computer-using agents, but they're doing it on top of a large language model.
  所以，举例来说，今天人们在训练那些使用计算机的代理，但他们是在一个大型语言模型之上进行的。

[07:04] And so you actually have to get the language model first. You have to get the representations first. And you have to do that by all the pre-training and all the LLM stuff.
  所以你实际上必须先得到语言模型。你必须先得到表征。而这必须通过所有的预训练和所有LLM相关的工作来实现。

[07:11] So I kind of feel like, loosely speaking, people have kept trying to get the full thing too early, a few times.
  所以我觉得，笼统地说，人们好几次都试图过早地一步到位。

[07:19] People really tried to go after agents too early, I would say, and that was Atari and Universe.
  我想说，人们过早地去追求代理了，雅达利和Universe就是例子。

[07:24] Uh, and even in my own experience: you actually have to do some things first before you get to those agents.
  呃，我自己的经历也是如此：在到达那些代理之前，你实际上必须先做一些别的事情。

[07:29] Um, and maybe now the agents are a lot more competent, but maybe we're still missing uh sort of some parts uh of that stack.
  嗯，也许现在的代理能力强了很多，但也许我们仍然缺少那个堆栈的某些部分。

[07:37] But I would say maybe those are like the three like major buckets of what people were doing.
  但我想说，也许那就是人们所做的三类主要的事情。

[07:40] Uh, training neural nets per task, then the first round of agents, and then maybe the LLMs: actually seeking the representation power of the neural networks before you tack everything else on top.
  呃，先是为每个任务训练神经网络，然后是第一轮代理尝试，再然后也许是LLM：在往上叠加其他一切之前，先真正获取神经网络的表征能力。

[07:51] >> Interesting.
  >> 有趣。

[07:51] Yeah, I guess if I were to steelman the Sutton perspective, it would be that humans actually can just take on everything at once, right?
  是的，我想如果我要为萨顿的观点做最有力的辩护（steelman），那就是：人类实际上可以一次性应对所有事情，对吧？

[07:59] Even animals can take on everything at once, right?
  即使是动物也可以一次性处理所有事情，对吧？

[08:00] Animals are maybe a better example because they don't even have the scaffold of language.
  动物也许是一个更好的例子，因为它们甚至没有语言的支架。

[08:03] They just get thrown out into the world, and they have to make sense of everything without any labels.
  它们只是被扔进这个世界，必须在没有任何标签的情况下弄懂一切。

[08:09] >> And the vision for AGI, then, should just be something which looks at sensory data, looks at the computer screen, and just figures out what's going on from scratch. I mean, if a human was put in a similar situation and had to be trained from scratch, I mean, this is like a human growing up, or an animal growing up. So why shouldn't that be the vision for AI, rather than this thing where we're doing millions of years of training?
  >> 那么，AGI的愿景就应该是这样一种东西：它看着感官数据，看着电脑屏幕，然后从零开始弄清楚发生了什么。我是说，如果把一个人放在类似的处境中，必须从零开始训练，我是说，这就像人类的成长，或者动物的成长。那么，为什么AI的愿景不应该是这样，而是我们现在这种要做数百万年训练的做法呢？

[08:29] I think that's a really good question. I mean, Sutton was on your podcast, and I saw the podcast, and I had a write-up about that podcast that gets into a little bit of how I see things. And I kind of feel like I'm very careful to make analogies to animals, because they came about by a very different optimization process.
  我认为这是一个非常好的问题。我是说，萨顿上过你的播客，我看了那期播客，还写了一篇关于它的文章，其中稍微谈到了我是如何看待这些事情的。我觉得我在与动物做类比时非常谨慎，因为它们是通过一个非常不同的优化过程产生的。

[08:46] >> Animals are evolved, and they actually come with a huge amount of hardware that's built in.
  >> 动物是进化而来的，它们实际上自带了大量内置的"硬件"。

[08:50] Um, and for example, my example in the post was the zebra. A zebra gets born, and a few minutes later it's running around and following its mother. That's an extremely complicated thing to do.
  嗯，举个例子，我在帖子里举的例子是斑马。斑马出生几分钟后，就能跑来跑去并跟随它的母亲。这是一件极其复杂的事情。

[08:59] >> Yeah.
  >> 是的。

[09:01] >> Um, that's not reinforcement learning. That's something that's baked in. And evolution obviously has some way of encoding the weights of our neural nets in ATCGs.
  >> 嗯，那不是强化学习。那是与生俱来的东西。而进化显然有某种方式，把我们神经网络的权重编码在ATCG里。

[09:10] And I have no idea how that works, but it apparently works.
  我不知道那是如何工作的，但它显然是有效的。

[09:13] So, I kind of feel like brains just came from a very different process.
  所以，我觉得大脑就是来自一个非常不同的过程。

[09:18] And I'm very hesitant to take inspiration from it, because we're not actually running that process.
  而我非常不愿意从中汲取灵感，因为我们实际上并没有在运行那个过程。

[09:23] So in my post, I kind of said we're not actually building animals.
  所以在我的帖子里，我有点说我们实际上并没有在建造动物。

[09:26] Uh we're building ghosts. >> Yeah.
  我们正在建造幽灵。 >> 是的。

[09:29] >> Or spirits or whatever people want to call it.
  >> 或者灵魂，或者人们想怎么称呼它。

[09:31] Uh, because, um, we're not doing training by evolution.
  呃，因为，嗯，我们不是通过进化来训练的。

[09:34] Uh we're doing training by basically imitation of humans and the data that they've put on the internet.
  我们基本上是通过模仿人类和他们放在互联网上的数据来进行训练。

[09:40] And so you end up with these like sort of ethereal spirit entities because they're fully digital and they're kind of like mimicking humans.
  所以你最终会得到这些有点飘渺的灵魂实体，因为它们是完全数字化的，而且它们有点像在模仿人类。

[09:45] And it's a different kind of intelligence.
  这是一种不同类型的智能。

[09:47] Like if you imagine a space of intelligences, we're we're starting off at a different point almost.
  就像如果你想象一个智能空间，我们几乎是从一个不同的起点开始的。

[09:51] We're not really building animals, but I think it's also possible to make them a bit more animal-like over time.
  我们并不是真的在建造动物，但我认为随着时间的推移，也有可能让它们更像动物一些。

[09:56] And I think we should be doing that.
  我认为我们应该这样做。

[09:57] And so, I guess one more point: I do feel like Sutton's framework is basically that we want to build animals. And I actually think that would be wonderful. If we can get that to work, that would be amazing.
  所以，我想再补充一点：我确实觉得萨顿的框架基本上就是"我们想要建造动物"。而我实际上认为那会很美妙。如果我们能让它奏效，那将是惊人的。

[10:09] If there was a single algorithm that you could just, you know, run on the internet, and it learns everything, that would be incredible.
  如果有某个单一的算法，你只需要，你知道，在互联网上运行它，它就能学会一切，那将是难以置信的。

[10:15] I almost suspect that it doesn't exist; I'm not actually sure. And that's certainly not what animals do,
  我几乎怀疑它并不存在；我其实也不确定。而那肯定不是动物的做法，

[10:21] because animals have this outer loop of evolution,
  因为动物有这个外层循环的进化，

[10:23] right? Um, and a lot of what looks like learning is actually much more maturation of the brain. And I think there's actually very little reinforcement learning in animals; I think a lot of the reinforcement learning is actually more like motor tasks.
  对吧？嗯，很多看起来像学习的东西，实际上更多是大脑的成熟。而且我认为动物实际上很少进行强化学习；我认为很多强化学习其实更像是运动任务。

[10:35] It's not intelligence tasks.
  不是智力任务。

[10:37] So I actually kind of think humans don't really use RL, roughly speaking, is what I would say.
  所以粗略地说，我其实认为人类并不真正使用强化学习，这就是我想说的。

[10:41] Can you say that last sentence again? A lot of the intelligence is not motor tasks,
  你能再说一遍最后那句话吗？很多智能不是运动任务，

[10:43] it's what? Sorry.
  而是什么？抱歉。

[10:44] A lot of the reinforcement learning, from my perspective, would be things that are much more motor-like: simple kinds of tasks, like throwing a hoop or something like that.
  在我看来，很多强化学习针对的是更偏运动性的东西：简单的任务，比如投个圈之类的。

[10:54] Um but I don't think that humans use reinforcement learning for a lot of intelligence tasks like problem solving and so on.
  嗯，但我不认为人类在很多智力任务，比如解决问题等等方面使用强化学习。

[10:59] Interesting.
  有趣。

[11:01] That doesn't mean we shouldn't do that in research, but I just feel like that's not really what animals do.
  这并不意味着我们在研究中不应该这样做，但我只是觉得那并不真的是动物的做法。

[11:08] Because there are a lot of different ideas here.
  因为这里有很多不同的想法。

[11:11] Maybe one clarifying question I can ask, to understand your perspective.
  也许我可以问一个澄清性的问题，来理解你的观点。

[11:14] So I think you suggest that look evolution is doing the kind of thing that pre-training does in the sense of building something which can then understand the world.
  所以我想你建议，你看，进化所做的就是预训练所做的，即构建一个能够理解世界的东西。

[11:24] The difference I guess is that evolution has to be titrated in the case of humans through 3 gigabytes of DNA.
  我想区别在于，在人类的情况下，进化必须通过 3GB 的 DNA 来滴定。

[11:32] And so that's very unlike the weights of a model.
  所以这与模型的权重非常不同。

[11:36] I mean, literally the weights of the model are a brain, which obviously is not encoded in the sperm and the egg, or does not exist in the sperm and the egg.
  我的意思是，模型的权重实际上就相当于一个大脑，而大脑显然没有被编码在精子和卵子中，或者说并不存在于精子和卵子中。

[11:43] So it has to be grown and also the information for every single synapse in the brain simply cannot exist in the 3 gigabytes that exist in the DNA.
  所以它必须被生长，而且大脑中每一个突触的信息根本不可能存在于 DNA 中的 3GB 里。

[11:52] Evolution seems closer to finding the algorithm which then does the lifetime learning.
  进化似乎更接近于找到一个算法，然后该算法进行终身学习。

[11:58] Now maybe the lifetime learning is not analogous to RL to your point.
  现在，也许终身学习并不像你所说的与强化学习相似。

[12:03] Is that compatible with the thing you were saying or would you disagree with that?
  这与你所说的相符吗，还是你会不同意？

[12:05] I think so.
  我认为是这样。

[12:06] I would agree with you that there's some miraculous compression going on, because obviously the weights of the neural net are not stored in ATCGs.
  我同意你的观点：那里正在发生某种神奇的压缩，因为显然神经网络的权重并没有存储在ATCG中。

[12:13] There's some kind of a dramatic compression, and there are some kind of learning algorithms encoded that take over and do some of the learning online.
  存在某种剧烈的压缩，并且编码了某些学习算法，它们会接管并在线完成一部分学习。

[12:18] So I definitely agree with you on that.
  所以在这点上我绝对同意你。

[12:20] Basically I would say I'm a lot more kind of like practically minded.
  基本上我会说我更偏向于实际。

[12:23] I don't come at it from the perspective of like let's build animals.
  我不是从“让我们来构建动物”的角度来看待它。

[12:27] I come at it from the perspective of, let's build useful things.
  我是从"让我们构建有用的东西"的角度来看待它的。

[12:28] So I have a hard hat on and I'm just observing that look we're not going to do evolution because I don't know how to do that.
  所以我戴着安全帽，我只是观察到，我们不会进行进化，因为我不知道如何做到这一点。

[12:35] Uh, but it does turn out we can build these ghost- or spirit-like entities by imitating internet documents.
  呃，但事实证明，我们可以通过模仿互联网文档来构建这些幽灵般、精灵般的实体。

[12:39] This works, and it's actually a way to bootstrap yourself up to something that has a lot of built-in knowledge and intelligence, in some way.
  这行得通，而且它实际上是一种方式，让你起步就拥有某种具备大量内置知识和智能的东西。

[12:47] Uh similar to maybe what evolution has done.
  呃，可能类似于进化所做的事情。

[12:49] So that's why I kind of call pre-training this kind of like crappy evolution.
  所以这就是为什么我把预训练称为一种糟糕的进化。

[12:54] It's like the practically possible version with our technology and what we have available to us to get to a starting point where we can actually do things like reinforcement learning and so on.
  这就像是我们技术和可用资源下实际可行的版本，让我们能够达到一个起点，在那里我们可以真正地进行强化学习等等。

[13:03] Mhm. Just to steelman the other perspective, because after doing this interview and thinking about it a bit: he has an important point here. Evolution does not really give us the knowledge, right? It gives us the algorithm to find the knowledge, and that seems different from pre-training. So perhaps the perspective is that pre-training helps build the kind of entity which can learn better; it teaches meta-learning, and in that sense it is somewhat similar to finding an algorithm. But if it's "evolution gives us knowledge and pre-training gives us knowledge," that analogy seems to break down.
  嗯。我只是想为另一种观点做最有力的辩护，因为在做完那次采访并思考了一下之后：他在这里有一个重要的观点。进化并不真正给我们知识，对吧？它给我们的是寻找知识的算法，这似乎与预训练不同。所以也许这种观点是：预训练有助于构建一种能更好学习的实体；它教会元学习，在这个意义上，它与寻找一个算法有些相似。但如果说"进化给我们知识，预训练也给我们知识"，这个类比似乎就站不住脚了。

[13:31] >> So it's subtle, and you're right to push back on it. But basically, the thing that pre-training is doing: you're basically getting a next-token predictor over the internet, and you're training that into a neural net.
  >> 所以这很微妙，你提出质疑是对的。但基本上，预训练所做的事情是：你基本上得到了一个针对整个互联网的下一个词元（token）预测器，并把它训练进一个神经网络。

[13:41] It's actually doing two things that are kind of unrelated. Number one, it's picking up all this knowledge, as I call it. Number two, it's actually becoming intelligent.
  它实际上在做两件有点不相关的事。第一，它在获取我所说的所有这些知识。第二，它实际上正在变得智能。

[13:48] Um, by observing the algorithmic patterns in the internet, it actually kind of boots up all these little circuits and algorithms inside the neural net to do things like in-context learning and all this kind of stuff.
  嗯，通过观察互联网中的算法模式，它实际上在神经网络内部启动了所有这些小电路和算法，用来做上下文学习之类的事情。

[13:57] And actually, you don't actually need or want the knowledge. I actually think that's probably holding back the neural networks overall, because it's getting them to rely on the knowledge a little too much sometimes.
  而实际上，你并不真正需要或想要这些知识。我其实认为这可能在总体上拖累了神经网络，因为这有时会让它们有点过于依赖知识。

[14:07] For example, I kind of feel like one thing agents are not very good at is going off the data manifold of what exists on the internet.
  例如，我有点觉得智能体不太擅长的一件事，是脱离互联网上已有数据的流形。

[14:15] If they had less knowledge or less memory, actually maybe they would be better.
  如果它们的知识或记忆更少一些，实际上也许它们会表现得更好。
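
The pre-training objective being described here, next-token prediction over text, can be sketched with a toy counting model (a bigram table rather than an LLM; the corpus and function names below are purely illustrative):

```python
from collections import Counter, defaultdict

# Toy next-token predictor: estimate P(next | prev) by counting bigrams.
# This is the same objective (predict the next token) that pre-training
# optimizes, just with a lookup table instead of a neural net.
def train_bigram(tokens):
    counts = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        counts[prev][nxt] += 1
    return counts

def predict(counts, prev):
    """Most likely next token after `prev` under the counted distribution."""
    return counts[prev].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_bigram(corpus)
print(predict(model, "the"))  # "cat": it follows "the" twice, vs "mat" once
```

In an LLM, the count table is replaced by a neural net trained with gradient descent over internet-scale text; that is where both the "knowledge" and the in-context-learning circuits discussed above come from.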

[14:18] Yeah. Yeah. And so what I think we have to do going forward, and this will be part of the research paradigms, is I actually think we need to figure out ways to remove some of the knowledge and to keep what I call this cognitive core: this intelligent entity that is kind of stripped of knowledge but contains the algorithms, and contains the magic, you know, of intelligence and problem solving, and the strategies of it, and all this kind of stuff.
  是的。是的。所以我认为我们接下来要做的，而这将成为研究范式的一部分，是我们需要想办法去除一部分知识，只保留我所说的"认知核心"：这个智能实体被剥离了知识，但包含了算法，包含了智能与解决问题的魔力，你知道的，以及相应的策略，等等这些东西。

[14:40] There's so much interesting stuff there. Okay. So let's start with in context learning.
  那里有太多有趣的东西了。好的。所以我们从上下文学习开始。

[14:45] This is an obvious point, but I think it's worth saying it explicitly and meditating on it.
  这是一个显而易见的观点，但我认为值得把它明确说出来并加以思考。

[14:49] The situation in which these models seem the most intelligent is when I talk to them and I'm like, "Wow, there's really something on the other end that's responding to me, thinking about things."
  这些模型看起来最智能的情形，是当我与它们交谈时，我会想："哇，另一端真的有某种东西在回应我、在思考问题。"

[14:59] If it makes a mistake, it's like, "Oh wait, that's actually the wrong way to think about it. I'm backing up."
  如果它犯了错，它会说："哦等等，这实际上是错误的思考方式。我退回去重来。"

[15:02] All that is happening in context. That's where I feel like you can visibly see the real intelligence.
  所有这些都发生在上下文中。那正是我觉得你能清楚看到真正智能的地方。

[15:06] And that in-context learning process is developed by gradient descent on pre-training, right? Like it spontaneously meta-learns in-context learning. But the in-context learning itself is not gradient descent, in the same way that our lifetime intelligence as humans, our ability to do things, is conditioned by evolution, but our actual learning during our lifetime happens through some other process.
  而这种上下文学习过程是通过预训练中的梯度下降发展出来的，对吧？就像它自发地元学习出了上下文学习。但上下文学习本身并不是梯度下降，正如我们人类一生的智能、做事的能力是由进化所塑造的，但我们一生中的实际学习是通过其他某种过程发生的。

[15:30] >> I actually don't fully agree with that, but you should continue.
  >> 我实际上并不完全同意这一点，但你可以继续说。

[15:33] >> Okay, actually then I'm very curious to understand how that analogy breaks down.
  >> 好的，那我非常好奇想了解那个类比是如何失效的。

[15:36] >> I think I'm hesitant to say that in-context learning is not doing gradient descent, because, I mean, it's not doing explicit gradient descent, but I still think that...
  >> 我认为我不太敢说上下文学习没有在做梯度下降，因为，我的意思是，它没有在做显式的梯度下降，但我仍然认为……

[15:44] >> So in-context learning is basically pattern completion within a token window, right? And it just turns out that there's a huge amount of patterns on the internet. And so you're right, the model kind of learns to complete the pattern. Yeah.
  >> 所以上下文学习基本上就是在一个 token 窗口内进行模式补全，对吧？而且碰巧互联网上存在海量的模式。所以你说得对，模型就像是学会了补全模式。是的。

[15:54] >> And that's inside the weights. The weights of the neural network are trying to discover patterns and complete the pattern. And there's some kind of adaptation that happens inside the neural network, right?
  >> 而那是在权重内部。神经网络的权重在尝试发现模式并补全模式。并且在神经网络内部发生了某种形式的适应，对吧？

[16:03] >> Uh, which is kind of magical and just falls out from the internet, just because there's a lot of patterns. I will say that there have been some papers I thought were interesting that actually look at the mechanisms behind in-context learning, and I do think it's possible that in-context learning actually runs a small gradient descent loop internally in the layers of the neural network. I recall one paper in particular where they were doing linear regression using in-context learning. So basically your inputs into the neural network are (x, y) pairs.
  >> 呃，这有点神奇，而且只是因为互联网上有大量模式才自然涌现出来的。我会说，有一些我觉得很有趣的论文实际上研究了上下文学习背后的机制，而且我确实认为，上下文学习有可能在神经网络的层内部运行着一个小的梯度下降循环。我特别记得有一篇论文，他们用上下文学习来做线性回归。所以基本上，你输入给神经网络的是 (x, y) 对。

[16:30] >> (x, y), (x, y), (x, y), pairs that happen to be on a line.
  >> (x, y)、(x, y)、(x, y)，恰好落在一条直线上的数据对。

[16:33] >> And then you give it an x and you expect the y. And the neural network, when you train it in this way, actually does do linear regression.
  >> 然后你给它一个 x，期望它输出 y。而当你以这种方式训练神经网络时，它实际上确实在做线性回归。

[16:40] >> And normally, when you run linear regression, you have a small gradient descent optimizer that basically looks at an (x, y) pair, looks at the error, calculates the gradient of the weights, and does the update a few times. It just turns out that when they looked at the weights of that in-context learning algorithm, they actually found some analogies to gradient descent mechanics. In fact, I think the paper went even further, because they actually hardcoded the weights of a neural network to do gradient descent through attention and all the internals of the neural network.
  >> 通常，当你运行线性回归时，你会有一个小的梯度下降优化器，它基本上查看一个 (x, y) 对，查看误差，计算权重的梯度，并做几次更新。结果发现，当他们检视那个上下文学习算法的权重时，确实发现了一些与梯度下降机制的类比。事实上，我认为那篇论文更进了一步，因为他们实际上把一个神经网络的权重硬编码成通过注意力机制和神经网络的所有内部结构来执行梯度下降。

[17:10] So I guess that's just my only pushback: who knows how in-context learning works, but I actually think it's probably doing a little bit of some kind of funky gradient descent internally, and I think that's possible.
  所以我想这就是我唯一的反驳：谁知道上下文学习是如何工作的，但我实际上认为它可能在内部做着某种有点奇特的梯度下降，而且我认为那是可能的。

[17:22] So I guess I was only pushing back on you saying it's not doing gradient descent.
  所以我想我只是在反驳你所说的它没有在做梯度下降。

[17:25] Who knows what it's doing, but it's probably doing something similar to it; we don't know.
  谁知道它在做什么，但它可能在做与之类似的事情；我们并不知道。

[17:29] So then it's worth thinking about: if in-context learning and pre-training are both implementing something like gradient descent, why does it feel like with in-context learning we're actually getting this continual-learning, real-intelligence-like thing, whereas you don't get the analogous feeling just from pre-training? At least you could argue that. So if it's the same algorithm, what could be different? Well, one way to think about it is how much information the model stores per unit of information it receives from training.
  那么就值得思考一下：如果上下文学习和预训练都在实现类似梯度下降的东西，为什么感觉上下文学习真的让我们得到了这种持续学习、像真正智能一样的东西，而单靠预训练却没有类似的感受？至少你可以这样论证。那么，如果算法相同，不同之处可能在哪里？嗯，一种思考方式是：模型从训练中每接收一单位信息，会存储多少信息。

[18:00] And if you look at pre-training, I think if you look at Llama 3, for example, it's trained on 15 trillion tokens, and if you look at the 70B model, that would be the equivalent of 0.07 bits per token that it sees in pre-training, in terms of the information in the weights of the model compared to the tokens it reads.
  如果你看预训练，比如看 Llama 3，我记得它是在15万亿个 token 上训练的，而如果你看70B的模型，按模型权重中的信息量相对于它读过的 token 来算，这相当于它在预训练中每看到一个 token 只吸收0.07比特。

[18:18] Whereas if you look at the KV cache and how it grows per additional token in in context learning, it's like 320 kilobytes.
  而如果你看 KV 缓存以及它在上下文学习中每增加一个 token 的增长方式，大约是 320 千字节。

[18:26] Yeah. So that's a 35-million-fold difference in how much information per token is assimilated by the model.
  是的。所以在模型每个 token 吸收的信息量上，存在3500万倍的差异。

[18:33] I wonder if that's relevant at all.
  我想知道这是否相关。
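The arithmetic behind those figures can be checked as a quick back-of-envelope sketch. The 16-bit weight format below is an assumption (it is what makes the quoted 0.07 bits/token come out), and the 320 KB of KV cache per token is taken from the conversation:

```python
# Back-of-envelope check of the ~35-million-fold gap discussed above.
params = 70e9            # Llama 3 70B parameters
pretrain_tokens = 15e12  # pre-training tokens
bits_per_param = 16      # bf16 weights (assumption)

bits_per_token_pretrain = params * bits_per_param / pretrain_tokens
kv_bits_per_token = 320 * 1024 * 8  # ~320 KB of KV cache per token, in bits

ratio = kv_bits_per_token / bits_per_token_pretrain
print(f"{bits_per_token_pretrain:.3f} bits/token from pre-training")  # → 0.075
print(f"ratio ≈ {ratio / 1e6:.0f} million-fold")                      # → 35
```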

[18:34] I think I kind of agree. I mean, the way I usually put this is that for anything that happens during the training of the neural network, the knowledge is only a kind of hazy recollection of what happened at training time, and that's because the compression is dramatic.
  我想我大体同意。我的意思是，我通常这样说：对于神经网络训练期间发生的任何事情，知识都只是对训练时所发生之事的一种模糊记忆，这是因为压缩极其剧烈。

[18:46] You're taking 15 trillion tokens and compressing them into a final network of just a few billion parameters. So obviously there's a massive amount of compression going on.
  你把15万亿个 token 压缩进一个最终只有几十亿参数的网络里。所以显然这里发生了大量的压缩。

[18:54] So I kind of refer to it as a hazy recollection of the internet documents, whereas anything that happens in the context window of the neural network, where you're plugging in all the tokens and it's building up this KV cache representation, is very directly accessible to the neural net. So I compare the KV cache and the stuff that happens at test time to something more like a working memory.
  所以我把它称作对互联网文档的模糊记忆；而任何发生在神经网络上下文窗口中的事情，也就是你把所有 token 输进去、它构建起这些 KV 缓存表示的过程，对神经网络来说是可以非常直接访问的。所以我把 KV 缓存和测试时发生的事情比作更像工作记忆的东西。
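A per-token KV-cache figure like the "320 kilobytes" quoted earlier falls out of the model's shape. The configuration below is a plausible sketch (layer count, KV heads, and head dimension are assumptions, not confirmed in the conversation):

```python
# Rough sketch of where a ~320 KB/token KV-cache figure can come from.
layers = 80          # transformer layers (assumed)
kv_heads = 8         # key/value heads, e.g. with grouped-query attention (assumed)
head_dim = 128       # dimension per head (assumed)
bytes_per_value = 2  # fp16/bf16 activations

# Per token we store keys AND values (factor of 2), for every layer.
kv_bytes_per_token = 2 * layers * kv_heads * head_dim * bytes_per_value
print(kv_bytes_per_token / 1024, "KB per token")  # → 320.0 KB per token
```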

[19:11] All the stuff that's in the context window is very directly accessible to the neural net. So there are always these almost surprising analogies between LLMs and humans, and I find them kind of surprising because, of course, we're not trying to build a human brain directly; we're just finding that this works, and we're doing it.
  上下文窗口中的所有内容，对神经网络来说都是可以非常直接访问的。所以在大型语言模型和人类之间，总有这些几乎令人惊讶的类比。我觉得它们有点令人惊讶，因为我们当然不是在直接构建一个人类大脑；我们只是发现这样做有效，于是就这么做了。

[19:27] >> But I do think that anything that's in the weights is kind of like a hazy recollection of what you read a year ago, while anything that you give it as context at test time is directly in the working memory. And I think that's a very powerful analogy to think through things. So when you, for example, go to an LLM and ask it about some book and what happened in it, like nan's book or something like that, the LLM will often give you some stuff that is roughly correct. But if you give it the full chapter and ask it questions, you're going to get much better results, because it's now loaded in the working memory of the model. So I basically agree with your very long way of saying it; I kind of agree, and that's why...
  >> 但我确实认为，权重里的任何东西都有点像你一年前读过的内容的模糊记忆，而你在测试时作为上下文提供给它的任何东西都直接存在于工作记忆中。我认为这是一个非常有力的类比，可以用来思考问题。比如，当你去问一个大型语言模型某本书以及书中发生了什么时，模型通常会给你一些大致正确的内容；但如果你把整章内容给它再提问，你会得到好得多的结果，因为这些内容现在已经加载进了模型的工作记忆。所以我基本上同意你用很长的方式所表达的观点；我有点同意，这也是为什么……

[20:01] >> Stepping back, what is the part of human intelligence that we have most failed to replicate with these models?
  >> 退一步看，人类智能中我们最没能用这些模型复制出来的部分是什么？

[20:08] >> Um, I almost feel like just a lot of it, still. So maybe one way to think about it, I don't know if this is the best way, but again making these analogies, imperfect as they are: we've stumbled onto the transformer neural network, which is extremely powerful and very general. You can train transformers on audio or video or text or whatever you want, and it just learns patterns; they're very powerful, and it works really well.
  >> 嗯，我几乎觉得，仍然有很多部分没有复制出来。也许可以这样来思考，我不知道这是不是最好的方式，但还是做这些并不完美的类比：我们偶然摸索出了 Transformer 神经网络，它极其强大、非常通用。你可以在音频、视频、文本或任何你想要的数据上训练 Transformer，它就是能学习模式；它们非常强大，效果也非常好。

[20:35] That to me almost indicates that this is kind of like some piece of cortical tissue. It's something like that, because the cortex is famously very plastic as well: you can rewire parts of brains, and there were the slightly gruesome experiments rewiring the visual cortex to the auditory cortex, and the animal learned fine, etc.
  这在我看来几乎表明，它有点像一块皮层组织。差不多是那样的东西，因为大脑皮层也以可塑性强著称：你可以重新连接大脑的某些部分，曾有一些略显残酷的实验，把视觉皮层重新连接到听觉皮层，而那只动物照样学得很好，等等。

[20:54] So I think this is kind of like cortical tissue. I think when we're doing reasoning and planning inside the neural networks, basically the reasoning traces for thinking models, that's kind of like the prefrontal cortex.
  所以我认为这有点像皮层组织。我认为当我们在神经网络内部进行推理和规划时，也就是思考型模型的推理轨迹，那有点像前额叶皮层。

[21:09] And then maybe those are little check marks, but I still think there are many brain parts and nuclei that are not explored. So maybe, for example, there's a basal ganglia doing a bit of reinforcement learning when we fine-tune the models with reinforcement learning, whereas for something like the hippocampus it's not obvious what the equivalent would be. Some parts are probably not important: maybe the cerebellum is not important to cognition, it's thought, so we can skip some of it.
  然后，也许这些算是打上了小小的勾，但我仍然认为还有许多脑区和神经核没有被探索。比如，当我们用强化学习微调模型时，也许有一个基底神经节在做一点强化学习；而像海马体，它的对应物是什么就不那么明显了。有些部分可能并不重要：据认为小脑对认知也许不重要，所以其中一些我们可以跳过。

[21:29] But I still think there's, for example, the amygdala, all the emotions and instincts, and probably a bunch of other very ancient nuclei in the brain that I don't think we've really replicated. I don't actually know that we should be pursuing the building of an analog of the human brain; I'm, again, mostly an engineer at heart.
  但我仍然认为，比如杏仁核，所有的情绪和本能，还有大脑中一堆非常古老的其他神经核，我认为我们并没有真正复制出来。我其实不确定我们是否应该追求构建一个人类大脑的类似物；再说一次，我本质上主要是个工程师。

[21:49] But maybe another way to answer the question is: you're not going to hire this thing as an intern, and that's because it comes with a lot of these cognitive deficits that we all intuitively feel when we talk to the models.
  但也许回答这个问题的另一种方式是：你不会把这个东西当实习生雇用，这是因为它带有许多认知缺陷，我们在和这些模型交谈时都能直觉地感受到。

[21:58] >> And so it's just not fully there yet. You can look at it as: not all the brain parts are checked off yet.
  >> 所以它还没有完全到位。你可以这样看：还没有把所有的脑区都打上勾。

[22:04] >> This is maybe relevant to the question of how fast these issues will be solved. So sometimes people will say about continual learning: look, you could easily replicate this capability, just as in-context learning emerged spontaneously as a result of pre-training. Continual learning over longer horizons will emerge spontaneously if the model is incentivized to recollect information over horizons longer than one session. So if there's some outer-loop RL which has many sessions within that outer loop, then this continual learning, where it fine-tunes itself or writes to an external memory or something, will just emerge spontaneously. Do you think that's plausible? I just don't really have a prior over how plausible that is, how likely it is to happen.
  >> 这也许和这些问题会多快被解决有关。有时人们会这样谈论持续学习：看，你完全可以复制这种能力，就像上下文学习作为预训练的结果自发涌现出来一样。如果模型被激励在比单次会话更长的时间跨度上回忆信息，那么更长时间跨度上的持续学习也会自发涌现。所以，如果有某种外层循环的强化学习，在这个外层循环里包含许多会话，那么这种持续学习，也就是模型微调自己、或者写入外部记忆之类的能力，就会自发地涌现出来。你觉得这说得通吗？我对它的可能性实在没有什么先验判断，它发生的可能性有多大？

[22:55] How likely is that to happen? >> I don't know that I fully resonate with

[22:56] >> I don't know that I fully resonate with that because I feel like these models

[22:58] that because I feel like these models when you boot them up and they have zero

[22:59] when you boot them up and they have zero tokens in the window, they're always

[23:01] tokens in the window, they're always like restarting from scratch where they

[23:02] like restarting from scratch where they were. So I don't actually know in that

[23:04] were. So I don't actually know in that worldview what it looks like. Uh because

[23:07] worldview what it looks like. Uh because um again making maybe making some

[23:10] um again making maybe making some analogies to humans just because I think

[23:11] analogies to humans just because I think it's roughly concrete and kind of

[23:13] it's roughly concrete and kind of interesting to think through. I feel

[23:14] interesting to think through. I feel like when I'm awake I'm building up a

[23:16] like when I'm awake I'm building up a context window of stuff that's happening

[23:17] context window of stuff that's happening during the day but I feel like when I go

[23:18] during the day but I feel like when I go to sleep something magical happens where

[23:20] to sleep something magical happens where uh I don't actually think that that

[23:21] uh I don't actually think that that context window stays around. Um I think

[23:23] context window stays around. Um I think there's some process of distillation

[23:25] there's some process of distillation into weights of my brain.

[23:27] into weights of my brain. >> Yeah.

[23:29] >> And this happens during sleep and all this kind of stuff. We don't have an equivalent of that in large language models, and to me that's more adjacent to what you describe as the absence of continual learning. These models don't really have this distillation phase of taking what happened, analyzing it, obsessively thinking it through, basically doing some kind of synthetic data generation process and distilling it back into the weights, and maybe having a specific neural net per person. Maybe it's a LoRA; it's not a full-weight neural network, just some small, sparse subset of the weights that gets changed.
  >> 而这发生在睡眠期间，诸如此类。大型语言模型中没有与之对应的东西，在我看来，这更接近你所说的持续学习的缺失。这些模型并没有这样一个蒸馏阶段：把发生过的事情拿来分析、反复琢磨，基本上做某种合成数据生成的过程，再把它蒸馏回权重中，并且也许为每个人配一个专属的神经网络。也许那是一个 LoRA；不是一个全权重的神经网络，只是权重中一小部分稀疏子集被改变。

[24:05] subset of the weights are changed >> but basically we do want to create ways

[24:07] >> but basically we do want to create ways of creating these individuals that have

[24:09] of creating these individuals that have very long contexts. It's not only

[24:11] very long contexts. It's not only remaining in the context window because

[24:12] remaining in the context window because the context windows grow very very long

[24:14] the context windows grow very very long like maybe we have some very elaborate

[24:16] like maybe we have some very elaborate sparse attention over it

[24:17] sparse attention over it >> but I still think that humans obviously

[24:19] >> but I still think that humans obviously have some process for distilling some of

[24:21] have some process for distilling some of that knowledge into the weights we're

[24:22] that knowledge into the weights we're missing it and I do also think that

[24:25] missing it and I do also think that humans um have some kind of a very

[24:26] humans um have some kind of a very elaborate sparse attention scheme

[24:29] elaborate sparse attention scheme >> um which I think we're starting to see

[24:31] >> um which I think we're starting to see some early hints of uh so deepse v3.2 2

[24:35] some early hints of uh so deepse v3.2 2 just came out and I saw that they have

[24:36] just came out and I saw that they have like a sparse attention as an example

[24:38] like a sparse attention as an example and this is one way to have very very

[24:39] and this is one way to have very very long context windows.

[24:40] >> So I almost feel like we are redoing a lot of the cognitive tricks that evolution came up with, through a very different process, but I think we're going to converge on a similar architecture cognitively.
  >> 所以我几乎觉得，我们正在通过一个截然不同的过程，重做许多进化想出来的认知技巧，但我认为我们在认知上会收敛到一个相似的架构。

[24:50] >> Interesting. In 10 years, do you think it'll still be something like a transformer, but with much more modified attention, sparser MLPs, and so forth?
  >> 有意思。你认为10年后它仍会是类似 Transformer 的东西吗，只是注意力被大幅改造、MLP 更稀疏等等？

[24:57] >> Well, the way I like to think about it is: okay, let's use translation invariance in time, right? So 10 years ago, where were we?
  >> 嗯，我喜欢的思考方式是：好，让我们利用时间上的平移不变性，对吧？那么10年前，我们在哪里？

[25:04] >> 2015: we had convolutional neural networks, primarily. Residual networks had just come out. So remarkably similar, I guess, but still quite a bit different. I mean, the transformer was not around, and all these more modern tweaks on the transformer were not around. So maybe one of the things we can bet on for 10 years from now, by translational equivariance, is that we're still training giant neural networks with a forward pass, a backward pass, and updates through gradient descent, but maybe it looks a little bit different.
  >> 2015年：我们主要拥有卷积神经网络。残差网络刚刚问世。所以我猜，情形惊人地相似，但仍有不少不同。我是说，那时还没有 Transformer，所有这些对 Transformer 的更现代的改进也都不存在。所以，根据平移等变性，我们对10年后可以押注的一件事也许是：我们仍然在用前向传播、反向传播和梯度下降更新来训练巨大的神经网络，只是它可能看起来有点不一样。

[25:36] little bit different >> and it's just everything is much bigger

[25:38] >> and it's just everything is much bigger actually recently I also went back all

[25:40] actually recently I also went back all the way to 1989 which was kind of a fun

[25:43] the way to 1989 which was kind of a fun uh exercise for me a few years ago uh

[25:45] uh exercise for me a few years ago uh because I was reproducing uh Yan Lakun's

[25:47] because I was reproducing uh Yan Lakun's 1989 convolutional network which was the

[25:50] 1989 convolutional network which was the first neural network I'm aware of

[25:51] first neural network I'm aware of trained via gradient descent like modern

[25:53] trained via gradient descent like modern neural network trained gradient descent

[25:55] neural network trained gradient descent on uh digit recognition

[25:57] >> And I was just interested in, okay, how can I modernize this? How much of this is algorithms, how much of this is data, how much of this progress is compute and systems?

[26:03] And I was able to very quickly halve the error rate just by time traveling by 33 years. So if I time travel by algorithms by 33 years, I could adjust what Yann did in 1989 and basically halve the error.

[26:17] But to get further gains, I had to add a lot more data. I had to 10x the training set, and then I had to actually add more computational optimizations. I had to basically train for much longer with dropout and other regularization techniques.

[26:30] >> And so it's almost like all these things have to improve simultaneously. So, you know, we're probably going to have a lot more data. We're probably going to have a lot better hardware. We're probably going to have a lot better kernels and software. We're probably going to have better algorithms.

[26:40] And all of those, it's almost like no one of them is winning too much. All of them are surprisingly equal.

[26:48] >> Mhm.

[26:48] >> And this has kind of been the trend for a while. So I guess to answer maybe your question, I expect differences algorithmically to what's happening today. But I do also expect that some of the things that have stuck around for a very long time will probably still be there.

[27:01] It's probably still a giant neural network trained with gradient descent. That would be my guess.

[27:05] >> It's surprising that all of those things together only halved the error.

[27:11] >> Yeah.

[27:13] >> Which is... so, like, 30 years of progress is, uh... maybe halving is a lot, because if you halve the error, that actually means that...

[27:18] >> Halving is a lot, yeah.

[27:19] >> Yeah. Yeah. Okay.

[27:19] >> But I guess what was shocking to me is everything needs to improve across the board. Architecture, optimizer, loss function: it all has improved across the board, forever. So I kind of expect all those changes to be alive and well.

[27:31] changes to be alive and well. Well, yeah. Actually, I was about to ask a

[27:32] yeah. Actually, I was about to ask a very similar question about nano chat

[27:34] very similar question about nano chat because since you just coded up

[27:36] because since you just coded up recently, every single sort of step in

[27:38] recently, every single sort of step in the, you know, process of building a

[27:40] the, you know, process of building a chatbot is like fresh in your RAM.

[27:42] chatbot is like fresh in your RAM. >> And I'm curious if you had similar

[27:44] >> And I'm curious if you had similar thoughts about like, oh, there was no

[27:46] thoughts about like, oh, there was no one thing that was relevant to going

[27:48] one thing that was relevant to going from GPT2 to nanohat. What are sort of

[27:52] from GPT2 to nanohat. What are sort of like surprising takeaways from the

[27:55] like surprising takeaways from the experience

[27:55] >> So, nanochat is a repository I released... was it yesterday or the day before? I can't remember.

[28:03] >> We can see the sleep deprivation that went into the...

[28:06] >> Um, well, it's just trying to be the simplest complete repository that covers the whole pipeline, end to end, of building a ChatGPT clone.

[28:16] >> And so, you know, you have all of the steps, not just any individual step. I worked on all the individual steps sort of in the past, in really small pieces of code that kind of show you how that's done in an algorithmic sense, in simple code, but this kind of handles the entire pipeline.

[28:31] I think in terms of learning, it's not so much... I don't know that I actually found something that I learned from it, necessarily. I kind of already had in my mind how you build it, and this is just a process of mechanically building it, and making it clean enough that people can actually learn from it and find it useful.

[28:51] >> Yeah. What is the best way for somebody to learn from it? Is it just, like, delete all the code and try to reimplement it from scratch? Try to add modifications to it?

[28:58] >> Uh, yeah, I think that's a great question. I would probably say... so basically it's about 8,000 lines of code that takes you through the entire pipeline. I would probably put it on the right monitor; like, if you have two monitors, you put it on the right.

[29:09] >> Um, and you want to build it from scratch. You build it from the start. You're not allowed to copy paste. You're allowed to reference; you're not allowed to copy paste. Maybe that's how I would do it.

[29:17] >> Um, but I also think the repository by itself is a pretty large beast. I mean, you know, when you write this code, you don't go from top to bottom; you go from chunks, and you grow the chunks, and that information is absent. Like, you wouldn't know where to start.

[29:31] And so I think it's not just the final repository that's needed; it's the building of the repository, which is a complicated chunk-growing process.

[29:38] >> Right.

[29:38] >> Uh, so that part is not there yet. I would love to actually add that, probably later this week or something, in some way. It's probably a video or something like that. But maybe, roughly speaking, that's what I would try to do: build the stuff yourself, but don't allow yourself copy paste.

[29:54] >> I do think that there's two types of knowledge, almost. Like, there's the high-level surface knowledge, but the thing is that when you actually build something from scratch, you're forced to come to terms with what you don't actually understand, and you don't know that you don't understand it.

[30:05] >> Interesting.

[30:06] >> And it always leads to a deeper understanding. It's, like, just the only way to build. It's like, "If I can't build it, I don't understand it." That's a Feynman quote, I believe, or something along those lines.

[30:17] >> 100%. I've always believed this very strongly, because there's all these, like, micro things that are just not properly arranged, and you don't really have the knowledge; you just think you had the knowledge. So, don't write blog posts, don't do slides, don't do any of that. Like, build the code, arrange it, get it to work. It's the only way to go. Otherwise, you're missing knowledge.

[30:32] >> Um, you tweeted out that coding models were actually of very little help to you in assembling this repository, and I'm curious why that was.

[30:41] >> Yeah. Uh, so the repository, I guess I built it over a period of a bit more than a month, and I would say there's, like, three major classes of how people interact with code right now.

[30:48] Some people completely reject all of LLMs, and they are just writing from scratch. I think this is probably not the right thing to do anymore.

[30:56] Um, the intermediate part, which is where I am, is you still write a lot of things from scratch, but you use the autocomplete that's basically available now from these models. So when you start writing out a little piece of it, it will autocomplete for you, and you can just tab through, and most of the time it's correct. Sometimes it's not, and you edit it, but you're still very much the architect of what you're writing.

[31:17] And then there's the, you know, vibe coding: "hi, please implement this or that," enter, and then let the model do it. And that's the agents.

[31:26] >> Um, I do feel like the agents work in very specific settings, and I would use them in specific settings. But again, these are all tools available to you, and you have to learn what they're good at, what they're not good at, and when to use them.

[31:38] >> So the agents are actually pretty good, for example, if you're doing boilerplate stuff.

[31:42] >> Boilerplate code, that's, like, you know, just copy paste stuff. They're very good at that. They're very good at stuff that occurs very often on the internet, um, because there's lots of examples of it in the training sets of these models.

[31:53] Um, so there's, like, features of things where the models will do very well. I would say nanochat is not an example of this, because it's a fairly unique repository. There's not that much code, I think, in the way that I've structured it, and it's not boilerplate code. It's actually, like, intellectually intense code, almost, and everything has to be very precisely arranged, and the models are always trying to...

[32:14] >> They kept trying to... I mean, they have so many cognitive deficits, right? So, one example: they keep misunderstanding the code, um, because they have too much memory from all the typical ways of doing things on the internet that I just wasn't adopting.

[32:28] >> Uh, so the models, for example...

[32:30] >> I mean, I don't know if I want to get into the full details, but they keep thinking I'm writing normal code, and I'm not. Maybe one example...

[32:38] >> One example is... so, the way to synchronize... You have eight GPUs that are all doing forward backwards. The way to synchronize gradients between them is to use the DistributedDataParallel container of PyTorch, which automatically, as you're doing the backward, will start communicating and synchronizing gradients.

[32:51] I didn't use DDP because I didn't want to use it, because it's not necessary, so I threw it out.

[32:57] it's not necessary so I threw it out >> and I basically wrote my own

[32:58] >> and I basically wrote my own synchronization routine that's inside

[33:00] synchronization routine that's inside the step of the optimizer and so the

[33:02] the step of the optimizer and so the models were trying to get me to use the

[33:04] models were trying to get me to use the DDB container

[33:05] DDB container >> and they very concerned about okay this

[33:08] >> and they very concerned about okay this gets way too technical but I wasn't

[33:10] gets way too technical but I wasn't using that container because I don't

[33:11] using that container because I don't need it and I have a custom

[33:12] need it and I have a custom implementation of something like it
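The routine described here amounts to an explicit all-reduce over gradients inside the optimizer step, instead of wrapping the model in PyTorch's DistributedDataParallel. A minimal sketch of that idea, with plain Python lists standing in for per-rank gradient tensors (the real version would use `torch.distributed.all_reduce` across the eight GPUs; all names below are illustrative, not nanochat's actual code):

```python
# Sketch: manual gradient synchronization inside an optimizer step,
# rather than wrapping the model in DistributedDataParallel.
# Plain lists stand in for per-rank (per-GPU) gradient tensors.

def all_reduce_mean(per_rank_grads):
    """Average the gradients each rank computed for the same parameters."""
    n_ranks = len(per_rank_grads)
    return [sum(g) / n_ranks for g in zip(*per_rank_grads)]

def sgd_step(params, per_rank_grads, lr=0.1):
    """One optimizer step: synchronize gradients first, then update.
    DDP would overlap this communication with the backward pass; doing it
    here in the step is simpler when that overlap isn't needed."""
    grads = all_reduce_mean(per_rank_grads)
    return [p - lr * g for p, g in zip(params, grads)]

# Two "GPUs" computed different gradients for the same two parameters.
params = [1.0, 2.0]
grads_rank0 = [0.2, 0.4]
grads_rank1 = [0.6, 0.0]
print(sgd_step(params, [grads_rank0, grads_rank1]))  # averaged grads ~ [0.4, 0.2]
```

The trade-off is that DDP overlaps the gradient communication with the backward pass; synchronizing inside the step gives that up in exchange for dropping the wrapper container entirely.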

[33:13] >> And they just couldn't internalize that you had your own.

[33:15] >> Yeah, they couldn't get past that.

[33:20] And then, um, they kept trying to, like, mess up the style. Like, they're way too overdefensive. They make all these try-catch statements. They keep trying to make a production codebase, and I have a bunch of assumptions in my code, and it's okay. And it's just, like, I don't need all this extra stuff in there.

[33:33] And so I just kind of feel like they're bloating the codebase. They're bloating the complexity. They keep misunderstanding. They're using deprecated APIs a bunch of times. So it's a total mess. Um, and, uh, it's just not that useful. I can go in, I can clean it up, but it's not that useful.

[33:48] I also feel like it's kind of annoying to have to, like, type out what I want in English, because it's just too much typing. Like, if I just navigate to the part of the code that I want, and I go where I know the code has to appear, and I start typing out the first three letters, autocomplete gets it and just gives you the code.

[34:01] And so I think this is a very high information bandwidth way to specify what you want: you point to the code where you want it, you type out the first few pieces, and the model will complete it.

[34:11] >> So I guess what I mean is, um, I think these models are good in certain parts of the stack. I actually used the models a little bit. Um, there are two examples where I actually used the models that I think are illustrative.

[34:22] Uh, one was when I generated the report; that's actually more boilerplatey. So I actually vibe-coded, partially, some of that stuff. That was fine, um, because it's not, like, mission-critical stuff, and it works fine.

[34:34] >> And then the other part is when I was rewriting the tokenizer in Rust. Uh, I'm actually not as good at Rust, because I'm fairly new to Rust. So there was a bit of vibe coding going on when I was writing some of the Rust code, but I had a Python implementation that I fully understand, and I'm just making sure I'm making a more efficient version of it, and I have tests, so I feel safer doing that stuff.

[34:54] Um, and so basically they lower... or, like, they increase accessibility to languages or paradigms that you might not be as familiar with. Uh, so I think they're very helpful there as well.
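The safety net described here, a trusted reference implementation plus tests guarding a faster rewrite, is a standard differential-testing pattern. A sketch of the idea (both sides are Python here for illustration; in nanochat the reference is Python and the rewrite is Rust, and these function names are made up, not the repo's actual API):

```python
# Sketch: "port it, but keep a reference implementation and tests."
# A trusted, naive implementation guards a faster rewrite: the rewrite
# must agree with the reference on a battery of inputs.
from collections import Counter

def bytes_to_pairs_reference(data: bytes):
    """Trusted but naive: count adjacent byte pairs with a plain dict."""
    counts = {}
    for a, b in zip(data, data[1:]):
        counts[(a, b)] = counts.get((a, b), 0) + 1
    return counts

def bytes_to_pairs_fast(data: bytes):
    """'Optimized' rewrite whose behavior must match the reference."""
    return dict(Counter(zip(data, data[1:])))

def test_port_matches_reference():
    for sample in [b"", b"a", b"aaab", b"hello world", bytes(range(256))]:
        assert bytes_to_pairs_fast(sample) == bytes_to_pairs_reference(sample)

test_port_matches_reference()
```

With tests like this in place, a vibe-coded port in an unfamiliar language is much lower risk: any divergence from the implementation you fully understand fails loudly.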

[35:04] >> Yeah.

[35:04] >> Uh, because there's a ton of Rust code out there, the models are actually pretty good at it. I happen to not know that much about it, so the models are very useful there.

[35:10] >> Um, the reason I think this question is so interesting is because the main story people have about AI exploding and getting to superintelligence pretty rapidly is AI automating AI engineering and AI research.

[35:25] So they'll look at the fact that you can have Claude Code make entire applications from scratch, and be like: if you had this capability inside of OpenAI and DeepMind and everything, well, just imagine the level of, like, you know, a thousand of you, or a million of you, in parallel, finding little architectural tweaks.

[35:40] And so it's quite interesting to hear you say that this is the thing they're sort of asymmetrically worse at, and it's quite relevant to forecasting whether the AI 2027-type explosion is likely to happen anytime soon.

[35:53] >> is likely to happen anytime soon. I think that's a good way of putting it.

[35:54] think that's a good way of putting it. And I think you're getting at some of my

[35:56] And I think you're getting at some of my like why my timelines are a bit longer.

[35:58] like why my timelines are a bit longer. You're right. Um I think um yeah,

[36:01] You're right. Um I think um yeah, they're not very good at code that

[36:02] they're not very good at code that hasn't never been written before maybe

[36:03] hasn't never been written before maybe is like one way to put it, which is like

[36:05] is like one way to put it, which is like what we're trying to achieve when we're

[36:06] what we're trying to achieve when we're building these models.

[36:07] building these models. >> Very naive question, but um the

[36:10] >> Very naive question, but um the architectural tweaks that you're adding

[36:12] architectural tweaks that you're adding to uh Nanohat, they're in a paper

[36:15] to uh Nanohat, they're in a paper somewhere, right? They might even be in

[36:16] somewhere, right? They might even be in a repo somewhere. So it's

[36:19] a repo somewhere. So it's um is it surprising that they aren't

[36:21] um is it surprising that they aren't able to integrate that into whenever

[36:24] able to integrate that into whenever you're like add rope embeddings or

[36:26] you're like add rope embeddings or something they do that in the wrong way.

[36:29] >> It's tough. I think they kind of know, but they don't fully know, and they don't know how to fully integrate it into the repo, and your style, and your code, and your place, and some of the custom things that you're doing.

[36:38] >> And, uh, how it fits with all the assumptions of the repository and all this kind of stuff. So I think they do have some knowledge, but, um, they haven't gotten to the place where they can actually integrate it, make sense of it, and so on.

[36:49] I do think that a lot of this stuff, by the way, continues to improve. So, um, I think currently probably the state-of-the-art model that I go to is GPT-5 Pro.

[36:57] >> Um, and, uh, that's a very, very powerful model. So if I actually have 20 minutes, I will copy paste my entire repo and go to GPT-5 Pro, the oracle, for, like, some questions, and often it's not too bad, and surprisingly good compared to what existed a year ago.

[37:09] >> Yeah. Um, but I do think that, uh, overall the models are, um... they're not there. And I kind of feel like the industry is making too big of a jump and is trying to pretend like this is amazing, and it's not. It's slop. And I think they're not coming to terms with it, and maybe they're trying to fundraise or something like that. I'm not sure what's going on.

[37:30] But we're at this intermediate stage. The models are amazing. They still need a lot of work. For now, autocomplete is my sweet spot.

[37:38] >> But sometimes, for some types of code, I will go to an agent.

[37:41] >> Yeah. Yeah.

[37:42] >> Actually, this is also... here's another reason why this is really interesting. Um, through the history of programming there have been many productivity improvements: compilers, linting, better programming languages, etc., which have increased programmer productivity but have not led to an explosion.

[37:58] So that's, like, one category that sounds very much like autocomplete tab, and this other category is just, like, automation of the programmer.

[38:07] automation of the programmer >> and it's interesting you're seeing more

[38:08] >> and it's interesting you're seeing more in the category of the historical

[38:10] in the category of the historical analogies of like you know better

[38:12] analogies of like you know better compilers or something

[38:13] compilers or something >> maybe because this one other kind of

[38:15] >> maybe because this one other kind of thought that is like

[38:16] thought that is like >> I do feel like I have a hard time

[38:18] >> I do feel like I have a hard time differentiating where AI begins and

[38:19] differentiating where AI begins and stops because I do see AI as

[38:21] stops because I do see AI as fundamentally an extension of computing

[38:23] fundamentally an extension of computing in some in some pretty fundamental way

[38:25] in some in some pretty fundamental way and I I feel like I see a continuum of

[38:27] and I I feel like I see a continuum of this kind of like recursive

[38:28] this kind of like recursive self-improvement or like of speeding up

[38:30] self-improvement or like of speeding up uh programmers all the way from the

[38:32] uh programmers all the way from the beginning like even like I would say

[38:33] beginning like even like I would say like uh code editors

[38:36] like uh code editors >> um uh syntax highlighting

[38:39] >> um uh syntax highlighting >> uh syntax uh or like checking even of

[38:41] >> uh syntax uh or like checking even of the of the types like data type checking

[38:43] the of the types like data type checking >> um

[38:44] >> um >> all these kinds of tools that we've

[38:45] >> all these kinds of tools that we've built for each for each other even

[38:47] built for each for each other even search engines like why aren't search

[38:48] search engines like why aren't search engines part of AI like

[38:50] engines part of AI like >> I don't know like ranking is kind of AI

[38:52] >> I don't know like ranking is kind of AI right at some point Google was like even

[38:54] right at some point Google was like even early on they were thinking of

[38:55] early on they were thinking of themselves as an AI company doing Google

[38:56] themselves as an AI company doing Google search engine which I think is totally

[38:58] search engine which I think is totally fair

[38:58] fair >> and So, I kind of see it as a lot more

[39:00] >> and So, I kind of see it as a lot more of a continuum than I think other people

[39:01] of a continuum than I think other people do and I don't it's hard for me to draw

[39:03] do and I don't it's hard for me to draw the line and I kind of feel like okay,

[39:04] the line and I kind of feel like okay, we're now getting a much better

[39:05] we're now getting a much better autocomplete and now we're also getting

[39:07] autocomplete and now we're also getting some agents which are kind of like these

[39:08] some agents which are kind of like these loopy things but they kind of go off

[39:10] loopy things but they kind of go off rails sometimes. Um, and what's going on

[39:14] rails sometimes. Um, and what's going on is that the human is progressively doing

[39:15] is that the human is progressively doing a bit less and less of the low-level

[39:17] a bit less and less of the low-level stuff. For example, we're not writing

[39:18] stuff. For example, we're not writing the assembly code because we have

[39:19] the assembly code because we have compilers,

[39:20] compilers, >> right? Like compilers will take my high

[39:21] >> right? Like compilers will take my high level language and C and write the

[39:22] level language and C and write the assembly code. So we're abstracting

[39:24] assembly code. So we're abstracting ourselves very very slowly and there's

[39:26] ourselves very very slowly and there's this what I call autonomy slider of like

[39:28] this what I call autonomy slider of like more and more stuff is automated of the

[39:30] more and more stuff is automated of the stuff that can be automated at any point

[39:31] stuff that can be automated at any point in time and we're doing a bit less and

[39:33] in time and we're doing a bit less and less and uh raising ourselves in the

[39:35] less and uh raising ourselves in the layer abstraction over the automation.

[39:37] One of the big problems with RL is that it's incredibly information-sparse. Labelbox can help you with this by increasing the amount of information that your agent gets to learn from with every single episode. For example, one of their customers wanted to train a coding agent. So Labelbox augmented an IDE with a bunch of extra data collection tools and staffed a team of expert software engineers from their Alignerr network to generate trajectories that were optimized for training. Now, obviously, these engineers evaluated these interactions on a pass/fail basis, but they also rated every single response on a bunch of different dimensions, like readability and performance, and they wrote down their thought processes for every single rating that they gave. So you're basically showing every single step an engineer takes and every single thought they have while they're doing their job, and this is just something you could never get from usage data alone. And so Labelbox packaged up all these evaluations and included all the agent trajectories and the corrective human edits for the customer to train on. This is just one example, so go check out how Labelbox can get you high-quality frontier data across domains, modalities, and training paradigms. Reach out at labelbox.com.

[40:53] Let's talk about RL a bit. You've said some very interesting things about this. Conceptually, how should we think about the way that humans are able to build a rich world model just from interacting with our environment, in ways that seem almost irrespective of the final reward at the end of the episode?

[41:12] >> Mhm.

[41:14] >> If somebody's starting a business, and at the end of 10 years she finds out whether the business succeeded or failed, we say that she's earned a bunch of wisdom and experience. But it's not because the log probs of every single thing that happened over the last 10 years get upweighted or downweighted; something much more deliberate and rich is happening. What is the ML analogy, and how does that compare to what we're doing with LLMs right now?

[41:34] >> Yeah. Maybe the way I would put it is that humans don't use reinforcement learning, as I've said. I think they do something different. Reinforcement learning is a lot worse than I think the average person thinks. Reinforcement learning is terrible. It just so happens that everything we had before is much worse, because previously we were just imitating people, and that has all these issues. So in reinforcement learning, say you're solving a math problem, because it's very simple: you're given a math problem and you're trying to find the solution. Now, in reinforcement learning, you will try lots of things in parallel first. You're given a problem, you try hundreds of different attempts, and these attempts can be complex, right? They can be like, oh, let me try this, let me try that, this didn't work, that didn't work, and so on. And then maybe you get an answer, and now you check the back of the book, and you see: okay, the correct answer is this. And then you can see that, okay, this one, this one, and that one got the correct answer, but these other 97 didn't. So literally what reinforcement learning does is go to the attempts that worked really well, and every single thing you did along the way, every single token, gets upweighted: "do more of this." The problem with that is, well, people will say your estimator has high variance, but what I mean is it's just noisy. It basically assumes that every single little piece of the solution that arrived at the right answer was the correct thing to do, which is not true. You may have gone down the wrong alleys until you arrived at the right solution. Every single one of those incorrect things you did, as long as you got to the correct solution, will be upweighted as "do more of this." It's terrible.

[43:06] >> Yeah, it's noise. You've done all this work only to get, at the end, a single number: oh, you did it correctly. And based on that, you weight that entire trajectory up or down. The way I like to put it is that you're sucking supervision through a straw. You've done all this work, which could be a minute of rollout, and you're sucking the bits of supervision of the final reward signal through a straw, broadcasting that across the entire trajectory, and using it to upweight or downweight the whole thing. It's crazy. A human would never do this. Number one, a human would never do hundreds of rollouts. Number two, when a person finds a solution, they have a pretty complicated process of review: okay, these parts I did well, these parts I did not do that well, I should probably do this or that. And they think through things. There's nothing in current LLMs that does this; there's no equivalent of it. But I do see papers popping up that are trying to do this, because it's obvious to everyone in the field.

[44:03] >> Yeah.

[44:04] >> So I see it in stages. The first, imitation learning, was actually extremely surprising and miraculous and amazing: that we can fine-tune by imitation on humans. That was incredible, because in the beginning all we had was base models, and base models are autocomplete. It wasn't obvious to me at the time, and I had to learn this; the paper that blew my mind was InstructGPT, because it pointed out that, hey, you can take the pre-trained model, which is autocomplete, and if you just fine-tune it on text that looks like conversations, the model will very rapidly adapt to become very conversational, and it keeps all the knowledge from pre-training. That blew my mind, because I didn't understand that it could adjust stylistically so quickly and become an assistant to a user through just a few loops of fine-tuning on that kind of data. It was very miraculous to me that that worked. So incredible. And that was like two or three years of work. And then came RL. RL allows you to do a bit better than just imitation learning, right? Because you can have these reward functions, and you can hill-climb on them. So for problems that have just correct answers, you can hill-climb without getting expert trajectories to imitate. That's amazing. And the model can also discover solutions that a human might never come up with.

[45:12] >> Uh, so this is incredible. And yet it's still stupid. So I think we need more, and I saw a paper from Google yesterday that tried to have this reflect-and-review idea in mind. What was it, the memory bank paper or something? I don't know; I've actually seen a few papers along these lines. So I expect some kind of major update to how we do algorithms for LLMs coming in that realm, and then I think we need three or four or five more, something like that.

[45:40] >> But you're so good at coming up with these evocative phrases. "Sucking supervision through a straw" is so good. So you're saying your problem with outcome-based reward is that you have this huge trajectory, and then at the end you're trying to learn every single possible thing about what you should do, and what you should learn about the world, from that one final bit. Given that this is obvious, why hasn't process-based supervision, as an alternative, been a successful way to make models more capable? What has been preventing us from using this alternative paradigm?

[46:16] >> So process-based supervision just refers to the fact that we're not going to have a reward function only at the very end, after you've done 10 minutes of work, telling you whether you did well or not; I'm going to tell you at every single step of the way how well you're doing. And the reason we don't have that is basically that it's tricky to do properly, because you have partial solutions and you don't know how to assign credit. When you get the right answer, the reward is just an equality match against the answer. Very simple to implement.
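The contrast between the two reward schemes can be sketched like this (toy code; `step_scorer` stands in for exactly the thing that, per the discussion, we don't know how to build, so the one used here is a deliberately dumb placeholder):

```python
def outcome_reward(final_answer, correct_answer):
    """Outcome supervision: one equality check at the very end."""
    return 1.0 if final_answer == correct_answer else 0.0

def process_rewards(steps, step_scorer):
    """Process supervision: score every partial solution along the way.
    The open problem is the scorer itself; there is no cheap equality
    check for a half-finished derivation."""
    return [step_scorer(steps[: i + 1]) for i in range(len(steps))]

# Hypothetical placeholder scorer: fraction of steps containing "=".
toy_scorer = lambda partial: sum("=" in s for s in partial) / len(partial)

scores = process_rewards(["take 2 + 3", "2 + 3 = 5", "answer = 5"], toy_scorer)
```

The outcome reward is trivial to implement and hard to game; the per-step scorer is where all the difficulty hides.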

[46:40] >> If you're doing process supervision, how do you assign partial credit in an automatable way? It's not obvious how to do it. Lots of labs, I think, are trying to do it with these LLM judges. Basically, you get LLMs to try to do it: you prompt an LLM, "Hey, look at a partial solution of a student. How well do you think they're doing, if the answer is this?" And they try to tune the prompt. The reason I think this is tricky is quite subtle: anytime you use an LLM to assign a reward, those LLMs are giant things with billions of parameters, and they're gameable.

[47:09] >> And if you're doing reinforcement learning with respect to them, you will find adversarial examples for your LLM judges.

[47:14] >> Almost guaranteed.

[47:15] >> You can't do this for too long. You do maybe 10 or 20 steps, and maybe it will work, but you can't do a hundred or a thousand, because the model will find little cracks. It will find all these spurious things in the nooks and crannies of the giant model and find a way to cheat it. One example that's prominent in my mind (I think this was probably public): if you're using an LLM judge for a reward, you just give it a solution from a student and ask it whether the student got it right. We were training with reinforcement learning against that reward function, and it worked really well, and then suddenly the reward became extremely large. It was a massive jump; it was perfect. And you're looking at it like, wow, this means the student is perfect on all these problems; it's fully solved math. But actually what's happening is that when you look at the completions you're getting from the model, they are complete nonsense. They start out okay and then they change to "duh duh duh duh." So it's just like: okay, let's take two plus three, and we do this and this, and then duh duh duh duh duh. And you're looking at it like, this is crazy. How is it getting a reward of 100%? And you look at the LLM judge, and it turns out "duh duh duh" is an adversarial example for the model: it assigns 100% probability to it. And it's just because this is an out-of-sample example for the LLM. It's never seen it during training, and you're in pure generalization land.

[48:33] >> Right.

[48:34] >> It's never seen it during training, and in pure generalization land you can find these examples that break it.

[48:40] >> You're basically training the LLM to be a prompt injection model. Not even that; prompt injection is way too fancy. You're finding adversarial examples, as they're called. These are nonsensical solutions that are obviously wrong, but that the model thinks are amazing.

[48:55] are amazing. >> So to the extent you think this is the

[48:56] >> So to the extent you think this is the bottleneck to making RL more functional,

[48:59] bottleneck to making RL more functional, then that will require making LLMs

[49:01] then that will require making LLMs better judges if you want to do this in

[49:03] better judges if you want to do this in an automated way. And then so is it just

[49:05] an automated way. And then so is it just going to be like some sort of GAN-like

[49:06] going to be like some sort of GAN-like approach where you had to train models

[49:08] approach where you had to train models to be more robust? Yeah. To

[49:09] to be more robust? Yeah. To >> I think the labs are probably doing all

[49:11] >> I think the labs are probably doing all that like okay so the obvious thing is

[49:12] that like okay so the obvious thing is like the should not get 100% reward.

[49:14] like the should not get 100% reward. Okay well take the put in the training

[49:16] Okay well take the put in the training set of the LM judge and say this is not

[49:18] set of the LM judge and say this is not 100% this is 0%. You can do this

[49:20] 100% this is 0%. You can do this >> but every time you do this you get a new

[49:22] >> but every time you do this you get a new LLM and it still has adversarial

[49:24] LLM and it still has adversarial examples. There's infinity adversarial

[49:25] examples. There's infinity adversarial examples. And I think probably if you

[49:27] examples. And I think probably if you iterate this a few times, it'll probably

[49:29] iterate this a few times, it'll probably be harder and harder to find real

[49:30] be harder and harder to find real examples, but I'm not 100% sure because

[49:32] examples, but I'm not 100% sure because this thing has a trillion parameters or

[49:34] this thing has a trillion parameters or whatnot. Um, so I bet you the the labs

[49:37] whatnot. Um, so I bet you the the labs are trying. Uh, I don't actually I I

[49:40] are trying. Uh, I don't actually I I still think I still think we need other

[49:43] still think I still think we need other ideas.

[49:44] >> Interesting. Do you have some shape of what the other ideas could be?
[49:49] >> So there's this idea of, like, reviewing a solution and producing synthetic examples such that when you train on them you get better, and you kind of meta-learn it in some way. And I think there are some papers that I'm starting to see pop out. I'm only at the stage of reading abstracts, because a lot of these papers, you know, they're just ideas. Someone has to actually make it work at frontier-LLM-lab scale, in full generality, because when these papers pop up it's all a little bit noisy. They're cool ideas, but I haven't actually seen anyone convincingly show that this is possible. That said, the LLM labs are fairly closed, so who knows what they're doing now.
[50:25] >> Yeah. So I guess I can see... it's not easy, but I can conceptualize how you would be able to train on synthetic examples or synthetic problems that you have made for yourself. But there seems to be another thing humans do. Maybe sleep is this, maybe daydreaming is this,
[50:41] >> which is not necessarily coming up with fake problems, but just, like, reflecting.
[50:46] >> Yeah.
[50:46] >> And I'm not sure what the ML analogy for, you know, daydreaming or sleeping would be. Just reflecting, where I haven't come up with any problem. Yeah, I mean, obviously the very basic analogy would just be fine-tuning on the reflection bits, but I feel like in practice that probably wouldn't work that well. So I don't know if you have some take on what the analogy of this thing is.

[51:04] >> Yeah, I do think that we're missing some aspects there. So as an example, when you're reading a book, I almost feel like currently, when LLMs are reading a book, what that means is: we stretch out the sequence of text, and the model is predicting the next token and getting some knowledge from that. That's not really what humans do, right? When you're reading a book, I almost don't even feel like the book is exposition I'm supposed to be attending to and training on. The book is a set of prompts for me to do synthetic data generation,
[51:30] >> or for you to get to a book club and talk about it with your friends.
[51:32] >> And it's by manipulating that information that you actually gain that knowledge. And I think we have no equivalent of that with LLMs. They don't really do that. But I'd love to see, during pre-training, some kind of a stage that thinks through the material, tries to reconcile it with what the model already knows, thinks about it for some amount of time, and gets that to work. There's no equivalent of any of this. This is all research. And there are some very subtle reasons, which I think are hard to understand, why it's not trivial. So if I can just describe one,

[52:01] >> why can't we just synthetically generate and train on it?
[52:03] >> Well, because of what every synthetic example looks like. If I just show you a synthetic generation of the model thinking about a book, you look at it and you're like, "This looks great. Why can't I train on it?" Well, you could try, but the model will actually get much worse if you continue trying. And that's because all of the samples you get from models are silently collapsed. This is not obvious if you look at any individual example; they occupy a very tiny manifold of the possible space of, sort of, thoughts about content. So LLMs, when they come out, are what we call collapsed: they have a collapsed data distribution. One easy way to see it is to go to ChatGPT and ask it, "Tell me a joke." It only has, like, three jokes.
[52:40] >> It's not giving you the whole breadth of possible jokes.
[52:42] >> It's giving you... it knows, like, three jokes. They're silently collapsed. So basically, you're not getting the richness and the diversity and the entropy from these models that you would get from humans. Humans are a lot noisier, but at least they're not biased: in a statistical sense, they're not silently collapsed. They maintain a huge amount of entropy. So how you get synthetic data generation to work despite the collapse, while maintaining the entropy, is a research problem.
[53:06] >> Just to make sure I understood: the reason the collapse is relevant to synthetic data generation is that you want to be able to come up with synthetic problems or reflections which are not already in your data distribution.
[53:20] >> I guess what I'm saying is: say we have a chapter of a book and I ask an LLM to think about it. It will give you something that looks very reasonable, but if I ask it ten times, you'll notice that all of them are the same. You can't just keep scaling, quote-unquote, "reflection" on the same amount of prompt information and expect returns from that.
[53:41] Okay.
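The collapse being described is measurable: sample the same prompt many times and estimate the empirical entropy of the outputs. The sketch below fakes the model with a three-joke sampler (a real check would query an actual LLM API); the names and joke strings are made up for illustration.

```python
import math
import random
from collections import Counter

def empirical_entropy(samples):
    """Shannon entropy (in bits) of the observed output distribution."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

rng = random.Random(0)

# Stand-in for a collapsed model: "it only has like three jokes."
THREE_JOKES = ["the chicken one", "the knock-knock one", "the horse one"]
collapsed = [rng.choice(THREE_JOKES) for _ in range(1000)]

# Stand-in for a human-like, high-entropy source of jokes.
diverse = [f"joke #{rng.randrange(500)}" for _ in range(1000)]

print(len(set(collapsed)), empirical_entropy(collapsed))  # 3 distinct outputs, low entropy
print(len(set(diverse)), empirical_entropy(diverse))      # hundreds of distinct outputs, high entropy
```

The entropy of the collapsed sampler is capped at log2(3) ≈ 1.58 bits no matter how many samples you draw, which is the "silent" part: each individual joke looks fine, and only the distribution gives the collapse away.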

[53:41] >> Yeah, yeah. So any individual sample will look okay, but the distribution of it is quite terrible, and quite terrible in such a way that if you continue training on too much of your own stuff, you actually collapse. I actually think there are possibly no fundamental solutions to this, and I also think humans collapse over time. Again, these analogies are surprisingly good, but humans collapse during the course of their lives. This is why children, you know, haven't overfit yet. They will say stuff that will shock you, because you can kind of see where they're coming from, but it's just not the thing people say,
[54:12] >> and because they're not yet collapsed. But we're collapsed. We end up revisiting the same thoughts, we end up saying more and more of the same stuff, the learning rates go down, the collapse continues to get worse, and then everything deteriorates.

[54:27] >> Have you seen that super interesting paper arguing that dreaming is a way of preventing this kind of overfitting and collapse? That the reason dreaming is evolutionarily adaptive is to
[54:39] >> put you in weird situations that are very unlike your day-to-day reality, so as to prevent this kind of
[54:44] >> overfitting. It's an interesting idea. I mean, I do think that when you're generating things in your head and then attending to them, you're kind of training on your own samples. You're training on your synthetic data, and if you do it for too long, you go off the rails and you collapse way too much. So you always have to seek entropy in your life.
[54:59] >> Yeah.
[55:00] >> Talking to other people is a great source of entropy,
[55:03] >> and things like that. So maybe the brain has also built some internal mechanisms for increasing the amount of entropy in that process. But yeah, maybe that's an interesting idea.

[55:14] >> This is a very ill-formed thought, so I'll just put it out and let you react to it. The best learners that we are aware of, which are children, are extremely bad at recollecting information. In fact, at the very earliest stages of childhood you will forget everything; you're just amnesiac about everything that happens before a certain age. But you're extremely good at picking up new languages and learning from the world. So maybe there's some element of being able to see the forest for the trees. Whereas if you compare that to the exact opposite end of the spectrum, you have LLM pre-training, where these models will literally be able to regurgitate, word for word, what the next thing on a Wikipedia page is, but their ability to learn abstract concepts really quickly, the way a child can, is much more limited. And then adults are somewhere in between: they don't have the flexibility of childhood learning, but they can memorize facts and information in a way that is harder for kids. I don't know if there's something interesting about that.
[56:08] >> I think there's something very interesting about that. Yeah, 100%. I do think that humans have a lot more of an element of seeing the forest for the trees,
[56:17] >> and we're not actually that good at memorization, which is actually a feature.
[56:22] >> Because we're not that good at memorization, we're kind of forced to find the patterns in a more general sense. LLMs, in comparison, are extremely good at memorization. They will recite passages from all these training sources. You can give them completely nonsensical data: you can hash some amount of text, say, so you get a completely random sequence, and if you train on it for even just a single iteration or two, it can suddenly regurgitate the entire thing. It will have memorized it. There's no way a person can read a single sequence of random numbers once and recite it back to you. And that's a feature, not a bug, almost, because it forces you to learn only the generalizable components, whereas LLMs are distracted by all the memory they have of the pre-training documents, and it's probably very distracting to them in a certain sense. So that's why, when I talk about the cognitive core, I actually want to remove the memory, which is what we talked about. I'd love for them to have less memory, so that they have to look things up, and so that they only maintain the algorithms for thought, and the idea of an experiment, and all this cognitive glue of acting.

[57:26] cognitive glue of um of acting >> and this is also relevant to preventing

[57:28] >> and this is also relevant to preventing model collapse.

[57:29] model collapse. >> Um let me think um

[57:34] >> Um let me think um I'm not sure I think it's almost like a

[57:36] I'm not sure I think it's almost like a separate axis. M

[57:37] separate axis. M >> it's almost like the the models are way

[57:38] >> it's almost like the the models are way too good at uh memorization and somehow

[57:40] too good at uh memorization and somehow we should we should remove that and I

[57:42] we should we should remove that and I think people people are much worse but

[57:44] think people people are much worse but it's a good thing.

[57:46] it's a good thing. >> What is a solution to model collapse? I

[57:48] >> What is a solution to model collapse? I mean you could so there's very naive

[57:49] mean you could so there's very naive things you could attempt is just like

[57:52] things you could attempt is just like >> um the distribution over lo should be

[57:54] >> um the distribution over lo should be wider or something like there's many

[57:56] wider or something like there's many naive things you could try. What ends up

[57:58] naive things you could try. What ends up being the problem with the naive

[57:59] being the problem with the naive approaches?

[58:00] approaches? >> Um yeah I think that's a great question.

[58:01] >> Um yeah I think that's a great question. I mean you can imagine having a

[58:02] I mean you can imagine having a regularization for entropy and things

[58:04] regularization for entropy and things like that. I guess they just don't work

[58:05] like that. I guess they just don't work as well empirically because uh right now

[58:08] as well empirically because uh right now like the models are collapsed but I will

[58:10] like the models are collapsed but I will say um most of the tasks that we want of

[58:13] say um most of the tasks that we want of them don't actually demand the diversity

[58:16] them don't actually demand the diversity >> is probably the the answer of what's

[58:18] >> is probably the the answer of what's going on and so it's just that the model

[58:20] going on and so it's just that the model the frontier labs are trying to make the

[58:21] the frontier labs are trying to make the models useful and I kind of just feel

[58:23] models useful and I kind of just feel like the diversity of the outputs is not

[58:25] like the diversity of the outputs is not so much number one it's much harder to

[58:27] so much number one it's much harder to work with and evaluate and all this kind

[58:28] work with and evaluate and all this kind of stuff but maybe it's not what's

[58:29] of stuff but maybe it's not what's actually capturing most of the value.

[58:31] actually capturing most of the value. Um,

[58:32] >> In fact, it's actively penalized, right? If you're super creative in RL, that's not good.
[58:37] >> Yeah. Or maybe, if you're getting a lot of writing help from LLMs and stuff like that, I think it's probably bad, because the models will silently give you
[58:44] >> all the same stuff, you know? So they're not... they won't explore lots of different ways of answering a question, right?
[58:50] >> But I kind of feel like maybe this diversity is just not that big a deal: maybe not as many applications need it, so the models don't have it. But then it's actually a problem at synthetic-generation time, etc. So we're actually shooting ourselves in the foot by not allowing this entropy to be maintained in the model. And I think possibly the labs should try harder.

[59:06] try harder. >> And then I think you hinted that it's a

[59:08] >> And then I think you hinted that it's a it's a very fundamental problem. It

[59:10] it's a very fundamental problem. It won't be easy to solve. And yeah, what's

[59:12] won't be easy to solve. And yeah, what's your intuition for that?

[59:14] your intuition for that? >> I don't actually know if it's um super

[59:16] >> I don't actually know if it's um super fundamental. Uh I don't actually know if

[59:17] fundamental. Uh I don't actually know if I intended to to say that. I do think

[59:20] I intended to to say that. I do think that um

[59:22] that um I haven't done these experiments, but I

[59:23] I haven't done these experiments, but I do think that you could probably

[59:24] do think that you could probably regularize the entropy to be uh to be

[59:26] regularize the entropy to be uh to be higher. So you're encouraging the model

[59:27] higher. So you're encouraging the model to give you more and more solutions. Um

[59:30] to give you more and more solutions. Um but you don't want it to start deviating

[59:32] but you don't want it to start deviating too much from the training data. It's

[59:33] too much from the training data. It's going to start making up its own

[59:34] going to start making up its own language. It's going to start using

[59:35] language. It's going to start using words that are extremely rare. U you

[59:37] words that are extremely rare. U you know so it's going to drift too much

[59:38] know so it's going to drift too much from the distribution. Uh so I think

[59:40] from the distribution. Uh so I think controlling the distribution is just

[59:41] controlling the distribution is just like a tricky it's just like someone

[59:44] like a tricky it's just like someone just has to

[59:45] just has to >> it's probably not trivial in that sense.

[59:47] >> it's probably not trivial in that sense. >> How many bits should the optimal core

[59:51] >> How many bits should the optimal core >> of intelligence end up being if you just

[59:54] >> of intelligence end up being if you just had to make a guess? the thing we put on

[59:55] had to make a guess? the thing we put on the uh van

[59:57] the uh van >> pros how big does it have to be?

[01:00:00] >> pros how big does it have to be? >> So it's really interesting in the

[01:00:01] >> So it's really interesting in the history of the field because at one

[01:00:02] history of the field because at one point everything was very um scaling

[01:00:04] point everything was very um scaling pill in terms of like oh we're going to

[01:00:06] pill in terms of like oh we're going to make much bigger models trillions of

[01:00:07] make much bigger models trillions of parameter models and actually what the

[01:00:09] parameter models and actually what the models have done in size is they've gone

[01:00:11] models have done in size is they've gone up and now they've actually kind of like

[01:00:14] up and now they've actually kind of like actually even come down their models are

[01:00:16] actually even come down their models are smaller

[01:00:16] smaller >> and even then I actually think they

[01:00:18] >> and even then I actually think they memorized way too much. Um, so I think I

[01:00:21] memorized way too much. Um, so I think I had a prediction a while back that I I

[01:00:22] had a prediction a while back that I I almost feel like we can get cognitive

[01:00:24] almost feel like we can get cognitive cores that are very good at even like a

[01:00:25] cores that are very good at even like a billion billion parameters. It it should

[01:00:28] billion billion parameters. It it should be already like like if you talk to a

[01:00:30] be already like like if you talk to a billion parameter model I think in 20

[01:00:31] billion parameter model I think in 20 years you can actually have a very

[01:00:33] years you can actually have a very productive conversation. It thinks um

[01:00:35] productive conversation. It thinks um and it's a lot more like a human. But if

[01:00:37] and it's a lot more like a human. But if you ask it some factual question might

[01:00:39] you ask it some factual question might have to look it up but it knows that it

[01:00:40] have to look it up but it knows that it doesn't know and it might have to look

[01:00:41] doesn't know and it might have to look it up and it will just do all the

[01:00:42] it up and it will just do all the reasonable things. That that's actually

[01:00:44] reasonable things. That that's actually surprising that you think it will take a

[01:00:45] surprising that you think it will take a billion because already we have a

[01:00:47] billion because already we have a billion parameter models or a couple

[01:00:48] billion parameter models or a couple billion parameter models that are like

[01:00:50] billion parameter models that are like very intelligent.

[01:00:51] very intelligent. >> Well, some of our models are like a

[01:00:52] >> Well, some of our models are like a trillion parameters, right? But they

[01:00:54] trillion parameters, right? But they remember so much stuff like just

[01:00:56] remember so much stuff like just >> Yeah. But I'm surprised that in 10 years

[01:00:58] >> Yeah. But I'm surprised that in 10 years given the pace, okay, we have GPT OSS

[01:01:03] given the pace, okay, we have GPT OSS 20B that's way better than GPD4 original

[01:01:07] 20B that's way better than GPD4 original which was a trillion plus uh parameters.

[01:01:09] which was a trillion plus uh parameters. So given that trend, I'm actually

[01:01:11] So given that trend, I'm actually surprised you think in 10 years the

[01:01:12] surprised you think in 10 years the cognitive core is still a billion

[01:01:14] cognitive core is still a billion parameters. I would I'm surprised you're

[01:01:16] parameters. I would I'm surprised you're not like it's going to be like uh tens

[01:01:18] not like it's going to be like uh tens of millions or millions.

[01:01:20] >> No, because I basically think that the training data is... so here's the issue. The training data is the internet, which is really terrible.

[01:01:25] So there's a huge amount of gains to be made, because the internet is terrible. And even the internet... when you and I think of the internet, you're thinking of, like, a Wall Street Journal article.

[01:01:33] >> That's not what this is. When you're actually looking at a pre-training data set in a frontier lab, and you look at a random internet document, it's total garbage. I don't even know how this works at all. It's some stock ticker symbols...

[01:01:46] >> It's a huge amount of slop and garbage from all the corners of the internet. It's not like your Wall Street Journal article; that's extremely rare.

[01:01:53] >> So I almost feel like, because the internet is so terrible, we actually have to build really big models to compress all that. Most of that compression is memory work instead of cognitive work. But what we really want is the cognitive part; actually, delete the memory.

[01:02:06] >> So I guess what I'm saying is, we need intelligent models to help us refine even the pre-training set, to narrow it down to just the cognitive components. And then I think you get away with a much smaller model, because it's a much better data set, and you could train it on that. But probably it's not trained on it directly; it's probably distilled from a much better model still, but...

[01:02:24] >> But why is the distilled version still a billion? That's the thing I'm curious about.

[01:02:28] >> I just feel like distillation works extremely well. Almost every small model, if you have a small model, it's almost certainly distilled. Why would you train on...

[01:02:35] >> Right, no, no. But why is distillation, in 10 years, not getting below 1 billion?

[01:02:39] >> Oh, you think it should be smaller than a billion?

[01:02:41] >> I mean, come on, right? I don't know; at some point it should take at least a billion knobs to do something interesting. You're thinking it should be even smaller?

[01:02:51] >> Yeah. I mean, if you look at the trend over the last few years: just finding low-hanging fruit, and going from trillion-plus models to models that are literally two orders of magnitude smaller, in a matter of two years, with better performance.

[01:03:03] >> Yeah.

[01:03:03] >> It makes me think the core of intelligence might be even way, way smaller. Plenty of room at the bottom, to paraphrase Feynman.

[01:03:11] >> I mean, I almost feel like I'm already contrarian by talking about a billion-parameter cognitive core, and you're outdoing me. I think, yeah, maybe we could get a little bit smaller. I mean, I still think that should be enough.

[01:03:21] >> Yeah, maybe it can be smaller.
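Distillation, as invoked above, trains a small student model to match a large teacher model's output distribution rather than the raw data. A minimal sketch in plain Python of the standard soft-target loss (temperature-scaled softmax plus KL divergence, with the usual T² gradient scaling omitted for brevity); the logits and the four-token vocabulary are made up for illustration:

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 softens the distribution, exposing the teacher's
    # "dark knowledge" about the relative likelihood of wrong answers.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                     # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL(teacher || student) over temperature-softened distributions:
    # the per-token signal a student minimizes during distillation.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical logits over a 4-token vocabulary.
teacher = [4.0, 1.5, 0.2, -1.0]
student = [3.0, 2.0, 0.0, -0.5]
loss = distillation_loss(teacher, student)  # > 0 until the student matches
```

The loss is zero exactly when the student reproduces the teacher's distribution, which is why a distilled small model can inherit most of a much larger model's behavior.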

[01:03:23] >> Yeah, maybe it can be smaller. >> I do think that practically speaking,

[01:03:24] >> I do think that practically speaking, you want the model to have some

[01:03:26] you want the model to have some knowledge. You don't want it to be

[01:03:26] knowledge. You don't want it to be looking up everything.

[01:03:28] looking up everything. >> Um because then you can't like think in

[01:03:29] >> Um because then you can't like think in your head. You're looking up way too

[01:03:30] your head. You're looking up way too much stuff all the time. So, I do think

[01:03:32] much stuff all the time. So, I do think it needs to be some basic curriculum

[01:03:34] it needs to be some basic curriculum needs to be there for knowledge.

[01:03:36] needs to be there for knowledge. >> Uh but it doesn't have esoteric

[01:03:37] >> Uh but it doesn't have esoteric knowledge, you know.

[01:03:38] >> Yeah. So we're discussing what could plausibly be the cognitive core. There's a separate question, which is: what will actually be the size of frontier models over time? And I'm curious to have a prediction. We had increasing scale up to maybe 4.5, and now we're seeing decreasing or plateauing scale. There are many reasons that could be going on, but do you have a prediction going forward? Will the biggest models be bigger? Will they be smaller? Will they be the same?

[01:04:03] >> Um, yeah, I don't know that I have a super strong prediction. I do think that the labs are just being practical. They have a flops budget and a cost budget, and it just turns out that pre-training is not where you want to put most of your flops or your cost. That's why the models have gotten smaller: they are a bit smaller, or the pre-training stages are smaller, etc., but they make it up in reinforcement learning, and mid-training, and all this kind of stuff that follows.

[01:04:23] So they're just being practical in terms of all the stages and how you get the most bang for the buck. So forecasting that trend, I think, is quite hard. I do still expect that there's so much low-hanging fruit; that's my basic expectation. And so I have a very wide distribution here.

[01:04:39] >> Do you expect the low-hanging fruit to be similar in kind to the kinds of things that have been happening over the last two to five years? Like, if I look at nanochat versus nanoGPT, and the architectural tweaks you made, is that basically the flavor of things you expect to keep happening, or are you expecting any giant...

[01:05:00] >> Yeah, I expect the data sets to get much, much better, because when you look at the average data sets, they're extremely terrible. So bad that I don't even know how anything works, to be honest. Look at the average example in the training set.

[01:05:10] >> Like factual mistakes, errors...

[01:05:12] >> Nonsensical things. Somehow, when you do it at scale, the noise washes away and you're left with some of the signal. So data sets will improve a ton. And it's just that everything gets better: our hardware; all the kernels for running the hardware and maximizing what you get out of it; NVIDIA is slowly tuning the actual hardware itself, tensor cores and so on. All that needs to happen and will continue to happen, and all the kernels will get better and utilize the chip to the maximum extent. And all the algorithms will probably improve: the optimization, the architecture, all the modeling components of how everything is done, and what algorithms we're even training with. So I kind of expect: nothing dominates; everything, plus 20%.

[01:05:56] >> Right. Interesting.

[01:05:57] >> This is like roughly what I've seen.

[01:05:59] >> Okay. This is my general manager, Max.

[01:06:01] >> Good to be here. I'm here every day.

[01:06:02] >> And you have been here since you were onboarded about 6 months ago. But when I was...

[01:06:06] >> ...months ago.

[01:06:06] >> Oh, right. Time passes so fast. But when I onboarded you, I was in France, and so we basically didn't get the chance to talk at all, almost.

[01:06:16] >> And you basically just gave me one login.

[01:06:18] >> I gave you access to my Mercury platform, which is the banking platform that I was using at the time to run the podcast.

[01:06:24] >> And so I logged into Mercury assuming that that would just be the first of many steps, but I realized that was how you were running the entire business. Even down to... a lot of our editors are international contractors, and so you had just figured out how to set up these recurring payments to set up basic payroll.

[01:06:39] >> I mean, Mercury made the experience of all of these things I was doing before so seamless that it didn't even occur to me, until you pointed it out, that this is not the natural way to set up payroll, or invoicing, or any of these other things.

[01:06:49] >> I was surprised, but I was like, it's worked so far, so maybe I'll trust it. And now I can't think of doing anything else.

[01:06:56] >> All right, you heard him. Visit mercury.com to apply online in minutes. Cool. Thanks, Max.

[01:07:02] >> Thanks for having me.

[01:07:02] >> Dude, you're great at this. I'm so nervous, but thank you.

[01:07:05] >> Mercury is a financial technology company, not a bank. Banking services provided through Choice Financial Group, Column N.A., and Evolve Bank & Trust, members FDIC.

[01:07:12] People have proposed different ways of charting how much

[01:07:18] different ways of charting how much progress we've made towards full AGI

[01:07:20] progress we've made towards full AGI because if you can come up with some

[01:07:22] because if you can come up with some line, then you can see where that line

[01:07:24] line, then you can see where that line intersects with AGI and where that would

[01:07:26] intersects with AGI and where that would happen on the X-axis. And so people have

[01:07:28] happen on the X-axis. And so people have proposed, oh, it's like the education

[01:07:29] proposed, oh, it's like the education level, like we had a high schooler and

[01:07:30] level, like we had a high schooler and then then they went to college with RL

[01:07:33] then then they went to college with RL and they're going to get a PhD. I don't

[01:07:34] and they're going to get a PhD. I don't like that one.

[01:07:34] like that one. >> Um or and then they'll propose horizon

[01:07:36] >> Um or and then they'll propose horizon length. So maybe they can do tasks that

[01:07:38] length. So maybe they can do tasks that take a minute. Uh they can do those

[01:07:41] take a minute. Uh they can do those autonomously, then they can autonomously

[01:07:42] autonomously, then they can autonomously do tasks that take an hour, a human an

[01:07:44] do tasks that take an hour, a human an hour, a human a week, etc.

[01:07:46] hour, a human a week, etc. >> How do you think about what is the

[01:07:48] >> How do you think about what is the relevant um y-axis here? What is the how

[01:07:51] relevant um y-axis here? What is the how should we think about how AI is making

[01:07:53] should we think about how AI is making progress?

[01:07:54] >> So I guess I have two answers to that. Number one, I'm almost tempted to reject the question entirely, because, again, I see this as an extension of computing. How do you chart progress in computing since the 1970s or whatever? What is the x-axis? So I kind of feel like the whole question is a little funny from that perspective. But I will say, when people talk about AI, and the original AGI, and how we spoke about it when OpenAI started:

[01:08:18] >> AGI was a system that can do any economically valuable task at human performance or better.

[01:08:29] >> Okay. So that was the definition, and I was pretty happy with it at the time, and I feel like I've stuck to that definition forever. People have since made up all kinds of other definitions, but I like that one. Now, number one, the first concession that people make all the time is that they just take out all the physical stuff, because we're just talking about digital knowledge work. I feel like that's a pretty major concession compared to the original definition, which was any task a human can do. I can lift things, etc. AI obviously can't do that. So, okay, but we'll take it.

[01:08:57] What fraction of the economy are we taking away by saying, "Oh, only knowledge work"? I don't actually know the numbers; I feel like it's about 10 to 20%, if I had to guess, that is only knowledge work, where someone could work from home and perform the tasks, something like that. I still think it's a really large market. What is the size of the economy, and what is 10 to 20% of it? We're still talking about a few trillion dollars of work, even just in the US.

[01:09:25] So, still a very massive bucket. But going back to the definition, I guess what I would be looking for is: to what extent is that definition true? Are there jobs, or lots of tasks... if we think of tasks rather than jobs, it's kind of difficult, because the problem is that society will refactor which tasks make up jobs based on what's automatable or not. But today, what jobs are replaceable by AI? A good example recently was Geoff Hinton's prediction that radiologist would not be a job anymore, and this turned out to be very wrong in a bunch of ways. Radiologists are alive and well and growing, even though computer vision is really, really good at recognizing all the different things that they have to recognize. It's just a messy, complicated job with a lot of surfaces, and dealing with patients, and all this kind of stuff in the context of it. So I guess I don't actually know that, by that definition, AI has made a huge dent yet. But some of the jobs that I would be looking for have features that I think make them amenable to automation earlier rather than later. As an example, call center employees often come up, and I think rightly so, because call center employees have a number of simplifying properties with respect to what's automatable today. Their jobs are pretty simple. It's a sequence of tasks, and every task looks similar: you take a phone call with a person; it's 10 minutes of interaction or whatever it is (probably a bit longer in my experience, a lot longer); you complete some task in some scheme; and you change some database entries around, or something like that. You keep repeating something over and over again, and that's your job. So basically, you do want to bring in the task horizon: how long it takes to perform a task.

[01:11:02] >> And then you also want to remove context: you're not dealing with different parts or services of companies, or other customers. It's just the database, you, and the person you're serving. So it's more closed, it's more understandable, and it's purely digital. So I would be looking for those things. But even there, I'm not actually looking for full automation yet. I'm looking for an autonomy slider, and I almost expect that we are not going to instantly replace people. We're going to be swapping in AIs that do 80% of the volume. They delegate 20% of the volume to humans, and humans are supervising teams of five AIs doing the call center work that's more rote. So I would be looking for new interfaces, or new companies that provide some kind of layer that allows you to manage some of these AIs that are not yet perfect.

[01:11:46] >> Yeah.

[01:11:47] >> And then I would expect that across the economy. And a lot of jobs are a lot harder than call center employee.

[01:11:51] >> I wonder, with radiologists... I'm totally speculating; I have no idea what the actual workflow of a radiologist involves.

[01:11:58] radiologist involves, >> but one analogy that might be applicable

[01:12:01] >> but one analogy that might be applicable is um when we were first being ruled

[01:12:05] is um when we were first being ruled out, there would be a person sitting in

[01:12:06] out, there would be a person sitting in the front seat

[01:12:08] the front seat >> and you just had to have them there to

[01:12:10] >> and you just had to have them there to make sure that if something went really

[01:12:11] make sure that if something went really wrong, they're there to monitor. And I

[01:12:12] wrong, they're there to monitor. And I think even today, people are still

[01:12:13] think even today, people are still watching to make sure things are going

[01:12:14] watching to make sure things are going well. Um Robo Taxi, who was just

[01:12:16] well. Um Robo Taxi, who was just deployed, actually still has a person

[01:12:18] deployed, actually still has a person inside it. And we we could be in a

[01:12:20] inside it. And we we could be in a similar situation where if you automate

[01:12:23] similar situation where if you automate 99% of a job, that last 1% the human has

[01:12:25] 99% of a job, that last 1% the human has to do is incredibly valuable because

[01:12:28] to do is incredibly valuable because it's bottlenecking everything else. And

[01:12:29] it's bottlenecking everything else. And if it end had if it was the case with

[01:12:31] if it end had if it was the case with like with radiologists where the person

[01:12:32] like with radiologists where the person sitting in the front of the Uber or the

[01:12:34] sitting in the front of the Uber or the front of the Whimo has to be specially

[01:12:35] front of the Whimo has to be specially trained for years in order to be able to

[01:12:37] trained for years in order to be able to provide the last 1%. Their wages should

[01:12:39] provide the last 1%. Their wages should go go up tremendously because they're

[01:12:40] go go up tremendously because they're like the one the one thing bottlenecking

[01:12:42] like the one the one thing bottlenecking wide deployment. So radiologists I think

[01:12:44] wide deployment. So radiologists I think their wages have gone up for similar

[01:12:45] their wages have gone up for similar reasons. if you're like the last

[01:12:46] reasons. if you're like the last bottleneck, you should you're like and

[01:12:48] bottleneck, you should you're like and you're not funible, which like you know

[01:12:50] you're not funible, which like you know a way driver might be fungeable with

[01:12:51] a way driver might be fungeable with other things. Um so you might see this

[01:12:54] other things. Um so you might see this thing where like your wages go like

[01:12:56] thing where like your wages go like >> and until you get to 90% and then like

[01:12:57] >> and until you get to 90% and then like just like that

[01:12:58] just like that >> and when the last 1% is gone.

[01:13:00] >> and when the last 1% is gone. >> I see.

[01:13:02] >> And I wonder if we'll see similar things with radiology, or the salaries of call center workers, or anything like that.

[01:13:06] >> Yeah, I think that's an interesting question. I don't think we're currently seeing that with radiology, and this isn't within my understanding, but I think radiology is basically not a good example. I don't know why Geoff Hinton picked on radiology, because I think it's an extremely messy, complicated profession.

[01:13:24] >> Yeah.

[01:13:24] >> So I would be a lot more interested in what's happening with call center employees today, for example, because I would expect a lot of the rote stuff to be automatable today,

[01:13:32] >> and I don't have first-level access to it, but I would be looking for trends in what's happening with call center employees. One thing I would also expect is that maybe they are swapping in AI, but then I would still wait for a year or two, because I would potentially expect them to pull back and actually rehire some of the people.

[01:13:49] >> I think there's been evidence that that's already been happening, generally, in companies that have been adopting AI, which I think is quite surprising. And here's what I also found really surprising.

[01:13:57] Okay, AGI, right: a thing which should be able to do everything. Okay, we'll take out physical work, so it should be able to do all knowledge work. And what you would have naively anticipated is that the progression would happen like this: you take a little task that a consultant is doing, and you take that out of the bucket. You take a little task that an accountant is doing, and you take that out of the bucket. And you're just doing this across all knowledge work. But instead, if we do believe we're on the path to AGI with the current paradigm, the progression is very much not like that. At least,

[01:14:30] >> it just does not seem like consultants and accountants or whatever are getting huge productivity improvements. It's very much like

[01:14:35] >> programmers are getting more and more of their work taken off their plate. If you look at the revenues of these companies, discounting normal chat revenue (which I think is, I don't know, similar to Google or something), and just look at API revenues, it's dominated by coding, right? So this thing which is "general," quote unquote, and should be able to do any knowledge work, is just overwhelmingly doing only coding. And that's a surprising way for you to expect the AGI to be deployed.

[01:15:02] >> So I think there's an interesting point here, because I do believe coding is, like, the perfect first thing for these LLMs and agents, and that's because coding has always fundamentally worked around text.

[01:15:17] >> It's computer terminals and text; everything is based around text, and LLMs, the way they're trained on the internet, love text,

[01:15:24] >> and so they're perfect text processors, and there's all this data out there, and it's just a perfect fit. And also, we have a lot of infrastructure pre-built for handling code and text. So, for example, we have Visual Studio Code, or your favorite IDE, showing you code, and an agent can plug into that. For example, if an agent has a diff where it made some change, we suddenly already have all this code that shows the differences to a codebase using a diff. It's almost like we've pre-built a lot of the infrastructure for code. Now contrast that with some of the things that don't enjoy that at all. As an example, there are people trying to build automation not for coding but, for example, for slides. I saw a company doing slides; that's much, much harder, and the reason it's much, much harder is because slides are not text.

[01:16:09] >> Yeah.

[01:16:09] >> Slides are little graphics, and they're arranged spatially, and there's a visual component to them, and slides don't have this pre-built infrastructure. For example, if an agent is to make a change to your slides, how does the thing show you the diff? How do you see the diff? There's nothing that shows diffs for slides. Mhm.

[01:16:27] >> So someone has to build it. Some of these things are just not amenable to AIs as they are, which is text processors, and code, surprisingly, is.

[01:16:37] >> I'm actually not sure that alone explains it, because I personally have tried to get LLMs to be useful in domains which are just pure language in, language out. Like rewriting transcripts, like coming up with clips based on transcripts, etc. And you might say, well, it's very plausible that I didn't do every single possible thing I could do; I put a bunch of, you know, good examples in context, but maybe I should have done some kind of fine-tuning, whatever. So, our mutual friend Andy Matuschak told me that he actually tried 50 billion things to try to get models to be good at writing spaced repetition prompts. Again,

[01:17:16] >> a very much language-in, language-out task. The kind of thing that should be dead center in the repertoire of these LLMs. And he tried in-context learning, obviously, with few-shot examples. He tried, I think he told me, a bunch of things like supervised fine-tuning and, you know, retrieval, whatever, and he just could not get them to make cards to his satisfaction. So I find it striking that even in language-in, language-out domains,

[01:17:40] >> it's actually very hard to get a lot of economic value out of these models

[01:17:44] >> separate from coding. And I don't know what explains it.

[01:17:47] >> Yeah, I think that makes sense. I mean, I would say: I'm not saying that anything text is trivial, right? I do think that code is pretty structured. Text is maybe a lot more flowery, and there's a lot more, like,

[01:18:03] >> entropy in text, I would say. I don't know how else to put it.

[01:18:06] >> And also, I mean, code is hard, and so people feel quite empowered by LLMs even from fairly simple kinds of knowledge. Basically, I don't actually know that I have a very good answer. I mean, obviously the fact that it's text makes it much, much easier, is maybe how I'd put it, but that doesn't mean that all text is trivial.

[01:18:24] >> Mhm. How do you think about superintelligence? Do you expect it to feel qualitatively different from normal humans or human companies?

[01:18:35] >> I guess I see it as a progression of the automation in society, right, and again, like, extrapolating the trend of computing. I just feel like there will be a gradual automation of a lot of things, and superintelligence will be sort of the extrapolation of that. So I do think we should expect more and more autonomous entities over time that are doing a lot of the digital work, and then eventually even the physical work, probably some amount of time later. But basically, I see it as just automation,

[01:18:59] >> roughly speaking.

[01:19:00] >> I guess automation includes the things humans can already do, and superintelligence the things humans can't.

[01:19:04] >> Well, but some of the things that people do is invent new things, which I would just put under automation, if that makes sense.

[01:19:09] Yeah. But, I guess, maybe less abstractly and more, sort of, qualitatively:

[01:19:17] >> do you expect it to feel like, okay, because this thing can either think so fast, or has so many copies, or the copies can merge back into themselves, or is, quote unquote, much smarter, or any number of advantages an AI might have, the civilization in which these AIs exist will just feel qualitatively different from human civilization?

[01:19:39] >> I think it will. I mean, it is fundamentally automation, but it will be, like, extremely foreign. I do think it will look really strange, because, like you mentioned, we can run all of this on a computer cluster, etc., and much faster, and all of that. Yeah, I mean, maybe some of the scenarios, for example, that I start to get nervous about, with respect to when the world looks like that, involve this kind of gradual loss of control and understanding of what's happening. And I think that's actually the most likely outcome, probably: that there will be a gradual loss of understanding,

[01:20:06] >> and we'll gradually layer all this stuff everywhere, and there will be fewer and fewer people who understand it, and there will be sort of this scenario of a gradual loss of control and understanding of what's happening. That, to me, seems the most likely outcome of how all this stuff will go down.

[01:20:20] >> Let me probe on that a bit. It's not clear to me that loss of control and loss of understanding are the same thing.

[01:20:27] >> A board of directors at, like, whatever, TSMC, Intel, name a random company:

[01:20:34] >> they're just, like, prestigious 80-year-olds. They have very little understanding, and maybe they don't practically actually have control, but

[01:20:42] >> or actually, maybe a better example is the President of the United States.

[01:20:46] >> The President has a lot of [ __ ] power. I'm not trying to make a statement about the current occupant, but maybe I am. But the actual level of understanding is very different from the level of control.

[01:20:56] >> Yeah, I think that's fair. That's a good pushback. I guess I expect loss of both.

[01:21:04] >> Yeah.

[01:21:05] >> How come? I mean, loss of understanding is obvious, but why loss of control?

[01:21:09] >> So, we're really far into "I don't know what this looks like" territory, but if I were to write sci-fi novels, they would look along the lines of not even a single entity or something like that, that just sort of takes over everything, but actually multiple competing entities that gradually become more and more autonomous, and some of them go rogue, and the others fight them off, and all this kind of stuff. And it's like this hot pot of

[01:21:35] >> completely autonomous activity that we've delegated to. I kind of feel like it would have that flavor.

[01:21:42] >> It is not the fact that they are smarter than us that results in the loss of control; it's the fact that they are competing with each other, and whatever arises out of that competition leads to the loss of control.

[01:21:54] >> Um, I mean, I basically expect that a lot of these things will be tools to people, and they're acting on behalf of people, or something like that. Maybe those people are in control, but maybe it's a loss of control overall for society, in the sense of the outcomes we want, where you have entities acting on behalf of individuals that are still roughly seen as out of control.

[01:22:19] >> Yeah. Yeah.

[01:22:20] >> This is a question I should have asked earlier. So, we were talking about how, currently, it feels like when you're doing AI engineering or AI research, these models are more in the category of a compiler rather than in the category of a replacement.

[01:22:32] >> Yeah.

[01:22:32] >> At some point, if you have quote-unquote AGI, it should be able to do what you do.

[01:22:36] >> And do you feel like having a million copies of you in parallel results in some huge speedup of AI progress? Basically, if that does happen, would you expect to see an intelligence explosion? Even once we have, not talking about LLMs today, but really...

[01:22:50] >> I guess what I mean is: I do, but it's business as usual, because we're in an intelligence explosion already and have been for decades. When you look at GDP, the GDP curve is basically an exponential, a weighted sum over so many aspects of industry. Everything is gradually being automated, and has been for hundreds of years. The Industrial Revolution was automation in some of the physical components, and the tool building, and all this kind of stuff. Compilers are early software automation, etc. So I kind of feel like we've been recursively self-improving and exploding for a long time. Maybe another way to see it is: Earth was, if you don't look at the biomechanics and so on, a pretty boring place, I think, and looked very similar if you just look from space; Earth is spinning, and then we're in the middle of this, like, firecracker event,

[01:23:35] >> right,

[01:23:35] >> but we're seeing it in slow motion.

[01:23:38] >> I definitely feel like this has already been happening for a very long time, and again, I don't see AI as a distinct technology with respect to what has already been happening for a long time.

[01:23:48] >> You think it's continuous with this hyper-exponential trend?

[01:23:52] >> And that's why this was very interesting to me, because I was trying to find AI in GDP for a while. I thought that GDP should go up, but then I looked at some of the other technologies that I thought were very transformative — maybe computers, or mobile phones, etc. You can't find them in GDP. GDP is the same exponential. Even the early iPhone, for example, didn't have the App Store and didn't have a lot of the bells and whistles that the modern iPhone has. So even though we think of 2008, when the iPhone came out, as some major seismic change, it's actually not. Everything is so spread out, and diffuses so slowly, that everything ends up being averaged into the same exponential. It's the exact same thing with computers. You can't find them in GDP — it's not like, "oh, we have computers now."

[01:24:31] That's not what happened, because it's such a slow progression. And with AI we're going to see the exact same thing. It's just more automation. It allows us to write different kinds of programs that we couldn't write before. But AI is still fundamentally a program; it's a new kind of computer and a new kind of computing system, but it has all these problems. It's going to diffuse over time, it's still going to add up to the same exponential, and we're still going to have an exponential that gets extremely vertical — and it's going to be very foreign to live in that kind of an environment.
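
The "averaged into the same exponential" claim above can be sketched numerically. The model below is purely illustrative — the adoption rates, launch spacing, and technology sizes are invented for the sketch, not taken from the conversation: each technology diffuses along its own slow S-curve, a new one arrives every decade, each bigger than the last, yet the aggregate growth rate stays smooth with no visible jump when any single technology lands.

```python
import math

def adoption(t, start, size, rate=0.15):
    """Logistic diffusion of one technology, midpoint ~30 years after launch."""
    return size / (1 + math.exp(-rate * (t - start - 30)))

# A new technology launches every decade; sizes compound ~3%/year,
# so each wave is ~34% bigger than the previous one.
techs = [(decade * 10, 1.03 ** (decade * 10)) for decade in range(30)]

def output(t):
    # Aggregate output: a baseline plus every technology's adoption so far.
    return 1.0 + sum(adoption(t, start, size) for start, size in techs)

# Year-over-year growth of the aggregate, sampled mid-history.
growth = [output(t + 1) / output(t) - 1 for t in range(100, 200)]
print(f"aggregate growth stays within {min(growth):.3f}..{max(growth):.3f}")
```

Zoomed in, each technology is a sharp S-curve; zoomed out, the sum is one smooth exponential — which is the sense in which the iPhone or computers "can't be found" in GDP.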

[01:24:58] >> Are you saying that what will happen is — if you look at the trend from before the industrial revolution to the present, you have a hyper-exponential: you go from roughly 0% growth, to 0.02% growth starting 10,000 years ago, to 2% growth currently. That's a hyper-exponential, and you're saying that if you chart AI on there, AI takes you to 20% growth or 200% growth.

[01:25:20] Or you could be saying: if you look at the last 300 years, what you've been seeing is technology after technology — computers, electrification, steam engines, railways, etc. — but the rate of growth is the exact same: it's 2%. So are you saying the rate of growth will—

[01:25:35] >> I expect this — the rate of growth has also stayed roughly constant, right?

[01:25:39] >> Only for the last 200–300 years. Over the course of human history it's exploded, right? It's gone from basically 0% to faster and faster — industrial explosion — to 2%.

[01:25:49] >> Basically, I guess what I'm saying is that for a while I tried to find AI in the GDP curve, and I've kind of convinced myself that this is false — and that even when people talk about recursive self-improvement in the labs and stuff like that, this is business as usual. Of course it's going to recursively self-improve — it's been recursively self-improving. LLMs allow the engineers to work much more efficiently to build the next round of LLMs, and a lot more of the components are being automated and tuned, etc. All the engineers having access to Google search is sort of part of it. All the engineers having an IDE, all of them having autocomplete or Claude Code, etc. — it's all just part of the same speedup of the whole thing. So it's just so smooth.

[01:26:30] >> But just to clarify: you're saying the rate of growth will not change — the intelligence explosion will show up as something that just enabled us to continue staying on the 2% growth trajectory, just as the internet helped us stay on the 2% growth trajectory.

[01:26:42] >> Yeah. My expectation is that it stays the same pattern.

[01:26:45] >> Just to throw the opposite argument at you: my expectation is that it blows up, because I think true AGI — and I'm not talking about LLM coding bots, I'm talking about an actual replacement for a human, in a server — is qualitatively different from these other productivity-improving technologies, because it's labor itself, right? I think we live in a very labor-constrained world. If you talk to any startup founder, you can just ask: what do you need more of? You just need really talented people. And if you have billions of extra people who are inventing stuff, integrating themselves, making companies from start to finish, that feels qualitatively different from a single technology. It's sort of like asking what happens if you get 10 billion extra people on the planet.

[01:27:33] >> Maybe a counterpoint — though, number one, I'm actually pretty willing to be convinced one way or another on this point. But I will say: computing is labor. Computing was labor. A lot of jobs disappeared because computers automated a bunch of digital information processing that you no longer need a human for. So computers are labor, and that has played out. And self-driving, as an example, is also computers doing labor. So I guess that's already been playing out. It's still business as usual.

[01:28:02] >> Yeah, I guess you have a machine which is just spitting out more things like that, at a potentially faster pace. And historically we have examples of the growth regime changing, where you went from, you know, 0.2% growth to 2% growth. So it seems very plausible to me that a machine which is then spitting out the next self-driving car, the next internet, and whatever—

[01:28:23] >> I kind of see where that's coming from. At the same time, I do feel like people make this assumption of, okay, we have God in a box and now it can do everything — and it just won't look like that. It's going to be able to do some of the things; it's going to fail at some other things; it's going to be gradually put into society; and basically we'll end up with the same pattern — that's my prediction. Because this assumption of suddenly having a completely intelligent, fully flexible, fully general human in a box that we can dispense arbitrary problems to in society — I don't think we will have that discrete change, and so I think we'll arrive at the same kind of gradual diffusion of this across the industry.

[01:29:02] >> I think what often ends up being misleading in these conversations is — I don't like to use the word "intelligence" in this context, because intelligence implies there will be a single superintelligence sitting in a server that will divine how to come up with the new technologies and inventions that cause this explosion. That's not what I'm imagining when I'm imagining 20% growth. I'm imagining billions of basically very smart human minds — potentially that's all that's required. But the fact that there are hundreds of millions of them, billions of them, each individually making new products and figuring out how to integrate themselves into the economy — just as, if a highly experienced, smart immigrant came to the country, you wouldn't need to figure out how to integrate them into the economy. They'd figure it out: they could start a company, make inventions, or just increase productivity in the world. And we have examples even in the current regime of places that have had 10–20% economic growth. If you just have a lot of people, and less capital in comparison to the people, you can get Hong Kong or Shenzhen or wherever — decades of 10%-plus growth. There are a lot of really smart people ready to make use of the resources and go through this period of catch-up, because we've had this discontinuity. And I think it might be similar.

[01:30:21] >> I think I understand, but I still think you're presupposing some discrete jump — some unlock that we're waiting to claim, and suddenly we're going to have geniuses in data centers. I still think you're presupposing a discrete jump that has basically no historical precedent, that I can't find in any of the statistics, and that I think probably won't happen.

[01:30:41] >> I mean, the industrial revolution is such a jump, right? You went from 0% — or 0.2% — growth to 2% growth. I'm just saying you'll see another jump like that.

[01:30:48] >> I'm a little bit suspicious — I would have to look at it. For example, maybe some of the logs from before the industrial revolution are not very good, or something like that. So I'm a little bit suspicious of it, but yeah, maybe you're right. I don't have strong opinions.

[01:31:04] >> Maybe you're saying that this was a singular event that was extremely magical, and that there's going to be another event just like that — extremely magical, paradigm-breaking, and so on.

[01:31:12] >> I actually don't think— I mean, the crucial thing about the industrial revolution was that it was not magical, right? If you just zoomed in on what you would see in 1770 or 1870, it's not that there was some key invention.

[01:31:27] >> Yeah, exactly. But at the same time, you did move the economy to a regime where the progress was much faster—

[01:31:34] >> And the exponential 10x'd. And I expect a similar thing from AI, where it's not like there's going to be a single moment where we made the crucial—

[01:31:42] >> There's an overhang that's being unlocked — like maybe there's a new energy source. There's some unlock, in this case some kind of cognitive capacity, and there's an overhang of cognitive work to do. That's right.

[01:31:51] >> And you're expecting that overhang to be filled by this new technology when it crosses the threshold.

[01:31:55] >> Yeah. And maybe one way to think about it is: through history, growth comes because people come up with ideas, and then people are out there doing stuff to execute those ideas and make valuable output. Through most of this time the population is exploding — that has been driving growth. For the last 50 years people have argued that growth has stagnated, and population in frontier countries has also stagnated. I think we go back to the hyper-exponential growth in population and output — sorry, exponential growth in population, which causes hyper-exponential growth in output.
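
That last mechanism — exponential population growth producing hyper-exponential output growth — can be sketched with a toy idea-driven growth model. The functional form (output growth rate proportional to headcount) and every constant here are illustrative assumptions, not anything claimed in the conversation:

```python
# Toy model: more people -> more ideas -> a higher *growth rate* of output.
POP_GROWTH = 0.01          # population grows 1%/year (plain exponential)
IDEAS_PER_PERSON = 1e-5    # arbitrary scale: growth rate contributed per person

pop, out = 1_000.0, 1.0
rates = []
for year in range(500):
    rate = IDEAS_PER_PERSON * pop   # output growth rate tracks population
    rates.append(rate)
    out *= 1 + rate
    pop *= 1 + POP_GROWTH

# Because the growth rate itself grows exponentially, output is
# hyper-exponential: log(output) curves upward instead of being a line.
print(f"output growth rate went from {rates[0]:.2%} to {rates[-1]:.2%}")
```

Hold `pop` constant instead and `rates` stays flat — output falls back to a plain exponential, which is the "growth stagnates when frontier population stagnates" point.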

[01:32:27] >> Yeah. I mean, it's really hard to tell. I understand that viewpoint; I don't intuitively feel that viewpoint.

[01:32:34] >> So, we just got access to Google's Veo 3.1, and it's been really cool to play around with. The first thing we did was run a bunch of prompts through both Veo 3 and 3.1 to see what's changed in the new version. So, here's Veo 3.

[01:32:49] >> Hi, I'm Max, and I got stuck in a local minimum again.

[01:32:53] >> It's okay, Max. We've all been there. Took me three epochs to get out.

[01:32:57] >> And here is Veo 3.1.

[01:32:59] >> Hi, I'm Max, and I got stuck in a local minimum again.

[01:33:03] >> It's okay, Max. We've all been there. Took me three epochs.

[01:33:07] >> 3.1's output is just consistently more coherent, and the audio is noticeably higher quality. We've been using Veo for a while now. Actually, we released an essay earlier this year about AI firms, fully animated by Veo 2, and it's been amazing to see how fast these models are improving. This update makes Veo even more useful for animating our ideas and our explainers. You can try Veo right now in the Gemini app with Pro and Ultra subscriptions. You can also access it through the Gemini API or through Google Flow.

[01:33:37] >> You recommended Nick Lane's book to me, and on that basis I also found it super interesting and interviewed him. So I actually have some questions about thinking about intelligence and evolutionary history. Now that, over the last 20 years of doing AI research, you maybe have a more tangible sense of what intelligence is and what it takes to develop it — are you more or less surprised, as a result, that evolution just sort of spontaneously stumbled upon it?

[01:34:07] >> I love Nick's books, by the way. I was just listening to his podcast on the way up here. With respect to intelligence and its evolution — I do claim it came fairly recently. I mean, it's very, very recent, right? I am surprised that it evolved. I find it fascinating to think about all the worlds out there — say there are a thousand planets like Earth — and what they look like. I think Nick Lane was here talking about some of the early parts, right?

[01:34:30] >> Okay — he expects basically very similar life forms, roughly speaking: bacteria-like things on most of them.

[01:34:36] >> Yeah. And then there are a few breaks in there. I would expect that the evolution of intelligence intuitively feels like it should be a fairly rare event. And there have been animals for — I guess maybe you should base it on how long something has existed. For example, bacteria were around for two billion years and nothing happened, so going to eukaryotes is probably pretty hard, because bacteria actually came up quite early in Earth's history. And how long have we had animals — multicellular animals that run, crawl, etc.? Maybe a couple hundred million years, which is maybe 10% of Earth's lifespan or something like that. So maybe on that time scale it's actually not too tricky. I still feel like it's surprising to me, intuitively, that it developed. I would maybe expect just a lot of animal-like life forms doing animal-like things. The fact that you can get something that creates culture and knowledge and accumulates it — that is surprising to me.

[01:35:29] >> Okay, so there are actually a couple of interesting follow-ups. If you buy this Sutton perspective that the crux of intelligence is actually animal intelligence — the quote was that if you got to the squirrel, you'd be most of the way to AGI — then we got to squirrel intelligence, I guess, right after the Cambrian explosion, 600 million years ago. It seems like what instigated that was the oxygenation event 600 million years ago, but the intelligence algorithm was immediately there to produce squirrel intelligence, right? So it's suggestive that, as soon as you had the oxygen in the environment, you could just get the algorithm. Maybe it was sort of an accident that evolution stumbled on it so fast, but I don't know if that suggests it's actually going to be quite simple in the end.

[01:36:20] >> Yes. Basically, it's so hard to tell, right, with any of this stuff. I guess you can base it a little bit on how long something has existed, or how long it feels like something has been bottlenecked. So there's this very apparent, well-described bottleneck in bacteria: ages of extreme diversity of biochemistry, and yet nothing that grows to become

[01:36:40] >> animals. Two billion years. Um, I don't know that we've seen exactly that kind of an equivalent with animals and intelligence, to your point, right? But I guess maybe we could also look at it with respect to how many times we think intelligence has individually sprung up.

[01:36:55] >> That's a really good thing to investigate.

[01:36:58] >> Maybe one thought on that is, I almost feel like, well, there's the hominid intelligence, and there's, I would say, the bird intelligence, right? Ravens etc. are extremely clever, but their brain parts are actually quite distinct, and we don't have that much shared ancestry. So maybe that's a slight indication of intelligence springing up a few times, and in that case you'd maybe expect it more frequently, or something like that.

[01:37:21] >> Yeah. Former guests Gwern and also Carl Shulman have made a really interesting point about that, which is, their perspective is that the scalable algorithm which humans have and primates have

[01:37:35] >> arose in birds as well,

[01:37:37] >> and maybe other times as well. But in humans it found an evolutionary niche which rewarded marginal increases in intelligence,

[01:37:46] >> um, and also had a scalable brain algorithm that could achieve those increases in intelligence.

[01:37:52] >> And so, for example, if a bird had a bigger brain, it would just collapse out of the air. So it's very smart for the size of its brain, but it's not in a niche which rewards the brain getting bigger.

[01:38:03] >> Um, yeah. Maybe similar with some really smart dolphins, etc.

[01:38:06] >> Exactly. Yeah. Whereas humans, you know, we have hands that reward being able to learn how to do tool use. We can externalize digestion, more energy to the brain,

[01:38:15] >> and that kicks off the flywheel.

[01:38:17] >> Oh, yeah. And just stuff to work with. I mean, I'm guessing it would be harder if I were a dolphin. You can't have fire, for example, and stuff like that. The universe of things you can do in water is probably smaller than what you can do on land, just chemically,

[01:38:32] >> right? Yeah, I do agree with this viewpoint of these niches and what's being incentivized. I still find it kind of miraculous. I would have maybe expected things to get stuck on, like, animals with bigger muscles, you know?

[01:38:46] >> Yeah.

[01:38:46] >> Going through intelligence is actually a really fascinating breaking point. The way Gwern put it is, the reason it was so hard is that there's a very tight line between being in a situation where something is so important to learn that it's just worth distilling the exact right circuits directly back into your DNA, versus it's not important enough to learn at all.

[01:39:09] >> Yeah.

[01:39:10] >> It has to be something where you're incentivized to build the algorithm to learn within a lifetime.

[01:39:16] >> Yeah, exactly. You have to incentivize some kind of adaptability. You actually want environments that are unpredictable, so evolution can't bake your algorithms into your weights. A lot of animals are basically pre-baked in this sense, and so humans have to figure it out at test time, when they get born. And so maybe you actually want these kinds of environments that change really rapidly, or something like that, where you can't foresee what will work well, and so you create intelligence to figure it out at test time. Uh, so Quintin Pope had

[01:39:46] at test time. Uh so Quentyn Pope had this interesting blog post where he's

[01:39:47] this interesting blog post where he's saying the Brazilian doesn't expect a

[01:39:48] saying the Brazilian doesn't expect a sharp takeoff is um the so humans had

[01:39:52] sharp takeoff is um the so humans had the sharp takeoff where 60,000 years ago

[01:39:54] the sharp takeoff where 60,000 years ago we seem to have had the cognitive

[01:39:56] we seem to have had the cognitive architectures that we have today

[01:39:58] architectures that we have today >> and 10,000 years ago agricultural

[01:40:00] >> and 10,000 years ago agricultural revolution modernity dot dot dot. What

[01:40:02] revolution modernity dot dot dot. What was happening in that 50,000 years?

[01:40:04] was happening in that 50,000 years? >> Well, you had to build this sort of like

[01:40:06] >> Well, you had to build this sort of like cultural scaffold where you can

[01:40:08] cultural scaffold where you can accumulate knowledge over generations.

[01:40:11] accumulate knowledge over generations. This is an ability that exists for free

[01:40:14] This is an ability that exists for free in the way we do AI training where if

[01:40:16] in the way we do AI training where if you retrain a model it can still I mean

[01:40:19] you retrain a model it can still I mean in many cases they're literally

[01:40:20] in many cases they're literally distilled but they can be trained on

[01:40:21] distilled but they can be trained on each other you know they can be trained

[01:40:22] each other you know they can be trained on the premium pre-training corpus um

[01:40:25] on the premium pre-training corpus um they don't literally have to start from

[01:40:26] they don't literally have to start from scratch so there's a sense in which the

[01:40:29] scratch so there's a sense in which the thing which it took humans a long time

[01:40:31] thing which it took humans a long time to get this cultural loop going just

[01:40:33] to get this cultural loop going just comes for free with the way we do LLM

[01:40:35] comes for free with the way we do LLM training. Um, yes and no because LMs

[01:40:37] >> Um, yes and no, because LLMs don't really have the equivalent of culture, and maybe we're giving them too much and incentivizing them not to create it, or something like that. But the notion of culture, of a written record, of passing down notes between each other: I don't think there's an equivalent of that with LLMs right now. So LLMs don't really have culture right now, and it's one of the impediments, I would say.

[01:40:56] >> Can you give me some sense of what LLM culture might look like?

[01:40:58] >> Uh, so in the simplest case, it would be a giant scratchpad that the LLM can edit. And as it's reading stuff, or as it's helping out with work, it's editing the scratchpad for itself.

[01:41:08] >> Why can't an LLM write a book for the other LLMs? That would be cool.

[01:41:12] >> Yeah.

[01:41:12] >> Why can't other LLMs read this LLM's book and be inspired by it, or shocked by it, or something like that? There's no equivalent for any of this stuff.
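The giant-scratchpad idea can be caricatured in a few lines. Everything below is hypothetical illustration, not an existing system: the `Scratchpad` class, file path, and method names are invented, and in a real setup an LLM agent, not a script, would be calling these methods as it works.

```python
import json
from pathlib import Path

class Scratchpad:
    """Toy sketch of a persistent store an LLM agent could edit as it works,
    so lessons accumulate across sessions (and could be read by other models)
    instead of vanishing when the context window resets."""

    def __init__(self, path):
        self.path = Path(path)
        self.notes = json.loads(self.path.read_text()) if self.path.exists() else []

    def add_note(self, topic, lesson):
        # An agent would call this whenever it learns something worth
        # passing on to future runs.
        self.notes.append({"topic": topic, "lesson": lesson})
        self.path.write_text(json.dumps(self.notes, indent=2))

    def recall(self, topic):
        # A later session (or a different model) retrieves the accumulated notes.
        return [n["lesson"] for n in self.notes if n["topic"] == topic]

demo = Path("/tmp/llm_scratchpad_demo.json")
demo.unlink(missing_ok=True)  # start the demo from a clean slate
pad = Scratchpad(demo)
pad.add_note("deploy", "run migrations before restarting the service")
print(pad.recall("deploy"))  # ['run migrations before restarting the service']
```

The "book for other LLMs" version is the same idea with the file shared between different models rather than between sessions of one model.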

[01:41:20] >> Interesting. When would you expect that kind of thing to start happening? And a more general question about multi-agent systems and a sort of independent AI civilization and culture.

[01:41:30] >> I think there are two powerful ideas in the realm of multi-agent that have both not really been claimed yet. The first one, I would say, is culture in LLMs: basically a growing repertoire of knowledge for their own purposes.

[01:41:44] >> The second one looks a lot more like the powerful idea of self-play, which in my mind is extremely powerful. Evolution actually is a lot of competition basically driving intelligence and evolution. And in AlphaGo, more algorithmically: AlphaGo is playing against itself, and that's how it learns to get really good at Go. There's no equivalent of self-play in LLMs, but I would expect that to also exist; no one has done it yet. Like, why can't an LLM, for example, create a bunch of problems that another LLM is learning to solve, and then the first LLM is always trying to serve more and more difficult problems, stuff like that, you know?

[01:42:19] >> I think there are a bunch of ways to actually organize it, and I think it's a realm of research, but I haven't seen anything that convincingly claims both of those multi-agent improvements. I still think we're mostly in the realm of a single individual agent, but I also think that will change. And in the realm of culture I would also bucket organizations, and we haven't seen anything like that coming either.

[01:42:41] >> Um, so that's why we're still early.
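The self-play loop described here (one model proposing ever-harder problems for another) can be sketched as a toy simulation. This is not an LLM system: the "solver" is a seeded random process whose success probability rises with practice, and the "proposer" is just a difficulty dial that tracks the solver's frontier; all numbers are arbitrary.

```python
import random

def run_selfplay(rounds=200, seed=0):
    """Toy self-play curriculum: raise difficulty after each solve,
    back off after each failure, so problems stay at the solver's frontier."""
    rng = random.Random(seed)
    difficulty, skill = 1, 1.0
    for _ in range(rounds):
        # Stand-in for "proposer LLM serves a problem, solver LLM attempts it":
        solved = rng.random() < skill / difficulty
        if solved:
            skill += 0.1                          # solver "trains" on the solve
            difficulty += 1                       # proposer serves something harder
        else:
            difficulty = max(1, difficulty - 1)   # don't stall; retreat to the frontier
    return difficulty

print(run_selfplay())  # final difficulty; it ratchets upward as skill grows
```

The AlphaGo analogy is the `solved -> harder problem` feedback: the opponent (here, the proposer) improves exactly as fast as the learner does.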

[01:42:44] >> And can you identify the key bottleneck that's preventing this kind of collaboration between LLMs?

[01:42:50] >> Maybe the way I would put it is: somehow, remarkably (again, some of these analogies work and they shouldn't, but somehow, remarkably, they do), the smaller, dumber models resemble a kindergarten student, then an elementary school student, then a high school student, etc. And somehow we still haven't graduated enough for this stuff to take over. It's still mostly, like, my Claude Code or Codex; they still kind of feel like an elementary-grade student. I know that they can take PhD quizzes, but they still cognitively feel like a kindergarten or elementary school student. So I don't think they can create culture, because they're still kids. Um, you know,

[01:43:27] >> they're like savant kids. They have episodic... they have perfect memory of all this stuff, etc. And they can convincingly create all kinds of slop that looks really good.

[01:43:36] >> But I still think they don't really know what they're doing, and they don't really have the cognition across all these little checkboxes that we still have to collect.

[01:43:43] >> Yeah. So, you've talked about how you were at Tesla leading self-driving from 2017 to 2022, and you firsthand saw this progress: we went from cool demos to now thousands of cars out there actually autonomously doing drives. Why did that take a decade? What was happening through that time?

[01:44:02] >> Yeah. So, one thing I will almost instantly push back on is that this is not even near done,

[01:44:10] >> in a bunch of ways that I'm going to get to. I do think self-driving is very interesting, because it's definitely where I get a lot of my intuitions; I spent five years on it. And it has this entire history, where actually the first demos of self-driving go all the way back to the 1980s.

[01:44:25] >> You can see a demo from CMU in 1986: there's a truck driving itself on roads. But okay, fast forward. I think when I was joining Tesla, I had a very early demo of a Waymo, and it basically gave me a perfect drive in 2014 or something like that. So, a perfect Waymo drive a decade ago; it took us around Palo Alto and so on, because I had a friend who worked there. And I thought it was very close, and then it still took a long time. And I do think that for some kinds of tasks and jobs there's a very large demo-to-product gap, where the demo is very easy but the product is very hard. And it's especially the case in domains like self-driving, where the cost of failure is too high, right? Many industries, tasks, and jobs maybe don't have that property, but when you do have that property, it definitely increases the timelines. I do think that, for example, in software engineering that property does exist. For a lot of vibe coding it doesn't, but if you're writing actual production-grade code, that property should exist, because any kind of mistake can lead to a security vulnerability or something like that, and hundreds of millions of people's personal Social Security numbers etc. get leaked. And so I do think that in software people should be careful, kind of like in self-driving. In self-driving, if things go wrong, you might get injured, and I guess there are worse outcomes, but in software I almost feel like it's almost unbounded how terrible some things could be.

[01:45:56] >> Interesting.

[01:45:56] >> Interesting. >> So I do think that they share that

[01:45:58] >> So I do think that they share that property. And then I think basically

[01:45:59] property. And then I think basically what takes the long amount of time and

[01:46:01] what takes the long amount of time and the way to think about it is that it's a

[01:46:04] the way to think about it is that it's a march of nines and every single nine is

[01:46:05] march of nines and every single nine is a constant amount of work. So every

[01:46:08] a constant amount of work. So every single nine is the same amount of work.

[01:46:10] single nine is the same amount of work. So when you get a demo and something

[01:46:12] So when you get a demo and something works 90% of the time, that's just uh

[01:46:14] works 90% of the time, that's just uh that's just uh what the first nine and

[01:46:16] that's just uh what the first nine and then you need the second nine and third

[01:46:17] then you need the second nine and third nine, fourth nine, fifth nine. And while

[01:46:19] nine, fourth nine, fifth nine. And while I was at Tesla for was it five years or

[01:46:20] I was at Tesla for was it five years or so. I think we went through maybe three

[01:46:22] so. I think we went through maybe three nines or two nines. I don't know what it

[01:46:23] nines or two nines. I don't know what it is, you know, but like multiple nines of

[01:46:25] is, you know, but like multiple nines of iteration, there's still more nines to

[01:46:26] iteration, there's still more nines to go. And so that's why these things take

[01:46:28] go. And so that's why these things take take so long. Um, and so it's definitely

[01:46:32] take so long. Um, and so it's definitely formative for me like seeing something

[01:46:33] formative for me like seeing something that was a demo. I'm very unimpressed by

[01:46:35] that was a demo. I'm very unimpressed by demos. Um, so whenever I see demos of

[01:46:38] demos. Um, so whenever I see demos of anything, I'm extremely unimpressed by

[01:46:39] anything, I'm extremely unimpressed by that. Um, it works better if you can um

[01:46:43] that. Um, it works better if you can um if it's a demo that someone cooked up

[01:46:44] if it's a demo that someone cooked up and is just showing you it's worse. If

[01:46:46] and is just showing you it's worse. If you can interact with it, it's a bit

[01:46:47] you can interact with it, it's a bit better. But even then, you're not done.

[01:46:48] better. But even then, you're not done. You need actual product. It's going to

[01:46:50] You need actual product. It's going to face all these challenges in when it

[01:46:52] face all these challenges in when it comes in contact with reality and all

[01:46:53] comes in contact with reality and all these different pockets of behavior that

[01:46:54] these different pockets of behavior that need patching. And so I think we're

[01:46:56] need patching. And so I think we're going to see all this stuff play out.

[01:46:57] going to see all this stuff play out. It's a march of nines. Each nine is

[01:46:59] It's a march of nines. Each nine is constant. Uh demos are encouraging.

[01:47:01] constant. Uh demos are encouraging. Still a huge amount of work to do. Uh I

[01:47:03] Still a huge amount of work to do. Uh I do think it is a um kind of a critical

[01:47:06] do think it is a um kind of a critical safety domain unless you're doing bip

[01:47:08] safety domain unless you're doing bip coding, which is all nice and fun and so

[01:47:10] coding, which is all nice and fun and so on. And uh so that's why I think this

[01:47:12] on. And uh so that's why I think this also enforces my timelines from that

[01:47:14] also enforces my timelines from that perspective. Hm. That's that's very

[01:47:16] perspective. Hm. That's that's very interesting to hear you say that the

[01:47:18] interesting to hear you say that the sort of safety guarantees you need from

[01:47:20] sort of safety guarantees you need from software are actually not dissimilar to

[01:47:22] software are actually not dissimilar to self-driving because what people will

[01:47:23] self-driving because what people will often say is that self-driving took so

[01:47:25] often say is that self-driving took so long because the cost of failure is so

[01:47:29] long because the cost of failure is so high. Like a human makes a mistake on

[01:47:31] high. Like a human makes a mistake on the average every 400,000 miles or every

[01:47:33] the average every 400,000 miles or every seven years. And if you had to release a

[01:47:35] seven years. And if you had to release a coding agent that couldn't make a

[01:47:36] coding agent that couldn't make a mistake for at least seven years, it

[01:47:39] mistake for at least seven years, it would be much harder to deploy. But I

[01:47:42] would be much harder to deploy. But I guess your point is that if you made a

[01:47:43] guess your point is that if you made a catastrophic coding mistake like yeah

[01:47:45] catastrophic coding mistake like yeah >> breaking some important system every

[01:47:47] >> breaking some important system every seven years

[01:47:47] seven years >> very easy to do

[01:47:48] >> very easy to do >> and in fact in terms of sort of wall

[01:47:50] >> and in fact in terms of sort of wall clock time it much it would be much less

[01:47:51] clock time it much it would be much less than seven years because you're like

[01:47:52] than seven years because you're like constantly outputting code like that

[01:47:54] constantly outputting code like that right so like per tokens or in terms of

[01:47:57] right so like per tokens or in terms of tokens it would be seven years but in

[01:47:59] tokens it would be seven years but in terms of wall clock time

[01:48:00] >> In some ways it's a much harder problem. I mean, self-driving is just one of thousands of things that people do; it's almost like a single vertical, I suppose. Whereas when we're talking about general software engineering, there's even more surface. Yeah.

[01:48:11] >> There's another objection people make to that analogy, which is that with self-driving, what took a big fraction of that time was solving the problem of having basic perception that's robust, building representations, and having a model with some common sense, so it can generalize to when it sees something that's slightly out of distribution. If somebody's waving down the road this way, you don't need to train for it; the thing will have some understanding of how to respond to something like that.

[01:48:42] >> And these are things we're getting for free with LLMs or VLMs today. So we don't have to solve these very basic representation problems. And so now, deploying AI across different domains will sort of be like deploying a self-driving car with current models to a different city, which is hard, but not like a 10-year-long task.

[01:48:58] >> Yeah. Basically, I'm not 100% sure I fully agree with that. I don't know how much we're getting for free, and I still think there are a lot of gaps in understanding in what we are getting. I mean, we're definitely getting more generalizable intelligence in a single entity, whereas self-driving is a very special-purpose task. Building a special-purpose thing is maybe even harder in a certain sense, because it doesn't fall out from a more general thing that you're doing at scale, if that makes sense. But I still don't know if the analogy fully resonates, because the LLMs are still pretty fallible, and I still think they have a lot of gaps that need to be filled in. And I don't think we're getting magical generalization completely out of the box, in a certain sense. And the other aspect that I wanted to return to, from the beginning: self-driving cars are nowhere near done, still.

[01:49:47] >> So even though... the deployments are still pretty minimal, right? Even Waymo and so on have very few cars, and roughly speaking that's because they're not economical, right? Because they've built something that lives in the future; they had to pull that future back, but that made it uneconomical. So there are all these costs: not just marginal costs for those cars and their operation and maintenance, but also the capex of the entire thing.

[01:50:12] >> So making it economical is still going to be a slog for them, I think. And then also, I think when you look at these cars and there's no one driving, it's a little bit deceiving, because there are actually very elaborate operations centers with people kind of in the loop with these cars. I don't know the full extent of it, but I think

[01:50:32] >> there's more human in the loop than you might expect, and there are people somewhere out there basically beaming in from the sky.

[01:50:38] >> And I don't actually know that they're fully in the loop with the driving. I think some of the time they are, but they're certainly involved, and there are people. In some sense we haven't actually removed the person; we've moved them to somewhere we can't see them. I still think there will be some work, as you mentioned, going from environment to environment, so I think there are still challenges to making self-driving real. But I do agree that it's definitely crossed a threshold where it kind of feels real, even though it's partly remotely operated. For example, Waymo can't go to all the different parts of the city. My suspicion is it's the parts of the city where you don't get good signal

[01:51:09] >> anyway. So, basically, I don't actually know anything about the stack. I mean, I'm just making it up, making it up.

[01:51:13] >> You led self-driving for 5 years at Tesla.

[01:51:17] >> Sorry, I don't know anything about the specifics of Waymo. I feel bad talking about them.

[01:51:20] >> I actually, by the way, love Waymo, and I take it all the time. So I don't want to say, like...

[01:51:24] >> I just think that people, again, are sometimes a little bit too naive about some of the progress, and I still think there's a huge amount of work,

[01:51:30] >> and I think Tesla took, in my mind, a much more scalable approach, and I think the team is doing extremely well. I'm kind of on the record for predicting how this thing will go, which is that the Waymo-style approach gets a way earlier start because you can package up so many sensors, but I do think Tesla is taking the more scalable strategy, and it's going to look a lot more like that. So I think this will still have to play out, and it hasn't. Basically, I don't want to talk about self-driving as something that took a decade, because it didn't take. It didn't take yet, if that makes sense,

[01:51:59] >> because, one, the start is at 1980, not 10 years ago, and then two, the end is not here yet.

[01:52:04] >> Yeah. The end is not near yet, because when we're talking about self-driving, usually in my mind it's self-driving at scale. Yeah.

[01:52:10] >> People don't have to get a driver's license, etc. I'm curious to bounce off you two other ways in which the analogy might be different. And the reason I'm especially curious about this is that I think the question of how fast AI is deployed, and how valuable it is early on, is potentially the most important question in the world right now, right? Like, if you're trying to model what the year 2030 looks like, this is the question you want to have some understanding of. So another thing you might think is: one, you have this latency requirement with self-driving, where (I have no idea what the actual models are, but I assume tens of millions of parameters or something) which is not a necessary constraint for knowledge work with LLMs, though maybe it might be with computer use and such. But anyway, the other big one, maybe more importantly, is this capex question. Yes, there is additional cost to serving up an additional copy of a model, but the opex of a session

[01:53:09] >> is quite low, and you can amortize the cost of AI into the training run itself, depending on how inference scaling goes and so on, but it's certainly not as much as building a whole new car

[01:53:20] >> to serve another instance of a model. So the economics of deploying more widely

[01:53:27] >> are much more favorable.

[01:53:28] >> I think that's right. I think if you're sticking to the realm of bits, bits are like a million times easier than anything that touches the physical world.

[01:53:35] >> No, I definitely grant that. Bits are completely changeable, arbitrarily reshufflable at very rapid speed. So you would expect a lot more, and faster, adaptation in the industry and so on.

[01:53:48] >> And then, what was the first one?

[01:53:50] >> The latency requirements and their implications for model size.

[01:53:53] >> I think that's roughly right. I mean, I also think that if we are talking about knowledge work at scale, there will be some latency requirements, practically speaking, because we're going to have to create a huge amount of compute in service of that.

[01:54:04] >> And then I think the last aspect that I very briefly want to also talk about is all the rest of it, the...

[01:54:11] >> just all the rest of it. So: what does society think about it? What are the legal ramifications; how is it working legally? How is it working insurance-wise? What are those layers of it and aspects of it? What happens with... what is the equivalent of people putting a cone on a Waymo?

[01:54:28] >> You know, there's going to be an equivalent of all that, and so I almost feel like self-driving is a very nice analogy that you can borrow things from. What is the equivalent of a cone on the car? What is the equivalent of a teleoperating worker who's hidden away? Almost all the aspects of it.

[01:54:44] >> Yeah. Do you have any opinions on whether this implies anything about the current AI buildout, which would 10x the amount of available compute in the world in a year or two, and maybe more than 100x it by the end of the decade? If the use of AI will be lower than some people naively predict, does that mean that we're overbuilding compute, or is that a separate question?

[01:55:07] >> Kind of like what happened with railroads and all this kind of stuff? Sorry.

[01:55:10] >> Was it railroads? Oh, sorry, it was... yeah,

[01:55:12] >> there is historical precedent. Or was it the telecommunications industry, right? Like pre-paving the internet that only came a decade later, you know,

[01:55:19] >> and creating a whole bubble in the telecommunications industry in the late '90s, kind of thing. Yeah.

[01:55:25] >> Um, so, I don't know. I mean, I understand I'm sounding very pessimistic here.

[01:55:30] >> I'm only doing that... I'm actually optimistic. I think this will work. I think it's tractable. I'm only sounding pessimistic because when I go on my Twitter timeline, I see all this stuff that makes no sense to me. And I think there are a lot of reasons why that exists. A lot of it is, I think, honestly just fundraising. It's just incentive structures. A lot of it may be fundraising; a lot of it is just attention, you know, converting attention to money on the internet, stuff like that. So I think there's a lot of that going on, and I'm only reacting to that. But I'm still, overall, very bullish on technology. I think we're going to work through all this stuff, and I think there's been a rapid amount of progress. I don't actually know that there's overbuilding. I think we're going to be able to gobble up what, in my understanding, is being built. Because I do think that, for example, Claude Code or OpenAI Codex and things like that didn't even exist a year ago, right? Is that right? I think that's roughly right. This is miraculous technology that didn't exist. I think there's going to be a huge amount of demand, as we already see with the demand in ChatGPT and so on. So, yeah, I don't actually know that there's overbuilding. But I guess I'm just reacting to some of the very fast timelines that people continue to state incorrectly, which I've heard many, many times over the course of my 15 years in AI, where very reputable people keep getting this wrong all the time.

[01:56:50] And I want this to be properly calibrated. I think some of this also has geopolitical ramifications and things like that, and I don't want people to make mistakes in that sphere of things. So I do want us to be grounded in the reality of what the technology is and isn't. So...

[01:57:08] >> Let's talk about education and Eureka and stuff.

[01:57:11] >> One thing you could do is start another AI lab and then try to solve those problems. Um, yeah, curious what you're up to now.

[01:57:21] >> Yeah.

[01:57:21] >> And then, yeah, why not AI research itself?

[01:57:24] >> I guess maybe the way I would put it is, I feel some amount of determinism around the things that AI labs are doing. And I feel like I could help out there, but I don't know that I would uniquely improve it. But I think my personal big fear is that a lot of this stuff happens off to the side of humanity, and that humanity gets disempowered by it. I care not just about all the Dyson spheres that we're going to build, and that AI is going to build in a fully autonomous way; I care about what happens to humans.

[01:57:57] >> Yeah.

[01:57:57] >> And I want humans to be well off in this future. And I feel like that's where I can add value far more uniquely than through an incremental improvement in a frontier lab. So I guess I'm most afraid of something like what's depicted in movies like WALL-E or Idiocracy, where humanity is sort of off to the side of this stuff. And I want humans to be much, much better off in this future. And I guess, to me, it's through education that you can actually achieve this.

[01:58:24] >> And so what are you working on there?

[01:58:27] >> Oh yeah. So Eureka... maybe the easiest way I can describe it is: we're trying to build the Starfleet Academy.

[01:58:33] >> Um, I don't know if you've watched Star Trek. I haven't. But yeah.

[01:58:35] >> Okay. Starfleet Academy is this elite institution for frontier technology, building spaceships and graduating cadets to be, you know, the pilots of these spaceships and whatnot. So I just imagine an elite institution for technical knowledge, and basically a kind of school that's very up to date, a very premier institution.

[01:58:55] um very upto-date and very like premier institution. A category of questions I

[01:58:57] institution. A category of questions I have for you is just explaining how one

[01:59:01] have for you is just explaining how one teaches technical or scientific content

[01:59:05] teaches technical or scientific content >> well because you are one of the world

[01:59:07] >> well because you are one of the world masters at it and then I'm curious both

[01:59:09] masters at it and then I'm curious both about how you think about it for content

[01:59:10] about how you think about it for content you've already put out there on YouTube.

[01:59:12] you've already put out there on YouTube. >> Yeah.

[01:59:12] >> Yeah. >> But also to the extent it's any

[01:59:14] >> But also to the extent it's any different how you think about it for

[01:59:15] different how you think about it for Eureka.

[01:59:16] >> Yeah. Yeah. With respect to Eureka, I think one thing that is very fascinating to me about education is that I do think education will pretty fundamentally change with AIs by our side, and I think it has to be rewired and changed to some extent. I still think we're pretty early. I think there are going to be a lot of people who are going to try to do the obvious things, which is like: oh, have an LLM, ask it questions, do all the basic things that you would do via prompting right now. I think it's helpful, but it still feels a bit like slop to me. I'd like to do it properly, and I think the capability is not there for what I would want. What I'd want is an actual tutor experience. Maybe a prominent example in my mind: I was recently learning Korean. So, language learning.

[01:59:56] Korean. So language learning >> and I went through a phase where I was

[01:59:58] >> and I went through a phase where I was learning Korean by myself on the

[01:59:59] learning Korean by myself on the internet. I went through a phase where I

[02:00:01] internet. I went through a phase where I was actually part of a small class uh in

[02:00:03] was actually part of a small class uh in Korea. Uh taking a taking a Korean with

[02:00:05] Korea. Uh taking a taking a Korean with a bunch of other people which was really

[02:00:06] a bunch of other people which was really funny. But we had a teacher and like 10

[02:00:08] funny. But we had a teacher and like 10 people or so taking Korean. And then I

[02:00:10] people or so taking Korean. And then I switched to a one-on-one tutor. And um I

[02:00:13] switched to a one-on-one tutor. And um I guess what was fascinating to me is I

[02:00:15] guess what was fascinating to me is I think I had a really good tutor. Uh but

[02:00:17] think I had a really good tutor. Uh but um I mean just thinking through like

[02:00:20] um I mean just thinking through like what this tutor was doing for me and how

[02:00:22] what this tutor was doing for me and how incredible that experience was and how

[02:00:24] incredible that experience was and how high the bar is for like what I actually

[02:00:26] high the bar is for like what I actually want to build eventually.

[02:00:27] want to build eventually. >> Um because uh I mean she was extremely

[02:00:30] >> Um because uh I mean she was extremely so she instantly from a very short

[02:00:31] so she instantly from a very short conversation understood like where I am

[02:00:33] conversation understood like where I am as a student, what I know and don't know

[02:00:35] as a student, what I know and don't know and she was able to like probe exactly

[02:00:37] and she was able to like probe exactly like the kinds of questions or things to

[02:00:39] like the kinds of questions or things to understand my world model. M no LLM will

[02:00:41] understand my world model. M no LLM will do that for you 100% right now. Not even

[02:00:43] do that for you 100% right now. Not even close, right?

[02:00:44] close, right? >> But a tutor will do that if if they're

[02:00:46] >> But a tutor will do that if if they're good.

[02:00:46] good. >> Once she understands um she actually

[02:00:48] >> Once she understands um she actually like really served me all the things

[02:00:49] like really served me all the things that I needed at my current sliver of

[02:00:52] that I needed at my current sliver of capability. I need to be always

[02:00:53] capability. I need to be always appropriately challenged. I can't be

[02:00:55] appropriately challenged. I can't be faced with something too hard or too

[02:00:57] faced with something too hard or too trivial. And a tutor is really good at

[02:00:59] trivial. And a tutor is really good at serving you just the right stuff. And so

[02:01:01] serving you just the right stuff. And so basically I felt like I was the only

[02:01:02] basically I felt like I was the only constraint to learning like my own. I

[02:01:04] constraint to learning like my own. I was the only constraint. I was always

[02:01:05] was the only constraint. I was always given the perfect information.

[02:01:07] given the perfect information. >> I'm the only constraint. And I felt good

[02:01:09] >> I'm the only constraint. And I felt good because I'm the only impediment that

[02:01:10] because I'm the only impediment that exists. It's not that I can't find

[02:01:11] exists. It's not that I can't find knowledge or that it's not properly

[02:01:13] knowledge or that it's not properly explained or etc. Like it's just my

[02:01:14] explained or etc. Like it's just my ability to memorize and so on. And this

[02:01:16] ability to memorize and so on. And this is what I want for people. How do you

[02:01:19] is what I want for people. How do you automate that?

[02:01:20] >> Very good question. With the current capability, you don't. That's why I think it's not actually the right time to build this kind of AI tutor. I still think it's a useful product, and lots of people will build it, but the bar is so high and the capability is not there. I mean, even today I would say ChatGPT is an extremely valuable educational product. But for me it was so fascinating to see how high the bar is; when I was with her, I almost felt like there's no way I can build this.

[02:01:53] >> But you are building it, right?

[02:01:54] >> Anyone who's had a really good tutor is like, how are you going to build this? So I guess I'm waiting for that capability. I do think that in a lot of ways it's like in the industry. For example, I did some AI consulting for computer vision, and a lot of the time the value I brought to the company was telling them not to use AI. I was the AI expert, they described a problem, and I said: don't use AI. That was my value add. And I feel like it's the same in education right now: for what I have in mind, it's not yet the time, but the time will come. For now, I'm building something that looks maybe a bit more conventional, that has a physical and digital component and so on. But I think it's obvious how this should look in the future.

[02:02:35] >> To the extent you're willing to say, what is the thing you hope will be released this year or next year?

[02:02:40] >> Well, I'm building the first course, and I want it to be a really, really good course, the obvious state-of-the-art destination you go to to learn AI, in this case, because that's just what I'm familiar with. So I think it's a really good first product to get really good. That's what I'm building, and nanochat, which you briefly mentioned, is the capstone project of LLM 101n, which is the class I'm building. So that's a really big piece of it, but now I have to build out a lot of the intermediates, and then actually hire a small team of TAs and so on and build the entire course.

[02:03:11] And maybe one more thing I would say: many times when people think about education, they think about the softer component of diffusing knowledge. But I actually have something very hard and technical in mind. In my mind, education is the very difficult technical process of building ramps to knowledge. So in my mind, nanochat is a ramp to knowledge, because it's the super-simplified full-stack thing. If you give this artifact to someone and they look through it, they're learning a ton of stuff.

[02:03:41] >> Yeah.

[02:03:42] >> And so it's giving you a lot of what I call eurekas per second, which is understanding per second. That's what I want: lots of eurekas per second. So to me this is a technical problem of how we build these ramps to knowledge. And I almost think of Eureka as maybe not that different from some of the frontier labs or some of the work that's going on there, because I want to figure out how to build these ramps very efficiently, so that people are never stuck, everything is always neither too hard nor too trivial, and you have just the right material to actually progress.

[02:04:16] >> Yeah. So you're imagining in the short term that instead of a tutor probing your understanding, if you have enough self-awareness to probe yourself, you're never going to be stuck: you can find the right answer between talking to the TA, talking to an LLM, and looking at the reference implementation. It sounds like automation or AI is actually not a significant part so far; the big alpha here is your ability to explain AI, codified in the source material of the class. That's fundamentally what the course is.

[02:04:50] >> I mean, I think you always have to be calibrated to what capability exists in the industry, and a lot of people are going to pursue the "just ask ChatGPT" approach. But right now, for example, if you go to ChatGPT and say, teach me AI, it's going to give you some slop, right? AI is never going to write nanochat right now, but nanochat is a really useful, I think, intermediate point. So I'm still collaborating with AI to create all this material, so AI is still fundamentally very helpful.

[02:05:18] Earlier on, I built CS231n at Stanford, which was one of the earlier, actually, sorry, I think it was the first deep learning class at Stanford, and it became very popular. And the difference between building out 231n then and LLM 101n now is quite stark, because I feel really empowered by the LLMs as they exist right now, but I'm very much in the loop. They're helping me build all the materials, I go much faster, they're doing a lot of the boring stuff, and so on. So I'm developing the course much faster, and there's LLM infused into it, but it's not yet at a place where it can creatively create the content; I'm still there to do that. So I think the trickiness is always calibrating yourself to what exists.

[02:05:54] >> And so when you imagine what is available through Eureka in a couple of years, it seems like the big bottleneck is going to be finding Karpathys in field after field who can convert their understanding into these ramps, right?

[02:06:08] >> So I think it will change over time. I think right now it would be hiring faculty,

[02:06:13] >> Mhm.

[02:06:14] >> to work hand-in-hand with AI and a team of people, probably, to build state-of-the-art courses. And then over time, maybe some of the TAs can actually become AIs: you just take all the course materials, and then I think you could serve a very good automated TA to the student when they have more basic questions or something like that, right? But I think you'll need faculty for the overall architecture of a course and for making sure that it fits. So I kind of see a progression of how this will evolve, and maybe at some future point, you know, I'm not even that useful, and AI is doing most of the design much better than I could. But I still think that's going to take some time to play out.

[02:06:51] >> But are you imagining that people who have expertise in other fields are then contributing courses? Or do you feel it's actually quite essential to the vision that you, given your understanding of how you want to teach, are the one designing the content? Like, I don't know, Sal Khan is narrating all the videos on Khan Academy. Are you imagining something like that, or...

[02:07:11] >> No, I will hire faculty, I think, because there are domains in which I'm not an expert, and that's the only way to offer the state-of-the-art experience for the student, ultimately. So yeah, I do expect that I would hire faculty, but I will probably stick around in AI for some time. And I do have something more conventional in mind for the current capability, I think, than what people would probably anticipate. When I'm building Starfleet Academy, I do probably imagine a physical institution, and maybe, a tier below that, a digital offering that is not the same, not the state-of-the-art experience you would get when someone comes in physically, full-time, and we work through material from start to end and make sure you understand it. That's the physical offering. The digital offering is, yeah, a bunch of stuff on the internet and maybe some LLM assistant; it's a bit more gimmicky, a tier below, but at least it's accessible to, like, eight billion people.

[02:08:00] >> Yeah, I think you're basically inventing college from first principles for the tools that are available today, and then selecting for people who have the motivation and the interest to actually really engage with the material.

[02:08:17] >> Yeah. And I think there's going to have to be a lot of not just education but also re-education, and I would love to help out there, because I think the jobs will probably change quite a bit. For example, today a lot of people are trying to upskill in AI specifically, so I think it's a really good course to teach in this respect.

[02:08:34] And motivation-wise: before AGI, motivation is very simple to solve, because people want to make money, and this is how you make money in the industry today. I think post-AGI it's a lot more interesting, possibly, because if everything is automated and there's nothing to do for anyone, why would anyone go to a school? So I often say that pre-AGI, education is useful; post-AGI, education is fun. In a similar way, for example, people go to the gym today.

[02:09:05] >> Yeah.

[02:09:05] >> But we don't need their physical strength to manipulate heavy objects, because we have machines that do that. They still go to the gym. Why do they go to the gym? Well, because it's fun, it's healthy, and you look hot when you have a six-pack, I don't know. I guess what I'm saying is that it's attractive for people to do that in a certain very deep psychological, evolutionary sense for humanity. And I kind of think that education will play out the same way: you'll go to school like you go to the gym. Right now I think not that many people learn, because learning is hard: you bounce off material, and some people overcome that barrier, but for most people it's hard. But I do think it's a technical problem to solve, to do what my tutor did for me when I was learning Korean. I think it's tractable and buildable, and someone should build it, and I think it's going to make learning anything trivial and desirable, and people will do it for fun, because it's trivial. If I had a tutor like that for any arbitrary piece of knowledge, I think it would be so much easier to learn anything, and people will do it, and they'll do it for the same reasons they go to the gym.

[02:10:08] >> I mean, that sounds different. So post-AGI, you're using this basically as entertainment, or as self-betterment, but it sounded like you also had a vision that this education is relevant to keeping humanity in control of AI. Those sound different, and I'm curious: is it entertainment for some people but empowerment for others? How do you think about that?

[02:10:32] >> I do definitely feel like, eventually, it's a bit of a losing game, if that makes sense, in the long term, a long term which I think is longer than maybe most people in the industry think. I do think that people can go so far, and that we've barely scratched the surface of how far a person can go, and that's just because people are bouncing off material that's too easy or too hard. I actually kind of feel that people will be able to go much further: anyone speaks five languages, because why not, it's so trivial; anyone knows all the basic curriculum of undergrad, and so on.

[02:11:09] basic curriculum of undergrad etc. Now, now that I'm understanding the vision,

[02:11:12] now that I'm understanding the vision, that that's very interesting. Like, I

[02:11:13] that that's very interesting. Like, I think it actually has a perfect analog

[02:11:15] think it actually has a perfect analog in gym culture. I don't think a 100

[02:11:17] in gym culture. I don't think a 100 years ago anybody would be like ripped

[02:11:19] years ago anybody would be like ripped like nobody would have, you know, be

[02:11:21] like nobody would have, you know, be able to like just spontaneously bench

[02:11:23] able to like just spontaneously bench two plays or three plays or something.

[02:11:24] two plays or three plays or something. It's actually very common now.

[02:11:26] It's actually very common now. >> And you're because this idea of

[02:11:29] >> And you're because this idea of systematically training and lifting

[02:11:31] systematically training and lifting weights in the gym or systematically

[02:11:32] weights in the gym or systematically training to be able to run a marathon,

[02:11:34] training to be able to run a marathon, which is capability spontaneously you

[02:11:36] which is capability spontaneously you would not have or most humans would not

[02:11:37] would not have or most humans would not have. And you're imagining similar

[02:11:39] have. And you're imagining similar things for

[02:11:41] things for learning across very many different

[02:11:43] learning across very many different domains, much more intensely, deeply,

[02:11:44] domains, much more intensely, deeply, faster.

[02:11:45] faster. >> Yeah, exactly. And I kind of feel like I

[02:11:46] >> Yeah, exactly. And I kind of feel like I am betting a little bit implicitly on

[02:11:48] am betting a little bit implicitly on some of the timelessness of human

[02:11:49] some of the timelessness of human nature.

[02:11:49] nature. >> Yeah.

[02:11:50] >> Yeah. >> And I think um

[02:11:52] >> And I think um >> I think it will be desirable to be to to

[02:11:55] >> I think it will be desirable to be to to do all these things. Um

[02:11:58] do all these things. Um >> and I think people will look up to it

[02:11:59] >> and I think people will look up to it and as they have for for millennia

[02:12:02] and as they have for for millennia because uh and I think this will

[02:12:03] because uh and I think this will continue to be true. And actually also

[02:12:05] continue to be true. And actually also maybe there's some evidence of that

[02:12:06] maybe there's some evidence of that historically because if you look at for

[02:12:07] historically because if you look at for example aristocrats or you look at maybe

[02:12:09] example aristocrats or you look at maybe ancient Greece or something like that

[02:12:11] ancient Greece or something like that whenever you had little pocket

[02:12:12] whenever you had little pocket environments that were post AGI in a

[02:12:14] environments that were post AGI in a certain sense I do feel like people have

[02:12:15] certain sense I do feel like people have spent a lot of their time uh flourishing

[02:12:17] spent a lot of their time uh flourishing in a certain way uh either physically or

[02:12:19] in a certain way uh either physically or or cognitively and so I think um I I

[02:12:22] or cognitively and so I think um I I feel okay about the prospects of that

[02:12:24] feel okay about the prospects of that >> and I think if this is false and I'm

[02:12:26] >> and I think if this is false and I'm wrong and we end up in like

[02:12:28] wrong and we end up in like >> you know um Wall-E or idiocracy future

[02:12:30] >> you know um Wall-E or idiocracy future then I think it's very I don't even care

[02:12:32] then I think it's very I don't even care if there's like Dyson spheres this is a

[02:12:34] if there's like Dyson spheres this is a terrible outcome.

[02:12:35] terrible outcome. >> Mhm.

[02:12:36] >> Mhm. >> Yeah.

[02:12:36] >> Yeah. >> Like I actually really do care about

[02:12:38] >> Like I actually really do care about humanity. Like everyone has to just be

[02:12:41] humanity. Like everyone has to just be superhuman in a certain sense.

[02:12:43] superhuman in a certain sense. >> I I I guess it's still a world in which

[02:12:45] >> I I I guess it's still a world in which that is not enabling us to

[02:12:48] that is not enabling us to it's it's like the culture world, right?

[02:12:50] it's it's like the culture world, right? Like you're not fundamentally going to

[02:12:51] Like you're not fundamentally going to be able to like transform the trajectory

[02:12:53] be able to like transform the trajectory of

[02:12:54] of >> Yeah.

[02:12:54] >> Yeah. >> uh technology or

[02:12:56] >> uh technology or >> Yeah.

[02:12:57] >> Yeah. >> influence decisions by your own labor or

[02:12:59] >> influence decisions by your own labor or cognition alone. Maybe you can influence

[02:13:02] cognition alone. Maybe you can influence decisions because the AI is like for

[02:13:03] decisions because the AI is like for your approval, but you're not like it's

[02:13:06] your approval, but you're not like it's not because I've like I can in because

[02:13:07] not because I've like I can in because I've invented something or I've like

[02:13:09] I've invented something or I've like come up with a new design, I'm like

[02:13:10] come up with a new design, I'm like really influencing the future.

[02:13:12] really influencing the future. >> Um yeah, maybe. I don't actually think

[02:13:13] >> Um yeah, maybe. I don't actually think that uh I I think there will be

[02:13:15] that uh I I think there will be transitional period where we are going

[02:13:16] transitional period where we are going to be able to be in the loop and you

[02:13:18] to be able to be in the loop and you know advance things if we actually

[02:13:19] know advance things if we actually understand a lot of stuff.

[02:13:20] understand a lot of stuff. >> Um I do think that long term that

[02:13:22] >> Um I do think that long term that probably goes away, right? But um maybe

[02:13:25] probably goes away, right? But um maybe it's going to even become a sport. But

[02:13:26] it's going to even become a sport. But right now you have powerlifters who go

[02:13:28] right now you have powerlifters who go extreme on this direction. So what is

[02:13:30] extreme on this direction. So what is powerlifting in a cognitive era?

[02:13:33] powerlifting in a cognitive era? >> Um maybe it's people who are really

[02:13:34] >> Um maybe it's people who are really trying to make Olympics out of knowing

[02:13:36] trying to make Olympics out of knowing stuff.

[02:13:36] stuff. >> Yeah.

[02:13:37] >> Yeah. >> Uh like and and if you have a perfect AI

[02:13:40] >> Uh like and and if you have a perfect AI tutor, um maybe you can get extremely

[02:13:43] tutor, um maybe you can get extremely far. Yeah.

[02:13:43] far. Yeah. >> I almost feel like we're just barely the

[02:13:45] >> I almost feel like we're just barely the the geniuses of today are barely

[02:13:48] the geniuses of today are barely scratching the surface of what a human

[02:13:49] scratching the surface of what a human mind can do. I think

[02:13:50] mind can do. I think >> Yeah. I I I love this vision. I also um

[02:13:53] >> Yeah. I I I love this vision. I also um it's like I feel like the person you

[02:13:56] it's like I feel like the person you have like most product market fit with

[02:13:57] have like most product market fit with is like me because like my job involves

[02:14:00] is like me because like my job involves having to

[02:14:01] having to >> learn different subjects every week and

[02:14:04] >> learn different subjects every week and I I am I am like very excited if you can

[02:14:08] I I am I am like very excited if you can >> I'm similar for that matter. I mean I

[02:14:09] >> I'm similar for that matter. I mean I you know a lot of people for example uh

[02:14:11] you know a lot of people for example uh hate school and want to get out of it. I

[02:14:12] hate school and want to get out of it. I was I was actually I really liked

[02:14:14] was I was actually I really liked school. I loved learning things etc. I

[02:14:16] school. I loved learning things etc. I wanted to stay in school. I stayed all

[02:14:17] wanted to stay in school. I stayed all the way until PhD and then they wouldn't

[02:14:18] the way until PhD and then they wouldn't let me stay longer so I went to the

[02:14:20] let me stay longer so I went to the industry. But I mean I basically it's

[02:14:22] industry. But I mean I basically it's roughly speaking I love uh I love

[02:14:24] roughly speaking I love uh I love learning uh even for the sake of

[02:14:25] learning uh even for the sake of learning but I also um love learning

[02:14:28] learning but I also um love learning because it's a form of empowerment and

[02:14:29] because it's a form of empowerment and being useful and productive. I I think

[02:14:30] being useful and productive. I I think you also made a point that uh was subtle

[02:14:33] you also made a point that uh was subtle so just to spell it out. I think what's

[02:14:35] so just to spell it out. I think what's happened so far with online courses is

[02:14:37] happened so far with online courses is that why haven't they already enabled us

[02:14:39] that why haven't they already enabled us to

[02:14:41] to enable every single human to know

[02:14:42] enable every single human to know everything

[02:14:44] everything >> and I think they're just so motivation

[02:14:46] >> and I think they're just so motivation laden because there's not obvious

[02:14:48] laden because there's not obvious on-ramps and it's like so easy to get

[02:14:50] on-ramps and it's like so easy to get stuck. Um, and if you had

[02:14:54] stuck. Um, and if you had instead this re this thing basically

[02:14:57] instead this re this thing basically like a really good human tutor, it it

[02:14:59] like a really good human tutor, it it would just be such an unluck from a

[02:15:00] would just be such an unluck from a motivation perspective.

[02:15:01] motivation perspective. >> I think so because it feels bad to

[02:15:03] >> I think so because it feels bad to bounce from material. Feels bad. You get

[02:15:05] bounce from material. Feels bad. You get negative reward from

[02:15:07] negative reward from >> uh sinking amount of time in something

[02:15:09] >> uh sinking amount of time in something and this doesn't pan out or like being

[02:15:11] and this doesn't pan out or like being completely bored because what you're

[02:15:12] completely bored because what you're getting is too easy or too hard. So I

[02:15:14] getting is too easy or too hard. So I think uh yeah I think it feel when you

[02:15:16] think uh yeah I think it feel when you actually do it properly learning feels

[02:15:18] actually do it properly learning feels good.

[02:15:18] good. >> Yeah. And I think it's a technical

[02:15:19] >> Yeah. And I think it's a technical problem to get there. And I think um for

[02:15:22] problem to get there. And I think um for a while it's going to be AI plus human

[02:15:23] a while it's going to be AI plus human collab. And at some point maybe it's

[02:15:25] collab. And at some point maybe it's just AI. I don't know.

[02:15:27] just AI. I don't know. >> Can I ask some questions about teaching?

[02:15:28] >> Can I ask some questions about teaching? Well,

[02:15:29] Well, >> if you had to like sort of like give

[02:15:30] >> if you had to like sort of like give advice to another educator in another

[02:15:33] advice to another educator in another field that you're curious about

[02:15:36] field that you're curious about >> to make the kinds of YouTube tutorials

[02:15:38] >> to make the kinds of YouTube tutorials you've made. Um

[02:15:40] you've made. Um >> maybe it may be especially interesting

[02:15:42] >> maybe it may be especially interesting to talk about domains where you can't

[02:15:43] to talk about domains where you can't just like you can't test somebody's

[02:15:44] just like you can't test somebody's technical understanding by having them

[02:15:45] technical understanding by having them code something up or something. what

[02:15:47] code something up or something. what advice would you give them?

[02:15:49] advice would you give them? >> Uh, so I think that's a pretty broad

[02:15:50] >> Uh, so I think that's a pretty broad topic. I do feel like there's basically

[02:15:52] topic. I do feel like there's basically I almost feel like there are 10 20 tips

[02:15:54] I almost feel like there are 10 20 tips and tricks that I kind of

[02:15:55] and tricks that I kind of semi-consciously probably do, but um

[02:15:58] semi-consciously probably do, but um I guess like on a high level, I always

[02:16:00] I guess like on a high level, I always try to I think a lot of this comes from

[02:16:03] try to I think a lot of this comes from my physics background. I really really

[02:16:04] my physics background. I really really did enjoy my physics background. I have

[02:16:06] did enjoy my physics background. I have a whole rant on I think how everyone

[02:16:08] a whole rant on I think how everyone should learn physics uh in the in early

[02:16:10] should learn physics uh in the in early school education because I think early

[02:16:12] school education because I think early school education is not about

[02:16:14] school education is not about cremulating knowledge or memory for

[02:16:16] cremulating knowledge or memory for tasks later in the industry. It's about

[02:16:17] tasks later in the industry. It's about booting up a brain and I think physics

[02:16:19] booting up a brain and I think physics uniquely boots up the brain the best. Uh

[02:16:21] uniquely boots up the brain the best. Uh because some of the things that they get

[02:16:23] because some of the things that they get you to do in your brain during physics

[02:16:24] you to do in your brain during physics is is extremely valuable later. the idea

[02:16:26] is is extremely valuable later. the idea of building models and abstractions and

[02:16:28] of building models and abstractions and understanding that there are there's a

[02:16:29] understanding that there are there's a first order um approximation that

[02:16:32] first order um approximation that describes most of the system but then

[02:16:33] describes most of the system but then there's a second order, third order,

[02:16:34] there's a second order, third order, fourth order terms that may or may not

[02:16:36] fourth order terms that may or may not be present. And the idea that you're

[02:16:38] be present. And the idea that you're observing like a very noisy system, but

[02:16:39] observing like a very noisy system, but actually there's like these fundamental

[02:16:41] actually there's like these fundamental frequencies that you can abstract away.

[02:16:43] frequencies that you can abstract away. Like when a physicist walks into the

[02:16:44] Like when a physicist walks into the class and they say, "Oh, assume there's

[02:16:46] class and they say, "Oh, assume there's a spherical cow and dot dot dot." And

[02:16:48] a spherical cow and dot dot dot." And everyone laughs at that, but actually

[02:16:49] everyone laughs at that, but actually this is brilliant. It's brilliant

[02:16:51] this is brilliant. It's brilliant thinking that's very generalizable

[02:16:53] thinking that's very generalizable across the industry because

[02:16:54] across the industry because >> yeah cow is can be approximated as a

[02:16:56] >> yeah cow is can be approximated as a sphere I guess in a bunch of ways. Um

[02:16:58] sphere I guess in a bunch of ways. Um there's a really good book for example

[02:17:00] there's a really good book for example scale uh it's basically from a physicist

[02:17:02] scale uh it's basically from a physicist talking about biology and maybe this is

[02:17:04] talking about biology and maybe this is also a book I would recommend reading

[02:17:06] also a book I would recommend reading but you can actually get a lot of really

[02:17:08] but you can actually get a lot of really interesting approximations and chart

[02:17:09] interesting approximations and chart scaling laws of animals and you can look

[02:17:11] scaling laws of animals and you can look at their heartbeats and things like that

[02:17:13] at their heartbeats and things like that and they actually line up and with the

[02:17:15] and they actually line up and with the size of the animal and things like that.

[02:17:17] size of the animal and things like that. You can talk about an animal as a volume

[02:17:18] You can talk about an animal as a volume and you can actually drive a lot of um

[02:17:20] and you can actually drive a lot of um you can talk about the heat dissipation

[02:17:22] you can talk about the heat dissipation uh of that because your your heat

[02:17:24] uh of that because your your heat dissipation grows as the surface area

[02:17:26] dissipation grows as the surface area which is growing a square but your heat

[02:17:28] which is growing a square but your heat creation or generation um is growing as

[02:17:30] creation or generation um is growing as a cube.
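The square-cube relationship he describes can be checked numerically. A minimal sketch, approximating the animal as a sphere of diameter L (my own illustration, not taken from the book Scale):

```python
# Square-cube law: heat loss scales with surface area (~L^2),
# heat generation with volume (~L^3), so larger animals generate
# relatively more heat per unit of surface available to shed it.
import math

def sphere_stats(L):
    """Surface area and volume of a sphere with diameter L."""
    r = L / 2
    area = 4 * math.pi * r**2          # grows as L^2
    volume = (4 / 3) * math.pi * r**3  # grows as L^3
    return area, volume

# Doubling the linear size multiplies area by 4 and volume by 8,
# so the generation-to-dissipation ratio doubles.
a1, v1 = sphere_stats(1.0)
a2, v2 = sphere_stats(2.0)
print(a2 / a1)  # 4.0
print(v2 / v1)  # 8.0
```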

[02:17:31] >> And so I just feel like physicists have all the right cognitive tools to approach problem solving in the world. So I think because of that training, I always try to find the first-order terms, or the second-order terms, of everything. When I'm observing a system or a thing, I have a tangle, a web of ideas or knowledge, in my mind, and I'm trying to find the thing that actually matters. What is the first-order component? How can I simplify it? How can I have a simple thing that actually shows it in action, and then tack on the other terms?

[02:17:57] >> Yeah.

[02:17:57] >> Maybe an example from one of my repos that I think illustrates it well is called micrograd. I don't know if you're familiar with this, but micrograd is 100 lines of code that shows backpropagation. You can create neural networks out of simple operations like plus and times, etc., the Lego blocks of neural networks, and you build up a computational graph, and you do a forward pass and a backward pass to get the gradients. Now, this is at the heart of all neural network learning. So micrograd is 100 lines of very interpretable Python code, and it can do forward and backward passes of arbitrary neural networks, but not efficiently. So micrograd, these hundred lines of Python, are everything you need to understand how neural networks train. Everything else is just efficiency.

[02:18:36] >> Yeah.

[02:18:36] >> Everything else is efficiency. And there's a huge amount of work to do efficiency: you know, you need your tensors, you lay them out and you stride them, you make sure your kernels are orchestrating memory movement correctly, etc. It's all just efficiency, roughly speaking. But the core intellectual piece of neural network training is micrograd. It's 100 lines; you can easily understand it. It's a recursive application of the chain rule to derive the gradient, which allows you to optimize any arbitrary differentiable function. So I love finding these, you know, the smaller terms, and serving them up on a platter, and discovering them. And I feel like education is the most intellectually interesting thing, because you have a tangle of understanding and you're trying to lay it out in a way that creates a ramp where everything only depends on the thing before it. And I find that this untangling of knowledge is just so intellectually interesting as a cognitive task.
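The idea he describes can be condensed into a sketch: each value records how it was made, and `backward()` applies the chain rule through the resulting graph. This is an illustration in the spirit of micrograd, not the actual repository code:

```python
# A micrograd-style autograd sketch: forward pass builds a
# computational graph; backward() walks it in reverse, applying
# the chain rule to accumulate gradients at every input.

class Value:
    def __init__(self, data, children=(), local_grads=()):
        self.data = data
        self.grad = 0.0
        self._children = children        # inputs that produced this node
        self._local_grads = local_grads  # d(self)/d(child) for each child

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data + other.data, (self, other), (1.0, 1.0))

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        return Value(self.data * other.data, (self, other),
                     (other.data, self.data))

    def backward(self):
        # Topologically sort the graph, then propagate gradients
        # from the output back toward the leaves.
        order, seen = [], set()
        def visit(v):
            if id(v) not in seen:
                seen.add(id(v))
                for c in v._children:
                    visit(c)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            for child, local in zip(v._children, v._local_grads):
                child.grad += v.grad * local  # chain rule

# d(a*b + a)/da = b + 1 = 4;  d(a*b + a)/db = a = 2
a, b = Value(2.0), Value(3.0)
out = a * b + a
out.backward()
print(a.grad, b.grad)  # 4.0 2.0
```

The real micrograd adds more operations (tanh, pow, etc.) and a small neural-net layer on top, but the gradient machinery is exactly this recursive chain-rule application.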

[02:19:24] >> Yeah. And so I love doing it personally, but I just have a fascination with trying to lay things out in a certain way. And maybe that helps me.

[02:19:31] >> It also just makes the learning experience so much more motivated. Your tutorial on the transformer begins with a bigram. It's literally like a lookup table: here's the previous word, here's the next word. It's literally just a lookup table.

[02:19:47] >> Yeah, that's the essence of it.

[02:19:49] >> I mean, such a brilliant way: okay, start with a lookup table and then go to a transformer, and then each piece is motivated. Why would you add that? Why would you add the next thing? You could just memorize a sort of attention formula, but it's different to have an understanding of why every single piece is relevant and what problem it solves.
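The bigram starting point really is just a lookup table. A minimal sketch, on a toy corpus of my own rather than the dataset from the tutorial:

```python
# A bigram "language model" is literally a lookup table:
# count which word follows each word, then predict the most
# frequent successor. Everything after this in the tutorial
# is motivated by this table's limitations.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Build the table: previous word -> counts of next words.
table = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev][nxt] += 1

def predict(prev):
    """Most common word observed after `prev`."""
    return table[prev].most_common(1)[0][0]

print(predict("the"))  # 'cat' ("the cat" seen twice, "the mat" once)
```

The obvious weakness, that one word of context is never enough, is exactly the pain that motivates adding attention later.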

[02:20:03] solves >> yeah yeah you're presenting the pain

[02:20:05] >> yeah yeah you're presenting the pain before you present a solution and how

[02:20:06] before you present a solution and how clever is that and you want to take the

[02:20:08] clever is that and you want to take the student through that progression so um

[02:20:10] student through that progression so um there's a lot of like other small things

[02:20:12] there's a lot of like other small things like that that I think make it make it

[02:20:13] like that that I think make it make it nice and engaging interesting and and

[02:20:15] nice and engaging interesting and and you know always prompting the student

[02:20:17] you know always prompting the student there's there's a lot of small things

[02:20:18] there's there's a lot of small things like that that I think are, you know,

[02:20:19] like that that I think are, you know, important and a lot of good educators

[02:20:21] important and a lot of good educators will do. Uh like how would you solve

[02:20:23] will do. Uh like how would you solve this? Like I'm not going to present a

[02:20:25] this? Like I'm not going to present a solution before you're going to guess.

[02:20:27] solution before you're going to guess. >> That would be wasteful. That would be

[02:20:28] >> That would be wasteful. That would be that's that's a little bit of a

[02:20:31] that's that's a little bit of a >> I don't want to swear, but like it's a

[02:20:32] >> I don't want to swear, but like it's a it's a it's a dick move towards you to

[02:20:34] it's a it's a dick move towards you to present you with the solution before I

[02:20:36] present you with the solution before I give you a shot to try to um

[02:20:37] give you a shot to try to um >> Right.

[02:20:38] >> Right. >> Uh to come up with it yourself.

[02:20:39] >> Uh to come up with it yourself. >> Yeah. Yeah. And because because if you

[02:20:41] >> Yeah. Yeah. And because because if you try to come with yourself, you you I

[02:20:42] try to come with yourself, you you I guess you get a better understanding of

[02:20:43] guess you get a better understanding of like what is the action space.

[02:20:47] like what is the action space. >> Yeah. And then what is the sort of like

[02:20:49] >> Yeah. And then what is the sort of like objective then like why does only this

[02:20:51] objective then like why does only this action fulfill that objective right?

[02:20:52] action fulfill that objective right? >> Yeah. Well, you have a chance to like

[02:20:54] >> Yeah. Well, you have a chance to like try yourself and you you have an

[02:20:56] try yourself and you you have an appreciation when I give you the

[02:20:57] appreciation when I give you the solution and uh it maximizes the amount

[02:20:59] solution and uh it maximizes the amount of knowledge per new fact added.

[02:21:01] of knowledge per new fact added. >> That's right. Yeah. Yeah.

[02:21:02] >> That's right. Yeah. Yeah. >> Why do you think by default people who

[02:21:05] >> Why do you think by default people who are genuine experts in their field are

[02:21:10] are genuine experts in their field are often bad at explaining it to somebody

[02:21:12] often bad at explaining it to somebody ramping up?

[02:21:14] ramping up? >> Well, it's the curse of knowledge and

[02:21:15] >> Well, it's the curse of knowledge and expertise. Yeah,

[02:21:16] expertise. Yeah, >> this is a real phenomenon and I actually

[02:21:17] >> this is a real phenomenon and I actually suffered from it myself as much as I try

[02:21:19] suffered from it myself as much as I try to not not suffer from it. But

[02:21:21] to not not suffer from it. But >> you take certain things for granted and

[02:21:22] >> you take certain things for granted and you can't put yourself in the shoes of

[02:21:24] you can't put yourself in the shoes of new of people who are just starting out

[02:21:26] new of people who are just starting out and uh this is pervasive and happens to

[02:21:27] and uh this is pervasive and happens to me as well.

[02:21:28] me as well. >> One thing that I actually think is

[02:21:29] >> One thing that I actually think is extremely helpful as an example someone

[02:21:31] extremely helpful as an example someone was trying to show me a paper in biology

[02:21:33] was trying to show me a paper in biology recently

[02:21:34] recently >> and I just had instantly so many

[02:21:36] >> and I just had instantly so many terrible questions. Um,

[02:21:37] terrible questions. Um, >> so what I did was I used chacht to ask

[02:21:39] >> so what I did was I used chacht to ask the questions with the with the paper in

[02:21:41] the questions with the with the paper in the context window and then uh it worked

[02:21:44] the context window and then uh it worked through some of the simple things and

[02:21:45] through some of the simple things and then I actually shared the thread to the

[02:21:47] then I actually shared the thread to the person who shared it uh who actually

[02:21:48] person who shared it uh who actually like wrote that paper or like worked on

[02:21:50] like wrote that paper or like worked on that work and I almost feel like it was

[02:21:51] that work and I almost feel like it was like um like if they can see the dumb

[02:21:54] like um like if they can see the dumb questions I had it might help them

[02:21:56] questions I had it might help them explain it better in the future or

[02:21:57] explain it better in the future or something like that because um so for

[02:21:59] something like that because um so for example for my material I would love if

[02:22:02] example for my material I would love if people shared their dumb conversations

[02:22:04] people shared their dumb conversations with Chachi PT about the stuff that I've

[02:22:05] with Chachi PT about the stuff that I've created because it really helps me put

[02:22:07] created because it really helps me put myself again in the shoes of someone

[02:22:08] myself again in the shoes of someone who's starting out.

[02:22:09] >> Another trick like that that just works astoundingly well: if somebody writes a paper or a blog post or an announcement, it is in 100% of cases true that just the narration or the transcription of how they would explain it to you over lunch is not only way more understandable, but actually also more accurate and scientific, in the sense that people have a bias to explain things in the most abstract, jargon-filled way possible, and to clear their throat for four paragraphs before they explain the central idea. But there's something about communicating one-on-one with a person which compels you to just say the thing.

[02:22:57] >> Just say the thing.

[02:22:58] >> Yeah. Actually, I saw that tweet. I thought it was really good. I shared it with a bunch of people, actually. And I noticed this many, many times. Maybe the most prominent example: I remember back in my PhD days, doing research and so on, you read someone's paper, right? And you work hard to understand what it's doing. And then you catch them, you're having beers at the conference later, and you ask them: so, this paper, what were you doing, what is the paper about? And they will just tell you these three sentences that perfectly capture the essence of that paper and totally give you the idea, and you didn't even have to read the paper yet.

[02:23:28] >> Yeah. And it's only when you're sitting at the table with a beer or something like that: oh yeah, the paper is just, you take this idea, you take that idea, you try this experiment, and you try out this thing. They have a way of just putting it conversationally.

[02:23:39] >> Right.

[02:23:39] >> And just, like, perfectly. Why isn't that the actual...

[02:23:41] >> Exactly.

[02:23:45] >> This is coming from the perspective of how somebody who's trying to explain an idea should formulate it better. What is your advice, as a student, to other students? If you don't have a Karpathy who is doing the exposition of an idea, if you're reading a paper by somebody, or reading a book, what strategies do you employ to learn material you're interested in, in fields you're not an expert in?

[02:24:09] >> I don't actually know that I have unique tips and tricks, to be honest. Basically, it's kind of a painful process. I think one thing that has always helped me quite a bit (I had a small tweet about this, actually) is that learning things on demand is pretty nice. I do feel like you need a bit of alternation between learning depth-wise, on demand, where you're trying to achieve a certain project that you're going to get a reward from, and learning breadth-wise, which is just: oh, let's do whatever 101, and here's all the things you might need. A lot of school is breadth-wise learning: oh, trust me, you'll need this later, that kind of stuff.

[02:24:47] >> Like, okay, I trust you, I'll learn it because I guess I need it.

[02:24:50] >> But I love the kind of learning where you'll actually get a reward out of doing something and you're learning on demand.

[02:24:55] >> The other thing that I've found extremely helpful, and maybe this is an aspect where education is a bit more selfless, is that explaining things to people is a beautiful way to learn something more deeply. This happens to me all the time, and I think it probably happens to other people too, because I realize that if I don't really understand something, I can't explain it. I'm trying, and I'm like: actually, I don't understand this. It's so annoying to come to terms with that, and then you can go back and make sure you understood it. So it fills these gaps in your understanding. It forces you to come to terms with them and to reconcile them. I love to re-explain things like that, and I think people should be doing that more as well. It forces you to manipulate the knowledge and make sure that you know what you're talking about when you're explaining it.

[02:25:37] >> Oh yeah, I think that's an excellent note to close on.

[02:25:39] >> Yeah.

[02:25:40] >> Andrej, that was great.

[02:25:41] >> Yeah, thank you. Thanks. Take your time.

[02:25:43] >> Hey everybody, I hope you enjoyed that episode. If you did, the most helpful thing you can do is just share it with other people who you think might enjoy it. It's also helpful if you leave a rating or a comment on whatever platform you're listening on. If you're interested in sponsoring the podcast, you can reach out at dwarkesh.com/advertise. Otherwise, I'll see you on the next one.