
码良
@cxjwin
📱 iOS tech lead at a major Chinese company 🧗 Million-line legacy Objective-C repo × AI coding 🛠️ CLI · 🧠 Harness · 🤖 Agent ✍️ Hitting pitfalls · getting things running · refactoring
Joined March 2012
115 Following    321 Followers
Karpathy's "have the LLM output HTML" post has the tech crowd discussing prompting tricks, but I'd rather look at it from the angle of team collaboration.

Over the past year I've noticed an interesting pattern: on a team, the gap between people who use AI well and people who use it poorly is shifting from "can you write a prompt" to "can others consume what you produce."

An engineer runs Claude to get an architecture analysis, understands it themselves, and forwards it to a product colleague, who sees a screenful of Markdown and closes it on the spot. Swap in an HTML page with collapsible sections, anchor links, and tables: the conclusions are identical, but a discussion actually happens. That almost never happened in the Markdown era.

Behind this is the underlying logic Karpathy describes: a third of the human brain is dedicated to processing vision; text is a single lane, vision a ten-lane highway. In a team context there is an extra layer of meaning: the "shareability" of AI output is becoming a new productivity metric.

AI value enjoyed by one person alone is limited. What really moves team efficiency is output that can be screenshotted, forwarded, and pinned to the meeting-room screen. Markdown is great inside an IDE; in a Lark/Feishu/DingTalk group it is noise. HTML is different: HTML is an artifact, something that can exist outside its original context.

For a Tech Lead this means a few things:

First, when evaluating the AI skills of candidates and team members, don't just look at how much they produce on their own; look at how much of what they produce is actually used by others. The former is individual efficiency, the latter organizational efficiency.

Second, a team's AI workflow should treat "output format" as a first-class design concern, just as important as "which model" and "how to write the prompt." We pour effort into the input side (context engineering, harnesses, knowledge bases) while the output side has stayed defaulted to Markdown; that is an imbalance.

Third, HTML is only a transitional form. The endpoint Karpathy points to is generative interactive interfaces: team members don't "read" an analysis, they "explore" a dashboard. That is a new management challenge too: once AI produces not documents but interactive workspaces, the team's ways of accumulating knowledge, reviewing work, and reaching alignment all need to be redesigned.

A plain prediction: over the next 12 months, the person on your team who is first to switch AI output from "text stream" to "deliverable artifact" will capture the largest share of the collaboration dividend.
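To make the "deliverable artifact" idea concrete, here is a minimal sketch of the kind of conversion step described above: turning an AI-generated Markdown analysis into a self-contained HTML page you can drop into a group chat. It assumes the official Anthropic Python SDK (`pip install anthropic`) with an `ANTHROPIC_API_KEY` in the environment; the input file `analysis.md` and the model name are illustrative, not prescribed by the post.

```python
# Sketch: convert an AI-generated Markdown analysis into a shareable HTML artifact.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set, analysis.md exists.
import pathlib

import anthropic

markdown_report = pathlib.Path("analysis.md").read_text(encoding="utf-8")

prompt = (
    "Rewrite the following analysis as a single self-contained HTML file. "
    "Use <details>/<summary> for collapsible sections, an anchor-linked table "
    "of contents, and real <table> markup for tabular data. Inline all CSS so "
    "the file can be shared as-is.\n\n" + markdown_report
)

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=8192,
    messages=[{"role": "user", "content": prompt}],
)

# The resulting file is the "artifact": it can be forwarded, opened, and
# discussed without any IDE or surrounding context.
pathlib.Path("analysis.html").write_text(response.content[0].text, encoding="utf-8")
```

The design point is that the output format is specified explicitly in the prompt rather than left to the model's Markdown default; everything else is plumbing.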
Quoted post (Karpathy):

This works really well btw: at the end of your query, ask your LLM to "structure your response as HTML", then view the generated file in your browser. I've also had some success asking the LLM to present its output as slideshows, etc.

More generally, imo audio is the human-preferred input to AIs, but vision (images/animations/video) is the preferred output from them. Around a third of our brain is a massively parallel processor dedicated to vision; it is the 10-lane superhighway of information into the brain. As AI improves, I think we'll see a progression that takes advantage of this:

1) raw text (hard/effortful to read)
2) markdown (bold, italic, headings, tables, a bit easier on the eyes) <-- current default
3) HTML (still procedural with underlying code, but a lot more flexibility on the graphics, layout, even interactivity) <-- early, but forming the new good default
...4, 5, 6, ... n) interactive neural videos/simulations

Imo the extrapolation (though the technology doesn't exist just yet) ends in some kind of interactive video generated directly by a diffusion neural net. Many open questions as to how exact/procedural "Software 1.0" artifacts (e.g. interactive simulations) may be woven together with neural artifacts (diffusion grids), but generally something in the direction of the recently viral

There are also improvements necessary and pending at the input. Neither audio nor text nor video alone is enough; e.g. I feel a need to point/gesture at things on the screen, similar to all the things you would do with a person physically next to you and your computer screen.

TLDR: The input/output mind meld between humans and AIs is ongoing, and there is a lot of work to do and significant progress to be made, way before jumping all the way into Neuralink-esque BCIs and all that. For what's worth exploring at the current stage, hot tip: try asking for HTML.
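Karpathy's tip itself takes only a few lines to wire up. Below is a minimal sketch of the append-the-instruction-then-open-in-browser loop, again assuming the Anthropic Python SDK and an `ANTHROPIC_API_KEY` in the environment; the query and model name are placeholders, and any chat-style LLM API would work the same way.

```python
# Sketch: Karpathy's tip as a loop -- suffix the query, save, open in the browser.
# Assumptions: `pip install anthropic`, ANTHROPIC_API_KEY set.
import pathlib
import webbrowser

import anthropic

query = "Summarize the trade-offs between Markdown and HTML as LLM output formats."

client = anthropic.Anthropic()
response = client.messages.create(
    model="claude-sonnet-4-5",  # illustrative model name
    max_tokens=4096,
    # The whole trick: append the formatting instruction to the end of the query.
    messages=[{"role": "user", "content": query + "\n\nStructure your response as HTML."}],
)

out = pathlib.Path("response.html")
out.write_text(response.content[0].text, encoding="utf-8")
webbrowser.open(out.resolve().as_uri())  # view the generated file in your browser
```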