AI-native software engineering teams operate very differently from traditional teams. The obvious difference is that AI-native teams use coding agents to build products much faster, but this leads to many other changes in how we operate. For example, some great engineers now play broader roles than just writing code: they are partly product managers, designers, and sometimes marketers. Further, small teams that work in the same office, where they can communicate face-to-face, can move incredibly quickly.
Because we can now build fast, a greater fraction of time must be spent deciding what to build. To deal with this project-management bottleneck, some teams are pushing engineer:product manager (PM) ratios downward from, say, 8:1 to as low as 1:1. But we can do even better: if we have one PM who decides what to build and one engineer who builds it, the communication between them becomes a bottleneck. This is why the fastest-moving teams I see tend to have engineers who know how to do some product work (and, optionally, some PMs who know how to do some engineering work). When an engineer understands users and can make decisions on what to build and build it directly, they can execute incredibly quickly.
I’ve seen engineers successfully expand their roles to include making product decisions, and PMs expand their roles to building software. The tech industry has more engineers than PMs, but both are promising paths. If you are an engineer, you’ll find it useful to learn some product management skills, and if you’re a PM, please learn to build!
Looking beyond the product-management bottleneck, I also see bottlenecks in design, marketing, legal compliance, and much more. When we speed up coding 10x or 100x, everything else becomes slow in comparison. For example, some of my teams have built great features so quickly that the marketing organization was left scrambling to figure out how to communicate them to users — a marketing bottleneck. Or when a team can build software in a day that the legal department needs a week to review, that’s a legal compliance bottleneck. In this way, agentic coding isn’t just changing the workflow of software engineering, it’s also changing all the teams around it.
When smaller, AI-enabled teams can get more done, generalists excel. Traditional companies need to pull together people from many specialties — engineering, product management, design, marketing, legal, etc. — to execute projects and create value. This has resulted in large teams of specialists who work together. But if a team of two people is to get work done that requires five different specialties, then some of those individuals must play roles outside a single specialty. In some small teams, individuals do have deep specializations. For example, one might be a great engineer and another a great PM. But they also understand the other key functions needed to move a project forward, and can jump in to think through other kinds of problems as needed. Of course, proficiency with AI tools is a big help, since it helps us think through problems that involve different roles.
Even in a two-person team, communication bottlenecks must be minimized to move fast. This is why I value teams that work in the same location. Remote teams can perform well too, but the highest speed is achieved by having everyone in the room, able to communicate instantaneously to solve problems.
This post focuses on AI-native teams of around 2–10 people, but not everything can be done by a small team. I'll address coordinating larger teams in the future.
I realize these shifts to job roles are tough to navigate for many people. At the same time, I am encouraged that individuals and small teams who are willing to learn the relevant skills are now able to get far more done than was possible before. This is the golden age of learning and building!
Our AI infrastructure releases have focused on exposing wallet, exchange and onchain functionality through agent-compatible interfaces.
Our repositories now include:
1. Agentic Wallet with TEE-secured signing
2. MCP integrations for AI-native workflows
3. CLI + Skills tooling via Onchain OS
4. Agent Trade Kit components for trading automation
5. Transaction simulation and risk grading before execution
6. Multi-chain support across Ethereum, Solana, X Layer and others
7. x402-compatible payment tooling
8. DEX routing, wallet operations and transaction broadcasting APIs
The current architecture exposes these capabilities through MCP servers, CLI tooling, open APIs, and installable Skills repositories.
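The simulate-then-grade flow in item 5 can be sketched as a simple guardrail: dry-run the transaction, map the result to a coarse risk grade, and only broadcast when the grade is within the caller's tolerance. All names below (`SimulationResult`, `RiskGrade`, etc.) are hypothetical illustrations, not the actual SDK's API:

```python
# Hypothetical sketch of a "simulate, grade, then execute" guardrail.
# Every name here is illustrative, not a real SDK interface.
from dataclasses import dataclass
from enum import Enum

class RiskGrade(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class SimulationResult:
    success: bool
    balance_change: float          # native-token delta; negative = outflow
    touches_unknown_contract: bool # interacted with an unvetted contract

def grade(result: SimulationResult) -> RiskGrade:
    """Map a dry-run simulation to a coarse risk grade."""
    if not result.success or result.touches_unknown_contract:
        return RiskGrade.HIGH
    if result.balance_change < -1.0:  # illustrative large-outflow threshold
        return RiskGrade.MEDIUM
    return RiskGrade.LOW

def execute_if_safe(result: SimulationResult, max_grade: RiskGrade) -> bool:
    """Only broadcast when the graded risk is within the caller's tolerance."""
    return grade(result).value <= max_grade.value

# A small transfer that simulated cleanly passes a MEDIUM tolerance:
ok = execute_if_safe(SimulationResult(True, -0.1, False), RiskGrade.MEDIUM)
```

The point of the pattern is that the agent never signs directly: the simulation result acts as a gate between intent and execution.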
This reflects the vision set by Star: develop AI infrastructure while preserving execution controls around signing, permissions, and transaction risk. There is more coming in the near future.
Cournot AI Oracle Launches on #BNBChain Mainnet: Bringing Verifiable AI Reasoning Onchain
Cournot is building an AI-native oracle infrastructure on @BNBCHAIN, enabling applications to verify real-world outcomes through evidence collection, rule interpretation, and auditable reasoning.
Supporting multiple verticals such as prediction markets, onchain collectibles/RWAs, parametric insurance and agentic commerce, Cournot is acting as an independent evaluator agent within workflows like the BNB Agent SDK (ERC-8183/APEX) to verify task outcomes and enable trustworthy automated settlement.
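As a rough illustration of the evaluator-agent pattern described above (collect evidence, interpret a rule, keep an auditable reasoning trace), here is a minimal sketch. The function names and the flight-delay rule are invented for illustration; this is not Cournot's or the BNB Agent SDK's actual API:

```python
# Illustrative evaluator-agent step: apply a rule to collected evidence
# and record a per-item reasoning trace that justifies the verdict.
# All names are hypothetical.

def evaluate_outcome(evidence: list, rule) -> dict:
    """Return a verdict plus the trace explaining how each item was judged."""
    trace = []
    for item in evidence:
        passed = rule(item)
        trace.append({"source": item["source"], "passed": passed})
    # Verdict: the outcome is verified only if every evidence source agrees.
    verdict = all(step["passed"] for step in trace)
    return {"verdict": verdict, "trace": trace}

# Example rule (parametric-insurance style): a flight counts as delayed
# if each source reports a delay of at least 45 minutes.
delayed_45 = lambda e: e["delay_minutes"] >= 45

result = evaluate_outcome(
    [{"source": "airline_api", "delay_minutes": 50},
     {"source": "airport_feed", "delay_minutes": 62}],
    delayed_45,
)
```

The trace is what makes the reasoning auditable: a downstream settlement step can inspect why each source passed or failed rather than trusting a bare yes/no.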
Why BNB Chain? 🧵
The future of AI Agents needs #ICP
AI agents are starting to act on our behalf:
- making deals
- sending messages
- handling sensitive data
But there is no trust layer telling you who or what you're actually dealing with ❌
@zCloakNetwork is building exactly that: the trust, identity and privacy infrastructure for the AI-native economy, and they built it on top of Internet Computer Protocol
In this podcast I sit down with Xiao Zhang (@xiao_zcloak), founder of zCloak Network, to break down what they're building, why they chose ICP over every other blockchain, and what the world looks like when AI agents can finally be trusted.
Three more things are still blocking mainstream #AI adoption:
1. Reliability
AI systems are impressive in demos, but inconsistent in the real world. They hallucinate, misinterpret context, and occasionally fail at basic tasks.
That’s fine for experimentation. Not fine when you’re handling real operations.
Until outcomes are predictable, businesses will hesitate to fully rely on them.
2. Evolving regulations and change management
The rules are still being written. Governments are actively shaping policies, and companies don’t want to bet big on something that might be restricted tomorrow.
Adopting AI isn’t a plug-and-play upgrade: it reshapes workflows, roles, and accountability. Most organizations aren’t ready for that level of change yet.
3. Integration complexity
AI doesn’t live in isolation; it needs to connect with existing systems, data pipelines, #privacy and #security layers.
These hurdles explain why AI feels everywhere in conversation, but still not fully embedded in everyday business operations.
There is also an AI-native vs. #SaaS AI competition. It will take some time.