
Search results for frontend
Tweets including frontend
The SEC just gave DeFi frontends a meaningful signal: if you're not custodying assets or executing orders, we won't come after you. It's the most detailed safe harbor yet, and more substance than we've seen in a staff statement. But it's not a rule, it expires in 5 years, and the next administration can pull it. Still, it's solid progress in the right direction.
For everyone asking about GPT-5.6's frontend abilities - it still sucks. Nothing is certain in life, except death, taxes, and GPT models generating the sloppiest UIs of all time. Gemini 3.2 Pro is heading in the same direction too, regressing versus 3.1 Pro.
New on the Anthropic Engineering Blog: How we use a multi-agent harness to push Claude further in frontend design and long-running autonomous software engineering. Read more:
We’re looking for builders to join us as we write the next chapter in prediction market history. The first fully remote roles are now live, with more to come.
🔮 Staff Frontend Engineer
🔮 Principal Backend Engineer
🔮 AI Operator
Full details here:
so apparently gemini 3.2 pro is being tested under "gemini-3.1-pro" on @arena's Code Arena (they have done this kind of stealth testing before) ...and if this is really 3.2 pro, it's not looking good. somehow they gpt-ified frontend? hopefully this is an arena-specific quirk
Coding agents are accelerating different types of software work to different degrees. When we architect teams, understanding these distinctions helps us set realistic expectations. Listing functions from most accelerated to least, my order is: frontend development, backend, infrastructure, and research.

Frontend development — say, building a web page to serve product descriptions for an ecommerce site — is dramatically sped up, because coding agents are fluent in popular frontend languages like TypeScript and JavaScript and frameworks like React and Angular. Additionally, by operating a web browser to examine what they have built, coding agents are now very good at closing the loop and iterating on their own implementations. Granted, LLMs today are still weak at visual design, but given a design (or when a polished design isn’t important), the implementation is fast!

Backend development — say, building APIs to respond to queries requesting product data — is harder. It takes more work by human developers to steer modern models to think through corner cases that might lead to subtle bugs or security flaws. Further, a backend bug can have non-intuitive downstream effects, like a corrupted database that occasionally returns incorrect results, which can be harder to debug than a typical frontend bug. Finally, although database migrations can be easier with coding agents, they’re still hard and need to be handled carefully to prevent data loss. So while backend development is much faster with coding agents than without, the acceleration is smaller than for frontend work, and skilled developers still design and implement far better backends than inexperienced developers using coding agents.

Infrastructure. Agents are even less effective at tasks like scaling an ecommerce site to 10K active users while maintaining 99.99% reliability. LLMs’ knowledge of infrastructure, and of the complex tradeoffs good engineers must make, is still relatively limited, so I rarely trust them with critical infra decisions. Building good infrastructure often requires a period of testing and experimentation; coding agents can help with that, but ultimately it remains a significant bottleneck where fast AI coding does not help much. Lastly, finding infrastructure bugs — say, a subtle network misconfiguration — can be incredibly difficult and requires deep engineering expertise. Thus, I’ve found that coding agents accelerate critical infrastructure work even less than backend development.

Research. Coding agents accelerate research work even less. Research involves thinking through new ideas, formulating hypotheses, running experiments, interpreting the results to potentially modify the hypotheses, and iterating until we reach conclusions. Coding agents can speed up the pace at which we write research code. (I also use coding agents to orchestrate and keep track of experiments, which makes it easier for a single researcher to manage more of them.) But much of research is work other than coding, and today’s agents help with that only marginally.

Categorizing software work into frontend, backend, infra, and research is an extreme simplification, but having a simple mental model of how much different tasks have sped up has been useful for how I organize software teams. For example, I now ask frontend teams to implement products dramatically faster than a year ago, but my expectations for research teams have not shifted nearly as much. I am fascinated by how to organize software teams to use coding agents for speed, and will keep sharing my findings in future posts.
🇭🇰 AI Agent Demo Day | Top 3 Winners
Host: @499_DAO × City University of Hong Kong
Co-host: @0G_labs × OpenSchool × IDM of CityU
Special Partner: @BAI_AGI × @hetu_protocol

Top 3 projects selected:
🥇 DealAgent
🥈 Syndicate
🥉 MemExchange

From commerce agents to multi-agent decision systems and knowledge networks — this is what real AI systems look like in action. More than demos. These are systems being built.

🥇 DealAgent — From Chat to Commerce
An ERP-native AI agent enabling commerce workflows without traditional frontends. Reimagining transactions through conversational interfaces.

🥈 Syndicate — Multi-agent trading system
A crypto trading system leveraging debate, self-critique, and causal reasoning. Moving beyond signal aggregation to structured decision-making.

🥉 MemExchange (@youbetdao) — Knowledge exchange for agents
A decentralized protocol for agents to discover, trade, and reuse knowledge. Allowing agents to build on existing expertise.
Meet Kimi K2.6: Advancing Open-Source Coding

🔹 Open-source SOTA on HLE w/ tools (54.0), SWE-Bench Pro (58.6), SWE-bench Multilingual (76.7), BrowseComp (83.2), Toolathlon (50.0), CharXiv w/ Python (86.7), Math Vision w/ Python (93.2)

What's new:
🔹 Long-horizon coding - 4,000+ tool calls and over 12 hours of continuous execution, with generalization across languages (Rust, Go, Python) and tasks (frontend, devops, perf optimization).
🔹 Motion-rich frontend - videos in hero sections, WebGL shaders, GSAP + Framer Motion, Three.js 3D.
🔹 Agent Swarms, elevated - 300 parallel sub-agents × 4,000 steps per run (up from K2.5's 100 / 1,500). One prompt, 100+ files.
🔹 Proactive Agents - the K2.6 model powers OpenClaw, Hermes Agent, etc., for 24/7 autonomous ops.
🔹 Claw Groups (research preview) - bring your own agents, command your friends', with bots & humans in the loop.

K2.6 is now live in chat mode and agent mode. For production-grade coding, pair K2.6 with Kimi Code.
🔗 API:
🔗 Tech blog:
🔗 Weights & code:
Bitlight Labs Technical Update – February 21, 2026

We are pleased to announce significant updates to our RGB Lightning Network (RLN) infrastructure and the release of a new developer sandbox.

1. RLN Node & CLI Enhancements
Repository:
We have refactored the payment logic to a resource-oriented architecture. Key updates include:
- Expanded payment controls: added dedicated subcommands for pay invoice, offer, refund, and keysend.
- BOLT12 support: integrated BOLT12 capabilities, along with wait and abandon payment states in the API and TypeScript SDK.
- Documentation: updated all examples and docs to reflect the new node topology.

2. New Developer Sandbox
Repository:
We have released a React + TypeScript web frontend for the Bitlight LN Hub to facilitate testing and development. Features include:
- RPC proxy: a backend implementation (in src/app/api) that securely proxies RPC calls and resolves cross-domain restrictions.
- Dockerized environment: a pre-configured Bitcoin regtest container (bitcoind) with scripts for wallet creation and rln-ldk-node server initialization.

Developers are encouraged to review the repositories and update their local environments accordingly.

Make Bitcoin Smart
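The RPC-proxy pattern described above exists because a browser frontend cannot call the node's RPC endpoint directly across origins; a small backend relays the calls and attaches CORS headers. The sandbox implements this in TypeScript under src/app/api; the sketch below is an illustrative Python equivalent, and the UPSTREAM_RPC URL, port, and error payload are assumptions, not the repo's actual API.

```python
import json
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative upstream URL for the rln-ldk-node RPC server (an assumption).
UPSTREAM_RPC = "http://127.0.0.1:3001/rpc"

def cors_headers(origin: str = "*") -> dict:
    """Headers that let the browser frontend call this proxy cross-origin."""
    return {
        "Access-Control-Allow-Origin": origin,
        "Access-Control-Allow-Methods": "POST, OPTIONS",
        "Access-Control-Allow-Headers": "Content-Type",
    }

def forward_rpc(body: bytes, upstream: str = UPSTREAM_RPC) -> bytes:
    """Validate the JSON payload, then relay it to the node and return its reply."""
    json.loads(body)  # raises ValueError on malformed JSON before it hits the node
    req = urllib.request.Request(
        upstream, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read()

class RpcProxy(BaseHTTPRequestHandler):
    def do_OPTIONS(self):
        # Answer the browser's CORS preflight request.
        self.send_response(204)
        for key, value in cors_headers().items():
            self.send_header(key, value)
        self.end_headers()

    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        try:
            reply, status = forward_rpc(body), 200
        except ValueError:
            reply, status = b'{"error": "invalid JSON"}', 400
        self.send_response(status)
        for key, value in cors_headers().items():
            self.send_header(key, value)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(reply)

# To serve: HTTPServer(("127.0.0.1", 8080), RpcProxy).serve_forever()
```

A production proxy would also pin the allowed origin and handle upstream failures rather than letting them surface as 500s; the sandbox's src/app/api route plays this role in TypeScript.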
LLM Knowledge Bases

Something I'm finding very useful recently: using LLMs to build personal knowledge bases for various topics of research interest. In this way, a large fraction of my recent token throughput is going less into manipulating code and more into manipulating knowledge (stored as markdown and images). The latest LLMs are quite good at it. So:

Data ingest: I index source documents (articles, papers, repos, datasets, images, etc.) into a raw/ directory, then I use an LLM to incrementally "compile" a wiki, which is just a collection of .md files in a directory structure. The wiki includes summaries of all the data in raw/ and backlinks, and it categorizes the data into concepts, writes articles for them, and links them all together. To convert web articles into .md files I like to use the Obsidian Web Clipper extension, and I also use a hotkey to download all the related images locally so that my LLM can easily reference them.

IDE: I use Obsidian as the IDE "frontend" where I can view the raw data, the compiled wiki, and the derived visualizations. Important to note that the LLM writes and maintains all of the data in the wiki; I rarely touch it directly. I've played with a few Obsidian plugins to render and view the data in other ways (e.g. Marp for slides).

Q&A: Where things get interesting is that once your wiki is big enough (e.g. mine on some recent research is ~100 articles and ~400K words), you can ask your LLM agent all kinds of complex questions against the wiki, and it will go off and research the answers. I thought I had to reach for fancy RAG, but the LLM has been pretty good about auto-maintaining index files and brief summaries of all the documents, and it reads all the important related data fairly easily at this ~small scale.

Output: Instead of getting answers as text in the terminal, I like to have it render markdown files for me, or slide shows (Marp format), or matplotlib images, all of which I then view again in Obsidian. You can imagine many other visual output formats depending on the query. Often, I end up "filing" the outputs back into the wiki to enhance it for further queries, so my own explorations and queries always "add up" in the knowledge base.

Linting: I've run some LLM "health checks" over the wiki to e.g. find inconsistent data, impute missing data (with web searches), and find interesting connections for new article candidates, to incrementally clean up the wiki and enhance its overall data integrity. The LLMs are quite good at suggesting further questions to ask and look into.

Extra tools: I find myself developing additional tools to process the data. E.g. I vibe coded a small and naive search engine over the wiki, which I use directly (in a web UI), but more often I hand it off to an LLM via CLI as a tool for larger queries.

Further explorations: As the repo grows, the natural desire is to also think about synthetic data generation + finetuning, to have your LLM "know" the data in its weights instead of just context windows.

TLDR: raw data from a number of sources is collected, then compiled by an LLM into a .md wiki, then operated on by the LLM via various CLIs to do Q&A and to incrementally enhance the wiki, and all of it is viewable in Obsidian. You rarely ever write or edit the wiki manually; it's the domain of the LLM. I think there is room here for an incredible new product instead of a hacky collection of scripts.
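The "small and naive search engine over the wiki" mentioned above could be sketched along these lines: a plain TF-IDF inverted index over the .md files, callable from a web UI or a CLI. The WikiSearch and load_wiki names and the scoring are my assumptions for illustration, not the author's actual tool.

```python
import math
import re
from collections import Counter, defaultdict
from pathlib import Path

TOKEN = re.compile(r"[a-z0-9]+")

def tokenize(text: str) -> list[str]:
    """Lowercase alphanumeric tokens; good enough for a naive engine."""
    return TOKEN.findall(text.lower())

class WikiSearch:
    """Naive TF-IDF inverted index over a set of markdown documents."""

    def __init__(self, docs: dict[str, str]):
        self.docs = docs
        self.index = defaultdict(dict)  # term -> {doc name: term count}
        self.lengths = {}               # doc name -> total token count
        for name, text in docs.items():
            counts = Counter(tokenize(text))
            self.lengths[name] = sum(counts.values()) or 1
            for term, n in counts.items():
                self.index[term][name] = n

    def search(self, query: str, k: int = 5) -> list[tuple[str, float]]:
        """Rank documents by summed TF-IDF over the query terms."""
        scores = Counter()
        n_docs = len(self.docs)
        for term in tokenize(query):
            postings = self.index.get(term, {})
            if not postings:
                continue
            idf = math.log(1 + n_docs / len(postings))
            for name, n in postings.items():
                scores[name] += (n / self.lengths[name]) * idf
        return scores.most_common(k)

def load_wiki(root: str) -> WikiSearch:
    """Index every .md file under the wiki directory."""
    files = Path(root).rglob("*.md")
    return WikiSearch({str(p): p.read_text(encoding="utf-8") for p in files})
```

Wrapping search in a tiny CLI (print the top-k paths for a query string) is what makes it handy to hand to an LLM agent as a tool: the agent gets candidate files cheaply, then reads only those into context.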