How We're 20xing Development Velocity at MultiversX

Use Cases · January 30, 2026 · 4 min read

Last month, Robert Sasu, one of our core engineers at MultiversX, shipped 8 production-ready repositories in 2 weeks using AI agents.

Not prototypes. Production systems. For blockchain infrastructure designed for the agentic internet.

I've been actively building in this space since 2017. I've never seen anything like it.

We've been running intensive experiments across our engineering team, testing different agent architectures and workflows. Robert's output is proof the system works.

We're now rolling out these principles across MultiversX to accelerate our development by a factor of 20.

Here's exactly what we've learned and how you can apply it as a company, developer, entrepreneur, or vibe coder.

7 Principles for Agent-Native Development

1. Planning Is 80% of the Work

Plan → Work → Review → Feedback

Most teams skip planning and jump to code. This is the fatal mistake.

Across our experiments, we've found the same pattern: 2 days of pure planning beats 3+ weeks of chaotic iteration.

Interview sessions. Security audits. Architectural decisions. Zero code written.

Then execution becomes mechanical.

We use interview agents to surface brutal questions early. "Whitelist with known contracts or onchain verification?" Not easy questions. But answering them during specs instead of mid-implementation saves weeks of rework.

Key insight: If specifications are solid, everything else is mechanical.

2. Specialized Agents Are 10x More Token-Efficient

Generic agents burn money. Specialized agents print efficiency.

Our testing shows specialized agents use 10x fewer tokens than generalists because they think in straight lines. Deep domain expertise without wide exploration.

The agent stack we've standardized:

  • Planner (creates master plan, assigns sub-agents)
  • Language specialists (Go, TypeScript, Rust)
  • Integration specialists (external protocols)
  • Domain-specific reviewers

Real example from our codebase: the planner generated 100+ refactoring tasks and assigned each to the appropriate specialist, all working in parallel. Token cost: a fraction of what generalists would burn.
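
Here's a minimal TypeScript sketch of that dispatch pattern. The specialist names, task shape, and routing heuristics are illustrative assumptions, not our actual toolkit:

```typescript
// Hypothetical planner-to-specialist dispatch. The Specialist names,
// Task shape, and routing heuristics are illustrative, not a real API.

type Specialist = "go" | "typescript" | "rust" | "integration" | "reviewer";

interface Task {
  id: number;
  description: string;
  specialist: Specialist;
}

// Tag each small task with the single specialist best suited to run it.
function assignSpecialist(description: string): Specialist {
  if (/\bgo\b|goroutine/i.test(description)) return "go";
  if (/typescript|frontend|react/i.test(description)) return "typescript";
  if (/\brust\b|smart contract/i.test(description)) return "rust";
  if (/webhook|external|bridge/i.test(description)) return "integration";
  return "reviewer";
}

// Group tasks per specialist so each agent runs with a narrow,
// domain-only context instead of one wide generalist prompt.
function plan(descriptions: string[]): Map<Specialist, Task[]> {
  const queues = new Map<Specialist, Task[]>();
  descriptions.forEach((description, i) => {
    const specialist = assignSpecialist(description);
    const queue = queues.get(specialist) ?? [];
    queue.push({ id: i + 1, description, specialist });
    queues.set(specialist, queue);
  });
  return queues;
}
```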

Most teams waste time, tokens, and talent here without realizing it.

3. Never Auto-Implement Reviews

Critical rule we enforce: Agents don't implement their own code reviews.

Agents are yes-men during normal coding. But during review phases, they become ruthlessly critical… if you structure it right.

Our review process:

  1. Agent generates review report (does NOT make changes)
  2. Human reviews findings
  3. Human decides what ships
  4. Fresh agent instance executes approved items only
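
In code, the loop looks roughly like this. reviewAgent and executionAgent below are hypothetical handles for whatever agent runtime you use, not a real API:

```typescript
// Sketch of the report-then-approve review loop.

interface Finding {
  id: number;
  severity: "critical" | "major" | "style";
  description: string;
}

async function reviewCycle(
  reviewAgent: (diff: string) => Promise<Finding[]>,
  humanApproves: (finding: Finding) => boolean,
  executionAgent: (approved: Finding[]) => Promise<void>,
  diff: string,
): Promise<void> {
  // Phase 1: the review agent only reports. It never edits code.
  const findings = await reviewAgent(diff);

  // Phase 2: a human decides what ships; style nitpicks die here.
  const approved = findings.filter(humanApproves);

  // Phase 3: a fresh agent instance implements only the approved items,
  // so the reviewer's context never leaks into the implementation.
  await executionAgent(approved);
}
```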

We've run reviews that found 100+ issues. Some were critical security vulnerabilities. Others were style preferences we rejected outright.

Auto-implementation would ship broken code with "improvements" nobody wanted.

We also run competing models. Claude writes code. Codex reviews it. Single-model workflows miss things that multi-model approaches catch.

4. Test-Driven Development (TDD) Is Non-Negotiable

Without explicit TDD mandates, AI agents skip tests like junior developers who don't know better.

This isn't a bug. Agents mimic how most developers actually work. The joke is on us.

The TDD workflow we enforce:

  • Write minimal test
  • Write minimal code to pass
  • Write next test
  • Repeat
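
One turn of that loop, sketched with Node's built-in test runner. parseAmount is a hypothetical example, not code from our repos:

```typescript
// Minimal TDD turn: failing test first, then just enough code to pass.
import { test } from "node:test";
import assert from "node:assert/strict";

// Step 1: write the minimal failing test.
test("parses a denominated amount", () => {
  assert.equal(parseAmount("1.5 EGLD"), 1_500_000_000_000_000_000n);
});

// Step 2: write the minimal code that makes it pass. Nothing more.
function parseAmount(input: string): bigint {
  const [value] = input.split(" ");
  const [whole, frac = ""] = value.split(".");
  const padded = frac.padEnd(18, "0").slice(0, 18); // 18 decimals
  return BigInt(whole) * 10n ** 18n + BigInt(padded);
}

// Step 3: the next test drives the next behavior (negative amounts,
// bad units, overflow) before any of that code exists.
```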

Results: Clean interfaces. Code that works.

Alternative: Agents claiming "it's done" with placeholders, hardcoded values, and missing edge cases everywhere.

We don't negotiate on this. TDD is explicit in every workflow.

5. Enforce Architectural Boundaries

When building full-stack systems, we explicitly block agents from crossing boundaries.

Separate folders. Separate repositories. Frontend agents cannot see backend code. Backend agents cannot see smart contracts.

Why this matters: If an agent can access both sides of an interface, it will "helpfully" solve problems by changing both sides. This breaks the architectural discipline that keeps systems maintainable.

We force clean interfaces instead:

  • Frontend defines what it needs from the API
  • Backend defines what it provides
  • Each side implements to the contract
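
As a sketch, the contract is a file both agents can see while seeing nothing else. The AccountApi interface and its fields below are hypothetical:

```typescript
// Shared contract file. Neither agent sees the other side's code.

// What the frontend declares it needs:
export interface AccountView {
  address: string;
  balance: string; // stringified bigint, avoids JSON precision loss
  nonce: number;
}

// What the backend commits to provide:
export interface AccountApi {
  getAccount(address: string): Promise<AccountView>;
}

// The backend agent implements AccountApi; the frontend agent codes
// against it with a mock. Changing the interface becomes an explicit,
// human-reviewed event instead of a silent edit on both sides.
```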

Intentional friction creates proper design.

For new work, we also always specify: "No backwards compatibility. Remove all migration code." Otherwise agents generate thousands of tokens' worth of complexity for code that never shipped.

6. Manage Context Deliberately

Agents degrade after about an hour. Error rates spike. Context windows get polluted with earlier assumptions.

Our practice: New agent instance every hour. Fresh instance for every review.

A review agent seeing code for the first time catches issues that a long-running session would rationalize away.
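
A minimal sketch of hourly rotation, assuming a hypothetical AgentSession handle for whatever runtime you use:

```typescript
// Rotate sessions before context pollution compounds.

const MAX_SESSION_MS = 60 * 60 * 1000; // agents degrade after ~1 hour

interface AgentSession {
  startedAt: number;
  ask(task: string): Promise<string>;
}

function needsRotation(session: AgentSession): boolean {
  return Date.now() - session.startedAt > MAX_SESSION_MS;
}

async function run(tasks: string[], newSession: () => AgentSession) {
  let session = newSession();
  for (const task of tasks) {
    // The fresh instance re-reads the spec instead of inheriting
    // an hour of stale assumptions.
    if (needsRotation(session)) session = newSession();
    await session.ask(task);
  }
}
```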

We've also found: Don't try to manage more than 3 parallel projects per engineer. The human orchestrating the agents becomes the bottleneck.

Context management is not optional. It's architecture.

7. Optimize Architecture, Not Budget

Most people think serious AI development requires $200/month enterprise plans.

Most of our engineers run on the $20/month Pro tiers.

The difference isn't budget. It's architecture.

Token optimization through:

  • Detailed planning first (drastically cuts implementation tokens)
  • Specialized agents (10x efficiency vs generalists)
  • Small task breakdowns (each task uses fewer tokens)
  • Fresh sessions (prevents exponential context growth)

We also use different models for different phases:

Claude Opus 4.5 → Planning and architecture
Codex 5.2 → Critical review, finding omissions
Gemini/Jules → Language-specific expertise
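
Expressed as configuration, the routing is a simple map. The phase names and model identifier strings below are illustrative, not a real config format:

```typescript
// Sketch of phase-based model routing.

type Phase = "planning" | "implementation" | "review";

const MODEL_FOR_PHASE: Record<Phase, string> = {
  planning: "claude-opus-4.5",    // architecture and master plans
  implementation: "gemini-jules", // language-specific execution
  review: "codex-5.2",            // adversarial review, finds omissions
};

function modelFor(phase: Phase): string {
  return MODEL_FOR_PHASE[phase];
}
```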

If you're burning through token limits, your architecture is wrong.

What This Looks Like In Production

Here's a typical production cycle using our system:

Days 1-2: Interview sessions → specifications → security audit on specs → iteration until solid

Day 3: Planner creates implementation plan → breaks into tasks → assigns specialists

Days 4-5: Specialists execute (backend, frontend, integrations, documentation) with TDD throughout

Day 6: Multi-specialist review → domain experts generate recommendations

Day 7: Human reviews recommendations → selects critical items → execution agent implements → final review → ship

Total: 2 days planning, 5 days execution.

Without this architecture: Weeks of chaotic iteration.

The multiplier effect: We run several of these production cycles in parallel. Different engineers, different repos, same proven system. This is how you get to 20x.

Why This Matters for MultiversX

We're not experimenting for fun. We're pioneering agent-native workflows because we have to.

MultiversX is building blockchain infrastructure for the agentic internet. Systems that AI agents will use at scale.

Our development practices and our infrastructure are converging toward the same future. The learnings compound.

Building with agents on infrastructure built for agents creates feedback loops most teams won't see for years.

These 7 principles survived real-world testing across our engineering team.

Core insights that proved durable:

  • Specialization beats generalization
  • Deliberate context management prevents degradation
  • Multi-phase workflows with human oversight work
  • Architecture matters more than model quality

We're Building in the Open

Development is becoming agent orchestration. Teams figuring this out in 2026 will 10x their velocity.

Teams treating AI as autocomplete will fall behind.

We're open-sourcing our MultiversX AI development toolkit: the actual workflows, agent skills, and best practices we use to build production systems at scale. Optimized for the most advanced blockchain architecture ready for agentic applications.

The future of development isn't coming. We're shipping it right now.

Build the agentic internet with us

Lucian Mincu
Co-founder & CIO

Lucian Mincu, co-founder and CIO of MultiversX, is a self-taught tech prodigy, previously an engineer at Uhrenwerk 24, Cetto, and Liebl Systems, and co-founder & CTO of MetaChain Capital. His ability to navigate complex challenges and carve out solutions is nothing short of extraordinary, making him a driving force behind the success of MultiversX.
