Microservices vs. Monolith: Architecting for Scale in 2026
The microservices pendulum has swung back toward the center. After a decade of conference talks promoting microservices as the default architecture, the industry collectively acknowledged what experienced engineers have been saying since the beginning: distributed systems are hard, and the complexity is not free.
This note covers where the consensus actually sits in 2026, when microservices genuinely make sense, and the practical signals that tell you it is time to split a monolith.
What Actually Happened
The 2015-2020 microservices rush was driven by companies looking at Netflix, Amazon, and Google and concluding that “if large-scale companies use microservices, we should too.” The flaw in this reasoning was ignoring that those companies adopted microservices to solve problems that came with having hundreds of engineering teams and millions of users — problems that a 20-person startup does not have.
The consequences were predictable. Teams that split prematurely spent enormous effort on service discovery, distributed tracing, network reliability, data consistency, and deployment orchestration — infrastructure work that consumed the engineering capacity that should have gone into building the product.
By 2023, high-profile companies started publicly moving back toward monolithic or “modular monolith” architectures. Amazon Prime Video’s team published a case study showing a 90% cost reduction by consolidating microservices into a monolith. Shopify talked about their modular monolith approach. The industry started having an honest conversation about when distribution is worth the cost.
The Monolith Advantage
A well-structured monolith has properties that microservices architectures spend enormous effort trying to recreate:
Local function calls. A call within the same process takes nanoseconds; a call across a network takes milliseconds at best. When your “services” are actually modules within a monolith, you get the boundary benefits without the network cost.
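The gap is easy to measure on the in-process side. A minimal sketch (a network hop, by contrast, adds serialization plus a round trip on top of this):

```python
import timeit

# A trivial in-process function call, the kind a module boundary
# inside a monolith compiles down to.
def add(a, b):
    return a + b

# Average cost per call in nanoseconds over a million iterations.
total_s = timeit.timeit("add(1, 2)", globals=globals(), number=1_000_000)
per_call_ns = total_s / 1_000_000 * 1e9
# Typically well under a microsecond, versus roughly a millisecond
# or more for an HTTP call to another service.
```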
Transactional consistency. A database transaction in a monolith is ACID. Distributed transactions across microservices require sagas, compensating actions, and eventual consistency patterns that are complex to implement and harder to debug.
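A minimal sketch of what this buys you, using SQLite and a hypothetical accounts table: both updates commit together or roll back together, with no saga or compensating action.

```python
import sqlite3

# In a monolith, moving money between two accounts is one ACID
# transaction. Table and column names here are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance INTEGER)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100), ("bob", 0)])
conn.commit()

try:
    with conn:  # one transaction: both UPDATEs commit, or neither does
        conn.execute(
            "UPDATE accounts SET balance = balance - 30 WHERE id = 'alice'")
        conn.execute(
            "UPDATE accounts SET balance = balance + 30 WHERE id = 'bob'")
except sqlite3.Error:
    pass  # the context manager already rolled back; nothing to compensate

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
```

Split those two accounts across two services and the same guarantee requires a saga with an explicit compensating update for every failure path.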
Simple deployment. Deploy one artifact. If it works, everything works. If it breaks, roll back one thing. Microservices introduce deployment ordering, API version compatibility, and the possibility that service A deploys successfully but breaks service B.
Easy refactoring. Moving code between modules in a monolith is a refactoring task. Moving code between microservices is a migration project that involves API changes, data migration, and coordination across teams.
When Microservices Actually Help
Microservices solve three specific problems that a monolith cannot:
Independent scaling. If one part of your system handles 100x more traffic than another, running them as separate services lets you scale them independently. But “independent scaling” means you actually need different resource profiles — not that your traffic is growing generally.
Independent deployment by independent teams. When you have 10 teams working on different parts of the system and they cannot deploy without coordinating with each other, service boundaries aligned to team boundaries reduce coordination overhead. This is Conway’s Law in action. But if you have one team, or two teams that communicate well, this benefit does not apply.
Technology heterogeneity. If one component genuinely benefits from a different language, runtime, or database than the rest of the system, a service boundary lets you use the best tool for each job. This is a real but rare requirement. Most systems work fine with one language and one database.
The Modular Monolith Pattern
The approach that has gained the most traction in 2026 is the modular monolith: a single deployable artifact with strict internal boundaries.
The key practices:
Enforce module boundaries in code. Use language-level access control (packages, modules, visibility modifiers) to prevent modules from reaching into each other’s internals. Each module exposes a defined API and hides its implementation.
Separate databases logically. Each module owns its tables. Cross-module data access goes through the module’s API, not direct database queries. This prepares for future extraction without requiring it now.
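A sketch of the rule, with hypothetical billing and orders modules sharing one physical database: billing is the only code that touches its own tables, and orders asks billing through a function call instead of a join.

```python
import sqlite3

# One physical database, logically partitioned: the billing module
# owns billing_invoices. All names here are hypothetical.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE billing_invoices (order_id TEXT, paid INTEGER)")
db.execute("INSERT INTO billing_invoices VALUES ('o1', 1)")

# --- billing module: the ONLY code allowed to query billing_invoices ---
def invoice_is_paid(order_id: str) -> bool:
    row = db.execute(
        "SELECT paid FROM billing_invoices WHERE order_id = ?", (order_id,)
    ).fetchone()
    return bool(row and row[0])

# --- orders module: goes through billing's API, never its tables ---
def can_ship(order_id: str) -> bool:
    # Not "SELECT ... FROM billing_invoices": if billing is ever
    # extracted into a service, only this one call site changes.
    return invoice_is_paid(order_id)
```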
Test boundaries explicitly. Write integration tests at module boundaries. If module A calls module B’s API, test that interaction. This catches boundary violations early and documents the actual contracts between modules.
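A sketch of such a boundary test, reusing the hypothetical billing API from above: the test exercises module B's public function exactly the way module A calls it, pinning down the parts of the contract A actually depends on.

```python
# Stand-in for billing's public API; in the real codebase this would
# be imported from the billing module, not redefined here.
def create_invoice(customer_id: str, amount_cents: int) -> dict:
    return {"customer": customer_id, "amount": amount_cents, "status": "open"}

def test_orders_can_open_an_invoice():
    # This is the call the orders module actually makes at the boundary.
    invoice = create_invoice("customer-42", 1999)
    # Pin only the fields orders depends on; anything else is free to change.
    assert invoice["status"] == "open"
    assert invoice["amount"] == 1999
```

Run under pytest, a failure here means the contract between the two modules changed, not that some deep internal broke.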
Monitor module performance independently. Instrument each module with its own metrics. If module C is slow, you can see that without distributed tracing. If you later extract module C into a service, you already have the performance baseline.
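One lightweight way to do this is a decorator that tags every timing with its module name. This sketch records into an in-process dict; a real system would ship the same measurements to a metrics backend such as Prometheus or StatsD. All names are hypothetical.

```python
import time
from collections import defaultdict
from functools import wraps

# Per-module latency samples, keyed "module.function.ms".
metrics = defaultdict(list)

def timed(module: str):
    """Record the wall-clock duration of each call under the module's name."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                metrics[f"{module}.{fn.__name__}.ms"].append(elapsed_ms)
        return wrapper
    return decorator

@timed("billing")
def create_invoice(customer_id: str) -> str:
    return f"invoice-for-{customer_id}"

create_invoice("c1")
```

If billing is later extracted into a service, these same metric names become the baseline you compare the service's latency against.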
The result: you get most of the organizational benefits of microservices (clear ownership, defined interfaces, independent development) without the operational cost of distributed systems. When you actually need to extract a service — because you have a concrete scaling or deployment problem — the module boundary gives you a natural extraction point.
Decision Signals
Extract a service when:
- A specific module has dramatically different scaling requirements
- Teams are blocked on each other’s deployments despite clean module boundaries
- A component requires a fundamentally different runtime or technology
- Regulatory requirements mandate physical separation of certain data processing
Keep it in the monolith when:
- “We might need to scale this independently someday”
- “Microservices are the modern way”
- “Netflix does it”
- The team is small enough to coordinate deployments without friction
The honest answer for most teams in 2026: start with a modular monolith, extract services when you feel the pain, and resist the urge to distribute things prematurely. Operating software is stressful enough without adding distributed-systems debugging to the mix.