The Monolith That Grew Too Big
Our legacy system started small and focused. Over the years, it accumulated features, integrations, and responsibilities until it became a monolith that everyone was afraid to touch. Deploy times were measured in hours. A bug in one module could bring down the entire system. Scaling meant scaling everything, even the parts that didn't need it.
When we got the mandate to modernize EUCARIS for multiple European countries, we knew the monolithic architecture wouldn't cut it. Different countries had different deployment schedules, different scale requirements, and different regulatory constraints. We needed modularity, independent deployability, and the ability to scale components independently.
The Strangler Fig Pattern
We didn't have the luxury of a complete rewrite. The system was in active use, processing critical cross-border vehicle data daily. Our approach was the Strangler Fig Pattern—gradually building new microservices around the edges of the monolith, redirecting traffic to them, and letting the old code slowly wither away.
Our first extraction was the authentication service. It was relatively isolated, had clear boundaries, and if we got it wrong, we could roll back without affecting core functionality. This became our learning ground for microservice patterns: API contracts, service discovery, distributed logging, and inter-service communication.
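To make the idea concrete, here's a minimal sketch of the routing facade that makes strangling possible. It uses Express and http-proxy-middleware as stand-ins (our actual gateway was different), and the service URLs are placeholders:

```typescript
import express from "express";
import { createProxyMiddleware } from "http-proxy-middleware";

const app = express();

// The extracted authentication service takes over its slice of traffic.
// Service URLs are placeholders for wherever each deployment lives.
app.use(
  "/auth",
  createProxyMiddleware({ target: "http://auth-service:8080", changeOrigin: true })
);

// Everything not yet extracted still falls through to the monolith.
app.use(
  "/",
  createProxyMiddleware({ target: "http://legacy-monolith:8080", changeOrigin: true })
);

app.listen(3000);
```

Each newly extracted service gets its own route above the catch-all, and rolling back an extraction is as simple as deleting the route.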
Defining Service Boundaries
The hardest part wasn't writing microservices—it was deciding where to draw the boundaries. Too fine-grained, and you end up with a distributed monolith: all the coupling of the original, plus network overhead. Too coarse, and you haven't solved the original problem.
We used Domain-Driven Design principles to identify bounded contexts. The Vehicle Registry was a natural service boundary. The Notification System was another. The Payment Processing module had clear boundaries. But some areas were messier—functionality that touched multiple domains and had dependencies everywhere.
Our rule became: if two pieces of functionality change for different reasons or at different rates, they should be separate services. If they always change together, maybe they belong in the same service.
The Communication Challenge
In a monolith, communication between modules is just method calls. In microservices, it's HTTP requests with latency, network failures, and versioning concerns. We had to rethink how components talked to each other.
For synchronous communication, we used REST APIs with clear contracts and versioning strategies. For asynchronous workflows, we introduced message queues (Azure Service Bus). Events like "Vehicle Registered" or "Data Updated" were published to topics, and interested services subscribed. This decoupled services and made the system more resilient.
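As an illustration, here's roughly what that publish/subscribe flow looks like with the @azure/service-bus SDK. The topic and subscription names and the event payload are made up for the example:

```typescript
import { ServiceBusClient } from "@azure/service-bus";

// Connection string, topic, and subscription names are placeholders.
const client = new ServiceBusClient(process.env.SERVICE_BUS_CONNECTION!);

// Publisher side: the Vehicle Registry announces a fact and moves on;
// it has no knowledge of who consumes the event.
async function publishVehicleRegistered(vin: string, country: string) {
  const sender = client.createSender("vehicle-events");
  await sender.sendMessages({
    subject: "VehicleRegistered",
    body: { vin, country, registeredAt: new Date().toISOString() },
  });
  await sender.close();
}

// Subscriber side: the Notification System reads the same topic through
// its own subscription, at its own pace.
const receiver = client.createReceiver("vehicle-events", "notifications");
receiver.subscribe({
  async processMessage(msg) {
    console.log(`Handling ${msg.subject} for VIN ${msg.body.vin}`);
  },
  async processError(args) {
    console.error("Message handling failed:", args.error);
  },
});
```

The publisher doesn't know who is listening, and each subscriber processes events through its own subscription, which is what makes the decoupling real.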
This event-driven style also introduced new complexity: distributed transactions, eventual consistency, and the need for saga patterns to coordinate multi-service operations. The CAP theorem became very real: whenever the network partitioned, we had to choose between consistency and availability.
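For readers unfamiliar with sagas, the core idea is that every step in a multi-service operation has a compensating action that undoes it. Here's a minimal orchestration-style sketch, with hypothetical step names standing in for real service calls:

```typescript
// Each saga step pairs a forward action with a compensation that undoes it.
interface SagaStep {
  name: string;
  action: () => Promise<void>;
  compensate: () => Promise<void>;
}

// Run steps in order; if one fails, run the compensations of the
// already-completed steps in reverse order, then surface the failure.
async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.action();
      completed.push(step);
    } catch (err) {
      for (const done of completed.reverse()) {
        await done.compensate(); // real sagas add retries and dead-lettering here
      }
      throw new Error(`Saga failed at step "${step.name}": ${err}`);
    }
  }
}

// Hypothetical cross-service registration flow; each function would call
// a different service over HTTP or messaging.
const reserveRegistration = async () => { /* Vehicle Registry */ };
const releaseRegistration = async () => { /* Vehicle Registry */ };
const chargeFee = async () => { /* Payment Processing */ };
const refundFee = async () => { /* Payment Processing */ };

runSaga([
  { name: "reserve-registration", action: reserveRegistration, compensate: releaseRegistration },
  { name: "charge-fee", action: chargeFee, compensate: refundFee },
]).catch(console.error);
```

Real implementations add persistence, retries, and idempotency, but compensate-in-reverse is the heart of the pattern.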
Observability: The New Requirement
Debugging a monolith meant setting breakpoints and following the code. Debugging microservices meant tracing requests across multiple services, correlating logs, and understanding system behavior from distributed events.
We invested heavily in observability: structured logging with correlation IDs, distributed tracing with Application Insights, health check endpoints on every service, and comprehensive metrics dashboards. When something went wrong, we needed to quickly answer: which service failed, what was the request path, and what was the state of downstream dependencies?
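To show what the logging side of that looks like, here's a stripped-down sketch of correlation-ID propagation and a health endpoint in Express. The x-correlation-id header name is a convention, not a standard; the point is to pick one and use it in every service:

```typescript
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();

// Attach (or propagate) a correlation ID so that log lines emitted by
// different services for the same request can be joined afterwards.
app.use((req, res, next) => {
  const correlationId = req.header("x-correlation-id") ?? randomUUID();
  res.setHeader("x-correlation-id", correlationId);
  // Structured log entry: one JSON object per line, always carrying the ID.
  console.log(
    JSON.stringify({
      level: "info",
      msg: "request received",
      method: req.method,
      path: req.path,
      correlationId,
    })
  );
  next();
});

// Health check endpoint polled by the orchestrator and the dashboards.
app.get("/health", (_req, res) => {
  res.json({ status: "healthy", uptimeSeconds: Math.round(process.uptime()) });
});

app.listen(8080);
```

Downstream calls forward the same header, so one grep across aggregated logs reconstructs the full request path.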
The Trade-offs
Microservices aren't a silver bullet. We gained independent deployability, scalability, and team autonomy. But we paid for it with increased operational complexity, more sophisticated monitoring requirements, and the challenges of distributed systems.
- Wins: Deploy one service without affecting others. Scale bottlenecks independently. Teams own their services end-to-end
- Costs: Network latency. More infrastructure to manage. Distributed debugging. Eventual consistency challenges
- Surprises: Version management across services. Data duplication and synchronization. The importance of contract testing (sketched after this list)
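Contract testing deserves a quick illustration. Tools like Pact formalize it, but the essence is simple: the consumer states the response shape it depends on, and the provider verifies that shape on every build. A plain-TypeScript sketch against a hypothetical /vehicles endpoint:

```typescript
// A consumer-owned contract: the minimal shape this consumer relies on.
interface VehicleContract {
  vin: string;
  country: string;
  status: "registered" | "deregistered";
}

// Provider verification: fetch a known test record and check that every
// field the consumer depends on is present with the expected type.
async function verifyVehicleContract(baseUrl: string): Promise<void> {
  const res = await fetch(`${baseUrl}/vehicles/TEST-VIN`);
  if (!res.ok) throw new Error(`Provider returned ${res.status}`);
  const body = (await res.json()) as Partial<VehicleContract>;

  if (typeof body.vin !== "string") throw new Error("contract broken: vin");
  if (typeof body.country !== "string") throw new Error("contract broken: country");
  if (body.status !== "registered" && body.status !== "deregistered") {
    throw new Error("contract broken: status outside agreed values");
  }
}

// Run against a locally deployed provider before releasing it.
verifyVehicleContract("http://localhost:8080").catch((err) => {
  console.error(err);
  process.exit(1);
});
```

The provider can change anything the contract doesn't mention; the test only fails when a consumer-visible promise breaks.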
Lessons for the Journey
- Start with why: Don't go microservices because it's trendy. Have clear reasons tied to business needs
- Strangle, don't rewrite: Incremental migration reduces risk and maintains business continuity
- Invest in DevOps: Microservices require strong automation for building, testing, and deployment
- Embrace observability early: You can't debug what you can't see. Logging and monitoring aren't optional
- Accept eventual consistency: Not everything needs to be immediately consistent. Design for it
- Team boundaries matter: Align services with team structure. Conway's Law is real
Where We Are Now
Two years into the journey, we have a hybrid architecture. Core services are microservices. Some legacy functionality remains in the monolith, gradually being extracted. New features are built as services from day one. Deploy frequency increased from monthly to daily. Incidents are localized to individual services instead of taking down the whole system.
Was it worth it? For our use case—a system serving multiple countries with different requirements—absolutely. Would I recommend it for a small team with a simple domain? Probably not. Architecture should serve your constraints, not the other way around.