How to Break a Monolith into Microservices

Alex Hyett - May 7 '23 - Dev Community

I recently wrote about the differences between monoliths and microservices and which you should pick when designing a new system. In most cases, companies will start with an existing monolithic application that they want to break down.

As engineering challenges go, breaking apart a monolith into microservices can be a daunting one. I have been in charge of breaking apart a few legacy applications in my career. In this article, I will share all the lessons I have learned.

🔎 Analysing the current system

The first step is to analyse everything that your application is doing. It is important to understand all the responsibilities that your monolith has accumulated over time.

📝 White-boarding

This is usually best done as a white-boarding exercise with both product and engineering. If you are working remotely, then Miro is also a good choice.

It is easy to spend too much time on this exercise by outlining every single endpoint and the various services and databases it touches. At this stage, I would advise against that, as you will hit diminishing returns by going into too much detail.

What you come up with should be understandable by the whole team. Stick to high-level functionality and outline the key parts of the application.

Don't forget to include supporting functionality as well, such as auditing.

Auditing is especially important if you work in a regulated industry that has strict auditing requirements. You don't want to break apart your monolith and realise the new services are not auditing properly.

📍 Identify key domains

One of the key goals of splitting apart a monolith is to be able to release the resulting services independently.

The last thing you want is a spaghetti mess of dependencies that has teams working across many repositories and shipping multiple components with every release.

Using Domain-Driven Design can be helpful when working out what a good domain looks like, so I would advise doing some reading on the topic. It is a difficult subject but having at least some knowledge will be beneficial.

Start by outlining the supporting functionality that most applications have such as:

  • Auditing
  • Logging
  • Authentication
  • Document Storage

We often follow the Don't Repeat Yourself (DRY) principle when writing code for a single application, but we rarely follow it across multiple applications.

Moving these supporting functions into their own microservices can allow other teams to use them, saving them from having to reinvent the wheel each time. There are often a host of other benefits that come from having these centralised as well.

A key concept of microservices is that every service is responsible for its own database. This can be one of the hardest concepts to get your head around when moving from a centralised database structure.

It is important to think about data ownership when outlining your domains.

Two microservices can't share the same database; access to another microservice's data has to go via that microservice's API. If you take a look at your database tables, you might be able to outline some domains from tables that are regularly joined together.

When one microservice does need to reference data from another service it should use an ID. For example, services can reference a UserId but the "source of truth" of the user data will remain in the user microservice.
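
To make data ownership concrete, here is a minimal sketch of an order service that stores only the UserId and fetches user details through the user service's API instead of joining tables. The endpoint path and field names are assumptions for illustration, not a prescribed design.

```python
# A minimal sketch of referencing data owned by another service by ID.
# The order service stores only the user_id; user details always come
# from the user service's API (hypothetical endpoint), never its tables.
from dataclasses import dataclass
import json
import urllib.request


@dataclass
class Order:
    order_id: str
    user_id: str   # reference only - the user microservice owns the user data
    total: float


class UserServiceClient:
    """Thin client for the user service; the endpoint below is illustrative."""

    def __init__(self, base_url: str):
        self.base_url = base_url

    def get_user(self, user_id: str) -> dict:
        # e.g. GET {base_url}/users/{user_id}
        with urllib.request.urlopen(f"{self.base_url}/users/{user_id}") as resp:
            return json.load(resp)


def order_summary(order: Order, users: UserServiceClient) -> dict:
    user = users.get_user(order.user_id)  # no cross-database join
    return {"order": order.order_id, "customer": user["name"], "total": order.total}
```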

🎯 Picking the right candidates

When moving from a monolith to a microservice architecture, you should do everything possible to avoid a big bang 💥, where you switch from one architecture to the other all at once.

The better approach is to make incremental changes towards your ideal architecture.

"There is only one way to eat an elephant: a bite at a time" - Desmond Tutu

I recommend starting with a proof of concept.

Pick a small part of your system that isn't critical to the business and is well isolated. If the migration runs into difficulties, it should be something that won't have the business yelling the moment it goes offline.

A good candidate would be a new feature that you are developing. If you are planning on changing architectures then there is little benefit in adding it to the existing monolith. Any new feature that you are developing should be a microservice if possible.

I have seen companies try to migrate to a new system while still adding new features to the old one, and it drags out the timelines: you are constantly chasing feature parity while the goalposts keep moving.

For your first microservice, there will be a lot of infrastructure that needs to be set up. Once this has been tackled, subsequent microservices should be easier to deploy either by reusing parts of the infrastructure or by using templates created in the process.

Pick something simple that is well-defined and doesn't rely on too many other components. Once you are confident that everything works you can start moving on to bigger parts of the system.

Services that are subject to frequent change are good candidates for microservices. This is where you will see the most benefit as developers will be able to deliver value faster.

Businesses often get frustrated by big migration projects as they will inevitably slow down the development of new features. If you can show early on that the changes you are making are having a positive impact on deliverables, then the business is more likely to be sympathetic towards the project.

When splitting up a monolith for the first time, it is better to create mini-monoliths around logical domains rather than true microservices.

Think "macroservice" rather than microservice.

Each microservice comes with its own infrastructure and operational overheads. If you try to go for the ideal architecture from day one, you will often overwhelm the team. It is better to have larger services and split them up once the teams are operationally ready.

✨ New or Reuse

In the name of speed, it can be tempting to copy and paste code verbatim from the monolith to the microservice. This might work for small well-defined classes containing business logic but it isn't always the best approach.

Take this as an opportunity to go over the requirements for the service and see whether all the functionality is still needed or whether it can be refined. The application has likely grown into an unmanageable monolith over time and picked up some unnecessary baggage as a result. You don't want to carry that baggage with you to the shiny new system.

Depending on how old the code is, it will likely benefit from a technology refresh. It is unlikely that the tech stack that was picked 5 years ago when the code was written is still the best choice today.

While it is tempting to completely rewrite everything, it can be beneficial to maintain some compatibility with the original implementation. You will likely need to rewrite the unit tests, but if the contract remains the same, most of the integration and end-to-end tests should still work. This is also a great way to check that you haven't broken anything in the process.

Sometimes, you may need to call on functionality that still exists inside the monolith. The best way to do this is to expose a new endpoint in the monolith and then call this endpoint from the microservice.

You will need to add an anti-corruption layer between the microservice and the monolith to ensure you don't couple them together. Once that functionality is also removed from the monolith it should just be a case of updating the endpoint URL.
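
As a rough illustration, the anti-corruption layer can be as simple as an adapter that hides the monolith's endpoint and response shape behind an interface the microservice owns. The endpoint path and field names below are made up for the example.

```python
# A sketch of an anti-corruption layer, assuming the monolith exposes a
# hypothetical GET /api/customers/{id} endpoint. The microservice only ever
# depends on CustomerProvider and its own Customer model, so the monolith's
# data shapes never leak into the new code.
from dataclasses import dataclass
from typing import Protocol
import json
import urllib.request


@dataclass
class Customer:
    customer_id: str
    email: str


class CustomerProvider(Protocol):
    def get_customer(self, customer_id: str) -> Customer: ...


class MonolithCustomerAdapter:
    """Translates the monolith's response into the microservice's own model."""

    def __init__(self, monolith_base_url: str):
        self.monolith_base_url = monolith_base_url

    def get_customer(self, customer_id: str) -> Customer:
        url = f"{self.monolith_base_url}/api/customers/{customer_id}"
        with urllib.request.urlopen(url) as resp:
            raw = json.load(resp)
        # The monolith's field names are confined to this one place.
        return Customer(customer_id=raw["CustomerID"], email=raw["EmailAddress"])
```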

🚚 Migration Strategies

To avoid the big bang, it is important to have a well-thought-out migration strategy in place.

Migrations can be tricky especially when data is involved. There are 3 main data areas that you need to think about:

👴 1. Old Data

Data written by the monolith to the central database will need to be migrated. How you do this will depend on whether users need to access data that has been written in the past. If they only need to access new data then your job becomes a lot easier.

Depending on the complexity and size of the data, you may need to build a few migration tools, so make sure you include that in your estimates.

👶 2. New Data

Any new data will be written to the microservice database. If you are using incremental IDs then you will need to make sure that the new IDs start at a number higher than the last ID in the old database. This is to ensure that there are no conflicts when you import the old data into your system.

Make sure to include a buffer between the old IDs and the new IDs in case old records still get written or you need to revert to the old system.
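
As a back-of-the-envelope sketch (the numbers are made up), picking where the new IDs start is just the old maximum plus a healthy buffer:

```python
# Choosing the starting ID for the new service's database. The buffer is
# arbitrary; size it to cover any records the monolith might still write
# before the cutover completes, or after a rollback.
OLD_MAX_ID = 1_048_576   # highest ID currently in the monolith's table
BUFFER = 100_000         # headroom for late writes or a revert

new_sequence_start = OLD_MAX_ID + BUFFER
print(f"Start the new service's ID sequence at {new_sequence_start}")
```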

✈️ 3. In-Flight Data

It is easy to forget about in-flight data. Most data written to your application will be associated with a process. If that process is still ongoing then your application will likely break when it can't access the old data.

If you are working on an e-commerce system, for example, there could be issues if someone attempts a refund on an order from the old system that can't be found in the new one.

It is important to map out all the scenarios where this can happen and have a plan in place to deal with them. This could be as simple as checking the new system first and if the data can't be found checking the old system until the migration has been completed.
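
That check might look something like the sketch below; both lookup functions are hypothetical stand-ins for calls to the new microservice and the old monolith.

```python
# "Check the new system first, then fall back to the old one" for in-flight
# data. The fallback branch disappears once the data migration is complete.
from typing import Optional


def find_order_in_new_service(order_id: str) -> Optional[dict]:
    ...  # stand-in for a call to the new order microservice


def find_order_in_monolith(order_id: str) -> Optional[dict]:
    ...  # stand-in for a call to the legacy monolith


def find_order(order_id: str) -> Optional[dict]:
    order = find_order_in_new_service(order_id)
    if order is not None:
        return order
    # e.g. a refund against an order that was created before the migration
    return find_order_in_monolith(order_id)
```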

🚦 Controlling Traffic

There are a few approaches you can take to migrate traffic from the monolith to the microservice.

The simplest option is to call the microservice from the monolith. This is sometimes the only option when there is a data migration involved.

  1. Decouple functionality into a new microservice.
  2. Add code to the monolith to call the microservice. This can include falling back to the original data store when data can't be found, or controlling the flow with a feature flag (see the sketch after this list).
  3. Migrate old data from monolith to microservice.
  4. Remove the old code path from the monolith.
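
Step 2 might look something like this sketch inside the monolith; the flag name and helper functions are hypothetical.

```python
# Routing inside the monolith, controlled by a feature flag so the change can
# be turned off instantly if something goes wrong.
def is_enabled(flag_name: str) -> bool:
    ...  # whatever feature-flag mechanism the monolith already uses


def get_invoice_from_microservice(invoice_id: str) -> dict:
    ...  # HTTP call to the new invoice microservice


def get_invoice_from_local_db(invoice_id: str) -> dict:
    ...  # the original monolith code path


def get_invoice(invoice_id: str) -> dict:
    if is_enabled("use-invoice-microservice"):
        return get_invoice_from_microservice(invoice_id)
    return get_invoice_from_local_db(invoice_id)
```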

For business-critical functionality, you often need more control than what can be achieved with the above steps.

In these cases, it is better to move to an API gateway that can control the flow of individual endpoints. This is likely the end state you will want anyway once you no longer have a monolith.

A conservative approach could be the following (sketched in code after the list):

  1. Shadow requests to the microservice while still serving the users from the monolith. This allows you to "test in production" without impacting the users.
  2. Divert a small percentage of traffic to the new microservice. Depending on the type of business you may want to segment the traffic. For example, only users on the free plan get diverted so you don't annoy your paying customers if there are problems. This can also be a way to get around the old data problem by only diverting requests associated with new processes.
  3. Increase the amount of traffic while monitoring the progress until you get to 100%.
  4. Migrate any data from the old database.
  5. Remove the old code path from the monolith.
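
Put together, steps 1 to 3 might look roughly like this at the gateway level; the handler functions are hypothetical stand-ins rather than any specific gateway's API.

```python
# Shadow traffic to the microservice and divert a configurable, sticky
# percentage of real requests to it.
import hashlib


def call_monolith(request: dict) -> dict: ...       # existing backend
def call_microservice(request: dict) -> dict: ...   # new backend
def log_shadow_diff(request: dict, live: dict, shadow: dict) -> None: ...


def bucket_for(user_id: str) -> int:
    """Map a user to a stable bucket 0-99 so routing is sticky per user."""
    return int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100


def handle_request(user_id: str, request: dict, percent_to_microservice: int) -> dict:
    if bucket_for(user_id) < percent_to_microservice:
        # Steps 2-3: this user falls in the diverted slice, which grows over time.
        return call_microservice(request)

    # Step 1: everyone else is still served by the monolith, but the request is
    # shadowed to the microservice so responses can be compared offline.
    live = call_monolith(request)
    shadow = call_microservice(request)  # in practice, fire-and-forget / async
    log_shadow_diff(request, live, shadow)
    return live
```

Segmenting by plan instead of by hash bucket only changes the condition; the overall shape stays the same.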

Whichever approach you take, it is important that you see it through to completion. I have seen too many teams get stuck halfway and end up maintaining two systems instead of one.

💭 Final Thoughts

Migrations can take years and it is not uncommon for businesses to give up halfway through. Priorities change and key supporters can end up leaving. This is why the piecemeal approach is often better than trying to change everything at once.

In most cases, partial decomposition of the monolith is all that is needed to see benefits. A full microservice architecture comes with a lot of infrastructure and operational costs that aren't always worth the effort. Amazon's Prime Video monitoring team recently switched back to a monolith and cut their infrastructure bill by 90%.

In Amazon's case, they relied too heavily on serverless components and were hitting their scaling limits. For low-traffic systems, there are considerable savings in moving to a serverless architecture, especially once you factor in the free tiers. For high-traffic systems, dedicated resources are usually the more cost-effective option.

Amazon's other cost savings came from no longer writing large amounts of data to an intermediary S3 bucket. This is a great example of going too far with microservices. If you need to share a large amount of data between microservices, they may be better as one service.

It is easy in hindsight to say what they should have done but it is incredibly hard to get it right the first time. What may work for your system now could become a bottleneck in 6 months' time when traffic has increased. It is always best to take a cautious approach and reevaluate throughout.

❤️ Picks of the Week

📝 Article - Writing is a game-changer for every software developer. Writing has helped me a lot in my career; at some point, it may become my full-time job. This is a great article about the benefits of writing as a software developer that is worth a read.

📝 Article - Managing the Overload of New Tech: My Own Quest to Keep Up. Trying to keep up with all the new technologies is a constant battle for software developers. This article has some great tips worth reading.

📚 Book - This week I am reading The Tipping Point by Malcolm Gladwell (affiliate link), an interesting book on the social dynamics that cause ideas, trends, and diseases to spread. I am hoping to pick up some tips to grow my YouTube channel.

📝 Article - Scaling up the Prime Video audio/video monitoring service and reducing costs by 90%. As mentioned in the article, Amazon reduced costs by 90% by moving from microservices to a monolith. Yes, that isn't a typo. Microservices aren't always the best option especially if there is a lot of data moving around or high infrastructure costs.

👨‍💻 Latest from me

Everyone seemed to like last week's issue on how to increase your chances of getting promoted, and 50 more people have subscribed as a result. So welcome if you are new here!

🎬 YouTube - 5 Uncomfortable Truths About Software Engineering. Working as a software developer isn't always as inclusive and great as people make it out to be. I can't in good conscience teach people about software engineering without covering some of the negatives as well.

Of the five I cover in this video, diversity is the one I feel is the biggest and the hardest to solve. According to some surveys, as much as 91.88% of software developers identify as male.

As a father to 2 girls, I do feel a duty to try and at least make them aware of tech and the possibilities beyond, "Daddy does techy techy stuff".

💬 Quote of the Week

Find people on the Internet who love the same things as you and connect with them.

From Steal Like an Artist (affiliate link) by Austin Kleon. Resurfaced with Readwise.


📨 Are you looking to level up your skills in the tech industry?

My weekly newsletter is written for engineers like you, providing you with the tools you need to excel in your career. Join here for free →
