In the ever-evolving landscape of modern applications and cloud native architectures, the need for efficient, scalable, and secure communication between services is paramount.
If you are still in doubt about your own organization, just take a look at your workloads: if you are deploying more and more services and observing those services is becoming a challenge, your organization probably needs a service mesh.
My purpose is to showcase the capabilities of the service mesh concept on Amazon Web Services (AWS) with Terraform. AWS App Mesh is AWS's implementation of the mesh concept, and its primary purpose is to let developers focus on innovation rather than infrastructure. But before diving into Terraform code, let's explore some core knowledge to better understand this interesting concept.
Why does an organization need a service mesh?
A monolithic architecture is a traditional approach to designing software where an entire application is built as a single, indivisible unit. In this architecture, all the different components of the application, such as the user interface, business logic, and data access layer, are tightly integrated and deployed together.
As a monolithic application grows, it becomes more complex and harder to manage.
This complexity can make it difficult for developers to understand how different parts of the application interact, leading to longer development times and increased risk of errors.
In modern application architecture, you can build applications as a collection of small, independently deployable microservices. Different teams may build individual microservices and choose their coding languages and tools. However, the microservices must communicate for the application code to work correctly.
Application performance depends on the speed and resiliency of communication between services. Developers must monitor and optimize the application across services, but it’s hard to gain visibility due to the system's distributed nature. As applications scale, it becomes even more complex to manage communications.
There are two main drivers of service mesh adoption:
- Service-level observability: As more workloads and services are deployed, developers find it challenging to understand how everything works together. For example, service teams want to know what their downstream and upstream dependencies are. They want greater visibility into how services and workloads communicate at the application layer.
- Service-level control: Administrators want to control which services talk to one another and what actions they perform. They want fine-grained control and governance over the behavior, policies, and interactions of services within a microservices architecture. Enforcing security policies is essential for regulatory compliance.
Those drivers lead to a service mesh architecture as the response. In fact, a service mesh provides a centralized, dedicated infrastructure layer that handles the intricacies of service-to-service communication within a distributed application.
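To make this concrete, here is a minimal sketch of what declaring such a layer looks like with Terraform and AWS App Mesh. The mesh name and egress filter value are assumptions chosen for illustration, not values from this article.

```hcl
# Minimal sketch: declaring an App Mesh service mesh with Terraform.
# The name and egress filter below are illustrative assumptions.
resource "aws_appmesh_mesh" "demo" {
  name = "demo-mesh"

  spec {
    egress_filter {
      # ALLOW_ALL lets meshed services also reach endpoints outside the mesh;
      # DROP_ALL restricts traffic to mesh-internal destinations only.
      type = "ALLOW_ALL"
    }
  }
}
```

Every other App Mesh object (virtual services, virtual nodes, routes) is then attached to this single mesh resource.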
What are the benefits of a service mesh?
- Service discovery: Service meshes provide automated service discovery, which reduces the operational load of managing service endpoints.
- Load balancing: Service meshes use various algorithms, such as round robin or least requests, to distribute requests across multiple service instances intelligently.
- Traffic management: Service meshes offer advanced traffic management features, which provide fine-grained control over request routing and traffic behavior, as illustrated in the sketch after this list.
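As an example of that fine-grained routing control, the following Terraform sketch splits traffic between two versions of a service using an App Mesh route with weighted targets. The virtual router and the two virtual nodes are assumed to be declared elsewhere in the configuration, and the names and weights are illustrative.

```hcl
# Sketch of a weighted HTTP route: 90% of requests go to v1, 10% to v2.
# Assumes aws_appmesh_virtual_router.service_b and the two virtual nodes
# (service_b_v1, service_b_v2) are defined elsewhere in the configuration.
resource "aws_appmesh_route" "service_b_http" {
  name                = "service-b-route"
  mesh_name           = aws_appmesh_mesh.demo.id
  virtual_router_name = aws_appmesh_virtual_router.service_b.name

  spec {
    http_route {
      match {
        prefix = "/" # match every request sent to the virtual router
      }

      action {
        weighted_target {
          virtual_node = aws_appmesh_virtual_node.service_b_v1.name
          weight       = 90
        }

        weighted_target {
          virtual_node = aws_appmesh_virtual_node.service_b_v2.name
          weight       = 10
        }
      }
    }
  }
}
```

Shifting the weights is all it takes to run a canary or blue/green rollout, without touching the services themselves.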
How does a service mesh work?
A service mesh removes the logic governing service-to-service communication from individual services and abstracts communication to its own infrastructure layer. It uses several network proxies to route and track communication between services.
A proxy acts as an intermediary gateway between your organization’s network and the microservice. All traffic to and from the service is routed through the proxy server. Individual proxies are sometimes called sidecars, because they run separately but are logically next to each service.
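In App Mesh terms, each service sitting behind such a sidecar is modeled as a virtual node that describes its listener, its service discovery name, and the upstream services it calls. Here is a hedged sketch, with hostnames, ports, and names chosen purely for illustration:

```hcl
# Sketch of a virtual node: the mesh's view of one service and its Envoy sidecar.
# Hostnames, ports, and resource names are illustrative assumptions.
resource "aws_appmesh_virtual_node" "service_b" {
  name      = "service-b"
  mesh_name = aws_appmesh_mesh.demo.id

  spec {
    # Port the sidecar forwards inbound traffic to on the local service.
    listener {
      port_mapping {
        port     = 8080
        protocol = "http"
      }
    }

    # Upstream services this service is allowed to call through its proxy.
    backend {
      virtual_service {
        virtual_service_name = "service-a.demo.local"
      }
    }

    # How other proxies discover this service's instances.
    service_discovery {
      dns {
        hostname = "service-b.demo.local"
      }
    }
  }
}
```

The Envoy sidecar container itself is attached where the workload runs, for example in an ECS task definition or a Kubernetes pod spec.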
That's all for this first part, and as you can see, it is just about service mesh theory. In Part 2 we will get our hands dirty and dive deep into an AWS App Mesh demo.
Stay focused and click here to try this exciting Part 2.
Resources
- AppMesh - Service Mesh & Beyond: https://tech.forums.softwareag.com/t/appmesh-service-mesh-beyond/
- AWS App Mesh: Hosted Service Mesh Control Plane for Envoy Proxy: https://www.infoq.com/news/2019/01/aws-app-mesh/
- The Istio service mesh: https://istio.io/latest/about/service-mesh/
- AWS App Mesh ingress and route enhancements: https://aws.amazon.com/blogs/containers/app-mesh-ingress-route-enhancements/
- How to use OAuth 2.0 in Amazon Cognito: Learn about the different OAuth 2.0 grants: https://aws.amazon.com/blogs/security/how-to-use-oauth-2-0-in-amazon-cognito-learn-about-the-different-oauth-2-0-grants/
- AWS App Mesh — Deep Dive: https://medium.com/@iyer.hareesh/aws-app-mesh-deep-dive-60c9ad227c9d
- Circuit breaking: https://www.envoyproxy.io/docs/envoy/latest/intro/arch_overview/upstream/circuit_breaking#arch-overview-circuit-break
- Envoy defaults set by App Mesh: https://docs.aws.amazon.com/app-mesh/latest/userguide/envoy-defaults.html#default-circuit-breaker
- Monolithic vs Microservices Architecture: https://www.geeksforgeeks.org/monolithic-vs-microservices-architecture/