Navigating the Fine Line of Microservices: When a pattern is an anti-pattern

Introduction

The world of microservices architecture is intricate, often presenting a razor-thin line between efficient design patterns and risky anti-patterns. Two notable instances of this dichotomy are the “Backend for Frontend (BFF) Pattern” and the “Communication through Common Data” pattern. Both are widely touted as good design patterns; however, both are also examples of common anti-patterns that should be avoided when designing systems that need to scale. This article aims to dissect these patterns and present their associated anti-patterns, offering insights into their appropriate uses and potential pitfalls.

Backend for Frontend (BFF) Pattern

Concept and Implementation

The Backend for Frontend pattern involves creating a backend layer specifically designed to serve the needs of a frontend application. This backend then acts as an aggregator, collecting and formatting data from two or more downstream microservices into a frontend-friendly format.

Example:

Consider a mobile banking application that requires data like account balance, recent transactions, and investment portfolios. Instead of the mobile client directly calling multiple services, a BFF can aggregate these data points from separate microservices, ensuring a streamlined and efficient client-side operation.
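To make this concrete, below is a minimal sketch of what such a BFF endpoint might look like. It is not taken from any real banking system: the service URLs, route and Express-style handler are all assumptions for illustration, and it assumes a Node.js 18+ runtime with a built-in fetch.

```typescript
import express from "express";

const app = express();

// Hypothetical downstream service URLs; in a real system these would come
// from configuration or service discovery.
const ACCOUNTS_URL = "http://accounts-service";
const TRANSACTIONS_URL = "http://transactions-service";
const INVESTMENTS_URL = "http://investments-service";

// BFF endpoint tailored to the mobile client: it fans out to three
// downstream services and aggregates the results into a single payload.
app.get("/mobile/dashboard/:customerId", async (req, res) => {
  const { customerId } = req.params;
  try {
    const [balance, transactions, portfolio] = await Promise.all([
      fetch(`${ACCOUNTS_URL}/balance/${customerId}`).then(r => r.json()),
      fetch(`${TRANSACTIONS_URL}/recent/${customerId}`).then(r => r.json()),
      fetch(`${INVESTMENTS_URL}/portfolio/${customerId}`).then(r => r.json()),
    ]);
    // Shape the response specifically for the mobile screen.
    res.json({ balance, transactions, portfolio });
  } catch (err) {
    // If any single downstream call fails, the whole aggregated response fails.
    res.status(502).json({ error: "Upstream service unavailable" });
  }
});

app.listen(3000);
```

Note how the catch block already hints at the problem discussed next: one failed downstream call is enough to fail the entire aggregated response.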

Service Fan-Out Anti-Pattern

The BFF pattern is an example of the Service Fan-Out anti-pattern. When the backend service (Service A) calls downstream services (Services B, C and D), aggregating their data to form a response to a client, a failure in any one of those services can cause the whole request to fail. If Service A itself fails, Services B, C and D can't be reached at all; if any of Services B, C or D fails, Service A is unable to aggregate the required data and provide a response to the client.

Reduced Availability Scores

The more services a BFF depends on, the lower its availability score tends to be. The compounded probability of failure from each dependent service significantly impacts the overall reliability of the entire system.

An availability score is worked out by calculating the availability of everything the service depends upon: the infrastructure required to run the service, the runtime environment, libraries, the operating system, the virtualisation environment, the server, the network and so on.

Let's say, with all of that in mind, and by running multiple instances of your service geographically dispersed across availability zones and regions, you manage to achieve an availability score for Service A of 99.999% uptime, or “five nines”. And let's say you can do that consistently and achieve the same score for Services B and C.

If a request from a client calls Service A, which in turn calls Service B, then the availability of the entire system is the two services' availability scores multiplied together. This will always be lower than the availability of a single service in isolation. In this instance the availability score would be 99.998%. Add in another service and this is reduced to 99.997%.

Overall Availability = 0.99999 × 0.99999 × 0.99999 ≈ 0.99997

The more services you add, the more this is compounded.
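The compounding is easy to verify with a few lines of code; the snippet below simply restates the arithmetic above, nothing more.

```typescript
// Overall availability of a call chain is the product of each
// dependency's individual availability.
function overallAvailability(availabilities: number[]): number {
  return availabilities.reduce((product, a) => product * a, 1);
}

const fiveNines = 0.99999;

console.log(overallAvailability([fiveNines, fiveNines]));            // ≈ 0.99998 (two services)
console.log(overallAvailability([fiveNines, fiveNines, fiveNines])); // ≈ 0.99997 (three services)
console.log(overallAvailability(Array(10).fill(fiveNines)));         // ≈ 0.99990 (ten services)
```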

Responsiveness

And it is not just about uptime. We have to consider the responsiveness of such a pattern too. Even if Service A makes all the calls to the downstream services asynchronously, the speed with which Service A can respond to the client is only ever going to be as fast as the slowest downstream service. And if you can't make all of the calls concurrently (let's say you first need to look up a user and pass that result to the other services) then the problem only gets worse. This form of service orchestration, in my opinion, is to be avoided, and is why we should prefer choreographed systems.
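A small simulation illustrates the point; the latencies below are invented purely to show that the aggregated response is bounded by the slowest call when the calls run concurrently, and by the sum when they must run in sequence.

```typescript
// Simulate a downstream call that resolves after a given delay (milliseconds).
const simulateCall = (name: string, delayMs: number): Promise<string> =>
  new Promise(resolve => setTimeout(() => resolve(name), delayMs));

async function aggregate(): Promise<void> {
  const concurrentStart = Date.now();
  // Concurrent fan-out: total time ≈ slowest call (300ms), not the sum.
  await Promise.all([
    simulateCall("serviceB", 50),
    simulateCall("serviceC", 120),
    simulateCall("serviceD", 300),
  ]);
  console.log(`Concurrent aggregation took ~${Date.now() - concurrentStart}ms`);

  const sequentialStart = Date.now();
  // Sequential calls (e.g. a user lookup feeding later calls): times add up.
  await simulateCall("lookupUser", 100);
  await simulateCall("serviceB", 50);
  console.log(`Sequential calls took ~${Date.now() - sequentialStart}ms`);
}

aggregate();
```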

When to use BFF

I would only consider the BFF pattern in scenarios where the frontend requires a very specific data format or a streamlined aggregation of data from a very limited number of services. However, it's crucial to monitor the complexity and dependencies to avoid turning it into a bottleneck or a source of cascading failures. In general my preference would be to avoid this pattern altogether and ensure that services can answer client run-time requests without needing to call other services at all.

I feel a much better approach to the problem is to use an API gateway. Instead of having a backend service for each client, an API gateway still provides a single point of entry for the client application, but allows you to keep to the principles of a microservices architecture, avoiding unnecessary coupling and dependencies, by letting the gateway route each request to the appropriate backend service. An API gateway also has the added advantage of providing a single place to handle concerns like authentication and rate limiting. Another option might be client-side service discovery.
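As a rough sketch of the distinction, the hand-rolled gateway below routes each path prefix to a single owning service and handles authentication in one place. The route table and header check are assumptions for illustration, not a recommendation of any particular gateway product.

```typescript
import express from "express";

const app = express();

// Hypothetical route-to-service mapping: each path prefix is owned by
// exactly one backend service, so the gateway never aggregates.
const routes: Record<string, string> = {
  "/accounts": "http://accounts-service",
  "/transactions": "http://transactions-service",
  "/investments": "http://investments-service",
};

// Cross-cutting concerns such as authentication live in one place.
app.use((req, res, next) => {
  if (!req.headers.authorization) {
    res.status(401).json({ error: "Missing credentials" });
    return;
  }
  next();
});

// Forward the request to the single service that owns the path.
app.use(async (req, res) => {
  const prefix = Object.keys(routes).find(p => req.path.startsWith(p));
  if (!prefix) {
    res.status(404).json({ error: "Unknown route" });
    return;
  }
  const upstream = await fetch(`${routes[prefix]}${req.path}`, {
    headers: { authorization: req.headers.authorization as string },
  });
  res.status(upstream.status).json(await upstream.json());
});

app.listen(8080);
```

In practice you would use an off-the-shelf gateway rather than writing one, but the shape is the same: one entry point, one downstream call per request.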

Communication through Common Data Pattern

Concept and Implementation

This pattern involves microservices communicating indirectly through a shared data store. One service writes data to the store, and another reads it, often used for large file transfers or complex data structures.

Example:

A video processing application where one service uploads the video and another analyses it. Instead of sending large video files through a message bus or over the network, the first service can write the file to a common data store, such as a shared container volume, and the second service can read it from there, with the services passing only a file path, a reference to the file, between them.
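A minimal sketch of that hand-off is shown below. The shared volume path, topic name and the publish stand-in are assumptions; the point is simply that only a lightweight reference crosses the message bus, while the large file stays in the shared store.

```typescript
import { promises as fs } from "fs";
import path from "path";

// Hypothetical shared volume mounted into both services' containers.
const SHARED_VOLUME = "/mnt/shared-videos";

// Stand-in for a message bus publish; in a real system this would be
// Kafka, RabbitMQ, SQS or similar.
async function publish(topic: string, message: object): Promise<void> {
  console.log(`publish to ${topic}:`, JSON.stringify(message));
}

// Upload service: writes the large file to the shared store and publishes
// only a lightweight reference to it.
async function handleUpload(videoId: string, videoBytes: Buffer): Promise<void> {
  const filePath = path.join(SHARED_VOLUME, `${videoId}.mp4`);
  await fs.writeFile(filePath, videoBytes);
  await publish("video.uploaded", { videoId, filePath });
}

// Analysis service: receives the reference and reads the file from the
// same shared store.
async function handleVideoUploaded(message: { videoId: string; filePath: string }): Promise<void> {
  const videoBytes = await fs.readFile(message.filePath);
  console.log(`Analysing ${message.videoId} (${videoBytes.length} bytes)`);
}
```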

Shared Data Store/Data-Fuse Anti-Pattern

The danger here lies in the dependency on the common data store. A service should never share a data store with another service, as one service could cause issues with the data store and bring down the other service. This is true regardless of whether the data store is a database, file system, Storage Area Network or an S3 bucket. If the storage system fails, it takes both services down with it. In practice this is one of the most common anti-patterns I see, especially hosting a separate database for each service but on the same database server or instance, usually to reduce costs.

Another risk is where one service places intensive load on the shared data store, perhaps by getting stuck in an infinite loop, and begins consuming all of the resources of the shared component, again impacting other services dependent on the same data store.

Implications for Service Availability

Heavy reliance on a shared data store can create a single point of failure, significantly affecting the availability scores of all dependent services. This coupling can lead to cascading failures where the downtime of one service impacts several others.

Appropriate Use

This pattern is beneficial for handling large data transfers or when direct service-to-service communication is impractical. However, it is often more appropriate to simply combine the services into a single service. Many of the alternatives are ultimately just other forms of shared data store, service-to-service coupling and/or orchestration.

Conclusion: Balancing Act in Microservices

In navigating the nuanced landscape of microservices, distinguishing between effective patterns and potential anti-patterns is a delicate yet crucial task. The Backend for Frontend and Communication through Common Data patterns, while offering significant benefits in certain contexts, also bear the risk of morphing into their anti-pattern counterparts under sub-optimal conditions. This transformation often leads to reduced service availability and unwelcome coupling of services, underscoring the importance of careful, context-aware implementation.

The key to harnessing the full potential of these patterns lies in a balanced, pragmatic approach. Architects and developers must weigh the specific needs and constraints of their projects against the inherent risks associated with these patterns. Regular monitoring, coupled with a readiness to adapt and refine architectural decisions, is essential in maintaining the delicate balance between scalability, reliability, and efficiency.

Ultimately, the success of using these patterns in microservices architecture hinges on understanding their dual nature. By acknowledging their potential pitfalls and employing them judiciously, we can navigate the fine line between leveraging their strengths and falling prey to their weaknesses, thereby crafting robust, scalable, and resilient microservices systems.
