We had an all-day masterclass with Sam Newman on microservice integration patterns. Several interesting topics were covered, and we had some good discussions. What follows is an extended list of bookmarks and ideas I picked up; it is also the second part of my take on GOTO Berlin. See the first part here.
The Spotify model
The Spotify model of structuring engineering teams is getting quite a lot of traction these days. I feel it can solve a couple of real problems, but do not take this structure as a magic pill. One of the participants mentioned that Spotify no longer uses the model itself, though I could not find any references to confirm this. What I did find were some follow-up reads that may encourage you to adopt parts of the Spotify model.
Pushing the limits of our network
If your microservices talk over HTTP, you will have to overcome the protocol's limitations. This reminded me of the original Node.js presentation, which is still rewarding to watch. HTTP/1.1 is not fast enough for the amount of data most microservices pass around; HTTP/2 and HTTP/3 aim to push those limits.
Daniel Stenberg, best known as the lead developer of curl, has written two books explaining HTTP/2 and HTTP/3.
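As a quick, hedged illustration of my own (not from the masterclass): the sketch below makes a single request over HTTP/2 from Python using the httpx library, assuming the extra http2 dependency is installed; the URL is only a placeholder for a service that speaks HTTP/2.

```python
# Minimal sketch: issuing a request over HTTP/2 with httpx.
# Requires: pip install "httpx[http2]"
import httpx

# http2=True enables negotiation; the server must also support HTTP/2
# for the connection to use it, otherwise it falls back to HTTP/1.1.
with httpx.Client(http2=True) as client:
    response = client.get("https://http2-capable.example.com/get")  # placeholder URL
    print(response.http_version)   # "HTTP/2" if negotiated, else "HTTP/1.1"
    print(response.status_code)
```

The practical wins over HTTP/1.1 are multiplexing many requests over one connection and header compression, which suits the chatty traffic microservices tend to generate.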
The first law of distributed objects
Martin Fowler talks about the first law of distributed objects in his book Patterns of Enterprise Application Architecture.
First Law of Distributed Object Design: “don’t distribute your objects”
Martin Fowler
But how does that stack up against all the interesting things happening in microservices? He has a follow-up article bridging the gap.
Decomposing a system
Many of us have gone through the exercise of decomposing a monolith into smaller components. It's easy to mistake this for a new concept kickstarted by the microservices trend, so it was interesting to learn about a paper by David Parnas from the early 70s on the criteria to be used in decomposing systems into modules.
Here is a revisited version of the original paper.
Richardson maturity model
It's safe to say REST is a common pattern for microservice integration. However, it is debatable how closely it makes sense to adhere to Fielding's original vision. I have worked on projects that ignored the HTTP verbs and used GET for every action; REST does not enforce correct verb usage, as it is an architectural style and not a protocol.
But if you want to make sure you utilise the full set of elements REST has to offer, the Richardson maturity model is a good scale to measure against.
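To make the lower and middle rungs of that scale concrete, here is a hedged sketch of my own (the endpoints and payloads are hypothetical, using Flask): a level-0-style service tunnels every action through GET on a single URI, while a level-2-style service uses resources and HTTP verbs.

```python
# Sketch contrasting "everything through GET" with resources + verbs.
# Endpoints and payloads are hypothetical; requires: pip install flask
from flask import Flask, jsonify, request

app = Flask(__name__)
orders = {}  # toy in-memory store

# Level-0 style: the verb is always GET, the real action hides in a parameter.
@app.get("/api")
def do_anything():
    action = request.args.get("action")  # e.g. ?action=deleteOrder&id=42
    return jsonify({"note": f"would perform '{action}'"})

# Level-2 style: resources plus HTTP verbs carry the semantics.
@app.post("/orders")
def create_order():
    order = request.get_json()
    orders[order["id"]] = order
    return jsonify(order), 201  # 201 Created

@app.delete("/orders/<order_id>")
def delete_order(order_id):
    orders.pop(order_id, None)
    return "", 204  # 204 No Content
```

Level 3 adds hypermedia controls, embedding links to related actions in the responses, which is the part most teams never reach.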
The cloud weather
With more and more services moving from on-premises to the cloud, and many services being born there, the term 'cloud weather' is an interesting one. It describes the effects of sharing your infrastructure with other tenants: if you share bandwidth or compute with other users, you risk being affected when a co-tenant consumes more resources.
I have not seen this in practice myself, nor have I heard any war stories about it, so it may well be a modern cloud-computing urban legend.
The Byzantine generals problem
The Byzantine generals problem can be summarised as follows:
How can individual parties find a way to guarantee full consensus?
For instance, if you have a service that calls an API to make a monetary transaction, how do you make sure the transaction actually completed? If you don't get a confirmation, do you send the request again and risk the transaction happening twice? It's a very interesting problem.
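One common, partial mitigation for the retry dilemma (my own illustration, not a solution to Byzantine consensus itself) is to make the request idempotent with a client-generated key, so that a retry cannot execute the payment twice. The endpoint, header name, and payload below are assumptions modelled on common payment APIs.

```python
# Sketch: retrying a payment safely with a client-generated idempotency key.
# URL, header name, and payload are hypothetical; requires: pip install requests
import uuid
import requests

def charge_with_retries(amount_cents: int, currency: str, attempts: int = 3) -> dict:
    idempotency_key = str(uuid.uuid4())  # the SAME key is reused on every retry
    payload = {"amount": amount_cents, "currency": currency}
    for attempt in range(attempts):
        try:
            response = requests.post(
                "https://payments.example.com/charges",  # placeholder endpoint
                json=payload,
                headers={"Idempotency-Key": idempotency_key},
                timeout=5,
            )
            response.raise_for_status()
            return response.json()
        except requests.RequestException:
            # No confirmation: we do not know whether the charge went through.
            # Retrying with the same key lets the server deduplicate the request.
            if attempt == attempts - 1:
                raise
```

This only works if the receiving service stores the key and deduplicates on it, so the pattern needs cooperation from both sides of the call.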
On smart endpoints and dumb pipes
Any microservice-based system can be split into endpoints, which are the individual services, and pipes, which are the components the services use to talk to each other.
It's tempting to add business rules directly to the pipes, for instance a message queue that automatically filters messages. This makes it harder to add new services or adapt to fast-changing business rules. The strong suggestion is to keep your business rules in smart endpoints and use dumb pipes only for communication.
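A minimal, self-contained sketch of the idea (the message types and the rule are made up; Python's standard-library Queue stands in for a real broker): the filtering rule lives in the consuming service, while the pipe only moves messages.

```python
# Sketch: keep the business rule in the endpoint, keep the pipe dumb.
# queue.Queue stands in for a real message broker; the rule is hypothetical.
import queue

# The "dumb pipe": it only moves messages and knows nothing about orders.
pipe = queue.Queue()

def publish(message: dict) -> None:
    pipe.put(message)

# The "smart endpoint": the consuming service applies its own business rule.
def order_service_consume() -> None:
    while not pipe.empty():
        message = pipe.get()
        # The business rule lives here, not in broker filter configuration.
        if message.get("type") == "order.created" and message.get("total", 0) > 0:
            print(f"processing order {message['id']}")
        else:
            print(f"ignoring message {message}")

publish({"type": "order.created", "id": "o-1", "total": 4200})
publish({"type": "newsletter.signup", "id": "n-9"})
order_service_consume()
```

If the rule changes, only the consuming service is redeployed; the pipe and the producers stay untouched.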
Don’t get locked in to avoid lock-in
Two years back, my team moved several services running on-premises to AWS. Instead of doing a lift and shift, we redesigned the system around AWS managed services. We debated this a lot; the argument against it was that it would lock us in to the provider.
The leadership team argued strongly that this was acceptable, because these services let us innovate faster and thereby improve our 'speed to market'. Gregor Hohpe talks about the significant share of energy we spend trying to avoid lock-in.
Reading list
To get deeper into the topics covered above.