
Sunday, 11 August 2024

Introduction to Microservices


Components of Microservices Architecture


Microservices architecture breaks down applications into smaller, independent services. Here's a rundown of the 10 key components in this architecture:


1. Client

These are the end-users who interact with the application via different interfaces such as web, mobile, or desktop applications.


2. CDN (Content Delivery Network)

CDNs deliver static content like images, stylesheets, and JavaScript files efficiently by caching them closer to the user's location, reducing load times.


3. Load Balancer

It distributes incoming network traffic across multiple servers, ensuring no single server becomes a bottleneck and improving the application's availability and reliability.
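To make the idea concrete, here is a minimal sketch of the round-robin strategy a load balancer might use to rotate requests across a server pool. This is illustrative only, not a production load balancer, and the server names are made up:

```python
from itertools import cycle

class RoundRobinBalancer:
    """Distributes incoming requests across a fixed pool of servers."""

    def __init__(self, servers):
        self._pool = cycle(servers)

    def next_server(self):
        # Each call hands back the next server in the rotation.
        return next(self._pool)

lb = RoundRobinBalancer(["server-1", "server-2", "server-3"])
print([lb.next_server() for _ in range(4)])
# → ['server-1', 'server-2', 'server-3', 'server-1']
```

Real load balancers also track server health and skip instances that stop responding, which is what makes them key to availability.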


4. API Gateway

An API Gateway acts as an entry point for all clients, handling tasks like request routing, composition, and protocol translation, which helps manage multiple microservices behind the scenes.
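The routing part of a gateway boils down to mapping request paths to backend services. The sketch below shows that idea with a hypothetical route table; the service names and ports are invented for illustration:

```python
# Hypothetical route table: path prefix -> internal service address.
ROUTES = {
    "/votes": "http://voting-service:8080",
    "/results": "http://result-service:8080",
}

def route(path):
    """Return the full backend URL for a request path, or None if unknown."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    return None

print(route("/votes/new"))  # → http://voting-service:8080/votes/new
```

A real gateway layers authentication, rate limiting, and protocol translation on top of this routing core.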


5. Microservices

Each microservice is a small, independent service that performs a specific business function. They communicate with each other via APIs. 


6. Message Broker

A message broker facilitates communication between microservices by sending messages between them, ensuring they remain decoupled and can function independently.
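The decoupling a broker provides can be sketched with a toy in-process version: the producer and consumer only know the topic name, never each other. This stands in for a real broker such as RabbitMQ or Kafka:

```python
from queue import Queue

class Broker:
    """Toy message broker: producers publish to named topics and
    consumers read from them, without knowing about each other."""

    def __init__(self):
        self._topics = {}

    def publish(self, topic, message):
        self._topics.setdefault(topic, Queue()).put(message)

    def consume(self, topic):
        # Blocks until a message is available on the topic.
        return self._topics[topic].get()

broker = Broker()
broker.publish("votes", {"voter": "u1", "choice": "cats"})  # producer side
print(broker.consume("votes"))                              # consumer side
# → {'voter': 'u1', 'choice': 'cats'}
```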


7. Databases

Each microservice typically has its own database to ensure loose coupling. Different microservices may even use different database technologies, chosen to fit each service's needs.


8. Identity Provider

This component handles user authentication and authorization, ensuring secure access to services.


9. Service Registry and Discovery

This system keeps track of all microservices and their instances, allowing services to find and communicate with each other dynamically.
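A registry is essentially a name-to-addresses lookup that services update at startup. The sketch below shows the core register/discover operations; the service name and addresses are made up, and real registries (Consul, Eureka, etc.) add health checking on top:

```python
class ServiceRegistry:
    """Minimal registry: services register their instances on startup,
    and clients discover addresses by service name at call time."""

    def __init__(self):
        self._services = {}

    def register(self, name, address):
        self._services.setdefault(name, []).append(address)

    def discover(self, name):
        # Real registries also health-check and load-balance these instances.
        return self._services.get(name, [])

registry = ServiceRegistry()
registry.register("voting-app", "10.0.0.5:80")
registry.register("voting-app", "10.0.0.6:80")
print(registry.discover("voting-app"))  # → ['10.0.0.5:80', '10.0.0.6:80']
```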


10. Service Coordination (e.g., ZooKeeper)

Tools like ZooKeeper help manage and coordinate distributed services, ensuring they work together smoothly.



Image source: Adnan Maqbool Khan's post on LinkedIn

Tuesday, 14 May 2024

Microservices Application Stack on Docker

This article extends my notes from the Udemy course "Kubernetes for the Absolute Beginners - Hands-on". All course content rights belong to the course creators.

The previous article in the series was Kubernetes LoadBalancer Service | My Public Notepad.




Let's consider a simple application stack running on Docker.

This is a simple voting application with the following components:
  • voting app
    • web application developed in Python
    • provides the user with an interface to choose between two options: a cat and a dog
  • in-memory DB
    • Redis
    • when user makes a selection, the vote is stored in Redis
  • worker
    • an application which processes the vote, written in .NET
    • takes the new vote and updates the persistent database - it increments the number of votes for cats if vote was for cats
  • persistent database
    • PostgreSQL
    • has a table with a number of votes for each category: cats and dogs
  • vote results display app
    • an interface to show the results; reads the count of votes from the PostgreSQL database and displays it to the user
    • web application, developed in Node.js
This application is built from a combination of different services, development tools and platforms such as Python, Node.js and .NET.
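The worker's job described above can be sketched as follows. This is an illustrative Python stand-in for the actual .NET worker, with a plain dict standing in for the PostgreSQL votes table and a list standing in for votes popped off Redis:

```python
# Hypothetical stand-in for the PostgreSQL table of vote counts.
votes_table = {"cats": 0, "dogs": 0}

def process_vote(vote):
    """Take a new vote from the in-memory DB and update the persistent count."""
    if vote in votes_table:
        votes_table[vote] += 1

# Votes as they would arrive from Redis.
for incoming in ["cats", "dogs", "cats"]:
    process_vote(incoming)

print(votes_table)  # → {'cats': 2, 'dogs': 1}
```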

It is easy to set up an entire application stack consisting of diverse components in Docker. Let's see how to put together this application stack on a single Docker engine using docker run commands.

Let us assume that the images of all these applications are already built and available in a Docker registry.

We start with the data layer by starting an instance of Redis:

$ docker run -d --name=redis redis

-d, --detach = Run container in background and print container ID.
--name = name the container (important here!)

Next we will deploy the PostgreSQL database:

$ docker run -d --name=db postgres:9.4

To start with the application services, we deploy the front-end voting app. Since this is a web server, it serves its UI on port 80 inside the container. We publish that port as 5000 on the host system so we can access it from a browser.

$ docker run -d --name=vote -p 5000:80 voting-app

To deploy the web application that shows the results to the user:

$ docker run -d --name=result -p 5001:80 result-app

Finally, we deploy the worker by running an instance of the worker image:

$ docker run -d --name=worker worker

If we now try to load the voting app at http://192.168.56.101:5000, we'll get an Internal Server Error.

This is because although all the instances are running on the host, in different containers, we haven't actually linked them together. We haven't told the voting web application to use this particular Redis instance. There could be multiple Redis instances running.

We haven't told the worker and the result app to use this particular PostgreSQL database either. For that we can use links. --link container_name:host_name is a command-line option which links two containers together. For example, the voting app web service depends on the Redis service when the web server starts, as we can see in this web server code snippet:

from flask import g
from redis import Redis

def get_redis():
    # Connect once per request context to the Redis host named "redis".
    if not hasattr(g, 'redis'):
        g.redis = Redis(host='redis', db=0, socket_timeout=5)
    return g.redis


The web app looks for a Redis service running on a host named redis, but the voting app container cannot resolve a host by that name. To make the voting app aware of the Redis service, we add the --link option when running the voting app container, linking it to the Redis container:

$ docker run -d --name=vote -p 5000:80 --link redis:redis voting-app

As --link uses the container name, this is why we needed to name the containers. --link creates an entry in the /etc/hosts file of the voting app container, mapping the host name redis to the internal IP of the redis container:

/etc/hosts:
...
172.17.0.2 redis
...

Similarly, we need to add a link for the result app to communicate with the Postgres database:

$ docker run -d --name=result -p 5001:80 --link db:db result-app

 
Finally, the worker application requires access to both the Redis as well as the Postgres database:

$ docker run -d --name=worker --link redis:redis --link db:db worker

Using links is deprecated and support may be removed in a future Docker release. Newer concepts, such as user-defined networks and Docker Swarm, provide better ways of achieving what we did here with links: for example, containers attached to the same user-defined network (created with docker network create) can resolve each other by container name automatically, without any --link options.