Understanding Blackboard Architecture in Software

Data Mesh is a decentralized approach to data management in organizations, empowering teams to own and manage their own data like a product, rather than relying on a central data team. Blackboard Architecture serves as a metaphor for this system, where independent components interact through a shared data structure, promoting easier access and trust in data. Data Fabric, on the other hand, focuses on integrating various data sources into a unified system, utilizing AI for automation and ensuring real-time access.

What is Data Mesh?

Data Mesh is a modern way of organizing and managing data in large organizations. Instead of having one big central data team that controls everything (like a "data warehouse" team), Data Mesh gives the responsibility to the teams that create and use the data.

The Goal

Make data easier to access, use, and trust by:

● Decentralizing data ownership
● Making each team responsible for their own data
● Treating data as a product, not just raw files

Real-life Analogy

Imagine a big company like Amazon. They have:

● A sales team
● A shipping team
● A customer support team

Each team generates data:

● The sales team knows what was sold.
● The shipping team knows when items were delivered.
● The support team knows what problems customers had.

In a traditional system, all data is sent to a central data warehouse, and a separate data team tries to make sense of it.

In Data Mesh, each team manages and shares its own data, like a mini service:

● The sales team shares clean, documented data about sales.
● The shipping team does the same for deliveries.
● Each team's data is treated like a product that other teams can use easily.

Key Principles of Data Mesh

1. Domain-Oriented Ownership → Teams (domains) that know the data best are in charge of it.
2. Data as a Product → Data is well-documented, reliable, and easy to use, like a product.
3. Self-Serve Data Platform → Tools and infrastructure let teams manage and share data easily (without needing a central team).
4. Federated Governance → There are shared rules (security, quality, access), but teams still work independently.

What is Blackboard Architecture?

Think of a blackboard in a classroom. Teachers and students come to the blackboard, read what's written on it, and write something new based on what they see. They don't talk to each other directly — everything goes through the blackboard.

The Blackboard Architecture Style works the same way in software:

● There is one big "blackboard" (shared memory/data structure).
● There are multiple independent components (we call them agents).
● These agents don't talk to each other, only to the blackboard.
● The blackboard notifies the agents when something changes (like: "Hey, I've got new data!").
● Each agent looks at the data and decides if it can do something.
● If yes, it adds new data or updates something on the blackboard.

Simple Example

Let's say you're building a speech recognition system. You have a blackboard and 3 components:

1. Sound Processor – writes sound waves to the blackboard.
2. Word Identifier – checks if those waves match known words.
3. Sentence Builder – tries to form full sentences from the words.

How it works:

● The Sound Processor writes "sound wave" on the blackboard.
● The blackboard alerts: "New sound wave available!"
● The Word Identifier checks: "Ah! This looks like the word 'Hello'."
● It writes "Hello" on the blackboard.
● The blackboard alerts again: "New word added!"
● The Sentence Builder sees it and adds it to a sentence.

This is how the system keeps working — by reacting to changes in the blackboard.

Difference from Repository Architecture

Feature | Repository Style | Blackboard Style
Who is active? | Clients (components) | The blackboard (data store)
Who controls logic/flow? | Clients | Data store (based on its state)
When does a component act? | When it gets a request | When blackboard data changes (triggers)
Examples | DBMS, compilers | AI systems, speech/image recognition
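The speech-recognition example above can be sketched as a minimal blackboard loop. This is an illustrative sketch, not a real recognizer: the class and key names are invented, and the "matching" is hard-coded; a real system would use concurrency and genuine signal processing.

```python
# Minimal blackboard sketch: agents never call each other directly;
# they only read and write the shared blackboard, which notifies them.
class Blackboard:
    def __init__(self):
        self.data = {}        # the shared data structure
        self.agents = []      # registered agents

    def register(self, agent):
        self.agents.append(agent)

    def write(self, key, value):
        self.data[key] = value
        # Notify every agent that the blackboard changed.
        for agent in self.agents:
            agent.on_change(self)

class WordIdentifier:
    def on_change(self, bb):
        # Act only when there is a sound wave and no word yet.
        if "sound_wave" in bb.data and "word" not in bb.data:
            bb.write("word", "Hello")   # pretend we matched the wave

class SentenceBuilder:
    def on_change(self, bb):
        if "word" in bb.data and "sentence" not in bb.data:
            bb.write("sentence", bb.data["word"] + " world")

bb = Blackboard()
bb.register(WordIdentifier())
bb.register(SentenceBuilder())
bb.write("sound_wave", [0.1, 0.5, 0.2])   # the Sound Processor's contribution
print(bb.data["sentence"])                # -> Hello world
```

Each agent's guard ("is my input there, is my output missing?") is what lets the blackboard, rather than any caller, drive the control flow.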
What is Data Fabric?

Data Fabric is a technology architecture and set of tools that connect all your data
sources together — no matter where the data lives (cloud, databases, apps, etc.) — and
make them work as one. It’s like a smart data “blanket” that covers everything and lets
you access and manage data easily and consistently.

Real-life analogy: imagine a library with books spread across different cities (cloud platforms), in different buildings (databases), and written in different languages (formats).

A Data Fabric is like a smart librarian system that:

●​ Knows where every book is,


●​ Helps you search, translate, and read any book from anywhere,
●​ Keeps everything updated and connected.

What Does It Do?

●​ Connects data from many sources (cloud, on-premise, apps, etc.)


●​ Provides a unified view of all your data
●​ Uses AI to automate tasks like data integration, discovery, and governance
●​ Ensures real-time access, data quality, and security

Key Features

1.​ Unified Data Access​


→ You don’t need to move all the data. Data Fabric lets you access and use it from
where it is.
2.​ Metadata-driven​
→ It uses metadata (info about data) to find and understand data automatically.
3.​ Smart Automation (AI/ML)​
→ It uses AI to automate data integration, discovery, quality checks, etc.
4.​ Real-time and Consistent Data​
→ Ensures you're always using the most current and trusted version of data.

How It Differs from Data Mesh

Feature | Data Mesh | Data Fabric
Focus | Organization model & ownership | Technology & integration architecture
Ownership | Decentralized (each team owns data) | Central tools for accessing distributed data
Goal | Empower teams to manage their data | Simplify and unify access to all data
Uses AI? | Not required | Yes, for automation and discovery


What is a microservice?

A microservice is a small, independent service that performs one well-defined task (for example: user management, payments, orders…). Each microservice works with the others to build a complete application.

It is built around a business domain, that is, a specific function (e.g. authentication).

What is a monolith (monolithic architecture)?

It is a single, massive application in which all the components (authentication, payment, catalog...) are grouped into one block.

Criterion | Monolith | Microservices
Early development | ✅ Easier | ❌ More complex
Early deployment | ✅ Very simple (a single file) | ❌ More configuration
Long-term maintenance | ❌ Harder and harder | ✅ More flexible and modular
Application evolution | ❌ Risky and slow | ✅ Flexible and fast
Updates | ❌ Risk of impacting the whole system | ✅ Fast, targeted updates
Scaling | ❌ Everything must scale together | ✅ Scale only what is needed
Modularity | ❌ Weak | ✅ Strong (each service is autonomous)

Microservices make complex applications easier to manage thanks to:

✅ Cross-functional teams

● Each team is autonomous and contains all the necessary profiles (dev, test, ops...).
● It manages a microservice from development all the way to production.

✅ Two Pizza Rule (Jeff Bezos, Amazon)

A team should be no larger than what two pizzas can feed. Goal: small teams = more efficient, faster, fewer meetings, more autonomy.

✅ "You build it, you run it"

The teams that develop a service are also responsible for keeping it running (monitoring, bugs, incidents…). This promotes quality, autonomy, and accountability.

✅ Let data drive decisions (not HiPPO)

Decisions are based on data, not on the opinion of the best-paid person (HiPPO = Highest Paid Person in the Office).

⚠️ Group Think

When everyone goes along with the group to avoid being rejected, innovation can be stifled. Encourage independent thinking within teams.

Key concepts related to the product / microservices approach

Concept | Simple meaning
YAGNI | "You Aren't Gonna Need It": do not code what you do not need yet. Avoids useless features.
Product approach | Each team manages its microservice as a long-term product: development, continuous improvement, support, performance.

Which statement about communication between microservices is incorrect?

Means of communication | Recommended for microservices?
✅ HTTP/REST | Yes, often used for simple, direct communication
✅ MOM (Message Oriented Middleware), e.g. Kafka, RabbitMQ | Yes, for asynchronous systems (events, queues)
❌ ESB (Enterprise Service Bus) | NOT recommended: too centralized, contrary to microservice autonomy
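The two recommended communication styles can be mimicked in a few lines of plain Python. This is only a sketch: the function and field names are invented, and `queue.Queue` stands in for a real broker such as Kafka or RabbitMQ.

```python
import queue

# Request/response style (HTTP/REST-like): the caller waits for an answer.
def get_user(user_id):                 # stands in for a REST endpoint
    return {"id": user_id, "name": "Ada"}

resp = get_user(42)
print(resp["name"])                    # -> Ada

# Message-oriented style (MOM-like): the producer publishes a message
# and moves on; a consumer processes it later, asynchronously.
broker = queue.Queue()                 # stands in for Kafka/RabbitMQ
broker.put({"event": "order_placed", "order_id": 7})

# Later, in the consuming service:
msg = broker.get()
print(msg["event"])                    # -> order_placed
```

The key contrast: the REST caller is coupled to the callee being up right now, while the message producer only depends on the broker.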


According to microservice principles, what should be "dumb"?

The connections should be simple. The services themselves should be smart (business logic, processing, etc.).

This means:

● Communication endpoints should be easy to use, stable, and simple.
● All the complexity lives in the service itself, not in how you reach it.

This principle comes from the philosophy "smart endpoints, dumb pipes": the endpoints (services) do the work; the connections stay simple.

What does the Tolerant Reader principle mean in microservices?

Answer: use only the data you need.

This means:

● A service must be able to read a message even if it contains more information than necessary.
● It must not crash if the message changes slightly (new fields added, different order…).
● Goal: tolerance to change, to avoid bugs as the system evolves.

In the DevOps approach, "fail fast" means: identify errors as early as possible, ideally in development or testing, not in production. This makes it possible to:

● fix quickly,
● reduce the cost of corrections,
● avoid outages visible to users.

What does the Design for Failure principle mean?

Answer: design the system by anticipating faults in advance.

Explanation:

● We assume that failures will happen (network, database, server...).
● So we build the system to withstand them:
  ○ retry a request,
  ○ fail over to another service,
  ○ persist messages,
  ○ alert the teams...

This makes systems more robust and reliable.
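One concrete "design for failure" tactic from the list above, retrying with exponential backoff, can be sketched as follows. The flaky dependency is simulated; names and delays are illustrative only.

```python
import time

def make_flaky():
    """Build a fake dependency that fails twice, then succeeds."""
    calls = {"n": 0}
    def flaky():
        calls["n"] += 1
        if calls["n"] < 3:
            raise ConnectionError("network glitch")
        return "ok"
    return flaky

def call_with_retry(fn, attempts=5, base_delay=0.01):
    """Retry with exponential backoff: one way to design for failure."""
    for i in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if i == attempts - 1:
                raise                      # give up (and alert the team)
            time.sleep(base_delay * (2 ** i))

result = call_with_retry(make_flaky())
print(result)                              # -> ok
```

In production this would be combined with the other tactics (failover, persisted messages, circuit breakers) rather than used alone.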

What is eventual consistency?

Answer: giving up global transactions in favor of a more relaxed approach.

Simple explanation:

● In microservices, it is hard to have global transactions spanning several services.
● So we accept that data is not synchronized immediately, but will end up consistent.
● It is a trade-off between availability and consistency.
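The trade-off can be made concrete with a toy sketch: a service updates its own store and publishes an event; a second service's view lags behind until it consumes the event. All names are invented for illustration.

```python
import queue

# Order service writes locally, then publishes an event; another
# service's derived view catches up later. In between, they disagree.
events = queue.Queue()
orders = {}             # the order service's own store
totals = {"count": 0}   # another service's derived view

def place_order(order_id):
    orders[order_id] = "placed"
    events.put({"type": "order_placed", "id": order_id})

place_order(1)
print(totals["count"])   # -> 0  (views are temporarily inconsistent)

# Later: the consumer drains the queue and the views converge.
while not events.empty():
    evt = events.get()
    if evt["type"] == "order_placed":
        totals["count"] += 1

print(totals["count"] == len(orders))   # -> True (eventually consistent)
```

The window where the two prints disagree is exactly the "eventual" part: availability is preserved, immediate consistency is not.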

Why is the DevOps approach encouraged?

Correct answers:

● It encourages ownership of the product.
● It works well with cloud deployment.
● It facilitates collaborative development.
● It makes it possible to spot errors quickly and learn fast (fail fast).
1. Structural architectures

These describe how the code and components are organized logically.

Architecture | Simple description
MV* (MVC, MVVM...) | Separation between model (data), view (interface), and controller/logic.
Layered | Architecture in layers: UI → business logic → data access.
Clean / Hexagonal / Onion | Inverted layered variants centered on the business logic.

2. Deployment architectures

These describe how the system is physically distributed or deployed.

Architecture | Simple description
Mainframe | Everything is centralized on one big central server.
Client-Server | The client (UI) sends requests to a central server.
N-tier (3-tier...) | Deployment in several separate tiers: UI, business logic, database.
Microservices | Several small independent services, deployed separately.
Microfrontends | The frontend is also split into independently deployed components.
P2P (Peer to Peer) | No central server; each machine acts as both client and server.
Master-Worker | A master distributes tasks to workers for execution.
Ring | Each node communicates with its neighbors (like a ring).

3. Communication styles

These describe how components exchange data.

Architecture | Simple description
Event-Based | Components react to events (pub/sub).
Request-Response (Req/Res) | Classic client-server communication via requests (HTTP, RPC…).
Pipe and Filter | Also a communication style: data flows through a chain of filters.
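The pipe-and-filter style maps naturally onto Python generators: each filter lazily consumes the previous stage's output. A small sketch with made-up filters:

```python
# Pipe-and-filter: data flows through a chain of independent filters.
def source():
    yield from ["10", "oops", "30", "7"]

def parse_ints(lines):                 # filter 1: drop non-numeric lines
    for line in lines:
        if line.isdigit():
            yield int(line)

def keep_large(numbers, threshold=9):  # filter 2: keep values > threshold
    for n in numbers:
        if n > threshold:
            yield n

pipeline = keep_large(parse_ints(source()))
result = list(pipeline)
print(result)                          # -> [10, 30]
```

Each filter knows nothing about its neighbors, so stages can be reordered, replaced, or tested in isolation, which is the point of the style.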
Event-Driven Architecture (EDA) – Summary

Event-Driven Architecture (EDA) is a software design pattern where systems communicate and react to events—significant occurrences or state changes—through decoupled, asynchronous services. It's widely used in modern, distributed applications (e.g., real-time customer engagement platforms).

🔹 Key Concepts

● Event: A record of a significant action or change, like a mouse click, sensor output, or system state update.
● Event Notification: A message informing that an event has occurred.
● Event Producer: Detects and emits events without knowing the consumer.
● Event Consumer: Listens for and processes events.
● Event Processing Platform: Middleware (e.g., Apache Kafka) that handles events asynchronously.

🔹 Models

● Pub/Sub Model: Consumers subscribe to receive events once they are published.
● Event Streaming Model: Events are written to a log, and consumers read events from the stream at any point in time.

🔹 Processing Types

1. Simple Event Processing: Each event triggers an immediate action.
2. Event Stream Processing: Continuous processing of real-time data streams.
3. Complex Event Processing: Detects patterns across multiple events.

🔹 Decoupling vs. Loose Coupling

● Decoupling: Eliminates direct dependencies between components.
● Loose Coupling: Allows interaction without strong interdependence.

Both lead to flexibility, scalability, and easier system evolution.

🔹 Benefits

● Asynchronous Communication: Improves performance and responsiveness.
● Real-Time Awareness: Enables timely decisions based on current data.
● Scalability: Easily handles growing workloads.
● Flexibility & Fault Tolerance: Components can change independently; the system continues to function if parts fail.
● Ease of Integration: New components can be added without disrupting the system.

Message-Driven vs Event-Driven Systems – Summary

🔹 Core Concepts

● Message-Driven Systems
  ○ Components send messages to a specific recipient (by address).
  ○ Messages are point-to-point, queued if the receiver is busy.
  ○ Promotes loose coupling, isolation, and location transparency.
  ○ Enables non-blocking communication, flow control, back-pressure, and failure delegation.
  ○ Foundation of Reactive Systems (as per the Reactive Manifesto).
● Event-Driven Systems
  ○ Components emit events without knowing who will consume them.
  ○ Events are sent to a well-known location (like a topic or event log).
  ○ Consumers subscribe and track which events they have processed.
  ○ Encourages decoupling in time and space between producers and consumers.

🔹 Key Differences

Aspect | Message-Driven | Event-Driven
Target | Sent to a specific recipient | Sent to no specific consumer (broadcast-like)
Producer awareness | Knows the recipient | Doesn't know who (if anyone) is listening
Coupling | Loosely coupled | Loosely/fully decoupled
Queueing | Receiver-side queues per address | Centralized/event-log-based queue
Usage | Direct actions (e.g. commands, replies) | State changes and notifications
Delivery expectation | Typically expects acknowledgment | Fire-and-forget, async

🔹 Messages in CQRS/ES Context

1. Command: Sent to a component to change state. Has an intent and a designated recipient.
2. Event: Emitted after a state change. Describes what happened, not what should happen. Stored for future consumers (event sourcing).
3. Query: Sent to retrieve information. Includes the sender's address for the reply.
4. Reply: Response to a query. Returned to the sender's specified address.
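The four message kinds above can be sketched with a toy event-sourced account. This is a minimal illustration under invented names (the classes, the `deposit` action, the in-memory log), not a real CQRS framework.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Command:            # has an intent and a designated recipient
    recipient: str
    action: str
    amount: int

@dataclass
class Event:              # describes what happened, not what should happen
    name: str
    amount: int

event_log: List[Event] = []   # event sourcing: events are kept for later consumers
balance = {"value": 0}

def handle(cmd: Command):
    """Write side: a command changes state and emits an event."""
    if cmd.recipient == "account" and cmd.action == "deposit":
        balance["value"] += cmd.amount
        event_log.append(Event("deposited", cmd.amount))

def query_balance():
    """Read side (query/reply): answer from state, or replay the log."""
    return balance["value"]

handle(Command("account", "deposit", 10))
print(query_balance())            # -> 10
print(event_log[0].name)          # -> deposited
```

Note the asymmetry: the command names its recipient and an intent, while the stored event is a past-tense fact that any future consumer may read.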
Different types of "mesh" found in modern software architecture:

1. Service Mesh

A Service Mesh is a dedicated infrastructure for communication between services in a distributed system (often microservices).

Key features:

● Intelligent routing
● Load balancing
● Security (mTLS, authorization)
● Observability (tracing, logs, metrics)
● Resilience (circuit breaker, retry, timeouts)

Examples: Istio, Linkerd, Consul Connect

2. Data Mesh

Data Mesh is a decentralized approach to data management at scale, suited to modern microservice or cloud architectures.

Key principles:

1. Domain ownership of data (each domain manages its own data).
2. Data as a product (each team publishes usable data).
3. Self-service infrastructure (tools to discover, access, and secure data).
4. Interoperability through shared standards.

Objective: move from a centralized architecture (data lake, data warehouse) to a decentralized, domain-oriented model.

3. API Mesh

An API Mesh is an abstraction layer that federates several backend APIs into a single, coherent interface.

Features:

● Aggregation of heterogeneous APIs
● Resolution of dependencies between APIs
● Centralized security and governance

Use cases: super apps, multi-backend integration in frontends or gateways
4. Messaging Mesh / Event Mesh

Definition: an Event Mesh is an infrastructure that dynamically connects event producers and consumers across different environments (cloud, edge, on-premise).

Features:

● Dynamic event routing
● Multi-protocol support (AMQP, MQTT, Kafka, etc.)
● Event governance and security

Examples: Solace Event Mesh, Confluent (with Kafka), NATS JetStream

Note: in computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and possibly performance in the process.

5. Identity Mesh (or Security Mesh)

A Security Mesh (or Identity Mesh) is a distributed approach to managing identities, access, and authorization in a heterogeneous system.

Features:

● Federated authentication
● Per-domain access-policy management
● SSO, OAuth2, OpenID Connect, etc.

Comparative summary

Mesh type | Main scope | Objective
Service Mesh | Network between services | Managing inter-service communication
Data Mesh | Data | Decentralizing and "productizing" data
API Mesh | APIs | Aggregating and orchestrating several APIs into a unified interface
Event Mesh | Asynchronous events | Routing and managing events at scale
Identity/Security Mesh | Identity, security | Managing identities and authorizations in a distributed system

An Enterprise Service Bus (ESB) is a software architectural pattern that facilitates communication and integration between different applications in a large organization. It acts as a central hub, enabling disparate systems to exchange data and services by handling tasks like data transformation, message routing, and protocol conversion.
Data-Centered Architecture

In data-centered architecture, the data is centralized and accessed frequently by other components, which modify data. The main purpose of this style is to achieve integrality of data. Data-centered architecture consists of different components that communicate through shared data repositories. The components access a shared data structure and are relatively independent, in that they interact only through the data store.

The most well-known example of data-centered architecture is a database architecture, in which the common database schema is created with a data definition protocol, for example a set of related tables with fields and data types in an RDBMS. Another example is the web architecture, which has a common data schema (i.e. the meta-structure of the Web), follows a hypermedia data model, and whose processes communicate through shared web-based data services.

Types of Components

There are two types of components:

● A central data structure or data store or data repository, which is responsible for providing permanent data storage. It represents the current state.
● A data accessor, or a collection of independent components that operate on the central data store, perform computations, and might put back the results.

Interactions or communication between the data accessors happen only through the data store. The data is the only means of communication among clients. The flow of control differentiates the architecture into two categories:

● Repository Architecture Style
● Blackboard Architecture Style

Repository Architecture Style

In the Repository Architecture Style, the data store is passive and the clients (software components or agents) of the data store are active and control the logic flow. The participating components check the data store for changes.

● The client sends a request to the system to perform actions (e.g. insert data).
● The computational processes are independent and triggered by incoming requests.
● If the types of transactions in an input stream of transactions trigger selection of processes to execute, then it is a traditional database or repository architecture, or passive repository.
● This approach is widely used in DBMS, library information systems, the interface repository in CORBA, compilers, and CASE (computer-aided software engineering) environments.

Advantages

● Provides data integrity, backup, and restore features.
● Provides scalability and reusability of agents, as they do not communicate directly with each other.
● Reduces the overhead of transient data between software components.

Disadvantages

● It is more vulnerable to failure, and data replication or duplication is possible.
● High dependency between the data structure of the data store and its agents.
● Changes in the data structure highly affect the clients.
● Evolution of data is difficult and expensive.
● Cost of moving data over the network for distributed data.

Blackboard Architecture Style

In the Blackboard Architecture Style, the data store is active and its clients are passive. Therefore the logical flow is determined by the current data status in the data store. It has a blackboard component, acting as a central data repository, and an internal representation that is built and acted upon by different computational elements.

● A number of components that act independently on the common data structure are stored in the blackboard.

● In this style, the components interact only through the blackboard. The data store alerts the clients whenever there is a data-store change.
● The current state of the solution is stored in the blackboard, and processing is triggered by the state of the blackboard.
● The system sends notifications (known as trigger and data) to the clients when changes occur in the data.
● This approach is found in certain AI applications and complex applications, such as speech recognition, image recognition, security systems, and business resource management systems.
● If the current state of the central data structure is the main trigger for selecting processes to execute, the repository can be a blackboard, and this shared data source is an active agent.
● A major difference with traditional database systems is that the invocation of computational elements in a blackboard architecture is triggered by the current state of the blackboard, and not by external inputs.

Parts of the Blackboard Model

The blackboard model is usually presented with three major parts:

Knowledge Sources (KS)

Knowledge Sources, also known as Listeners or Subscribers, are distinct and independent units. They solve parts of a problem and aggregate partial results. Interaction among knowledge sources takes place uniquely through the blackboard.

Blackboard Data Structure

The problem-solving state data is organized into an application-dependent hierarchy. Knowledge sources make changes to the blackboard that lead incrementally to a solution to the problem.

Control

Control manages tasks and checks the work state.

Advantages

● Provides scalability: it is easy to add or update a knowledge source.
● Provides concurrency: all knowledge sources can work in parallel, as they are independent of each other.
● Supports experimentation with hypotheses.
● Supports reusability of knowledge source agents.

Disadvantages

● A structure change of the blackboard may have a significant impact on all of its agents, as a close dependency exists between the blackboard and its knowledge sources.
● It can be difficult to decide when to terminate the reasoning, as only an approximate solution is expected.
● Problems in synchronizing multiple agents.
● Major challenges in designing and testing the system.
N-Tier architecture

Karan Pratap Singh · Aug 16, 2022 · 2 min read

N-tier architecture divides an application into logical layers and physical tiers. Layers are a way to separate responsibilities and manage dependencies. Each layer has a specific responsibility. A higher layer can use services in a lower layer, but not the other way around.

Tiers are physically separated, running on separate machines. A tier can call another tier directly, or use asynchronous messaging. Although each layer might be hosted in its own tier, that's not required. Several layers might be hosted on the same tier. Physically separating the tiers improves scalability and resiliency, but adds latency from the additional network communication.

An N-tier architecture can be of two types:

● In a closed layer architecture, a layer can only call the next layer immediately down.
● In an open layer architecture, a layer can call any of the layers below it.

A closed-layer architecture limits the dependencies between layers. However, it might create unnecessary network traffic if one layer simply passes requests along to the next layer.

Types of N-Tier architectures

Let's look at some examples of N-Tier architecture:

3-Tier architecture

3-Tier is widely used and consists of the following layers:

● Presentation layer: handles user interactions with the application.
● Business Logic layer: accepts the data from the application layer, validates it as per business logic, and passes it to the data layer.
● Data Access layer: receives the data from the business layer and performs the necessary operations on the database.

2-Tier architecture

In this architecture, the presentation layer runs on the client and communicates with a data store. There is no business logic layer or intermediate layer between client and server.

Single Tier or 1-Tier architecture

It is the simplest one, as it is equivalent to running the application on a personal computer. All of the components required for the application to run are on a single application or server.

Advantages

Here are some advantages of using N-tier architecture:

● Can improve availability.
● Better security, as layers can behave like a firewall.
● Separate tiers allow us to scale them as needed.
● Improved maintenance, as different people can manage different tiers.

Disadvantages

Below are some disadvantages of N-tier architecture:

● Increased complexity of the system as a whole.
● Increased network latency as the number of tiers increases.
● Expensive, as every tier will have its own hardware cost.
Though these architectures all vary somewhat in their details, they are very similar. They
The Clean Architecture all have the same objective, which is the separation of concerns. They all achieve this
separation by dividing the software into layers. Each has at least one layer for business
13 August 2012
rules, and another for interfaces.

Each of these architectures produce systems that are:

1.​ Independent of Frameworks. The architecture does not depend on the existence
of some library of feature laden software. This allows you to use such frameworks
as tools, rather than having to cram your system into their limited constraints.
2.​ Testable. The business rules can be tested without the UI, Database, Web Server,
or any other external element.
3.​ Independent of UI. The UI can change easily, without changing the rest of the
system. A Web UI could be replaced with a console UI, for example, without
changing the business rules.
4.​ Independent of Database. You can swap out Oracle or SQL Server, for Mongo,
BigTable, CouchDB, or something else. Your business rules are not bound to the
database.
5.​ Independent of any external agency. In fact your business rules simply don’t
know anything at all about the outside world.

The diagram at the top of this article is an attempt at integrating all these architectures
into a single actionable idea.

Over the last several years we’ve seen a whole range of ideas regarding the architecture of
systems. These include:

●​ Hexagonal Architecture (a.k.a. Ports and Adapters) by Alistair Cockburn and
adopted by Steve Freeman, and Nat Pryce in their wonderful book Growing
Object Oriented Software
●​ Onion Architecture by Jeffrey Palermo
●​ Screaming Architecture from a blog of mine last year
●​ DCI from James Coplien, and Trygve Reenskaug.
●​ BCE by Ivar Jacobson from his book Object Oriented Software
Engineering: A Use-Case Driven Approach

The Dependency Rule

The concentric circles represent different areas of software. In general, the further in you
go, the higher level the software becomes. The outer circles are mechanisms. The inner
circles are policies.

The overriding rule that makes this architecture work is The Dependency Rule. This
rule says that source code dependencies can only point inwards. Nothing in an inner
circle can know anything at all about something in an outer circle. In particular, the
name of something declared in an outer circle must not be mentioned by the code in
an inner circle. That includes functions, classes, variables, or any other named software
entity.

By the same token, data formats used in an outer circle should not be used by an inner
circle, especially if those formats are generated by a framework in an outer circle. We
don’t want anything in an outer circle to impact the inner circles.
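One way to picture the rule: an inner circle declares an interface, and an outer-circle class implements it, so the source-code dependency points inward even when control flows outward. A minimal Python sketch (the article's examples use Java; `GreetingOutputPort`, `ConsolePresenter`, and the use-case function are illustrative names):

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass

# Inner circle: the use case declares every name it needs itself.
@dataclass
class GreetingResponse:
    message: str

class GreetingOutputPort(ABC):  # the "output port" the use case calls
    @abstractmethod
    def present(self, response: GreetingResponse) -> None: ...

def greet_use_case(name: str, output: GreetingOutputPort) -> None:
    # Mentions only inner-circle names; it never names the presenter.
    output.present(GreetingResponse(message=f"Hello, {name}"))

# Outer circle: the presenter implements the inner-circle interface, so
# its source-code dependency points inward while control flows outward.
class ConsolePresenter(GreetingOutputPort):
    def __init__(self) -> None:
        self.view_model = ""

    def present(self, response: GreetingResponse) -> None:
        self.view_model = response.message.upper()

presenter = ConsolePresenter()
greet_use_case("Ada", presenter)
print(presenter.view_model)  # HELLO, ADA
```

The use case compiles and runs without ever naming anything in an outer circle; the presenter is plugged in from outside.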

Entities

Entities encapsulate Enterprise wide business rules. An entity can be an object with
methods, or it can be a set of data structures and functions. It doesn’t matter so long as
the entities could be used by many different applications in the enterprise.

If you don’t have an enterprise, and are just writing a single application, then these
entities are the business objects of the application. They encapsulate the most general
and high-level rules. They are the least likely to change when something external
changes. For example, you would not expect these objects to be affected by a change to
page navigation, or security. No operational change to any particular application should
affect the entity layer.

Use Cases

The software in this layer contains application specific business rules. It encapsulates
and implements all of the use cases of the system. These use cases orchestrate the flow of
data to and from the entities, and direct those entities to use their enterprise wide
business rules to achieve the goals of the use case.

We do not expect changes in this layer to affect the entities. We also do not expect this
layer to be affected by changes to externalities such as the database, the UI, or any of the
common frameworks. This layer is isolated from such concerns.

We do, however, expect that changes to the operation of the application will affect the
use cases and therefore the software in this layer. If the details of a use case change, then
some code in this layer will certainly be affected.

Interface Adapters

The software in this layer is a set of adapters that convert data from the format most
convenient for the use cases and entities, to the format most convenient for some
external agency such as the Database or the Web. It is this layer, for example, that will
wholly contain the MVC architecture of a GUI. The Presenters, Views, and Controllers all
belong in here. The models are likely just data structures that are passed from the
controllers to the use cases, and then back from the use cases to the presenters and
views.

Similarly, data is converted, in this layer, from the form most convenient for entities and
use cases, into the form most convenient for whatever persistence framework is being
used, i.e. the Database. No code inward of this circle should know anything at all about
the database. If the database is a SQL database, then all the SQL should be restricted to
this layer, and in particular to the parts of this layer that have to do with the database.

Also in this layer is any other adapter necessary to convert data from some external
form, such as an external service, to the internal form used by the use cases and entities.

Frameworks and Drivers.

The outermost layer is generally composed of frameworks and tools such as the
Database, the Web Framework, etc. Generally you don’t write much code in this layer
other than glue code that communicates to the next circle inwards.

This layer is where all the details go. The Web is a detail. The database is a detail. We
keep these things on the outside where they can do little harm.

Only Four Circles?

No, the circles are schematic. You may find that you need more than just these four.
There’s no rule that says you must always have just these four. However, The
Dependency Rule always applies. Source code dependencies always point inwards. As
you move inwards the level of abstraction increases. The outermost circle is low level
concrete detail. As you move inwards the software grows more abstract, and
encapsulates higher level policies. The innermost circle is the most general.

Crossing boundaries.

At the lower right of the diagram is an example of how we cross the circle boundaries. It
shows the Controllers and Presenters communicating with the Use Cases in the next
layer. Note the flow of control. It begins in the controller, moves through the use case,
and then winds up executing in the presenter. Note also the source code dependencies.
Each one of them points inwards towards the use cases.

We usually resolve this apparent contradiction by using the Dependency Inversion
Principle. In a language like Java, for example, we would arrange interfaces and
inheritance relationships such that the source code dependencies oppose the flow of
control at just the right points across the boundary.

For example, consider that the use case needs to call the presenter. However, this call
must not be direct because that would violate The Dependency Rule: no name in an
outer circle can be mentioned by an inner circle. So we have the use case call an interface
(shown here as Use Case Output Port) in the inner circle, and have the presenter in the
outer circle implement it.

The same technique is used to cross all the boundaries in the architectures. We take
advantage of dynamic polymorphism to create source code dependencies that oppose the
flow of control so that we can conform to The Dependency Rule no matter what
direction the flow of control is going in.

What data crosses the boundaries.

Typically the data that crosses the boundaries is simple data structures. You can use
basic structs or simple Data Transfer Objects if you like. Or the data can simply be
arguments in function calls. Or you can pack it into a hashmap, or construct it into an
object. The important thing is that isolated, simple data structures are passed across the
boundaries. We don’t want to cheat and pass Entities or Database rows. We don’t want
the data structures to have any kind of dependency that violates The Dependency
Rule.

For example, many database frameworks return a convenient data format in response to
a query. We might call this a RowStructure. We don’t want to pass that row structure
inwards across a boundary. That would violate The Dependency Rule because it would
force an inner circle to know something about an outer circle.

So when we pass data across a boundary, it is always in the form that is most convenient
for the inner circle.

Conclusion

Conforming to these simple rules is not hard, and will save you a lot of headaches going
forward. By separating the software into layers, and conforming to The Dependency
Rule, you will create a system that is intrinsically testable, with all the benefits that
implies. When any of the external parts of the system become obsolete, like the database,
or the web framework, you can replace those obsolete elements with a minimum of fuss.
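As a footnote to the rule that only simple, isolated data structures cross boundaries, here is a minimal Python sketch; `OrderSummary` and `row_to_summary` are illustrative names, and the dict stands in for a framework's RowStructure:

```python
from dataclasses import dataclass

# A boundary-crossing structure: isolated, simple data with no behavior
# and no dependency on any framework, database row, or Entity.
@dataclass(frozen=True)
class OrderSummary:
    order_id: str
    total: float

# An adapter copies framework-owned data (a database "RowStructure",
# here just a dict) into the inner-circle-friendly form, dropping
# fields the inner circle has no business knowing about.
def row_to_summary(row: dict) -> OrderSummary:
    return OrderSummary(order_id=row["id"], total=row["total"])

summary = row_to_summary({"id": "o1", "total": 42.0, "db_internal_flag": 1})
print(summary)  # OrderSummary(order_id='o1', total=42.0)
```

The copy looks redundant, but it is what keeps a change to the row format from rippling into the inner circles.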


Quant Blog
The blog of the Data & AI experts

AI STRATEGY
09/10/2020

The role of the service mesh in a microservices architecture

Author: Thomas Meimoun
Reading time: 6 minutes

Introduction

Google, Amazon, Twitter, IBM, Netflix, and Airbnb have all chosen to use
microservices. Coupled with a service mesh, microservices let projects gain
agility and innovate faster. This article presents microservices and the
service mesh in general, as well as the Istio service.

What is a microservice?

General overview

Microservices aim to replace so-called monolithic architectures. A monolith
tries to answer every demand and every use case the application might face
over time. The problem is that as the project grows, some features become
obsolete or useless but are kept anyway. The result is an architecture that
is interdependent in terms of both code and complexity, making the overall
picture hard to understand. The project becomes delicate to work on, which
lowers risk-taking and novelty, because stability ends up being preferred
over innovation.

Microservices, by contrast, decompose the application into functional
modules, each answering a single need and providing one capability for the
business, such as:

Customer login
Shopping cart
Online chat
Payment
User Interface (UI)

Several microservices make up an application

Together, the microservices form an application, and the user can access
them through APIs. The goal of microservices is thus to offer more
scalability and agility for IT projects, with a dedicated team for each
microservice. Note that a single microservice can also be used on its own
if it meets the need.

The tools of microservices

From this overview we can identify three advantages of microservices:

1. Each microservice can be deployed, optimized, and scaled independently
2. Failures and errors are easier to manage
3. Each microservice can be developed and deployed autonomously, on
different technology stacks

With the rise of cloud computing, microservices are very useful and require
three fundamental components:

Containerization, such as Docker: one container per microservice unit. This
makes it possible to scale the number of instances of a microservice as
needed, allowing efficient management and deployment of the microservices
Orchestration, such as Kubernetes: manages the configuration and assignment
of available system resources
Service mesh, such as Istio: a mesh of proxies that secures, manages, and
connects the microservices. We will come back to the service mesh later

Although containerization gives each microservice its own environment, it
is important to watch the technology stacks in use (such as languages and
databases) for the overall health of the application.

Having presented microservices and the tools needed to deploy the
technology properly, let us look in more detail at the service mesh, which
lets the various microservices interact.

Communication between microservices: the service mesh

Overview

To explain the service mesh as simply as possible, take the example of a
shopping cart on an online store. When a user wants to make a purchase,
they necessarily pass through several microservices: authentication, a call
to the stock-inventory database, the payment gateway, and also article
suggestions based on their personal taste. This is where the service mesh
comes in, creating a communication channel between the microservices.

The goal of a service mesh is therefore to make communication between the
different microservices accessible while providing fault tolerance,
security, robustness, and above all accessibility. This is made possible by
service proxies that create the communication channel; these proxies follow
the Sidecar Pattern. The following image illustrates the situation:

Each microservice is associated with a single sidecar proxy

The service mesh makes it possible to:

Configure, maintain, and secure communications between all or some of an
application's microservices
Configure and run a microservice's network functions: robustness, load
balancing, failure handling
Decouple the network functions from the business logic, letting developers
move forward on the application's domain logic while communications keep
flowing through the channels. Moreover, proxy communication rides on
standard protocols (HTTP/1.x, HTTP/2, gRPC), giving developers the freedom
to use whatever technology they wish

The components of a service mesh

The service mesh is organized around two main parts:

1. Control Plane: all proxies are controlled and centralized by this
service, which specifies identity policies, metric generation, and the
configuration of the service proxies across the mesh
2. Data Plane: handles the meshing between the services. This is made
possible by the sidecar, which communicates with the Control Plane (the
primitive network functions). On the other side, the application network
functions, through the service proxy, maintain and manage critical network
functions such as robustness, load balancing, and failure handling

Having presented the service mesh in general, we can now focus on one of
the most widely used service meshes: Istio.

The role of Istio

Istio is an open-source service mesh that lets developers connect, manage,
and secure communications between microservices. It is a Kubernetes-native
solution. Other solutions exist, such as Consul and Linkerd; AWS offers App
Mesh, and Microsoft the Service Mesh Interface.

Although Istio is deployable on Kubernetes, it also works on-premise.

Istio deploys proxies, called Istio sidecars, next to each microservice.
Traffic then flows between the services' sidecars, which enforce access
policies between the microservices.

Istio's main features are:

Securing communications between microservices through identity policies
Layers for access control, quotas, and resource allocation
Automatic load balancing
Metrics, logs, and traffic (inbound and outbound) within a cluster

As presented in the previous section, the service mesh is divided into two
parts, the Data Plane and the Control Plane. The former uses proxies called
Envoy, driven by specific routing rules and deployed next to each
microservice.

The Control Plane, for its part, has a set of technical components: Pilot
manages the routing rules defined by administrators, Citadel focuses on
authentication and identity management, and finally Galley handles
validation, ingestion, processing, and distribution between services. The
following diagram shows at a glance how the mesh works with Istio.

Main components of Istio

Finally, the service mesh has some limits. A service mesh is very invasive,
and since it acts as an intermediary it can introduce slowdowns. It also
requires developers to understand both microservice meshing and Kubernetes
in order to produce an optimal architecture. Lastly, it adds complexity to
an existing project.

Conclusion

In conclusion, microservices enable a faster and more effective time to
market. This holds for launching new products, adapting to market needs,
and integrating innovation. In return, the increased speed and number of
iterations requires an effort to synchronize teams. This matters because it
means more exchanges and coordination between the different teams, and
therefore company-wide alignment of team objectives. Even so, the service
mesh is still little used in companies.
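To make the routing rules mentioned above concrete (the ones Pilot distributes to the Envoy sidecars), here is a minimal sketch of an Istio VirtualService; the service name, subsets, and weights are illustrative, not from the article:

```yaml
# Illustrative Istio VirtualService: splits cart traffic between two
# versions of the same microservice (names and weights are made up).
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: cart
spec:
  hosts:
    - cart            # the in-mesh service this rule applies to
  http:
    - route:
        - destination:
            host: cart
            subset: v1
          weight: 90
        - destination:
            host: cart
            subset: v2  # canary version receives 10% of traffic
          weight: 10
```

In practice this would be paired with a DestinationRule that defines the v1 and v2 subsets; the point here is only that routing lives in mesh configuration, not in application code.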
