Understanding Blackboard Architecture in Software
1. Sound Processor – it writes sound waves to the blackboard.
2. Word Identifier – it checks if those waves match known words.
3. Sentence Builder – it tries to form full sentences from the words.

How it works:

● Sound Processor writes “sound wave” on the blackboard.
● The blackboard alerts: “New sound wave available!”
● Word Identifier checks: “Ah! This looks like the word ‘Hello’.”
● It writes “Hello” on the blackboard.
● Blackboard alerts again: “New word added!”
● Sentence Builder sees it and adds it to a sentence.

This is how the system keeps working — by reacting to changes in the blackboard.

Each team generates data:

● Sales team knows what was sold.
● Shipping team knows when items were delivered.
● Support team knows what problems customers had.

In a traditional system, all data is sent to a central data warehouse, and a separate data team tries to make sense of it.

In Data Mesh, each team manages and shares its own data, like a mini service:

● Sales team shares clean, documented data about sales.
● Shipping team does the same for deliveries.
● Each team's data is treated like a product that other teams can use easily.
Data Fabric is a technology architecture and set of tools that connect all your data
sources together — no matter where the data lives (cloud, databases, apps, etc.) — and
make them work as one. It’s like a smart data “blanket” that covers everything and lets
you access and manage data easily and consistently.
Real-life analogy: imagine a library with books spread across different cities (cloud platforms), different buildings (databases), and in different languages (formats).

Key Features

● Ownership: Data Mesh is decentralized (each team owns its data); Data Fabric provides central tools for accessing distributed data.
● Goal: Data Mesh empowers teams to manage their own data; Data Fabric simplifies and unifies access to all data.
It is built around a business domain, that is, a specific function (e.g. authentication).

What is a monolith (monolithic architecture)?

A monolith is a single, massive application in which all the components (authentication, payment, catalog...) are grouped into one block.

Criterion: Monolith vs Microservices

● Deployment: simple at first (monolith).
● Long-term maintenance: ❌ increasingly difficult (monolith) vs ✅ more flexible and modular (microservices).
● Updates: ❌ risk of impacting the whole system vs ✅ fast, targeted updates.
● Scaling: ❌ everything must be scaled together vs ✅ each service can be scaled on its own.
● Modularity: ❌ low vs ✅ high (each service is autonomous).

● Each team is autonomous and contains all the necessary profiles (dev, test, ops...).

✅ "You build it, you run it"
➡️ The teams that develop a service are also responsible for keeping it running well (monitoring, bugs, incidents...). This promotes quality, autonomy, and accountability.

✅ Let data drive decisions (not HiPPO)
➡️ Decisions are made based on data, not according to the best-paid person (HiPPO = Highest Paid Person in the Office).

⚠️ Group Think
➡️ Can stifle innovation. Encourage independence of thought within teams.

YAGNI: "You Aren't Gonna Need It" → don't code what you don't need yet. This avoids useless features.

Product approach: each team manages its microservice as a long-term product: development, continuous improvement, support, performance.

Means of communication, and whether it is recommended for microservices:

✅ HTTP/REST: yes, often used for simple, direct communication.
❌ ESB (Enterprise Service Bus): NOT recommended → too centralized, contrary to the autonomy of microservices.
The connections must be simple; the services themselves must be smart (business logic, processing, etc.). This means:

● The communication endpoints must be easy to use, stable, and simple.
● All the complexity lives in the service itself, not in the way it is accessed.

Identify errors as early as possible, ideally in development or testing, not in production. This makes it possible to:

● fix problems quickly,
● reduce the cost of fixes,
● avoid failures that are visible to users.
Answer:
Abandon the concept of transactions in favor of a more relaxed approach.

Simple explanation:

● In microservices, it is hard to have global transactions across services.
● So we accept that data is not synchronized immediately, but that it will be eventually.
● This is a trade-off between availability and consistency.
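A minimal sketch of this trade-off, using an in-memory queue as a stand-in for a message broker (the service and event names are illustrative, not from the source):

```python
from collections import deque

# Two services that stay consistent *eventually* by exchanging events
# instead of sharing one global transaction.
events = deque()  # stand-in for a message broker

class OrderService:
    def __init__(self):
        self.orders = {}

    def place_order(self, order_id, amount):
        # Local transaction only: commit locally, then publish an event.
        self.orders[order_id] = amount
        events.append(("OrderPlaced", order_id, amount))

class BillingService:
    def __init__(self):
        self.invoices = {}

    def process_pending_events(self):
        # Consumes events whenever it can; until then, its view is stale.
        while events:
            name, order_id, amount = events.popleft()
            if name == "OrderPlaced":
                self.invoices[order_id] = amount

orders, billing = OrderService(), BillingService()
orders.place_order("o1", 42)
assert "o1" not in billing.invoices      # not synchronized immediately...
billing.process_pending_events()
assert billing.invoices["o1"] == 42      # ...but eventually consistent
```

The window between `place_order` and `process_pending_events` is exactly the availability/consistency compromise described above.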
Good answers:

MV* (MVC, MVVM...): separation between model (data), view (interface) and controller/logic.
Mainframe: everything is centralized on one large central server (mainframe computer).
Client-Server: the client (UI) sends requests to a central server.
Layered: architecture in layers: UI → business logic → data access.
N-tier (3-tier...): deployment in several separate tiers: UI, business logic, database.
Clean / Hexagonal / Onion: variants of inverted layered architecture centered on the business logic.
Microservices: several small independent services deployed separately.
P2P (Peer to Peer): no central server; each machine acts as both client and server.

3. Communication styles

These describe how components exchange data.

Pipe and Filter: also a communication style: the data flows through a chain of filters.
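A pipe-and-filter chain can be sketched with plain generator functions, each filter transforming the stream produced by the previous one (the filters here are illustrative):

```python
# Each filter is a function that consumes a stream and yields a new one.
def tokenize(lines):
    for line in lines:
        yield from line.split()

def lowercase(words):
    for w in words:
        yield w.lower()

def drop_short(words, min_len=3):
    for w in words:
        if len(w) >= min_len:
            yield w

def pipeline(source, *filters):
    # Chain the filters: the output of one becomes the input of the next.
    stream = source
    for f in filters:
        stream = f(stream)
    return stream

result = list(pipeline(["The Pipe AND Filter style"],
                       tokenize, lowercase, drop_short))
print(result)  # ['the', 'pipe', 'and', 'filter', 'style']
```

Because every filter has the same shape (stream in, stream out), filters can be reordered, removed, or added without touching the others.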
Event-Driven Architecture (EDA) – Summary

🔹 Processing Types

● Pub/Sub Model: consumers subscribe to receive events once they are published.
● Event Streaming Model: events are written to a log, and consumers read events from the stream at any point in time.

🔹 Benefits

● Asynchronous Communication: improves performance and responsiveness.
● Real-Time Awareness: enables timely decisions based on current data.
● Scalability: easily handles growing workloads.
● Flexibility & Fault Tolerance: components can change independently; the system continues to function if parts fail.
● Ease of Integration: new components can be added without disrupting the system.

Message-Driven vs Event-Driven Systems – Summary

🔹 Key Differences

● Target: a message is sent to a specific recipient; an event is sent to no specific consumer (broadcast-like).

🔹 Messages in CQRS/ES Context

1. Command: sent to a component to change state. Has an intent and a designated recipient.
2. Event: emitted after a state change. Describes what happened, not what should happen. Stored for future consumers (event sourcing).
3. Query: sent to retrieve information. Includes the sender's address for the reply.
4. Reply: the response to a query. Returned to the sender's specified address.
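The pub/sub model above can be sketched with a toy in-memory broker (the event names are hypothetical; a real system would use a broker such as Kafka or RabbitMQ):

```python
from collections import defaultdict

# Toy in-memory pub/sub broker: consumers subscribe to an event type and
# are called back when a producer publishes, with no designated recipient.
class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, event_type, handler):
        self.subscribers[event_type].append(handler)

    def publish(self, event_type, payload):
        # Broadcast: every subscriber of this event type receives it.
        for handler in self.subscribers[event_type]:
            handler(payload)

broker = Broker()
received = []
broker.subscribe("OrderPlaced", lambda e: received.append(("billing", e)))
broker.subscribe("OrderPlaced", lambda e: received.append(("shipping", e)))
broker.publish("OrderPlaced", {"order_id": 1})
assert received == [("billing", {"order_id": 1}),
                    ("shipping", {"order_id": 1})]
```

Note the broadcast-like delivery from the table above: the publisher names an event type, never a recipient, so new consumers can be added without changing the producer.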
Different types of "mesh" found in modern software architecture:

1. Service Mesh

A Service Mesh is an infrastructure dedicated to communication between services in a distributed system (often microservices).

Key features:

● Intelligent routing
● Load balancing
● Security (mTLS, authorization)
● Observability (tracing, logs, metrics)
● Resilience (circuit breakers, retries, timeouts)
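One of these resilience features, retries with backoff, can be sketched as a wrapper kept outside the business logic, which is roughly what a mesh sidecar does on the service's behalf (a simplified in-process stand-in, not a real mesh):

```python
import time

# Retry with exponential backoff, decoupled from the business logic,
# the way a sidecar proxy keeps network concerns out of the service.
def with_retries(call, attempts=3, backoff_s=0.01):
    last_error = None
    for i in range(attempts):
        try:
            return call()
        except ConnectionError as err:
            last_error = err
            time.sleep(backoff_s * (2 ** i))  # exponential backoff
    raise last_error

# A flaky "remote service" that fails twice, then succeeds.
calls = {"n": 0}
def flaky_service():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "ok"

assert with_retries(flaky_service) == "ok"
assert calls["n"] == 3
```

In a real mesh, this policy lives in the proxy's configuration, so the service code never mentions retries at all.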
2. Data Mesh

Data Mesh is a decentralized approach to managing data at scale, suited to modern microservice or cloud architectures.

Key principles:

1. Domain ownership of data (each domain manages its own data).
2. Data as a product (each team publishes usable data).
3. Self-service infrastructure (tools to discover, access and secure data).
4. Interoperability through shared standards.

Goal: move from a centralized architecture (data lake, data warehouse) to a decentralized, domain-oriented model.

3. API Mesh

An API Mesh is an abstraction layer that federates several backend APIs into a single, coherent interface.

Use cases: super apps, multi-backend integration in frontends or gateways.
4. Messaging Mesh / Event Mesh

Definition:
An Event Mesh is an infrastructure that dynamically connects event producers and consumers across different environments (cloud, edge, on-premise).

Features:

● Federated authentication
● Per-domain management of access policies
● SSO, OAuth2, OpenID Connect, etc.

In computer networking, a proxy server is a server application that acts as an intermediary between a client requesting a resource and the server providing that resource. It improves privacy, security, and possibly performance in the process.
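The intermediary role of a proxy server can be sketched with in-process stand-ins (no real sockets; caching is just one of the roles a proxy can play, chosen here to show the performance benefit):

```python
# The client talks only to the proxy; the proxy forwards to the origin
# server and can cache responses on the way back.
def origin_server(path):
    return f"content of {path}"

class CachingProxy:
    def __init__(self, upstream):
        self.upstream = upstream
        self.cache = {}
        self.misses = 0

    def get(self, path):
        if path not in self.cache:        # forward only on a cache miss
            self.misses += 1
            self.cache[path] = self.upstream(path)
        return self.cache[path]

proxy = CachingProxy(origin_server)
assert proxy.get("/index.html") == "content of /index.html"
assert proxy.get("/index.html") == "content of /index.html"
assert proxy.misses == 1  # second request served from the proxy's cache
```

The same intermediary position is what lets real proxies also filter requests, hide client addresses, or terminate TLS.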
Comparative summary

Type of Mesh | Main scope | Objective
Data-Centered Architecture

In data-centered architecture, the data is centralized and accessed frequently by other components, which modify the data. The main purpose of this style is to achieve integrity of data. Data-centered architecture consists of different components that communicate through shared data repositories. The components access a shared data structure and are relatively independent, in that they interact only through the data store.

The most well-known example of data-centered architecture is a database architecture, in which the common database schema is created with a data definition protocol, for example a set of related tables with fields and data types in an RDBMS. Another example is the web architecture, which has a common data schema (i.e. the meta-structure of the Web), follows a hypermedia data model, and whose processes communicate through shared web-based data services.

Types of Components

There are two types of components −

A central data structure or data store or data repository, which is responsible for providing permanent data storage. It represents the current state.

A data accessor, or a collection of independent components that operate on the central data store.

Interactions or communication between the data accessors is only through the data store. The data is the only means of communication among clients. The flow of control differentiates the architecture into two categories −

Repository Architecture Style
Blackboard Architecture Style

Repository Architecture Style

In Repository Architecture Style, the data store is passive and the clients (software components or agents) of the data store are active: they control the logic flow. The participating components check the data store for changes.

The client sends a request to the system to perform actions (e.g. insert data). The computational processes are independent and triggered by incoming requests. If the types of transactions in an input stream of transactions trigger selection of processes to execute, then it is the traditional database or repository architecture, or passive repository. This approach is widely used in DBMS, library information systems, the interface repository in CORBA, compilers and CASE (computer-aided software engineering) environments.

Advantages

Provides data integrity, backup and restore features.
Provides scalability and reusability of agents, as they do not have direct communication with each other.
Reduces the overhead of transient data between software components.

Disadvantages

It is more vulnerable to failure, and data replication or duplication is possible.
High dependency between the data structure of the data store and its agents.
Changes in the data structure highly affect the clients.
Evolution of data is difficult and expensive.
Cost of moving data on the network for distributed data.
Blackboard Architecture Style
In Blackboard Architecture Style, the data store is active and its clients are passive.
Therefore the logical flow is determined by the current data status in data store. It has a
blackboard component, acting as a central data repository, and an internal
representation is built and acted upon by different computational elements.
In this style, the components interact only through the blackboard. The data-
store alerts the clients whenever there is a data-store change.
The current state of the solution is stored in the blackboard and processing is
triggered by the state of the blackboard.
The system sends notifications known as trigger and data to the clients when
changes occur in the data.
If the current state of the central data structure is the main trigger of selecting
processes to execute, the repository can be a blackboard and this shared data
source is an active agent.
A major difference with traditional database systems is that the invocation of computational elements in a blackboard architecture is triggered by the current state of the blackboard, and not by external inputs.

Parts of Blackboard Model

The blackboard model is usually presented with three major parts − the knowledge sources, the blackboard data structure, and control.

Advantages

Provides scalability, which makes it easy to add or update knowledge sources.
Provides concurrency that allows all knowledge sources to work in parallel, as they are independent of each other.
Supports experimentation for hypotheses.
Supports reusability of knowledge source agents.
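A minimal sketch of the blackboard style, with an active store notifying passive knowledge sources (the speech-recognition names echo the example earlier in these notes, but the code itself is illustrative):

```python
# Blackboard style: the data store is active; it notifies registered
# knowledge sources on every change, and processing is driven by the
# blackboard's current state, not by clients calling each other.
class Blackboard:
    def __init__(self):
        self.data = {}
        self.observers = []

    def register(self, observer):
        self.observers.append(observer)

    def write(self, key, value):
        self.data[key] = value
        for obs in self.observers:     # the store triggers the clients
            obs.on_change(self, key)

class WordIdentifier:
    def on_change(self, board, key):
        if key == "sound_wave":
            board.write("word", board.data[key].strip().capitalize())

class SentenceBuilder:
    def __init__(self):
        self.sentence = []

    def on_change(self, board, key):
        if key == "word":
            self.sentence.append(board.data[key])

board = Blackboard()
builder = SentenceBuilder()
board.register(WordIdentifier())
board.register(builder)
board.write("sound_wave", " hello ")   # one write triggers the chain
assert builder.sentence == ["Hello"]
```

Compare with the repository sketch: here the store drives the clients, whereas there the clients drove the store.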
N-tier architecture

Karan Pratap Singh · 2 min read · Aug 16, 2022

N-tier architecture divides an application into logical layers and physical tiers. Layers are a way to separate responsibilities and manage dependencies. Each layer has a specific responsibility. A higher layer can use services in a lower layer, but not the other way around.

Tiers are physically separated, running on separate machines. A tier can call another tier directly, or use asynchronous messaging. Although each layer might be hosted in its own tier, that's not required. Several layers might be hosted on the same tier. Physically separating the tiers improves scalability and resiliency, but adds latency from the additional network communication.

An N-tier architecture can be of two types:

In a closed layer architecture, a layer can only call the next layer immediately down.
In an open layer architecture, a layer can call any of the layers below it.

A closed-layer architecture limits the dependencies between layers. However, it might create unnecessary network traffic if one layer simply passes requests along to the next layer.

Types of N-Tier architectures

Let's look at some examples of N-Tier architecture:

3-Tier architecture

3-Tier is widely used and consists of the following different layers:

Presentation layer: Handles user interactions with the application.
Business Logic layer: Accepts the data from the presentation layer, validates it as per business logic and passes it to the data layer.
Data Access layer: Receives the data from the business layer and performs the necessary operation on the database.

2-Tier architecture

In this architecture, the presentation layer runs on the client and communicates with a data store. There is no business logic layer or intermediate layer between client and server.

Single Tier or 1-Tier architecture

It is the simplest one, as it is equivalent to running the application on a personal computer. All of the required components for an application to run are on a single application or server.

Advantages

Here are some advantages of using N-tier architecture:

Can improve availability.
Better security, as layers can behave like a firewall.
Separate tiers allow us to scale them as needed.
Improved maintenance, as different people can manage different tiers.

Disadvantages

Below are some disadvantages of N-tier architecture:

Increased complexity of the system as a whole.
Increased network latency as the number of tiers increases.
Expensive, as every tier will have its own hardware cost.
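The 3-tier flow can be sketched in a few classes, one per layer, with each layer calling only the layer below it (the sign-up scenario is illustrative):

```python
# 3-tier sketch: presentation -> business logic -> data access.
class DataAccessLayer:
    def __init__(self):
        self.db = {}                      # stands in for a real database

    def save(self, user, email):
        self.db[user] = email

class BusinessLogicLayer:
    def __init__(self, dal):
        self.dal = dal

    def register(self, user, email):
        if "@" not in email:              # validation belongs here
            raise ValueError("invalid email")
        self.dal.save(user, email)

class PresentationLayer:
    def __init__(self, logic):
        self.logic = logic

    def handle_signup(self, user, email):
        try:
            self.logic.register(user, email)
            return "201 Created"
        except ValueError:
            return "400 Bad Request"

app = PresentationLayer(BusinessLogicLayer(DataAccessLayer()))
assert app.handle_signup("ada", "ada@example.com") == "201 Created"
assert app.handle_signup("bob", "not-an-email") == "400 Bad Request"
```

This is a closed-layer arrangement: the presentation layer never touches the data layer directly, which keeps dependencies one-directional.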
The Clean Architecture

13 August 2012

Over the last several years we've seen a whole range of ideas regarding the architecture of systems. These include:

● Hexagonal Architecture (a.k.a. Ports and Adapters) by Alistair Cockburn, and adopted by Steve Freeman and Nat Pryce in their wonderful book Growing Object Oriented Software
● Onion Architecture by Jeffrey Palermo
● Screaming Architecture from a blog of mine last year
● DCI from James Coplien and Trygve Reenskaug
● BCE by Ivar Jacobson from his book Object Oriented Software Engineering: A Use-Case Driven Approach

Though these architectures all vary somewhat in their details, they are very similar. They all have the same objective, which is the separation of concerns. They all achieve this separation by dividing the software into layers. Each has at least one layer for business rules, and another for interfaces.

Each of these architectures produces systems that are:

1. Independent of Frameworks. The architecture does not depend on the existence of some library of feature-laden software. This allows you to use such frameworks as tools, rather than having to cram your system into their limited constraints.
2. Testable. The business rules can be tested without the UI, Database, Web Server, or any other external element.
3. Independent of UI. The UI can change easily, without changing the rest of the system. A Web UI could be replaced with a console UI, for example, without changing the business rules.
4. Independent of Database. You can swap out Oracle or SQL Server for Mongo, BigTable, CouchDB, or something else. Your business rules are not bound to the database.
5. Independent of any external agency. In fact your business rules simply don't know anything at all about the outside world.

The diagram at the top of this article is an attempt at integrating all these architectures into a single actionable idea.

The Dependency Rule

The concentric circles represent different areas of software. In general, the further in you go, the higher level the software becomes. The outer circles are mechanisms. The inner circles are policies.

The overriding rule that makes this architecture work is The Dependency Rule. This rule says that source code dependencies can only point inwards. Nothing in an inner circle can know anything at all about something in an outer circle. In particular, the name of something declared in an outer circle must not be mentioned by the code in an inner circle. That includes functions, classes, variables, or any other named software entity.
By the same token, data formats used in an outer circle should not be used by an inner circle, especially if those formats are generated by a framework in an outer circle. We don't want anything in an outer circle to impact the inner circles.

Entities

Entities encapsulate enterprise-wide business rules. An entity can be an object with methods, or it can be a set of data structures and functions. It doesn't matter so long as the entities could be used by many different applications in the enterprise.

If you don't have an enterprise, and are just writing a single application, then these entities are the business objects of the application. They encapsulate the most general and high-level rules. They are the least likely to change when something external changes. For example, you would not expect these objects to be affected by a change to page navigation, or security. No operational change to any particular application should affect the entity layer.

Use Cases

The software in this layer contains application-specific business rules. It encapsulates and implements all of the use cases of the system. These use cases orchestrate the flow of data to and from the entities, and direct those entities to use their enterprise-wide business rules to achieve the goals of the use case.

We do not expect changes in this layer to affect the entities. We also do not expect this layer to be affected by changes to externalities such as the database, the UI, or any of the common frameworks. This layer is isolated from such concerns.

We do, however, expect that changes to the operation of the application will affect the use cases and therefore the software in this layer. If the details of a use case change, then some code in this layer will certainly be affected.

Interface Adapters

The software in this layer is a set of adapters that convert data from the format most convenient for the use cases and entities, to the format most convenient for some external agency such as the Database or the Web. It is this layer, for example, that will wholly contain the MVC architecture of a GUI. The Presenters, Views, and Controllers all belong in here. The models are likely just data structures that are passed from the controllers to the use cases, and then back from the use cases to the presenters and views.

Similarly, data is converted, in this layer, from the form most convenient for entities and use cases, into the form most convenient for whatever persistence framework is being used, i.e. the Database. No code inward of this circle should know anything at all about the database. If the database is a SQL database, then all the SQL should be restricted to this layer, and in particular to the parts of this layer that have to do with the database.

Also in this layer is any other adapter necessary to convert data from some external form, such as an external service, to the internal form used by the use cases and entities.

Frameworks and Drivers

The outermost layer is generally composed of frameworks and tools such as the Database, the Web Framework, etc. Generally you don't write much code in this layer other than glue code that communicates to the next circle inwards.

This layer is where all the details go. The Web is a detail. The database is a detail. We keep these things on the outside where they can do little harm.
Only Four Circles?

No, the circles are schematic. You may find that you need more than just these four. There's no rule that says you must always have just these four. However, The Dependency Rule always applies. Source code dependencies always point inwards. As you move inwards the level of abstraction increases. The outermost circle is low-level concrete detail. As you move inwards the software grows more abstract, and encapsulates higher-level policies. The innermost circle is the most general.

Crossing Boundaries

At the lower right of the diagram is an example of how we cross the circle boundaries. It shows the Controllers and Presenters communicating with the Use Cases in the next layer. Note the flow of control. It begins in the controller, moves through the use case, and then winds up executing in the presenter. Note also the source code dependencies. Each one of them points inwards towards the use cases.

The same technique is used to cross all the boundaries in the architectures. We take advantage of dynamic polymorphism to create source code dependencies that oppose the flow of control, so that we can conform to The Dependency Rule no matter what direction the flow of control is going in.

Typically the data that crosses the boundaries is simple data structures. You can use basic structs or simple Data Transfer Objects if you like. Or the data can simply be arguments in function calls. Or you can pack it into a hashmap, or construct it into an object. The important thing is that isolated, simple data structures are passed across the boundaries. We don't want to cheat and pass Entities or Database rows. We don't want the data structures to have any kind of dependency that violates The Dependency Rule.

For example, many database frameworks return a convenient data format in response to a query. We might call this a RowStructure. We don't want to pass that row structure inwards across a boundary. That would violate The Dependency Rule because it would force an inner circle to know something about an outer circle.

So when we pass data across a boundary, it is always in the form that is most convenient for the inner circle.
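This boundary-crossing technique can be sketched with an output-port interface owned by the inner circle (the use case and presenter here are hypothetical, not taken from the article's diagram):

```python
from abc import ABC, abstractmethod

# Flow of control: controller -> use case -> presenter.
# Source code dependencies: both use case and presenter depend only on
# the OutputPort, which the inner circle owns, so dependencies point in.
class OutputPort(ABC):                  # defined in the inner circle
    @abstractmethod
    def present(self, response: dict): ...

class GreetUserUseCase:                 # inner circle: knows no presenter
    def __init__(self, output: OutputPort):
        self.output = output

    def execute(self, name: str):
        # Passes a simple data structure across the boundary.
        self.output.present({"greeting": f"Hello, {name}"})

class ConsolePresenter(OutputPort):     # outer circle: depends inwards
    def __init__(self):
        self.view_model = None

    def present(self, response):
        self.view_model = response["greeting"].upper()

presenter = ConsolePresenter()
GreetUserUseCase(presenter).execute("ada")   # control flows outwards...
assert presenter.view_model == "HELLO, ADA"  # ...dependencies point in
```

Dynamic polymorphism is what makes this work: at runtime the use case calls outwards, but its source code never names anything in the outer circle.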
Introduction

The different microservice tools

After this presentation, we can identify three advantages of microservices:

1. Each microservice can be deployed, optimized and scaled independently
2. Easier handling of failures and errors
3. Each microservice can be developed and deployed autonomously, across different technology stacks

With the rise of cloud computing, microservices are very useful and require three fundamental components:

Containerization, such as Docker: one container is assigned per microservice unit. This makes it possible to adjust the number of instances of that microservice as needed, allowing efficient management and deployment of the microservices.

Orchestration, such as Kubernetes: handles the management, configuration and assignment of system resources.

Service Mesh, such as Istio: a mesh of services using proxies to secure, manage and connect the microservices. We will come back to the service mesh later.

Overview

To explain the service mesh as simply as possible, let's take the example of a shopping cart on an e-commerce site. When a user wants to make purchases, they necessarily go through several microservices, such as authentication, the call to the stock inventory database, the payment portal, but also article suggestions based on their personal tastes. This is the moment where the service mesh comes in, creating a communication channel between the microservices.

Consequently, the goal of a service mesh is to make communication between the different microservices accessible while taking into account fault tolerance, security, robustness and, above all, accessibility. This is made possible by implementing service proxies that create the communication channel; these proxies are called the Sidecar Pattern. The following image attempts to illustrate the situation.

We can see that each microservice is associated with a single Sidecar proxy.

The service mesh makes it possible to:

Configure, maintain and secure the communications between all or some of an application's microservices

Configure and run a microservice's network functions: robustness, load balancing, failure handling

The network functions are decoupled from the so-called "business logic", allowing developers to make progress on the application's business logic while preserving the communications between the channels. Moreover, the proxy communication runs on top of standard protocols (HTTP 1.x/2.x, gRPC), giving developers the possibility to use whatever technology they want.

The components of a service mesh

A service mesh is organized around two main axes:

1. Control Plane: all the proxies are controlled and centralized by this service, which specifies the identity policies, the generation of metrics and the configuration of the service proxies across the mesh

2. Data Plane: handles the meshing between the services. This is made possible by the Sidecar, which communicates with the Control Plane microservices, i.e. the primitive network functions. On the other side there are the application network functions which, through the service proxy, maintain and manage the critical network functions such as robustness, load balancing and failures

The role of Istio

Istio is an open-source service mesh that lets developers connect, manage and secure the communications between microservices. It is a Kubernetes-native solution. Other solutions exist, such as Consul or Linkerd; AWS offers App Mesh and Microsoft the Service Mesh Interface.

Although Istio can be deployed on Kubernetes, it also works on-premise.

Istio deploys proxies called Istio Sidecars, associated with each microservice. Traffic therefore flows between the services' Sidecars, which provide access policies between the microservices.

Istio's main features are:

Securing the communications between microservices through identity policies

Overlays for access control, quotas and resource allocation

Automatic load balancing of resources

Metrics, logs and traffic (inbound and outbound) within a cluster

As presented in the previous section, the service mesh is divided into two parts, the Data Plane and the Control Plane. The former uses proxies called Envoy, based on specific routing rules and deployed alongside each microservice. The Control Plane, for its part, has a set of technical components: Pilot manages the routing rules defined by the administrators, Citadel focuses on authentication and identity management, and finally Galley handles validation, ingestion, processing and distribution between services. The following diagram makes it possible to understand at a glance how the mesh works with Istio.

Conclusion

In conclusion, microservices enable a time to market that is both faster and more effective. This holds for launching new products, adapting to market needs, and also integrating innovation. In return, the increase in the speed and quantity of rapid iterations requires an effort to synchronize the teams. This point is important to mention because it leads to more exchanges and coordination between the different teams, which means the company must align objectives across the teams. Even so, the service mesh is still little used in companies.