DevOps Overview: Processes & Benefits

DevOps is a cultural and operational approach that combines software development and IT operations to enhance collaboration, automation, and delivery speed. It emphasizes continuous integration and delivery, enabling faster software releases and improved quality through practices like automated testing and monitoring. The document also discusses the DevOps lifecycle, tools, and methodologies like Agile and Scrum that support these processes.


DevOps: Introduction, Process &

Continuous Delivery
What is DevOps?
• - DevOps = Development + Operations
• - A culture and set of practices that bring together software development
(design, coding, testing, maintenance) and IT operations (infrastructure,
networking, hardware, and services).

• - Goal: Deliver applications and services at high velocity and quality


Why DevOps?
• - Faster delivery of software: Netflix deploys code thousands of times per
day using DevOps automation.
This enables them to release new features and fixes rapidly, keeping users
satisfied.
• - Improved collaboration between teams: Amazon’s DevOps model ensures
developers and operations teams work together, sharing responsibilities.
Developers take ownership of the app lifecycle from development to
operations, reducing blame and increasing efficiency
- Early detection of bugs: Etsy uses continuous integration (CI)
pipelines that run automated tests whenever code is committed.
This detects bugs early, often before they reach production.
- Continuous feedback and improvement: Facebook gathers real-time
monitoring data and A/B testing feedback to improve user experience.
DevOps practices help integrate that feedback quickly into the product cycle.
- Stable and reliable releases: Google uses canary deployments (releasing to a
small set of users first) and blue-green strategies to reduce risk in production.
Changes are gradually rolled out to a small group before full deployment,
ensuring system stability.
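A canary rollout can be sketched as a deterministic user-bucketing function. This is an illustrative assumption, not any particular company's implementation: hashing the user id keeps each user on the same version across requests, and only users below the canary percentage see the new code.

```python
import hashlib

def bucket(user_id: str, canary_percent: int) -> str:
    """Deterministically assign a user to the canary or stable release.

    Hashing the user id into 0-99 keeps each user on the same version
    across requests; users below the canary percentage see the new code.
    """
    slot = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "canary" if slot < canary_percent else "stable"

# Start by exposing roughly 5% of users to the new release; raise the
# percentage gradually as monitoring confirms the release is healthy.
rollout = sum(bucket(f"user{i}", 5) == "canary" for i in range(1000))
```

If error rates climb for canary users, the percentage is set back to 0, which is effectively an instant rollback.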
Key Benefits of DevOps
- Rapid feature delivery:
• Explanation:
DevOps enables faster and more frequent delivery of new features and updates through
automation and continuous integration/delivery (CI/CD).
Example:
Spotify
Spotify deploys updates and new features frequently using microservices and DevOps pipelines.
For example, they introduced personalized playlists like "Discover Weekly" by rolling out
changes incrementally and testing in real time. Features go live quickly with minimal downtime
and instant user feedback.
- Improved team productivity:
Explanation:
DevOps encourages shared responsibility, automation of repetitive tasks, and smooth workflows
between development, testing, and operations teams.
Real-World Example:
Adobe
Adobe transitioned to DevOps to eliminate manual testing and deployment. Their teams use
automated testing and cloud-based CI/CD, allowing developers to focus on innovation instead of
routine tasks.
• - Lower failure rate of releases:
Explanation:
With continuous testing, monitoring, and early bug detection, DevOps
significantly reduces the chances of introducing bugs into production.
Real-World Example:
Flickr
Flickr used DevOps to deploy updates 10+ times a day while maintaining stability.
Automated unit and integration tests were run before every deployment.

• -Faster recovery time:


Explanation:
If a release does fail, DevOps practices like version control, automated rollbacks,
and monitoring help quickly recover and fix issues.
Real-World Example:
Netflix
Netflix uses a tool called Chaos Monkey that randomly breaks parts of their
infrastructure to test resilience.
Their DevOps model allows instant rollbacks and self-healing systems.
• - Continuous improvement:
Explanation:
DevOps encourages constant monitoring and feedback loops, enabling ongoing
learning, performance tuning, and feature refinement.
Real-World Example:
Google
Google uses extensive metrics, logs, and user feedback from production to
continuously optimize services like Gmail and Search.
DevOps tools allow them to collect and act on this data efficiently.
DevOps Lifecycle
• - Plan → Develop → Build → Test → Release → Deploy → Operate →
Monitor

• - Represents continuous loop for software development and delivery


DevOps Lifecycle Stages
• - Plan: Define requirements and tasks (Jira, Trello)

• - Develop: Write and commit code (Git, GitHub)

• - Build: Compile source code (Jenkins, Maven)

• - Test: Automated testing (Selenium, JUnit)

• - Release: Package and prepare for deployment

• - Deploy: Launch application (Docker, Kubernetes)

• - Operate: Maintain uptime (Prometheus, Nagios)

• - Monitor: Analyze logs/metrics (Grafana, ELK Stack)


Introduction to Continuous Delivery (CD)

• - CD = Continuously delivering tested code to production

• - Automates release process

• - Ensures software is always in a deployable state

• - Works with CI (Continuous Integration)


CI vs CD

• CI (Continuous Integration) vs CD (Continuous Delivery):

• - Frequent code integration vs Frequent, automated delivery

• - Automated builds/tests vs Automated release/deployment

• - Detect issues early vs Release anytime with confidence


Benefits of Continuous Delivery
• - Faster time to market

• - Less manual intervention

• - More reliable deployments

• - Easy rollbacks

• - Increased developer and customer satisfaction


Real-Time Example: E-Commerce Site
• Feature: "Add to Wishlist"
• Without DevOps:
• - Manual testing, manual deployment
• - Delays, bugs, and inefficiencies
• With DevOps:
• - Code pushed to GitHub
• - Jenkins triggers build & tests
• - Docker package created
• - Deployed via Kubernetes
• - Monitored via Prometheus
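The flow above (push, build and test, package, deploy, monitor) can be sketched as a toy stage runner. The stage functions are hypothetical stand-ins for the real Jenkins, Docker, and Kubernetes steps; the point is only that a pipeline runs stages in order and aborts on the first failure.

```python
def run_pipeline(stages):
    """Run each stage in order and stop at the first failure,
    the way a CI server aborts a broken build."""
    for name, step in stages:
        print(f"running {name} ...")
        if not step():
            print(f"{name} failed; aborting pipeline")
            return False
    print("pipeline succeeded")
    return True

# Hypothetical stand-ins for the real steps (Jenkins build,
# automated tests, Docker packaging, kubectl deploy).
stages = [
    ("build",   lambda: True),
    ("test",    lambda: True),
    ("package", lambda: True),
    ("deploy",  lambda: True),
]
run_pipeline(stages)
```

A failed test stage returning False is what keeps a broken "Add to Wishlist" commit from ever reaching Kubernetes.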
Tools Used in DevOps
• Version Control: Git, GitHub

• CI/CD: Jenkins, GitLab CI, CircleCI

• Config Management: Ansible, Puppet, Chef

• Containerization: Docker, Podman

• Orchestration: Kubernetes

• IaC: Terraform

• Monitoring: Prometheus, Grafana, ELK


Summary
• - DevOps bridges the gap between Dev and Ops

• - Encourages collaboration, automation, and faster delivery

• - Continuous Delivery ensures code is always production-ready

• - Real-time examples like e-commerce sites show its impact


DevOps as a Folk Model

• The DevOps Folk Model is a conceptual model that illustrates how Dev
(developers) and Ops (operations) work together throughout the software
delivery lifecycle to create fast, reliable, and continuous software delivery.
• It is called a "folk model" because it's not a strict framework — it's a
widely accepted mental model or common understanding of how
DevOps works in practice.

• The "folk model" of DevOps refers to an informal, experience-based
understanding of how DevOps practices work in real-world scenarios.
Rather than being rigid or strictly defined, the folk model is pragmatic,
focused on collaboration, feedback, and rapid iteration between
Development and Operations teams.
The DevOps Folk Model: Core Concepts

The folk model describes a collaborative loop involving:


1. Development
• Developers write code
• Push to version control (e.g., Git)
• Unit tests are executed automatically
• Code is built into artifacts (e.g., Docker images, packages)

2. Testing
• Automated tests (unit, integration, functional)
• QA validates behavior in a test/pre-prod environment

3. Deployment
• Code is deployed to staging and then production
• Continuous Delivery or Continuous Deployment used
• Configuration managed as code
4. Operations
• Monitor performance, logs, errors
• Manage infrastructure (via IaC tools like Terraform)
• Handle scaling, rollbacks, alerts
IaC (Infrastructure as Code) is a practice in DevOps where infrastructure
(servers, networks, databases, etc.) is defined and managed using code, instead
of manual processes.
An IaC tool helps automate the provisioning, configuration, and management of
infrastructure through scripts or declarative code.

Example: What is Terraform?


• Terraform is an open-source Infrastructure as Code (IaC) tool developed by
HashiCorp. It lets you define, provision, and manage cloud infrastructure
using a declarative configuration language called HCL (HashiCorp
Configuration Language).
• Key Idea
• With Terraform, you write code to describe what infrastructure you want (like
EC2 instances, databases, load balancers), and Terraform takes care of creating,
updating, or destroying that infrastructure for you — across one or many cloud
providers.
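A minimal configuration in the declarative style described above might look like the following. The resource names, tags, and AMI ID are placeholder assumptions for illustration, not a working deployment:

```hcl
# Hypothetical example: declare one AWS EC2 instance.
provider "aws" {
  region = "us-east-1"
}

resource "aws_instance" "web" {
  ami           = "ami-12345678"   # placeholder AMI ID
  instance_type = "t2.micro"

  tags = {
    Name = "example-web"
  }
}
```

Running `terraform plan` shows what would be created or changed, and `terraform apply` makes the real infrastructure match the file.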
DevOps as a Folk Model
• Real-World Example: Online Food Delivery App
• (Applies to apps like Zomato, Swiggy, Uber Eats)

ShopSwift – An E-commerce Platform


ShopSwift is an online retail company with a product team that builds
and maintains a shopping web app. They’ve adopted DevOps
practices using the folk model to continuously deliver updates to their
platform.

DevOps Folk Model Applied to ShopSwift

1. Plan
• Teams involved: Product Owners, Devs, QA, Ops
• Tooling: Jira, Confluence
• Weekly sprints are planned — features like "Add wishlist" or "Improve payment
flow" are prioritized.
2. Develop
• Teams involved: Developers
• Tooling: GitHub, Visual Studio Code
• Code is written using microservices architecture (e.g., frontend in React, backend in [Link]).
• Code is committed and pushed to GitHub → triggers CI/CD pipelines.

3. Build & Test


• Tooling: Jenkins + Docker + Jest + Selenium + SonarQube
• CI runs:
– Unit tests
– Static code analysis
– Docker image builds
• The image is tagged and pushed to a container registry (e.g., ECR or Docker Hub).

4. Pre-Prod QA Testing
• Tooling: Kubernetes (Staging Cluster), Postman, Cypress
• The latest image is deployed to a staging (pre-prod) environment.
• QA engineers run:
– Functional tests
– API tests
– Performance tests
5. Release & Deploy
• Tooling: ArgoCD or Helm (for K8s), Terraform (IaC), AWS
• Once approved, the deployment is promoted to production using GitOps practices.
• Infrastructure is provisioned and managed using Terraform.
• ArgoCD pulls changes from Git and applies them to the Kubernetes production
cluster.

6. Operate & Monitor


• Tooling: Prometheus, Grafana, ELK Stack, PagerDuty
• The operations team monitors:
– Server health
– Traffic spikes
– Application logs and error rates
• On-call engineers are alerted if something fails (e.g., payment service outage).

7. Feedback & Improve


• Tooling: New Relic, Sentry, Customer Support CRM
• User experience data and error tracking provide insights for future improvements.
• Feedback loops back to planning → starts the cycle again.
What is the DevOps Compact?

• The DevOps Compact is a set of principles or agreements that align the
goals, responsibilities, and mindset of everyone involved in the software
delivery lifecycle, from idea to production. It is an unwritten contract
that promotes collaboration, ownership, automation, and continuous
improvement across roles.

• Think of it as a social contract within an organization that defines how
teams work together to deliver, operate, and improve software efficiently
and reliably.
Core Pillars of the DevOps Compact
1. Shared Responsibility for Delivery and Operations

• Principle:
All team members share responsibility for delivering and running software
— not just Dev or Ops.

Example:
• In a DevOps compact team, if an API service crashes in production,
developers are involved in diagnosing and fixing the issue, not just Ops.
• Teams follow the "You build it, you run it" principle (popularized by Amazon).
2. End-to-End Visibility

Principle:
Everyone has access to logs, metrics, deployment pipelines, and system health
dashboards.

Example:

• Devs can view real-time monitoring dashboards (e.g., Grafana,


Prometheus) to see how their code performs in production.

• Product managers can check deployment status in CI/CD tools like


Jenkins or GitHub Actions.
3. Automation Over Manual Work

Principle:
Automate everything: testing, builds, deployments, infrastructure, monitoring,
etc.

Example:
Use Terraform to define infrastructure as code.

Use GitOps to trigger deployments based on Git commits.

Use CI/CD pipelines to run automated tests and deploy on merge.


4. Continuous Testing and Quality Assurance

Principle:
Testing is everyone’s responsibility and part of the delivery pipeline.

Example:

• Developers write unit and integration tests using frameworks like JUnit or
Jest.
• QA engineers focus on exploratory testing and improving test automation
coverage.
• All tests run automatically on every pull request.
5. Continuous Learning and Feedback Loops

Principle:
Incidents, deployments, and releases are learning opportunities. Feedback is
fast and constant.

Example:

• After a production incident, the team holds a blameless postmortem to
identify root causes and fix them.
• Teams implement feedback from real users via telemetry, user behavior
tracking (e.g., Mixpanel), or support tickets.
6. Frequent, Small, Safe Changes

Principle:
Deliver small, incremental changes that are easier to test, review, and roll back.

• Example:

• Instead of a big monthly release, features are deployed weekly or daily
behind feature flags.
• Canary deployments are used to release new code to a small subset of users
first.
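A feature flag can be as simple as a lookup plus a rollout rule. The in-memory flag store and the `new_checkout` flag below are hypothetical; production systems typically use a flag service (e.g., LaunchDarkly or Unleash) instead.

```python
# Hypothetical in-memory flag store; a real system would read this
# from a flag service or config store, not a module-level dict.
FLAGS = {
    "new_checkout": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag: str, user_id: int) -> bool:
    """Expose a flag to a stable slice of users based on their id."""
    cfg = FLAGS.get(flag)
    if cfg is None or not cfg["enabled"]:
        return False
    # Users whose id falls in the first rollout_percent slots see the feature.
    return user_id % 100 < cfg["rollout_percent"]
```

Turning a feature off is then a config change, not a redeploy, which is what makes flag-guarded releases safe to ship frequently.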
7. Security Is a Shared Concern (DevSecOps)

Principle:
Security is built into the pipeline, not added at the end.

• Example:

• Developers use tools like Snyk or Trivy to scan for vulnerabilities in code
and container images.
• Secrets are managed via vaults (e.g., HashiCorp Vault), not hardcoded.
Software Development Methodologies
• The process of splitting up development work, usually into distinct phases, is
known as a software development methodology.

 These phases may include:

i. Specification of deliverables or artifacts
ii. Development and verification of code with respect to the specification
iii. Delivery of the code to its final customers or production environment

 Different Software development model:

1. Waterfall
2. Agile
3. Scrum
Waterfall model
• The waterfall model is a project management process with an emphasis on
sequential progression from one stage of the process to the next.
• The original stages are requirements specification, design, implementation,
integration, testing, installation, and maintenance, with progress visualized as
flowing from one stage to the next; hence the name waterfall.
Agile
• Agile is the name given to a group of software development methodologies
that are designed to be more lightweight and flexible than previous
methods such as waterfall .
• Agile includes processes such as Scrum, which we will define next, and other
methods that place a heavy emphasis on collaboration, flexibility, and the
end result of working software.
• Agile is a methodology (and mindset) for software development and project
management that emphasizes:
• Iterative and incremental work – delivering value in small, usable pieces rather
than all at once.
• Flexibility and adaptability – responding to change quickly instead of rigidly
following a fixed plan.
• Collaboration and communication – fostering close cooperation among cross-
functional teams and with customers.
• Continuous improvement – regularly reflecting on processes and outcomes to
adapt and improve.
• Customer focus – prioritizing delivering working solutions that meet user needs.
Its foundation comes from the Agile Manifesto (2001), which values:
• Individuals and interactions over processes and tools
• Working software over comprehensive documentation
• Customer collaboration over contract negotiation
• Responding to change over following a plan
• In short:
Agile is a way of working that breaks big problems into smaller,
testable steps, values teamwork and adaptability, and focuses on delivering real
value to users continuously.
In the book, they highlight that:
• Agile and DevOps share the same cultural foundation → Both emphasize
collaboration, communication, and breaking down silos. Agile does this within
development teams, while DevOps extends it to include operations, QA, security,
and beyond.
• Agile focuses on how we build software → short iterations, rapid feedback,
adapting to change.
• DevOps focuses on how we deliver and operate software → automation,
continuous integration/deployment, monitoring, and shared responsibility.
• Together, Agile and DevOps create a continuous cycle: plan → build → test →
release → operate → learn → adapt.
• Jennifer Davis frames Agile as a precursor and enabler of DevOps: you can adopt
Agile development practices without DevOps, but you can’t fully realize DevOps
without Agile principles (because you need iteration, feedback loops, and
collaboration for it to work).
• Agile as the mindset and process foundation, while DevOps extends those
principles across the whole lifecycle and organization.
Scrum
• Scrum is a software development methodology
that focuses on maximizing a development
team's ability to quickly respond to changes
in both the project and customer
requirements.
• It uses predefined development cycles called
sprints.
• Scrum is a framework within Agile methodology, used to manage complex
projects (commonly software development). It organizes work into time-
boxed iterations called Sprints (usually 2–4 weeks).
• Key elements of Scrum:
• Roles:
– Product Owner → defines priorities and goals (represents customer).
– Scrum Master → facilitates process, removes blockers, ensures team follows Scrum
principles.
– Development Team → cross-functional members who design, build, test, and deliver.
• Artifacts:
– Product Backlog → list of all desired features/tasks.
– Sprint Backlog → subset of items chosen for the current Sprint.
– Increment → working, potentially shippable product delivered at the end of a Sprint.
• Ceremonies (Meetings):
– Sprint Planning → decide what to build this sprint.
– Daily Standup → short (15-min) sync about progress/blockers.
– Sprint Review → demo finished work to stakeholders.
– Sprint Retrospective → reflect on process, find improvements.
Example of Scrum in Action

• Imagine a company building a food delivery app.


• Product Backlog Items (PBIs):
– User registration
– Browse restaurants
– Add to cart
– Payment integration
– Delivery tracking
• Sprint 1 (2 weeks):
• The team selects User registration and Browse restaurants from backlog.
• They design, code, test, and by the end of Sprint, a user can register and see restaurants.
• In the Sprint Review, they demo to stakeholders, who ask for simpler signup with
Google.
• In the Retrospective, team realizes testing took too long, so they decide to automate it
next sprint.
• Next sprint, they move on to “Add to cart” + integrate the Google signup feedback.
Over several sprints, the app grows in working increments.
Scrum’s Connection with DevOps

• Scrum ensures short, iterative development cycles. DevOps ensures smooth


delivery, automation, and operations.
• Here’s how they align:
• Continuous Integration/Continuous Delivery (CI/CD): Scrum teams deliver small
increments every sprint. DevOps automates testing, deployment, and monitoring,
ensuring these increments reach production quickly and safely.
• Collaboration: Scrum already breaks silos inside development. DevOps extends this
collaboration to include operations, security, and infrastructure.
• Feedback Loops: Scrum gets feedback from stakeholders each sprint. DevOps adds
real-time feedback from production (monitoring, logs, user metrics).
• Retrospectives vs. Continuous Improvement: Scrum retrospectives drive internal
process improvements; DevOps practices (like blameless postmortems) reinforce
continuous learning across the whole delivery pipeline.

• In short: Scrum helps organize what to build and in what order, while DevOps helps
teams deliver it continuously, reliably, and at scale.
Scrum vs. DevOps at a glance

• Scrum = How teams organize and deliver work (process + collaboration).
• DevOps = How teams automate and deliver software (tools + practices).
• Scrum gives structure for planning & teamwork;
DevOps provides the automation & infrastructure to deliver continuously.

How They Connect


• Scrum focuses on short cycles (Sprints) → DevOps enables continuous
delivery inside those Sprints.
• Scrum teams commit to increments → DevOps pipelines make sure
increments can be built, tested, and deployed automatically.
• Scrum emphasizes feedback → DevOps enables quick feedback from
automated tests, monitoring, and users in production.
Operations Methodologies

• Just as software development methodologies split software development
work into different phases or otherwise bring more order to those processes,
IT or operations work can be split up or organized as well.

• ITIL
• COBIT
ITIL
• ITIL, formally the Information Technology Infrastructure Library, is a set of practices defined for
managing IT services. It is published as a series of five volumes that describe its processes, procedures,
tasks, and checklists, and is used to demonstrate compliance as well as measure improvement toward that
end.
• ITIL Service Operation Methodologies

1. Event Management
• Definition:
Monitors and manages all events occurring in the IT infrastructure to ensure normal operation and detect
exceptions.
• Example:
A network monitoring tool (like Nagios or SolarWinds) detects that a server’s CPU usage has crossed
90%. It automatically triggers an alert to the operations team before the server crashes, enabling
proactive resolution.
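The monitoring rule described above amounts to comparing a metric against a threshold and raising an alert before the server falls over. The sketch below is a toy version of that check; the host names, samples, and 90% threshold are illustrative assumptions, not Nagios or SolarWinds configuration.

```python
def check_cpu(samples: dict, threshold: float = 90.0) -> list:
    """Return alert messages for hosts whose CPU exceeds the threshold."""
    return [
        f"ALERT: {host} CPU at {cpu:.0f}% (threshold {threshold:.0f}%)"
        for host, cpu in samples.items()
        if cpu > threshold
    ]

# One host is over the line, one is healthy; only the first alerts.
alerts = check_cpu({"web-1": 95.0, "web-2": 40.0})
```

In a real setup the alert list would be routed to an on-call system such as PagerDuty rather than returned to the caller.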
2. Incident Management
– Goal: Restore normal service quickly when something breaks.
– Example:
• A company’s payment system goes down.
• Users cannot check out in the food delivery app.
• The IT support team logs the incident, prioritizes it, and works on a quick fix.
• Business impact is reduced (payments restored in 30 minutes).
3. Problem Management
- Goal: Find root causes of recurring incidents and prevent them.
- Example:
– Payment system went down 3 times last month.
– Root cause analysis shows database connection pooling is misconfigured.
– Permanent fix is applied → reduces future incidents.
4. Request Fulfilment
• Definition:
Manages service requests from users (non-disruption related), such as requests for
information, password resets, or new hardware/software.
• Example:
An employee submits a request for access to a new project management tool. The Service
Desk processes the request through automated workflows and provides access within 24
hours.
5. Access Management
• Definition:
Grants authorized users the right to use a service while preventing unauthorized access.
• Example:
When a new employee joins, HR triggers a request to grant access to payroll systems. Access
Management ensures the user has proper permissions and that no one outside HR can access
confidential payroll data.
COBIT
COBIT (Control Objectives for Information and Related Technologies) is a
framework for IT governance and management.
• Created by ISACA : Information Systems Audit and Control Association
• Helps organizations ensure that IT is aligned with business goals, delivers value,
and manages risks effectively.
• Think of it as the "big picture framework" for governing IT—not just managing
day-to-day operations like ITIL, but ensuring IT is secure, compliant, and
contributes to business strategy.
COBIT Key Principles
• Meeting stakeholder needs → IT should support business goals.
• End-to-end governance of enterprise IT → Covers everything (processes,
people, technology, information).
• Applying a single integrated framework → Works alongside ITIL, Scrum,
DevOps, etc.
• Enabling a holistic approach → Includes processes, culture, organizational
structures, and technology.
• Separating governance from management →Governance = deciding direction
(e.g., risk, compliance).
– Management = executing (day-to-day IT operations).
Example: Bank implementing COBIT

Imagine a bank that wants to improve its IT governance.

• Business Goal
• Ensure customer data security while enabling mobile banking innovation.

• How COBIT helps:

• Governance (Board/Executives)
– Define risk appetite → "Customer data must always be encrypted."
– Align IT with business → Approve investments in mobile banking security.
• Management (IT Teams)
– Apply policies → Implement encryption in databases and mobile apps.
– Monitor → Run audits, track compliance with data privacy laws (e.g., GDPR).
– Report → Send regular updates to executives on risk and compliance.
• Result
• Bank can innovate (mobile app features) while ensuring security & compliance.
• COBIT provides the governance structure to balance risk, value, and compliance.
COBIT is essential because it:
• Bridges the gap between IT and business.
• Ensures IT investments deliver value.
• Protects organizations from technology-related risks.
• Helps achieve compliance and efficiency.
System Methodologies
• They focus on thinking about the system as a whole rather than limiting focus to more
specific areas such as software development or IT operations. Systems-thinking skills are
crucial for anyone working with complex systems.
• Lean Production in DevOps
• Definition: Lean Production is a methodology focused on maximizing value while
minimizing waste in processes. It originated from manufacturing (Toyota Production
System) but is widely applied in software development and DevOps.
• Goal in DevOps: Deliver high-quality software faster by removing bottlenecks,
automating repetitive tasks, and focusing on customer value.

James Womack, Daniel Jones, and Daniel Roos popularized the term Lean Production after a five-year study.
The five principles of lean were defined as:
1. Value
2. Value Stream
3. Flow
4. Pull
5. Perfection
1. Value – Focus on What Delivers Value to the Customer

• Meaning: Identify what the customer truly needs and prioritize those
features.

• DevOps Example:
– Using Agile user stories and CI/CD pipelines to release only features
that meet customer needs.
– Example: A streaming service focuses on improving video load time
rather than adding unnecessary UI animations.
2. Value Stream – Map the Entire Process

• Meaning: Visualize all steps from idea to delivery to identify bottlenecks
and waste.

• DevOps Example:
– Mapping the flow of code from development → testing →
deployment → monitoring.
– Example: Using Value Stream Mapping (VSM) to see delays in
manual QA approvals, then automating those tests with Jenkins.
3. Flow – Ensure Smooth, Continuous Delivery

• Meaning: Remove interruptions, delays, and inefficiencies to enable
smooth work progression.
• DevOps Example:
– Implementing CI/CD pipelines so code changes automatically build,
test, and deploy without manual intervention.
– Example: A retail app deploying updates multiple times per day instead
of waiting for monthly release cycles.
4. Pull – Deliver When Needed

• Meaning: Produce (or deploy) only when there is demand, avoiding
overproduction and unused features.
• DevOps Example:
– Using on-demand infrastructure (e.g., AWS auto-scaling) and
feature flags to release features only when ready and requested.
– Example: A payment service enables a new fraud detection feature only
for selected users during beta testing.
5. Perfection – Continuous Improvement

• Meaning: Continuously refine processes to reduce waste and improve
quality.
• DevOps Example:
– Regular retrospectives, automated monitoring, and feedback loops
to improve pipelines and performance.
– Example: After monitoring production errors, a DevOps team
automates rollback mechanisms, reducing downtime by 60%.
Development, release and deployment
Concepts
• There are several different terms associated with development, release, and
deployment concepts. These concepts describe the hows of developing and
deploying software, and understanding what they are and how they relate
will give readers a more mature understanding of how tools can be used to
facilitate these practices.

• Version Control: Version Control is the practice of tracking and managing
changes to software code and related files over time. In DevOps, Version
Control Systems (VCS) allow teams to collaborate efficiently, maintain a history
of code changes, and integrate with automated build/test/deployment pipelines.

• Test Driven Development: In this, the code developer starts by writing a failing
test for a new code functionality, then writes the code itself, and finally ensures
that the test passes when the code is complete. The test is a way of defining the
new functionality clearly.
Version Control: Flow diagram
1. Planning Phase
• Plan (Start Sprint)

• Inputs: Backlog tasks, bugs, and feature requests are collected from users or
business teams.

• Activity: Teams prioritize and plan tasks for a sprint (Agile approach).

• Goal: Define what will be developed, tested, and deployed in this iteration.

- Collect backlog tasks, bugs, and feature requests


- Prioritize and plan sprints using Agile methodologies
- Define deliverables for the sprint
2. Development Phase

• Developers work on dedicated development databases (Dev DBs) to make
changes safely without affecting production.
• Schema Changes Review: Database schema or code changes are reviewed before
merging into the main codebase.
• Topic Branch & Pull Request:
– Developers create feature/topic branches in the Version Control System (e.g., Git).
– Pull requests (PRs) are created for review by peers before merging into the
main branch.

3. Delivery Phase

• Generate migration scripts for database/application changes.
• Automated build & test pipelines (CI/CD).
• Approval, merge into main branch, and staging deployment.
4. Operations Phase

• Production Deployment: After staging verification, the code is deployed to
production.
• Monitoring (Check/Drift Check):
– Continuous monitoring ensures the application is performing as expected.
– Drift checks ensure that the production environment matches the expected
configuration (no unapproved changes).
• Feedback Loop: Bugs and issues from production are sent back to the
backlog, feeding into the next sprint.
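A drift check boils down to comparing the configuration declared in code against what is actually running. The keys and values below are hypothetical; in practice this comparison is done by tools such as `terraform plan` against live cloud state.

```python
def find_drift(expected: dict, actual: dict) -> dict:
    """Return settings whose live value differs from the declared one."""
    return {
        key: {"expected": expected[key], "actual": actual.get(key)}
        for key in expected
        if actual.get(key) != expected[key]
    }

# Hypothetical declared vs. live configuration for one service.
expected = {"replicas": 3, "image": "shop:v2", "debug": False}
actual   = {"replicas": 3, "image": "shop:v1", "debug": True}
drift = find_drift(expected, actual)
# 'image' and 'debug' are flagged as unapproved changes
```

Any non-empty result means someone or something changed production outside the pipeline, which is exactly what the drift check is meant to catch.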
Test Driven Development
Key Steps in TDD
• Write a Test – Create a test case for a small functionality, even though the code
doesn’t exist yet.
• Run the Test – Initially fails because the functionality isn’t implemented.
• Write Code – Implement just enough code to make the test pass.
• Run Tests Again – Ensure all tests pass.
• Refactor Code – Optimize the code without changing its functionality.
• Repeat – Continue writing new tests for new features.

Example in DevOps Context


• Scenario: A DevOps team is building a login API.
• Step 1: Write a test for "User should be able to log in with valid credentials".
• Step 2: Run test → Fails because no login code exists.
• Step 3: Implement the login function.
• Step 4: Run the test again → Passes.
• Step 5: Commit the code → CI pipeline triggers automated tests.
• Step 6: If tests fail in CI, changes are rejected until fixed.
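Steps 1–4 can be sketched in Python. The in-memory user store and the credentials below are hypothetical stand-ins for a real login API, not an actual implementation.

```python
# TDD sketch for the login example. At step 1 only the test exists and fails;
# the minimal implementation from step 3 is shown here so the test now passes.
# The user store and credentials are hypothetical.

_USERS = {"alice": "s3cret"}

def login(username: str, password: str) -> bool:
    # Step 3: just enough code to make the test pass (no hashing, no lockout).
    return _USERS.get(username) == password

def test_login_with_valid_credentials():
    # Step 1: the test, written before the implementation existed.
    assert login("alice", "s3cret") is True
    assert login("alice", "wrong") is False

test_login_with_valid_credentials()  # Step 4: run the tests again
print("tests passed")
```

In the CI pipeline (steps 5–6), the same test function would be collected and run automatically, e.g. by pytest, and a failure would block the merge.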
Red – Write a Failing Test
• What It Means:
– Before writing any code, you write a test that defines what the code should do.
– The test will fail initially because the feature does not exist yet.
Example:
- You’re developing a calculator app.
- You write a test: assert add(2, 3) == 5
- It fails as add() is not yet implemented.

Green – Write Just Enough Code to Pass the Test


• What It Means:
– Implement the minimum amount of code needed to
make the test pass.
– Focus is on correctness, not optimization.
Example:
– Implement:
  def add(a, b):
      return a + b
Run the test again, and now it passes.

Refactor – Improve the Code


• What It Means:
– Clean up the code (optimize, remove duplication) without changing its functionality.
– Ensure all tests still pass after improvements.
Example:
– If multiple functions have similar code, you refactor them into a shared helper function.
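As a contrived sketch of that refactor (the function names and message formats are illustrative, not from the source), the duplicated formatting logic moves into one shared helper while the existing tests keep passing:

```python
# Refactor sketch: before the refactor, log_info and log_error each built the
# "[LEVEL] message" string themselves; the duplication is extracted into a
# shared helper without changing observable behavior.

def _format_event(level: str, message: str) -> str:
    return f"[{level}] {message}"  # the shared helper introduced by the refactor

def log_info(message: str) -> str:
    return _format_event("INFO", message)

def log_error(message: str) -> str:
    return _format_event("ERROR", message)

# The tests written earlier still pass, confirming behavior is unchanged:
assert log_info("deploy started") == "[INFO] deploy started"
assert log_error("deploy failed") == "[ERROR] deploy failed"
print("refactor kept all tests green")
```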
Benefits of TDD in DevOps

• Ensures code quality and early bug detection.

• Reduces integration issues during CI/CD.

• Improves confidence in deployments and faster releases.

• Supports shift-left testing – testing early in the development cycle.


Application Deployment
• Application deployment is the process of planning, maintaining and executing on
the delivery of a software release.

• Application deployment in DevOps is the process of releasing software from


development into production environments using automated pipelines,
ensuring speed, reliability, and consistency.

• Application deployment is a critical aspect to engineering quality software.

Key Stages of Application Deployment in DevOps


• 1. Build phase:
• 2. Test phase
• 3. Deploy phase(CD)
• 4. Monitor & Feedback Phase
Flow Diagram
Build Phase
• Source code is committed to a Version Control System (VCS) (e.g., GitHub, GitLab).
• The Continuous Integration (CI) process compiles code, runs unit tests, and packages the
application into deployable artifacts (e.g., Docker images, JAR/WAR files).
Example: Developer pushes code to GitHub → Jenkins triggers a build → Docker image is created
and stored in DockerHub.

Test Phase
• The CI pipeline runs automated tests (unit, integration, and security tests) to validate the
application.
Example: Jenkins runs Selenium tests on a staging environment to verify UI functionality.

Deploy Phase (Continuous Deployment - CD)


• The application is automatically deployed to staging or production environments using
deployment tools.
• Tools like Kubernetes, Ansible, Terraform, Jenkins, ArgoCD ensure smooth and repeatable
deployments.

Monitor & Feedback Phase


• After deployment, monitoring tools (e.g., Prometheus, Grafana, ELK Stack) track application
health.
• Issues are logged and fed back into the DevOps cycle for improvements.
Example: After deployment, Grafana shows increased response times → Issue sent to backlog for investigation.
Infrastructure Concepts
• All computer software runs on infrastructure of some sort, whether that be
hardware that an organization owns and manages itself, leased equipment that is
managed and maintained by someone else, or on-demand compute resources that
can easily scale up or down as needed. These concepts, once solely the realm of
operations engineers, are important for anyone involved with a software product
to understand in environments where the lines between development and
operations are starting to blur.
• Configuration Management (CM)
In DevOps, Configuration Management (CM) is the practice of
systematically managing and maintaining the state of infrastructure, systems, and
software so they remain consistent, reliable, and predictable across all
environments (development, testing, staging, production).
• cloud computing
In DevOps, cloud computing refers to using on-demand computing
resources (servers, storage, databases, networking, and services) delivered over
the internet to support software development, testing, deployment, and
operations.
• Infrastructure automation: Infrastructure automation is a way of creating systems that
reduces the burden on people to manage the systems and their associated services, as
well as increasing the quality, accuracy, and precision of a service to its consumers.
Indeed, automation in general is a way to cut down on repetitious work in order to
minimize mistakes and save time and energy for human operators.
• For example, instead of running the same shell commands by hand on every server in an
organization’s infrastructure, a system administrator might put those commands into a
shell script that can be executed by itself in one step rather than many smaller ones.
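The same idea can be sketched in Python (the hostnames and the command are placeholders): one function applies a command to every server, so the operator runs one step instead of many.

```python
# Automation sketch: apply one maintenance command to every server in a list
# instead of typing it by hand on each machine. Hostnames and the command are
# hypothetical; a real script would execute over SSH (e.g., with subprocess
# or a tool like Ansible) rather than just formatting a result string.

def run_everywhere(hosts, command):
    """Pretend to run `command` on each host, returning per-host results."""
    results = {}
    for host in hosts:
        # Placeholder for something like: subprocess.run(["ssh", host, command])
        results[host] = f"{host}: ran '{command}'"
    return results

servers = ["web-01", "web-02", "db-01"]
for line in run_everywhere(servers, "apt-get update").values():
    print(line)
```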

• Artifact management: An artifact is the output of any step in the software development
process. Depending on the language, artifacts can be a number of things, including JARs
(Java archive files), WARs (web application archive files), libraries, assets, and
applications. Artifact management can be as simple as a web server with access controls
that allow file management internal to your environment, or it can be a more complex
managed service with a variety of extended features. Much like early version control for
source code, artifact management has grown from ad hoc file storage into purpose-built tooling.

• Containers: One of the bigger pain points that has traditionally existed between
development and operations teams is how to make changes rapidly enough to support
effective development but without risking the stability of the production environment
and infrastructure. A relatively new technology that helps alleviate some of this friction
is the idea of software containers—isolated structures that can be developed and
deployed relatively independently from the underlying operating system or hardware.

Cultural Concepts
The final concepts we define in this chapter are cultural ones. While some software development
methodologies, such as Agile, define ways in which people interact while developing software, the
practices below focus on how teams reflect on and learn from their work.
1. Retrospective: A retrospective is a discussion of a project that takes place after it has been completed,
where topics such as what went well and what could be improved in future projects are considered.
Retrospectives usually take place on a regular (if not necessarily frequent) basis, either after fixed periods of
time have elapsed (every quarter, for example) or at the end of projects. A big goal is local learning—that is,
how the successes and failures of this project can be applied to similar projects in the future. Retrospective
styles may vary, but usually include topics of discussion such as: 1. What Happened? 2. What went well? 3.
What went poorly?
2. Postmortem
Unlike the planned, regular nature of a retrospective, a postmortem occurs after an unplanned incident or
outage, for cases where an event’s outcome was surprising to those involved and at least one failure of the
system or organization was revealed. Whereas retrospectives occur at the end of projects and are planned in
advance, postmortems are unexpected before the event they are discussing. Here the goal is organizational
learning, and there are benefits to taking a systemic and consistent approach to the postmortem by including
topics such as:
What happened?
A timeline of the incident from start to finish, often including communication or system error
logs.
Debrief
Every person involved in the incident gives their perspective on the incident, including their
thinking during the events.
Remediation items
3. Blamelessness

Blamelessness is a concept that arose in contrast to the idea of blame culture. Though it had been discussed
for years previously by Sidney Dekker and others, this idea really came to prominence with John Allspaw’s
post on blameless postmortems, with the idea that incident retrospectives would be more effective if they
focused on learning rather than punishment.

4. Organizational learning:

Organizational learning as a goal is part of collecting, growing, and sharing an organizational body of
knowledge.
Organizations that make their learning more deliberate set specific goals and take actionable steps to
increase their collective learning.
Chapter II

Pillars of Effective DevOps


• The 4 Pillars of Effective DevOps

1. Collaboration

2. Affinity

3. Tools

4. Scaling
1. Collaboration

Definition:
• As a pillar of DevOps, collaboration refers to the intentional processes and common purpose of
individuals. Examples of collaboration in action include:
• Asynchronous code review
• Documentation
• Updating issues and bug reports
• Demonstrating weekly progress
• Regular status updates
• Pairing
Example:
• Developers and operations engineers co-design monitoring and deployment strategies.
• During an incident, all relevant team members (Dev, Ops, QA) are involved in resolving the issue
together — not blaming one group.
In January 2015, Anita Woolley and colleagues published findings from their analysis of teams in an article
for the New York Times called “Why Some Teams Are Smarter Than Others”. Woolley’s smarter teams
outperformed other teams based on the following characteristics:
• Communication
• Equal participation
• Theory of Mind
In other words, effective collaboration includes communication, equal participation, and Theory of Mind
(ToM). ToM is the ability to recognize one’s own perspective, and that others have a distinct and different
perspective born from their own context. Examining how individuals are different, and exploring how these
differences impact collaboration, is part of putting ToM into practice.
Mentorship

A successful formal mentorship program educates mentors and mentees to their
respective purposes, roles, and obligations. A healthy mentorship flows both ways,
allowing for each participant in the relationship to grow and learn. Understanding
this relationship can help you become a mentor even if you’ve never had one
yourself.

• Senior-to-Junior Mentoring

• Senior-to-Senior Mentoring

• Junior-to-Senior Mentoring

• Junior-to-Junior Mentoring
Introducing Mindsets
• Introducing Mindsets in DevOps refers to the process of understanding and
shaping the beliefs, attitudes, and ways of thinking that influence how individuals
and teams approach DevOps practices.
• It highlights that DevOps success is not just about tools and processes but about
adopting a cultural shift—where collaboration, shared responsibility, learning, and
adaptability are at the core.

1. Cultivating the Right Mindset: Encourage collaboration, learning, and shared

ownership.

2. Fixed Mindset: Stuck in old ways, avoids change, blames others.

3. Growth Mindset: Embraces change, learns from failure, seeks improvement.

4. Focus on individual growth: Combination of six strategies.


Cultivating the Right Mindset in DevOps

• Cultivating the right mindset means encouraging attitudes and behaviors that
support continuous delivery, learning, and improvement.
• Key traits of the right DevOps mindset:
• Collaboration over silos: Developers, operations, QA, and business teams
work together.
• Automation as a habit: Reduce manual processes to improve speed and
reliability.
• Continuous feedback: Learn quickly from failures and successes.
• Customer-focused outcomes: Deliver value to users, not just complete tasks.
• Adaptability: Embrace change rather than resist it.

• Example:
A team adopts a culture of “you build it, you run it”, where developers own
the full lifecycle of their code. When issues arise in production, the team uses
blameless post-mortems to learn and improve rather than pointing fingers.
Fixed Mindset in DevOps

• A fixed mindset assumes abilities and outcomes are static:


• People believe “things are how they are” and cannot change much.
• Failures are seen as personal shortcomings rather than learning opportunities.
• There’s resistance to new processes, tools, or cultural shifts.

DevOps Fixed Mindset Examples:


• Siloed thinking: “Deployment issues are ops’ problem, not ours.”
• Fear of change: “We’ve always done releases manually; automation will just
cause problems.”
• Blame culture: “The pipeline failed because QA didn’t test enough.”
• Impact: Fixed mindsets lead to slower releases, more errors, low morale, and
inability to fully realize DevOps benefits.
Growth Mindset in DevOps

• A growth mindset assumes skills and outcomes can improve with effort, learning,
and adaptation:
• Failures are opportunities to learn.
• Collaboration and feedback drive continuous improvement.
• Innovation is welcomed.

DevOps Growth Mindset Examples:


• Continuous improvement: “The last deployment failed—let’s analyze the root
cause and improve our pipeline.”
• Experimentation: “Can we try infrastructure as code to reduce errors?”
• Shared responsibility: “We all own quality and uptime; let’s work together to fix
this.”
• Impact:
Growth mindset leads to faster delivery cycles, higher-quality software, and a
stronger DevOps culture.
Focus on individual growth
The following six strategies will help individuals prepare for future challenges and
develop more flexibility in changing times:
• 1. Learn the fundamentals

• 2. Develop your niche

• 3. Recognize your strengths and progress

• 4. Ensure deliberate, quality practice

• 5. Develop your working style

• 6. Enhance your team style


Learn the fundamentals
Learn the Fundamentals:
i. Even experienced professionals should revisit and understand the core skills of their role.

ii. Many skip this step due to overconfidence or assuming experience is enough.

iii. Fundamentals include not only technical skills but also understanding team culture and environment.

iv. Continually revisiting basics ensures long-term adaptability and readiness.

Example: Imagine a DevOps engineer joining a new company:


• They have 10 years of experience but never worked with Kubernetes.
• Instead of assuming their past skills are enough, they take time to learn container orchestration
basics—pods, deployments, services.
• They also learn the team's workflow and cultural practices (e.g., how incident management is
handled).
• This approach helps them quickly contribute, troubleshoot effectively, and adapt to new challenges.
• In contrast, someone with a fixed mindset might say, “I already know DevOps, I don’t need to learn
this.” This could lead to mistakes, misunderstandings, and slower integration into the team.
Developing Your Niche

1. Find a New Area to Excel In:


Look beyond your current skills and find something you don’t already know.
Repeating the same tasks year after year limits growth and reduces learning ability

2. Gain Skills Outside Your Immediate Role:


The new skill doesn’t have to be directly related to your current tasks—it could be
in an adjacent area within the team or organization.
Gaining this knowledge strengthens overall competence and adaptability.

3. Benefit:
Learning something new adds depth to your expertise, helps you handle your main
responsibilities better, and prepares you for future challenges
Real-Time Example (DevOps Context):

A DevOps engineer primarily manages CI/CD pipelines.

To develop a niche:

• They notice the team struggles with cloud cost optimization.

• They start learning about FinOps (Cloud Financial Operations), including


tools like AWS Cost Explorer and Kubernetes cost monitoring.

• Over time, they become the go-to person for optimizing cloud costs, adding
significant value beyond their original role.

• This makes them more versatile, boosts their career prospects, and benefits the
team by reducing costs.
Recognize your strengths and progress
Self-Recognition of Progress
• You should know when you've done something well and when you're ready to move to the next
challenge.
• External feedback (appreciation, coaching, evaluations) is useful but not always reliable because most
systems focus on external ratings rather than individual needs.
Self-Assessment Over External Validation
• Relying solely on external systems can limit your growth.
• Being able to honestly evaluate yourself allows you to steer your career in the direction you want,
regardless of external judgments.
Context Shapes Growth
• Quoting José Ortega y Gasset: “I am I and my circumstance.” This means your growth depends not
only on who you are but also on the environment you operate in.
Productivity Variability
• Research from the 1960s shows that some engineers are much more productive than others—sometimes
ten times more productive.
• However, being "10x productive" might indicate that the person is under-challenged in their role.
Balanced Growth
• While expertise in a single topic is good, it should not come at the cost of continuous learning,
resilience, and flexibility.
• The Ortega Hypothesis suggests that average contributors are also critical, meaning that collective progress depends on many ordinary contributions, not just a few stars.
Example

• Imagine Sarah, a software engineer:

• She receives praise from her manager for delivering projects quickly (external
validation).

• Instead of just relying on praise, Sarah sets her own benchmarks: learning a new
programming language every six months and improving her debugging speed.

• She notices she’s outperforming her peers significantly. Rather than becoming
complacent, she requests more challenging tasks (e.g., leading a new AI project).

• Over time, Sarah’s ability to self-assess keeps her engaged, growing, and
contributing meaningfully to her company—proving that both high performers
and consistent contributors drive success.
Ensure deliberate, quality practice
• Ensure deliberate and quality practice means focusing on intentional, goal-
oriented, and high-quality efforts to improve skills, processes, or
deliverables instead of practicing mindlessly. It emphasizes learning with
purpose and improving through feedback.

Example
• Software Development (Agile/DevOps Context)
• Deliberate Practice:
A developer intentionally practices writing unit tests daily to improve code
reliability.
• Quality Practice:
Each test is peer-reviewed and run through CI pipelines to ensure it meets
coding standards and functional accuracy.
Develop your working style
• Developing your working style means shaping the way you approach, organize,
and execute work so that it aligns with your strengths, team goals, and project
needs. It’s about finding the best balance between productivity, collaboration, and
adaptability.
• Example (Agile/DevOps Context)
• Person A’s Working Style:
– Likes to start the day with a standup meeting.
– Works best with Kanban board tracking in Jira.
– Does deep work coding in the morning and prefers async code reviews in the
afternoon.
– Shares weekly progress updates with stakeholders.
• Person B’s Working Style:
– Prefers fewer meetings and communicates mainly via Slack.
– Uses time-blocking to focus on testing automation scripts.
– Enjoys pair programming sessions when tackling complex bugs.
– Writes detailed documentation after completing a task.
Enhancing the team style
• Enhancing the team style means improving how a team works together
—how they communicate, collaborate, make decisions, and deliver results.
Instead of focusing only on individual working styles, this approach
emphasizes team cohesion, adaptability, and synergy so the group
performs better collectively.
Mindsets and learning Organizations
Mindset + Learning Organization Together

• Mindset = Individual/team attitude toward growth.


• Learning Organization = Company culture/system that enables those
mindsets to thrive.

• Example in Action:
A bank modernizing its systems:
• Developers adopt a growth mindset by learning Kubernetes even though
they only knew traditional servers.
• The bank supports this by being a learning organization (funding
certifications, running internal workshops).
The Role of Feedback
• Research has found that the nature of the feedback people receive is a key
factor in the mindset they develop.

• People with a fixed mindset tend to focus on feedback that judges their ability,

• whereas those with a growth mindset are very attentive to feedback that will
help them improve.

• Feedback = fuel for learning, improvement, and trust. Without feedback


loops, teams stagnate and mistakes repeat.
Reviews and Rankings
Two goals of feedback: a. Performance review
b. Value offered to the organization

• Frequency of Feedback → Regular reviews encourage improvement.

• Ranking System → Traditional rankings (like stack ranking at Microsoft) can hurt
collaboration.

• Problems with Rock Stars and Superflocks → Over-reliance on “superstars” demotivates
others.

• Value of Social Capital → Trust and networks within teams matter as much as technical skills.
Frequency of Feedback
• Annual Reviews: Too infrequent, feedback often delayed and less useful.

• Regular Feedback: Small, continuous cycles allow quicker corrections.

• Agile Approach: Encourages immediate learning and reduces stress.

• Example: QA engineer fixes bugs faster when feedback given during sprints, not

yearly.
Ranking Systems
• Stack Ranking: GE's Jack Welch system → Top 20% rewarded, bottom 10% fired,
70% average.
• Problems: Creates unhealthy competition and reduces collaboration.
• Modern Approach: Focus on collaboration and shared team goals instead of forced
ranking.
• Example: Microsoft dropped stack ranking → improved teamwork and innovation.
Conflict
2. Affinity

Definition:
• Affinity is about building empathy and strong relationships between individuals
and teams. It means caring about others’ work, challenges, and successes.

Why It Matters:
• Without affinity, collaboration falls apart. Trust and understanding are what make
cross-functional teams effective.

Example:
• Developers shadow operations during on-call rotations to understand the impact of
their code on infrastructure.
• Teams hold blameless retrospectives after incidents to learn together, not assign
fault.
Benefits of improved affinity
• There are multitude of benefits to improving affinity between teams at
individual, team, and organizational levels. Increased empathy and
communication helps people feel that their voices are heard and their
concerns appreciated, which creates a cycle of improved morale and
productivity.
• Shortened Cycle Time
• Reduction of communication barriers
• Trust
• Innovation
Requirements for Affinity
• Slack
• Explicit values and goals
• Space
• Collaboration and cooperation
Measuring Affinity
• Employee skills and evaluation
• Interteam interaction
• Giving back to the community
3. Tools

Definition:
• Tools refer to the automation, platforms, and technologies that enable fast, reliable, and
scalable software delivery.

Why It Matters:
• Tools support collaboration and scalability but don't replace the human or cultural aspect.
The goal is to reduce friction and enable self-service.

Examples:
• CI/CD pipelines automate testing and deployment (e.g., GitHub Actions, Jenkins).
• Infrastructure as Code tools like Terraform or Ansible ensure repeatable and versioned
infra setup.
• Monitoring tools (e.g., Prometheus, Grafana) provide visibility for everyone.
Caveat:
• Using tools without fostering collaboration and affinity will not create effective DevOps.
Tools Ecosystem Overview

• Version Control → Git, GitHub, GitLab, Bitbucket


• CI/CD → Jenkins, GitLab CI, CircleCI, ArgoCD
• Configuration Management → Ansible, Puppet, Chef
• Containerization & Orchestration → Docker,
Kubernetes, OpenShift
• Monitoring & Logging → Prometheus, Grafana,
ELK Stack
• Collaboration → Slack, MS Teams, Jira
Software Development

• DevOps bridges the gap between development & operations
• Faster coding, builds, and automated testing
• Shift-left approach → testing & security early
in lifecycle
• Continuous integration and delivery (CI/CD)
Automation

• Reduces manual effort, improves speed & minimizes errors
• Areas: Build & Deployment, Infrastructure as
Code, Testing, Monitoring
• Benefits: repeatability, scalability, faster
feedback cycles
Monitoring

• Ensures system reliability, scalability, and performance
• Infrastructure monitoring → Prometheus, Nagios
• Application monitoring → New Relic,
AppDynamics
• Logging & tracing → ELK Stack, Jaeger, Zipkin
• Helps reduce MTTR (Mean Time to Repair)
Right Tools for Real Problems

• Choose tools to solve actual problems, not for popularity
• Consider scalability, integration, ease of use,
cost
• Example: If deployments are slow → use
CI/CD, not monitoring tool
Embracing Open Source

• Dominates DevOps (Docker, Kubernetes, Jenkins)
• Benefits: cost-effective, innovation,
transparency, flexibility
• Risks: need strong governance for updates &
security
Standardization of Tools

• Prevents tool sprawl & encourages consistency


• Benefits: easier collaboration, reduced
training, streamlined support
• Example: Standardizing on GitLab CI for all
projects
Consistent Processes for Tool Analysis

• Systematic evaluation before adoption


• Criteria: problem addressed, technical fit,
security, cost
• Use Proof of Concept (PoC) before adoption
Exception to Standardization

• Needed for project-specific requirements or emerging tech
• Legacy systems may require unique tools
• Exceptions should be documented & approved
Selection and Elimination of Tools

• Selection: Identify → Evaluate → PoC → Standardize
• Elimination: remove tools with overlap, low
adoption, poor support
• Goal: maintain lean & effective toolchain
Mind Map - DevOps Tools Ecosystem
4. Scaling

Definition:
• Scale is about sustaining DevOps practices as the organization grows,
ensuring consistency, autonomy, and quality across multiple teams.

Why It Matters:
• What works for 1 team may break for 10 or 100 teams. DevOps practices
need to be adaptable, repeatable, and resilient at scale.

Example:
• Building internal platforms that provide self-service infrastructure for teams.
• Establishing organizational patterns (like SREs, platform teams) to support
team autonomy while ensuring standards.
Understanding Scaling

• Scaling is about adapting people, processes, and technology as organizations grow.
• Horizontal Scaling → Distributing workloads across
multiple teams/systems.
• Vertical Scaling → Enhancing capabilities of existing
teams/systems.
• Goal: Maintain agility and speed while expanding.
• Example: A startup moving from one monolithic
service to microservices with multiple teams.
Considering Enterprise DevOps

• Enterprise DevOps focuses on large organizations with multiple teams and stakeholders.
• Requires strong governance, compliance, and unified
toolchain strategies.
• Cross-team collaboration is essential for avoiding
silos.
• Automation at scale is critical (CI/CD pipelines, IaC).
• Example: A global bank standardizing CI/CD
pipelines across 50+ teams.
Scaling for Teams

• Teams are the core unit of DevOps scaling.


• Challenges: Communication, avoiding silos,
maintaining consistency.
• Best practice: Small, autonomous teams ('two-pizza
rule').
• Clear ownership and accountability of services.
• Example: A product team responsible for an
independent microservice from dev → deploy →
monitor.
Growing Your Teams: Hiring as Scaling

• Hiring is a scaling strategy, not just filling vacancies.


• Focus on T-shaped professionals (deep expertise +
broad knowledge).
• Balance between experienced hires and juniors.
• Onboarding is critical to align new hires with
DevOps culture.
• Example: Hiring DevOps engineers with cloud
knowledge + cross-functional collaboration skills.
Retention

• Retention ensures stability and avoids costly turnover.


• Strategies: Career growth, recognition, healthy work-
life balance.
• Psychological safety encourages innovation &
collaboration.
• Recognition and rewards help boost morale.
• Example: Providing training budgets and certification
support to retain DevOps talent.
Scaling Approaches Diagram
Team Scaling Flow
Chapter III

Comparison of two common software architecture approaches:
Monolithic vs. Microservices Architecture
What is Monolithic Architecture?
• Entire program is built as a single, indivisible
unit
• Any update requires modifying & redeploying
the whole application
• Simple and easier for small applications
• Becomes complex and hard to maintain as it
grows
Flow diagram
The image represents a Monolithic Architecture for an
E-commerce application.
1. Clients
Multiple users (clients) are sending requests to the application.
Requests pass through a Load Balancer, which distributes traffic across servers (helps
handle many users simultaneously).

2. Monolithic E-commerce Application


i. Inside the monolith, all features and services are packaged together in a single
codebase and deployed as one unit.
ii. Shop UI – the front-end user interface (e.g., browsing products).
iii. Catalog Service – manages product listings.
iv. SC Service (Shopping Cart Service) – handles shopping cart operations.
v. Discount Service – applies discounts, coupons, and offers.
vi. Order Service – manages placing and tracking of orders.

All these services are tightly coupled, meaning if one part changes, the whole app
must be rebuilt and redeployed.
3. Database
• A single RDBMS (Relational Database Management System) is shared
by all services.
• Example: MySQL or PostgreSQL storing products, users, orders,
discounts, etc.
Importance of Monolithic Systems

• Simplicity: offers straightforward development and deployment processes.

• Cost effectiveness

• Performance: Because everything is operating within the same process, calls between components avoid network overhead.

• Security: fewer inter-service communication points

• Legacy Support
Characteristics of Monolithic Architecture

• Monolithic architecture exhibits several defining characteristics:

• Single Codebase: The program is simpler to manage and implement since all of its
components are created and maintained in a single codebase.
• Tight Coupling: The architecture's components are closely linked, rely on one another,
and frequently exchange resources and data directly.
• Shared Memory: Monolithic applications typically share the same memory space,
allowing components to communicate efficiently without the need for network overhead.
• Centralized Database: Data storage is centralized within the application, typically
using a single database instance for all data storage needs.
• Layered Structure: The structure of monolithic systems is frequently layered, with
separate layers for data access, business logic, and presentation. This might result in
dependencies across layers even while it separates issues.
• Limited Scalability: Because the entire application must be scaled at once, scaling a
monolithic application can be difficult and frequently leads to inefficiencies and higher
resource usage.
Challenges in deploying Monolithic Architecture
• Monolithic architecture deployment presents a number of difficulties, such as:
• Long Deployment Cycles:
– When a monolithic application is deployed, the complete codebase is usually deployed as a single unit.
– Because every component of the program must be packaged, tested, and deployed simultaneously, this may
lead to longer deployment times.
• Risk of Downtime:
– Deploying a monolithic application may require taking the entire system offline temporarily, especially if
the deployment involves making significant changes or updates.
– This downtime can impact user experience and business operations.
• Limited Scalability:
– Because scaling typically involves replicating the complete application stack, scaling a monolithic
application can be difficult.
– This can result in inefficiencies and higher infrastructure expenses, especially when demand is high.
• Resource Consumption:
– Compared to more lightweight architectures like microservices, monolithic programs may use more
memory and CPU.
– This may result in decreased overall efficiency and increased infrastructure expenses.
• Limited Flexibility:
– Compared to architectures with separated components, it can be more difficult to make modifications to a
monolithic application.
– Modifications may require altering several areas of the codebase, which raises the possibility of introducing new bugs.
What is Microservices Architecture?
• Application is designed as a set of small, independent services
• Each service handles a single functionality
• Services communicate using lightweight protocols (HTTP, queues)
• Independent development, deployment, and scaling possible
Diagrammatic representation
1. Client Apps
• Two types of clients are shown: Web and Mobile applications.
• Both communicate with the backend through an API Gateway.
2. API Gateway
• Acts as a single entry point for all client requests.
• Responsibilities:
– Routing requests to the correct microservice.
– Handling authentication and load balancing.
– Simplifying the client experience (clients don’t need to know the internal service structure).
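The gateway's routing responsibility can be sketched as a lookup from request path to backend service. This is a minimal illustration only; the service names, ports, and paths below are hypothetical, and a real gateway (e.g., NGINX, Kong) would also handle authentication, load balancing, and retries.

```python
# Minimal sketch of an API gateway's routing table (hypothetical services).
ROUTES = {
    "/catalog": "http://catalog-service:8001",
    "/cart": "http://cart-service:8002",
    "/discount": "http://discount-service:8003",
    "/orders": "http://ordering-service:8004",
}

def route(path: str) -> str:
    """Return the full backend URL that an incoming request path is forwarded to."""
    for prefix, backend in ROUTES.items():
        if path.startswith(prefix):
            return backend + path
    raise LookupError(f"no service registered for {path}")

print(route("/cart/items"))  # forwarded to the shopping-cart service
```

Because clients only ever see the gateway, the internal service layout (hosts, ports, service splits) can change without breaking them.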
3. Microservices
The application is split into independent services, each focusing on a single business capability:
• Catalog Service → Manages product listings.
• Shopping Cart Service → Handles cart items and session data.
• Discount Service → Manages coupons, offers, and discounts.
• Ordering Service → Handles order placement, tracking, and processing.
4. Independent Databases
Unlike monolithic systems (which share a single database), each microservice has its own database.
Benefits:
• Services can use the best database technology for their needs (SQL, NoSQL, etc.).
• Avoids dependency conflicts.
• Increases fault isolation (if one DB crashes, others continue to work).
5. Message Broker
Provides asynchronous communication between services (e.g., RabbitMQ, Kafka).
Example:
• When an order is placed, the Ordering Service can publish an event.
• The Discount or Notification service can subscribe and act on it without directly depending on the Ordering Service.
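The publish/subscribe flow can be sketched with an in-memory stand-in for the broker. A real RabbitMQ or Kafka deployment adds durability and delivery guarantees; the topic and handler names here are illustrative only.

```python
from collections import defaultdict

# In-memory stand-in for a message broker: maps topic -> list of handlers.
subscribers = defaultdict(list)

def subscribe(topic, handler):
    """Register a handler to be called for every event on a topic."""
    subscribers[topic].append(handler)

def publish(topic, event):
    """Deliver an event to all handlers subscribed to the topic."""
    for handler in subscribers[topic]:
        handler(event)

received = []
# The Discount service reacts to order events without any direct
# dependency on the Ordering service.
subscribe("order.placed", lambda e: received.append(f"check coupons for order {e['order_id']}"))

# The Ordering service publishes an event after placing an order.
publish("order.placed", {"order_id": 42})
print(received)
```

The key property is that the publisher never imports or calls the subscriber: new services can react to "order.placed" later without touching the Ordering Service at all.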
Differences between Monolithic and Microservices
Aspect          | Monolithic Architecture           | Microservices Architecture
Architecture    | Single-tier architecture          | Multi-tier architecture
Size            | Large, tightly coupled components | Small, loosely coupled components
Deployment      | Deployed as one unit              | Independent service deployment
Scalability     | Horizontal scaling is hard        | Easier horizontal scaling
Development     | Simpler initially                 | Complex due to multiple services
Technology      | Limited choices                   | Best tech per service
Fault Tolerance | Entire app may fail               | Failure isolated to services
Maintenance     | Simple but grows harder           | More effort to manage services
Flexibility     | Less flexible                     | More flexible, independent scaling
Communication   | Faster internal calls             | Slower due to network calls
Monolithic Architecture:
• Small to medium-sized apps
• When team size is small
• Faster initial development
Microservices Architecture:
• Large, complex applications
• When scalability and flexibility are required
• Large teams with distributed responsibilities
DevOps influence on architecture
The architecture of a system describes its major components, their relationships (structure), and how they interact with each other. Architecture serves as a blueprint for a system. DevOps affects the architecture of our applications: DevOps teams seek productivity through automation, and they also seek architectural agility in production environments. This need for agility drove the move from monolithic architecture to microservice architecture.
Architecture rules of thumb
There are a number of architecture rules that might help us understand how to deal with the traditional
undesirable situation.
1. The separation of concerns:
The fundamental principle is simply that we should consider different aspects of a system separately. In an e-commerce app:
UI = The website where customers browse products.
Business Logic = The system that calculates prices, taxes, discounts, and validates stock availability.
Data Access = The database that stores user details, products, and order history.
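A minimal sketch of those three concerns as separate Python functions (product data, prices, and the tax rate are made up for illustration); because each concern lives in its own function, each layer can be changed or tested independently:

```python
# Data-access layer: in-memory stand-in for the database.
PRODUCTS = {"book": 10.0}

def get_price(sku):
    """Data access: look up a product's unit price."""
    return PRODUCTS[sku]

def total_with_tax(sku, qty, tax_rate=0.08):
    """Business logic: compute the order total including tax."""
    return round(get_price(sku) * qty * (1 + tax_rate), 2)

def render(sku, qty):
    """Presentation: format the result for display to the customer."""
    return f"Total for {qty} x {sku}: ${total_with_tax(sku, qty)}"

print(render("book", 2))
```

Swapping the dictionary for a real database, or the f-string for an HTML template, would not require touching the tax calculation.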
2. The principle of cohesion: In computer science, cohesion refers to the degree to which the elements of a software module belong together. Cohesion can be used as a measure of how strongly related the functions in a module are. It is desirable to have strong cohesion in a module. Strong cohesion is another aspect of the principle of the separation of concerns.
Real-world analogy (high cohesion): A security guard who only checks ID cards at the entrance.
Real-world analogy (low cohesion): A single person in a company who is the receptionist, accountant, IT admin, and HR: overloaded and inefficient.
3. Coupling: Coupling refers to the degree of dependency between two modules. We always want low coupling between modules. Again, coupling can be seen as another aspect of the principle of the separation of concerns.
• High coupling: Imagine two friends who share a single phone. If one takes the phone, the other can’t communicate.
• Low coupling: Each friend has their own phone, but they can still call each other: independent but connected.
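Low coupling can be sketched in code with dependency injection: the service below depends only on a `send` callable, not on any concrete email or SMS module. All names here are illustrative, not from any particular framework.

```python
class OrderService:
    """Places orders and notifies the customer via an injected channel."""

    def __init__(self, send):
        # The notification channel is injected, so OrderService has no
        # dependency on any concrete email/SMS implementation -> low coupling.
        self._send = send

    def place_order(self, order_id):
        # ... persist the order here ...
        self._send(f"order {order_id} confirmed")

messages = []
# Swap in an email, SMS, or test double without editing OrderService.
svc = OrderService(send=messages.append)
svc.place_order(7)
print(messages)
```

The test double (`messages.append`) and a real email sender are interchangeable, which is exactly what low coupling buys: modules can evolve or be replaced independently.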
Fig: Three-tier software architecture pattern
The presentation tier: The presentation tier will be a web frontend
implemented using the React web framework. It will be deployed as a set of
JavaScript and static HTML files.
The logic tier: The logic tier is a backend implemented using the Clojure
language on the Java platform. The Java platform is very common in large
organizations, while smaller organizations might prefer other platforms
based on Ruby or Python.
The data tier: In our case, the database is implemented with the
PostgreSQL database system. PostgreSQL is a relational database
management system. While arguably not as common as MySQL
installations, larger enterprises might prefer Oracle databases.
From a DevOps point of view, the three-tier pattern looks compelling, at least
superficially. It should be possible to deploy changes to each of the three layers
separately, which would make it simple to propagate small changes through the
servers.
Example: Architecture Rules of Thumb for
DevOps
• Optimize for Change
• Rule of Thumb: “Expect and embrace change in requirements, technology, and
scale.”
• Notes:
– Architect systems with loose coupling → services/modules are independent.
– Use APIs and microservices so changes can be made without impacting the whole
system.
– Build for incremental updates → small, frequent releases rather than big-bang
deployments.
– Favor cloud-native and containerized approaches for flexibility.
• Impact in DevOps:
– Faster delivery cycles.
– Easier rollbacks and reduced risk during change.
– Supports continuous improvement culture.
• Example: Adding a new recommendation engine to an e-commerce app without
touching order or payment services.
Design for Operations, Not Just for Development
• Rule of Thumb: “If it’s not operable, it’s not done.”
• Notes:
– Architecture must account for production realities like monitoring, scaling, and security.
– Integrate observability (logs, metrics, tracing) into system design.
– Ensure resilience and recovery → redundancy, failover mechanisms, self-healing services.
– Build security and compliance into the design, not as an afterthought.
• Impact in DevOps:
– Reduces MTTR (Mean Time to Repair).
– Improves reliability and customer experience.
– Ensures smoother handoffs between Dev and Ops.
• Example: Designing APIs with rate limiting and logging for production monitoring, not just for functionality.
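As an illustration of designing that operability in, here is a minimal token-bucket rate limiter of the kind such an API might use. This is a sketch under simplifying assumptions: a production API would typically keep the bucket state in a shared store such as Redis, and the rate and capacity values below are arbitrary.

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows short bursts, enforces an average rate."""

    def __init__(self, rate, capacity):
        self.rate = rate              # tokens replenished per second
        self.capacity = capacity      # maximum burst size
        self.tokens = capacity        # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        """Return True if a request may proceed, consuming one token."""
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=5, capacity=2)
results = [bucket.allow() for _ in range(4)]
print(results)  # the first two requests pass; the burst beyond capacity is rejected
```

Pairing each rejected request with a log line and a metric counter is what turns this from a functional feature into an operable one.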
Automate Everything Repeatable
• Rule of Thumb: “If it’s repeatable, it must be automated.”
• Notes:
– Manual, repetitive tasks lead to inconsistency and human error.
– Automate CI/CD pipelines for code builds, tests, and deployments.
– Automate infrastructure provisioning using IaC tools (Terraform, Ansible, CloudFormation).
– Use automation for testing, monitoring, scaling, and recovery.
• Impact in DevOps:
– Consistency across environments (dev, staging, prod).
– Accelerates delivery and increases team productivity.
– Enhances reliability by reducing human errors.
• Example: An automated pipeline builds code, runs tests, provisions cloud infrastructure, and deploys updates with minimal human intervention.
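The idea can be sketched as a small pipeline runner: each repeatable step becomes a scripted command, executed in order and failing fast. The commands below are placeholders; a real pipeline would invoke compilers, test runners, and IaC tools such as Terraform or Ansible.

```python
import subprocess
import sys

# Placeholder pipeline steps; each would be a real build/test/deploy command.
STEPS = [
    ("build",  [sys.executable, "-c", "print('compiling...')"]),
    ("test",   [sys.executable, "-c", "print('running tests...')"]),
    ("deploy", [sys.executable, "-c", "print('deploying...')"]),
]

def run_pipeline(steps):
    """Run each step's command in order; stop at the first failure."""
    completed = []
    for name, cmd in steps:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            raise RuntimeError(f"step '{name}' failed: {result.stderr}")
        print(f"[ok] {name}: {result.stdout.strip()}")
        completed.append(name)
    return completed

run_pipeline(STEPS)
```

Because every step is scripted, the same sequence runs identically in dev, staging, and prod, which is where the consistency and reliability benefits listed above come from.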