DevOps and SRE Training Curriculum

The document outlines a 9-week DevOps training program covering topics such as source code management, continuous integration/delivery, containerization, cloud services, automation, infrastructure as code, and monitoring/logging. Major projects involve deploying applications on Amazon ECS and building out EKS Kubernetes clusters on AWS.


CONTENTS

Week 0 - DevOps SRE Fundamentals

● DevOps/SRE Tools Setup on AWS
● DevOps Flow
● Red Hat Linux - Introduction and Common Commands
● Linux Folder and File Structure
● AWS Account Setup
● AWS Basics / SRE System Overview
● DevOps Projects and Real-Time Scenarios
● Git and GitHub Overview

Week 1 - Source Code Management & Networking Concepts

● Difference Between CVCS and DVCS
● Importance of Git
● Installation of Git
● Git Three-Stage Architecture
● Detailed Explanation of Repository, Commit, Tags, Snapshots, Push-Pull Mechanism, and Branching Strategy
● Working with git stash and git stash pop
● Resolving Merge Conflicts in Git
● Git Revert and Reset (Reset vs. Revert)
● Git Rebase
● Working with git squash
● Git Cherry-Pick
● What Is a Git Fork?
● Git Integration in VS Code; Git Authentication with GitHub via SSH and HTTPS
● GitHub Introduction, Creating Repositories, PRs
● Networking Concepts in Detail
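The stash workflow listed above can be walked through end-to-end in a throwaway repository. This sketch assumes git is installed; the file names, commit messages, and user identity are illustrative only:

```shell
# Sketch of the git stash / git stash pop workflow in a throwaway repo.
# Assumes git is installed; names and messages are illustrative.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "v1" > app.txt
git add app.txt
git commit -qm "initial commit"
echo "work in progress" >> app.txt   # an uncommitted change
git stash -q                         # shelve it; the working tree is clean again
echo "after stash: $(cat app.txt)"   # prints: after stash: v1
git stash pop -q                     # restore the shelved change
tail -n 1 app.txt                    # prints: work in progress
```

The same throwaway-repo pattern is a safe way to rehearse rebase, cherry-pick, and merge-conflict resolution before touching a shared repository.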
Week 2 - Continuous Integration and Continuous Delivery

● Continuous Integration/Continuous Deployment (CI/CD) Workflow Overview
● Understanding the Jenkinsfile: Exploring Jenkins Jobs, Jenkins Pipelines, and the Jenkinsfile
● Build Triggers in Jenkins: SCM Polling, Git SCM Polling, Build Periodically - Uncovering the What, Why, and How
● Groovy File Creation: Conceptualizing and Crafting Groovy Scripts for Jenkins
● Integrating GitHub with Jenkins: Establishing Webhooks for Seamless Collaboration
● Grasping Merge Request Concepts in the CI/CD Process
● Jenkins Master-Slave Configuration: Optimizing Resource Utilization in CI/CD
● Triggering Pipelines Directly from the Jenkinsfile: Streamlining Automation
● GitLab Branching Strategy: Best Practices for Code Collaboration and Version Control
● Hands-On Experience: Building a Jenkins CI Pipeline with Groovy, Incorporating Various Stages
● Configuring Docker Engine as the Jenkins Slave - Jenkins Dynamic Slave Configuration
● Jenkins Plugins - Docker, Git, Maven, and Other Commonly Used Plugins
● Integrating Kubernetes with Jenkins
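The pipeline topics above come together in a declarative Jenkinsfile. This is a minimal sketch, not the course's pipeline: the agent label, polling schedule, repository URL, and stage contents are illustrative assumptions.

```groovy
// Minimal declarative Jenkinsfile sketch.
// The agent label, schedule, and repository URL are placeholders.
pipeline {
    agent { label 'docker-slave' }          // a dynamically provisioned Docker agent
    triggers { pollSCM('H/5 * * * *') }     // SCM polling roughly every 5 minutes
    stages {
        stage('Checkout') {
            steps { git url: 'https://github.com/example/app.git', branch: 'main' }
        }
        stage('Build') {
            steps { sh 'mvn -B clean package' }
        }
        stage('Test') {
            steps { sh 'mvn -B test' }
        }
        stage('Docker Image') {
            steps { sh "docker build -t example/app:${BUILD_NUMBER} ." }
        }
    }
    post {
        always { echo "Build result: ${currentBuild.currentResult}" }
    }
}
```

In practice the polling trigger is often replaced by a GitHub webhook, so builds start on push rather than on a schedule.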

Week 3 - Package Management (Docker) Using Real-Time Scenarios & Understanding SonarQube

● Core Concepts of Docker
○ What Is Virtualization? A Primer Before Diving into Containerization
○ OS-Level Virtualization
○ Docker vs. Virtual Machines
○ What Is Docker, and Its History
○ Docker Architecture
○ Advantages and Limitations of Docker
○ Components of Docker (Docker Daemon, Docker Client, Docker Host)
○ Docker Images
○ Docker Lifecycle
○ Docker Image Archiving (tar) and Unarchiving; Docker Container States
○ Docker Networking (Create and Manage)
○ Dockerfile and the CD Flow
○ CD Tools with Docker (Integrating CD Tools like Jenkins and GitHub Actions Using Projects)
○ Docker Security Introduction
○ Docker Volumes
● SonarQube, Quality Gates, and Profiles:
○ Understanding SonarQube's Role in Code Quality Assessment
○ Implementing Quality Gates to Ensure Code Quality Standards
○ Configuring and Managing SonarQube Profiles for Code Analysis
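The Dockerfile and lifecycle bullets above can be grounded with a small example. This is a hedged sketch: the base images, paths, and port are illustrative, not prescribed by the curriculum.

```dockerfile
# Minimal multi-stage Dockerfile sketch; base images and paths are illustrative.
# Stage 1: build the application in a full toolchain image.
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Stage 2: serve only the built artifacts from a small runtime image.
FROM nginx:alpine
COPY --from=build /app/dist /usr/share/nginx/html
EXPOSE 80
```

A typical lifecycle around it: `docker build -t app:1 .` to build, `docker run -d -p 8080:80 app:1` to run, `docker save app:1 -o app.tar` to archive the image as a tar, and `docker load -i app.tar` to restore it on another host.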

Week 4 - Automation Using Shell Scripting & Red Hat Linux Administration

● Shell Scripting
○ Basics of Shell Scripting
○ Real-Time Scenarios in Shell Scripting
● Red Hat Linux Administration
○ Introduction to Red Hat Linux
○ File System Management
○ User and Group Administration
○ Package Management with Yum
○ System Services and systemd
○ Networking Configuration
○ Networking Concepts - SNAT, DNAT, IP, Netmask
○ Security and Permissions
○ System Performance Monitoring
○ Storage Management
○ Backup and Restore
○ Kernel and Module Management
○ Remote Access with SSH
○ CPU Scheduling, Job Scheduling
● Python-Based Automation Scripts
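A tiny script covering the shell-scripting basics above (a function, a loop over arguments, and POSIX arithmetic); the function name and inputs are arbitrary examples:

```shell
# Shell-scripting basics sketch: a function, a loop over its arguments,
# and POSIX arithmetic expansion. Names and inputs are arbitrary.
sum() {
  total=0
  for n in "$@"; do        # iterate over every argument passed to the function
    total=$((total + n))   # arithmetic expansion, no external tools needed
  done
  echo "$total"
}

sum 1 2 3     # prints 6
sum 10 20 30  # prints 60
```

The same building blocks (functions, loops, `$@`, exit codes) are what the real-time scenarios later in the week combine into larger automation scripts.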

Week 5 - Deep Dive Kubernetes

● Introduction to Kubernetes
○ Defining Kubernetes and its Role in Container
Orchestration
○ Exploring the Features and History of Kubernetes
● Kubernetes Architecture
○ In-Depth Analysis of Kubernetes Architecture
○ Understanding Node Components, Manifest File
Components, and Service Components
○ Overview of Node and Pod Fundamentals
○ Role of Master Node and Components of the Control Plane
○ Installing and Configuring kubectl and minikube
● Kubernetes Basics
○ Kubernetes Commands: Navigating and Interacting with
Kubernetes
○ Creation and Deletion of Pods
○ Managing Kubernetes YAML Configurations
○ Higher-Level Kubernetes Objects and Object Management
○ Labels and Selectors in Kubernetes
○ Kubernetes Networking, Services, and NodePort
○ Understanding Namespaces in Kubernetes
○ Multi-Container Pod Setup
○ Pod Design Patterns - Sidecar, Ambassador
● Working with Applications in Kubernetes
○ Installing Kubernetes on AWS
○ Deploying Microservices Applications to Kubernetes Cluster
○ ConfigMap and Secret Usage in Kubernetes
○ Exploring Volumes in Kubernetes
○ Persistent Volume and LivenessProbe in Kubernetes
○ Replication, Auto-Healing, and Deployment in Kubernetes
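Several of the objects listed above come together in a single manifest. A sketch of a Deployment with a liveness probe, where the image, names, labels, and port are illustrative placeholders:

```yaml
# Sketch: a Deployment combining replication, labels/selectors,
# and a liveness probe. Image, names, and port are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                      # replication and auto-healing
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web                   # label matched by the selector above
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
          livenessProbe:           # restart the container if this check fails
            httpGet:
              path: /
              port: 80
            initialDelaySeconds: 5
            periodSeconds: 10
```

Applied with `kubectl apply -f deployment.yaml`; exposing it via a NodePort Service (`kubectl expose deployment web --type=NodePort --port=80`) connects this to the networking bullets above.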

Week 6 - Cloud Services & AWS Fundamentals

● Overview of AWS Global Infrastructure
● Detailed Overview of Elastic Compute Cloud (EC2)
● Setting Up Your First EC2 Instance
● In-Depth Guide to EC2 Instance Configuration
● Exploring EC2 Options in Detail
● Connecting to Cloud Instances
● Security Group Handling and Management
● Introduction to Amazon S3
● Auto Scaling and Load Balancing
● Understanding CloudFormation and CloudWatch
● Exploring Simple Notification Service (SNS) and Simple Queue
Service (SQS)
● Overview of Relational Database Service (RDS) and Identity and
Access Management (IAM)
● Project-Based Learning: ECS and ECR
● Serverless Architecture
● Utilizing CloudWatch for Monitoring and Setting Billing Alarms
● Hands-On Experience with AWS Services:
○ AWS VPC
○ AWS Lambda
○ Amazon API Gateway
○ Amazon SNS
○ Amazon CloudFront
○ AWS CloudFormation
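The CloudFormation bullet can be grounded with a minimal template. This is a sketch only: the logical resource name and tag values are illustrative, and the bucket name is left for AWS to generate.

```yaml
# Minimal CloudFormation sketch: one S3 bucket with versioning enabled.
# The logical name and tags are illustrative placeholders.
AWSTemplateFormatVersion: '2010-09-09'
Description: Minimal S3 bucket example for the CloudFormation topic
Resources:
  AppBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
      Tags:
        - Key: project
          Value: devops-training
Outputs:
  BucketName:
    Value: !Ref AppBucket
```

It would typically be deployed with `aws cloudformation deploy --template-file bucket.yaml --stack-name demo`, after which CloudWatch can be pointed at the resulting resources for monitoring and billing alarms.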
Week 7 - Ansible

● Defining Ansible and Understanding the Need for Configuration Management
○ In-Depth Exploration of Ansible Architecture
○ Analyzing the Architecture of Ansible for Efficient Configuration Management
○ Detailed Steps for Installing and Setting Up Ansible
○ Exploring Essential Components such as Ansible Roles, Ansible Collections, Ad-Hoc Commands, and Playbook Setup
● Automation with Ansible Playbooks
○ Creating Playbooks for Automation
○ Building Playbooks to Copy Files with Special Variables
○ Utilizing Ansible Handlers and Notifiers for Effective Automation
○ Implementing Playbooks for Downloading Artifacts and Unzipping Files
● Advanced Automation Scenarios with Ansible
○ Leveraging Ansible Tags for Targeted Deployment to Servers
○ Automating the Installation of Apache and Configuring the Corresponding Configuration Files
○ Configuring a Multi-Node Kubernetes Cluster with Ansible
○ Managing Variables and Ansible Facts
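The Apache-installation scenario above might look like the following playbook. It is a sketch under stated assumptions: the `webservers` host group, the `httpd` package (Red Hat family), the template path, and the tag name are all illustrative.

```yaml
# Sketch: install and start Apache with a variable, a handler, and a tag.
# Host group, variable values, and paths are illustrative.
- name: Install and configure Apache
  hosts: webservers
  become: true
  vars:
    http_port: 80
  tasks:
    - name: Install httpd
      ansible.builtin.yum:
        name: httpd
        state: present
      tags: [apache]

    - name: Deploy configuration file
      ansible.builtin.template:
        src: httpd.conf.j2
        dest: /etc/httpd/conf/httpd.conf
      notify: Restart httpd        # handler runs only if the file changed
      tags: [apache]

  handlers:
    - name: Restart httpd
      ansible.builtin.service:
        name: httpd
        state: restarted
```

Run with `ansible-playbook -i inventory apache.yml --tags apache`; the tag limits execution to the Apache tasks, matching the targeted-deployment bullet above.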

Week 8 - Infrastructure as Code Using Terraform

● Introduction to Infrastructure as Code (IaC)
● Getting Started with Terraform
● Terraform Basics: Variables, Resources, Attributes, and
Dependencies
● Terraform State Management
● Advanced Terraform Concepts: for_each and Modules
● Terraform Project Development
● AWS Infrastructure Security with Terraform
● CIDR Setup Example with /16
● Subnet Configuration with Terraform
● Terraform State Locking
● Terraform Modules
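The /16 CIDR, subnet, and state-locking bullets above can be sketched in HCL. The region, resource names, and the S3 bucket/DynamoDB table used for remote state are illustrative assumptions:

```hcl
# Sketch: a /16 VPC with one /24 subnet, plus remote state with locking.
# Region, names, and the backend bucket/table are illustrative.
terraform {
  backend "s3" {
    bucket         = "example-tf-state"
    key            = "network/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "tf-locks"   # enables state locking
  }
}

provider "aws" {
  region = "us-east-1"
}

variable "vpc_cidr" {
  default = "10.0.0.0/16"
}

resource "aws_vpc" "main" {
  cidr_block = var.vpc_cidr
}

resource "aws_subnet" "public" {
  vpc_id     = aws_vpc.main.id
  cidr_block = cidrsubnet(var.vpc_cidr, 8, 0)  # carves 10.0.0.0/24 out of the /16
}
```

`cidrsubnet` derives the subnet from the VPC variable, so changing the /16 once updates every dependent subnet; moving this into a module makes the pattern reusable across environments.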

Week 9 - Monitoring and Logging

● Installation of Grafana
● MySQL Database Installation
● Grafana Setup with MySQL
● Installation of Prometheus
● Setting Up Prometheus on a Kubernetes Cluster
● Monitoring the Kubernetes Cluster with Prometheus
● Alerts in Grafana
● Grafana Plugins
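A minimal Prometheus scrape configuration ties the monitoring bullets together; the job names and target addresses are illustrative placeholders:

```yaml
# Minimal prometheus.yml sketch; job names and targets are illustrative.
global:
  scrape_interval: 15s          # how often targets are scraped
scrape_configs:
  - job_name: prometheus        # Prometheus scraping its own metrics endpoint
    static_configs:
      - targets: ['localhost:9090']
  - job_name: node              # a node_exporter running on a cluster node
    static_configs:
      - targets: ['node1:9100']
```

Grafana would then add Prometheus (and the MySQL instance above) as data sources, with alert rules defined on the resulting dashboards.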

Projects

Major Projects - Mentor Guided

● Design and implement a robust DevOps project involving the deployment of Amazon Elastic Container Service (ECS) on EC2 instances, incorporating CloudWatch for monitoring and integrating load balancers for optimal application scaling.
● Build out an Amazon EKS cluster, incorporating the Nginx Ingress Controller for efficient traffic routing and integrating Cert-Manager for SSL certificate management.

Major Projects - Self Guided

● GitHub Actions: add a node as a self-hosted runner, run a specific Docker image, and expose it on a particular port using Nginx.
● System Monitoring Script via Shell Scripting - Write a script to monitor system resources (CPU, memory, disk usage) and generate reports. Include email notifications for critical conditions.
● DevOps pipeline on AWS: CI/CD pipeline with GitHub Actions and Jenkins.
● Deploying containerized applications using Docker and
Kubernetes
● Deploying AWS services using Terraform
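A starting point for the monitoring-script project above might look like this. It assumes a Linux host with `/proc` and a POSIX `df`; the 90% threshold and report format are arbitrary choices, and the email step is left as a comment because mail setup varies by site (the recipient address is hypothetical).

```shell
#!/bin/sh
# Sketch of a resource-monitoring report. Assumes Linux (/proc) and POSIX df;
# the threshold, format, and mail recipient are illustrative.
THRESHOLD=90

disk=$(df -P / | awk 'NR==2 { gsub("%",""); print $5 }')   # root FS usage %
mem=$(awk '/MemTotal/ {t=$2} /MemAvailable/ {a=$2} END { printf "%d", (t-a)*100/t }' /proc/meminfo)
load=$(cut -d ' ' -f 1 /proc/loadavg)                       # 1-minute load average

echo "$(date '+%F %T') disk=${disk}% mem=${mem}% load1=${load}"

if [ "$disk" -ge "$THRESHOLD" ]; then
  echo "CRITICAL: disk usage at ${disk}%"
  # mailx -s "disk alert on $(hostname)" ops@example.com   # hypothetical recipient
fi
```

Scheduling it with cron (e.g. every 5 minutes) and appending the report line to a log file turns this sketch into the reporting component the project asks for.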

Common questions

Integration of SonarQube in a DevOps pipeline significantly enhances code quality by automatically analyzing and identifying issues in the codebase. SonarQube's roles include detecting bugs, vulnerabilities, and code smells, which are less severe issues that could lead to further problems. It provides insights that help maintain coding standards through Quality Gates and Profiles, which define thresholds for code quality metrics that must be met. By integrating with CI/CD tools like Jenkins, it performs continuous inspection of the code and provides immediate feedback to developers, facilitating early detection of potential issues before the code is deployed. This continuous feedback loop helps ensure that high code quality is maintained consistently, reducing the risk of defects and improving maintainability.

Git and GitLab employ several strategies to manage code collaboration and version control, particularly enhancing CI/CD workflows. Git, being a distributed version control system (DVCS), allows multiple developers to work on the same codebase concurrently, managing changes through features like branching, merging, and rebasing to ensure code integrity and collaboration efficiency. GitLab builds on this by providing an integrated interface for managing repositories, issues, and pipelines from a centralized location. It supports detailed branching strategies and merge request workflows that facilitate peer reviews and collaborative decision-making. GitLab CI/CD integrates closely with the version control functionality to automatically trigger builds, tests, and deployments based on commit activities, using pipelines defined in YAML files. This seamless automation reduces manual effort while maintaining a constant flow of well-tested, deployable code, thus streamlining the CI/CD process.

Docker and Kubernetes complement each other in cloud-native application deployment by together enabling scalable, flexible, and efficient management and orchestration of containerized applications. Docker provides the environment for developing, packaging, and distributing applications as lightweight containers, encapsulating everything needed to run the software reliably. Kubernetes, on the other hand, orchestrates these Docker containers across a cluster of machines, automating deployment, scaling, and operation of application containers. Kubernetes ensures high availability and fault tolerance by providing features such as automatic bin-packing, self-healing capabilities with restarts and rescheduling, rolling updates, and load balancing. By using Docker for containerization and Kubernetes for orchestration, organizations can achieve consistent and resilient application deployments across hybrid and multi-cloud environments, optimizing resource utilization and reducing operational complexities.

Jenkins primarily serves as the automation server in a CI/CD workflow, orchestrating the various tasks involved in building, testing, and deploying code. It does so by utilizing Jenkins Pipelines and Jobs, triggered by events such as code updates in a repository. Jenkins integrates with version control systems like Git to poll for source code changes and automatically triggers builds through scripts defined in a Jenkinsfile. Git functions as the version control system, managing source code changes and facilitating collaboration among developers through branches, commits, and pull requests. Docker, on the other hand, is used for containerization, packaging the application's code along with its dependencies into containers, ensuring consistency across different environments. In summary, Jenkins acts as the orchestrator of the CI/CD process, Git handles source code management, and Docker ensures consistent deployment environments.

Containers offer a number of advantages over traditional virtual machines (VMs), including reduced resource usage, faster deployment times, and greater portability. Containers allow multiple applications to run in isolation on a single operating system kernel, sharing system resources efficiently without the overhead of running a full guest OS for each application as VMs do. Docker implements containerization by packaging applications along with their dependencies into Docker containers, enabling applications to run reliably in any environment, from local machines to cloud systems. Additionally, Docker's architecture, including the Docker Daemon, Docker Client, and Docker Host, simplifies the management and orchestration of containers, offering streamlined processes for automated building, deploying, and scaling of applications.

Red Hat Linux's file system management and user administration tools are crucial for effective system administration. The file system management tools facilitate storage management, including mounting and unmounting of file systems, partitioning, and usage monitoring, thus optimizing resource allocation and safeguarding data integrity. Tools such as the Logical Volume Manager (LVM) provide flexibility by allowing dynamic resizing of disk storage, pivotal in environments with varying storage requirements. User and group administration tools ensure secure access control by managing user accounts, permissions, and authentication policies. Commands and utilities for creating, modifying, and deleting user profiles, along with assigning roles and permissions, minimize security risks while maintaining operational productivity. These combined capabilities enable administrators to efficiently configure and manage system resources and user interactions.

Kubernetes architecture comprises several key components that contribute to its efficient container orchestration. The Control Plane, which includes the API server, scheduler, and controller manager, manages the cluster state and dispatches resources and workloads. The API server is the interface through which users interact with the cluster, while the scheduler handles pod assignments based on available resources. Node components, such as the kubelet, run on each node and communicate with the Control Plane to execute instructed tasks, while kube-proxy manages network rules for pod communication. Pods, the smallest deployable units, run containers and are managed by Replication Controllers or Deployments, ensuring desired states and load distribution. Together, these components coordinate workloads and maintain desired states across the cluster, utilizing efficient resource allocation and automated scaling and healing.

Kubernetes includes several advanced features that enhance application reliability and scalability. One key feature is automatic scaling, where the Horizontal Pod Autoscaler dynamically adjusts the number of pods in response to CPU utilization or other select metrics, ensuring applications can handle varying loads efficiently. Self-healing capabilities such as liveness and readiness probes monitor the health of application pods, automatically restarting or rescheduling them upon failures, maintaining consistent service availability. Kubernetes' StatefulSets offer stable network identities and persistent storage volumes, crucial for stateful applications needing stable resources or dependencies. Additionally, ReplicaSets maintain a defined number of pod replicas to ensure availability, seamlessly managing scaling events alongside Deployment objects. These features work in conjunction to maintain performance and reliability while facilitating the scaling of complex applications across distributed systems.

Ansible is a powerful tool for managing IT infrastructure, offering a straightforward approach through its agentless architecture. Its architecture consists of a control node that orchestrates operations on managed nodes using SSH for communication, eliminating the need for an agent on target machines. Ansible simplifies automation through its use of playbooks, written in YAML, which are procedures that define the tasks to be executed on servers. Playbooks can include modules, tasks, and variable files, enabling extensive customization and scalability of infrastructure. Ansible's roles and collections further enhance reusability and organization of workflows by grouping tasks and dependencies. Using ad-hoc commands, Ansible provides flexibility in executing quick, one-time tasks across hosts without defining complex workflows. Overall, Ansible's ease of use and extendability through a broad range of modules position it as an efficient choice for configuring systems, deploying applications, and managing IT infrastructure at scale.

Infrastructure as Code (IaC) is significant in modern cloud environments because it enables automated, repeatable, and efficient provisioning and management of infrastructure. IaC reduces the potential for human error, shortens deployment times, and facilitates version control by treating infrastructure configuration as code, enabling safer and faster rollbacks and deployments. Terraform facilitates the implementation of IaC by providing a platform-independent configuration language to define infrastructure resources in a declarative manner. It handles dependencies automatically and stores the infrastructure state, allowing consistent management of environments. Terraform's modular approach allows infrastructure components to be reused and composed effectively, simplifying complex cloud architectures while ensuring compliance and governance across deployments. Through its extensive provider ecosystem, Terraform supports a wide array of cloud services, enabling seamless integration and scalability in dynamic cloud environments.
