Transcript Summary: Introduction to Ansible Automation
1. Main Topic/Purpose
This introductory lecture provides a comprehensive overview of Ansible as a DevOps
automation tool, tracing the evolution of automation from bash scripting to modern
configuration management tools, and explaining why Ansible has become one of the most
popular and simplest automation solutions available today.
2. Key Points
Evolution of Automation Tools: The automation landscape progressed through
several phases:
1. Scripting era: Bash (Linux), Batch (Windows), then programming languages
(Perl → Python → Ruby)
2. Configuration Management era: Puppet (introduced the concept), SaltStack
(simple command execution), Chef (more powerful but complex with
servers/clients/workstations)
3. Modern era: Ansible (simplicity-focused) and Terraform (cloud-specialized)
Python became dominant due to extensive libraries, while Puppet introduced the
revolutionary concept of managing configuration state across infrastructure from a
centralized server.
Ansible's Revolutionary "Agentless" Architecture: Unlike Puppet, Chef, and SaltStack
which require agents installed on all managed machines, Ansible requires NO
agents. Instead, it leverages existing protocols:
o SSH for Linux servers (Ubuntu, CentOS, RHEL)
o WinRM for Windows servers
o APIs for cloud platforms (uses boto for AWS)
o Python libraries for databases
o Network protocols for switches/routers
The instructor emphasizes: "There is no server in Ansible. Ansible is called as control
machine" because it doesn't run persistent services—it's just Python modules that execute
and return results.
"Simplicity is Beauty" - Core Ansible Philosophy:
o Playbooks written in YAML format (easy to read/write, structured, minimal
complexity)
o No programming language knowledge required
o No complicated databases or storage—just YAML, INI, or text files
o Installation simple: pip install ansible or package managers
o "Being so simple, Ansible is also very powerful"
o The instructor repeatedly stresses: "Ansible was built on the principle of
simplicity. So try to keep your code as simple as possible."
Comprehensive Use Cases: Ansible excels at:
o System automation: Linux/Windows task automation
o Change management: Production server changes with playbooks serving as
documentation
o Provisioning: Complete infrastructure setup from scratch (cloud instances +
services)
o Orchestration: Large-scale automation combining multiple tools/scripts
o Integration: Works with Jenkins, cloud services, network devices, databases
The instructor notes playbooks can "go directly into your documentation" if written with
proper task titles.
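The point above can be sketched as a minimal playbook whose task names double as documentation. This is an illustrative assumption, not code from the lecture: the module and package choices (ansible.builtin.yum, httpd) are placeholders.

```yaml
# Hypothetical sketch: descriptive "name" fields make the playbook self-documenting.
- name: Configure web servers
  hosts: webservers
  become: true
  tasks:
    - name: Install the Apache web server
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Ensure Apache is started and enabled on boot
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: true
```

Read top to bottom, the task names alone describe the change being made—which is what allows a well-written playbook to serve as change documentation.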
How Ansible Works - Execution Model:
o Inventory file: Contains target machine info (IP addresses, usernames,
passwords)
o Modules: 1000+ built-in modules for specific tasks (installing packages,
restarting services, taking EBS snapshots)
o Playbooks: Define which modules execute on which hosts
o Execution:
For OS targets: Creates Python packages → delivers to destination →
executes → returns output → no residual software left
For API targets (cloud): Executes Python scripts locally on control
machine
o All connection and execution details abstracted—users just specify module
and destination
3. Action Items
Upcoming Learning Path:
Explore multiple Ansible examples and automation scenarios
Learn various Ansible features as needs arise throughout the course
Practice writing simple, effective playbooks
Understand when to use Ansible vs. specialized tools like Terraform
Immediate Next Steps:
"Enough talk now. Let's get into some action" - transition from theory to practical
hands-on exercises
Decision Framework Provided:
Use Ansible for general automation across infrastructure
Consider Terraform for extensive cloud-specific automation
Base tool choice on specific use cases and needs
Remember Ansible can replace multiple automation tools but evaluate appropriately
4. Important Details
Creator: Ansible developed by Michael DeHaan, written in Python
Corporate Ownership: Acquired by Red Hat, now offers Ansible Tower and enterprise
versions
Historical Context - Configuration Management Concept: Puppet originally
introduced this concept where "configuration state across your infrastructure was
supposed to be managed through Puppet server" with agents regularly querying for
desired state—like applying "a law" across all machines
Combination Approach (Historical): "A combination of Puppet and Salt stack was
very famous. Generally we used to run Puppet agents through Salt stack, an initial set
up through Salt stack and then most of the configuration through Puppet."
Why Python Won: Among Perl, Python, and Ruby, "Python got more popular
because of its huge libraries where we get library to automate various kinds of
things." Perl was "too much complicated as too much syntax."
Chef's Complexity: Chef came with "more complications. There are too many moving
parts in Chef. You have server, you have client, you have workstation."
Ansible Focus Evolution: "Initially came in more focused on Linux machines. Then
the Windows automation was also possible... cloud automation, then networking
tool automation, database automation. There's so much of integration came in with
Ansible now."
Output Format: Ansible output written in JSON format
Tool-Neutrality Disclaimer: "I'm not saying Puppet or Chef is bad, it's all just a matter
of the use case or your inclination towards a tool that also matters."
Domain-Specific Languages: Puppet has its own DSL (Domain Specific Language),
Chef allows Ruby code, but Ansible uses YAML—no programming required
Remote vs. Local Execution:
o Remote: SSH/WinRM sends scripts to destination, executes, returns output
o Local: API tasks execute on control machine with no script transfer
Instructor's Personal Opinion: Ansible is the instructor's "personal favorite" among
automation tools
Windows Connection Setup: Requires enabling remote connection in Windows
PowerShell for WinRM to work
Installation Methods: pip install ansible OR package managers (both equally valid)
Course Context: This is the introductory lecture for the Ansible section within the broader
"Decoding DevOps – From Basics to Advanced Projects with AI" course, following the
Terraform section and preparing students for hands-on Ansible automation exercises.
Transcript Summary: Setting Up Ansible Infrastructure
1. Main Topic/Purpose
This hands-on lecture demonstrates how to set up a complete Ansible infrastructure on
AWS, including launching EC2 instances for the control machine and client servers,
configuring security groups properly, understanding SSH fingerprints and known_hosts
behavior, and installing Ansible on the control machine.
2. Key Points
Infrastructure Architecture Design: Simple but practical setup consisting of:
o 1 Control Machine: Ubuntu 22.04 EC2 instance with Ansible installed
o 3 Client Machines: CentOS Stream 9 instances (2 web servers, 1 DB server)
o Future expansion: Will add Ubuntu instance as additional web server
o Ansible runs only on control machine—clients need nothing installed
Critical Security Group Configuration: The most important setup detail that causes
common failures. Client security groups must allow two SSH rules:
1. Port 22 from your IP (for direct SSH access)
2. Port 22 from Control Security Group (for Ansible connectivity)
The instructor emphasizes: "Be careful in this one. Otherwise Ansible will not be able to SSH
or connect to these machines." This is easy to miss but essential for Ansible to function.
SSH Fingerprint and known_hosts Mechanism: Detailed explanation of SSH's first-
connection behavior:
o First SSH to a machine: Prompts "Are you sure you want to continue?"
showing the host's fingerprint (NOT SSH keys)
o Fingerprints stored in ~/.ssh/known_hosts file
o Subsequent connections don't prompt because fingerprint is recognized
o Clearing known_hosts: cat /dev/null > ~/.ssh/known_hosts removes all stored
fingerprints
o Why this matters: "Ansible is also going to do the same to connect to those
web servers and DB servers... we are going to see similar question, but we are
going to resolve that very differently" (foreshadowing inventory file
configuration)
Ansible is a Python Library with Dependencies: Installation through apt install
ansible brings Python dependencies automatically. Key insights:
o Uses both Python 2 and Python 3 depending on modules
o Python interpreter selection depends on "many factors"
o Critical distinction: "All the code... will be translated to Python scripts. And
those Python scripts will run on the target machine on the client machine and
not on the control machine."
o Control machine runs Python only for localhost operations
o Different Ansible modules use different Python interpreters/versions on client
side
Documentation-First Learning Approach: The instructor explicitly states: "In fact you
will see that we use documentation a lot. I'm going to take you through the
documentation in almost every lecture." This emphasizes learning to navigate official
documentation rather than memorizing commands, modeling real-world DevOps
practices.
3. Action Items
Control Machine Setup:
1. Launch EC2 instance with these specifications:
o Name: control
o AMI: Ubuntu 22.04
o Instance Type: T2 Micro
o Key Pair: Create new key named control (PEM format)
o Security Group: control-SG - Allow port 22 from your IP only
Client Machines Setup (Launch 3 simultaneously):
1. Launch 3 EC2 instances:
o Initial name: vprofile-web00 (count: 3)
o Rename individually:
Instance 1: web01
Instance 2: web02
Instance 3: db01
o AMI: CentOS Stream 9 (AWS Marketplace, free with T2 Micro)
o Instance Type: T2 Micro (must change from default)
o Key Pair: Create new key named client_key (PEM format)
o Security Group: client-SG with TWO rules:
Port 22 from your IP
Port 22 from Control Security Group (critical for Ansible)
Ansible Installation on Control Machine:
1. SSH into control machine:
   ssh -i ~/downloads/[Link] ubuntu@<CONTROL_PUBLIC_IP>
2. Update package repository:
   sudo apt update
3. Add Ansible repository (command from official documentation)
4. Install Ansible:
   sudo apt install ansible -y
5. Verify installation:
   ansible --version
Next Lecture Preview:
Learn to use Ansible to connect to web/DB servers
Study the inventory file (described as "very important file")
Test connectivity between control and client machines
4. Important Details
CentOS Login Username: For CentOS Stream 9 from AWS Marketplace, the default
username is ec2-user (not centos or root)
Key File Format: All key pairs created as PEM format (not PPK)
AWS Marketplace Subscription: CentOS Stream 9 AMI requires "Subscribe now" step
but remains free when used with T2/T3 Micro instances
Instance Type Warning: AWS may suggest larger instance types—must manually
change to T2 Micro to stay in free tier
Ansible Version Variability: Instructor saw version 2.4.6 with Python 3, but notes
"you may see a different version than mine"—versions don't need to match exactly
Configuration File Location: Mentioned during version check output but details
deferred to "coming lectures"
SSH Connection Syntax:
ssh -i <key-file> <username>@<public-ip>
known_hosts File Location: ~/.ssh/known_hosts in home directory
Clearing known_hosts Command Explanation:
cat /dev/null > ~/.ssh/known_hosts
o /dev/null is null/empty
o Redirects empty content to file, effectively clearing it
o Alternative to deleting the file
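The same truncation trick can be demonstrated on a scratch copy rather than the real known_hosts file (the /tmp path and fake entry are illustrative):

```shell
# Sketch: clear a file by redirecting "nothing" into it, as the lecture does
# with ~/.ssh/known_hosts. The path and entry below are placeholders.
printf 'host1 ssh-ed25519 AAAA...\n' > /tmp/known_hosts_demo  # fake fingerprint entry
cat /dev/null > /tmp/known_hosts_demo                          # /dev/null is empty; redirect clears the file
wc -c < /tmp/known_hosts_demo                                  # byte count is now 0
```

Unlike deleting the file, this preserves the file itself (and its permissions) while removing all stored fingerprints.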
Installation Documentation Path:
o Google: "Ansible installation"
o [Link] → Installing Ansible → Installing Ansible on specific
operating system → Ubuntu
Repository Addition Command: Uses software-properties-common package and
add-apt-repository command (exact command copied from documentation)
Python Interpreter Selection Factors: "Depends on many factors which Python
interpreter it is going to use. We have different modules in Ansible... different
modules uses different Python interpreter, different Python version."
Where Python Runs: "Mostly it's at the client side. That means where we are doing
the setup web server db server. On that machines Python interpreter will be used."
Control Machine Python Usage: Only runs Python scripts "when you mention local
host. If you want to run something locally."
Terminal Paste Shortcut: Shift+Insert for pasting in Git Bash terminal
Connection Interruption Note: Instructor experienced SSH disconnection during
recording ("For some reason it just threw me out") and reconnected—normal
occurrence to expect
Teaching Philosophy Observed: The instructor deliberately explores SSH fingerprint behavior
in depth, explicitly stating why: to prepare students for how Ansible handles first-time
connections differently, demonstrating "teach the why, not just the what" approach.
Transcript Summary: Ansible Inventory File and Ping Module
1. Main Topic/Purpose
This hands-on lecture teaches how to create an Ansible inventory file in YAML format,
configure SSH key-based authentication to target machines, resolve host key verification
issues, and successfully test connectivity using the Ansible ping module—establishing the
foundation for managing multiple target servers from a control machine.
2. Key Points
Inventory File is Central to Ansible: The inventory file provides Ansible with all
information needed to connect to target machines: IP addresses, usernames, SSH
private keys, and port numbers. Two formats exist (INI and YAML); this course uses
YAML format. The instructor strongly advises: "I do not recommend using the default
inventory file. Always have inventory in your repository. So then you can fetch it on
any machine and you can run your Ansible code."
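For comparison, the same single-host inventory can be sketched in the other supported format, INI (the host name, IP, and key path here are hypothetical placeholders):

```ini
; Hypothetical INI equivalent of the YAML inventory used in this course
[webservers]
web01 ansible_host=172.31.0.11 ansible_user=ec2-user ansible_ssh_private_key_file=client-key.pem
```

Both formats carry the same information; the course standardizes on YAML.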
Critical Security Practice - SSH Key Permissions: Private SSH keys must have 400
permissions or Ansible refuses to use them. The error message explicitly states:
"Permission 0664 for client key are too open... unprotected private key file." This is a
security requirement, not just a preference. Command: chmod 400 [Link]
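The permission fix can be sketched on a scratch file (the /tmp/demo_key path is illustrative; in the lecture the target is the copied private key file):

```shell
# Sketch: reproduce the "too open" state and the fix on a throwaway file.
touch /tmp/demo_key       # stand-in for the private key file
chmod 664 /tmp/demo_key   # the 0664 permissions Ansible rejects as "too open"
chmod 400 /tmp/demo_key   # owner read-only: what SSH/Ansible require
ls -l /tmp/demo_key       # mode column now reads -r--------
```

chmod 400 leaves only the owner-read bit set, so no other user on the machine can read the private key.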
Host Key Verification Creates Interactive Barrier: On first SSH connection, Ansible
prompts "Do you want to continue? Yes or no?" which breaks automation. The
instructor explains: "We don't want our execution to be interactive, no questions
asked... If you're running it from background, this is not going to work out."
Solution: Edit /etc/ansible/[Link] and set host_key_checking = false by:
1. Removing semicolon (comment character) from the line
2. Changing value from true to false
This tells Ansible to "just accept the connection. Just say yes by default."
YAML Inventory File Structure and Indentation: Proper YAML formatting is critical
with consistent spacing (2 spaces per indentation level). Structure:
all:
  hosts:
    web01:
      ansible_host: <private_IP>
      ansible_user: ec2-user
      ansible_ssh_private_key_file: [Link]
Variables include:
o ansible_host: Target machine IP (use private IP, not public)
o ansible_user: SSH username (CentOS 9 AMI from Amazon uses ec2-user, NOT
centos)
o ansible_ssh_private_key_file: Path to private key (relative to current
directory)
Ansible Ping Module Tests SSH Connectivity: The ping module is NOT network ICMP
ping—it performs SSH login and returns. Command syntax:
ansible web01 -m ping -i inventory
o web01: Host name from inventory file
o -m ping: Module to execute
o -i inventory: Path to inventory file
Success response shows:
o Green color (success vs red for failure)
o changed: false (nothing modified on target)
o Response: pong (confirming successful SSH)
3. Action Items
Project Structure Setup:
1. Create project directory: mkdir vprofile
2. Change into directory: cd vprofile
3. Create exercise folder: mkdir exercise1
4. Change into exercise: cd exercise1
5. Treat vprofile as Git repository with exercise subdirectories for easy code revision
Create Inventory File:
1. Create file: vi inventory (name can be anything)
2. Write YAML structure with proper indentation (2 spaces):
   all:
     hosts:
       web01:
         ansible_host: <web01-private-IP>
         ansible_user: ec2-user
         ansible_ssh_private_key_file: [Link]
3. Save and quit (:wq)
Copy SSH Private Key:
1. Exit from control machine temporarily
2. Display local key: cat [Link]
3. Carefully copy entire key content (no extra spaces/characters)
4. macOS warning: Extra % may appear at end—don't copy it
5. Copy from -----BEGIN... to ...END----- including five hyphens
6. SSH back to control machine
7. Navigate to exercise1 directory
8. Create key file: vi [Link] (exact name from inventory)
9. Paste key content (ensure no extra characters)
10. Save and quit
Set Key Permissions (Critical):
chmod 400 [Link]
Configure Ansible to Skip Host Key Checking:
1. Switch to root: sudo su -
2. Navigate: cd /etc/ansible
3. Backup existing config: mv [Link] [Link]
4. Generate new config (copy command from documentation)
5. Edit config: vi [Link]
6. Search for: /host_key_checking
7. Remove semicolon at line start (uncomment)
8. Change value from true to false
9. Save: :wq
10. Exit root: exit
11. Return to ubuntu user and exercise1 directory
Test Connection:
ansible web01 -m ping -i inventory
Expected Success Output:
Green colored response
changed: false
pong response
Next Lecture Preview:
Add remaining machines (web02, db01) to inventory
Explore additional inventory file features
4. Important Details
Terminology: From this point forward, web01, web02, and db01 are referred to as
"targets" (not clients, not nodes)
Private vs Public IP: Always use private IP addresses in inventory when all machines
are in the same VPC/network. Security group rules were configured to allow this.
CentOS Username Critical: CentOS 9 AMI from Amazon uses ec2-user as the default
username, NOT centos or root. This is AMI-specific.
Default Inventory Location: /etc/ansible/hosts (if not specified with -i)
Ansible Command Structure:
ansible <host-pattern> -m <module-name> -i <inventory-file>
Comment Characters in [Link]: Both ; (semicolon) and # (hash) are used as
comment characters
Key File Path: Can be relative (e.g., [Link] in current directory) or absolute
(e.g., /home/ubuntu/vprofile/exercise1/[Link])
Common Errors Encountered:
1. Host Key Verification Failed:
Error: "failed to connect to the host via SSH, host key verification
failed"
Solution: Configure host_key_checking = false
2. Unprotected Private Key File:
Error: "permission 0664 for client key are too open"
Solution: chmod 400 [Link]
Ansible Color Coding:
o Red: Failure/unreachable
o Green: Success
o Yellow/Orange: Changed (will be seen in future modules)
Module Response Format: Every module returns output. Ping module returns:
changed: false
ping: pong
Configuration File Generation: The /etc/ansible/[Link] file contains a command
within it to regenerate itself if needed
File Permission Check: View permissions with ls -l [Link]
o Before chmod: 664 (too open)
o After chmod: 400 (correct)
Documentation Reference: Search "Ansible inventory" → "How to build your
inventory" for official docs
Inventory File Variables:
o ansible_host: IP address or hostname
o ansible_port: Default 22 if not specified
o ansible_user: SSH username
o ansible_password: NEVER USE - "very dangerous... in clear text"
o ansible_ssh_private_key_file: Preferred authentication method
o Many other variables available (check documentation)
Multiple Indentation Levels in YAML:
o Level 1 (0 spaces): all:
o Level 2 (2 spaces): hosts:
o Level 3 (4 spaces): web01:
o Level 4 (6 spaces): ansible_host:, ansible_user:, etc.
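The levels above can be sketched with the spacing annotated (the IP is a placeholder):

```yaml
all:                             # level 1: 0 spaces
  hosts:                         # level 2: 2 spaces
    web01:                       # level 3: 4 spaces
      ansible_host: 172.31.0.11  # level 4: 6 spaces (hypothetical IP)
      ansible_user: ec2-user
```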
Instructor's Memory Note: "I already know this because I used Ansible a lot. So most
of the variables... got automatically by hearted" - emphasizes that memorization
comes with practice, not required upfront
Pedagogical Approach: The instructor intentionally attempts connection with incorrect key
permissions (664) to demonstrate the error message, then shows the correct solution—
teaching through controlled failure and resolution rather than just showing the correct path.
Transcript Summary: Ansible Inventory Part 2 - Grouping and Variables
1. Main Topic/Purpose
This lecture extends inventory file knowledge by demonstrating how to add multiple hosts,
organize them into logical groups (including nested parent-child group hierarchies), define
variables at different levels (host vs group), and use host patterns for targeting multiple
machines efficiently—transforming from managing individual hosts to managing
infrastructure at scale.
2. Key Points
Host Grouping Enables Scalable Management: Instead of running commands against
individual hosts (ansible web01, ansible web02, etc.), create logical groups to
manage multiple hosts simultaneously. The instructor emphasizes: "What if like this
you have 10, 20, 50, and like that many, many host, right? Doing a ping to each and
every machine is going to take like forever." Groups solve this by allowing single
commands like ansible webservers -m ping to target all web servers at once.
Group Hierarchy with Parent-Child Relationships: Ansible supports groups of groups
through the children: keyword. Create parent groups (e.g., dc_oregon) that contain
other groups (webservers, dbservers) as children. This enables commands like
ansible dc_oregon -m ping to target all hosts across multiple child groups—useful for
regional, environmental, or organizational structures.
Variable Precedence: Host Level > Group Level: Variables can be defined at both
levels with host-level variables having highest priority:
o Host level: Specific to individual machines (e.g., ansible_host for unique IPs)
o Group level: Shared across group members (e.g., ansible_user,
ansible_ssh_private_key_file)
The instructor explains: "If it does not find variables at the host level, then only it'll go at the
group level." This eliminates repetition—define common credentials once at group level
rather than repeating for every host.
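A hedged sketch of this precedence (the IPs, and the ubuntu override on web01, are hypothetical):

```yaml
all:
  hosts:
    web01:
      ansible_host: 172.31.0.11  # hypothetical IP
      ansible_user: ubuntu       # host-level value: wins for web01
    web02:
      ansible_host: 172.31.0.12  # hypothetical IP; no ansible_user here,
                                 # so the group-level value below applies
  children:
    webservers:
      hosts:
        web01:
        web02:
      vars:
        ansible_user: ec2-user   # group-level default for all members
```

Here web01 connects as ubuntu while web02 falls back to the group default ec2-user—host level checked first, group level only if the host defines nothing.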
Flexible Host Pattern Targeting: Multiple ways to target hosts:
o Group name: ansible webservers (targets all hosts in group)
o all keyword: ansible all (targets everything in inventory)
o Asterisk wildcard: ansible '*' (same as all, requires single quotes)
o Prefix matching: ansible 'web*' (targets hosts starting with "web")
o Glob-style patterns similar to Linux shell globbing (Ansible also accepts regex patterns, prefixed with ~)
This flexibility enables efficient execution against subsets of infrastructure.
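Ansible's name matching is close to shell globbing, so it can be approximated with Python's stdlib fnmatch module. This is a rough illustration only—the match helper below is hypothetical, not an Ansible API:

```python
# Sketch: approximate glob-style host-pattern matching with the stdlib.
from fnmatch import fnmatch

hosts = ["web01", "web02", "db01"]

def match(pattern, hosts):
    """Return the hosts whose names match the shell-style pattern."""
    return [h for h in hosts if fnmatch(h, pattern)]

print(match("*", hosts))     # all three hosts
print(match("web*", hosts))  # only web01 and web02
```

This mirrors why ansible 'web*' targets the two web servers while ansible '*' targets everything (and why the shell quotes are needed—unquoted, the shell would expand * itself).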
YAML Formatting is Critical: Consistent indentation (2 spaces per level) and proper
column alignment are mandatory for inventory files to parse correctly. The instructor
repeatedly stresses: "Make sure they are all in the same line... In the same column I
should say, not line" and "YAML space is very important."
3. Action Items
Exercise 2 - Add Multiple Hosts:
1. Copy exercise1 to exercise2: cp -r exercise1 exercise2
2. Navigate to exercise2: cd exercise2
3. Open inventory file: vi inventory
4. Copy web01 configuration (4 lines): 4yy
5. Paste twice to create web02 and db01 entries
6. Update hostnames: web02, db01
7. Update ansible_host with correct private IPs:
o Get web02 private IP from AWS console
o Get db01 private IP from AWS console
8. Keep ansible_user: ec2-user and key file same for all
9. Test individual hosts:
   ansible web02 -m ping -i inventory
   ansible db01 -m ping -i inventory
Add Host Groups:
1. At same indentation level as hosts: (2 spaces), add:
   children:
     webservers:
       hosts:
         web01:
         web02:
     dbservers:
       hosts:
         db01:
2. Ensure proper indentation (6 spaces for host names under each group)
3. Add colons after each hostname: web01:, web02:, db01:
Create Parent Group:
1. At same level as other groups, create:
   dc_oregon:
     children:
       webservers:
       dbservers:
2. Use underscores (_), NOT hyphens (-) in group names
3. Test group targeting:
   ansible webservers -m ping -i inventory
   ansible dbservers -m ping -i inventory
   ansible dc_oregon -m ping -i inventory
   ansible all -m ping -i inventory
   ansible '*' -m ping -i inventory
   ansible 'web*' -m ping -i inventory
Exercise 3 - Group-Level Variables:
1. Copy exercise2 to exercise3: cp -r exercise2 exercise3
2. Navigate to exercise3: cd exercise3
3. Open inventory file
4. Under the dc_oregon: group, at the same level as children:, add:
   vars:
     ansible_user: ec2-user
     ansible_ssh_private_key_file: [Link]
5. Remove ansible_user and ansible_ssh_private_key_file from all individual hosts
(web01, web02, db01)
6. Keep only ansible_host at host level
7. Test functionality:
   ansible all -m ping -i inventory
8. Test variable precedence by intentionally breaking it:
o Change ansible_user: ec2-user to ansible_user: ec2-
o Run ping command (should fail with "permission denied")
o Revert to correct username
o Confirm success
Next Lecture Preview:
Learn modules that make actual changes to machines
Install packages
Manage services
Manage files
4. Important Details
Inventory Structure Example (Final State):
all:
  hosts:
    web01:
      ansible_host: <private_IP>
    web02:
      ansible_host: <private_IP>
    db01:
      ansible_host: <private_IP>
  children:
    webservers:
      hosts:
        web01:
        web02:
    dbservers:
      hosts:
        db01:
    dc_oregon:
      children:
        webservers:
        dbservers:
      vars:
        ansible_user: ec2-user
        ansible_ssh_private_key_file: [Link]
Indentation Levels (2 spaces each):
0 spaces: all:
2 spaces: hosts:, children:
4 spaces: host names (web01:), group names (webservers:)
6 spaces: ansible_host: under hosts, hosts: under groups
8 spaces: host names under group's hosts:
Vi Editor Copy/Paste Commands:
Copy 4 lines: 4yy (position cursor on first line)
Paste: P (capital P)
These commands work within vi editor for quick duplication
Group Naming Rules:
Use underscores: dc_oregon ✓
Avoid hyphens: dc-oregon ✗ (causes warnings/errors depending on Ansible version)
The instructor explicitly warns: "'-' is not at all recommended. It'll give you a warning
or it may also give you error if you're using a different version of Ansible."
Host Pattern Syntax:
Direct group: ansible webservers -m ping -i inventory
All hosts: ansible all -m ping -i inventory
Wildcard (quoted): ansible '*' -m ping -i inventory
Prefix match (quoted): ansible 'web*' -m ping -i inventory
Single quotes required when using wildcards
Variable Types:
Host-level: Highest priority, specific to individual machines
Group-level: Lower priority, shared across group members
Lookup order: Host variables checked first, then group variables
Common use case: IP addresses at host level, credentials at group level
Error Messages:
Permission denied: Indicates incorrect ansible_user OR incorrect
ansible_ssh_private_key_file OR both
The instructor tests this by changing ec2-user to ec2- (invalid username)
Result: "permission denied. It is using the user called ec2, there is no such user in this
machine so that's why permission denied."
Module Execution Scope:
"This is not just about ping, okay? Any module like that you can execute for the
group."
Groups work with all Ansible modules, not just testing commands
Common Mistake Prevention:
Always verify column alignment in YAML
Check group names match exactly between definition and command line
Remember to add colons after hostnames when converting to group members
Don't forget to remove redundant variables after moving them to group level
Regular Expression Support:
Ansible host patterns support Linux-style glob patterns
* matches any characters
Can use prefix matching: web* matches web01, web02, web-anything
Vi Editor Context:
Instructor uses vi throughout (not nano or other editors)
Commands shown assume vi/vim proficiency
Alternative: use any text editor, manually copy/paste configurations
Pedagogical Pattern: The instructor uses a "copy, modify, test, break, fix" cycle—copy
working configuration, modify for new concept, test success, intentionally break to show
error, fix to reinforce learning. This hands-on approach with controlled failures builds
troubleshooting skills alongside configuration knowledge.
Transcript Summary: YAML & JSON Data Formats for Ansible
1. Main Topic/Purpose
This lecture explains JSON and YAML data formats by building from Python dictionary
foundations, demonstrating why YAML has become the preferred format for modern DevOps
tools (especially Ansible playbooks), and teaching how to read Ansible's JSON output—
emphasizing the evolution from complex XML formats to human-readable data structures.
2. Key Points
Evolution from XML to JSON to YAML: The instructor references Maven's [Link] as
an example of difficult-to-read/write formats, stating: "If you remember [Link]
file from Maven and Jenkins session, you know, it's really difficult to read and also to
write. But thankfully now we have moved to a better data structure." The
progression represents increasing human readability while maintaining machine
parsability. The instructor adds: "In today's time if I see any tool using XML format, I'll
be really worried."
Python Dictionary → JSON → YAML Transformation: All three formats represent the
same data structure, just with different syntax:
o Python dictionary: Horizontal format with {}, [], quotes, commas
o JSON: Vertical/formatted version of Python dictionary (still has {}, [], quotes)
o YAML: Cleanest format using only indentation, colons, and hyphens (no
braces/brackets)
Example structure shown: Three keys (DevOps, development, ansible_facts) with values as
lists and dictionaries. The instructor's verdict: "I know which one you liked. That's everyone's
choice. YAML."
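The dict-to-JSON half of this equivalence can be demonstrated with Python's stdlib json module; the data literal below mirrors the lecture's example structure:

```python
# Sketch: the "horizontal" Python dict and the "vertical" JSON form are the
# same data; json.dumps just pretty-prints it with indentation.
import json

data = {
    "DevOps": ["ansible", "terraform"],
    "development": ["python", "java"],
    "ansible_facts": {"discovered_interpreter": "/usr/bin/python3"},
}

vertical = json.dumps(data, indent=2)
print(vertical)  # same structure, one key/value per line with braces and brackets
```

Parsing the JSON back (json.loads) returns the identical dictionary, which is why the instructor treats the formats as one data structure with different syntax.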
YAML Syntax Essentials:
o Indentation is critical (like Python)—use 2 or 3 spaces consistently
o Lists: Use hyphen + space (- item)
o Dictionaries: Use key: value (colon + space mandatory)
o No braces or brackets needed (unlike JSON)
o Quotes optional unless special characters present
o Three hyphens (---): Optional file start indicator
o Space after hyphen and colon is required, not optional
Understanding Ansible Output (JSON Format): When Ansible modules execute, they
return JSON output with critical keys:
o changed: Boolean indicating if task modified system state
false for ping module (no changes made—"soft touch module, it login
and come back")
true when task makes changes (installing packages, pushing configs
first time)
false on subsequent runs if system already in desired state
o Module-specific keys: Each module has unique output (e.g., ping returns
pong)
o ansible_facts: Often contains discovered information about target systems
The instructor emphasizes: "Changed will be true if this module if this task execution made
any change... When you run it once again, if the system is in the same state, the changed
value will be false."
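A toy model—not Ansible's implementation—of the changed semantics described above; ensure_package is a hypothetical stand-in for a package module:

```python
# Illustrative sketch: a task reports changed=True only when it had to act,
# and changed=False when the system is already in the desired state.
def ensure_package(installed, name):
    """Return (new_state, changed), mimicking a module's result."""
    if name in installed:
        return installed, False          # already installed -> changed: false
    return installed | {name}, True      # had to install -> changed: true

state = set()
state, changed = ensure_package(state, "httpd")
print(changed)  # True: first run made a change
state, changed = ensure_package(state, "httpd")
print(changed)  # False: second run, already in desired state
```

This check-before-act pattern is the essence of the idempotency the next lecture demonstrates with a real package installation.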
Foundation in Python Data Types is Essential: Repeatedly emphasized that
understanding Python dictionaries and lists makes JSON and YAML intuitive. "When
you're learning Python, it's very important for you to understand Python data types.
If you understand that, you will very easily understand JSON and then you'll very
easily understand YAML."
3. Action Items
Understanding Data Structure Relationships:
1. Review Python dictionary concepts (keys, values, lists, nested dictionaries)
2. Compare horizontal (Python) vs vertical (JSON) vs indented (YAML) representations
3. Practice mentally converting between formats
Study Ansible Output:
1. Examine JSON output from Ansible module executions
2. Identify the changed key in all outputs
3. Understand module-specific output keys (e.g., ping: pong)
4. Look for ansible_facts dictionaries in responses
Learn YAML Syntax:
1. Memorize list syntax: - item (hyphen + space + item)
2. Memorize dictionary syntax: key: value (colon + space + value)
3. Practice proper indentation (2 or 3 spaces consistently)
4. Remember quotes are optional unless special characters
5. Know --- is optional file start marker
Reference Documentation:
1. Google "ansible YAML" to find official YAML syntax documentation
2. Read Ansible documentation on YAML basics
3. Study examples of lists, dictionaries, and nested structures
4. Review indentation rules and spacing requirements
Next Lecture:
Deep dive into how changed value works in practice
See actual package installation showing changed: true then changed: false
Understand idempotency through the changed key behavior
4. Important Details
Example Data Structure (All Three Formats):
Python Dictionary (Horizontal):
{'DevOps': ['ansible', 'terraform'], 'development': ['python', 'java'], 'ansible_facts':
{'discovered_interpreter': '/usr/bin/python3'}}
JSON (Vertical):
{
  "DevOps": [
    "ansible",
    "terraform"
  ],
  "development": [
    "python",
    "java"
  ],
  "ansible_facts": {
    "discovered_interpreter": "/usr/bin/python3"
  }
}
YAML (Indented):
DevOps:
  - ansible
  - terraform
development:
  - python
  - java
ansible_facts:
  discovered_interpreter: /usr/bin/python3
Ansible Ping Module Output Example (JSON):
"db1": {
    "ansible_facts": {
        "discovered_python_interpreter": "/usr/bin/python3"
    },
    "changed": false,
    "ping": "pong"
}
YAML List Format:
Correct: - item (hyphen, space, item)
Can start on same line or next line after key
Space after hyphen is mandatory
YAML Dictionary Format:
Format: key: value
Space after colon is mandatory
Nested dictionaries use additional indentation
Critical Spacing Rules:
Indentation: 2 or 3 spaces (be consistent within file)
After hyphen in list: must have space
After colon in dictionary: must have space
Wrong: key:value ✗
Correct: key: value ✓
Special Characters and Quotes:
Quotes optional for simple strings
Use quotes when special characters present
Both single and double quotes work
File Start Marker:
Three hyphens: ---
Optional but indicates YAML file start
Shows where content begins
Ansible Usage:
Playbooks: Written in YAML format
Module outputs: Returned in JSON format
Inventory files: Can be YAML or INI format
Changed Key Behavior:
Present in all Ansible module outputs
Boolean value (true or false)
Ping module: always false (no system changes)
Installation modules: true first run, false subsequent runs if already installed
Configuration modules: true when file differs, false when already correct
PyCharm Feature Noted:
"PyCharm is smart. It's already taking everything properly"
IDE automatically formats vertical JSON structures
Documentation Search:
Google: "ansible YAML"
Look for "YAML syntax" in Ansible docs
Contains lists, dictionaries, indentation examples
Instructor's Preference (Implied):
Strongly prefers YAML over JSON over XML
JSON described as improvement over XML
YAML described as "pleasure" compared to JSON
XML tools would cause worry in modern context
List of Dictionaries Example (From Documentation):
- Martin:
    name: Martin D'vloper
    job: Developer
    skills:
      - python
      - perl
- Kabita:
    name: Kabita Singh
    job: DevOps
    skills:
      - ansible
      - terraform
This shows a list of two items; each item is a dictionary, whose values include a nested
dictionary and a nested list.
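The same structure, written as Python data (the foundation the instructor keeps returning to), is a list of two dictionaries, each containing a nested dictionary that in turn holds a nested list:

```python
# Python equivalent of the YAML list-of-dictionaries example above.
people = [
    {"Martin": {"name": "Martin D'vloper", "job": "Developer",
                "skills": ["python", "perl"]}},
    {"Kabita": {"name": "Kabita Singh", "job": "DevOps",
                "skills": ["ansible", "terraform"]}},
]

print(len(people))                    # the outer structure is a two-item list
print(people[0]["Martin"]["skills"])  # a nested list inside a nested dict
```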
Data Type Understanding Hierarchy:
1. Python data types (foundation)
2. JSON (vertical Python)
3. YAML (indented, clean syntax)
Humor Note:
"Yeah, I know, not so funny" — regarding ping/pong output
Acknowledges the obvious joke but emphasizes it's standard Ansible behavior
Teaching Method: Visual transformation—showing the exact same data structure in three
formats side-by-side helps students understand that these are different representations of
identical information, not different data. The "make it vertical, remove brackets, use
indentation" progression demystifies YAML by showing it as a cleaned-up version of familiar
Python dictionaries.
Transcript Summary: Ansible Ad Hoc Commands and Idempotency
1. Main Topic/Purpose
This hands-on lecture introduces Ansible ad hoc commands as quick, one-off task execution
methods, demonstrates core modules (yum, service, copy), and teaches the fundamental
concept of idempotency in configuration management—where Ansible only applies changes
when the target state differs from the desired state, making it superior to traditional scripts.
2. Key Points
Ad Hoc Commands: Quick Tasks vs. Version-Controlled Playbooks: Ad hoc
commands execute Ansible modules directly from command line without writing
playbooks. The instructor acknowledges: "It's not a good practice. Everything should
be through the code, so you can version-control it and keep it in your repositories,
right? So everything should be version-controlled." However, they're valuable for rare
tasks like: "If you want to power-off all the machines in your lab for Christmas
vacation, you could execute a quick one-liner in Ansible without writing a playbook."
Ad hoc is for expediency; playbooks are for sustainability.
Idempotency: The Core Principle of Configuration Management: This is the defining
characteristic separating configuration management tools from scripts. The instructor
emphasizes: "Focus on my words, it is going to compare the state. If there is a
difference, then it'll apply." Key behaviors:
o First run: changed: true (task modifies system)
o Second run: changed: false, message: "Nothing to do" (system already in
desired state)
o After manual change: changed: true again (detects drift and corrects it)
Definition provided: "Configuration management tools are idempotent. What is
idempotent? ...If the target is in a different state, then only it'll apply the changes. If it's in
the same state, it'll not apply the changes and your scripts or your commands are also not
going to fail."
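The compare-then-apply behavior can be sketched in a few lines of Python. This is a toy model of the idea, not Ansible's implementation: a task only acts, and only reports `changed: true`, when the current state differs from the desired state.

```python
# Toy model of idempotency: 'system' holds current state, and a task
# declares the desired value for one key. Apply only on difference,
# and report whether anything changed.
def ensure(system: dict, key: str, desired_value: str) -> bool:
    """Return True ('changed') only if the system had to be modified."""
    if system.get(key) == desired_value:
        return False          # same state: do nothing -> changed: false
    system[key] = desired_value
    return True               # first run or drift: apply -> changed: true

system = {}
print(ensure(system, "httpd", "present"))   # first run: True
print(ensure(system, "httpd", "present"))   # second run: False ("Nothing to do")
system["httpd"] = "absent"                  # simulate manual drift
print(ensure(system, "httpd", "present"))   # drift detected and corrected: True
```

A plain script, by contrast, would be the equivalent of always executing the assignment, reapplying the change (and possibly failing) regardless of the current state.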
Privilege Escalation with --become: Many system operations (package installation,
service management) require root privileges. Using --become flag escalates privileges
to sudo level. Without it:
o Error: "This command has to be run under the root user"
o The inventory user (ec2-user) has sudo privileges but Ansible won't use them
automatically
o Solution: Add --become to command: ansible web01 -m yum -a "..." --become
This is required for most system-level operations.
Core Ansible Modules Demonstrated:
o yum: Package management - name=httpd state=present (install) or
state=absent (remove)
o service: Service management - name=httpd state=started enabled=yes (start
and enable)
o copy: File transfer - src=index.html dest=/var/www/html/index.html (destination
must include the full path)
o ping: Connectivity test (from previous lectures)
o file: File/folder properties (mentioned, not demonstrated)
o user: User management (mentioned, not demonstrated)
State Comparison: Configuration Management vs. Scripts: The instructor draws a
critical distinction:
o Scripts: "It simply reapplies. It does not check, until and unless, you add some
intelligence to it. It'll simply reapply the change."
o Configuration Management: Automatically compares current vs. desired
state before acting
o Even file content changes are detected: "Single change, single line space,
anything, any difference, it is going to find out and it is going to reapply the
change"
When asked "difference between scripting and configuration management," answer:
"Configuration management tools maintain the state of the target."
3. Action Items
Setup Exercise 4:
1. Copy exercise3 to exercise4: cp -r exercise3 exercise4
2. Navigate to exercise4: cd exercise4
3. No new files to write—focus on command-line execution
Package Management with Yum Module:
1. Install Apache (first attempt, will fail):
   ansible web01 -m ansible.builtin.yum -a "name=httpd state=present" -i inventory
   Expected error: "This command has to be run under the root user"
2. Install Apache with privilege escalation:
   ansible web01 -m ansible.builtin.yum -a "name=httpd state=present" -i inventory --become
   Expected output: changed: true, shows installation results
3. Run the same command again (demonstrates idempotency):
   ansible web01 -m ansible.builtin.yum -a "name=httpd state=present" -i inventory --become
   Expected output: changed: false, message: "Nothing to do"
4. Install on the entire webservers group:
   ansible webservers -m ansible.builtin.yum -a "name=httpd state=present" -i inventory --become
   o web01: changed: false (already installed)
   o web02: changed: true (new installation)
5. Remove Apache (test state=absent):
   ansible webservers -m ansible.builtin.yum -a "name=httpd state=absent" -i inventory --become
   Shows removal: changed: true
6. Reinstall Apache:
   ansible webservers -m ansible.builtin.yum -a "name=httpd state=present" -i inventory --become
Service Management:
ansible webservers -m ansible.builtin.service -a "name=httpd state=started enabled=yes" -i inventory --become
Starts the httpd service
Enables httpd to start on boot
Equivalent to: systemctl start httpd && systemctl enable httpd
File Copy Operations:
1. Create test file:
   vim index.html
   Content: "This is managed by ansible"
2. Copy file to web servers:
   ansible webservers -m ansible.builtin.copy -a "src=index.html dest=/var/www/html/index.html" -i inventory --become
   Expected: changed: true
3. Re-run copy (test idempotency): same command. Expected: changed: false (no
   differences detected)
4. Modify source file: edit index.html (add extra periods, spaces, any change)
5. Re-run copy (detect change): same command. Expected: changed: true (difference
   detected and applied)
Security Group and Testing:
1. Edit security group for web instances
2. Add inbound rule: Port 80 from anywhere (or from your IP)
3. Save rule
4. Get public IP of web01 or web02
5. Open in browser: http://<web-server-public-ip>
6. Verify the index.html content displays
Experimentation Tasks:
Make changes to the index.html file
Push changes using copy module
Observe changed status
Verify changes in browser
Next Lecture:
Learn to write Ansible playbooks
"Much better picture you'll get in the next lecture when we learn how to write
playbooks"
4. Important Details
Ad Hoc Command Syntax:
ansible <host-pattern> -m <module-name> -a "<arguments>" -i <inventory-file> [--become]
Module Arguments Format:
Enclosed in double quotes
Space-separated key=value pairs
Example: "name=httpd state=present"
Example: "name=httpd state=started enabled=yes"
Example: "src=[Link] dest=/var/www/html/[Link]"
State Values for Yum Module:
state=present: Install package if not installed
state=absent: Remove package if installed
Service Module States:
state=started: Start service
state=stopped: Stop service
enabled=yes: Enable service at boot
enabled=no: Disable service at boot
Copy Module Requirements:
src: Source file path (relative or absolute)
dest: Destination path including filename (must specify complete path)
Example: src=index.html dest=/var/www/html/index.html
Note: "You have to mention the complete path, including the file name"
Changed Status Behavior:
Scenario                 | changed | Message
First installation       | true    | Installation details
Second run (no change)   | false   | "Nothing to do"
After removal            | true    | Removal details
After source file edit   | true    | Update applied
After service start      | true    | Service started
Service already running  | false   | No action needed
Instructor's Favorite Feature: "I love this feature of Ansible. I have used many configuration
management tools. And every configuration management tool or automation tool, we write
scripts with its own format."
Terminology Clarification:
Ansible scripts called playbooks (not scripts)
Written in YAML format
Ad hoc commands bypass playbook creation
Documentation Reference:
"Introduction to ad hoc commands"
"Don't worry, you don't need to by-heart these module names as clear-cut
documentation will go through them"
Modules have different arguments (check documentation)
Difference Detection:
"Difference could be at the source or at the destination"
Applies to file content, package versions, service states, etc.
"Single change, single line space, anything, any difference, it is going to find out"
Example Output Messages:
Success with no changes: "success", changed: false, "Nothing to do"
Success with changes: Results showing what was installed/changed
Failure: Error message explaining why (e.g., needs root privileges)
Security Group Configuration:
Port 80 (HTTP) must be open
Options: From anywhere (0.0.0.0/0) or from your IP
Required to access web content in browser
Module Full Names:
ansible.builtin.yum
ansible.builtin.service
ansible.builtin.copy
Can use short names: yum, service, copy (built-in modules)
Privilege Escalation Details:
ec2-user has sudo privileges
Ansible won't assume sudo automatically
Must explicitly request with --become
Without it: "This command has to be run under the root user"
Question for Interviews: "Someone ask you difference between scripting and configuration
management, so you can tell this. Configuration management tools maintain the state of the
target. And you can use this word called as idempotent."
Teaching Pattern: The instructor uses repetition with variation (run command → observe
output → run again → compare output → modify state → run again → observe change) to
deeply embed the idempotency concept through hands-on experience rather than
theoretical explanation alone.
Transcript Summary: Ansible Playbooks and Module Documentation
1. Main Topic/Purpose
This comprehensive lecture teaches how to write Ansible playbooks from scratch, explains
the hierarchical YAML structure (playbook → plays → tasks → modules), demonstrates
debugging and validation options (syntax check, dry run, verbosity levels), and emphasizes
learning to navigate Ansible's extensive module documentation rather than memorizing
module names.
2. Key Points
Playbook Hierarchy: List of Plays → List of Tasks → Dictionary of Module Options:
Understanding the three-level structure is fundamental:
o Level 1 (Global): Hosts, become, tasks keywords (no indentation or 2 spaces)
o Level 2: Task names and module names (4 spaces indentation)
o Level 3: Module options/arguments as key-value pairs (6 spaces indentation)
Structure: "Playbook is list of plays, list in YAML starts like this right? Hyphen, hyphen. So this
two item in this list, and each item is a dictionary in itself, which has key and value... Here
the key task and its value is another list, list of dictionaries."
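The "playbook is a list of plays, a play holds a list of tasks, a task holds a dictionary of module options" hierarchy maps directly onto Python data types. A sketch of the same shape as nested lists and dicts:

```python
# The playbook hierarchy as Python data: the outer list holds plays; each
# play is a dictionary whose 'tasks' key holds a list of task dictionaries;
# each task carries a module name whose value is a dict of options.
playbook = [
    {   # play 1: one dictionary in the list of plays
        "name": "Webserver setup",
        "hosts": "webservers",
        "become": True,
        "tasks": [
            {"name": "Install httpd",
             "ansible.builtin.yum": {"name": "httpd", "state": "present"}},
            {"name": "Start httpd service",
             "ansible.builtin.service": {"name": "httpd", "state": "started"}},
        ],
    },
]

print(type(playbook).__name__)              # list  (playbook = list of plays)
print(type(playbook[0]["tasks"]).__name__)  # list  (tasks = list of dicts)
```

Indent this structure and drop the brackets and quotes, and you have the YAML playbook itself.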
Three Essential Validation Steps Before Execution:
1. Syntax Check: ansible-playbook --syntax-check web_db.yaml (validates YAML
structure)
2. Dry Run: ansible-playbook -C -i inventory web_db.yaml (simulates execution
without changes)
3. Actual Execution: Remove the -C flag
The instructor emphasizes: "It is considered as a very good practice that you do dry run
before you actually execute the playbook. But don't take it as a guarantee. If it runs in the
dry run there is no guarantee it is going to run." Dry runs get "very close to the actual
execution, but not the exact execution."
Verbosity Levels for Debugging (-v through -vvvv): Progressive detail for
troubleshooting:
o No flag: Standard output (play names, task names, changed status)
o -v: Shows Ansible config file, JSON output
o -vv: Adds Python version, playbook path, line numbers
o -vvv: Shows Linux command options, identity files, connection users
o -vvvv: Maximum detail for deep debugging
Use case: "When something fails logically... syntactically, if you make mistake it's going to be
very easy to sort out. But logical mistakes are difficult to catch. What user are you using the
right user? Are you using the right key? Does that user has Sudo privilege?"
"Gathering Facts" - Hidden First Task: Ansible automatically runs a setup module
before your tasks: "This basically is collection of information of the target machine.
Ansible by default uses a module called setup that collects the target machine
information in the runtime." This appears as an extra task in output (shows 3 tasks
when you wrote 2). Will be covered in separate lecture but explains why task count
doesn't match your playbook.
Documentation Over Memorization Philosophy: The instructor strongly emphasizes
learning to navigate documentation rather than memorizing hundreds of modules.
After showing the massive module list: "Now I'm going to give you one task an
assignment. You have to go through this entire list of all the modules and you have to
by heart all the module names. Ah I'm just kidding of course... You have
documentation. Know how to use the documentation. That's all you need to do.
Learn the basic structure and learn the documentation."
3. Action Items
Setup Exercise 5:
1. Copy exercise4 to exercise5: cp -r exercise4 exercise5
2. Remove [Link] file (not needed)
3. Verify inventory file and login key present
Remove Packages from Previous Exercise:
ansible webservers -m yum -a "name=httpd state=absent" -i inventory --become
Create First Playbook (web_db.yaml):
1. Create file: vi web_db.yaml
2. Write playbook with two plays:
---
- name: Webserver setup
  hosts: webservers
  become: yes
  tasks:
    - name: Install httpd
      ansible.builtin.yum:
        name: httpd
        state: present
    - name: Start httpd service
      ansible.builtin.service:
        name: httpd
        state: started
        enabled: yes

- name: DBserver setup
  hosts: dbservers
  become: yes
  tasks:
    - name: Install mariadb-server
      ansible.builtin.yum:
        name: mariadb-server
        state: present
    - name: Start mariadb service
      ansible.builtin.service:
        name: mariadb
        state: started
        enabled: yes
Execute Playbook with Validation Steps:
1. Syntax check:
   ansible-playbook --syntax-check web_db.yaml
   o Success: prints the playbook name
   o Failure: shows line number and error details
2. Dry run:
   ansible-playbook -C -i inventory web_db.yaml
   o Simulates execution without making changes
   o Verifies logic before the actual run
3. Actual execution:
   ansible-playbook -i inventory web_db.yaml
Test Verbosity Levels:
ansible-playbook -i inventory web_db.yaml -v
ansible-playbook -i inventory web_db.yaml -vv
ansible-playbook -i inventory web_db.yaml -vvv
ansible-playbook -i inventory web_db.yaml -vvvv
Intentional Error Testing (Learning Exercise):
1. Open playbook
2. Deliberately break YAML indentation (remove spaces, misalign)
3. Run syntax check to see error messages
4. Observe line number reporting (may not always point to exact error location)
5. Fix and re-validate
Observe Output Behavior:
First run: All tasks show changed: true
Second run: Already-completed tasks show changed: false
Note "Gathering facts" task appears automatically (not in your playbook)
Documentation Exploration Assignment:
1. Visit: docs.ansible.com/ansible/latest
2. Navigate to: "Using Ansible modules and plugins" → "Index of all modules"
3. Search for specific modules (e.g., yum, service, copy)
4. For each module:
o Review Parameters section (available options)
o Check Requirements section
o Study Examples section (copy-paste starting point)
5. Browse module collections:
o ansible.builtin (core modules)
o Amazon AWS modules
o Azure modules
o Windows modules
o Community modules
6. Get comfortable with documentation navigation
Next Lecture Preview:
Advanced module options
Requirements for modules
More practical examples
4. Important Details
Playbook Naming Conventions:
Can use any name (like inventory files)
Common pattern: descriptive names with .yaml extension
Example: web_db.yaml (indicates contents: web and db configurations)
YAML File Start Marker:
Three hyphens: ---
Not mandatory but "standard practice"
Indicates beginning of YAML document
Play Structure Keywords:
name: Descriptive name for the play (optional but recommended)
hosts: Target group or host pattern from inventory
become: yes/no (privilege escalation for all tasks in play)
tasks: List of tasks to execute
Task Structure:
Each task starts with - (list item indicator)
name: Descriptive task name (shows in output, aids readability)
Module name: ansible.builtin.yum:, ansible.builtin.service:, etc.
Module options: Indented key-value pairs under module name
Module Naming Convention Evolution:
Old format: yum, service, copy
New format: ansible.builtin.yum, ansible.builtin.service
"Newer Ansible modules are properly structured. The builtin module section,
community module and modules provided by different providers"
Collections structure: amazon.aws.autoscaling_group, azure.azcollection.*
Common Module Arguments:
yum/apt: name (package name), state (present/absent/latest)
service: name (service name), state (started/stopped/restarted), enabled (yes/no)
copy: src (source path), dest (destination path)
Execution Command Differences:
Ad hoc: ansible <host-pattern> -m <module> -a "<args>"
Playbook: ansible-playbook -i <inventory> <playbook.yaml>
Output Interpretation:
PLAY [Webserver setup] *****
TASK [Gathering Facts] ***** ← Automatic, not in playbook
ok: [web01]
ok: [web02]
TASK [Install httpd] *****
changed: [web01] ← Made changes
changed: [web02]
TASK [Start httpd service] *****
changed: [web01]
changed: [web02]
Status Keywords:
ok: Task succeeded, no changes needed (idempotent behavior)
changed: Task succeeded and made changes
failed: Task failed
skipped: Task was skipped based on conditions
Syntax Error Behavior:
"Offending line appears to be here"
Line number may be approximate: "Don't completely rely on this line number...
Sometimes you make mistakes in some other line and it tells you that the mistake is
in some other line. That is because of the YAML structure."
Check entire playbook structure if error location seems correct
Indentation Rules (Critical):
---                     # File start (0 spaces)
- name: Play name       # Play level (0 spaces, list item)
  hosts: webservers     # Play properties (2 spaces)
  tasks:                # Tasks keyword (2 spaces)
    - name: Task name   # Task list item (4 spaces)
      module_name:      # Module (6 spaces)
        option1: value  # Options (8 spaces)
        option2: value
Documentation Navigation Path:
1. docs.ansible.com/ansible/latest
2. "Using Ansible modules and plugins"
3. "Modules and plugin index"
4. "Index of all modules"
5. Search or browse alphabetically
6. Click module → See parameters, requirements, examples
Module Documentation Sections:
Synopsis: Brief description
Parameters: All available options with types and defaults
Requirements: Python libraries or system dependencies needed
Examples: Copy-paste-ready code samples
Return Values: What the module returns
Collections Mentioned:
ansible.builtin.* - Core Ansible modules
amazon.aws.* - AWS modules
azure.azcollection.* - Azure modules
ansible.windows.* - Windows management
ansible.posix.* - POSIX utilities
Community collections for specific technologies
Playbook File Location:
Same directory as inventory file in exercises
Can be anywhere, specify path in command
Common practice: Keep in Git repository
Best Practices Emphasized:
1. Give descriptive names to plays and tasks
2. Run syntax check before execution
3. Perform dry run to catch logic errors
4. Use appropriate verbosity for debugging
5. Keep playbooks in version control
6. Learn documentation navigation over memorization
Instructor's Humor:
After showing massive module list: Assignment to memorize all modules → "Ah I'm
just kidding of course"
Emphasizes practical approach over rote learning
Comparison to Ad Hoc:
Ad hoc: Quick, one-off commands
Playbooks: Structured, repeatable, version-controllable automation
Playbooks provide better output formatting and organization
Task Name Flexibility:
Can use any descriptive text: "Install Apache", "Start service", "Deploy config"
Makes output readable and serves as documentation
Unlike module parameters where names must be exact (e.g., package name must be
httpd, not arbitrary)
Gathering Facts Details:
Automatic task at start of each play
Uses setup module
Collects: OS info, Python interpreter, network details, hardware specs
Stored in memory during execution (not saved)
Status always shows ok (doesn't make changes)
Separate lecture will cover in detail
Teaching Pattern: The instructor uses a "build → break → fix → explore" cycle—first building
working playbook, then intentionally breaking it to show error handling, fixing it, then
exploring the vast documentation landscape to show students how to be self-sufficient with
Ansible's extensive module library. The joke about memorizing all modules reinforces the
practical, documentation-driven approach.
Transcript Summary: Finding, Using, and Troubleshooting Ansible Modules
1. Main Topic/Purpose
This intensive hands-on lecture teaches the complete DevOps workflow for working with
Ansible modules: discovering appropriate modules through documentation, implementing
them in playbooks, encountering and diagnosing failures (especially dependency issues),
troubleshooting using Google and Stack Overflow, and applying fixes—demonstrating that
reading requirements documentation prevents most problems but troubleshooting skills are
essential when issues inevitably arise.
2. Key Points
Module Discovery Through Categorized Documentation: Rather than browsing all
modules randomly, use Ansible's module index organized by categories (cloud,
database, files, etc.). The instructor emphasizes files modules are heavily used:
"Being a DevOps, our job will be majorly managing files. Their configuration of files
could be an archive, it could be an artifact, a text file, configuration files, scripts. We
deal with files a lot." Navigate: Ansible module index → Category (e.g., Files) →
Specific module (e.g., copy) → Examples section (copy-paste starting point).
Module Parameters: Required vs Optional (Read Documentation First): Every
module has:
o Required parameters: Must be specified (e.g., dest for copy module)
o Optional parameters: Enhance functionality (e.g., backup: yes, owner, group)
Check documentation's Parameters section to identify which are mandatory. Optional
parameters can add powerful features like automatic backups with timestamps: "If you look
at the options, or parameters, and you can find interesting options to use which may
enhance your playbook."
Python Dependencies Are Common Module Requirements: Many modules
(especially database, cloud, network) require Python libraries on target machines
(not control machine). The instructor deliberately triggers this error: "I did this
purposefully, actually... But I wanted you to fail on this one and then fix it."
Critical workflow:
1. Read Requirements section in module documentation: "The below
requirements are needed on the host that executes this module"
2. Install dependencies before module execution (add yum/apt task in
playbook)
3. Example: mysql_db module requires PyMySQL or MySQL-python library
4. Find package names: yum search python | grep -i mysql → python3-PyMySQL
Troubleshooting Skills: Google + Stack Overflow + Notes Section: When errors
occur:
1. Copy error message (omit numbers/timestamps for better search results)
2. Google error with "ansible" keyword
3. Check Stack Overflow solutions
4. Also check module documentation's Notes section: "When you get in the
errors, right, you can see, you know, in the notes, commonly found errors.
You see Python, MySQL dependencies explicitly mentioning the socket file."
The MySQL socket file error demonstrates this: it needed login_unix_socket:
/var/lib/mysql/mysql.sock on CentOS (different from Ubuntu's default path).
Community Collections Provide Enhanced Modules: Beyond built-in modules,
community-maintained collections offer improvements:
o Install: ansible-galaxy collection install community.mysql
o Use fully qualified name: community.mysql.mysql_db (instead of just
mysql_db)
o Often include better documentation, common error solutions, and additional
features
o Still require same dependencies and troubleshooting approaches
3. Action Items
Exercise 6 - Copy Module with Backup:
1. Copy exercise5 to exercise6: cp -r exercise5 exercise6
2. Navigate: cd exercise6
3. Rename playbook: mv [Link] [Link]
4. Edit [Link]:
o Remove database play (keep only webservers play)
o Add copy task from documentation example
o Modify task:
  - name: Copy index file
    ansible.builtin.copy:
      src: files/index.html
      dest: /var/www/html/index.html
      backup: yes
5. Create files directory: mkdir files
6. Create content: vi files/index.html
   o Content: "learning modules in ansible"
7. Execute: ansible-playbook -i inventory [Link]
8. Verify on target machine:
   ssh -i <login-key> ec2-user@<web01-ip>
   cd /var/www/html
   ls
   o Should see: index.html and a backup file with a timestamp
Control Machine Hostname Change (Optional):
sudo hostname control
# Log out and back in to see change
Exercise 7 - Database Module with Dependencies:
1. Copy exercise5 to exercise7: cp -r exercise5 exercise7
2. Navigate: cd exercise7
3. Rename: mv [Link] [Link]
4. Edit [Link]:
o Remove webservers play (keep only dbservers)
o Existing tasks: Install mariadb-server, start service
Add Database Creation Task (Will Fail Initially):
- name: Create database
  mysql_db:
    name: accounts
    state: present
5. Execute (expect failure): ansible-playbook -i inventory [Link]
6. Error: Missing Python MySQL library
Troubleshoot and Fix Dependency:
1. SSH to db01:
   ssh -i <login-key> ec2-user@<db01-ip>
2. Find package: yum search python | grep -i mysql
3. Identify: python3-PyMySQL
4. Exit db01
5. Add a dependency installation task to the playbook before the database tasks:
   - name: Install pymysql
     ansible.builtin.yum:
       name: python3-PyMySQL
       state: present
Fix Socket File Error:
1. Execute again (expect socket error)
2. Google the error message: "ansible unable to find mysql socket"
3. Solution from Stack Overflow or documentation: add the socket parameter
4. Update the database task:
   - name: Create database
     mysql_db:
       name: accounts
       state: present
       login_unix_socket: /var/lib/mysql/mysql.sock
Use Community MySQL Collection:
1. Install the collection:
   ansible-galaxy collection install community.mysql
2. Update the module name in the playbook:
   - name: Create database
     community.mysql.mysql_db:
       name: accounts
       state: present
       login_unix_socket: /var/lib/mysql/mysql.sock
Add MySQL User Creation:
1. Find the module: community.mysql.mysql_user
2. Add the task:
   - name: Create database user
     community.mysql.mysql_user:
       name: vprofile
       password: 'admin943'
       priv: 'accounts.*:ALL'
       state: present
       login_unix_socket: /var/lib/mysql/mysql.sock
3. Execute the final playbook: ansible-playbook -i inventory <playbook>.yaml
4. All tasks should succeed with changed status
Skills Practiced:
Finding modules in categorized index
Reading module documentation (parameters, requirements, examples)
Copy-pasting examples and adapting to needs
Installing Python dependencies on target machines
Troubleshooting errors using Google/Stack Overflow
Reading module Notes for common issues
Installing and using community collections
Reusing solutions across similar tasks
4. Important Details
Copy Module Parameters:
src: Source file path (can be relative: files/index.html)
dest: Destination with full path including filename (/var/www/html/index.html)
backup: yes/no (creates timestamped backup before overwriting)
owner: File owner (optional)
group: File group (optional)
mode: File permissions (optional)
content: Alternative to src—specify literal text content inline
Backup File Naming:
Format: <filename>.<number>.<timestamp>~
Example: index.html.19... (number plus date/time stamp)
Located in the same directory as the original file
MySQL Module Dependencies:
Required Python libraries: PyMySQL or MySQL-python
Package names vary by OS:
o CentOS/RHEL: python3-PyMySQL
o Ubuntu: python3-pymysql
Install on target machine, not control machine
Add installation task before database tasks in playbook
MySQL Socket File Locations:
CentOS/RHEL: /var/lib/mysql/mysql.sock
Ubuntu: /var/run/mysqld/mysqld.sock
Socket files enable inter-process communication
Parameter: login_unix_socket: /path/to/socket
Must be specified explicitly for MySQL modules
Module Naming Conventions:
Built-in: ansible.builtin.<module> or just the short module name
Community: <namespace>.<collection>.<module> (e.g., community.mysql.mysql_db)
Cloud providers: amazon.aws.*, azure.azcollection.*
Requirements Section in Documentation:
Specifies Python libraries needed
Indicates where requirements must be installed (usually target machine)
Example from mysql_db: "The below requirements are needed on the host that
executes this module"
Common Errors and Solutions:
Missing Python library: Install via yum/apt in playbook
Socket file not found: Specify login_unix_socket parameter
Access denied: Usually socket file issue or credentials
Module not found: Install collection with ansible-galaxy
Google Search Tips:
Include "ansible" keyword with error message
Remove numbers, timestamps, specific IPs from error text
Example: "ansible unable to find mysql socket" (not full error with line numbers)
Community Collections:
Install command: ansible-galaxy collection install community.mysql
View installed collections: ansible-galaxy collection list
Collections provide:
o Enhanced modules
o Better documentation
o Common error solutions in Notes section
o Active community support
Instructor's Teaching Philosophy: "I did this purposefully, actually. Now, we have community
MySQL module also where all this is already mentioned... But I wanted you to fail on this one
and then fix it. But now you know, if you read properly, you won't even get into those
problems. But anyways, you will get into problem. Don't worry, it will happen."
DevOps Reality: "This will be your very regular work as a DevOps engineer. You need to
write playbook. There'll be different kinds of tasks. You need to find modules for your task,
use it. If it fails, you need to troubleshoot."
File Modules Importance: "These files modules will help you execute those tasks. So, I
recommend you take some time and go through all these modules one by one."
Solution Reusability: "You see you solve once, and then you can use it as many as times."
(After fixing socket file issue, same solution applies to mysql_user module)
Playbook Structure Reminder:
---
- name: Play name
  hosts: group_name
  become: yes
  tasks:
    - name: Install dependency
      ansible.builtin.package:   # generic package module; yum/apt also work
        name: package_name
        state: present
    - name: Use module requiring dependency
      module_name:
        parameter: value
        login_unix_socket: /path/to/socket
Documentation Navigation Pattern:
1. Ansible module index
2. Select category (Files, Database, Cloud, etc.)
3. Find specific module
4. Check Requirements section first
5. Check Parameters section (required vs optional)
6. Go to Examples section
7. Copy, paste, modify
8. Check Notes for common errors
Skills Summary from Lecture: "We just learned two modules, but from that are three
including copy. But with that, we have learned skills of finding the module, using it,
troubleshooting it, fixing it, and reusing it."
Teaching Strategy: The instructor creates a "productive struggle" learning experience by
having students encounter errors organically, then guiding them through authentic
troubleshooting (SSH to machine, search packages, Google errors, read Stack Overflow),
mirroring actual DevOps work. Only after solving through struggle does the instructor reveal:
"But now you know, if you read properly, you won't even get into those problems"—
teaching both prevention (reading docs) and cure (troubleshooting skills).
Transcript Summary: Ansible Configuration Files and Priority Levels
1. Main Topic/Purpose
This lecture explains Ansible's configuration system, including the four priority levels for
configuration files, why project-specific configuration is essential for team collaboration,
common configuration settings, and how to create and use a local ansible.cfg file in your
repository rather than relying on system-level defaults.
2. Key Points
Four Configuration Priority Levels (Highest to Lowest): Understanding this hierarchy
is critical for predictable Ansible behavior:
1. ANSIBLE_CONFIG environment variable (highest priority): export
ANSIBLE_CONFIG=/path/to/ansible.cfg
2. ./ansible.cfg (current working directory): Project-specific, version-controlled
3. ~/.ansible.cfg (hidden file in home directory): User-specific
4. /etc/ansible/ansible.cfg (global system file): Lowest priority, system-wide
defaults
The instructor emphasizes: "So mostly we use ansible.cfg file in the current directory...
everyone should have the same setting. So you will commit ansible.cfg in the repository
itself." Levels 1, 3, and 4 are system-level (inconsistent across team members), while level 2
is repository-specific (ensures consistency).
Always Use Project-Specific Configuration for Teams: The instructor strongly
advocates for ansible.cfg in repositories: "Being a DevOps, you will be writing Ansible
playbooks and you'll be doing version control. You'll be putting that in the repository,
which will be used by the entire team or everyone in the project. So everyone should
have the same setting." At the end: "But always, always, always you should have an
Ansible configuration file in the repository where you have your playbook. That, keep
in mind."
Common Configuration Settings to Know:
o host_key_checking: false (disable SSH fingerprint prompts for automation)
o inventory: Path to inventory file (eliminates need for -i flag every time)
o forks: Parallel execution limit (default 5, depends on control machine
resources)
o log_path: Enable execution logging (disabled by default)
o become: true (global privilege escalation, replaces per-playbook become: yes)
o become_method: sudo (how to escalate privileges)
o ask_pass: Prompt for password vs. using keys
These eliminate repetitive command-line flags and playbook directives.
Configuration File Structure Has Sections: Settings are organized into bracketed
sections:
o [defaults]: Most common settings (inventory, forks, log_path, etc.)
o [privilege_escalation]: become settings
o [ssh_connection]: SSH-specific options
o [inventory]: Inventory-related settings
o Others: [variables], [powershell], [winrm], etc.
The global file has "more than a thousand lines" with many sections. "Hash and semicolon
are for comment" (both # and ; work).
Don't Memorize, Reference When Needed: The instructor discourages rote learning
of all settings: "Don't waste in going through all of them. Whenever you require to
change something in Ansible, you want Ansible to do something like this instead of
that, right? Then check the configuration settings and make the changes." Use
documentation: [Link]/archive/ansible where clicking any setting shows
description and usage.
3. Action Items
Navigate to Exercise 7:
cd exercise7
Create Local ansible.cfg File:
1. Create file: vi ansible.cfg
2. Add [defaults] section:
   [defaults]
   host_key_checking = false
   inventory = ./inventory
   forks = 5
   log_path = /var/log/ansible.log
3. Add [privilege_escalation] section:
   [privilege_escalation]
   become = true
   become_method = sudo
   become_ask_pass = false
4. Save and quit
Fix Log File Permissions (Will Initially Fail):
1. Run the playbook (expect a warning): ansible-playbook [Link]
   o Error: "/var/log/ansible is not writeable" — the ubuntu user cannot create a file in
   /var/log (owned by root)
2. Create the log file with proper ownership:
   sudo touch /var/log/ansible.log
   sudo chown ubuntu:ubuntu /var/log/ansible.log
Test Configuration:
1. Run the playbook without the -i flag (inventory path now in config): ansible-playbook [Link]
   o Should execute without warnings
   o No need to specify inventory path
2. View log file: cat /var/log/ansible.log
3. Run with increased verbosity (more detailed logs): ansible-playbook -vv [Link]
   o Check log file again: cat /var/log/ansible.log
   o Should show more detailed output
Understand Global Configuration (Reference Only):
1. Switch to root: sudo -i
2. View global file: vim /etc/ansible/ansible.cfg
3. Note sections: [defaults], [privilege_escalation], [ssh_connection], etc.
4. Exit without changes (use local ansible.cfg instead)
Documentation Reference:
Google: "Ansible configuration file"
Visit: [Link]/archive/ansible
Browse available settings
Click individual settings for descriptions
Key Takeaway: Always commit ansible.cfg to the repository alongside playbooks for team
consistency.
4. Important Details
Use Case Example - Why Configuration Matters: "Ansible will connect to Linux machine by
using SSH on port 22. That's the default port for SSH... but for some reason, maybe security
reasons, you change the port number of SSH on your servers... to something else, maybe
2020. Now in this case, Ansible will try to access these machines on port 22 and it'll fail."
Complete ansible.cfg Example:
[defaults]
host_key_checking = false
inventory = ./inventory
forks = 5
log_path = /var/log/ansible.log
[privilege_escalation]
become = true
become_method = sudo
become_ask_pass = false
Settings Explanations:
host_key_checking = false
Disables SSH fingerprint verification prompt
Required for non-interactive automation
Previously set in global file: /etc/ansible/ansible.cfg
inventory = ./inventory
Relative path to inventory file in current directory
Eliminates need for -i inventory flag on every command
Note: "This is the path of my file, this is the setting name. Don't get confused."
forks = 5
Number of parallel host connections
Example: "Let's say you have a group of hosts, let's say web server, and that contains
20 host... you can mention Ansible should be connecting to all the machine at a time,
or five machine at a time, or one machine at a time."
Depends on control machine resources: CPU, RAM, network bandwidth
"Keep in mind this is not the limitation of Ansible, this is limitation of the control
machine where your Ansible is running."
Instructor sets to 5 despite only having 3 hosts: "I just have three hosts at the max. It
can anyways go with three machine at a time. You can mention the settings just to
learn."
log_path = /var/log/ansible.log
"By default Ansible does not store any log of execution, but if you want you can
mention a path and it's going to store all the output in the log file."
Location must be writable by executing user
Verbosity levels affect log detail: -v, -vv, -vvv, -vvvv
become = true
Global privilege escalation for all playbooks
"Now you remember in the playbook we mentioned become, yes. If you have
multiple playbooks, all the playbooks we need to mention that we can make this as a
global setting also."
Eliminates need for become: yes in every playbook
become_method = sudo
"Become means sudo in Linux"
Other methods exist (su, pbrun, etc.) but sudo is standard
become_ask_pass = false
"The EC2 hyphen user, that is going to do sudo, it's not going to ask any password for
it"
Assumes passwordless sudo (common in cloud environments)
Other Settings Mentioned (Not Configured):
force_colors
Controls colored output in playbook execution
Instructor's humor: "I mean, who doesn't like colors, right?"
ask_pass
"By default, Ansible will not ask password. It'll try to use the key or the password. If it
doesn't find, it will fail. But if you want Ansible to ask password to log into the target
machine, you can enable that."
Set to true to prompt for SSH password
debug
"By default it's not going to do the debug but if you want it to debug also while
running the playbook you can enable that."
Different from -v verbosity flags
Global Configuration File Details:
Location: /etc/ansible/ansible.cfg
Size: "More than a thousand lines"
Multiple sections: defaults, variables, powershell, inventory, winrm, ssh_connection,
sudo_become_plugin, su_become_plugin
Generated file (default file has minimal content)
Comment characters: # and ; (both work)
Documentation Navigation:
Search: "Ansible configuration file"
URL pattern: [Link]/archive/ansible
Interactive: Click settings for descriptions and usage examples
Priority Visualization:
ANSIBLE_CONFIG env variable (1st priority - highest)
./ansible.cfg (2nd priority - recommended)
~/.ansible.cfg (3rd priority)
/etc/ansible/ansible.cfg (4th priority - lowest)
Team Collaboration Rationale:
System-level configs (priorities 1, 3, 4): Different on each team member's machine
Repository config (priority 2): Version-controlled, consistent across team
"Everyone should have the same setting. So you will commit ansible.cfg in the
repository itself."
Command Simplification: Before ansible.cfg:
ansible-playbook -i inventory [Link] --become
After ansible.cfg:
ansible-playbook [Link]
(inventory path and become behavior now come from ansible.cfg)
Log File Ownership Issue:
Initial error: "/var/log/ansible is not writeable"
Cause: Ubuntu user creating file in root-owned directory
Solution: Create as root, change ownership to user
Command sequence:
sudo touch /var/log/ansible.log
sudo chown ubuntu:ubuntu /var/log/ansible.log
Verbosity and Logging:
Default execution: Standard output
-vv flag: More detailed logs
Logs capture all output shown in terminal
Useful for auditing and troubleshooting
Instructor's Philosophy: "Don't waste in going through all of them. Whenever you require to
change something in Ansible, you want Ansible to do something like this instead of that,
right? Then check the configuration settings and make the changes."
Critical Reminder (Emphasized Three Times): "But always, always, always you should have
an Ansible configuration file in the repository where you have your playbook. That, keep in
mind."
Teaching Approach: The instructor uses a "need-to-know" philosophy—showing essential
settings for common use cases rather than exhaustive coverage of 1000+ configuration
options. The deliberate log file permission failure teaches students that configuration
changes often have system-level implications requiring troubleshooting beyond just editing
the config file.
Transcript Summary: Ansible Variables - Types, Definition, and Usage
1. Main Topic/Purpose
This lecture provides a comprehensive introduction to Ansible variables, covering three main
types (custom variables, fact variables, runtime variables), multiple definition locations
(playbooks, group_vars, host_vars, roles), variable usage syntax, and the debug module for
printing/troubleshooting—with emphasis on understanding variable sources before diving
into external variable files in the next lecture.
2. Key Points
Three Types of Variables in Ansible:
1. Custom Variables: User-defined variables for reusable values (ports,
usernames, passwords, database names)
2. Fact Variables: Auto-generated by setup module during "gathering facts" task
(OS info, CPU cores, IP addresses, architecture)
3. Runtime Variables: Task output stored using register keyword for use in
subsequent tasks
The instructor clarifies: "Those were our own custom variables. We are defining them.
Ansible has also its own variables. The majority of its variable gets generated from the setup
module."
Multiple Variable Definition Locations (With Best Practices):
o In playbook (vars:): Simple but not recommended for production - "I'm
stressing on in because it's really not a good way"
o group_vars/all: Variables for ALL hosts in inventory
o group_vars/<groupname>: Variables for specific group (e.g., webservers)
o host_vars/<hostname>: Variables for specific host (e.g., web01)
o Roles (vars and defaults files): Will be covered later
The instructor emphasizes: "There are different places where you can define variables in
Ansible... I don't want you to memorize all this right now. We'll be doing this."
Variable Usage Syntax - Double Quotes with Double Curly Braces: The critical syntax
that differs from other languages:
"{{ variable_name }}"
The instructor notes: "I know. A little complicated. Not complicated. Little extra characters.
In Bash, we have just dollar and the variable name. Here, we have like this." This syntax is
mandatory when using variables as values in playbooks.
Fact Variables Provide Rich System Information: Auto-generated variables from
setup module include:
o ansible_os_family: OS family (RedHat, Debian)
o ansible_processor_cores: CPU core count
o ansible_kernel: Kernel version
o ansible_devices: Connected devices
o ansible_default_ipv4: IP, MAC address, gateway
o ansible_architecture: 32-bit vs 64-bit
Use cases: "We can use these variables to put its values in, let's say, configuration files or
make decisions based on host information like 64 bit, install this package, 32 bit, install the
other package."
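That decision-making use of fact variables can be sketched with a when: condition (a hedged sketch; the Apache package names are illustrative, not from the lecture):

```yaml
- name: Install Apache on RedHat-family hosts
  ansible.builtin.yum:
    name: httpd            # illustrative package
    state: present
  when: ansible_os_family == "RedHat"

- name: Install Apache on Debian-family hosts
  ansible.builtin.apt:
    name: apache2          # illustrative package
    state: present
  when: ansible_os_family == "Debian"
```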
Debug Module for Printing/Troubleshooting (Two Methods):
o Method 1: var: variable_name (prints variable only, no curly braces)
o Method 2: msg: "Text {{ variable_name }}" (custom message with variable)
Important note: "Mostly it is used for troubleshooting purpose only. That's why it's called a
'debug'. You really don't need to print messages from the playbook. Playbook anyways is
very verbose... If you write your playbook, if you name your task properly, you get proper
messages."
3. Action Items
Setup Exercise 8:
cp -r exercise7 exercise8
cd exercise8
Define Variables in Playbook:
1. Open playbook: vi [Link]
2. Add vars: section before tasks: section:
   ---
   - name: DBserver setup
     hosts: dbservers
     become: yes
     vars:
       dbname: electric
       dbuser: tesla
       dbpass: ac_current
     tasks:
       # ... existing tasks
Use Variables in Tasks:
3. Replace hardcoded values with variables:
   o Database name accounts → "{{ dbname }}"
   o Username vprofile → "{{ dbuser }}"
   o Password value → "{{ dbpass }}"
Example task modification:
- name: Create database
  community.mysql.mysql_db:
    name: "{{ dbname }}"
    state: present
    login_unix_socket: /var/lib/mysql/mysql.sock

- name: Create database user
  community.mysql.mysql_user:
    name: "{{ dbuser }}"
    password: "{{ dbpass }}"
    priv: "{{ dbname }}.*:ALL"
    state: present
    login_unix_socket: /var/lib/mysql/mysql.sock
Test Playbook Execution:
4. Run playbook: ansible-playbook [Link]
   o Shows changed status but limited variable visibility
5. Run with verbosity to see variable usage: ansible-playbook -vv [Link]
   o Should show "DB electric", "User tesla" in output
Add Debug Module for Printing:
6. Add debug tasks to print variables (two methods):
   Method 1 - Print variable directly:
   - name: Print DB name
     debug:
       var: dbname
   Method 2 - Print with custom message:
   - name: Print DB user with message
     debug:
       msg: "The DB user is {{ dbuser }}"
7. Execute playbook to see debug output: ansible-playbook [Link]
Register Task Output:
8. Capture module output in a variable:
   - name: Create database
     community.mysql.mysql_db:
       name: "{{ dbname }}"
       state: present
       login_unix_socket: /var/lib/mysql/mysql.sock
     register: dbout
9. Print the registered variable:
   - name: Print DB out variable
     debug:
       var: dbout
10. Execute to see full JSON output: ansible-playbook [Link]
    o Shows complete module response in JSON format
    o Useful for seeing what data is available from modules
Next Lecture:
Learn to define variables outside playbook (better practice)
Explore group_vars and host_vars directories
Understand variable precedence and organization
4. Important Details
Variable Syntax Rules:
In playbook definitions: No special syntax needed
vars:
  dbname: electric
When using variables: MUST use double quotes with double curly braces
name: "{{ dbname }}"
Exception in debug var: No curly braces when using var:
debug:
  var: dbname                       # Correct - no braces
In debug msg: Use curly braces inside message
debug:
  msg: "The name is {{ dbname }}"   # Correct - with braces
Variable Definition Locations (Complete List):
1. Playbook (vars: section)
2. group_vars/all (all hosts)
3. group_vars/<groupname> (specific group)
4. host_vars/<hostname> (specific host)
5. Roles (vars and defaults directories)
6. Inventory file (for connection variables like ansible_user, ansible_host)
Fact Variables Examples:
ansible_os_family: "RedHat" or "Debian"
ansible_processor_cores: Integer (e.g., 2, 4, 8)
ansible_kernel: String (e.g., "5.10.0-8-amd64")
ansible_devices: Dictionary of device information
ansible_default_ipv4: Dictionary with address, gateway, macaddress
ansible_architecture: "x86_64" or "i386"
When to Use Variables: "Things that come again and again in our scripts, in our code, or the
things that might be of reusability... I use this playbook in this project, I use it in another
project, things will change, right? So the properties, those changes, those things we use as
variable."
Setup Module Details:
Runs automatically as first task ("gathering facts")
"You don't need to run setup module"
Generates all fact variables
Can be disabled if not needed (covered in future lectures)
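Disabling fact gathering is a one-line play setting — a minimal sketch, ahead of the lecture that covers it:

```yaml
- name: Play that skips the implicit "gathering facts" task
  hosts: all
  gather_facts: false   # the setup module will not run; fact variables are unavailable
  tasks:
    - name: Quick connectivity check
      ansible.builtin.ping:
```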
Register Keyword Details:
"Register is in the same column as the module"
Indentation level: Same as module name
Syntax: register: variable_name
Stores complete JSON output from module
Variable persists for duration of playbook execution
JSON Output Structure:
All module outputs return JSON format
"By default suppressed" (not shown in standard output)
Contains: changed status, module-specific data, metadata
Access using registered variable: dbout
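Since every module result carries a changed key, the registered variable can also be narrowed to a single field (sketch assumes dbout was registered as described above):

```yaml
- name: Print only the changed flag from the registered output
  debug:
    var: dbout.changed   # boolean "changed" key is present in every module result
```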
Debug Module Usage:
Primary purpose: Troubleshooting
Two formats:
1. var: variable_name → prints variable value only
2. msg: "text" → prints custom message (can include variables)
Not required for normal operations: Playbooks are already verbose
Similar to: echo (Bash), print() (Python)
Task Naming Alternatives:
# Method 1 - Name then module
- name: Print variable
  debug:
    var: myvar

# Method 2 - Module directly with hyphen
- debug:
    var: myvar
Both work identically; naming provides better readability
Example Variables Defined:
dbname: electric (unusual name chosen deliberately - not "accounts")
dbuser: tesla (named after inventor Nikola Tesla)
dbpass: ac_current (alternating current reference)
Instructor's Note on Tesla: "Not the Tesla, Tesla company the Tesla inventor of alternate
current." (Clarifying Nikola Tesla vs. Tesla Motors)
Playbook Structure with Variables:
---
- name: Play name
  hosts: group_name
  become: yes
  vars:
    variable1: value1
    variable2: value2
  tasks:
    - name: Task using variable
      module_name:
        parameter: "{{ variable1 }}"
    - name: Debug example
      debug:
        var: variable1
Variable Reusability Principle: Variables enable:
Same playbook used across multiple projects
Easy value changes without modifying task logic
Centralized configuration management
Reduced duplication and errors
Verbosity Output Differences:
No flags: Shows task names, changed status
-vv: Shows actual values used ("DB electric", "User tesla")
Useful for verifying variable substitution
Complete Task Example with Register:
- name: Create database
  community.mysql.mysql_db:
    name: "{{ dbname }}"
    state: present
    login_unix_socket: /var/lib/mysql/mysql.sock
  register: dbout

- name: Print complete output
  debug:
    var: dbout
Common Use Case - Conditional Logic: "We can use it with conditions, decision making in
our playbook" - fact variables enable:
Install package X on 64-bit, package Y on 32-bit
Different configurations for RedHat vs. Debian
Adjust settings based on CPU cores or memory
Why Not Store Connection Info as Custom Variables: Inventory file variables (ansible_user,
ansible_host, ansible_ssh_private_key_file) are different from custom variables: "That
means if you want to define some variable for all the host... Here we are not talking about
the Ansible user, the login key note, not those."
Best Practice Teaser: "I'm stressing on in because it's really not a good way. We are going to
see how to define variable outside of the playbook also, which is a good practice."
Teaching Strategy: The instructor uses a "learn bad practice first, then improve" approach—
showing in-playbook variables (simpler to understand) with the caveat "it's really not a good
way," then promising to teach proper external variable organization in the next lecture. This
scaffolded learning helps students understand the mechanics before adding organizational
complexity.
Transcript Summary: Ansible Variable Precedence and Inventory-Based Variables
1. Main Topic/Purpose
This lecture demonstrates the complete variable precedence hierarchy in Ansible through
hands-on testing, showing how variables defined in different locations (playbook,
group_vars, host_vars, command line) override each other, with emphasis on understanding
that command-line variables have the highest priority and group_vars/all is the most
common real-world location for variable definitions.
2. Key Points
Complete Variable Precedence Hierarchy (Lowest to Highest):
1. group_vars/all (lowest priority - applies to all hosts)
2. group_vars/<groupname> (applies to specific group, e.g., webservers)
3. host_vars/<hostname> (applies to specific host, e.g., web02)
4. Playbook vars (vars: section in playbook)
5. Command-line (-e flag) (highest priority - overrides everything)
The instructor methodically demonstrates each level: "Host has the highest priority. If the
variable is defined at the host level that takes the higher priority, then the priority goes to
the group vars, group name file. And the last priority goes to the all file. And all this is
superseded by the playbook." Then reveals: "Or does it? There is also one more way of
passing the variable and that is through command line."
group_vars/all is the Real-World Standard: Despite showing all precedence levels,
the instructor emphasizes practical usage: "Group vars all file is the most
commonplace of defining variable, this file. So you also in the real time start from
that group was all file. And then you can use other files based on the requirement."
Playbook variables are "very rarely" used, and command-line variables are "only we
use sometimes for testing things."
Strict Directory and File Naming Requirements: The structure is not flexible—names
must be exact:
o Directory: group_vars (not group-vars or groupvars)
o File for all hosts: all (exactly, not All or ALL)
o Directory: host_vars (not host-vars)
o Group-specific files: Must match group name from inventory exactly
o Host-specific files: Must match hostname from inventory exactly
The instructor emphasizes: "And you have to create a file inside that with the name all, A L L,
all. Okay, no different name. It's the structure of defining variables."
Three Variable Types: Simple, List, Dictionary:
o Simple: variable: value (most common)
o List: Vertical format with hyphens, access via index variable[0]
o Dictionary: Key-value pairs, access via dot notation (dict.key) or bracket
notation (dict['key'])
Dictionary access already demonstrated in previous lecture with registered variables: "To
access the keys inside the dictionary you have to say variable name, which is the dictionary...
dot and the key name" (e.g., USR_out.name, USR_out.comment)
Testing Methodology Reveals Precedence: The instructor uses progressive testing by
defining the same variable in multiple locations with different values, then observing
which value is used. Example pattern:
o Define in group_vars/all → runs playbook → observes value
o Define same variable in playbook → runs playbook → observes playbook
value wins
o Comment out playbook → runs playbook → observes group_vars value
returns
o Add group_vars/webservers → observes group-specific value for web hosts
o Add host_vars/web02 → observes host-specific value for that single host
o Pass command-line flag → observes command-line overrides everything
3. Action Items
Exercise 8 - Initial Group Variables:
1. Continue in exercise8 (from previous lecture)
2. Create directory structure: mkdir group_vars
3. Create common variables file: vi group_vars/all
4. Define variables:
   dbname: Sky
   dbuser: Pilot
   dbpass: Aircraft
5. Test playbook execution: ansible-playbook [Link]
   o Observe: Still uses playbook variables (playbook has higher priority)
6. Comment out playbook variables:
   # vars:
   #   dbname: electric
   #   dbuser: tesla
   #   dbpass: ac_current
7. Re-run playbook: ansible-playbook [Link]
   o Observe: Now uses group_vars/all variables ("DB name is Sky", "DB user Pilot")
Exercise 9 - Complete Precedence Testing:
1. Copy exercise 8 to exercise 9:
   cp -r exercise8 exercise9
   cd exercise9
2. Remove database playbook and group_vars:
   rm [Link]
   rm -r group_vars
3. Create new precedence testing playbook: vi vars_precedence.yaml
   ---
   - name: Variable precedence test
     hosts: all
     become: yes
     vars:
       USRNM: play_user
       comment: variable from playbook
     tasks:
       - name: Create user
         ansible.builtin.user:
           name: "{{ USRNM }}"
           comment: "{{ comment }}"
         register: USR_out

       - name: Print username
         debug:
           var: USR_out.name

       - name: Print comment
         debug:
           var: USR_out.comment
4. Execute initial test: ansible-playbook vars_precedence.yaml
   o Observe: Uses playbook variables
Add Group Variables (All Hosts):
1. Create group_vars structure:
   mkdir group_vars
   vi group_vars/all
   USRNM: common_user
   comment: variable from group_vars all file
2. Run playbook (variables still in playbook): ansible-playbook vars_precedence.yaml
   o Observe: Still uses playbook variables (playbook wins)
3. Comment out playbook variables:
   # vars:
   #   USRNM: play_user
   #   comment: variable from playbook
4. Re-run playbook: ansible-playbook vars_precedence.yaml
   o Observe: All hosts use group_vars/all values
Add Group-Specific Variables:
1. Create webservers group file: vi group_vars/webservers
   USRNM: web_group
   comment: variable from group_vars webservers file
2. Run playbook: ansible-playbook vars_precedence.yaml
   o Observe: web01 and web02 use webservers values
   o Observe: db01 still uses group_vars/all values
Add Host-Specific Variables:
1. Create host_vars structure:
   mkdir host_vars
   vi host_vars/web02
   USRNM: web02_user
   comment: variable from host_vars web02 file
2. Run playbook: ansible-playbook vars_precedence.yaml
   o Observe: web02 uses its host-specific values
   o Observe: web01 uses webservers group values
   o Observe: db01 uses group_vars/all values
Test Playbook Variable Precedence:
1. Uncomment playbook variables:
   vars:
     USRNM: play_user
     comment: variable from playbook
2. Run playbook: ansible-playbook vars_precedence.yaml
   o Observe: All hosts use playbook values (overrides all external files)
Test Command-Line Variables (Highest Priority):
1. Run with command-line variables:
   ansible-playbook -e USRNM=CLI_user -e comment=CLI_comment vars_precedence.yaml
   o Observe: All hosts use CLI values (overrides even playbook variables)
Review Documentation:
1. Google: "Ansible using variables"
2. Review variable types:
   o Simple variables
   o List variables (access via variable[index])
   o Dictionary variables (access via dict.key or dict['key'])
3. Review vars_files option for importing external variable files
4. Important Details
Directory Structure Requirements:
exercise9/
├── ansible.cfg
├── inventory
├── vars_precedence.yaml
├── group_vars/
│   ├── all          # Variables for all hosts
│   └── webservers   # Variables for webservers group
└── host_vars/
    └── web02        # Variables for web02 host only
Exact Naming Rules:
Directory names: group_vars, host_vars (underscore, not hyphen)
File for all hosts: all (lowercase, no extension)
Group files: Match group name from inventory exactly
Host files: Match hostname from inventory exactly
"Exactly this name" and "no different name" emphasized multiple times
Variable Values Used in Testing:
Location                USRNM         comment
--------                -----         -------
Playbook                play_user     variable from playbook
group_vars/all          common_user   variable from group_vars all file
group_vars/webservers   web_group     variable from group_vars webservers file
host_vars/web02         web02_user    variable from host_vars web02 file
Command line            CLI_user      CLI_comment
Precedence Testing Results:
Test 1 - All locations defined:
db01: Uses group_vars/all (no group or host file)
web01: Uses group_vars/webservers (no host file)
web02: Uses host_vars/web02 (most specific)
Test 2 - Playbook vars uncommented:
All hosts: Use playbook variables (overrides external files)
Test 3 - Command-line variables:
All hosts: Use CLI variables (overrides everything)
Command-Line Variable Syntax:
ansible-playbook -e variable=value -e var2=value2 [Link]
Multiple -e flags for multiple variables
"Very, very rarely used"
"Only we use sometimes for testing things"
List Variable Examples (from Documentation):
region:
  - northeast
  - southeast
  - midwest
Access: region[0] → "northeast", region[1] → "southeast", region[2] → "midwest"
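Accessing a list element inside a task uses the same curly-brace syntax (a minimal sketch built on the region list above):

```yaml
- name: Print the first region
  debug:
    msg: "First region is {{ region[0] }}"   # renders as "First region is northeast"
```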
Dictionary Variable Examples:
foo:
  field1: 1
  field2: 2
Access: foo.field1 → 1, foo.field2 → 2. Alternative: foo['field1'] → 1 (but dot notation is "most
commonly used")
Accessing Registered Output:
register: USR_out
# Access nested values:
USR_out.name # Username
USR_out.comment # User comment
"To access the keys inside the dictionary you have to say variable name, which is the
dictionary... dot and the key name"
vars_files Option:
vars_files:
- path/to/your/variable_file.yml
Imports variables into playbook
"This will have higher priority" (treated as playbook variables)
Useful for organizing variables in separate files while maintaining playbook
precedence
Quoting Variables: Documentation warning: "Without quotes it'll give you this error" Always
use:
name: "{{ variable }}" # Correct
Not:
name: {{ variable }} # Will error
Real-World Best Practices: "Mostly it'll be outside like group vars, host vars. Group vars all
file is the most commonplace of defining variable... So you also in the real time start from
that group was all file."
Priority Summary Diagram:
Command line -e flag (HIGHEST)
Playbook vars:
host_vars/<hostname>
group_vars/<groupname>
group_vars/all (LOWEST)
Comment Technique: In YAML, use # to comment:
# vars:
# variable: value
Simply "put hash in front of them"
Teaching Repetition Note: "I know I'm repeating the same exercise, but I want this, want to
be very clear." - The instructor deliberately repeats the testing pattern to reinforce
understanding of precedence through hands-on observation.
Why Different Values: "These are all the same variables, but their values are different" - This
is intentional to make it obvious which location Ansible is reading from during each test.
Usage Pattern: "Variables that come again and again" or "things that might be of reusability"
should be externalized to group_vars or host_vars rather than hardcoded in playbooks.
Inventory-Based Variables Clarification: "This is inventory based variables you can say so all
means all the variables for these host" - Variables in group_vars and host_vars are inventory-
based (tied to hosts/groups in inventory file).
Teaching Strategy: The instructor uses a "show, don't tell" approach for precedence—rather
than simply listing the priority order, they create the same variable in multiple locations with
intentionally different values (Sky vs. electric, common_user vs. web_group vs. web02_user),
then run the playbook after each addition to let students observe which value "wins." This
hands-on discovery makes the abstract concept of precedence concrete and memorable.
The final command-line test serves as the "gotcha" moment: "Or does it?" revealing the
ultimate override capability.
Transcript Summary: Ansible Fact Variables and the Setup Module
1. Main Topic/Purpose
This lecture explains Ansible's fact variables—runtime variables automatically generated by
the setup module that provide rich host information (OS, CPU, memory, IP addresses)—
demonstrating how to view fact variables, disable gathering when not needed, access nested
dictionary/list structures, and solve the practical problem of managing mixed OS
environments (CentOS + Ubuntu) using host-specific variables.
2. Key Points
Fact Variables are Auto-Generated Runtime Information: The "gathering facts" task
that appears at the start of every playbook execution runs the setup module, which
collects comprehensive host information in JSON format. The instructor explains:
"Like you can have the operating system name, you can have processor cores, kernel
versions, Ansible devices, the connected devices, IP address, MAC address,
architecture. So these are some examples of fact variables." These variables are
available during playbook runtime but "doesn't really use it unless we want to use
those variables."
Disable gather_facts When Not Needed: Since fact gathering executes on every
playbook run even when variables aren't used, it adds unnecessary overhead. The
instructor demonstrates: "If you want, we can disable it also, mostly we don't use it.
So if we are not using it, you can disable it." Simply add gather_facts: false before the
tasks section. After disabling, the "gathering facts" task disappears from output. If
you then try to reference fact variables, you'll get "variable is not defined" error—
proving facts aren't available without gathering.
Fact Variables Follow Nested Structure (Dictionary → Dictionary/List): The setup
module output is "a giant dictionary" with complex nesting. The instructor
emphasizes understanding the structure: "The whole output is basically a dictionary,
a giant dictionary in that you have keys like ansible facts. And its value is another
dictionary, and it also has so many keys in it. And some key value is a list, you see the
square bracket. And some key values are another dictionary, see those curly braces."
Access patterns:
o Simple: ansible_distribution (string like "CentOS" or "Ubuntu")
o Nested dictionary: ansible_memory_mb.real.free (dot notation through levels)
o List elements: ansible_processor[2] (bracket notation with index)
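The three access patterns map directly onto debug tasks, assuming facts have been gathered:

```yaml
tasks:
  - name: Simple string fact
    debug:
      var: ansible_distribution          # e.g. "CentOS" or "Ubuntu"

  - name: Nested dictionary fact (dot notation)
    debug:
      var: ansible_memory_mb.real.free   # free RAM in MB

  - name: List element fact (bracket notation)
    debug:
      var: ansible_processor[2]          # third element, zero-indexed
```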
Host Variables Solve Mixed OS Environment Issues: When adding Ubuntu instance
(web03) to CentOS-only infrastructure, ping failed with "permission denied" because
Ubuntu AMI uses ubuntu user while CentOS uses ec2-user. The instructor explains
the solution: "But remember what we learned. Host variable takes higher priority...
Such cases we should have their own separate variables. The host variables will take
higher priority. So it'll check first whether the host has those values or not. If it's not,
then only it'll go for global."
Practical Use Cases for Fact Variables: Beyond just printing, fact variables enable
intelligent decision-making. The instructor provides examples: "You can use this in
your playbook for various other purpose, like condition to check whether RAM is free
or not, and then decide the task should get executed or not. You can use it to name
files, push the content into the file." Conditional logic based on facts will be covered
in next lecture. Additional use mentioned: "Ansible date and time, you can use it to
take backup of files, right."
3. Action Items
Continue in Exercise 9:
cd vprofile/exercise9
Test Existing Playbook to See gather_facts:
ansible-playbook vars_precedence.yaml
Observe: First task is "TASK [Gathering Facts]"
This runs automatically on every playbook execution
Disable gather_facts:
1. Edit playbook: vi vars_precedence.yaml
2. Add before tasks section: gather_facts: false
3. Re-run playbook: ansible-playbook vars_precedence.yaml
   o Observe: "Gathering Facts" task no longer appears
View All Fact Variables via Ad Hoc Command:
ansible -m setup web01 -i inventory
Output: Massive JSON dictionary with all facts
Scroll through to see structure
Create Playbook to Print Fact Variables:
1. Create new playbook: vi print_facts.yaml
2. Initial playbook content:
---
- name: Print facts
  hosts: all
  tasks:
    - name: Print OS name
      debug:
        var: ansible_distribution
3. Execute playbook: ansible-playbook print_facts.yaml
   o Shows OS name for each host
Test What Happens Without gather_facts:
1. Add gather_facts: false to playbook
2. Execute again: ansible-playbook print_facts.yaml
   o Error: "variable is not defined"
   o Proves fact variables require gathering
3. Remove or comment out the disable line: # gather_facts: false
Add Ubuntu EC2 Instance:
1. Launch new instance:
   o Name: vprofile-web03
   o OS: Ubuntu 22.04
   o Instance Type: t2.micro
   o Key Pair: client_key (same as others)
   o Security Group: client-SG (existing)
   o Launch instance
2. Copy private IP address
Update Inventory File:
1. Edit inventory: vi inventory
2. Add web03 host:
web03:
  ansible_host: <web03-private-IP>
  # ansible_user: ubuntu will be added via host_vars instead
3. Add to webservers group:
webservers:
  hosts:
    web01:
    web02:
    web03:   # Add this line
Test Connection (Will Fail Initially):
ansible -m ping all -i inventory
web01, web02: Success
web03: "Failed to connect to host, permission denied"
Reason: Using ec2-user (from group_vars) but Ubuntu needs ubuntu user
Fix with Host Variables:
1. Create host variables file: vi host_vars/web03
2. Add content:
ansible_user: ubuntu
3. Test connection again: ansible -m ping all -i inventory
   o All hosts: Success (including web03)
Print OS Distribution for All Hosts:
ansible-playbook print_facts.yaml
web01, web02: "CentOS" (or specific CentOS version)
web03: "Ubuntu"
Add Task to Print Nested Dictionary Value:
1. View setup output to find nested structure:
ansible -m setup web01 -i inventory | grep -A 10 ansible_memory_mb
2. Add task to playbook:
- name: Print RAM memory
  debug:
    var: ansible_memory_mb.real.free
3. Execute: ansible-playbook print_facts.yaml
   o Shows free RAM in MB for each host
Add Task to Print List Element:
1. View setup output for list structure:
ansible -m setup web01 -i inventory | grep -A 5 ansible_processor
   o Shows processor as list with multiple elements
2. Add task to playbook:
- name: Print processor name
  debug:
    var: ansible_processor[2]
3. Execute: ansible-playbook print_facts.yaml
   o Shows specific processor information
Practice Exercise:
Explore different fact variables from setup output
Practice accessing nested dictionaries and lists
Print various system information (IP addresses, kernel version, etc.)
Understand JSON structure for future conditional logic use
Next Lecture:
Learn decision-making using fact variables
Understand conditional task execution based on system state
4. Important Details
Setup Module Output Structure:
"ansible_facts": {
    "ansible_architecture": "x86_64",
    "ansible_distribution": "CentOS",
    "ansible_memory_mb": {
        "real": {
            "free": 512,
            "total": 1024
        }
    },
    "ansible_processor": [
        "string1",
        "string2",
        "processor_name"
    ],
    "ansible_default_ipv4": {
        "address": "172.31.x.x",
        "gateway": "172.31.x.1",
        "macaddress": "02:xx:xx:xx:xx:xx"
    }
}
Common Fact Variables:
ansible_architecture: System architecture (e.g., "x86_64", "i386")
ansible_distribution: OS name (e.g., "CentOS", "Ubuntu", "Debian")
ansible_kernel: Kernel version
ansible_processor: List of processor information
ansible_processor_cores: Number of CPU cores
ansible_memory_mb: Dictionary with RAM information
ansible_devices: Connected devices information
ansible_date_time: Current date/time dictionary
ansible_default_ipv4: Primary IPv4 configuration
ansible_default_ipv6: Primary IPv6 configuration
ansible_bios_date: BIOS date
Important Clarification: "Now here the word ansible does not mean the ansible control
machine. This is all information about web zero one." - The ansible_ prefix refers to facts
about the target host, not the Ansible control machine.
Disabling gather_facts Syntax:
---
- name: Play name
  hosts: all
  gather_facts: false   # Add this line before tasks
  tasks:
    # ... tasks
Position: After hosts:, before tasks:
Ad Hoc Setup Command:
ansible -m setup <hostname> -i inventory
Runs setup module directly
Shows complete JSON output
Useful for exploring available variables
Accessing Nested Structures:
Nested Dictionary (Dot Notation):
ansible_memory_mb.real.free
Equivalent to: dictionary["ansible_memory_mb"]["real"]["free"]
List Element (Bracket Notation):
ansible_processor[2]
Accesses third element (zero-indexed)
Mixed OS User Configuration:
Problem: CentOS uses ec2-user, Ubuntu uses ubuntu
Solution Structure:
group_vars/all (or webservers):
ansible_user: ec2-user # Default for most hosts
host_vars/web03:
ansible_user: ubuntu # Override for Ubuntu host
Priority: Host variables override group variables (as covered in previous lecture)
Error When gather_facts Disabled:
TASK [Print OS name] ***
fatal: [web01]: FAILED! => {"msg": "The task includes an option with an undefined variable.
The error was: 'ansible_distribution' is undefined"}
This proves fact variables only exist when gathering is enabled.
Use Cases for Fact Variables:
1. Conditional Execution:
o Check RAM before running memory-intensive tasks
o Different commands for different OS distributions
o Architecture-specific package installations
2. File Naming:
o Include hostname in backup files
o Timestamp files using ansible_date_time
o OS-specific configuration file names
3. Content Generation:
o Populate configuration files with IP addresses
o Insert system info into monitoring configs
o Generate reports with system specifications
4. Decision Making:
o "Condition to check whether RAM is free or not, and then decide the task
should get executed or not"
o Will be covered in next lecture on conditionals
Complete print_facts.yaml Example:
---
- name: Print facts
  hosts: all
  # gather_facts: false   # Commented out - we need facts
  tasks:
    - name: Print OS name
      debug:
        var: ansible_distribution
    - name: Print RAM memory
      debug:
        var: ansible_memory_mb.real.free
    - name: Print processor name
      debug:
        var: ansible_processor[2]
JSON Format Note: "This is in JSON format" - Setup module returns structured JSON that
Ansible parses into usable variables. Understanding JSON structure from previous lectures
helps interpret this output.
Dictionary vs. List Identification:
Curly braces {}: Dictionary/object
Square brackets []: List/array
Example: "ansible_processor": [...] → List
Example: "ansible_memory_mb": {...} → Dictionary
Instructor's Advice: "In this whole lecture, I'm showing you how to access the variables, the
fact variables... This is a very good exercise" - Practice is emphasized over memorization.
Real-World Scenario: The mixed OS environment (CentOS + Ubuntu) represents common
real-world situations where infrastructure isn't homogeneous. Using host_vars to handle
differences is the proper Ansible pattern.
Performance Consideration: "Mostly we don't use it. So if we are not using it, you can
disable it." - Gathering facts adds overhead to every playbook run. If not needed, disable for
faster execution.
Backup File Use Case: "Ansible date and time, you can use it to take backup of files" -
Example of practical application:
backup_file: "/backup/{{ ansible_hostname }}_{{ ansible_date_time.date }}.tar.gz"
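One way to apply that pattern, sketched with the archive module (assuming the archive module is available; the source path is illustrative):

```yaml
- name: Archive a config directory into a fact-named backup file
  archive:
    path: /etc/myapp   # illustrative source directory
    dest: "/backup/{{ ansible_hostname }}_{{ ansible_date_time.date }}.tar.gz"
```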
Teaching Strategy: The instructor uses an authentic problem (adding Ubuntu instance that
fails to connect) as a teaching moment rather than pre-configuring everything correctly. This
"productive failure" demonstrates why host variables exist and how variable precedence
solves real infrastructure heterogeneity, making the abstract concept of host_vars concrete
through troubleshooting an actual error. The JSON structure exploration builds from simple
(string) to complex (nested dictionary, list indexing), preparing students for conditional logic
in the next lecture.
Transcript Summary: Conditional Execution (when) in Ansible Playbooks
1. Main Topic/Purpose
This lecture introduces conditional task execution in Ansible using the when clause,
demonstrating how to manage multi-OS environments (CentOS + Ubuntu) by executing OS-
specific tasks only when appropriate conditions are met, using the NTP service provisioning
as a practical example while previewing upcoming topics (loops, templates, handlers, roles).
2. Key Points
Conditional Execution with when Clause Prevents Cross-OS Failures: Without
conditions, OS-specific modules (yum for CentOS, apt for Ubuntu) would fail on
incompatible systems. The instructor explains the problem: "This will get executed on
all the hosts and this will be also getting executed on all the hosts. So this is going to
fail. The first Yum task will fail for the zero three, which is Ubuntu, and this is going to
fail for all the other instances except for Ubuntu." Solution: Use when:
ansible_distribution == "CentOS" to execute tasks only when the condition matches.
Syntax: when: Goes at Same Indentation as Module Name: The placement is critical
—when must be at the same level as the module name, not nested under it. The
instructor notes: "You have to give it in the same column as the module name, and
you have to give a condition." Use double equals (==) for equality comparison (single
= is an error that the instructor deliberately encounters and fixes during execution).
Fact Variables Drive Conditional Logic: Use ansible_distribution to determine OS
type. The instructor emphasizes: "This is the fact variable. We have seen this in
previous lectures." Common values: "CentOS" (capital C and S), "Ubuntu" (capital U).
Facts enable intelligent decision-making: execute yum tasks when CentOS, apt tasks
when Ubuntu, without manual intervention.
Ubuntu apt Module Requires update_cache: yes: This is a critical OS-specific
requirement. The instructor encounters and solves this: "Actually the package is
available, but thing is the Ubuntu machine, we need to run apt update before we
install the package... if you put this to yes, then it is going to run first apt update and
then apt install." Without this, package installations fail with "no package matching"
errors even when packages exist.
Complex Conditions Use Logical Operators (AND/OR): Beyond simple equality,
conditions support:
o AND operator (the and keyword, or the YAML list format): Both conditions must be true
Example: ansible_distribution == "CentOS" and
ansible_distribution_major_version == "6"
o OR operator (the or keyword): Either condition can be true
o Comparison operators: >=, <=, >, <, ==, !=
o Dictionary access: ansible_facts['distribution'] (alternative to
ansible_distribution)
3. Action Items
Setup Exercise 10:
cp -r exercise9 exercise10
cd exercise10
rm vars_precedence.yaml print_facts.yaml # Remove unnecessary files
Test Connectivity:
ansible -m ping all -i inventory
Verify all hosts (web01, web02, web03, db01) respond
Create Provisioning Playbook:
1. Create file: vi provisioning.yaml
2. Write initial playbook structure:
---
- name: Provisioning servers
  hosts: all
  become: yes
  tasks:
Add CentOS NTP Installation Task:
- name: Install NTP agent on CentOS
  yum:
    name: chrony
    state: present
  when: ansible_distribution == "CentOS"
Package name: chrony (NTP implementation for CentOS 9)
Module: yum (CentOS package manager)
Add Ubuntu NTP Installation Task:
- name: Install NTP agent on Ubuntu
  apt:
    name: ntp
    state: present
    update_cache: yes
  when: ansible_distribution == "Ubuntu"
Package name: ntp (different from CentOS)
Module: apt (Ubuntu package manager)
Critical: update_cache: yes (runs apt update first)
Add CentOS Service Management Task:
- name: Start service on CentOS
  service:
    name: chronyd
    state: started
    enabled: yes
  when: ansible_distribution == "CentOS"
Service name: chronyd (note the 'd' suffix)
Add Ubuntu Service Management Task:
- name: Start service on Ubuntu
  service:
    name: ntp
    state: started
    enabled: yes
  when: ansible_distribution == "Ubuntu"
Service name: ntp (no 'd' suffix)
Note: Ubuntu auto-starts/enables services on installation
Test with Dry Run:
ansible-playbook provisioning.yaml -C
Expected Error (Deliberate Learning Moment):
Error: Using single equals = instead of double equals ==
Fix: Change all when: ansible_distribution = "..." to when: ansible_distribution == "..."
Four locations to fix (two package tasks, two service tasks)
Second Dry Run After Fix:
ansible-playbook provisioning.yaml -C
Should see "skipping" for tasks where conditions don't match
May see error: "no package matching NTP" for Ubuntu
Execute Actual Playbook (Remove -C):
ansible-playbook provisioning.yaml
Now actually runs apt update on Ubuntu
Installs missing packages
Starts/enables services
Review Output:
Tasks with unmet conditions show "skipped"
Only applicable tasks show "changed" or "ok"
Example: yum tasks skip for web03 (Ubuntu), apt tasks skip for CentOS hosts
Review Documentation:
1. Google: "Ansible when condition" or "Ansible conditionals"
2. Study examples of:
o Basic conditions
o AND operations (both conditions true)
o OR operations (either condition true)
o Comparison operators
o Dictionary access for fact variables
ChatGPT Alternative Approach:
1. Query ChatGPT: "Ansible playbook to install chrony on CentOS and NTP on Ubuntu.
Also start and enable chrony and NTP service."
2. Review generated playbook
3. Compare with hand-written version
4. Modify as needed
Instructor's Recommendation: "Instead of spending time too much on chat gpt, I
recommend you better check the documentation, read the documentation, look at the
condition different ways of giving condition."
Next Lecture Preview:
Will cover loops in Ansible
Continue with NTP service provisioning series
4. Important Details
Complete Working Playbook:
---
- name: Provisioning servers
  hosts: all
  become: yes
  tasks:
    - name: Install NTP agent on CentOS
      yum:
        name: chrony
        state: present
      when: ansible_distribution == "CentOS"
    - name: Install NTP agent on Ubuntu
      apt:
        name: ntp
        state: present
        update_cache: yes
      when: ansible_distribution == "Ubuntu"
    - name: Start service on CentOS
      service:
        name: chronyd
        state: started
        enabled: yes
      when: ansible_distribution == "CentOS"
    - name: Start service on Ubuntu
      service:
        name: ntp
        state: started
        enabled: yes
      when: ansible_distribution == "Ubuntu"
CentOS vs. Ubuntu Differences:
Aspect              | CentOS       | Ubuntu
Package Manager     | yum          | apt
NTP Package         | chrony       | ntp
Service Name        | chronyd      | ntp
Pre-install Update  | Not required | update_cache: yes required
Auto-enable Service | No           | Yes (on package install)
Condition Syntax Rules:
Placement: Same indentation as module name
Comparison: Use == (not =)
Case sensitivity: "CentOS", "Ubuntu" (capital letters matter)
No quotes around when: keyword itself
Quotes required for string values in comparison
Common Mistakes (Instructor Demonstrates):
1. Single equals: when: ansible_distribution = "CentOS" ✗
o Correct: when: ansible_distribution == "CentOS" ✓
2. Missing update_cache: apt tasks fail without it on Ubuntu
3. Wrong capitalization: "centos" vs. "CentOS" (must match fact variable exactly)
Complex Condition Examples (from Documentation):
AND Operation (Both Must Be True):
when: ansible_distribution == "CentOS" and ansible_distribution_major_version == "6"
OR Operation (Either Can Be True):
when: (ansible_distribution == "CentOS" and ansible_distribution_major_version == "6") or
(ansible_distribution == "Debian" and ansible_distribution_major_version == "7")
YAML List Format for AND:
when:
- ansible_distribution == "CentOS"
- ansible_distribution_major_version == "6"
"Both needs to be true, then only it'll be true"
Dictionary Access Alternative:
when: ansible_facts['distribution'] == "CentOS"
"You can give because Ansible underscore facts is a dictionary, in that you have a key called
distribution"
Comparison Operators Available:
== (equals)
!= (not equals)
> (greater than)
< (less than)
>= (greater than or equal)
<= (less than or equal)
Example with Comparison:
when: ansible_memory_mb.real.free >= 2048
Dry Run (-C flag):
Purpose: "Check" mode without making actual changes
Shows what would happen
Catches syntax errors
May not catch runtime errors (like missing apt cache update)
Status Messages:
skipping: Condition not met, task not executed
changed: Task executed and made changes
ok: Task executed, no changes needed (idempotent)
failed: Task execution failed
Why Ubuntu Auto-Enables Services: "In Ubuntu, any service, when you install... it is going to
automatically start and enable" - This is Ubuntu's default behavior, different from CentOS
which requires explicit enabling.
Service Module Options:
state: started - Ensure service is running
state: stopped - Ensure service is stopped
state: restarted - Restart service
enabled: yes - Enable service at boot
enabled: no - Disable service at boot
Upcoming Topics Preview: "We'll be learning decision making in our playbook, we'll see
loops, templates for configurations or dynamic configurations. We'll see handlers and then
we'll see Ansible rules [roles]."
Instructor's Philosophy on ChatGPT: "Instead of spending time too much on chat gpt, I
recommend you better check the documentation" - Emphasizes understanding over code
generation, though acknowledges ChatGPT utility.
ChatGPT Effectiveness Depends on Clarity: "If you write the proper text, right? That means
you know what you have to do, then you get very close code, very close playbook" - You
must understand requirements to generate useful prompts.
Documentation Benefits: "Look at the condition different ways of giving condition. Those
are very common use cases that we will need as a DevOps engineer."
General Provisioning Pattern: "Any service provisioning that you have to do, any server
provisioning you have to do, how would you do it in general?" - NTP is teaching example;
principles apply to any service.
Idempotency Observation: "In CentOS, chrony was already installed and chrony service was
already started and enabled" - Shows Ansible's idempotent nature; already-configured items
show "ok" not "changed".
Teaching Strategy: The instructor uses a "problem-first" approach—showing what breaks
(tasks executing on wrong OS) before introducing the solution (when clause), making the
need for conditionals obvious rather than abstract. The deliberate syntax error (single vs.
double equals) teaches debugging skills and reinforces that even experienced developers
make mistakes. The progression from simple conditionals to previewing complex AND/OR
operations provides a learning path from immediate needs to advanced capabilities. The
ChatGPT discussion balances modern AI-assisted development with fundamental
understanding: use tools, but know what you're doing.
Comprehensive Summary: Ansible Loops Training
1. Main Topic/Purpose
This lecture (Exercise 11) teaches how to implement loops in Ansible playbooks to
efficiently install multiple packages without repeating tasks. It demonstrates moving from
installing a single package to handling multiple packages using the loop keyword.
2. Key Points
Loop Basics
Problem solved: Instead of copying tasks multiple times to install 2, 5, 10, or 20
packages, loops allow one task to handle all installations
Implementation: Add the loop keyword below the task with a list of items
Loop variable: Use {{ item }} (exact name required) as the dynamic variable that
changes with each iteration
Two Loop Types
loop: Modern, preferred method with more functionality
with_items: Older syntax, similar functionality but less flexible
Loop Capabilities
Simple lists: Loop through strings (package names, usernames, etc.)
Dictionary variables: Pass multiple key-value pairs per iteration using syntax like
{{ item.name }} and {{ item.groups }}
Complex loops: Can retry tasks until conditions are met (advanced use cases)
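The retry-style behavior mentioned above uses until/retries/delay rather than loop; a sketch (the health-check URL is illustrative):

```yaml
- name: Wait for a service endpoint to respond
  uri:
    url: http://localhost:8080/health   # illustrative endpoint
  register: health_check
  until: health_check.status == 200
  retries: 5   # try up to 5 times
  delay: 10    # wait 10 seconds between attempts
```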
3. Action Items
For Students:
1. Test the provided loop example with package installation
2. Add more tasks using loops (suggested: create multiple users in the playbook)
3. Experiment with loop variations before the next lecture
4. Explore Ansible loop documentation for advanced features
5. Optional: Use ChatGPT to generate playbook examples with loops
4. Important Details & Code Examples
Basic Loop Implementation for YUM (RHEL-based systems)
- name: Install multiple packages
  yum:
    name: "{{ item }}"
    state: present
  loop:
    - chrony
    - wget
    - git
    - zip
    - unzip
Basic Loop Implementation for APT (Debian-based systems)
- name: Install multiple packages
  apt:
    name: "{{ item }}"
    state: present
  loop:
    - ntp
    - wget
    - git
    - zip
    - unzip
Key Implementation Rules
Placement: The loop keyword must be aligned in the same column as the module
name
When using conditions: Place the loop below the when condition
Variable syntax: Always use double curly braces: "{{ item }}"
Dictionary Loop Example (from documentation)
- name: Add multiple users
  user:
    name: "{{ item.name }}"
    groups: "{{ item.groups }}"
  loop:
    - name: testuser1
      groups: wheel
    - name: testuser2
      groups: users
Execution Results
When the playbook ran on host web03:
Packages ntp, wget, and git were already installed (skipped)
Package zip was not installed, so it was installed during that iteration
The task ran 5 times total (once per package in the list)
Notable Quote
"I love this feature 'cause I've used it many times... mostly we use this one only and the max,
the dictionary variables."
Additional Resources Mentioned
Ansible official documentation on loops
ChatGPT for playbook generation
Google searches for specific loop examples
Focus on basic loops first, explore complex loops when needed
Comprehensive Summary: Ansible File Operations & Template Module Training
1. Main Topic/Purpose
This lecture (Exercise 12) covers file operations in Ansible, focusing on the copy, file, and
template modules. The primary objective is to deploy and manage NTP/chrony configuration
files across CentOS and Ubuntu systems using templating for dynamic configuration
management.
2. Key Points
File Module Categories in Ansible
Archive: Archive files
Blockinfile: Add or remove content from files
Copy: Push files to target machines
Fetch: Retrieve files from remote machines (opposite of copy)
File: Manage file properties (permissions, ownership), create directories
Template: Intelligent file deployment with Jinja2 templating support
Copy Module vs Template Module
Copy Module: Takes a file and directly dumps it in the target location without any
processing
Template Module: Reads the file, processes Jinja2 templates (variables, conditions,
loops), extracts actual content, then pushes to target
When to use Template: When you need dynamic configuration files with variables or
conditions
Jinja2 Templating Benefits
Define variables once in group_vars/all file
Use variables across multiple configuration files
Change values in one place to affect all configuration files
Uses the same syntax as playbook variables: {{ variable_name }}
Problem Identified (To Be Solved in Next Lecture)
Services restart unnecessarily even when configuration hasn't changed
Adding unrelated tasks (like creating directories) triggers service restarts
Issue: Not production-ready behavior - can cause disruptions
Solution: Will be addressed using "handlers" in the next lecture
3. Action Items
Immediate Tasks:
1. Problem to address in next lecture: Fix unnecessary service restarts using handlers
2. Continue to next lecture on handlers
Practice Suggestions:
Experiment with different file modules
Practice creating templates with variables
Try using conditions and loops in templates
4. Important Details & Code Examples
Creating a Banner File with Copy Module
- name: Banner file
  copy:
    dest: /etc/motd
    content: |
      This server is managed by Ansible.
      No manual changes please.
Purpose: Displays message when users log in to indicate the server is Ansible-managed
Retrieving Configuration Files from Servers
# CentOS Server
ssh -i <key> ec2-user@<centos_ip>
cat /etc/chrony.conf
# Copy entire content to control machine
# Ubuntu Server
ssh -i <key> ubuntu@<ubuntu_ip>
sudo -i
cat /etc/ntp.conf
# Copy entire content to control machine
Creating Templates Directory Structure
mkdir templates
vim templates/ntpconf_centos # Store CentOS config
vim templates/ntpconf_ubuntu # Store Ubuntu config
Note: Folder name doesn't have to be "templates" but is standard practice, especially when
using roles
Defining NTP Server Variables
File: group_vars/all
ntp0: 0.north-america.pool.ntp.org
ntp1: 1.north-america.pool.ntp.org
ntp2: 2.north-america.pool.ntp.org
ntp3: 3.north-america.pool.ntp.org
Note: Variables obtained by Googling "NTP servers in Oregon" - use servers appropriate for
your location
Using Variables in Template Files
Template file content (ntpconf_centos):
pool {{ ntp0 }} iburst
pool {{ ntp1 }} iburst
pool {{ ntp2 }} iburst
pool {{ ntp3 }} iburst
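Since template files support full Jinja2, the four pool lines could equally be generated from a single list variable, a sketch assuming an ntp_servers list is defined in group_vars/all in place of the four ntp0–ntp3 variables:

```jinja
{# templates/ntpconf_centos — assumes ntp_servers is a list variable #}
{% for server in ntp_servers %}
pool {{ server }} iburst
{% endfor %}
```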
Deploying NTP Configuration with Template Module
- name: Deploy ntp agent conf on centos
  template:
    src: templates/ntpconf_centos
    dest: /etc/chrony.conf
    backup: yes
  when: ansible_distribution == 'CentOS'
- name: Deploy ntp conf on ubuntu
  template:
    src: templates/ntpconf_ubuntu
    dest: /etc/ntp.conf
    backup: yes
  when: ansible_distribution == 'Ubuntu'
Key Options:
src: Source template file location
dest: Destination path on target machine (name matters!)
backup: yes: Creates backup before overwriting
Restarting Services After Configuration Changes
- name: Restart chrony service on centos
  service:
    name: chronyd
    state: restarted
  when: ansible_distribution == 'CentOS'
- name: Restart ntp service on ubuntu
  service:
    name: ntp
    state: restarted
  when: ansible_distribution == 'Ubuntu'
Note: Uses state: restarted (not started)
Creating Directories with File Module
- name: Create a folder
  file:
    path: /opt/test21
    state: directory
    mode: '0755'
Key Configuration File Locations
CentOS NTP: /etc/chrony.conf
Ubuntu NTP: /etc/ntp.conf
Banner File: /etc/motd
Important Concepts Explained
NTP Architecture:
"We have the NTP agent which is going to sync the time with the NTP server... Most of the
bigger companies will maintain their own NTP server... Generally you have four NTP servers
to sync with. If not one, it syncs with the other one."
Template Module Intelligence:
"Template module is going to read your template file, look for any templating that you have
done, and then from that, extract the actual content and then push it to the target location."
Variable Centralization Benefit:
"If I want to really change the NTP server details, I don't need to change it in both
configuration files. I need to just change in the variable file... and all the configuration files
will be affected."
Problem Demonstrated
When adding a new unrelated task (creating directory), the playbook execution showed:
✅ Directory created successfully
❌ Services restarted unnecessarily (even though config unchanged)
Impact: "In a production environment, this can disrupt so many things which is really
not a good thing"
Desired Behavior: "We need to restart the service only when we need to restart the service.
Not unnecessary."
Comprehensive Summary: Ansible Handlers Training
1. Main Topic/Purpose
This lecture (Exercise 13) introduces Ansible Handlers as the solution to the problem
identified in the previous lecture: services restarting unnecessarily every time a playbook
runs. Handlers ensure services restart only when configuration changes occur, making
playbooks production-ready and preventing service disruptions.
2. Key Points
What Are Handlers?
Handlers look like tasks but behave differently: They remain dormant until explicitly
notified
Execute only on change: Handlers run only when the notifying task reports changed: true
Efficient execution: Prevents unnecessary service restarts and operations
Not just for service restarts: Can be used for any task that should execute
conditionally based on changes
Handler Execution Logic
Tasks return JSON values: changed: true or changed: false
Handlers check this value before executing
If changed: false, the handler is never called
If changed: true, all notified handlers execute
Multiple handlers can be notified from a single task
Common Pattern in DevOps
Template + Handler combination: Most frequently used pattern
Templates manage configuration files
Handlers restart services when configurations change
This pattern is considered standard practice in production environments
Handler Analogy
"Like you see in Hollywood movies, you know we have agents, right? CIA handlers, NSA
handlers... They will be in a dormant state. Whenever there is a requirement, they will be
notified... and then they execute their task like that."
Handlers Are Versatile
Not limited to service restarts
Example use cases:
o Restart services after config changes
o Copy files after user creation
o Restart dependent services in sequence
o Any task that should trigger based on changes
3. Action Items
For Students:
1. Test handlers by making changes to configuration files
2. Experiment with both CentOS and Ubuntu handlers
3. Try notifying multiple handlers from a single task
4. Test changing both source and destination files
5. Wrap up this exercise and prepare for next lecture on Roles
Next Lecture Topic: Ansible Roles
4. Important Details & Code Examples
Converting Tasks to Handlers
Before (Regular Tasks - Problem):
tasks:
  - name: Deploy ntp agent conf on centos
    template:
      src: templates/ntpconf_centos
      dest: /etc/[Link]
      backup: yes
    when: ansible_distribution == 'CentOS'
  - name: Restart chrony service on centos
    service:
      name: chronyd
      state: restarted
    when: ansible_distribution == 'CentOS'
Problem: Service restarts every playbook run, even without changes
After (With Handlers - Solution):
tasks:
  - name: Deploy ntp agent conf on centos
    template:
      src: templates/ntpconf_centos
      dest: /etc/[Link]
      backup: yes
    when: ansible_distribution == 'CentOS'
    notify:
      - Restart chrony service on centos
  - name: Deploy ntp conf on ubuntu
    template:
      src: templates/ntpconf_ubuntu
      dest: /etc/[Link]
      backup: yes
    when: ansible_distribution == 'Ubuntu'
    notify:
      - Restart ntp service on ubuntu
handlers:
  - name: Restart chrony service on centos
    service:
      name: chronyd
      state: restarted
    when: ansible_distribution == 'CentOS'
  - name: Restart ntp service on ubuntu
    service:
      name: ntp
      state: restarted
    when: ansible_distribution == 'Ubuntu'
Critical Syntax Rules
1. Alignment Requirements:
handlers: must be in the same column as tasks:
notify: must be in the same column as the module name
2. Naming Requirements:
notify:
  - Restart chrony service on centos        # Must match EXACTLY
handlers:
  - name: Restart chrony service on centos  # Same exact name
"Make sure there's no spelling mistake... otherwise it'll say handler not found."
3. Multiple Handler Notification:
notify:
  - Handler name 1
  - Handler name 2
  - Handler name 3
Note: "It's in a list format, so you can give n number of handlers over here"
Testing Handler Behavior
Test 1: No Changes Made
ansible-playbook [Link]
Expected Output:
Deploy ntp configuration file: ok (no change)
No handler called
Services not restarted
Test 2: Change CentOS Configuration
# Add a comment or space to templates/ntpconf_centos
# (not actual configuration change, just to trigger detection)
ansible-playbook [Link]
Expected Output:
Deploy ntp agent conf on centos: changed
Handler "Restart chrony service on centos" executes
Ubuntu handler NOT called (no change to Ubuntu config)
Handler Use Case Examples
1. Service Restart (Most Common):
tasks:
  - name: Deploy configuration
    template:
      src: [Link]
      dest: /etc/myapp/[Link]
    notify:
      - Restart myapp service
handlers:
  - name: Restart myapp service
    service:
      name: myapp
      state: restarted
2. User Creation with File Copy:
tasks:
  - name: Create user
    user:
      name: newuser
      state: present
    notify:
      - Copy files to user home
handlers:
  - name: Copy files to user home
    copy:
      src: /path/to/files
      dest: /home/newuser/
      owner: newuser
3. Multiple Dependent Service Restarts:
tasks:
  - name: Update web server config
    template:
      src: [Link]
      dest: /etc/nginx/[Link]
    notify:
      - Restart nginx
      - Restart php-fpm
handlers:
  - name: Restart nginx
    service:
      name: nginx
      state: restarted
  - name: Restart php-fpm
    service:
      name: php-fpm
      state: restarted
How Handler Change Detection Works
Internal Mechanism:
# Task returns JSON with changed status
"changed": true, # or false
"msg": "Configuration updated"
Handler Logic:
If changed: true → Handler executes
If changed: false → Handler remains dormant
No manual condition checking needed
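The changed flag can also be inspected directly; a small sketch (task name, template, and path are illustrative) that registers a task result and prints the value the handler logic reacts to:

```yaml
- name: Deploy configuration (illustrative)
  template:
    src: example.conf.j2        # hypothetical template
    dest: /etc/example.conf
  register: conf_result

- name: Show the changed flag that handler logic relies on
  debug:
    var: conf_result.changed    # true when the file was updated, false otherwise
```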
Documentation Reference Points
From Ansible official documentation mentioned:
Running operations on change: Core handler concept
Calling multiple handlers: List multiple handlers under notify
Naming handlers: Proper naming conventions
Controlling handlers: Using variables in handlers
Handlers in roles: Same usage pattern in roles (next lecture topic)
Key Instructor Insights
On Common Usage:
"From my own experience being a DevOps engineer from a long time, we use template
module a lot and along with template we use handler. It's a very common or very general
combination, templates and handler for configuration files."
On Handler Purpose:
"Once again, handler is not only for restarting service you can give any task over there...
Because generally whenever people see this they think handler is to restart service. That's
not the right answer."
On Handler State:
"Handlers will be mostly in a dormant state... Whenever there is a work... they will be
notified... and then they execute their task."
Best Practices Highlighted
1. Keep handlers at the end of the playbook for clarity
2. Use exact name matching between notify and handler definitions
3. Test with minor changes (comments/spaces) before actual config changes
4. Combine with template module for configuration management
5. Use for production readiness to avoid unnecessary service disruptions
Production Impact
Without Handlers:
Services restart on every playbook run
Can cause service disruptions
Not production-ready
Wastes resources
With Handlers:
Services restart only when needed
Production-safe operations
Efficient resource usage
Change-driven execution
Comprehensive Summary: Ansible Roles Training
1. Main Topic/Purpose
This lecture (Exercise 14-15) teaches Ansible Roles - a method to organize and modularize
playbook content for better manageability, reusability, and standardization across projects
and environments. The instructor demonstrates converting an existing playbook into a role
structure and using community roles from Ansible Galaxy.
2. Key Points
Why Use Ansible Roles?
Simplify complex playbooks: Distribute content (tasks, variables, handlers,
templates, files) into organized directories
Enable reusability: Create roles once at the organization level, reuse across different
projects and environments
Modular structure: Makes code easier to access, modify, and maintain
Standard practice: Creates organizational standards for infrastructure management
"If you are not doing reusability then there is no use of creating roles."
Role Directory Structure
Standard hierarchy created by ansible-galaxy init:
roles/
└── role-name/
    ├── tasks/[Link]      # Task definitions
    ├── handlers/[Link]   # Handler definitions
    ├── templates/        # Jinja2 templates (.j2 files)
    ├── files/            # Static files for copy module
    ├── vars/[Link]       # Variables (higher priority)
    ├── defaults/[Link]   # Default variables (lower priority)
    └── meta/             # Role metadata
Variable Priority Hierarchy
defaults/[Link]: Lowest priority - use for general defaults
vars/[Link]: Higher priority
Playbook variables: Highest priority - can override role variables
Best practice: Define variables in defaults/[Link] for organization-wide roles,
override in playbooks as needed
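The precedence above can be shown with one variable defined at all three levels (the paths use the standard main.yml filenames that ansible-galaxy init creates; the values are placeholders):

```yaml
# roles/post-install/defaults/main.yml   (lowest priority)
dir1: /opt/from-defaults

# roles/post-install/vars/main.yml       (overrides defaults)
dir1: /opt/from-vars

# Playbook: vars passed to the role win over both
- hosts: all
  become: yes
  roles:
    - role: post-install
      vars:
        dir1: /opt/from-playbook         # this value is used
```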
Converting Playbooks to Roles
Easier method: Write working playbook first, then convert to roles
Process: Initialize role structure, move content to appropriate directories, update
playbook to reference roles
Smart modules (template, copy) automatically find files in role structure without
specifying paths
Ansible Galaxy Community Roles
Pros: Ready-made solutions, saves development time, learn from experts
Cons: Requires reverse engineering for modifications, complex structure
Instructor's practice: Uses Galaxy roles primarily for learning, prefers writing custom
roles for better control
"In my personal experience I use Ansible Galaxy roles very less... Ansible playbook itself is
very easy to write. We have documentation, we have ChatGPT. So one time work, we write it
from the scratch."
3. Action Items
For Students:
1. Study community roles from Ansible Galaxy to learn best practices
2. Practice converting existing playbooks to roles
3. Experiment with variable overrides at different levels
4. Explore Ansible documentation and use ChatGPT for creative solutions
5. Don't memorize - understand concepts and know where to find information
Next Lecture: Cloud automation with Ansible
4. Important Details & Code Examples
Initial Playbook Setup (Before Roles)
Adding complexity to demonstrate roles:
---
- hosts: all
  become: yes
  vars:
    dir1: /opt/dir22
  tasks:
    - name: Dump file
      copy:
        src: files/[Link]
        dest: /tmp/[Link]
    - name: Create a folder
      file:
        path: "{{ dir1 }}"
        state: directory
        mode: '0755'
    # ... (existing NTP tasks)
  handlers:
    # ... (existing handlers)
Creating required files:
mkdir files
vim files/[Link]
# Add content: @@@@####$$$$%%%%
Common Errors Encountered
Error 1: Undefined Variable:
# Task uses: {{ mydir }}
# But defined as: dir1: /opt/dir22
# Result: "Undefined variable: mydir is undefined"
Error 2: Missing Required Module Option:
# Wrong:
copy:
  files: files/[Link]   # Should be 'src', not 'files'
  dest: /tmp/[Link]
# Correct:
copy:
  src: files/[Link]
  dest: /tmp/[Link]
Creating Role Structure
Initialize role:
mkdir roles
cd roles
ansible-galaxy init post-install
View created structure:
tree
# Output shows:
# post-install/
# ├── defaults/
# │   └── [Link]
# ├── files/
# ├── handlers/
# │   └── [Link]
# ├── tasks/
# │   └── [Link]
# ├── templates/
# ├── vars/
# │   └── [Link]
# └── meta/
Moving Content to Role
1. Move variables:
# Copy from group_vars/all to roles/post-install/vars/[Link]
cat group_vars/all
vim roles/post-install/vars/[Link]
# Paste variables, then:
rm -rf group_vars host_vars
2. Move files and templates:
cp -r files/* roles/post-install/files/
cp -r templates/* roles/post-install/templates/
3. Move handlers (roles/post-install/handlers/[Link]):
# Before formatting (with the extra indentation carried over from the playbook):
    - name: Restart chrony service on centos
      service:
        name: chronyd
        state: restarted
      when: ansible_distribution == 'CentOS'
# After removing leading spaces using Vim:
# :%s/^    //   (removes 4 leading spaces)
- name: Restart chrony service on centos
  service:
    name: chronyd
    state: restarted
  when: ansible_distribution == 'CentOS'
- name: Restart ntp service on ubuntu
  service:
    name: ntp
    state: restarted
  when: ansible_distribution == 'Ubuntu'
4. Move tasks (roles/post-install/tasks/[Link]):
# After removing leading spaces with :%s/^    //
- name: Banner file
  copy:
    dest: /etc/motd
    content: |
      This server is managed by Ansible.
      No manual changes please.
- name: Deploy ntp agent conf on centos
  template:
    src: ntpconf_centos.j2   # Note: .j2 extension added
    dest: /etc/[Link]
    backup: yes
  when: ansible_distribution == 'CentOS'
  notify:
    - Restart chrony service on centos
- name: Deploy ntp conf on ubuntu
  template:
    src: ntpconf_ubuntu.j2   # Note: .j2 extension added
    dest: /etc/[Link]
    backup: yes
  when: ansible_distribution == 'Ubuntu'
  notify:
    - Restart ntp service on ubuntu
- name: Dump file
  copy:
    src: [Link]   # No 'files/' prefix needed
    dest: /tmp/[Link]
- name: Create a folder
  file:
    path: "{{ dir1 }}"
    state: directory
Template File Naming Convention
Rename templates with .j2 extension (standard practice):
cd roles/post-install/templates
mv ntpconf_centos ntpconf_centos.j2
mv ntpconf_ubuntu ntpconf_ubuntu.j2
Why?: Template and copy modules automatically look in role's templates/ and files/
directories
Updated Simplified Playbook
Before roles (complex):
---
- hosts: all
  become: yes
  vars:
    dir1: /opt/dir22
  tasks:
    # 63 lines of tasks
  handlers:
    # Multiple handlers
After roles (simplified):
---
- hosts: all
  become: yes
  roles:
    - post-install
"Look at our playbook, right? It's now so simple, right? And the directory structure is also
looking good now."
Variable Management in Roles
Moving variables from vars to defaults (best practice):
# Copy from vars/[Link] to defaults/[Link]
cat roles/post-install/vars/[Link]
vim roles/post-install/defaults/[Link]
# Paste, then remove from vars:
> roles/post-install/vars/[Link] # Clear the file
defaults/[Link]:
---
ntp0: [Link]
ntp1: [Link]
ntp2: [Link]
ntp3: [Link]
dir1: /opt/dir22
Overriding Variables in Playbook
For different regions/environments:
---
- hosts: all
  become: yes
  roles:
    - role: post-install
      vars:
        ntp0: [Link]   # India NTP servers
        ntp1: [Link]
        ntp2: [Link]
        ntp3: [Link]
Result: Playbook variables override defaults, configuration files update, handlers trigger
Using Ansible Galaxy Community Roles
Search for roles:
[Link]
- Browse by category: System, Monitoring, Packaging, etc.
- Check ratings and quality scores
Install a community role:
ansible-galaxy install [Link]
# Downloads to: ~/.ansible/roles/[Link]
Use in playbook:
---
- hosts: all
  become: yes
  roles:
    - [Link]       # Executes first
    - post-install # Then your custom role
View installed role structure:
tree ~/.ansible/roles/[Link]
Advanced Role Techniques (from [Link] example)
Dynamic variable loading (tasks/[Link]):
- name: Include OS-specific variables
  include_vars: "{{ ansible_distribution }}.yml"
  when: ansible_distribution in ['FreeBSD', 'Fedora']
- name: Include OS-specific variables for RedHat
  include_vars: "[Link]"
  when: ansible_os_family == 'RedHat'
Conditional task inclusion:
- name: Include RedHat tasks
  include_tasks: [Link]
  when: ansible_os_family == 'RedHat'
- name: Include Debian tasks
  include_tasks: [Link]
  when: ansible_os_family == 'Debian'
Benefits:
Separate task files per OS family
Cleaner organization for multi-platform support
Avoid long conditional chains in single file
Role Execution Output
Different format with roles:
TASK [post-install : Banner file] ****************************
ok: [web01]
ok: [web02]
TASK [post-install : Deploy ntp agent conf on centos] *******
changed: [db01]
Note: Shows [role-name : task-name] format
Key Differences: Roles vs Standard Playbooks
Aspect                  Standard Playbook            With Roles
Structure               Single/few files             Organized directories
Reusability             Limited                      High
Maintainability         Harder as complexity grows   Easier with separation
Variable management     Mixed with tasks             Separate defaults/vars
Organization-wide use   Difficult                    Standard practice
Instructor's Recommendations
When to use roles:
Organization-level reusable components
Complex configurations requiring structure
Multiple projects with similar needs
Standardization across teams
When to write custom vs use Galaxy:
Custom: When you need full control and easy modification
Galaxy: For learning best practices, quick prototypes, or standard patterns
Learning approach:
"I use these roles to study them and see how they're writing. These are the industry experts
in writing the Ansible playbooks or roles. And we can learn from them."
Resources for Continued Learning
1. Ansible documentation: Comprehensive module and role information
2. ChatGPT: Generate playbooks and get explanations
3. Ansible Galaxy: Study community roles for patterns
4. Creativity: Apply your own solutions to requirements
"This requires more creativity and there's many, many many things that you can use in
Ansible playbook. Whenever the requirement comes, look for it, search for it."
Best Practices Summary
1. Write playbook first, then convert to roles - easier workflow
2. Use defaults/[Link] for variables - allows easy overrides
3. Follow .j2 naming convention for templates
4. Keep roles focused - one responsibility per role
5. Document variable overrides in playbooks
6. Study community roles - learn industry patterns
7. Don't memorize - understand concepts and know where to find information