RHEL/Linux Automation
Overview
Ansible excels at managing Red Hat Enterprise Linux (RHEL), CentOS, Fedora, and other Linux distributions. This guide covers Linux-specific automation tasks.
Package Management
YUM/DNF (RHEL/CentOS/Fedora)
---
- name: Manage packages with yum
hosts: rhel_servers
become: yes
tasks:
- name: Install multiple packages
yum:
name:
- httpd
- mod_ssl
- php
- mariadb-server
state: present
- name: Update all packages
yum:
name: '*'
state: latest
- name: Remove package
yum:
name: telnet
state: absent
- name: Install from specific repository
yum:
name: nginx
state: present
enablerepo: epel
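On RHEL 8 and later, `yum` is an alias for `dnf`, and Ansible's `yum` module maps accordingly. For inventories that mix distributions, the generic `package` module avoids hard-coding a package manager. A minimal sketch (`chrony` is just an example package):

```yaml
- name: Install a package regardless of distribution
  hosts: all
  become: yes
  tasks:
    # package delegates to yum/dnf/apt based on the detected platform
    - name: Ensure chrony is present
      ansible.builtin.package:
        name: chrony
        state: present
```

This is convenient for common packages; distribution-specific names (httpd vs apache2) still need per-platform handling.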
APT (Debian/Ubuntu)
---
- name: Manage packages with apt
hosts: ubuntu_servers
become: yes
tasks:
- name: Update apt cache
apt:
update_cache: yes
cache_valid_time: 3600
- name: Install packages
apt:
name:
- apache2
- php
- mysql-server
state: present
- name: Upgrade all packages
apt:
upgrade: dist
System Configuration
User Management
---
- name: Manage Linux users
hosts: linux_servers
become: yes
tasks:
- name: Create user with specific UID
user:
name: webadmin
uid: 1100
group: wheel
shell: /bin/bash
home: /home/webadmin
create_home: yes
state: present
- name: Add SSH key for user
authorized_key:
user: webadmin
key: "{{ lookup('file', '/path/to/public_key.pub') }}"
state: present
- name: Add user to sudoers
lineinfile:
path: /etc/sudoers.d/webadmin
line: 'webadmin ALL=(ALL) NOPASSWD: ALL'
create: yes
mode: '0440'
validate: 'visudo -cf %s'
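The inverse operation is just as common during offboarding. A sketch, using a hypothetical `olduser` account (`remove: yes` also deletes the home directory and mail spool):

```yaml
- name: Remove a departed user and clean up their files
  user:
    name: olduser
    state: absent
    remove: yes

- name: Remove their sudoers drop-in
  file:
    path: /etc/sudoers.d/olduser
    state: absent
```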
SELinux Management
---
- name: Manage SELinux
hosts: rhel_servers
become: yes
tasks:
- name: Set SELinux to enforcing mode
selinux:
policy: targeted
state: enforcing
- name: Set SELinux context for directory
sefcontext:
target: '/web_data(/.*)?'
setype: httpd_sys_content_t
state: present
- name: Apply SELinux context
command: restorecon -Rv /web_data
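Before relying on contexts, it can help to confirm the mode a host is actually in, since contexts set on a host with SELinux disabled have no effect. A small defensive sketch:

```yaml
- name: Capture current SELinux mode
  command: getenforce
  register: selinux_mode
  changed_when: false

- name: Fail early if SELinux is disabled
  assert:
    that: selinux_mode.stdout in ['Enforcing', 'Permissive']
    msg: "SELinux is disabled; file contexts will have no effect"
```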
Firewall Configuration (firewalld)
---
- name: Configure firewalld
hosts: rhel_servers
become: yes
tasks:
- name: Ensure firewalld is running
service:
name: firewalld
state: started
enabled: yes
- name: Open HTTP port
firewalld:
service: http
permanent: yes
state: enabled
immediate: yes
- name: Open custom port
firewalld:
port: 8080/tcp
permanent: yes
state: enabled
immediate: yes
- name: Add rich rule
firewalld:
rich_rule: 'rule family="ipv4" source address="192.168.1.0/24" accept'
permanent: yes
state: enabled
immediate: yes
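Closing access is symmetric to opening it: the same module with `state: disabled` removes a rule. An example using `cockpit` purely as an illustration:

```yaml
- name: Close a service that is no longer needed
  firewalld:
    service: cockpit
    permanent: yes
    state: disabled
    immediate: yes
```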
Service Management (systemd)
---
- name: Manage systemd services
hosts: linux_servers
become: yes
tasks:
- name: Start and enable service
systemd:
name: httpd
state: started
enabled: yes
daemon_reload: yes
- name: Restart service
systemd:
name: nginx
state: restarted
- name: Check service status
systemd:
name: sshd
register: service_status
- name: Display service status
debug:
var: service_status
- name: Create custom systemd service
copy:
dest: /etc/systemd/system/myapp.service
content: |
[Unit]
Description=My Application
After=network.target
[Service]
Type=simple
User=myapp
ExecStart=/usr/local/bin/myapp
Restart=on-failure
[Install]
WantedBy=multi-user.target
notify: reload systemd
handlers:
- name: reload systemd
systemd:
daemon_reload: yes
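Beyond starting and stopping, a unit can be masked so that nothing, including dependency activation, can start it. A short sketch with `postfix` as a stand-in service:

```yaml
- name: Stop and mask a service so it cannot be started
  systemd:
    name: postfix
    state: stopped
    masked: yes
```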
File System Operations
Managing Files and Directories
---
- name: File system operations
hosts: linux_servers
become: yes
tasks:
- name: Create directory with specific permissions
file:
path: /opt/myapp
state: directory
mode: '0755'
owner: app
group: app
- name: Create symbolic link
file:
src: /opt/myapp/current
dest: /usr/local/bin/myapp
state: link
- name: Set file attributes
file:
path: /etc/myapp/config.conf
mode: '0600'
owner: root
group: root
attributes: +i # Make file immutable
Template Management
---
- name: Deploy configuration from template
hosts: web_servers
become: yes
vars:
server_name: web.example.com
max_clients: 200
tasks:
- name: Deploy Apache config from template
template:
src: httpd.conf.j2
dest: /etc/httpd/conf/httpd.conf
mode: '0644'
validate: 'httpd -t -f %s'
notify: restart apache
handlers:
- name: restart apache
service:
name: httpd
state: restarted
Storage Management
Logical Volume Management (LVM)
---
- name: Manage LVM
hosts: storage_servers
become: yes
tasks:
- name: Create volume group
lvg:
vg: data_vg
pvs: /dev/sdb,/dev/sdc
state: present
- name: Create logical volume
lvol:
vg: data_vg
lv: app_data
size: 50G
- name: Format logical volume
filesystem:
fstype: xfs
dev: /dev/data_vg/app_data
- name: Mount logical volume
mount:
path: /mnt/app_data
src: /dev/data_vg/app_data
fstype: xfs
opts: defaults
state: mounted
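Growing a volume later follows the same pattern; with `resizefs: yes`, the `lvol` module extends the filesystem in the same step (XFS supports online growth; the size here is illustrative):

```yaml
- name: Grow the logical volume and its filesystem online
  lvol:
    vg: data_vg
    lv: app_data
    size: 80G
    resizefs: yes
```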
Kernel and Boot Management
---
- name: Kernel management
hosts: linux_servers
become: yes
tasks:
- name: Set kernel parameters
sysctl:
name: net.ipv4.ip_forward
value: '1'
state: present
reload: yes
- name: Configure GRUB
lineinfile:
path: /etc/default/grub
regexp: '^GRUB_CMDLINE_LINUX='
line: 'GRUB_CMDLINE_LINUX="crashkernel=auto rhgb quiet"'
notify: update grub
handlers:
- name: update grub
command: grub2-mkconfig -o /boot/grub2/grub.cfg
Subscription Management (RHEL)
---
- name: Manage RHEL subscriptions
hosts: rhel_servers
become: yes
tasks:
- name: Register system with Red Hat
redhat_subscription:
state: present
username: "{{ rhn_username }}"
password: "{{ rhn_password }}"
auto_attach: yes
- name: Enable specific repositories
rhsm_repository:
name:
- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms
state: enabled
Complete LAMP Stack Deployment
---
- name: Deploy LAMP Stack on RHEL
hosts: web_servers
become: yes
vars:
    mysql_root_password: "SecurePass123"  # example only; store real secrets in Ansible Vault
app_user: webapp
tasks:
- name: Install LAMP packages
yum:
name:
- httpd
- mariadb-server
- mariadb
- php
- php-mysqlnd
- php-fpm
state: present
- name: Start and enable Apache
systemd:
name: httpd
state: started
enabled: yes
- name: Start and enable MariaDB
systemd:
name: mariadb
state: started
enabled: yes
- name: Configure firewall for web services
firewalld:
service: "{{ item }}"
permanent: yes
state: enabled
immediate: yes
loop:
- http
- https
- name: Create application user
user:
name: "{{ app_user }}"
system: yes
shell: /sbin/nologin
- name: Create web directory
file:
path: /var/www/myapp
state: directory
owner: "{{ app_user }}"
group: apache
mode: '0755'
- name: Deploy application files
copy:
src: ./app/
dest: /var/www/myapp/
owner: "{{ app_user }}"
group: apache
- name: Configure Apache virtual host
template:
src: vhost.conf.j2
dest: /etc/httpd/conf.d/myapp.conf
notify: restart apache
- name: Set SELinux context
sefcontext:
target: '/var/www/myapp(/.*)?'
setype: httpd_sys_content_t
state: present
- name: Apply SELinux context
command: restorecon -Rv /var/www/myapp
handlers:
- name: restart apache
systemd:
name: httpd
state: restarted
Patching and Updates
---
- name: System patching workflow
hosts: linux_servers
become: yes
serial: 1 # Update one server at a time
tasks:
- name: Check for available updates
yum:
list: updates
register: updates_available
- name: Display available updates
debug:
var: updates_available
- name: Apply security updates only
yum:
name: '*'
state: latest
security: yes
    - name: Install yum-utils for needs-restarting
      yum:
        name: yum-utils
        state: present
    # /var/run/reboot-required is a Debian/Ubuntu convention; on RHEL use
    # needs-restarting -r, which exits 1 when a reboot is required.
    - name: Check if reboot is required
      command: needs-restarting -r
      register: reboot_required
      failed_when: reboot_required.rc not in [0, 1]
      changed_when: false
    - name: Reboot if necessary
      reboot:
        reboot_timeout: 600
        msg: "Rebooting for system updates"
      when: reboot_required.rc == 1
- name: Wait for system to come back
wait_for_connection:
delay: 60
timeout: 300
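For larger fleets, `serial: 1` is safe but slow. A percentage batch combined with `max_fail_percentage` keeps throughput up while aborting the rollout if too many hosts in a batch fail. A sketch:

```yaml
- name: Rolling patch with an abort threshold
  hosts: linux_servers
  become: yes
  serial: "25%"
  max_fail_percentage: 10   # stop the rollout if more than 10% of a batch fails
  tasks:
    - name: Apply security updates
      yum:
        name: '*'
        state: latest
        security: yes
```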
RHEL System Roles
Red Hat Enterprise Linux System Roles are a collection of Ansible roles officially supported by Red Hat to automate common configuration tasks.
Available System Roles
# Install RHEL System Roles
sudo yum install rhel-system-roles
# Roles location
/usr/share/ansible/roles/
# Available roles:
# - rhel-system-roles.kdump
# - rhel-system-roles.network
# - rhel-system-roles.selinux
# - rhel-system-roles.timesync
# - rhel-system-roles.postfix
# - rhel-system-roles.firewall
# - rhel-system-roles.tuned
# - rhel-system-roles.certificate
# - rhel-system-roles.logging
# - rhel-system-roles.metrics
# - rhel-system-roles.nbde_server
# - rhel-system-roles.nbde_client
# - rhel-system-roles.storage
# - rhel-system-roles.vpn
Network Configuration with System Roles
---
- name: Configure networking with system roles
hosts: rhel_servers
become: yes
vars:
network_connections:
- name: internal
type: ethernet
interface_name: eth0
ip:
address:
- 192.168.1.100/24
gateway4: 192.168.1.1
dns:
- 8.8.8.8
- 8.8.4.4
state: up
- name: bond0
type: bond
interface_name: bond0
ip:
address:
- 10.0.0.10/24
bond:
mode: active-backup
miimon: 100
state: up
- name: bond0-port1
type: ethernet
interface_name: eth1
master: bond0
slave_type: bond
state: up
- name: bond0-port2
type: ethernet
interface_name: eth2
master: bond0
slave_type: bond
state: up
roles:
- rhel-system-roles.network
Logging Configuration with System Roles
---
- name: Configure centralized logging
hosts: rhel_servers
become: yes
vars:
logging_inputs:
- name: basic_input
type: basics
logging_outputs:
- name: central_log_server
type: forwards
target: logserver.example.com
tcp_port: 514
protocol: tcp
logging_flows:
- name: forward_to_central
inputs:
- basic_input
outputs:
- central_log_server
roles:
- rhel-system-roles.logging
Storage Management with System Roles
---
- name: Configure storage with system roles
hosts: storage_servers
become: yes
vars:
storage_pools:
- name: data_pool
disks:
- /dev/vdb
- /dev/vdc
type: lvm
volumes:
- name: app_data
size: 50g
mount_point: /mnt/app_data
fs_type: xfs
- name: db_data
size: 100g
mount_point: /mnt/db_data
fs_type: xfs
roles:
- rhel-system-roles.storage
Performance Tuning with System Roles
---
- name: Apply performance tuning profiles
hosts: rhel_servers
become: yes
vars:
tuned_profile: throughput-performance
# Available profiles:
# - balanced (default)
# - powersave
# - throughput-performance
# - latency-performance
# - network-latency
# - network-throughput
# - virtual-guest
# - virtual-host
roles:
- rhel-system-roles.tuned
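tuned can also suggest a profile based on the detected hardware and virtualization environment, which is useful for auditing before pinning a profile in vars. A quick sketch:

```yaml
- name: Ask tuned for its recommended profile
  command: tuned-adm recommend
  register: recommended_profile
  changed_when: false

- name: Show the recommendation
  debug:
    var: recommended_profile.stdout
```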
Red Hat Subscription Management
Advanced Subscription Management
---
- name: Advanced RHEL subscription management
hosts: rhel_servers
become: yes
vars:
rhn_username: "{{ vault_rhn_username }}"
rhn_password: "{{ vault_rhn_password }}"
satellite_server: satellite.example.com
activation_key: "rhel8-production"
organization: "MyOrg"
tasks:
# Method 1: Register with Red Hat CDN
- name: Register with Red Hat CDN
redhat_subscription:
state: present
username: "{{ rhn_username }}"
password: "{{ rhn_password }}"
auto_attach: yes
force_register: no
# Method 2: Register with Satellite/Foreman
- name: Register with Satellite using activation key
redhat_subscription:
state: present
activationkey: "{{ activation_key }}"
org_id: "{{ organization }}"
server_hostname: "{{ satellite_server }}"
- name: Enable specific repositories
rhsm_repository:
name:
- rhel-8-for-x86_64-baseos-rpms
- rhel-8-for-x86_64-appstream-rpms
- ansible-2-for-rhel-8-x86_64-rpms
- satellite-tools-6.10-for-rhel-8-x86_64-rpms
state: enabled
purge: yes # Disable all other repos
- name: Attach specific subscription
redhat_subscription:
state: present
pool_ids:
- 8a85f99c7db4827d017dc512fcad05a1
- name: Configure subscription-manager release
command: subscription-manager release --set=8.6
changed_when: false
    # katello-agent is deprecated and removed in newer Satellite releases
    # in favor of remote execution; only install it on older Satellites.
    - name: Install katello agent (older Satellite versions)
      yum:
        name: katello-agent
        state: present
      when: satellite_server is defined
Red Hat Insights Integration
---
- name: Configure Red Hat Insights
hosts: rhel_servers
become: yes
tasks:
- name: Install insights client
yum:
name:
- insights-client
- rhc
- rhc-worker-playbook
state: present
- name: Register with Insights
command: insights-client --register
args:
creates: /etc/insights-client/.registered
- name: Configure insights client
lineinfile:
path: /etc/insights-client/insights-client.conf
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- regexp: '^#?auto_config='
line: 'auto_config=True'
- regexp: '^#?authmethod='
line: 'authmethod=BASIC'
- name: Enable automatic insights updates
systemd:
name: insights-client.timer
enabled: yes
state: started
- name: Run insights compliance check
command: insights-client --compliance
register: compliance_result
changed_when: false
- name: Display compliance status
debug:
var: compliance_result.stdout_lines
Enterprise Patching and Compliance
CVE-Based Patching Strategy
---
- name: CVE-based patching workflow
hosts: rhel_servers
become: yes
serial: "25%" # Patch 25% of fleet at a time
vars:
critical_cves:
- CVE-2021-44228 # Log4Shell
- CVE-2022-0847 # Dirty Pipe
patch_window_start: "02:00"
patch_window_end: "06:00"
tasks:
- name: Check current time is within patch window
assert:
that:
- ansible_date_time.hour | int >= patch_window_start.split(':')[0] | int
- ansible_date_time.hour | int < patch_window_end.split(':')[0] | int
msg: "Current time outside patch window"
- name: Create pre-patch snapshot (LVM)
community.general.lvol:
vg: rootvg
lv: root
snapshot: root_prepatch_{{ ansible_date_time.date }}
size: 10g
ignore_errors: yes
- name: Check for security advisories
command: yum updateinfo list security
register: security_advisories
changed_when: false
- name: Display security advisories
debug:
var: security_advisories.stdout_lines
- name: Apply specific CVE patches
yum:
name: "{{ item }}"
state: latest
security: yes
bugfix: no
loop: "{{ packages_for_cves | default([]) }}"
- name: Update all security patches
yum:
name: '*'
state: latest
security: yes
exclude:
- kernel* # Exclude kernel for now
- name: Check if kernel update available
shell: yum list updates kernel | grep -i kernel
register: kernel_updates
failed_when: false
changed_when: false
- name: Update kernel if available
yum:
name: kernel
state: latest
when: kernel_updates.rc == 0
register: kernel_updated
- name: Install yum-utils for needs-restarting
yum:
name: yum-utils
state: present
- name: Check what services need restart
command: needs-restarting -s
register: services_needing_restart
changed_when: false
failed_when: false
    - name: Restart required services
      systemd:
        name: "{{ item }}"
        state: restarted
      loop: "{{ services_needing_restart.stdout_lines }}"
      when:
        - services_needing_restart.stdout_lines | length > 0
        # needs-restarting -s prints full unit names (e.g. sshd.service),
        # so compare on the base name
        - item.split('.')[0] not in ['dbus', 'systemd']
- name: Check if system reboot required
command: needs-restarting -r
register: reboot_required
failed_when: false
changed_when: reboot_required.rc == 1
- name: Schedule reboot if kernel updated
at:
command: /sbin/reboot
count: 5
units: minutes
when: kernel_updated.changed
- name: Send patch report
mail:
host: smtp.example.com
port: 587
to: sysadmin@example.com
subject: "Patch Report: {{ inventory_hostname }}"
body: |
Server: {{ inventory_hostname }}
Patches Applied: {{ security_advisories.stdout_lines | length }}
Kernel Updated: {{ kernel_updated.changed }}
Reboot Required: {{ reboot_required.rc == 1 }}
Services Restarted: {{ services_needing_restart.stdout_lines | join(', ') }}
delegate_to: localhost
Compliance Scanning and Remediation
---
- name: Security compliance scanning with OpenSCAP
hosts: rhel_servers
become: yes
vars:
scap_security_guide: /usr/share/xml/scap/ssg/content
compliance_profile: xccdf_org.ssgproject.content_profile_pci-dss
# Available profiles:
# - xccdf_org.ssgproject.content_profile_cis
# - xccdf_org.ssgproject.content_profile_pci-dss
# - xccdf_org.ssgproject.content_profile_stig
# - xccdf_org.ssgproject.content_profile_ospp
tasks:
- name: Install OpenSCAP tools
yum:
name:
- scap-security-guide
- openscap-scanner
- openscap-utils
state: present
- name: Create results directory
file:
path: /var/log/scap-results
state: directory
mode: '0755'
- name: Run compliance scan
command: >
oscap xccdf eval
--profile {{ compliance_profile }}
--results /var/log/scap-results/scan-{{ ansible_date_time.date }}.xml
--report /var/log/scap-results/scan-{{ ansible_date_time.date }}.html
{{ scap_security_guide }}/ssg-rhel8-ds.xml
register: compliance_scan
failed_when: false
changed_when: false
- name: Fetch compliance report
fetch:
src: /var/log/scap-results/scan-{{ ansible_date_time.date }}.html
dest: ./compliance-reports/{{ inventory_hostname }}/
flat: yes
- name: Generate remediation playbook
command: >
oscap xccdf generate fix
--profile {{ compliance_profile }}
--fix-type ansible
--output /tmp/remediation.yml
/var/log/scap-results/scan-{{ ansible_date_time.date }}.xml
register: remediation_generated
- name: Fetch remediation playbook
fetch:
src: /tmp/remediation.yml
dest: ./remediation-playbooks/{{ inventory_hostname }}.yml
flat: yes
when: remediation_generated.rc == 0
- name: Parse scan results
xml:
path: /var/log/scap-results/scan-{{ ansible_date_time.date }}.xml
xpath: //rule-result[@idref]
content: attribute
register: scan_results
- name: Generate compliance summary
template:
src: compliance_summary.j2
dest: /var/log/scap-results/summary-{{ ansible_date_time.date }}.txt
High Availability Clustering
Pacemaker + Corosync Cluster Setup
---
- name: Configure RHEL HA cluster
hosts: cluster_nodes
become: yes
vars:
cluster_name: production_cluster
cluster_password: "{{ vault_cluster_password }}"
cluster_vip: 192.168.1.100
cluster_nodes:
- node1.example.com
- node2.example.com
- node3.example.com
tasks:
- name: Install HA cluster packages
yum:
name:
- pcs
- pacemaker
- corosync
- fence-agents-all
- resource-agents
state: present
- name: Enable and start pcsd service
systemd:
name: pcsd
enabled: yes
state: started
- name: Set hacluster user password
user:
name: hacluster
password: "{{ cluster_password | password_hash('sha512') }}"
- name: Configure firewall for cluster
firewalld:
service: high-availability
permanent: yes
state: enabled
immediate: yes
- name: Authenticate cluster nodes
command: >
pcs host auth {{ cluster_nodes | join(' ') }}
-u hacluster -p {{ cluster_password }}
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Create cluster
command: >
pcs cluster setup {{ cluster_name }}
{{ cluster_nodes | join(' ') }}
--force
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Start cluster on all nodes
command: pcs cluster start --all
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Enable cluster on boot
command: pcs cluster enable --all
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
    # Note: pcs_property is not part of ansible-core; it is provided by a
    # community collection (e.g. ondrejhome.pcs-modules-2).
    - name: Configure cluster properties
      pcs_property:
        name: "{{ item.name }}"
        value: "{{ item.value }}"
      loop:
        # WARNING: suitable for labs and demos only; production clusters
        # need working fencing (STONITH) and a real quorum policy.
        - { name: 'stonith-enabled', value: 'false' }
        - { name: 'no-quorum-policy', value: 'ignore' }
      run_once: true
      delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Create virtual IP resource
command: >
pcs resource create ClusterVIP ocf:heartbeat:IPaddr2
ip={{ cluster_vip }} cidr_netmask=24
op monitor interval=30s
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Create web service resource
command: >
pcs resource create WebServer ocf:heartbeat:apache
configfile=/etc/httpd/conf/httpd.conf
statusurl="http://localhost/server-status"
op monitor interval=30s
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Constrain resources to run together
command: >
pcs constraint colocation add WebServer with ClusterVIP INFINITY
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Ensure VIP starts before web server
command: >
pcs constraint order ClusterVIP then WebServer
run_once: true
delegate_to: "{{ groups['cluster_nodes'][0] }}"
- name: Check cluster status
command: pcs status
register: cluster_status
changed_when: false
- name: Display cluster status
debug:
var: cluster_status.stdout_lines
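For planned maintenance, a node can be drained and restored without stopping the whole cluster; resources migrate off a standby node automatically. A sketch (the node name is illustrative):

```yaml
- name: Drain a node before maintenance
  command: pcs node standby node2.example.com
  run_once: true
  delegate_to: "{{ groups['cluster_nodes'][0] }}"

- name: Return the node to service afterwards
  command: pcs node unstandby node2.example.com
  run_once: true
  delegate_to: "{{ groups['cluster_nodes'][0] }}"
```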
Advanced SELinux Management
Custom SELinux Policy Creation
---
- name: Create custom SELinux policy
hosts: rhel_servers
become: yes
tasks:
- name: Install SELinux policy tools
yum:
name:
- policycoreutils-python-utils
- selinux-policy-devel
- setroubleshoot-server
state: present
- name: Create custom application directory
file:
path: /opt/customapp
state: directory
mode: '0755'
- name: Set custom SELinux file context
sefcontext:
target: '/opt/customapp(/.*)?'
setype: httpd_sys_rw_content_t
state: present
- name: Apply SELinux context
command: restorecon -Rv /opt/customapp
changed_when: false
- name: Allow custom port for httpd
seport:
ports: 8888
proto: tcp
setype: http_port_t
state: present
- name: Set SELinux booleans
seboolean:
name: "{{ item.name }}"
state: "{{ item.state }}"
persistent: yes
loop:
- { name: 'httpd_can_network_connect', state: true }
- { name: 'httpd_can_network_connect_db', state: true }
- { name: 'httpd_can_sendmail', state: true }
    - name: Create custom SELinux module from audit logs
      shell: |
        grep customapp /var/log/audit/audit.log | audit2allow -M customapp
        semodule -i customapp.pp
      args:
        # RHEL 7+ uses the priority-based module store under /var/lib/selinux
        creates: /var/lib/selinux/targeted/active/modules/400/customapp
- name: Check SELinux denials
command: ausearch -m avc -ts recent
register: selinux_denials
failed_when: false
changed_when: false
- name: Generate SELinux troubleshooting report
shell: sealert -a /var/log/audit/audit.log > /tmp/selinux-report.txt
when: selinux_denials.rc == 0
changed_when: false
- name: Fetch SELinux report
fetch:
src: /tmp/selinux-report.txt
dest: ./selinux-reports/{{ inventory_hostname }}.txt
flat: yes
when: selinux_denials.rc == 0
Container Support with Podman
Podman Container Management
---
- name: Manage containers with Podman on RHEL
hosts: container_hosts
become: yes
tasks:
- name: Install Podman and tools
yum:
name:
- podman
- buildah
- skopeo
- podman-compose
state: present
- name: Configure container registries
copy:
dest: /etc/containers/registries.conf
content: |
unqualified-search-registries = ["registry.access.redhat.com", "docker.io"]
[[registry]]
location = "registry.access.redhat.com"
insecure = false
[[registry]]
location = "registry.example.com"
insecure = true
- name: Login to Red Hat registry
containers.podman.podman_login:
username: "{{ vault_rh_registry_user }}"
password: "{{ vault_rh_registry_password }}"
registry: registry.redhat.io
- name: Pull container image
containers.podman.podman_image:
name: registry.redhat.io/rhel8/httpd-24
tag: latest
state: present
- name: Run web server container
containers.podman.podman_container:
name: webapp
image: registry.redhat.io/rhel8/httpd-24:latest
state: started
ports:
- "8080:8080"
volumes:
- /opt/webapp:/var/www/html:Z
env:
HTTPD_LOG_LEVEL: info
restart_policy: always
- name: Create pod with multiple containers
containers.podman.podman_pod:
name: application_pod
state: started
ports:
- "80:80"
- "3306:3306"
- name: Run database container in pod
containers.podman.podman_container:
name: db
image: registry.redhat.io/rhel8/mariadb-103
state: started
pod: application_pod
env:
MYSQL_ROOT_PASSWORD: "{{ vault_db_password }}"
MYSQL_DATABASE: appdb
- name: Run app container in pod
containers.podman.podman_container:
name: app
image: myapp:latest
state: started
pod: application_pod
env:
DB_HOST: localhost
DB_PORT: 3306
- name: Generate systemd service for container
containers.podman.podman_generate_systemd:
name: webapp
dest: /etc/systemd/system/
restart_policy: always
- name: Enable container systemd service
systemd:
name: container-webapp
enabled: yes
daemon_reload: yes
    - name: Build custom image with Buildah
      command: buildah bud -t myapp:latest -f Containerfile .
      args:
        chdir: /opt/myapp
      register: build_result
      # buildah does not reliably print "Successfully tagged";
      # treat a clean exit as a change
      changed_when: build_result.rc == 0
- name: Push image to local registry
containers.podman.podman_image:
name: myapp
tag: latest
push: yes
push_args:
dest: registry.example.com/myapp:latest
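Podman can also refresh running containers when their image changes upstream. One approach, sketched here, is to label the container for auto-update and enable the auto-update timer (this assumes the container is managed by a generated systemd unit, as shown above):

```yaml
- name: Run a container opted in to registry auto-updates
  containers.podman.podman_container:
    name: webapp
    image: registry.example.com/myapp:latest
    state: started
    label:
      io.containers.autoupdate: registry

- name: Enable the podman auto-update timer
  systemd:
    name: podman-auto-update.timer
    enabled: yes
    state: started
```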
Identity Management Integration
FreeIPA/IdM Client Configuration
---
- name: Join systems to FreeIPA domain
hosts: rhel_servers
become: yes
vars:
ipa_server: ipa.example.com
ipa_domain: example.com
ipa_realm: EXAMPLE.COM
ipa_admin_user: admin
ipa_admin_password: "{{ vault_ipa_admin_password }}"
tasks:
- name: Install IPA client packages
yum:
name:
- ipa-client
- sssd
- sssd-tools
- krb5-workstation
state: present
- name: Check if already joined to domain
stat:
path: /etc/ipa/default.conf
register: ipa_conf
- name: Join system to IPA domain
command: >
ipa-client-install
--server={{ ipa_server }}
--domain={{ ipa_domain }}
--realm={{ ipa_realm }}
--principal={{ ipa_admin_user }}
--password={{ ipa_admin_password }}
--mkhomedir
--unattended
when: not ipa_conf.stat.exists
- name: Configure SSSD for IPA
lineinfile:
path: /etc/sssd/sssd.conf
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
state: present
loop:
- regexp: '^cache_credentials ='
line: 'cache_credentials = True'
- regexp: '^krb5_store_password_if_offline ='
line: 'krb5_store_password_if_offline = True'
notify: restart sssd
- name: Enable automatic home directory creation
lineinfile:
path: /etc/pam.d/system-auth
line: 'session optional pam_mkhomedir.so skel=/etc/skel umask=0077'
insertafter: 'session.*required.*pam_unix.so'
- name: Configure sudo rules from IPA
lineinfile:
path: /etc/nsswitch.conf
regexp: '^sudoers:'
line: 'sudoers: files sss'
- name: Test IPA authentication
command: id admin@{{ ipa_domain }}
register: ipa_test
changed_when: false
failed_when: ipa_test.rc != 0
handlers:
- name: restart sssd
systemd:
name: sssd
state: restarted
Advanced Storage Solutions
Stratis Storage Management
---
- name: Configure Stratis storage
hosts: storage_servers
become: yes
tasks:
- name: Install Stratis packages
yum:
name:
- stratis-cli
- stratisd
state: present
- name: Enable and start stratisd
systemd:
name: stratisd
enabled: yes
state: started
    # Pool creation leaves no device node behind, so guard it with a
    # listing check; filesystems appear under /dev/stratis/<pool>/<fs>.
    - name: Check existing Stratis pools
      command: stratis pool list
      register: stratis_pools
      changed_when: false
    - name: Create Stratis pool
      command: stratis pool create mypool /dev/vdb /dev/vdc
      when: "'mypool' not in stratis_pools.stdout"
    - name: Create Stratis filesystem
      command: stratis filesystem create mypool myfs
      args:
        creates: /dev/stratis/mypool/myfs
    - name: Get filesystem UUID
      shell: blkid /dev/stratis/mypool/myfs | awk -F'"' '{print $2}'
      register: stratis_uuid
      changed_when: false
- name: Mount Stratis filesystem
mount:
path: /mnt/stratis_data
src: "UUID={{ stratis_uuid.stdout }}"
fstype: xfs
opts: defaults,x-systemd.requires=stratisd.service
state: mounted
- name: Add disk to existing pool
command: stratis pool add-data mypool /dev/vdd
register: pool_expand
changed_when: pool_expand.rc == 0
failed_when: false
    - name: Create snapshot
      command: stratis filesystem snapshot mypool myfs myfs_snap
      args:
        creates: /dev/stratis/mypool/myfs_snap
VDO (Virtual Data Optimizer)
---
- name: Configure VDO for deduplication and compression
hosts: storage_servers
become: yes
tasks:
- name: Install VDO packages
yum:
name:
- vdo
- kmod-kvdo
state: present
- name: Create VDO volume
command: >
vdo create
--name=vdo1
--device=/dev/sdb
--vdoLogicalSize=10T
--sparseIndex=enabled
args:
creates: /dev/mapper/vdo1
- name: Format VDO volume
filesystem:
fstype: xfs
dev: /dev/mapper/vdo1
opts: -K
- name: Mount VDO volume
mount:
path: /mnt/vdo_data
src: /dev/mapper/vdo1
fstype: xfs
opts: defaults,x-systemd.requires=vdo.service
state: mounted
- name: Check VDO statistics
command: vdo status --name=vdo1
register: vdo_stats
changed_when: false
- name: Display VDO savings
debug:
msg: "VDO savings: {{ vdo_stats.stdout }}"
Performance Tuning and Optimization
Advanced Tuned Profiles
---
- name: Configure custom tuned profiles
hosts: rhel_servers
become: yes
tasks:
- name: Install tuned
yum:
name: tuned
state: present
- name: Enable and start tuned
systemd:
name: tuned
enabled: yes
state: started
- name: Create custom tuned profile
copy:
dest: /etc/tuned/custom-web-server/tuned.conf
content: |
[main]
summary=Custom profile for web servers
include=throughput-performance
[sysctl]
net.ipv4.tcp_fin_timeout=30
net.ipv4.tcp_tw_reuse=1
net.core.somaxconn=4096
net.core.netdev_max_backlog=5000
vm.swappiness=10
vm.dirty_ratio=40
vm.dirty_background_ratio=10
[vm]
transparent_hugepages=madvise
[cpu]
governor=performance
energy_perf_bias=performance
notify: restart tuned
- name: Activate custom tuned profile
command: tuned-adm profile custom-web-server
register: tuned_activate
changed_when: "'Tuning activated' in tuned_activate.stdout"
- name: Verify active profile
command: tuned-adm active
register: active_profile
changed_when: false
- name: Display active profile
debug:
var: active_profile.stdout
handlers:
- name: restart tuned
systemd:
name: tuned
state: restarted
Kernel Tuning for Database Servers
---
- name: Tune kernel for database workloads
hosts: database_servers
become: yes
tasks:
- name: Configure kernel parameters for databases
sysctl:
name: "{{ item.name }}"
value: "{{ item.value }}"
state: present
reload: yes
loop:
# Memory settings
- { name: 'vm.swappiness', value: '1' }
- { name: 'vm.dirty_ratio', value: '15' }
- { name: 'vm.dirty_background_ratio', value: '5' }
- { name: 'vm.overcommit_memory', value: '2' }
- { name: 'vm.overcommit_ratio', value: '90' }
# Huge pages
- { name: 'vm.nr_hugepages', value: '1024' }
# Networking
- { name: 'net.core.rmem_max', value: '134217728' }
- { name: 'net.core.wmem_max', value: '134217728' }
- { name: 'net.ipv4.tcp_rmem', value: '4096 87380 67108864' }
- { name: 'net.ipv4.tcp_wmem', value: '4096 65536 67108864' }
# Semaphores for database
- { name: 'kernel.sem', value: '250 32000 100 128' }
- { name: 'kernel.shmmax', value: '68719476736' }
- { name: 'kernel.shmall', value: '4294967296' }
    # Note: RHEL 8+ uses multi-queue scheduler names (mq-deadline, not
    # deadline), and this setting does not survive a reboot; add a udev
    # rule or kernel cmdline entry for persistence.
    - name: Set I/O scheduler for database disks
      shell: echo mq-deadline > /sys/block/{{ item }}/queue/scheduler
      loop:
        - sdb
        - sdc
      when: ansible_devices[item] is defined
    # Append to the existing kernel command line instead of overwriting it;
    # backrefs keeps the current options, and the negative lookahead makes
    # the task idempotent.
    - name: Disable transparent huge pages at boot
      lineinfile:
        path: /etc/default/grub
        backrefs: yes
        regexp: '^(GRUB_CMDLINE_LINUX="(?!.*transparent_hugepage).*)"$'
        line: '\1 transparent_hugepage=never"'
      notify: update grub
handlers:
- name: update grub
command: grub2-mkconfig -o /boot/grub2/grub.cfg
Security Hardening
CIS Benchmark Implementation
---
- name: Apply CIS RHEL 8 hardening
hosts: rhel_servers
become: yes
tasks:
# 1. Filesystem Configuration
- name: Ensure separate partition for /tmp
mount:
path: /tmp
src: tmpfs
fstype: tmpfs
opts: defaults,nodev,nosuid,noexec
state: mounted
# 2. Disable unused filesystems
- name: Disable unused filesystems
lineinfile:
path: /etc/modprobe.d/CIS.conf
line: "install {{ item }} /bin/true"
create: yes
loop:
- cramfs
- freevxfs
- jffs2
- hfs
- hfsplus
- udf
# 3. Configure system accounting (auditd)
- name: Install auditd
yum:
name: audit
state: present
- name: Configure audit rules
copy:
dest: /etc/audit/rules.d/cis.rules
content: |
# Monitor date/time modifications
-a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time-change
-a always,exit -F arch=b64 -S clock_settime -k time-change
-w /etc/localtime -p wa -k time-change
# Monitor user/group modifications
-w /etc/group -p wa -k identity
-w /etc/passwd -p wa -k identity
-w /etc/gshadow -p wa -k identity
-w /etc/shadow -p wa -k identity
# Monitor network modifications
-a always,exit -F arch=b64 -S sethostname -S setdomainname -k system-locale
-w /etc/issue -p wa -k system-locale
-w /etc/issue.net -p wa -k system-locale
-w /etc/hosts -p wa -k system-locale
-w /etc/sysconfig/network -p wa -k system-locale
# Monitor privileged commands
-a always,exit -F path=/usr/bin/sudo -F perm=x -F auid>=1000 -F auid!=4294967295 -k privileged
notify: restart auditd
# 4. Configure SSH hardening
- name: Harden SSH configuration
lineinfile:
path: /etc/ssh/sshd_config
regexp: "{{ item.regexp }}"
line: "{{ item.line }}"
loop:
- { regexp: '^#?Protocol', line: 'Protocol 2' } # ignored by modern OpenSSH; kept for CIS scanners
- { regexp: '^#?LogLevel', line: 'LogLevel VERBOSE' }
- { regexp: '^#?X11Forwarding', line: 'X11Forwarding no' }
- { regexp: '^#?MaxAuthTries', line: 'MaxAuthTries 4' }
- { regexp: '^#?IgnoreRhosts', line: 'IgnoreRhosts yes' }
- { regexp: '^#?HostbasedAuthentication', line: 'HostbasedAuthentication no' }
- { regexp: '^#?PermitRootLogin', line: 'PermitRootLogin no' }
- { regexp: '^#?PermitEmptyPasswords', line: 'PermitEmptyPasswords no' }
- { regexp: '^#?PermitUserEnvironment', line: 'PermitUserEnvironment no' }
- { regexp: '^#?ClientAliveInterval', line: 'ClientAliveInterval 300' }
- { regexp: '^#?ClientAliveCountMax', line: 'ClientAliveCountMax 0' }
- { regexp: '^#?LoginGraceTime', line: 'LoginGraceTime 60' }
- { regexp: '^#?MACs', line: 'MACs hmac-sha2-512,hmac-sha2-256' }
notify: restart sshd
    # 5. Password policy
    - name: Configure password quality requirements
      lineinfile:
        path: /etc/security/pwquality.conf
        regexp: "{{ item.regexp }}"
        line: "{{ item.line }}"
      loop:
        - { regexp: '^#?\s*minlen', line: 'minlen = 14' }
        - { regexp: '^#?\s*dcredit', line: 'dcredit = -1' }
        - { regexp: '^#?\s*ucredit', line: 'ucredit = -1' }
        - { regexp: '^#?\s*ocredit', line: 'ocredit = -1' }
        - { regexp: '^#?\s*lcredit', line: 'lcredit = -1' }
    # 6. Account lockout policy
    - name: Configure account lockout
      lineinfile:
        path: /etc/pam.d/password-auth
        line: 'auth required pam_faillock.so preauth silent audit deny=5 unlock_time=900'
        insertbefore: '^auth\s+sufficient'
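Editing `/etc/pam.d/password-auth` directly is fragile on RHEL 8 and later, where authselect owns that file and can overwrite manual changes on its next run. A hedged alternative for those releases is to enable faillock through authselect itself:

```yaml
    # Sketch for RHEL 8+: let authselect manage pam_faillock rather than
    # editing password-auth by hand. deny/unlock_time tunables then belong
    # in /etc/security/faillock.conf instead of the PAM stack.
    - name: Enable faillock via authselect (RHEL 8+)
      command: authselect enable-feature with-faillock
      when: ansible_distribution_major_version | int >= 8
```

The `lineinfile` approach above remains appropriate for RHEL 7, which predates authselect.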
    # 7. Disable core dumps
    - name: Disable core dumps
      pam_limits:
        domain: '*'
        limit_type: hard
        limit_item: core
        value: '0'
    - name: Disable core dumps in sysctl
      sysctl:
        name: fs.suid_dumpable
        value: '0'
        state: present
  handlers:
    - name: restart auditd
      service:
        name: auditd
        state: restarted
    - name: restart sshd
      systemd:
        name: sshd
        state: restarted
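Hardening a host and verifying it are two different things. One hedged way to confirm the result against a benchmark is an OpenSCAP scan; the profile ID and data-stream path below are typical for RHEL 8 with `scap-security-guide` installed, but both vary by release, so check what actually ships on your systems:

```yaml
    - name: Install OpenSCAP scanner and SCAP content
      yum:
        name:
          - openscap-scanner
          - scap-security-guide
        state: present
    - name: Run a CIS compliance scan
      command: >
        oscap xccdf eval
        --profile xccdf_org.ssgproject.content_profile_cis
        --report /tmp/cis-report.html
        /usr/share/xml/scap/ssg/content/ssg-rhel8-ds.xml
      register: scan_result
      changed_when: false
      failed_when: scan_result.rc not in [0, 2]  # rc 2 = scan ran, some rules failed
```

`oscap xccdf eval` exits 2 when the scan completes but some rules fail, which is usually useful information rather than a task error, hence the `failed_when` override.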
Disaster Recovery and Backup
System Backup with ReaR
---
- name: Configure Relax-and-Recover (ReaR) backup
  hosts: rhel_servers
  become: yes
  vars:
    backup_server: backup.example.com
    backup_path: /backup/{{ inventory_hostname }}
  tasks:
    - name: Install ReaR
      yum:
        name: rear
        state: present
    - name: Configure ReaR for network backup
      copy:
        dest: /etc/rear/local.conf
        content: |
          OUTPUT=ISO
          BACKUP=NETFS
          BACKUP_URL=nfs://{{ backup_server }}{{ backup_path }}
          BACKUP_PROG_EXCLUDE=( '/tmp/*' '/var/tmp/*' '/var/crash/*' )
          # NETFS_KEEP_OLD_BACKUP_COPY is omitted: ReaR refuses to combine it
          # with BACKUP_TYPE=incremental
          AUTOEXCLUDE_MULTIPATH=n
          BACKUP_TYPE=incremental
          FULLBACKUPDAY="Sat"
    - name: Create backup directory on NFS server
      file:
        path: "{{ backup_path }}"
        state: directory
        mode: '0755'
      delegate_to: "{{ backup_server }}"
    - name: Perform initial backup
      command: rear -v mkbackup
      async: 3600
      poll: 0
      register: backup_job
    - name: Create backup cron job
      cron:
        name: "ReaR system backup"
        minute: "0"
        hour: "2"
        weekday: "0,3,6"  # Sunday, Wednesday, Saturday
        job: "/usr/sbin/rear mkbackup"
        user: root
    # 'rear recover' wipes and rebuilds the system, so it must only ever be
    # run from the ReaR rescue media, never on a live host. To validate the
    # backup from a running system, check the saved disk layout instead.
    - name: Verify saved disk layout still matches the system
      command: rear -v checklayout
      register: layout_check
      changed_when: false
      failed_when: layout_check.rc > 1
      when: test_restore | default(false)
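Because the initial backup is launched with `async` and `poll: 0`, the play finishes without knowing whether it succeeded. A hedged follow-up is to poll the job with `async_status`, reusing the `backup_job` register from the task above:

```yaml
    # Sketch: wait up to an hour for the fire-and-forget backup started
    # above. Assumes the 'backup_job' register from 'Perform initial backup'.
    - name: Wait for ReaR backup to finish
      async_status:
        jid: "{{ backup_job.ansible_job_id }}"
      register: backup_result
      until: backup_result.finished
      retries: 60
      delay: 60
```

Without a check like this, a failed initial backup only surfaces when a restore is attempted, which is the worst possible time to find out.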
Migration and Upgrade
In-Place Upgrade from RHEL 7 to RHEL 8
---
- name: Perform RHEL 7 to 8 upgrade
  hosts: rhel7_servers
  become: yes
  serial: 1
  tasks:
    - name: Verify RHEL 7 version
      assert:
        that:
          - ansible_distribution == 'RedHat'
          - ansible_distribution_major_version == '7'
        msg: "This playbook is only for RHEL 7 systems"
    - name: Install leapp upgrade tooling
      yum:
        name: leapp-upgrade
        state: present
    - name: Run pre-upgrade assessment
      command: leapp preupgrade
      register: preupgrade_result
      failed_when: false
    - name: Display pre-upgrade report
      debug:
        var: preupgrade_result.stdout_lines
    - name: Check for inhibitors
      shell: grep -i inhibitor /var/log/leapp/leapp-report.txt
      register: inhibitors
      failed_when: false
      changed_when: false
    - name: Fail if inhibitors found
      fail:
        msg: "Upgrade inhibitors found. Please review /var/log/leapp/leapp-report.txt"
      when: inhibitors.rc == 0
    - name: Create pre-upgrade backup
      command: rear mkbackup
      async: 3600
      poll: 60
    - name: Perform upgrade
      command: leapp upgrade
      async: 7200
      poll: 60
      register: upgrade_result
    - name: Reboot to complete upgrade
      reboot:
        reboot_timeout: 1800
        msg: "Rebooting to complete RHEL 8 upgrade"
    - name: Wait for system to come back
      wait_for_connection:
        delay: 60
        timeout: 600
    # Facts gathered at play start still report RHEL 7; refresh them before
    # asserting on the new release.
    - name: Re-gather facts after the upgrade
      setup:
    - name: Verify upgrade success
      assert:
        that:
          - ansible_distribution_major_version == '8'
        msg: "Upgrade to RHEL 8 failed"
    - name: Clean up upgrade artifacts
      file:
        path: "{{ item }}"
        state: absent
      loop:
        - /root/tmp_leapp_py3
        - /var/log/leapp
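Not every item the pre-upgrade report raises requires a code or config fix; some only need an explicit confirmation, which is recorded with `leapp answer` before re-running the assessment. The section name below is one example leapp has asked about in practice; substitute whatever question `/var/log/leapp/leapp-report.txt` actually poses on your systems:

```yaml
    # Sketch: confirm a leapp question flagged by the pre-upgrade report.
    # The section name is an example; take the real one from leapp-report.txt.
    - name: Answer a leapp pre-upgrade question
      command: leapp answer --section remove_pam_pkcs11_module_check.confirm=True
```

After answering, `leapp preupgrade` should be run again to confirm the inhibitor is cleared before attempting the upgrade.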
Monitoring and Logging
Centralized Logging with rsyslog
---
- name: Configure centralized logging
  hosts: rhel_servers
  become: yes
  vars:
    log_server: logserver.example.com
    log_port: 514
  tasks:
    - name: Install rsyslog
      yum:
        name: rsyslog
        state: present
    - name: Configure rsyslog client
      copy:
        dest: /etc/rsyslog.d/remote.conf
        content: |
          # Local disk queue for reliability; these legacy directives apply
          # to the next action, so they must precede the forwarding rule
          $ActionQueueFileName queue
          $ActionQueueMaxDiskSpace 1g
          $ActionQueueSaveOnShutdown on
          $ActionQueueType LinkedList
          $ActionResumeRetryCount -1
          # Send all logs to the central server (@@ = TCP)
          *.* @@{{ log_server }}:{{ log_port }}
    - name: Enable and restart rsyslog
      systemd:
        name: rsyslog
        enabled: yes
        state: restarted
    - name: Open the syslog port on the log server
      firewalld:
        port: "{{ log_port }}/tcp"
        permanent: yes
        state: enabled
        immediate: yes
      delegate_to: "{{ log_server }}"
      run_once: yes  # the delegated change is the same for every host in the play
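A quick end-to-end check is to emit a tagged message on a client and confirm it arrives on the central server. This is a sketch: the `/var/log/messages` path assumes the server writes remote logs there, so adjust it to match the server's own rsyslog rules.

```yaml
    - name: Emit a test log message
      command: logger -t ansible-logtest "centralized logging check from {{ inventory_hostname }}"
      changed_when: false
    - name: Confirm the message arrived on the log server
      command: grep -q ansible-logtest /var/log/messages
      delegate_to: "{{ log_server }}"
      register: log_check
      until: log_check.rc == 0
      retries: 5
      delay: 3
      changed_when: false
```

The retry loop allows for queueing delay; if the message never lands, the forwarding rule, the firewall, or the server's input configuration is the place to look.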
Best Practices Summary
RHEL Enterprise Automation Best Practices
- Subscription Management: Use Satellite/Foreman for centralized management
- Patching: Implement CVE-based patching with proper testing windows
- Security: Enable SELinux in enforcing mode, never disable it
- Compliance: Use OpenSCAP for automated compliance scanning
- High Availability: Use Pacemaker for critical services
- System Roles: Leverage official RHEL System Roles for standardization
- Containers: Use Podman for rootless container management
- Storage: Use Stratis and VDO for modern storage management
- Monitoring: Integrate with Red Hat Insights for proactive issue detection
- Backup: Implement ReaR for disaster recovery
- Performance: Use tuned profiles for workload optimization
- Identity: Integrate with FreeIPA/IdM for centralized authentication