Ansible for Kubernetes: Complete Deployment Guide
Teach me Ansible | 2025-02-05 | 30 min read
Master Kubernetes automation with Ansible. This comprehensive guide covers deploying, managing, and orchestrating Kubernetes clusters and applications using Ansible's kubernetes.core collection.
Why Use Ansible with Kubernetes?
While Kubernetes has its own declarative model with YAML manifests, Ansible brings powerful advantages:
- Unified Automation - Manage infrastructure, Kubernetes, and applications with one tool
- Templating Power - Jinja2 templates for complex, reusable configurations
- Secret Management - Ansible Vault for secure credential handling
- Orchestration - Complex deployment workflows across multiple clusters
- Idempotency - Safe, repeatable operations
- Facts Gathering - Query cluster state before making changes
Prerequisites
Install Required Tools
# Install Ansible
pip install ansible
# Install Kubernetes Python client
pip install kubernetes
# Install the OpenShift Python client (optional, only needed for OpenShift clusters)
pip install openshift
# Verify kubectl is installed
kubectl version --client
Install kubernetes.core Collection
# Install from Ansible Galaxy
ansible-galaxy collection install kubernetes.core
# Or specify version in requirements.yml
cat > requirements.yml << EOF
collections:
- name: kubernetes.core
version: ">=2.4.0"
EOF
ansible-galaxy collection install -r requirements.yml
Connecting to Kubernetes
Using kubeconfig
---
- name: Deploy to Kubernetes
hosts: localhost
gather_facts: no
vars:
kubeconfig: ~/.kube/config
context: my-cluster
tasks:
- name: Get cluster info
kubernetes.core.k8s_cluster_info:
kubeconfig: "{{ kubeconfig }}"
context: "{{ context }}"
register: cluster_info
- name: Show cluster version
debug:
msg: "Kubernetes version: {{ cluster_info.version.server.kubernetes.gitVersion }}"
Using In-Cluster Configuration
# When running from within a Kubernetes pod
- name: Use in-cluster auth
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ConfigMap
metadata:
name: my-config
namespace: default
data:
key: value
# No kubeconfig needed when running in cluster
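Outside a cluster you can also authenticate with an explicit API endpoint and token instead of a kubeconfig, using the module's `host` and `api_key` parameters. A minimal sketch; the endpoint URL and the `vault_k8s_token` variable are placeholders:

```yaml
- name: Use explicit token auth
  kubernetes.core.k8s:
    host: https://k8s-api.example.com:6443   # hypothetical API server URL
    api_key: "{{ vault_k8s_token }}"         # service-account token, e.g. from Ansible Vault
    validate_certs: yes
    state: present
    definition:
      apiVersion: v1
      kind: ConfigMap
      metadata:
        name: my-config
        namespace: default
      data:
        key: value
```

This is handy in CI pipelines where distributing a full kubeconfig is undesirable.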
Managing Kubernetes Resources
Deploying Namespaces
---
- name: Create namespaces
hosts: localhost
tasks:
- name: Create development namespace
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: development
labels:
environment: dev
- name: Create production namespace
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: production
labels:
environment: prod
Deploying ConfigMaps and Secrets
- name: Create ConfigMap
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: ConfigMap
metadata:
name: app-config
namespace: development
data:
database.host: postgres.development.svc.cluster.local
database.port: "5432"
app.debug: "true"
- name: Create Secret from Ansible Vault
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Secret
metadata:
name: app-secrets
namespace: development
type: Opaque
stringData:
database.password: "{{ vault_db_password }}"
api.key: "{{ vault_api_key }}"
Deploying Deployments
- name: Deploy application
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
namespace: development
labels:
app: webapp
spec:
replicas: 3
selector:
matchLabels:
app: webapp
template:
metadata:
labels:
app: webapp
spec:
containers:
- name: webapp
image: nginx:1.21
ports:
- containerPort: 80
env:
- name: DATABASE_HOST
valueFrom:
configMapKeyRef:
name: app-config
key: database.host
- name: DATABASE_PASSWORD
valueFrom:
secretKeyRef:
name: app-secrets
key: database.password
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
Creating Services
- name: Create Service
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: webapp-service
namespace: development
spec:
type: LoadBalancer
selector:
app: webapp
ports:
- protocol: TCP
port: 80
targetPort: 80
Deploying Ingress
- name: Create Ingress
kubernetes.core.k8s:
state: present
definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: webapp-ingress
namespace: development
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
spec:
ingressClassName: nginx
tls:
- hosts:
- webapp.example.com
secretName: webapp-tls
rules:
- host: webapp.example.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: webapp-service
port:
number: 80
Using YAML Files
Apply Manifest Files
- name: Apply Kubernetes manifests
kubernetes.core.k8s:
state: present
src: /path/to/manifest.yaml
namespace: development
- name: Apply multiple manifests
kubernetes.core.k8s:
state: present
src: "{{ item }}"
namespace: development
loop:
- deployment.yaml
- service.yaml
- ingress.yaml
Using Templates
Create a Jinja2 template templates/deployment.yaml.j2:
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ app_name }}
namespace: {{ namespace }}
labels:
app: {{ app_name }}
environment: {{ environment }}
spec:
replicas: {{ replicas }}
selector:
matchLabels:
app: {{ app_name }}
template:
metadata:
labels:
app: {{ app_name }}
spec:
containers:
- name: {{ app_name }}
image: {{ image }}:{{ image_tag }}
ports:
- containerPort: {{ container_port }}
env:
{% for key, value in env_vars.items() %}
- name: {{ key }}
value: "{{ value }}"
{% endfor %}
Use in playbook:
- name: Deploy from template
kubernetes.core.k8s:
state: present
template: deployment.yaml.j2
namespace: "{{ namespace }}"
vars:
app_name: myapp
namespace: production
environment: prod
replicas: 5
image: myregistry.io/myapp
image_tag: v1.2.3
container_port: 8080
env_vars:
DATABASE_HOST: postgres-prod
CACHE_HOST: redis-prod
LOG_LEVEL: info
Querying Kubernetes Resources
Get Resource Information
- name: Get all pods
kubernetes.core.k8s_info:
kind: Pod
namespace: development
register: pod_list
- name: Show pod names
debug:
msg: "{{ item.metadata.name }}"
loop: "{{ pod_list.resources }}"
- name: Get specific deployment
kubernetes.core.k8s_info:
kind: Deployment
name: webapp
namespace: development
register: deployment
- name: Show deployment status
debug:
msg: "Replicas: {{ deployment.resources[0].status.replicas | default(0) }}, Available: {{ deployment.resources[0].status.availableReplicas | default(0) }}"
Wait for Resources
- name: Deploy application
kubernetes.core.k8s:
state: present
src: deployment.yaml
wait: yes
wait_timeout: 300
wait_condition:
type: Available
status: "True"
- name: Wait for job completion
kubernetes.core.k8s_info:
kind: Job
name: data-migration
namespace: production
wait: yes
wait_timeout: 600
wait_condition:
type: Complete
status: "True"
Scaling Applications
- name: Scale deployment up
kubernetes.core.k8s_scale:
kind: Deployment
name: webapp
namespace: production
replicas: 10
wait: yes
# Note: ansible_date_time is only defined when facts are gathered (gather_facts: yes)
- name: Scale down during off-peak
kubernetes.core.k8s_scale:
kind: Deployment
name: webapp
namespace: production
replicas: 2
when: ansible_date_time.hour|int < 6 or ansible_date_time.hour|int > 22
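For continuous, load-driven scaling, a HorizontalPodAutoscaler is a declarative alternative to scheduled k8s_scale tasks. A sketch using the autoscaling/v2 API; the CPU target and replica bounds are illustrative:

```yaml
- name: Create HorizontalPodAutoscaler
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: autoscaling/v2
      kind: HorizontalPodAutoscaler
      metadata:
        name: webapp-hpa
        namespace: production
      spec:
        scaleTargetRef:
          apiVersion: apps/v1
          kind: Deployment
          name: webapp
        minReplicas: 2
        maxReplicas: 10
        metrics:
          - type: Resource
            resource:
              name: cpu
              target:
                type: Utilization
                averageUtilization: 70
```

Note that manual k8s_scale changes to a deployment managed by an HPA will be overridden by the autoscaler.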
Rolling Updates and Rollbacks
- name: Update application image
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
namespace: production
spec:
template:
spec:
containers:
- name: webapp
image: myapp:v2.0.0
wait: yes
wait_condition:
type: Progressing
status: "True"
reason: NewReplicaSetAvailable
- name: Check rollout status
kubernetes.core.k8s_info:
kind: Deployment
name: webapp
namespace: production
register: deployment
- name: Rollback if failed
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: webapp
namespace: production
spec:
template:
spec:
containers:
- name: webapp
image: myapp:v1.9.0
when: deployment.resources[0].status.conditions | selectattr('type', 'equalto', 'Progressing') | map(attribute='status') | first != 'True'
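Instead of re-applying the previous image by hand, the collection also ships a dedicated rollback module. A sketch, assuming your installed kubernetes.core version includes kubernetes.core.k8s_rollback:

```yaml
- name: Roll back webapp to the previous revision
  kubernetes.core.k8s_rollback:
    api_version: apps/v1
    kind: Deployment
    name: webapp
    namespace: production
```

This mirrors kubectl rollout undo and avoids hard-coding the previous image tag in the playbook.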
Managing Persistent Storage
- name: Create PersistentVolumeClaim
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: postgres-pvc
namespace: production
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 10Gi
storageClassName: fast-ssd
- name: Deploy StatefulSet with storage
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: postgres
namespace: production
spec:
serviceName: postgres
replicas: 1
selector:
matchLabels:
app: postgres
template:
metadata:
labels:
app: postgres
spec:
containers:
- name: postgres
image: postgres:14
ports:
- containerPort: 5432
volumeMounts:
- name: data
mountPath: /var/lib/postgresql/data
env:
- name: POSTGRES_PASSWORD
valueFrom:
secretKeyRef:
name: postgres-secret
key: password
volumeClaimTemplates:
- metadata:
name: data
spec:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 10Gi
storageClassName: fast-ssd
Helm Integration
- name: Add Helm repository
kubernetes.core.helm_repository:
name: bitnami
repo_url: https://charts.bitnami.com/bitnami
- name: Deploy PostgreSQL via Helm
kubernetes.core.helm:
name: postgres
chart_ref: bitnami/postgresql
release_namespace: production
create_namespace: yes
values:
auth:
username: myuser
password: "{{ vault_postgres_password }}"
database: myapp
primary:
persistence:
size: 20Gi
metrics:
enabled: true
- name: Update Helm release
kubernetes.core.helm:
name: postgres
chart_ref: bitnami/postgresql
chart_version: 12.1.0
release_namespace: production
values:
primary:
persistence:
size: 50Gi
- name: Uninstall Helm release
kubernetes.core.helm:
name: postgres
release_namespace: production
state: absent
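Before upgrading or removing a release, you can inspect its current state with kubernetes.core.helm_info. A sketch; the `default('not installed')` fallback handles the case where the release does not exist:

```yaml
- name: Get release status
  kubernetes.core.helm_info:
    name: postgres
    release_namespace: production
  register: postgres_release

- name: Show release status
  debug:
    msg: "Status: {{ postgres_release.status.status | default('not installed') }}"
```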
Complete Deployment Example
Full playbook for deploying a 3-tier application:
---
- name: Deploy 3-tier application to Kubernetes
hosts: localhost
gather_facts: no
vars:
namespace: myapp-prod
app_version: "1.2.3"
tasks:
- name: Create namespace
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Namespace
metadata:
name: "{{ namespace }}"
labels:
environment: production
- name: Deploy PostgreSQL
kubernetes.core.helm:
name: postgres
chart_ref: bitnami/postgresql
release_namespace: "{{ namespace }}"
values:
auth:
username: appuser
password: "{{ vault_db_password }}"
database: appdb
primary:
persistence:
size: 20Gi
- name: Create Redis deployment
kubernetes.core.k8s:
state: present
definition:
apiVersion: apps/v1
kind: Deployment
metadata:
name: redis
namespace: "{{ namespace }}"
spec:
replicas: 1
selector:
matchLabels:
app: redis
template:
metadata:
labels:
app: redis
spec:
containers:
- name: redis
image: redis:7-alpine
ports:
- containerPort: 6379
- name: Create Redis service
kubernetes.core.k8s:
state: present
definition:
apiVersion: v1
kind: Service
metadata:
name: redis
namespace: "{{ namespace }}"
spec:
selector:
app: redis
ports:
- port: 6379
targetPort: 6379
- name: Deploy backend API
kubernetes.core.k8s:
state: present
template: backend-deployment.yaml.j2
namespace: "{{ namespace }}"
vars:
replicas: 3
image_tag: "{{ app_version }}"
- name: Deploy frontend
kubernetes.core.k8s:
state: present
template: frontend-deployment.yaml.j2
namespace: "{{ namespace }}"
vars:
replicas: 2
image_tag: "{{ app_version }}"
- name: Create Ingress
kubernetes.core.k8s:
state: present
definition:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: app-ingress
namespace: "{{ namespace }}"
annotations:
cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
ingressClassName: nginx
tls:
- hosts:
- myapp.example.com
secretName: myapp-tls
rules:
- host: myapp.example.com
http:
paths:
- path: /api
pathType: Prefix
backend:
service:
name: backend
port:
number: 8080
- path: /
pathType: Prefix
backend:
service:
name: frontend
port:
number: 80
- name: Wait for deployment
kubernetes.core.k8s_info:
kind: Deployment
namespace: "{{ namespace }}"
label_selectors:
- app in (backend, frontend)
wait: yes
wait_timeout: 300
- name: Verify deployment
kubernetes.core.k8s_info:
kind: Pod
namespace: "{{ namespace }}"
register: pods
- name: Show deployed pods
debug:
msg: "{{ pods.resources | length }} pods running"
Best Practices
1. Use Inventory for Multiple Clusters
# inventory/kubernetes
[development]
dev-cluster ansible_connection=local kubeconfig=~/.kube/dev-config context=dev-context
[production]
prod-cluster ansible_connection=local kubeconfig=~/.kube/prod-config context=prod-context
[production:vars]
replicas=5
[development:vars]
replicas=1
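A playbook can then target whichever cluster group you choose, picking up the per-host kubeconfig, context, and replica count from the inventory. A sketch (the "hosts" are clusters, not SSH targets, so the connection is local):

```yaml
- name: Deploy per cluster
  hosts: production        # or development; select with --limit
  connection: local
  gather_facts: no
  tasks:
    - name: Verify cluster connectivity
      kubernetes.core.k8s_cluster_info:
        kubeconfig: "{{ kubeconfig }}"
        context: "{{ context }}"

    - name: Scale webapp to the group-level replica count
      kubernetes.core.k8s_scale:
        kubeconfig: "{{ kubeconfig }}"
        context: "{{ context }}"
        kind: Deployment
        name: webapp
        namespace: production
        replicas: "{{ replicas }}"   # 5 in production, 1 in development
```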
2. Organize with Roles
roles/
├── k8s-namespace/
├── k8s-database/
├── k8s-backend/
└── k8s-frontend/
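A top-level site.yml then wires the roles together in dependency order (role names match the tree above):

```yaml
---
- name: Deploy full stack
  hosts: localhost
  gather_facts: no
  roles:
    - k8s-namespace
    - k8s-database
    - k8s-backend
    - k8s-frontend
```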
3. Use Tags for Selective Deployment
- name: Deploy database
kubernetes.core.k8s:
state: present
src: database.yaml
tags: [database, infra]
- name: Deploy application
kubernetes.core.k8s:
state: present
src: app.yaml
tags: [application, app]
# Run only database deployment
# ansible-playbook site.yml --tags database
4. Implement Health Checks
- name: Check pod health
kubernetes.core.k8s_info:
kind: Pod
namespace: production
label_selectors:
- app=webapp
register: pods
- name: Verify all pods are healthy
assert:
that:
- pods.resources | selectattr('status.phase', 'equalto', 'Running') | list | length == pods.resources | length
fail_msg: "Not all pods are running"
Conclusion
Ansible provides powerful automation capabilities for Kubernetes, enabling you to:
- Deploy and manage complex applications with simple playbooks
- Template Kubernetes manifests for different environments
- Integrate with Helm for package management
- Query and validate cluster state
- Orchestrate multi-cluster deployments
Combine Ansible's orchestration power with Kubernetes' container platform to build robust, automated deployment pipelines.
Next Steps
Explore our Kubernetes topic page for more examples, or try deploying to a cluster in our Interactive Playground.