Kyverno: Policy-as-Code for Kubernetes in AWS EKS - Features, Deployment, and Comparison with Alternatives
READER BEWARE: THE FOLLOWING WAS WRITTEN ENTIRELY BY AI WITHOUT HUMAN EDITING.
Introduction
Kubernetes has become the de facto standard for container orchestration, but with great power comes great responsibility. Managing security, compliance, and operational policies across Kubernetes clusters is a significant challenge, especially in production environments like AWS EKS (Elastic Kubernetes Service). Enter Kyverno, a cloud-native policy engine designed specifically for Kubernetes that simplifies policy management without requiring specialized policy languages.
This comprehensive guide explores Kyverno’s features, how to leverage it in AWS EKS environments, and how it compares to alternative solutions like OPA/Gatekeeper, Kubewarden, and Polaris. We’ll cover everything from basic installation to advanced policy patterns, monitoring, and best practices for production deployments.
What is Kyverno?
Kyverno (Greek for “govern”) is a policy engine designed specifically for Kubernetes. Unlike traditional policy engines that require learning specialized query languages, Kyverno policies are written in YAML and use familiar Kubernetes resource patterns, making it accessible to all Kubernetes users.
Core Principles
- Kubernetes-Native: Policies are Kubernetes resources defined with Custom Resource Definitions (CRDs)
- No New Language: Write policies in YAML using familiar Kubernetes syntax
- Declarative: Define what you want, not how to achieve it
- Dynamic: Policies can mutate, validate, and generate resources
- GitOps-Friendly: Policies are stored as code and managed through standard Kubernetes workflows
Key Features
Kyverno provides four primary policy capabilities:
- Validation: Verify resources match required patterns and reject non-compliant resources
- Mutation: Modify resources on-the-fly before they’re created (e.g., add default labels)
- Generation: Automatically create supporting resources (e.g., NetworkPolicies, LimitRanges)
- Verification: Validate image signatures and attestations for supply chain security
Official Website: https://kyverno.io
Official Documentation: https://kyverno.io/docs/
Why Kyverno for AWS EKS?
AWS EKS provides a managed Kubernetes control plane, but cluster administrators still need to enforce policies for security, compliance, and operational best practices. Kyverno is particularly well-suited for EKS environments because:
1. Native Kubernetes Integration
Kyverno integrates seamlessly with EKS clusters without requiring external dependencies or separate policy servers. It operates as an admission controller within the cluster.
2. AWS-Specific Policy Use Cases
- Enforce AWS resource tagging standards on Kubernetes resources
- Ensure workloads use AWS IAM roles for service accounts (IRSA)
- Validate EBS volume configurations for cost optimization
- Enforce AWS security best practices (e.g., IMDSv2)
3. Compliance and Governance
- Implement Pod Security Standards (PSS), replacing the deprecated and now-removed PodSecurityPolicy API
- Enforce organizational security requirements
- Audit cluster configurations for compliance frameworks (CIS, NIST, PCI-DSS)
4. GitOps Compatibility
Kyverno policies can be managed alongside application manifests in Git, enabling consistent policy deployment across multiple EKS clusters using tools like ArgoCD or Flux.
5. Multi-Cluster Management
For organizations running multiple EKS clusters across regions or accounts, Kyverno provides consistent policy enforcement with centralized policy management.
Kyverno Architecture
Understanding Kyverno’s architecture helps in deploying and troubleshooting it effectively.
┌─────────────────────────────────────────────────────────────────┐
│ Kubernetes API Server │
└────────────────────────────┬────────────────────────────────────┘
│
Admission Webhook
│
┌────────────────────────────▼────────────────────────────────────┐
│ Kyverno Admission Controller │
│ ┌────────────┐ ┌──────────────┐ ┌────────────────────────┐ │
│ │ Validation │ │ Mutation │ │ Generation │ │
│ │ Engine │ │ Engine │ │ Engine │ │
│ └────────────┘ └──────────────┘ └────────────────────────┘ │
│ │
│ ┌───────────────────────────────────────────────────────────┐ │
│ │ Policy Evaluation & Enforcement │ │
│ └───────────────────────────────────────────────────────────┘ │
└──────────────────────────────────────────────────────────────────┘
│
│ Reads Policies
▼
┌─────────────────────────────────────────────────────────────────┐
│ Policy Resources (CRDs) │
│ • ClusterPolicy • Policy • PolicyReport │
│ • ClusterPolicyReport • AdmissionReport │
└─────────────────────────────────────────────────────────────────┘
Components
- Admission Controller: Intercepts resource creation/update requests
- Policy Engine: Evaluates resources against defined policies
- Background Controller: Scans existing resources for compliance
- Reports Server: Generates compliance reports
- Cleanup Controller: Manages resource lifecycle based on policies
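The cleanup controller is driven by its own policy resources rather than admission rules. As a minimal sketch, the following ClusterCleanupPolicy deletes Jobs that have explicitly opted in via a label, on an hourly schedule (the API version and the target variable syntax can differ slightly between Kyverno releases, and the label key here is purely illustrative):
apiVersion: kyverno.io/v2
kind: ClusterCleanupPolicy
metadata:
  name: cleanup-flagged-jobs
spec:
  # Evaluate the cleanup rule every hour
  schedule: "0 * * * *"
  match:
    any:
    - resources:
        kinds:
          - Job
  conditions:
    all:
    # Only delete Jobs that opted in via a label (illustrative key)
    - key: "{{ target.metadata.labels.\"example.com/expendable\" || '' }}"
      operator: Equals
      value: "true"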
Installing Kyverno in AWS EKS
Prerequisites
Before installing Kyverno on AWS EKS, ensure you have:
- An AWS EKS cluster (version 1.24 or higher recommended)
- kubectl configured to access your cluster
- Helm 3.x installed
- Cluster admin permissions
Installation Method 1: Helm (Recommended)
Helm provides the most flexible installation method with easy upgrade paths.
# Add the Kyverno Helm repository
helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update
# Create namespace for Kyverno
kubectl create namespace kyverno
# Install Kyverno
helm install kyverno kyverno/kyverno \
--namespace kyverno \
--create-namespace \
--set admissionController.replicas=3 \
--set backgroundController.replicas=2 \
--set cleanupController.replicas=2 \
--set reportsController.replicas=2
Installation Method 2: YAML Manifests
For simpler deployments or air-gapped environments:
# Install latest stable release
kubectl create -f https://github.com/kyverno/kyverno/releases/download/v1.11.0/install.yaml
Production-Grade Helm Values for AWS EKS
For production EKS clusters, use this comprehensive Helm values configuration:
# kyverno-values.yaml
admissionController:
# High availability setup
  replicas: 3
  # Service account annotations for AWS IRSA (optional)
  serviceAccount:
    annotations:
      eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_ID:role/kyverno-role
# Resource allocation
resources:
limits:
cpu: 1000m
memory: 1Gi
requests:
cpu: 500m
memory: 512Mi
# Pod disruption budget for availability
podDisruptionBudget:
enabled: true
minAvailable: 2
# Affinity for spreading across nodes
podAntiAffinity: soft
# Enable high availability for webhook
webhooks:
timeoutSeconds: 10
failurePolicy: Fail
backgroundController:
replicas: 2
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
reportsController:
replicas: 2
resources:
limits:
cpu: 500m
memory: 512Mi
requests:
cpu: 250m
memory: 256Mi
cleanupController:
replicas: 2
resources:
limits:
cpu: 200m
memory: 256Mi
requests:
cpu: 100m
memory: 128Mi
# Enable metrics for monitoring
metricsService:
enabled: true
type: ClusterIP
port: 8000
# Configure policy reports
policyReports:
enabled: true
# Security context
securityContext:
runAsNonRoot: true
runAsUser: 10001
seccompProfile:
type: RuntimeDefault
# Node selection for EKS node groups
nodeSelector:
kubernetes.io/os: linux
# Tolerations for dedicated node groups (optional)
tolerations:
- key: "kyverno"
operator: "Equal"
value: "true"
effect: "NoSchedule"
# Enable NetworkPolicy (requires CNI support)
networkPolicy:
enabled: true
# Pod annotations for monitoring
podAnnotations:
prometheus.io/scrape: "true"
prometheus.io/port: "8000"
Install with custom values:
helm install kyverno kyverno/kyverno \
--namespace kyverno \
--create-namespace \
--values kyverno-values.yaml
Verification
Verify the installation:
# Check Kyverno pods are running
kubectl get pods -n kyverno
# Expected output shows all controllers running:
# NAME READY STATUS RESTARTS AGE
# kyverno-admission-controller-xyz-123 1/1 Running 0 2m
# kyverno-admission-controller-xyz-456 1/1 Running 0 2m
# kyverno-admission-controller-xyz-789 1/1 Running 0 2m
# kyverno-background-controller-abc-111 1/1 Running 0 2m
# kyverno-background-controller-abc-222 1/1 Running 0 2m
# kyverno-cleanup-controller-def-333 1/1 Running 0 2m
# kyverno-cleanup-controller-def-444 1/1 Running 0 2m
# kyverno-reports-controller-ghi-444 1/1 Running 0 2m
# kyverno-reports-controller-ghi-555 1/1 Running 0 2m
# Check Kyverno webhook configurations
kubectl get validatingwebhookconfigurations | grep kyverno
kubectl get mutatingwebhookconfigurations | grep kyverno
# Verify Kyverno CRDs are installed
kubectl get crd | grep kyverno
# Check Kyverno version
kubectl get deployment -n kyverno kyverno-admission-controller \
-o jsonpath='{.spec.template.spec.containers[0].image}'
Kyverno Policy Types and Examples
Kyverno supports two policy scopes:
- ClusterPolicy: Applies cluster-wide to all namespaces
- Policy: Applies only to the namespace where it’s defined
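All of the examples that follow use ClusterPolicy. For contrast, a minimal namespaced Policy looks the same apart from its kind and namespace (the name and namespace below are illustrative):
apiVersion: kyverno.io/v1
kind: Policy
metadata:
  name: require-team-label
  namespace: team-a
spec:
  validationFailureAction: Audit
  rules:
    - name: check-team-label
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        message: "Pods in team-a must carry a team label"
        pattern:
          metadata:
            labels:
              team: "?*"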
Policy Structure
Every Kyverno policy follows this structure:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: policy-name
annotations:
policies.kyverno.io/title: Human-Readable Title
policies.kyverno.io/category: Security
policies.kyverno.io/severity: high
policies.kyverno.io/description: |
Detailed policy description
spec:
# Validation mode: Audit or Enforce
validationFailureAction: Enforce
# Apply to background scanning
background: true
# Failure policy for admission webhook
failurePolicy: Fail
# Policy rules
rules:
- name: rule-name
# Resource matching
match:
any:
- resources:
kinds:
- Pod
# Policy logic
validate:
message: "Validation failure message"
pattern:
# Expected pattern
Validation Policies
Validation policies enforce requirements on resources. Non-compliant resources are rejected (Enforce mode) or flagged (Audit mode).
Example 1: Require Resource Limits for Pods
Prevents resource exhaustion by ensuring all containers define resource limits:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-resources
annotations:
policies.kyverno.io/title: Require Resource Limits
policies.kyverno.io/category: Best Practices
policies.kyverno.io/severity: medium
policies.kyverno.io/description: |
All containers must define CPU and memory limits to prevent
resource exhaustion and ensure proper scheduling.
spec:
validationFailureAction: Enforce
background: true
rules:
- name: validate-resources
match:
any:
- resources:
kinds:
- Pod
validate:
message: "All containers must define CPU and memory limits"
pattern:
spec:
containers:
- name: "*"
resources:
limits:
memory: "?*"
cpu: "?*"
requests:
memory: "?*"
cpu: "?*"
Example 2: Enforce AWS IMDSv2
Ensures pods running on EC2 instances use IMDSv2 for security:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-imdsv2
annotations:
policies.kyverno.io/title: Require IMDSv2
policies.kyverno.io/category: AWS Security
policies.kyverno.io/severity: high
policies.kyverno.io/description: |
      Requires pods in opted-in namespaces to declare that IMDSv2 is
      required on the nodes they run on. Enforcing IMDSv2 prevents SSRF
      attacks from accessing EC2 instance metadata.
spec:
validationFailureAction: Enforce
background: true
rules:
- name: require-imds-hop-limit
match:
any:
- resources:
kinds:
- Pod
namespaceSelector:
matchLabels:
enforce-imdsv2: "true"
validate:
        message: >-
          Pods in namespaces labeled enforce-imdsv2=true must set the
          annotation iam.amazonaws.com/imds-v2: required
pattern:
metadata:
annotations:
iam.amazonaws.com/imds-v2: "required"
Example 3: Require AWS IRSA (IAM Roles for Service Accounts)
Enforces use of IRSA instead of instance profiles for better security:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-irsa
annotations:
policies.kyverno.io/title: Require IRSA for AWS Access
policies.kyverno.io/category: AWS Security
policies.kyverno.io/severity: high
policies.kyverno.io/description: |
Requires service accounts to use IRSA annotations for AWS access
instead of relying on instance profiles. This provides better
security through least-privilege access.
spec:
validationFailureAction: Enforce
background: false
rules:
- name: check-irsa-annotation
match:
any:
- resources:
kinds:
- Pod
selector:
matchLabels:
requires-aws-access: "true"
      validate:
        message: >-
          Pods requiring AWS access must use a dedicated ServiceAccount
          (not the default one) configured with the
          eks.amazonaws.com/role-arn annotation
        # A single deny check covers both "a ServiceAccount must be set"
        # and "it must not be the default ServiceAccount"
        deny:
          conditions:
            any:
            - key: "{{ request.object.spec.serviceAccountName || '' }}"
              operator: Equals
              value: "default"
            - key: "{{ request.object.spec.serviceAccountName || '' }}"
              operator: Equals
              value: ""
Example 4: Disallow Privileged Containers
Implements Pod Security Standards to prevent privileged escalation:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: disallow-privileged-containers
annotations:
policies.kyverno.io/title: Disallow Privileged Containers
policies.kyverno.io/category: Pod Security Standards (Baseline)
policies.kyverno.io/severity: high
policies.kyverno.io/description: |
Privileged containers run with host-level permissions and should
be avoided. This policy prevents privileged containers from running.
spec:
validationFailureAction: Enforce
background: true
rules:
- name: check-privileged
match:
any:
- resources:
kinds:
- Pod
validate:
message: "Privileged containers are not allowed"
pattern:
spec:
containers:
- name: "*"
securityContext:
privileged: "false"
=(initContainers):
- name: "*"
securityContext:
privileged: "false"
=(ephemeralContainers):
- name: "*"
securityContext:
privileged: "false"
Example 5: Restrict Container Registries
Ensure containers come only from approved registries:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: restrict-image-registries
annotations:
policies.kyverno.io/title: Restrict Image Registries
policies.kyverno.io/category: Supply Chain Security
policies.kyverno.io/severity: high
policies.kyverno.io/description: |
Only allows images from approved container registries
(ECR, approved corporate registry)
spec:
validationFailureAction: Enforce
background: true
rules:
- name: validate-registries
match:
any:
- resources:
kinds:
- Pod
validate:
message: >-
Images must come from approved registries:
- *.dkr.ecr.*.amazonaws.com (ECR)
- ghcr.io/my-org/* (GitHub Container Registry)
- registry.company.com/* (Corporate Registry)
foreach:
- list: "request.object.spec.containers"
deny:
conditions:
all:
- key: "{{ element.image }}"
operator: NotIn
value:
- "*.dkr.ecr.*.amazonaws.com/*"
- "ghcr.io/my-org/*"
- "registry.company.com/*"
Mutation Policies
Mutation policies modify resources automatically before they’re persisted, adding defaults, labels, or configurations.
Example 1: Add Default Resource Limits
Automatically add resource limits to containers that don’t specify them:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-default-resources
annotations:
policies.kyverno.io/title: Add Default Resource Limits
policies.kyverno.io/category: Best Practices
policies.kyverno.io/severity: low
policies.kyverno.io/description: |
Automatically adds default resource requests and limits to
containers that don't specify them.
spec:
background: false
rules:
- name: add-default-resources
match:
any:
- resources:
kinds:
- Pod
mutate:
patchStrategicMerge:
spec:
containers:
- (name): "*"
resources:
limits:
+(memory): "512Mi"
+(cpu): "500m"
requests:
+(memory): "256Mi"
+(cpu): "250m"
Example 2: Add AWS Resource Tags as Labels
Propagate AWS tags to Kubernetes labels for consistency:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-aws-tags-labels
annotations:
policies.kyverno.io/title: Add AWS Tags as Labels
policies.kyverno.io/category: AWS Best Practices
policies.kyverno.io/description: |
Adds standard AWS tags as Kubernetes labels for cost
tracking and resource management.
spec:
background: false
rules:
- name: add-cost-center-label
match:
any:
- resources:
kinds:
- Deployment
- StatefulSet
- DaemonSet
mutate:
patchStrategicMerge:
metadata:
labels:
+(aws.cost-center): "{{ request.object.metadata.labels.\"cost-center\" || 'unassigned' }}"
+(aws.environment): "{{ request.namespace }}"
+(aws.managed-by): "kubernetes"
Example 3: Add Security Context Defaults
Enforce security best practices by adding default security contexts:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: add-security-context
annotations:
policies.kyverno.io/title: Add Default Security Context
policies.kyverno.io/category: Security
policies.kyverno.io/severity: medium
spec:
background: false
rules:
- name: add-pod-security-context
match:
any:
- resources:
kinds:
- Pod
mutate:
patchStrategicMerge:
spec:
securityContext:
+(runAsNonRoot): true
+(seccompProfile):
type: RuntimeDefault
containers:
- (name): "*"
securityContext:
+(allowPrivilegeEscalation): false
+(capabilities):
drop:
- ALL
Generation Policies
Generation policies automatically create supporting resources when a new resource is created.
Example 1: Auto-Generate NetworkPolicies
Automatically create NetworkPolicy for new namespaces:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: generate-networkpolicy
annotations:
policies.kyverno.io/title: Generate Default NetworkPolicy
policies.kyverno.io/category: Network Security
policies.kyverno.io/description: |
Automatically creates a default-deny NetworkPolicy
when a new namespace is created.
spec:
background: true
rules:
- name: generate-deny-all-policy
match:
any:
- resources:
kinds:
- Namespace
generate:
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
name: default-deny-all
namespace: "{{request.object.metadata.name}}"
synchronize: true
data:
spec:
podSelector: {}
policyTypes:
- Ingress
- Egress
Example 2: Auto-Generate LimitRanges
Create resource quotas for new namespaces:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: generate-limitrange
annotations:
policies.kyverno.io/title: Generate LimitRange for Namespaces
policies.kyverno.io/category: Resource Management
spec:
background: true
rules:
- name: generate-limitrange
match:
any:
- resources:
kinds:
- Namespace
selector:
matchLabels:
environment: production
generate:
apiVersion: v1
kind: LimitRange
name: default-limitrange
namespace: "{{request.object.metadata.name}}"
synchronize: true
data:
spec:
limits:
- type: Container
default:
cpu: 500m
memory: 512Mi
defaultRequest:
cpu: 250m
memory: 256Mi
max:
cpu: "2"
memory: 2Gi
min:
cpu: 100m
memory: 128Mi
Example 3: Auto-Generate IRSA ServiceAccount
Create a service account with IRSA annotation for AWS access:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: generate-irsa-serviceaccount
annotations:
policies.kyverno.io/title: Generate IRSA ServiceAccount
policies.kyverno.io/category: AWS Integration
spec:
background: true
rules:
- name: create-irsa-sa
match:
any:
- resources:
kinds:
- Namespace
selector:
matchLabels:
aws-access: "enabled"
generate:
apiVersion: v1
kind: ServiceAccount
name: aws-access-sa
namespace: "{{request.object.metadata.name}}"
synchronize: true
data:
metadata:
annotations:
eks.amazonaws.com/role-arn: "arn:aws:iam::ACCOUNT_ID:role/eks-{{request.object.metadata.name}}-role"
Image Verification Policies
Kyverno can verify container image signatures and attestations using Sigstore/Cosign.
Example 1: Verify Image Signatures
Ensure only signed images are deployed:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-image-signatures
annotations:
policies.kyverno.io/title: Verify Image Signatures
policies.kyverno.io/category: Supply Chain Security
policies.kyverno.io/severity: critical
spec:
validationFailureAction: Enforce
webhookTimeoutSeconds: 30
rules:
- name: verify-signature
match:
any:
- resources:
kinds:
- Pod
verifyImages:
- imageReferences:
- "*.dkr.ecr.*.amazonaws.com/*"
attestors:
- count: 1
entries:
- keys:
publicKeys: |-
-----BEGIN PUBLIC KEY-----
MFkwEwYHKoZIzj0CAQYIKoZIzj0DAQcDQgAE...
-----END PUBLIC KEY-----
Example 2: Verify SBOM Attestations
Require software bill of materials (SBOM) attestations:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: verify-sbom-attestation
annotations:
policies.kyverno.io/title: Verify SBOM Attestation
policies.kyverno.io/category: Supply Chain Security
spec:
validationFailureAction: Enforce
webhookTimeoutSeconds: 30
rules:
- name: verify-sbom
match:
any:
- resources:
kinds:
- Pod
verifyImages:
- imageReferences:
- "registry.company.com/*"
attestations:
- predicateType: https://spdx.dev/Document
attestors:
- count: 1
entries:
- keys:
publicKeys: |-
-----BEGIN PUBLIC KEY-----
...
-----END PUBLIC KEY-----
Policy Modes: Enforce vs Audit
Kyverno supports two validation failure actions:
Enforce Mode
Resources that violate policies are rejected:
spec:
validationFailureAction: Enforce
Use for:
- Production security requirements
- Critical compliance policies
- Blocking known vulnerabilities
Audit Mode
Resources are allowed but violations are reported:
spec:
validationFailureAction: Audit
Use for:
- Testing new policies before enforcement
- Generating compliance reports
- Non-critical recommendations
Mixed Mode Strategy
Start with Audit mode, analyze reports, then switch to Enforce:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-labels
annotations:
policies.kyverno.io/title: Require Standard Labels
spec:
# Start with Audit mode
validationFailureAction: Audit
rules:
- name: require-labels
match:
any:
- resources:
kinds:
- Deployment
validate:
message: "All deployments must have app, version, and team labels"
pattern:
metadata:
labels:
app: "?*"
version: "?*"
team: "?*"
After analyzing policy reports and fixing violations:
# Switch to Enforce mode
kubectl patch clusterpolicy require-labels --type merge -p '{"spec":{"validationFailureAction":"Enforce"}}'
Policy Reports and Monitoring
Kyverno generates reports showing policy compliance status.
PolicyReport and ClusterPolicyReport
Kyverno creates reports for resources in each namespace:
# View policy reports in a namespace
kubectl get policyreport -n production
# View cluster-wide policy report
kubectl get clusterpolicyreport
# Get detailed report
kubectl describe policyreport polr-ns-production -n production
Example PolicyReport output:
apiVersion: wgpolicyk8s.io/v1alpha2
kind: PolicyReport
metadata:
name: polr-ns-production
namespace: production
summary:
pass: 45
fail: 3
warn: 2
error: 0
skip: 1
results:
- policy: require-resources
rule: validate-resources
result: fail
scored: true
source: kyverno
message: "All containers must define CPU and memory limits"
resources:
- apiVersion: v1
kind: Pod
name: myapp-xyz
namespace: production
Querying Policy Reports
Use kubectl to query policy violations:
# Find all failed policy checks
kubectl get policyreport -A -o json | \
jq -r '.items[] | select(.summary.fail > 0) |
.metadata.namespace + "/" + .metadata.name + " - Failures: " + (.summary.fail|tostring)'
# List specific policy failures
kubectl get policyreport -n production -o json | \
jq -r '.items[].results[] | select(.result == "fail") |
.policy + " - " + .rule + " - " + (.resources[].name)'
Integrating with Prometheus
Kyverno exposes Prometheus metrics:
# ServiceMonitor for Prometheus Operator
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
name: kyverno
namespace: kyverno
spec:
selector:
matchLabels:
app.kubernetes.io/name: kyverno
endpoints:
- port: metrics
interval: 30s
Key metrics:
- kyverno_policy_rule_results_total: Policy evaluation results by policy and rule
- kyverno_policy_execution_duration_seconds: Policy execution time
- kyverno_admission_requests_total: Total admission requests
- kyverno_policy_changes_total: Policy CRD changes
Grafana Dashboard
Import Kyverno’s official Grafana dashboard:
# Dashboard ID: 17034
# Available at: https://grafana.com/grafana/dashboards/17034
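If you run Grafana via kube-prometheus-stack with the dashboard sidecar enabled, the exported dashboard JSON can also be provisioned declaratively rather than imported through the UI. A sketch, assuming the sidecar's default label and a monitoring namespace:
apiVersion: v1
kind: ConfigMap
metadata:
  name: kyverno-grafana-dashboard
  namespace: monitoring
  labels:
    # Default label watched by the kube-prometheus-stack dashboard sidecar
    grafana_dashboard: "1"
data:
  # Paste the JSON export of dashboard 17034 as the value
  kyverno.json: |
    {}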
CloudWatch and Amazon Managed Prometheus Integration (AWS-Specific)
To centralize Kyverno metrics on AWS, two common options are the CloudWatch agent's Prometheus scraping support (Container Insights), which publishes the metrics to CloudWatch, or remote-writing from your in-cluster Prometheus to an Amazon Managed Service for Prometheus workspace. A minimal remote_write snippet for the latter (the workspace URL and region are placeholders, and the Prometheus service account needs aps:RemoteWrite permissions, typically granted via IRSA):
# Prometheus configuration (prometheus.yml, or the remoteWrite field when using the Prometheus Operator)
remote_write:
  - url: https://aps-workspaces.us-east-1.amazonaws.com/workspaces/ws-xxx/api/v1/remote_write
    sigv4:
      region: us-east-1
Comparing Kyverno with Alternatives
Several policy engines exist for Kubernetes. Here’s how Kyverno compares to the main alternatives:
Open Policy Agent (OPA) / Gatekeeper
OPA is a general-purpose policy engine, while Gatekeeper is its Kubernetes-native implementation.
| Feature | Kyverno | OPA/Gatekeeper |
|---|---|---|
| Language | YAML (Kubernetes-native) | Rego (specialized policy language) |
| Learning Curve | Low (familiar YAML) | High (requires learning Rego) |
| Mutation Support | Native | Via mutation webhooks |
| Generation | Native | Not supported |
| Image Verification | Native (Sigstore/Cosign) | Requires external tools |
| Policy Reports | Built-in | Via separate tools |
| Performance | Good | Excellent |
| Community | Growing (CNCF Incubating) | Mature (CNCF Graduated) |
| Use Case | Kubernetes-focused | General-purpose policies |
When to use OPA/Gatekeeper:
- You need policies beyond Kubernetes (e.g., API gateways, service meshes)
- You have existing Rego expertise
- You need extremely complex policy logic
- You’re already using OPA elsewhere in your stack
When to use Kyverno:
- Your policies are Kubernetes-only
- Your team is more comfortable with YAML
- You need mutation and generation features
- You want easier policy authoring and maintenance
- You need built-in image verification
Example comparison:
OPA/Gatekeeper Rego policy:
package kubernetes.admission
deny[msg] {
input.request.kind.kind == "Pod"
container := input.request.object.spec.containers[_]
not container.resources.limits.memory
msg := sprintf("Container %v must define memory limits", [container.name])
}
Equivalent Kyverno policy:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: require-memory-limits
spec:
validationFailureAction: Enforce
rules:
- name: check-memory-limits
match:
any:
- resources:
kinds:
- Pod
validate:
message: "All containers must define memory limits"
pattern:
spec:
containers:
- name: "*"
resources:
limits:
memory: "?*"
Kubewarden
Kubewarden is a policy engine that supports multiple languages (Rust, Go, etc.) using WebAssembly.
| Feature | Kyverno | Kubewarden |
|---|---|---|
| Language | YAML | Multiple (Rust, Go, etc. compiled to WASM) |
| Learning Curve | Low | Medium to High |
| Mutation Support | Native | Native |
| Generation | Native | Not supported |
| Image Verification | Native | Via policies |
| Policy Reports | Built-in | Via external tools |
| Performance | Good | Excellent (WASM) |
| Community | Larger | Growing (CNCF Sandbox) |
| Flexibility | Kubernetes-focused | Highly flexible |
When to use Kubewarden:
- You need maximum performance via WASM
- Your team prefers traditional programming languages
- You want to reuse existing code in policies
- You need custom policy logic beyond declarative patterns
When to use Kyverno:
- You prefer declarative YAML policies
- You need generation policies
- You want faster policy development
- You don’t need custom programming in policies
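For a feel of the difference, deploying a prebuilt Kubewarden policy means referencing a WASM module from an OCI registry rather than writing rule logic in YAML. A sketch (the module path and tag are illustrative):
apiVersion: policies.kubewarden.io/v1
kind: ClusterAdmissionPolicy
metadata:
  name: disallow-privileged-pods
spec:
  # Policy logic lives in a WASM module pulled from an OCI registry
  module: registry://ghcr.io/kubewarden/policies/pod-privileged:v0.2.5
  rules:
  - apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
    operations: ["CREATE", "UPDATE"]
  mutating: false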
Polaris
Polaris is a validation and best practices checker, not a full policy engine.
| Feature | Kyverno | Polaris |
|---|---|---|
| Enforcement | Admission controller | Dashboard/CLI only (passive) |
| Mutation | Yes | No |
| Generation | Yes | No |
| Real-time Blocking | Yes | No |
| Reporting | Yes | Yes (primary feature) |
| Custom Policies | Extensive | Limited |
When to use Polaris:
- You only need auditing/reporting, not enforcement
- You want simple best-practice checks
- You’re starting with policy management
When to use Kyverno:
- You need active policy enforcement
- You need mutation and generation
- You want comprehensive policy capabilities
Summary Recommendation
Choose Kyverno if:
- You’re Kubernetes-focused
- Your team prefers YAML over specialized languages
- You need mutation, generation, and image verification
- You want easier policy authoring and maintenance
Choose OPA/Gatekeeper if:
- You need policies beyond Kubernetes
- You have complex policy logic requirements
- You have Rego expertise
- You need maximum flexibility
Choose Kubewarden if:
- You need maximum performance
- Your team prefers traditional programming
- You want to leverage existing code
Choose Polaris if:
- You only need reporting, not enforcement
- You’re starting with basic policy checks
Best Practices for Kyverno in AWS EKS
1. High Availability Setup
Deploy Kyverno with multiple replicas across availability zones:
admissionController:
replicas: 3
podAntiAffinity: soft
topologySpreadConstraints:
- maxSkew: 1
topologyKey: topology.kubernetes.io/zone
whenUnsatisfiable: DoNotSchedule
2. Resource Planning
Allocate appropriate resources based on cluster size:
| Cluster Size | Admission Controller Resources | Background Controller Resources |
|---|---|---|
| Small (<50 nodes) | 500m CPU, 512Mi RAM | 250m CPU, 256Mi RAM |
| Medium (50-200 nodes) | 1000m CPU, 1Gi RAM | 500m CPU, 512Mi RAM |
| Large (>200 nodes) | 2000m CPU, 2Gi RAM | 1000m CPU, 1Gi RAM |
3. Policy Organization
Structure policies by category:
policies/
├── security/
│ ├── pod-security-standards.yaml
│ ├── network-policies.yaml
│ └── image-verification.yaml
├── aws/
│ ├── require-irsa.yaml
│ ├── enforce-imdsv2.yaml
│ └── ebs-encryption.yaml
├── compliance/
│ ├── require-labels.yaml
│ ├── require-resources.yaml
│ └── audit-logging.yaml
└── best-practices/
├── add-security-context.yaml
└── restrict-registries.yaml
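If you apply these with Kustomize (a common companion to the GitOps setup in the next section), a kustomization.yaml at the root of policies/ can aggregate the files; the paths below simply mirror the layout above:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - security/pod-security-standards.yaml
  - security/network-policies.yaml
  - security/image-verification.yaml
  - aws/require-irsa.yaml
  - aws/enforce-imdsv2.yaml
  - aws/ebs-encryption.yaml
  - compliance/require-labels.yaml
  - compliance/require-resources.yaml
  - compliance/audit-logging.yaml
  - best-practices/add-security-context.yaml
  - best-practices/restrict-registries.yaml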
4. GitOps Integration
Manage policies as code with ArgoCD:
# argo-app-kyverno-policies.yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
name: kyverno-policies
namespace: argocd
spec:
project: platform
source:
repoURL: https://github.com/company/k8s-policies
targetRevision: main
path: kyverno/policies
destination:
server: https://kubernetes.default.svc
namespace: kyverno
syncPolicy:
automated:
prune: true
selfHeal: true
5. Testing Policies
Test policies before applying:
# Install Kyverno CLI
kubectl krew install kyverno
# Test a policy against resources
kyverno apply policy.yaml --resource deployment.yaml
# Validate policy syntax
kyverno validate policy.yaml
6. Exception Management
Use policy exceptions for legitimate special cases:
apiVersion: kyverno.io/v2beta1
kind: PolicyException
metadata:
name: allow-privileged-for-monitoring
namespace: monitoring
spec:
exceptions:
- policyName: disallow-privileged-containers
ruleNames:
- check-privileged
match:
any:
- resources:
kinds:
- Pod
namespaces:
- monitoring
names:
- prometheus-*
- grafana-*
7. Gradual Rollout
Roll out policies gradually:
- Week 1: Deploy in Audit mode, generate reports
- Week 2: Analyze violations, communicate with teams
- Week 3: Fix violations, prepare for enforcement
- Week 4: Switch to Enforce mode with exceptions as needed
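A staged rollout can also be expressed inside a single policy: validationFailureActionOverrides lets you audit everywhere while enforcing in a few pilot namespaces. A spec fragment (the namespace names are illustrative):
spec:
  validationFailureAction: Audit
  validationFailureActionOverrides:
    # Enforce only in the pilot namespaces; everything else stays in Audit
    - action: Enforce
      namespaces:
        - platform-pilot
        - security-pilot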
8. Monitoring and Alerting
Set up alerts for policy failures:
# PrometheusRule for Kyverno alerts
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
name: kyverno-alerts
namespace: monitoring
spec:
groups:
- name: kyverno
interval: 30s
rules:
- alert: KyvernoPolicyFailureRateHigh
expr: |
rate(kyverno_policy_rule_results_total{result="fail"}[5m]) > 0.1
for: 5m
labels:
severity: warning
annotations:
summary: "High policy failure rate detected"
description: "Kyverno policy {{ $labels.policy }} is failing frequently"
- alert: KyvernoWebhookLatencyHigh
expr: |
histogram_quantile(0.99,
rate(kyverno_policy_execution_duration_seconds_bucket[5m])
) > 1
for: 10m
labels:
severity: warning
annotations:
summary: "Kyverno webhook latency is high"
description: "99th percentile latency is {{ $value }}s"
9. Backup and Disaster Recovery
Back up Kyverno policies regularly:
# Backup all policies
kubectl get clusterpolicies -o yaml > kyverno-policies-backup.yaml
kubectl get policies -A -o yaml >> kyverno-policies-backup.yaml
# Store in S3
aws s3 cp kyverno-policies-backup.yaml s3://backup-bucket/kyverno/$(date +%Y%m%d)/
10. Documentation
Document policies with annotations:
metadata:
annotations:
policies.kyverno.io/title: Human-Readable Title
policies.kyverno.io/category: Category
policies.kyverno.io/severity: low|medium|high|critical
policies.kyverno.io/subject: Pod, Deployment, etc.
policies.kyverno.io/description: |
Detailed description explaining:
- What the policy does
- Why it's important
- When to use exceptions
policies.kyverno.io/references: |
- Link to internal documentation
- Link to relevant compliance framework
Troubleshooting Common Issues
Issue 1: Webhook Timeout
Symptom: Resources fail to create with timeout errors
Solution:
# Increase webhook timeout
spec:
webhookTimeoutSeconds: 30
# Or in Helm values:
admissionController:
webhooks:
timeoutSeconds: 30
Issue 2: Policy Not Applying
Symptom: Resources violate policies but are not blocked
Causes and Solutions:
- Check validation failure action:
kubectl get clusterpolicy <policy-name> -o jsonpath='{.spec.validationFailureAction}'
# Should be "Enforce", not "Audit"
- Check resource matching:
# View policy details
kubectl describe clusterpolicy <policy-name>
# Test policy matching
kyverno apply <policy-file> --resource <resource-file>
- Check webhook configuration:
kubectl get validatingwebhookconfigurations | grep kyverno
kubectl describe validatingwebhookconfiguration <webhook-name>
Issue 3: Background Scanning Not Working
Solution:
# Ensure background is enabled in policy
spec:
background: true
# Check background controller is running
kubectl get pods -n kyverno | grep background
# View background controller logs
kubectl logs -n kyverno <background-controller-pod>
Issue 4: High Memory Usage
Solution:
# Increase memory limits
admissionController:
resources:
limits:
memory: 2Gi
# Enable memory optimizations
admissionController:
container:
env:
- name: GOMEMLIMIT
value: "1800MiB"
Issue 5: Policy Reports Not Generated
Solution:
# Ensure reports controller is running
kubectl get pods -n kyverno | grep reports
# Check if policy reports are enabled
kubectl get crd policyreports.wgpolicyk8s.io
# View reports controller logs
kubectl logs -n kyverno <reports-controller-pod>
Advanced Use Cases
1. Multi-Tenancy with Kyverno
Implement namespace isolation:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: enforce-tenant-isolation
spec:
validationFailureAction: Enforce
rules:
- name: restrict-cross-namespace-access
match:
any:
- resources:
kinds:
- Ingress
- Service
validate:
message: "Cross-namespace access requires approval"
foreach:
- list: "request.object.spec.?.backend.?.service.?.namespace"
deny:
conditions:
all:
- key: "{{ element }}"
operator: NotEquals
value: "{{ request.namespace }}"
2. Cost Optimization
Prevent expensive resource configurations:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: restrict-expensive-instances
spec:
validationFailureAction: Enforce
rules:
- name: limit-ebs-volume-size
match:
any:
- resources:
kinds:
- PersistentVolumeClaim
validate:
message: "PVC size must not exceed 1Ti without approval"
pattern:
spec:
resources:
requests:
storage: "<1Ti"
3. Compliance Automation
Implement CIS Kubernetes Benchmark:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
name: cis-benchmark-5-2-6
annotations:
policies.kyverno.io/title: CIS 5.2.6 - Minimize admission of root containers
spec:
validationFailureAction: Enforce
rules:
- name: check-runasnonroot
match:
any:
- resources:
kinds:
- Pod
validate:
message: "Running as root is not allowed (CIS 5.2.6)"
pattern:
spec:
securityContext:
runAsNonRoot: true
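Beyond hand-written checks like the one above, Kyverno can apply an entire Pod Security Standards profile in a single rule via the built-in podSecurity check. A minimal sketch, starting in Audit mode so violations surface in policy reports first:
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: pod-security-baseline
spec:
  validationFailureAction: Audit
  background: true
  rules:
    - name: baseline-profile
      match:
        any:
        - resources:
            kinds:
              - Pod
      validate:
        # Evaluates all controls of the chosen Pod Security Standards level
        podSecurity:
          level: baseline
          version: latest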
Conclusion
Kyverno provides a powerful, Kubernetes-native approach to policy management that’s particularly well-suited for AWS EKS environments. Its YAML-based policy language, combined with validation, mutation, generation, and image verification capabilities, makes it an excellent choice for teams looking to enforce security, compliance, and operational standards.
Key Takeaways
- Ease of Use: Kyverno’s YAML-based policies are accessible to all Kubernetes users, eliminating the need to learn specialized policy languages
- Comprehensive Features: Validation, mutation, generation, and image verification cover all policy needs
- AWS EKS Integration: Native support for AWS-specific use cases like IRSA, IMDSv2, and ECR
- Production-Ready: High availability, monitoring, and reporting capabilities suitable for enterprise deployments
- Active Community: Growing CNCF project with strong community support
When to Use Kyverno
Kyverno is ideal when:
- Your policies are Kubernetes-focused
- Your team prefers declarative YAML over programming languages
- You need mutation and generation in addition to validation
- You want integrated image verification and supply chain security
- You’re implementing Pod Security Standards
- You need GitOps-friendly policy management
Getting Started
- Install Kyverno in a development EKS cluster
- Start with audit-mode policies to understand your environment
- Implement validation policies for critical security requirements
- Add mutation policies to enforce defaults
- Use generation policies for automation
- Monitor with policy reports and metrics
- Gradually move to enforce mode
- Expand to production clusters
Next Steps
- Explore the Kyverno Policy Library for pre-built policies
- Join the Kyverno Slack community (#kyverno channel)
- Contribute to Kyverno on GitHub
- Read the Kyverno documentation
- Check out Kyverno with ArgoCD patterns