Learn how to extract container images from Kubernetes cluster nodes and migrate them to private
registries like Azure Container Registry (ACR), Harbor, or Docker Hub. This comprehensive guide
covers manual workflows, automation scripts, and enterprise best practices for container image
lifecycle management.
Why Migrate Container Images?
Organizations migrate container images to private registries for several critical reasons:
- Security & Compliance: Control who can access images, scan for vulnerabilities, and enforce policies
- Availability: Eliminate dependency on public registry uptime and rate limits
- Performance: Reduce latency by hosting images closer to workloads
- Image Preservation: Backup images that may be removed from public registries
- Air-Gapped Environments: Support disconnected networks in defense/healthcare
- Cost Control: Avoid Docker Hub rate limits (200 pulls/6hrs for free tier)
Real-World Scenario: PostgreSQL Image Crisis
In my homelab Kubernetes cluster, I hit a critical situation: the PostgreSQL image
(bitnami/postgresql:17.6.0-debian-12-r0) backing my production workloads no
longer existed in Docker Hub. Bitnami had removed that specific tag, making
redeployment impossible.
The solution? Export the running image directly from the Kubernetes node and migrate it to
Azure Container Registry. This process ensured business continuity and eliminated future
dependency on public registry availability.
Architecture Overview
┌─────────────────┐
│ Kubernetes Node │ ──── SSH ────> Export image with containerd
│ (containerd) │ (k3s ctr / ctr / crictl)
└─────────────────┘
│
│ SCP (copy .tar file)
▼
┌─────────────────┐
│ Local Mac │ ──── Load ────> Import into Docker
│ (Docker) │
└─────────────────┘
│
│ Push
▼
┌─────────────────┐
│ Container Reg. │ ──── Store ───> ACR / Docker Hub / Harbor
│ (ACR/Harbor) │
└─────────────────┘
Step-by-Step Migration Process
Step 1: Identify the Target Image
First, locate which node is running the container with your target image:
# List all pods with images and nodes
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\t"}{.spec.nodeName}{"\n"}{end}' | column -t
# Example output:
# keycloak/postgres-0 bitnami/postgresql:17.6.0-debian-12-r0 worker3
# Get specific pod details
kubectl get pod postgres-0 -n keycloak -o wide
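If the tag may have been re-pushed upstream, it also helps to capture the exact digest the node is actually running; this comes straight from the pod status:
# Print the resolved image and digest for each container in the pod
kubectl get pod postgres-0 -n keycloak -o jsonpath='{range .status.containerStatuses[*]}{.image}{" => "}{.imageID}{"\n"}{end}'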
Step 2: Export from Kubernetes Node
SSH into the node and export the image using the container runtime CLI. For K3s (which uses containerd):
# SSH into the node
ssh ubuntu@worker3
# Export the image to tar file
sudo k3s ctr images export /tmp/postgresql.tar docker.io/bitnami/postgresql:17.6.0-debian-12-r0
# Verify export
ls -lh /tmp/postgresql.tar
# Output: -rw-r--r-- 1 root root 105M Nov 8 10:23 /tmp/postgresql.tar
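If the export complains that the image is not found, the reference stored by containerd may differ slightly from what the pod spec shows (for example, a missing docker.io/ prefix). Listing the node's images first removes the guesswork:
# List image references known to containerd and filter for the target
sudo k3s ctr images ls -q | grep postgresql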
Step 3: Transfer to Local Machine
Copy the exported tar file from the node to your local workstation:
# Using SCP
scp ubuntu@worker3:/tmp/postgresql.tar /tmp/postgresql.tar
# For large images over a slow link, compress on the wire and decompress locally:
ssh ubuntu@worker3 "sudo cat /tmp/postgresql.tar | gzip" | gunzip > /tmp/postgresql.tar
# Or use rsync for resume capability:
rsync -avz --progress ubuntu@worker3:/tmp/postgresql.tar /tmp/
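For peace of mind on flaky links, a checksum comparison confirms the archive survived the transfer intact:
# Compare checksums on the node and locally (shasum -a 256 on macOS, sha256sum on Linux)
ssh ubuntu@worker3 "sha256sum /tmp/postgresql.tar"
shasum -a 256 /tmp/postgresql.tar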
Step 4: Load into Docker
Import the tar file into your local Docker environment:
# Load the image
docker load -i /tmp/postgresql.tar
# Output:
# Loaded image: docker.io/bitnami/postgresql:17.6.0-debian-12-r0
# Verify it loaded successfully
docker images | grep postgresql
Step 5: Tag for Target Registry
Tag the image with your private registry URL:
# Azure Container Registry (ACR)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0
# Docker Hub (replace 'username' with your Docker Hub username)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 username/postgresql:17.6.0-debian-12-r0
# Harbor (self-hosted)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 harbor.example.com/myproject/postgresql:17.6.0-debian-12-r0
# GitHub Container Registry
docker tag bitnami/postgresql:17.6.0-debian-12-r0 ghcr.io/username/postgresql:17.6.0-debian-12-r0
Step 6: Authenticate and Push
Login to your target registry and push the image:
# Azure Container Registry
az acr login --name jdevnet
docker push jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0
# Docker Hub
docker login --username myusername
docker push myusername/postgresql:17.6.0-debian-12-r0
# Harbor
docker login harbor.example.com --username admin
docker push harbor.example.com/myproject/postgresql:17.6.0-debian-12-r0
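After the push, confirm the tag actually landed in the registry. For ACR, for example:
# List tags stored for the repository
az acr repository show-tags --name jdevnet --repository postgresql --output table
# Or simply test-pull from any machine with registry access
docker pull jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0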
Automation Script for Production
For enterprise environments, automate the entire process with this reusable bash script:
#!/bin/bash
set -e
# Configuration
NODE_USER="ubuntu"
NODE_HOST="worker3"
SOURCE_IMAGE="docker.io/bitnami/postgresql:17.6.0-debian-12-r0"
REGISTRY_URL="jdevnet.azurecr.io"
TARGET_REPO="postgresql"
TARGET_TAG="17.6.0-debian-12-r0"
TEMP_FILE="image-export.tar"
TARGET_IMAGE="${REGISTRY_URL}/${TARGET_REPO}:${TARGET_TAG}"
echo "========================================="
echo "Container Image Migration Script"
echo "========================================="
echo "Source: ${SOURCE_IMAGE}"
echo "Target: ${TARGET_IMAGE}"
# Export from node
echo "📦 Exporting image from ${NODE_HOST}..."
ssh ${NODE_USER}@${NODE_HOST} "sudo k3s ctr images export /tmp/${TEMP_FILE} ${SOURCE_IMAGE}"
# Copy to local
echo "📥 Copying to local machine..."
scp ${NODE_USER}@${NODE_HOST}:/tmp/${TEMP_FILE} /tmp/${TEMP_FILE}
# Cleanup node
echo "🧹 Cleaning up node..."
ssh ${NODE_USER}@${NODE_HOST} "sudo rm /tmp/${TEMP_FILE}"
# Load into Docker
echo "📦 Loading into Docker..."
docker load -i /tmp/${TEMP_FILE}
# Tag for registry
echo "🏷️ Tagging for registry..."
docker tag ${SOURCE_IMAGE} ${TARGET_IMAGE}
# Push to registry
echo "🚀 Pushing to registry..."
docker push ${TARGET_IMAGE}
# Cleanup local
echo "🧹 Cleaning up local files..."
rm /tmp/${TEMP_FILE}
docker rmi ${SOURCE_IMAGE} ${TARGET_IMAGE}
echo "✅ Migration Complete!"
echo "Image available at: ${TARGET_IMAGE}"
Container Runtime Compatibility
Different Kubernetes distributions use different container runtimes. Here's how to export
images from each:
K3s (containerd)
sudo k3s ctr images export \
/tmp/image.tar \
docker.io/library/nginx:latest
Standard containerd
sudo ctr -n k8s.io images export \
/tmp/image.tar \
docker.io/library/nginx:latest
Docker
sudo docker save \
-o /tmp/image.tar \
nginx:latest
CRI-O
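# Podman (run as root) shares CRI-O's image store under /var/lib/containers/storage on most nodes, so it can export images pulled by the kubelet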
sudo podman save \
-o /tmp/image.tar \
nginx:latest
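Not sure which runtime a node uses? The node status reports it:
# Show the container runtime reported by each node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.nodeInfo.containerRuntimeVersion}{"\n"}{end}'
# Values look like containerd://1.x, docker://2x.x, or cri-o://1.x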
Bulk Migration Strategy
For migrating multiple images at scale, use a structured approach with inventory tracking:
#!/bin/bash
# Image inventory (format: "node|source_image|target_name")
IMAGES=(
"worker1|docker.io/nginx:latest|nginx:latest"
"worker2|quay.io/prometheus/prometheus:v2.40.0|prometheus:v2.40.0"
"worker3|docker.io/bitnami/postgresql:17.6.0|postgresql:17.6.0"
"worker1|elastic/elasticsearch:9.1.4|elastic/elasticsearch:9.1.4"
"worker2|elastic/kibana:9.1.4|elastic/kibana:9.1.4"
)
REGISTRY="jdevnet.azurecr.io"
NODE_USER="ubuntu"
SUCCESS_COUNT=0
FAIL_COUNT=0
for entry in "${IMAGES[@]}"; do
IFS='|' read -r node source target <<< "$entry"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Migrating: $source from $node"
if ssh ${NODE_USER}@${node} "sudo k3s ctr images export /tmp/temp.tar ${source}" && \
scp ${NODE_USER}@${node}:/tmp/temp.tar /tmp/temp.tar && \
docker load -i /tmp/temp.tar && \
docker tag ${source} ${REGISTRY}/${target} && \
docker push ${REGISTRY}/${target}; then
echo "✅ SUCCESS: $target"
((SUCCESS_COUNT++))
else
echo "❌ FAILED: $target"
((FAIL_COUNT++))
fi
# Cleanup
ssh ${NODE_USER}@${node} "sudo rm -f /tmp/temp.tar" 2>/dev/null
rm -f /tmp/temp.tar
docker rmi ${source} ${REGISTRY}/${target} 2>/dev/null
done
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Migration Summary:"
echo " ✅ Success: $SUCCESS_COUNT"
echo " ❌ Failed: $FAIL_COUNT"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
Registry-Specific Configuration
Azure Container Registry (ACR)
ACR provides enterprise-grade features across three pricing tiers (approximate list prices):
- Basic: ~$5/month, 10GB storage, perfect for small teams
- Standard: ~$20/month, 100GB storage, webhooks for CI/CD
- Premium: ~$50/month, geo-replication, private endpoints, firewall rules
# Enable ACR authentication
az acr update --name jdevnet --admin-enabled true
# Get credentials
az acr credential show --name jdevnet
# Create Kubernetes secret for image pulls
kubectl create secret docker-registry acr-secret \
--docker-server=jdevnet.azurecr.io \
--docker-username=jdevnet \
--docker-password=<password-from-credential-show> \
--namespace=default
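To avoid adding imagePullSecrets to every manifest, attach the secret to the namespace's default service account so new pods pick it up automatically:
# Attach the pull secret to the default service account
kubectl patch serviceaccount default -n default \
  -p '{"imagePullSecrets": [{"name": "acr-secret"}]}'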
Harbor (Self-Hosted)
Harbor offers enterprise features with no licensing costs:
- Built-in vulnerability scanning with Trivy/Clair
- Role-based access control (RBAC)
- Image replication across registries
- Helm chart repository
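The push flow is identical to the docker tag / docker push steps shown earlier. For cluster pulls, the secret is created the same way as for ACR; the hostname and project match the earlier examples, and the credentials are placeholders (prefer a Harbor robot account over admin in production):
# Create a pull secret for Harbor
kubectl create secret docker-registry harbor-secret \
  --docker-server=harbor.example.com \
  --docker-username=<harbor-username> \
  --docker-password=<harbor-password> \
  --namespace=default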
Security Best Practices
When migrating container images, follow these security guidelines:
- Enable Vulnerability Scanning: Use ACR's integrated scanning or Harbor with Trivy
- Implement Image Signing: Use Docker Content Trust or Notary v2
- Restrict Registry Access: Use firewall rules or private endpoints (ACR Premium)
- Rotate Credentials: Regularly rotate registry passwords and Kubernetes secrets
- Audit Image Pulls: Monitor registry access logs for unauthorized activity
- Tag Immutability: Prevent tag overwriting in production registries
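As a concrete example of the last point, ACR can lock an image so its tag can no longer be overwritten or deleted (a sketch using the registry and tag from this guide):
# Lock the migrated image against overwrites and deletion in ACR
az acr repository update --name jdevnet \
  --image postgresql:17.6.0-debian-12-r0 \
  --write-enabled false --delete-enabled false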
Important: Always test migrated images in development environments before
production deployment. Verify architecture compatibility (AMD64 vs ARM64) and that all
image layers transferred correctly. Some images may have platform-specific dependencies
that fail when migrated between architectures.
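A quick way to verify is inspecting the loaded image and comparing it against the node:
# Confirm the OS/architecture of the migrated image
docker image inspect --format '{{.Os}}/{{.Architecture}}' jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0
# Check the node's architecture for comparison
kubectl get node worker3 -o jsonpath='{.status.nodeInfo.architecture}'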
Troubleshooting Common Issues
Issue: "Permission denied" during export
Solution: Ensure you're using sudo when running containerd commands:
sudo k3s ctr images export /tmp/image.tar docker.io/library/nginx:latest
Issue: Out of disk space on node
Solution: Stream export directly without storing on node:
ssh user@node "sudo k3s ctr images export - docker.io/library/nginx:latest" > /tmp/image.tar
Issue: Large image transfer taking too long
Solution: Use compression during transfer:
ssh user@node "sudo k3s ctr images export - docker.io/library/nginx:latest | gzip" > /tmp/image.tar.gz
gunzip -c /tmp/image.tar.gz | docker load
Issue: Multi-architecture image problems
Solution: docker tag has no --platform flag; select the platform when exporting from containerd instead:
sudo k3s ctr images export --platform linux/amd64 /tmp/image.tar docker.io/library/nginx:latest
# or keep every architecture in the archive
sudo k3s ctr images export --all-platforms /tmp/image.tar docker.io/library/nginx:latest
Integration with CI/CD Pipelines
Automate image migration in GitHub Actions, GitLab CI, or Azure DevOps:
# GitHub Actions Example
name: Migrate Container Images
on:
workflow_dispatch:
schedule:
- cron: '0 2 * * 0' # Weekly on Sunday at 2 AM
jobs:
migrate:
runs-on: ubuntu-latest
steps:
- name: Export from cluster node
run: |
ssh ${{ secrets.NODE_USER }}@${{ secrets.NODE_HOST }} \
"sudo k3s ctr images export - ${{ env.SOURCE_IMAGE }}" > /tmp/image.tar
- name: Load into Docker
run: docker load -i /tmp/image.tar
- name: Login to ACR
uses: azure/docker-login@v1
with:
login-server: ${{ secrets.ACR_SERVER }}
username: ${{ secrets.ACR_USERNAME }}
password: ${{ secrets.ACR_PASSWORD }}
- name: Tag and Push
run: |
docker tag ${{ env.SOURCE_IMAGE }} ${{ secrets.ACR_SERVER }}/${{ env.TARGET_IMAGE }}
docker push ${{ secrets.ACR_SERVER }}/${{ env.TARGET_IMAGE }}
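One caveat: the workflow above assumes the runner can already reach the node over SSH. In practice you would add a step before the export that loads a deploy key from secrets, roughly like this (secret and variable names are assumptions):
# Minimal SSH setup sketch for the runner
mkdir -p ~/.ssh
eval "$(ssh-agent -s)"
ssh-add - <<< "${NODE_SSH_KEY}"
ssh-keyscan -H "${NODE_HOST}" >> ~/.ssh/known_hosts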
Cost-Benefit Analysis
Understanding the financial and operational impact of private registries:
Costs
- ACR Basic: $5/month
- Harbor (self-hosted): Infrastructure costs only
- Storage: ~$0.02/GB/month (ACR)
- Bandwidth: Free within Azure regions
Benefits
- No Docker Hub rate limits (200 pulls/6hrs)
- Reduced deployment latency (30-60% faster)
- 99.9% SLA for enterprise registries
- Vulnerability scanning included
Conclusion
Migrating container images from Kubernetes nodes to private registries is a critical skill for
infrastructure engineers managing production workloads. This process ensures business continuity,
enhances security, and eliminates dependency on public registry availability.
In my homelab environment, migrating 12 critical images to Azure Container Registry took under
2 hours and cost $5/month. This investment provided complete control over the container supply
chain, vulnerability scanning, and eliminated Docker Hub rate limiting issues.
For defense, healthcare, or financial services environments with strict compliance requirements,
private registries are non-negotiable. The combination of ACR's enterprise features and Harbor's
self-hosted flexibility provides options for every organization's security posture and budget.
Key Takeaways
- ✅ Private registries eliminate public registry dependencies and rate limits
- ✅ Use k3s ctr images export for K3s nodes, ctr -n k8s.io images export for standard containerd
- ✅ Automate bulk migrations with bash scripts or CI/CD pipelines
- ✅ Enable vulnerability scanning and image signing for security
- ✅ ACR Basic tier ($5/month) sufficient for small teams and homelabs
- ✅ Always test migrated images before production deployment
Resources