Technical Defense Blog
Infrastructure & Security

Deep dives into infrastructure, security research, and defense technology

Security October 2025

Compliance as Code: Automating RHEL Security with Ansible

15 min read
Compliance Ansible RHEL STIG Security

Red Hat's Compliance as Code approach transforms security hardening from manual checklists into automated, version-controlled Ansible playbooks. Learn how to implement DISA STIG compliance across RHEL infrastructure with simple true/false configuration switches.

What is Compliance as Code?

Compliance as Code extends the Infrastructure as Code (IaC) paradigm to security compliance. Instead of manually applying security controls from lengthy PDF documents (like DISA STIGs or CIS Benchmarks), compliance requirements are codified into automated, repeatable, and testable configurations.

Red Hat's approach leverages Ansible automation to apply security baselines consistently across entire fleets of RHEL systems, ensuring:

  • Consistency: Same security posture across all systems
  • Repeatability: Deploy compliant systems at scale
  • Version Control: Track compliance changes over time with Git
  • Auditability: Automated compliance reporting and evidence collection
  • Agility: Rapidly update configurations as requirements evolve

DISA STIG Automation with Ansible

The ansible-role-rhel9-stig repository provides a production-ready Ansible role that implements all DISA STIG controls for RHEL 9. This role transforms a 400+ page security technical implementation guide into automated configuration management.

Key Features:

  • Comprehensive Coverage: All DISA STIG V1R1 controls for RHEL 9
  • Idempotent: Safe to run multiple times without side effects
  • Customizable: Toggle individual controls via simple YAML variables
  • Reporting: Generates compliance reports post-execution
  • CI/CD Ready: Integrate into automated deployment pipelines

Implementation Architecture

Traditional Approach

  • Read 400+ page STIG PDF
  • Manually configure each control
  • Document changes in spreadsheet
  • Hope configurations don't drift
  • Manual re-checks for compliance
  • Weeks to harden a fleet

Compliance as Code

  • Edit YAML configuration file
  • Run Ansible playbook
  • Git tracks all changes
  • Automated drift detection
  • Continuous compliance validation
  • Minutes to harden a fleet

Configuration Example

The power of Compliance as Code lies in its simplicity. Here's how to customize STIG controls in the defaults/main.yml file:

# Enable/disable specific STIG controls
rhel9stig_cat1_patch: true   # Apply CAT I (high severity) controls
rhel9stig_cat2_patch: true   # Apply CAT II (medium severity) controls
rhel9stig_cat3_patch: false  # Skip CAT III (low severity) controls

# Customize specific controls for your organization
rhel9stig_sssd_offline_cred_expiration: 1  # Days before cached credentials expire
rhel9stig_pass_min_length: 15              # Minimum password length
rhel9stig_pass_max_days: 60                # Maximum password age

# Toggle individual STIG rules
rhel9stig_rule_010010: true  # RHEL 9 must be a vendor-supported release
rhel9stig_rule_010020: true  # RHEL 9 must display Standard Mandatory DoD Notice
rhel9stig_rule_020010: true  # RHEL 9 must implement NIST FIPS-validated cryptography

# Skip rules that don't apply to your environment
rhel9stig_rule_230285: false # Skip if not using GUI
rhel9stig_rule_230310: false # Skip if wireless not present

Deployment Workflow

Integrating STIG automation into your infrastructure deployment pipeline:

  1. Define Baseline: Fork the ansible-role-rhel9-stig repository
  2. Customize Defaults: Edit defaults/main.yml to match organizational requirements
  3. Version Control: Commit configurations to private Git repository
  4. Test in Dev: Apply playbook to development VMs and validate
  5. Automate: Integrate into CI/CD pipeline for all RHEL deployments
  6. Continuous Compliance: Schedule periodic playbook runs to prevent drift
  7. Audit: Generate compliance reports for security assessments
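
As an illustrative sketch of steps 2-5, a minimal playbook that applies the forked role to a development host group might look like this (the host group name and variable values are assumptions to adapt to your environment):

---
# site.yml - apply the forked STIG role to the rhel9 host group (illustrative names)
- name: Apply DISA STIG baseline to RHEL 9 fleet
  hosts: rhel9
  become: true
  roles:
    - role: ansible-role-rhel9-stig
      vars:
        rhel9stig_cat1_patch: true
        rhel9stig_cat2_patch: true
        rhel9stig_cat3_patch: false
        rhel9stig_pass_min_length: 15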

Benefits for Defense Infrastructure

In defense and intelligence environments, Compliance as Code provides critical capabilities:

  • Rapid ATO: Accelerate Authority to Operate with automated evidence collection
  • Zero Trust: Enforce least-privilege configurations consistently
  • Incident Response: Quickly re-harden compromised systems
  • Supply Chain Security: Validate configurations in disconnected networks
  • Continuous Monitoring: Detect and remediate drift automatically

Important: Always test STIG automation in non-production environments first. Some controls may break application functionality or cause system instability. Use --check mode to preview changes before applying.
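
For example, a check-mode dry run against a development inventory could look like the following (inventory path and playbook name are placeholders); note that some tasks may behave differently in check mode:

# Preview the changes the role would make without applying them
ansible-playbook -i inventories/dev site.yml --check --diff

# Apply once the diff has been reviewed
ansible-playbook -i inventories/dev site.yml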

Conclusion

Red Hat's Compliance as Code approach represents a paradigm shift in how organizations manage security compliance. By codifying DISA STIGs into Ansible automation, security engineers can harden infrastructure at scale with simple configuration changes. The defaults/main.yml file becomes your organization's security policy—version-controlled, auditable, and automatically enforced.

Security October 2025

PowerSTIG: Automating Windows Security Compliance with PowerShell DSC

18 min read
Windows PowerShell DSC STIG Compliance

PowerSTIG automates DISA Security Technical Implementation Guide (STIG) compliance for Windows environments using PowerShell Desired State Configuration (DSC). This powerful framework enables automated hardening of Windows Server, SQL Server, IIS, DNS Server, and other Microsoft technologies at enterprise scale.

What is PowerSTIG?

PowerSTIG is an open-source PowerShell module developed by Microsoft that converts DISA STIGs into PowerShell DSC configurations. Instead of manually implementing hundreds of security controls from PDF documents, PowerSTIG enables declarative configuration management for Windows security baselines.

PowerSTIG supports comprehensive STIG automation across the Microsoft ecosystem:

  • Windows Server: 2012 R2, 2016, 2019, 2022
  • SQL Server: 2012, 2016, 2017, 2019, 2022
  • Internet Information Services (IIS): 8.5, 10.0
  • DNS Server: Windows Server DNS
  • Active Directory: Domain and Forest STIGs
  • Microsoft Office: Office 2016, Office 2019, Office 365
  • Internet Explorer & Edge: Browser security configurations
  • Windows Defender: Antivirus and firewall settings

PowerShell DSC Architecture

PowerShell Desired State Configuration (DSC) is the foundation of PowerSTIG's automation capabilities. DSC enables declarative configuration: you define the desired state of a system, and DSC ensures that state is achieved and maintained.

Traditional STIG Implementation

  • Read 300+ page STIG document
  • Manually configure Group Policy Objects
  • Apply registry changes via scripts
  • Document compliance in spreadsheets
  • Manual audits every 90 days
  • Configuration drift over time
  • Weeks to months for full compliance

PowerSTIG Automation

  • Define compliance in PowerShell DSC
  • Automated policy application
  • Self-documenting configurations
  • Version-controlled in Git
  • Continuous compliance monitoring
  • Automatic drift remediation
  • Hours to achieve compliance

Installation and Setup

PowerSTIG is distributed via the PowerShell Gallery and requires PowerShell 5.1 or later. Basic installation workflow:

# Install PowerSTIG module from PowerShell Gallery
Install-Module -Name PowerSTIG -Scope AllUsers -Force

# Verify installation
Get-Module -Name PowerSTIG -ListAvailable

# Import the module
Import-Module PowerSTIG

# List available STIG configurations
Get-StigList

# Example output:
# TechnologyRole     TechnologyVersion    StigVersion
# --------------     -----------------    -----------
# WindowsServer      2019                 2.4
# SqlServer          2016Instance         2.6
# IISServer          10.0                 1.6
# DnsServer          2012R2               1.11

Basic Configuration Example

Here's a complete example of applying Windows Server 2019 STIG compliance using PowerSTIG:

# Windows Server 2019 STIG Configuration
configuration WindowsServer2019_STIG
{
    param (
        [string[]]$ComputerName = 'localhost'
    )
    
    Import-DscResource -ModuleName PowerSTIG
    
    Node $ComputerName
    {
        WindowsServer BaseLine
        {
            OsVersion   = '2019'
            OsRole      = 'MS'        # Member Server
            StigVersion = '2.4'
            
            # Exception handling for specific rules
            Exception   = @{
                'V-205625' = @{
                    # Allow custom password policy
                    ValueData = '90'
                }
            }

            # Skip rules that conflict with application requirements
            SkipRule    = @('V-205630')
            
            # Organization-specific values
            OrgSettings = @{
                'V-205735' = @{
                    # Custom legal banner text
                    ValueData = 'Authorized Use Only - DoD System'
                }
            }
        }
    }
}

# Generate the MOF configuration file
WindowsServer2019_STIG -ComputerName 'WEB-SERVER-01' -OutputPath 'C:\DSC\Configs'

# Apply the configuration
Start-DscConfiguration -Path 'C:\DSC\Configs' -Wait -Verbose -Force

# Monitor compliance status
Get-DscConfigurationStatus

SQL Server STIG Example

PowerSTIG excels at securing complex Microsoft services. Here's SQL Server 2019 STIG automation:

# SQL Server 2019 Instance STIG Configuration
configuration SqlServer2019_STIG
{
    param (
        [string]$SqlInstance = 'MSSQLSERVER',
        [string]$SqlVersion  = '2019'
    )
    
    Import-DscResource -ModuleName PowerSTIG
    
    Node localhost
    {
        SqlServer InstanceCompliance
        {
            SqlVersion     = $SqlVersion
            SqlRole        = 'Instance'
            ServerInstance = $SqlInstance
            StigVersion    = '2.6'
            
            # SQL-specific exceptions
            Exception = @{
                'V-214015' = @{
                    # Allow mixed mode authentication for legacy apps
                    Identity = 'APPLICATION_USER'
                }
            }
            
            # Organization settings
            OrgSettings = @{
                'V-213989' = @{
                    # Custom audit log size
                    ValueData = '10240'  # MB
                }
            }
        }
    }
}

# Apply SQL Server STIG
SqlServer2019_STIG -OutputPath 'C:\DSC\SQL' 
Start-DscConfiguration -Path 'C:\DSC\SQL' -Wait -Verbose

Enterprise Deployment Workflow

In production environments, PowerSTIG integrates with enterprise configuration management:

  1. Centralized Configuration: Store DSC configurations in Git repository
  2. Pull Server Architecture: Deploy DSC Pull Server for distributed management
  3. Automated Enrollment: New servers automatically register with Pull Server
  4. Continuous Compliance: DSC Local Configuration Manager checks compliance every 15 minutes
  5. Drift Remediation: Non-compliant configurations automatically corrected
  6. Reporting: Centralized compliance dashboards with Azure Monitor or SCOM
  7. Exception Management: Organization-specific exceptions version-controlled
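
As a sketch of the Pull Server registration step, each node's Local Configuration Manager can be pointed at the pull service with a meta-configuration along these lines (the server URL, registration key, and configuration name are placeholders):

[DSCLocalConfigurationManager()]
configuration LcmPullSettings
{
    Node 'localhost'
    {
        Settings
        {
            RefreshMode                    = 'Pull'
            ConfigurationMode              = 'ApplyAndAutoCorrect'   # remediate drift automatically
            ConfigurationModeFrequencyMins = 15                      # compliance check interval (default)
            RebootNodeIfNeeded             = $true
        }

        ConfigurationRepositoryWeb PullServer
        {
            ServerURL          = 'https://pullserver.example.com:8080/PSDSCPullServer.svc'
            RegistrationKey    = '00000000-0000-0000-0000-000000000000'   # placeholder key
            ConfigurationNames = @('WindowsServer2019_STIG.localhost')
        }
    }
}

# Generate the meta-MOF and apply it to the node
LcmPullSettings -OutputPath 'C:\DSC\Lcm'
Set-DscLocalConfigurationManager -Path 'C:\DSC\Lcm' -Verbose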

Advanced Features

PowerSTIG provides enterprise-grade capabilities for complex Windows environments:

  • Composite Configurations: Apply multiple STIGs simultaneously (e.g., Windows + IIS + SQL)
  • Role-Based Configs: Different baselines for domain controllers vs member servers
  • Partial Configurations: Split large configs across multiple MOF files for scalability
  • Credential Encryption: Secure sensitive data with certificates
  • Compliance Reporting: Generate audit reports in JSON, XML, or HTML
  • Integration with Azure Automation: Cloud-based DSC for hybrid environments

Real-World Use Cases

Scenario 1: DoD Cloud Migration

  • Automated STIG compliance for 500+ Windows Server VMs in Azure Government
  • Reduced ATO preparation from 6 months to 3 weeks
  • Continuous monitoring with Azure Security Center integration

Scenario 2: Financial Services SQL Hardening

  • Applied SQL Server 2019 STIGs across 200 database instances
  • Automated compliance audits for PCI-DSS requirements
  • Drift detection caught unauthorized configuration changes within minutes

Scenario 3: Healthcare IIS Web Farm

  • Standardized IIS 10.0 STIG configurations across 50 web servers
  • HIPAA compliance automation with custom organization values
  • Zero-touch deployment with Azure DevOps pipelines

Integration with Azure Automation

PowerSTIG integrates seamlessly with Azure Automation State Configuration for cloud-native compliance:

# Upload PowerSTIG configuration to Azure Automation
Import-AzAutomationDscConfiguration `
    -SourcePath 'C:\DSC\WindowsServer2019_STIG.ps1' `
    -ResourceGroupName 'rg-compliance' `
    -AutomationAccountName 'aa-stig-automation' `
    -Published

# Compile configuration in Azure
Start-AzAutomationDscCompilationJob `
    -ConfigurationName 'WindowsServer2019_STIG' `
    -ResourceGroupName 'rg-compliance' `
    -AutomationAccountName 'aa-stig-automation'

# Register VM with Azure Automation DSC
Register-AzAutomationDscNode `
    -ResourceGroupName 'rg-compliance' `
    -AutomationAccountName 'aa-stig-automation' `
    -AzureVMName 'web-server-01' `
    -NodeConfigurationName 'WindowsServer2019_STIG.localhost'

Important: PowerSTIG configurations can cause application compatibility issues. Always test in development environments first. Use the -WhatIf parameter to preview changes, and implement exception handling for application-specific requirements. Some STIG controls may require application code changes or infrastructure redesign.

Best Practices for PowerSTIG Deployment

  • Start with Member Servers: Test on non-critical systems before domain controllers
  • Use Version Control: Store all DSC configurations and exceptions in Git
  • Implement Progressive Rollout: Apply STIGs to 10% of fleet, validate, then expand
  • Monitor Application Impact: Track application errors before/after STIG application
  • Document Exceptions: Maintain business justification for all STIG rule skips
  • Automate Reporting: Schedule weekly compliance reports for stakeholders
  • Plan for Reboots: Many STIG controls require system restarts
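
To support the reporting and drift-detection practices above, a scheduled job can lean on the built-in DSC cmdlets; a minimal sketch (the output path is illustrative):

# Compare the node's current state against its applied STIG configuration
$status = Test-DscConfiguration -Detailed

# Export the rules that have drifted for the weekly report
$status.ResourcesNotInDesiredState |
    Select-Object -Property ResourceId, InDesiredState |
    Export-Csv -Path 'C:\Reports\stig-drift.csv' -NoTypeInformation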

Conclusion

PowerSTIG revolutionizes Windows security compliance by transforming manual STIG implementation into automated, repeatable configurations. Using PowerShell DSC's declarative approach, security engineers can harden entire Windows fleets in hours instead of months. The combination of version control, continuous monitoring, and automatic remediation creates a robust compliance framework suitable for the most demanding defense and enterprise environments.

For organizations running Microsoft infrastructure, PowerSTIG is essential for achieving and maintaining DISA STIG compliance at scale. When paired with Azure Automation or on-premises DSC Pull Servers, it provides enterprise-grade configuration management that meets the rigorous security requirements of DoD, federal agencies, and regulated industries.

Security October 2025

OAuth 2.0 vs OpenID Connect: Understanding Modern Authentication

12 min read
OAuth2 OIDC Security Authentication

A comprehensive analysis of OAuth 2.0 and OpenID Connect (OIDC), exploring their differences, use cases, and security implications in modern enterprise environments.

Understanding the Fundamentals

OAuth 2.0 is an authorization framework designed to grant third-party applications limited access to resources without exposing user credentials. OIDC, built on top of OAuth 2.0, adds an authentication layer that provides identity verification.

Key Differences

OAuth 2.0

  • Purpose: Authorization (access delegation)
  • Output: Access tokens for API access
  • Use Case: "App X wants to access your Google Drive"
  • Token Type: Opaque access tokens

OpenID Connect

  • Purpose: Authentication (identity verification)
  • Output: ID tokens + access tokens
  • Use Case: "Sign in with Google"
  • Token Type: JWT ID tokens with user claims

Security Considerations

When implementing OAuth 2.0 or OIDC, several security best practices must be followed:

  • Use PKCE (Proof Key for Code Exchange) for public clients to prevent authorization code interception
  • Validate redirect URIs strictly to prevent open redirect vulnerabilities
  • Implement state parameters to prevent CSRF attacks
  • Use short-lived access tokens with refresh token rotation
  • Validate JWT signatures and claims (iss, aud, exp) rigorously
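
As a sketch of the last point, validating an OIDC ID token with the PyJWT library might look like this (the issuer, client ID, and JWKS URL are placeholders for your identity provider's values):

import jwt  # PyJWT
from jwt import PyJWKClient

ISSUER = "https://keycloak.example.com/realms/homelab"   # placeholder issuer
AUDIENCE = "my-client-id"                                 # placeholder client ID

def validate_id_token(id_token: str) -> dict:
    # Fetch the signing key advertised by the provider's JWKS endpoint
    jwks_client = PyJWKClient(f"{ISSUER}/protocol/openid-connect/certs")
    signing_key = jwks_client.get_signing_key_from_jwt(id_token)

    # Verify the signature plus the iss, aud, and exp claims
    return jwt.decode(
        id_token,
        signing_key.key,
        algorithms=["RS256"],
        audience=AUDIENCE,
        issuer=ISSUER,
    )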

Enterprise Implementation with Keycloak

In my homelab environment, I've deployed Keycloak as the identity provider for centralizing authentication across JupyterHub, Rancher, and Elasticsearch. This setup demonstrates:

  • Centralized user management and SSO
  • Role-based access control (RBAC) integration
  • Multi-factor authentication (MFA) enforcement
  • Token refresh and session management

Common Pitfalls

Don't use OAuth 2.0 for authentication alone. OAuth access tokens are not designed to convey user identity. Always use OIDC ID tokens for authentication and OAuth access tokens for API authorization.

Conclusion

Understanding the distinction between OAuth 2.0 and OIDC is crucial for implementing secure authentication and authorization systems. OAuth 2.0 handles what you can access, while OIDC handles who you are. Modern applications typically use both: OIDC for login and OAuth 2.0 for API access.

Infrastructure November 2025

Container Image Migration: From Kubernetes Nodes to Private Registries

20 min read
Kubernetes Containers ACR Infrastructure DevOps

Learn how to extract container images from Kubernetes cluster nodes and migrate them to private registries like Azure Container Registry (ACR), Harbor, or Docker Hub. This comprehensive guide covers manual workflows, automation scripts, and enterprise best practices for container image lifecycle management.

Why Migrate Container Images?

Organizations migrate container images to private registries for several critical reasons:

  • Security & Compliance: Control who can access images, scan for vulnerabilities, and enforce policies
  • Availability: Eliminate dependency on public registry uptime and rate limits
  • Performance: Reduce latency by hosting images closer to workloads
  • Image Preservation: Backup images that may be removed from public registries
  • Air-Gapped Environments: Support disconnected networks in defense/healthcare
  • Cost Control: Avoid Docker Hub rate limits (200 pulls/6hrs for free tier)

Real-World Scenario: PostgreSQL Image Crisis

In my homelab Kubernetes cluster, I encountered a critical situation: the PostgreSQL image (bitnami/postgresql:17.6.0-debian-12-r0) was running production workloads but no longer existed in Docker Hub. Bitnami had removed this specific tag, making redeployment impossible.

The solution? Export the running image directly from the Kubernetes node and migrate it to Azure Container Registry. This process ensured business continuity and eliminated future dependency on public registry availability.

Architecture Overview

┌─────────────────┐
│ Kubernetes Node │ ──── SSH ────> Export image with containerd
│  (containerd)   │                (k3s ctr / ctr / crictl)
└─────────────────┘
         │
         │ SCP (copy .tar file)
         ▼
┌─────────────────┐
│   Local Mac     │ ──── Load ────> Import into Docker
│    (Docker)     │
└─────────────────┘
         │
         │ Push
         ▼
┌─────────────────┐
│ Container Reg.  │ ──── Store ───> ACR / Docker Hub / Harbor
│  (ACR/Harbor)   │
└─────────────────┘

Step-by-Step Migration Process

Step 1: Identify the Target Image

First, locate which node is running the container with your target image:

# List all pods with images and nodes
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"/"}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\t"}{.spec.nodeName}{"\n"}{end}' | column -t

# Example output:
# keycloak/postgres-0    bitnami/postgresql:17.6.0-debian-12-r0    worker3

# Get specific pod details
kubectl get pod postgres-0 -n keycloak -o wide

Step 2: Export from Kubernetes Node

SSH into the node and export the image using the container runtime CLI. For K3s (which uses containerd):

# SSH into the node
ssh ubuntu@worker3

# Export the image to tar file
sudo k3s ctr images export /tmp/postgresql.tar docker.io/bitnami/postgresql:17.6.0-debian-12-r0

# Verify export
ls -lh /tmp/postgresql.tar
# Output: -rw-r--r-- 1 root root 105M Nov 8 10:23 /tmp/postgresql.tar

Step 3: Transfer to Local Machine

Copy the exported tar file from the node to your local workstation:

# Using SCP
scp ubuntu@worker3:/tmp/postgresql.tar /tmp/postgresql.tar

# For large images, use compression:
ssh ubuntu@worker3 "sudo cat /tmp/postgresql.tar | gzip" | gunzip > /tmp/postgresql.tar

# Or use rsync for resume capability:
rsync -avz --progress ubuntu@worker3:/tmp/postgresql.tar /tmp/

Step 4: Load into Docker

Import the tar file into your local Docker environment:

# Load the image
docker load -i /tmp/postgresql.tar

# Output:
# Loaded image: docker.io/bitnami/postgresql:17.6.0-debian-12-r0

# Verify it loaded successfully
docker images | grep postgresql

Step 5: Tag for Target Registry

Tag the image with your private registry URL:

# Azure Container Registry (ACR)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0

# Docker Hub (replace 'username' with your Docker Hub username)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 username/postgresql:17.6.0-debian-12-r0

# Harbor (self-hosted)
docker tag bitnami/postgresql:17.6.0-debian-12-r0 harbor.example.com/myproject/postgresql:17.6.0-debian-12-r0

# GitHub Container Registry
docker tag bitnami/postgresql:17.6.0-debian-12-r0 ghcr.io/username/postgresql:17.6.0-debian-12-r0

Step 6: Authenticate and Push

Login to your target registry and push the image:

# Azure Container Registry
az acr login --name jdevnet
docker push jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0

# Docker Hub
docker login --username myusername
docker push myusername/postgresql:17.6.0-debian-12-r0

# Harbor
docker login harbor.example.com --username admin
docker push harbor.example.com/myproject/postgresql:17.6.0-debian-12-r0

Automation Script for Production

For enterprise environments, automate the entire process with this reusable bash script:

#!/bin/bash
set -e

# Configuration
NODE_USER="ubuntu"
NODE_HOST="worker3"
SOURCE_IMAGE="docker.io/bitnami/postgresql:17.6.0-debian-12-r0"
REGISTRY_URL="jdevnet.azurecr.io"
TARGET_REPO="postgresql"
TARGET_TAG="17.6.0-debian-12-r0"
TEMP_FILE="image-export.tar"

TARGET_IMAGE="${REGISTRY_URL}/${TARGET_REPO}:${TARGET_TAG}"

echo "========================================="
echo "Container Image Migration Script"
echo "========================================="
echo "Source: ${SOURCE_IMAGE}"
echo "Target: ${TARGET_IMAGE}"

# Export from node
echo "📦 Exporting image from ${NODE_HOST}..."
ssh ${NODE_USER}@${NODE_HOST} "sudo k3s ctr images export /tmp/${TEMP_FILE} ${SOURCE_IMAGE}"

# Copy to local
echo "📥 Copying to local machine..."
scp ${NODE_USER}@${NODE_HOST}:/tmp/${TEMP_FILE} /tmp/${TEMP_FILE}

# Cleanup node
echo "🧹 Cleaning up node..."
ssh ${NODE_USER}@${NODE_HOST} "sudo rm /tmp/${TEMP_FILE}"

# Load into Docker
echo "📦 Loading into Docker..."
docker load -i /tmp/${TEMP_FILE}

# Tag for registry
echo "🏷️  Tagging for registry..."
docker tag ${SOURCE_IMAGE} ${TARGET_IMAGE}

# Push to registry
echo "🚀 Pushing to registry..."
docker push ${TARGET_IMAGE}

# Cleanup local
echo "🧹 Cleaning up local files..."
rm /tmp/${TEMP_FILE}
docker rmi ${SOURCE_IMAGE} ${TARGET_IMAGE}

echo "✅ Migration Complete!"
echo "Image available at: ${TARGET_IMAGE}"

Container Runtime Compatibility

Different Kubernetes distributions use different container runtimes. Here's how to export images from each:

K3s (containerd)

sudo k3s ctr images export \
  /tmp/image.tar \
  docker.io/library/nginx:latest

Standard containerd

sudo ctr -n k8s.io images export \
  /tmp/image.tar \
  docker.io/library/nginx:latest

Docker

sudo docker save \
  -o /tmp/image.tar \
  nginx:latest

CRI-O

sudo podman save \
  -o /tmp/image.tar \
  nginx:latest

Bulk Migration Strategy

For migrating multiple images at scale, use a structured approach with inventory tracking:

#!/bin/bash

# Image inventory (format: "node|source_image|target_name")
IMAGES=(
  "worker1|docker.io/nginx:latest|nginx:latest"
  "worker2|quay.io/prometheus/prometheus:v2.40.0|prometheus:v2.40.0"
  "worker3|docker.io/bitnami/postgresql:17.6.0|postgresql:17.6.0"
  "worker1|elastic/elasticsearch:9.1.4|elastic/elasticsearch:9.1.4"
  "worker2|elastic/kibana:9.1.4|elastic/kibana:9.1.4"
)

REGISTRY="jdevnet.azurecr.io"
NODE_USER="ubuntu"
SUCCESS_COUNT=0
FAIL_COUNT=0

for entry in "${IMAGES[@]}"; do
  IFS='|' read -r node source target <<< "$entry"
  
  echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
  echo "Migrating: $source from $node"
  
  if ssh ${NODE_USER}@${node} "sudo k3s ctr images export /tmp/temp.tar ${source}" && \
     scp ${NODE_USER}@${node}:/tmp/temp.tar /tmp/temp.tar && \
     docker load -i /tmp/temp.tar && \
     docker tag ${source} ${REGISTRY}/${target} && \
     docker push ${REGISTRY}/${target}; then
    
    echo "✅ SUCCESS: $target"
    ((SUCCESS_COUNT++))
  else
    echo "❌ FAILED: $target"
    ((FAIL_COUNT++))
  fi
  
  # Cleanup
  ssh ${NODE_USER}@${node} "sudo rm -f /tmp/temp.tar" 2>/dev/null
  rm -f /tmp/temp.tar
  docker rmi ${source} ${REGISTRY}/${target} 2>/dev/null
done

echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
echo "Migration Summary:"
echo "  ✅ Success: $SUCCESS_COUNT"
echo "  ❌ Failed: $FAIL_COUNT"
echo "━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"

Registry-Specific Configuration

Azure Container Registry (ACR)

ACR provides enterprise-grade features with three pricing tiers:

  • Basic: $5/month, 10GB storage, perfect for small teams
  • Standard: $20/month, 100GB storage, webhooks for CI/CD
  • Premium: ~$50/month, geo-replication, private endpoints, firewall rules

# Enable ACR authentication
az acr update --name jdevnet --admin-enabled true

# Get credentials
az acr credential show --name jdevnet

# Create Kubernetes secret for image pulls
kubectl create secret docker-registry acr-secret \
  --docker-server=jdevnet.azurecr.io \
  --docker-username=jdevnet \
  --docker-password="$(az acr credential show --name jdevnet --query 'passwords[0].value' -o tsv)" \
  --namespace=default
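
Workloads then reference that pull secret in their pod spec; a minimal fragment reusing the image from this article might look like:

apiVersion: v1
kind: Pod
metadata:
  name: postgres-test
spec:
  imagePullSecrets:
    - name: acr-secret          # secret created above
  containers:
    - name: postgresql
      image: jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0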

Harbor (Self-Hosted)

Harbor offers enterprise features with no licensing costs:

  • Built-in vulnerability scanning with Trivy/Clair
  • Role-based access control (RBAC)
  • Image replication across registries
  • Helm chart repository

Security Best Practices

When migrating container images, follow these security guidelines:

  • Enable Vulnerability Scanning: Use ACR's integrated scanning or Harbor with Trivy
  • Implement Image Signing: Use Docker Content Trust or Notary v2
  • Restrict Registry Access: Use firewall rules or private endpoints (ACR Premium)
  • Rotate Credentials: Regularly rotate registry passwords and Kubernetes secrets
  • Audit Image Pulls: Monitor registry access logs for unauthorized activity
  • Tag Immutability: Prevent tag overwriting in production registries
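
As a rough sketch of the tag-immutability and signing items with ACR (registry and image names reuse the earlier example; Docker Content Trust also requires signing support to be enabled on the registry side):

# Lock a production tag so it cannot be overwritten or deleted (ACR)
az acr repository update \
  --name jdevnet \
  --image postgresql:17.6.0-debian-12-r0 \
  --write-enabled false --delete-enabled false

# Require signed images when pushing/pulling with the Docker CLI
export DOCKER_CONTENT_TRUST=1
docker push jdevnet.azurecr.io/postgresql:17.6.0-debian-12-r0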

Important: Always test migrated images in development environments before production deployment. Verify architecture compatibility (AMD64 vs ARM64) and that all image layers transferred correctly. Some images may have platform-specific dependencies that fail when migrated between architectures.

Troubleshooting Common Issues

Issue: "Permission denied" during export

Solution: Ensure you're using sudo when running containerd commands:

sudo k3s ctr images export /tmp/image.tar docker.io/bitnami/postgresql:17.6.0-debian-12-r0

Issue: Out of disk space on node

Solution: Stream export directly without storing on node:

ssh user@node "sudo k3s ctr images export - docker.io/bitnami/postgresql:17.6.0-debian-12-r0" > /tmp/image.tar

Issue: Large image transfer taking too long

Solution: Use compression during transfer:

ssh user@node "sudo k3s ctr images export - docker.io/bitnami/postgresql:17.6.0-debian-12-r0 | gzip" > /tmp/image.tar.gz
gunzip -c /tmp/image.tar.gz | docker load

Issue: Multi-architecture image problems

Solution: docker tag has no platform option; instead, export only the architecture you need from the node:

sudo k3s ctr images export --platform linux/arm64 /tmp/image.tar docker.io/bitnami/postgresql:17.6.0-debian-12-r0

Integration with CI/CD Pipelines

Automate image migration in GitHub Actions, GitLab CI, or Azure DevOps:

# GitHub Actions Example
name: Migrate Container Images
on:
  workflow_dispatch:
  schedule:
    - cron: '0 2 * * 0'  # Weekly on Sunday at 2 AM

jobs:
  migrate:
    runs-on: ubuntu-latest
    steps:
      - name: Export from cluster node
        run: |
          ssh ${{ secrets.NODE_USER }}@${{ secrets.NODE_HOST }} \
            "sudo k3s ctr images export - ${{ env.SOURCE_IMAGE }}" > /tmp/image.tar
      
      - name: Load into Docker
        run: docker load -i /tmp/image.tar
      
      - name: Login to ACR
        uses: azure/docker-login@v1
        with:
          login-server: ${{ secrets.ACR_SERVER }}
          username: ${{ secrets.ACR_USERNAME }}
          password: ${{ secrets.ACR_PASSWORD }}
      
      - name: Tag and Push
        run: |
          docker tag ${{ env.SOURCE_IMAGE }} ${{ secrets.ACR_SERVER }}/${{ env.TARGET_IMAGE }}
          docker push ${{ secrets.ACR_SERVER }}/${{ env.TARGET_IMAGE }}

Cost-Benefit Analysis

Understanding the financial and operational impact of private registries:

Costs

  • ACR Basic: $5/month
  • Harbor (self-hosted): Infrastructure costs only
  • Storage: ~$0.02/GB/month (ACR)
  • Bandwidth: Free within Azure regions

Benefits

  • No Docker Hub rate limits (200 pulls/6hrs)
  • Reduced deployment latency (30-60% faster)
  • 99.9% SLA for enterprise registries
  • Vulnerability scanning included

Conclusion

Migrating container images from Kubernetes nodes to private registries is a critical skill for infrastructure engineers managing production workloads. This process ensures business continuity, enhances security, and eliminates dependency on public registry availability.

In my homelab environment, migrating 12 critical images to Azure Container Registry took under 2 hours and cost $5/month. This investment provided complete control over the container supply chain, vulnerability scanning, and eliminated Docker Hub rate limiting issues.

For defense, healthcare, or financial services environments with strict compliance requirements, private registries are non-negotiable. The combination of ACR's enterprise features and Harbor's self-hosted flexibility provides options for every organization's security posture and budget.

Key Takeaways

  • ✅ Private registries eliminate public registry dependencies and rate limits
  • ✅ Use k3s ctr images export for K3s nodes, ctr -n k8s.io images export for standard containerd
  • ✅ Automate bulk migrations with bash scripts or CI/CD pipelines
  • ✅ Enable vulnerability scanning and image signing for security
  • ✅ ACR Basic tier ($5/month) sufficient for small teams and homelabs
  • ✅ Always test migrated images before production deployment

Edge Computing December 2025

Building an Autonomous Edge AI Pipeline: From Sensor to Insight

10 min read
Edge AI Jetson Orin Kubernetes YOLOv8 Elasticsearch

Deploying real-time computer vision at the edge requires a robust architecture that balances performance, reliability, and scalability. This article explores the high-level design of an autonomous detection pipeline built on the NVIDIA Jetson Orin Nano platform.

The Challenge: Real-Time Intelligence at the Edge

Traditional surveillance systems rely on centralized processing, sending massive amounts of video data to the cloud for analysis. This approach introduces latency, consumes bandwidth, and creates a single point of failure.

Our goal was to build a decentralized "Edge AI" system where intelligence lives on the device itself. The system needed to:

  • Process Video Locally: Run AI inference on-device without cloud dependency.
  • Operate Autonomously: Continue functioning even if the network is severed.
  • Sync Intelligently: Transmit only metadata (detections) to the central command, not raw video.

System Architecture

The solution leverages a microservices architecture orchestrated by Kubernetes (K3s) and Docker.

1. The Sensor Node (Raspberry Pi Zero)

A lightweight, low-power "eye" that captures video and streams it over the local network via UDP. It performs no analysis, keeping its footprint minimal and energy-efficient.

2. The Edge Brain (NVIDIA Jetson Orin Nano)

The core of the system. The Jetson runs a custom Python agent inside a Docker container that:

  • Ingests: Receives the raw UDP video stream.
  • Analyzes: Uses the YOLOv8 model (You Only Look Once) to detect objects in real-time.
  • Filters: Applies confidence thresholds to reduce false positives.
  • Visualizes: Draws bounding boxes for local debugging (optional).
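
A stripped-down sketch of that detection loop using the ultralytics API (the stream address and confidence threshold are illustrative, not the exact production agent):

import cv2
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # nano model fits the Orin Nano comfortably
stream = cv2.VideoCapture("udp://0.0.0.0:5000")  # illustrative UDP listen address

while True:
    ok, frame = stream.read()
    if not ok:
        continue
    # Run inference and keep only confident detections
    results = model(frame, conf=0.5, verbose=False)
    for box in results[0].boxes:
        label = model.names[int(box.cls[0])]
        confidence = float(box.conf[0])
        print(f"detected {label} ({confidence:.2f})")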

3. The Data Lake (Elasticsearch & Kibana)

Instead of storing video, we store insights. Every detection is converted into a JSON document containing the object type, confidence score, timestamp, and location. This data is indexed in Elasticsearch, allowing for:

  • Real-time Dashboards: Visualize threat levels and activity patterns in Kibana.
  • Historical Analysis: Query past events instantly.
  • Alerting: Trigger notifications based on specific criteria (e.g., "Person detected after midnight").
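
A detection document of that shape can be indexed with the official Python client; a minimal sketch with illustrative host, credentials, and index name (TLS handling is discussed under Security below):

from datetime import datetime, timezone
from elasticsearch import Elasticsearch

# Client setup simplified for brevity; certificate handling is covered in the Security section
es = Elasticsearch("https://192.168.1.50:30920",
                   basic_auth=("elastic", "changeme"),
                   verify_certs=False)

detection = {
    "@timestamp": datetime.now(timezone.utc).isoformat(),
    "object_type": "person",
    "confidence": 0.87,
    "location": "front-gate",
}
es.index(index="edge-detections", document=detection)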

Real-World Applications & Impact

While this architecture is technically fascinating, its primary value lies in its versatility across critical domains, from disaster response to industrial operations.

Disaster Response & Security

In the aftermath of natural disasters, power and communications are often lost. This system can be deployed off-grid to monitor evacuated neighborhoods. By detecting "Person" or "Vehicle" classes in restricted zones, it alerts authorities to potential looting or unauthorized access without requiring human patrols.

Supply Chain Visibility

In logistics hubs, the system tracks the movement of assets and vehicles in real-time. By analyzing dwell times and traffic flow, operators can identify bottlenecks and predict delivery delays, enhancing overall supply chain resilience and predictability.

Industrial Safety

For hazardous work environments, the model can be adapted to detect Personal Protective Equipment (PPE). It automatically flags safety violations or unauthorized entry into dangerous zones, reducing liability and preventing accidents before they occur.

Environmental Protection

Deployed in protected nature reserves, the system can monitor for illegal activities like poaching or logging. By training models on specific wildlife or vehicle types, rangers receive real-time intelligence on intrusions in remote areas, acting as a force multiplier for conservation efforts.

Energy Independence

The low-power architecture (5W for Pi, 15W for Jetson) enables diverse power options beyond the grid. From solar arrays and Power-over-Ethernet (PoE) to battery backup HATs, this system promotes energy equity, allowing deployment in infrastructure-poor regions where consistent electricity is a luxury.

Cost-Effective Scalability

Traditional surveillance towers cost thousands of dollars. This distributed approach uses commodity hardware ($15 Pi Zero sensors) feeding into a single centralized processor. This allows for wide-area coverage at a fraction of the cost, making it viable for resource-constrained organizations.

Key Technical Hurdles

Building this pipeline wasn't without challenges. We had to solve several integration issues:

Networking

Exposing the Elasticsearch database to the edge agent required bypassing standard Kubernetes ClusterIP restrictions. We utilized a NodePort service strategy to create a stable entry point for the external agent.
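
A NodePort service for that purpose might look like the following sketch (namespace, labels, and port numbers are assumptions about the deployment):

apiVersion: v1
kind: Service
metadata:
  name: elasticsearch-nodeport
  namespace: elastic
spec:
  type: NodePort
  selector:
    app: elasticsearch
  ports:
    - port: 9200        # service port inside the cluster
      targetPort: 9200  # container port
      nodePort: 30920   # stable entry point on every node for the edge agent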

Security

Securing the connection between the edge agent and the database involved implementing custom SSL contexts to handle self-signed certificates in a development environment, ensuring encrypted transport without complex PKI infrastructure.
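
A minimal sketch of that workaround with the elasticsearch-py client, assuming a lab-only self-signed certificate (endpoint and credentials are placeholders):

import ssl
from elasticsearch import Elasticsearch

# Development-only: trust the cluster's self-signed certificate without a full PKI
ssl_ctx = ssl.create_default_context()
ssl_ctx.check_hostname = False
ssl_ctx.verify_mode = ssl.CERT_NONE

es = Elasticsearch(
    "https://192.168.1.50:30920",        # NodePort endpoint from the service above
    basic_auth=("elastic", "changeme"),  # placeholder credentials
    ssl_context=ssl_ctx,
)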

Conclusion: The Power of Decentralization

This project demonstrates that enterprise-grade AI surveillance is now accessible to independent researchers and hobbyists. By combining the raw power of the Jetson Orin Nano with the flexibility of Kubernetes and the analytical depth of the Elastic Stack, we've built a system that rivals commercial offerings at a fraction of the cost.

More importantly, this architecture represents a shift towards decentralized intelligence. Advanced surveillance and predictive capabilities are no longer the exclusive domain of large corporations or state actors. By leveraging open-source software and commodity hardware, communities and organizations can deploy their own sovereign security infrastructure—transparent, auditable, and under their direct control.

Infrastructure Coming Soon

Building a Production K3s Cluster: From Bare Metal to GitOps

Est. 20 min read
Kubernetes K3s GitOps Homelab

A comprehensive guide to deploying a production-grade K3s cluster on bare metal, including automated certificate management, persistent storage with NFS, Rancher installation, and GitOps workflows with Ansible automation.

In Progress
Machine Learning Coming Soon

NVIDIA Jetson Orin Nano: Edge AI for Real-Time Computer Vision

Est. 15 min read
Edge AI NVIDIA Computer Vision TensorRT

Exploring the NVIDIA Jetson Orin Nano's capabilities for edge AI workloads, including TensorRT optimization, real-time object detection, and integration with Kubernetes for distributed inference.

Hardware Pending
Data Science Coming Soon

Health Data Analytics: Mining Apple Health with Elasticsearch

Est. 18 min read
Data Science Python Elasticsearch Health Analytics

Building a comprehensive health data analytics pipeline using Apple Health export, Python data transformation, and Elasticsearch time-series analysis with Kibana visualizations.

In Progress