The Complete NGINX on Ubuntu Series: Part 19 – Container Deployment with Docker and Kubernetes

Welcome to Part 19 of our comprehensive NGINX on Ubuntu series! We’ll containerize NGINX using Docker and deploy it with Kubernetes for scalable, orchestrated container management.

Container Deployment Fundamentals

Container deployment provides portability, scalability, and consistency across environments. Docker packages NGINX with its dependencies, while Kubernetes orchestrates containers at scale.

graph TD
    A[Container Strategy] --> B[Docker Images]
    A --> C[Kubernetes Orchestration]
    A --> D[Service Mesh]
    A --> E[Scaling & Management]
    
    B --> F[Base Image<br/>Custom Config<br/>Multi-stage Build]
    C --> G[Pods<br/>Services<br/>Ingress Controllers]
    D --> H[Traffic Management<br/>Load Balancing<br/>Security Policies]
    E --> I[Auto-scaling<br/>Rolling Updates<br/>Health Checks]
    
    J[Container Benefits] --> K[Portability]
    J --> L[Scalability]
    J --> M[Consistency]
    J --> N[Efficiency]
    
    style A fill:#e1f5fe
    style J fill:#e8f5e8
    style F fill:#fff3e0
    style G fill:#e3f2fd
    style H fill:#e8f5e8
    style I fill:#fff3e0

Docker Setup and NGINX Image

# Install Docker
sudo apt update
sudo apt install -y docker.io docker-compose
sudo systemctl enable docker
sudo usermod -aG docker $USER  # log out and back in for the group change to apply

# Create project structure
mkdir -p /opt/nginx-containers/{docker,kubernetes,configs}
cd /opt/nginx-containers
# Create custom NGINX Dockerfile
cat > docker/Dockerfile << 'EOF'
# Multi-stage NGINX container build
FROM nginx:alpine AS base

# Install additional packages
RUN apk add --no-cache \
    curl \
    bash \
    certbot \
    certbot-nginx

# Create directories
RUN mkdir -p /var/cache/nginx /var/log/nginx /etc/nginx/ssl

FROM base AS config
# Copy custom configurations
COPY configs/nginx.conf /etc/nginx/nginx.conf
COPY configs/default.conf /etc/nginx/conf.d/default.conf

FROM config AS final
# Add health check
HEALTHCHECK --interval=30s --timeout=3s --start-period=5s --retries=3 \
    CMD curl -f http://localhost/health || exit 1

# Expose ports
EXPOSE 80 443

# Start NGINX
CMD ["nginx", "-g", "daemon off;"]
EOF
# Create optimized NGINX configuration for containers
cat > configs/nginx.conf << 'EOF'
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    multi_accept on;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;

    gzip on;
    gzip_vary on;
    gzip_comp_level 6;
    gzip_types text/plain text/css application/json application/javascript;

    include /etc/nginx/conf.d/*.conf;
}
EOF
# Create default virtual host for container
cat > configs/default.conf << 'EOF'
server {
    listen 80;
    server_name _;
    root /usr/share/nginx/html;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }

    location /health {
        access_log off;
        return 200 "healthy\n";
        add_header Content-Type text/plain;
    }

    # stub_status endpoint scraped by the Prometheus exporter in docker-compose.yml
    location = /nginx_status {
        stub_status;
        access_log off;
    }

    location = /favicon.ico {
        log_not_found off;
        access_log off;
    }
}
EOF
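Before wiring the image into Compose or Kubernetes, a quick local smoke test catches most Dockerfile and config mistakes. The sketch below assumes you are in /opt/nginx-containers with the files created above; the `nginx:custom` tag is just an arbitrary local name, and the guard turns the whole thing into a no-op on machines without Docker.

```shell
#!/usr/bin/env bash
# Smoke-test sketch: build the image, then ask NGINX inside it to
# validate the baked-in configuration without starting the server.
if command -v docker >/dev/null 2>&1 && [ -f docker/Dockerfile ]; then
    docker build -f docker/Dockerfile -t nginx:custom . \
        && docker run --rm nginx:custom nginx -t \
        && result="config-ok"
else
    result="skipped"   # Docker missing, or run from the wrong directory
fi
echo "smoke test: $result"
```

`nginx -t` exits non-zero on any syntax error in nginx.conf or default.conf, so a failing build here is much cheaper to debug than a crash-looping container later.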

Docker Compose Configuration

# Create Docker Compose for local development
cat > docker-compose.yml << 'EOF'
version: '3.8'

services:
  nginx:
    build: 
      context: .
      dockerfile: docker/Dockerfile
    ports:
      - "80:80"
      - "443:443"
    volumes:
      # Mount only default.conf: mounting all of ./configs into conf.d would
      # pull nginx.conf into the conf.d include and break startup
      - ./configs/default.conf:/etc/nginx/conf.d/default.conf:ro
      - ./logs:/var/log/nginx
      - ./html:/usr/share/nginx/html:ro
    environment:
      - NGINX_ENVSUBST_OUTPUT_DIR=/etc/nginx/conf.d
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost/health"]
      interval: 30s
      timeout: 10s
      retries: 3
    restart: unless-stopped
    networks:
      - nginx-network

  nginx-exporter:
    image: nginx/nginx-prometheus-exporter:latest
    ports:
      - "9113:9113"
    command:
      - '-nginx.scrape-uri=http://nginx/nginx_status'
    depends_on:
      - nginx
    networks:
      - nginx-network

networks:
  nginx-network:
    driver: bridge
EOF
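docker-compose can validate the file before anything starts, which catches indentation and mount-path mistakes early. A guarded sketch (run from /opt/nginx-containers; the curl probe is left commented out since it needs the stack running):

```shell
#!/usr/bin/env bash
# Validate the Compose file; degrades to a no-op where docker-compose
# is unavailable or the file has not been created yet.
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    docker-compose config -q && compose_ok="valid"
    # Once validated, start the stack and probe the health endpoint:
    # docker-compose up -d
    # curl -fsS http://localhost/health
else
    compose_ok="skipped"
fi
echo "compose file: ${compose_ok:-invalid}"
```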

Kubernetes Deployment

graph TD
    A[Kubernetes Deployment] --> B[ConfigMaps]
    A --> C[Deployments]
    A --> D[Services]
    A --> E[Ingress]
    
    B --> F[NGINX Config<br/>SSL Certificates<br/>Environment Variables]
    C --> G[Pod Replicas<br/>Rolling Updates<br/>Resource Limits]
    D --> H[Load Balancing<br/>Service Discovery<br/>Port Exposure]
    E --> I[External Access<br/>TLS Termination<br/>Path Routing]
    
    style A fill:#e1f5fe
    style F fill:#fff3e0
    style G fill:#e3f2fd
    style H fill:#e8f5e8
    style I fill:#fff3e0
# Install Kubernetes (k3s for lightweight setup)
curl -sfL https://get.k3s.io | sh -
sudo chmod 644 /etc/rancher/k3s/k3s.yaml
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
# Create Kubernetes namespace and ConfigMap
cat > kubernetes/01-namespace.yaml << 'EOF'
apiVersion: v1
kind: Namespace
metadata:
  name: nginx-system
  labels:
    name: nginx-system
EOF
# Create ConfigMap for NGINX configuration
cat > kubernetes/02-configmap.yaml << 'EOF'
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-config
  namespace: nginx-system
data:
  nginx.conf: |
    user nginx;
    worker_processes auto;
    error_log /var/log/nginx/error.log warn;
    pid /var/run/nginx.pid;
    
    events {
        worker_connections 1024;
        use epoll;
    }
    
    http {
        include /etc/nginx/mime.types;
        default_type application/octet-stream;
        
        sendfile on;
        keepalive_timeout 65;
        gzip on;
        
        include /etc/nginx/conf.d/*.conf;
    }
  
  default.conf: |
    server {
        listen 80;
        server_name _;
        root /usr/share/nginx/html;
        
        location / {
            try_files $uri $uri/ =404;
        }
        
        location /health {
            access_log off;
            return 200 "healthy\n";
            add_header Content-Type text/plain;
        }
    }
EOF
# Create NGINX Deployment
cat > kubernetes/03-deployment.yaml << 'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  namespace: nginx-system
  labels:
    app: nginx
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"
        volumeMounts:
        - name: nginx-config
          mountPath: /etc/nginx/nginx.conf
          subPath: nginx.conf
        - name: nginx-config
          mountPath: /etc/nginx/conf.d/default.conf
          subPath: default.conf
        livenessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 30
          periodSeconds: 10
        readinessProbe:
          httpGet:
            path: /health
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 5
      volumes:
      - name: nginx-config
        configMap:
          name: nginx-config
EOF
# Create Service and Ingress
cat > kubernetes/04-service.yaml << 'EOF'
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  namespace: nginx-system
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: nginx-system
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
spec:
  ingressClassName: traefik
  tls:
  - hosts:
    - nginx.example.com
    secretName: nginx-tls
  rules:
  - host: nginx.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
EOF
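All four manifests can be checked client-side before they touch the cluster: kubectl's dry-run validates the YAML against the API schemas without creating anything. A sketch, assuming the manifests live under /opt/nginx-containers/kubernetes as created above:

```shell
#!/usr/bin/env bash
# Client-side validation of the manifests: nothing is created on the cluster.
if command -v kubectl >/dev/null 2>&1 && [ -d kubernetes ]; then
    kubectl apply --dry-run=client -f kubernetes/ && manifests="valid"
else
    manifests="skipped"   # kubectl missing, or run from the wrong directory
fi
echo "manifest check: ${manifests:-invalid}"
```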

Horizontal Pod Autoscaler

# Create HPA for auto-scaling
cat > kubernetes/05-hpa.yaml << 'EOF'
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-hpa
  namespace: nginx-system
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 100
        periodSeconds: 60
EOF
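The HPA only functions if the cluster is actually publishing resource metrics. k3s ships metrics-server by default, but on other distributions it may need to be installed, and until it is the HPA reports `<unknown>` utilization and never scales. A quick guarded check that pod metrics are flowing:

```shell
#!/usr/bin/env bash
# Verify resource metrics are available -- a prerequisite for the HPA above.
if command -v kubectl >/dev/null 2>&1; then
    if kubectl top pods -n nginx-system >/dev/null 2>&1; then
        metrics="available"
    else
        metrics="missing"   # install/enable metrics-server, then retry
    fi
else
    metrics="skipped"
fi
echo "pod metrics: $metrics"
```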

Helm Chart for NGINX

# Create Helm chart structure
helm create nginx-chart
cd nginx-chart

# Update values.yaml
cat > values.yaml << 'EOF'
replicaCount: 3

image:
  repository: nginx
  pullPolicy: IfNotPresent
  tag: "alpine"

nameOverride: ""
fullnameOverride: ""

service:
  type: ClusterIP
  port: 80

ingress:
  enabled: true
  className: "traefik"
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
  hosts:
    - host: nginx.example.com
      paths:
        - path: /
          pathType: Prefix
  tls:
    - secretName: nginx-tls
      hosts:
        - nginx.example.com

resources:
  limits:
    cpu: 100m
    memory: 128Mi
  requests:
    cpu: 50m
    memory: 64Mi

autoscaling:
  enabled: true
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70

nodeSelector: {}
tolerations: []
affinity: {}
EOF
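helm lint and helm template give fast feedback on the chart without touching the cluster: lint checks chart structure, while template renders the manifests locally so you can eyeball the Ingress host and resource limits from values.yaml. A guarded sketch, run from inside the nginx-chart directory:

```shell
#!/usr/bin/env bash
# Render the chart locally; neither command needs cluster access.
if command -v helm >/dev/null 2>&1 && [ -f Chart.yaml ]; then
    helm lint . \
        && helm template nginx-release . >/dev/null \
        && chart="ok"
else
    chart="skipped"   # helm missing, or run outside the chart directory
fi
echo "chart status: ${chart:-failed}"
```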

Container Management Scripts

# Create container management script
cat > /usr/local/bin/nginx-container-mgmt.sh << 'EOF'
#!/bin/bash

# NGINX Container Management
DOCKER_IMAGE="nginx:custom"
KUBE_NAMESPACE="nginx-system"
PROJECT_DIR="/opt/nginx-containers"

# All docker-compose and kubectl -f paths below are relative to the project dir
cd "$PROJECT_DIR" || exit 1

docker_build() {
    echo "Building custom NGINX Docker image..."
    docker build -f docker/Dockerfile -t "$DOCKER_IMAGE" .
}

docker_run() {
    echo "Running NGINX container..."
    docker-compose up -d
    echo "Container started. Access: http://localhost"
}

docker_stop() {
    echo "Stopping NGINX container..."
    docker-compose down
}

kube_deploy() {
    echo "Deploying to Kubernetes..."
    kubectl apply -f kubernetes/
    kubectl rollout status deployment/nginx-deployment -n $KUBE_NAMESPACE
}

kube_status() {
    echo "=== Kubernetes Status ==="
    kubectl get pods -n $KUBE_NAMESPACE
    kubectl get services -n $KUBE_NAMESPACE
    kubectl get ingress -n $KUBE_NAMESPACE
}

kube_scale() {
    local replicas=${1:-3}
    echo "Scaling to $replicas replicas..."
    kubectl scale deployment nginx-deployment --replicas=$replicas -n $KUBE_NAMESPACE
}

kube_logs() {
    kubectl logs -l app=nginx -n $KUBE_NAMESPACE --tail=50
}

case "${1:-help}" in
    docker-build)
        docker_build
        ;;
    docker-run)
        docker_run
        ;;
    docker-stop)
        docker_stop
        ;;
    kube-deploy)
        kube_deploy
        ;;
    kube-status)
        kube_status
        ;;
    kube-scale)
        kube_scale "$2"
        ;;
    kube-logs)
        kube_logs
        ;;
    *)
        echo "Usage: $0 {docker-build|docker-run|docker-stop|kube-deploy|kube-status|kube-scale|kube-logs}"
        ;;
esac

EOF

sudo chmod +x /usr/local/bin/nginx-container-mgmt.sh

Testing Container Deployment

# Test container deployments

# 1. Build and test Docker image
/usr/local/bin/nginx-container-mgmt.sh docker-build
/usr/local/bin/nginx-container-mgmt.sh docker-run

# Test Docker deployment
curl http://localhost/health

# 2. Deploy to Kubernetes
/usr/local/bin/nginx-container-mgmt.sh kube-deploy

# 3. Check Kubernetes status
/usr/local/bin/nginx-container-mgmt.sh kube-status

# 4. Test auto-scaling
/usr/local/bin/nginx-container-mgmt.sh kube-scale 5

# 5. View logs
/usr/local/bin/nginx-container-mgmt.sh kube-logs

# 6. Test with load
kubectl run -i --tty load-generator --rm --image=busybox --restart=Never -- \
  /bin/sh -c "while sleep 0.01; do wget -q -O- http://nginx-service.nginx-system.svc.cluster.local; done"

# 7. Deploy with Helm (alternative)
helm install nginx-release ./nginx-chart

# 8. Monitor scaling
watch kubectl get hpa nginx-hpa -n nginx-system
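When you are done experimenting, tear everything down in reverse order so nothing keeps consuming cluster resources. A guarded cleanup sketch (the nginx-release name matches the Helm install above; each step tolerates deployments that were never created):

```shell
#!/usr/bin/env bash
# Cleanup sketch: each command is guarded so partial deployments don't abort it.
if command -v kubectl >/dev/null 2>&1; then
    helm uninstall nginx-release 2>/dev/null || true   # only if the Helm route was used
    kubectl delete -f kubernetes/ --ignore-not-found
    cleaned="kubernetes"
fi
if command -v docker-compose >/dev/null 2>&1 && [ -f docker-compose.yml ]; then
    docker-compose down
    cleaned="${cleaned:+$cleaned+}docker"
fi
echo "cleaned up: ${cleaned:-nothing}"
```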

What’s Next?

Excellent! You’ve successfully containerized NGINX with Docker and deployed it using Kubernetes with auto-scaling, health checks, and ingress controllers. Your NGINX deployment is now cloud-native and highly scalable.

Coming up in Part 20: NGINX Edge Computing and IoT Applications

This is Part 19 of our 22-part NGINX series. Your NGINX is now containerized and orchestrated! Next, we’ll explore edge computing applications. Questions? Share them in the comments!
