Migrating From the Community‑Based to F5‑Based Wallarm Ingress Controller

This topic explains why and how to migrate from the Wallarm Ingress Controller based on the Community Ingress NGINX to the new controller based on F5 NGINX Ingress Controller.

Why the migration is required

Previously, Wallarm provided an Ingress Controller based on the Community Ingress NGINX.

In November 2025, the Kubernetes community announced the retirement of this project due to growing maintenance challenges and unresolved technical issues.

Wallarm will fully support this controller (including new feature releases) until March 2026. After that date, the controller will remain functional but will no longer receive updates, bug fixes, or security patches.

Continuing to use it after March 2026 may expose your environment to unresolved defects and security vulnerabilities.

To ensure ongoing support and security, we strongly recommend migrating to a supported deployment option, such as the Wallarm Ingress Controller based on the F5 NGINX Ingress Controller. The sections below describe the migration steps and their benefits.

About the new Ingress Controller

The new Wallarm Ingress Controller is based on the F5 NGINX Ingress Controller and is the recommended replacement for the Community NGINX-based deployment.

It provides long-term stability, vendor-backed support, regular updates and security patches, and advanced traffic management.

For a detailed overview of the changes and new features, see the What's New guide.

NGINX Plus is not supported

The Wallarm Ingress Controller uses the open-source edition of the F5 NGINX Ingress Controller. NGINX Plus is not included and is not supported.

Choosing your migration strategy

You can migrate to the new Wallarm Ingress Controller using one of four strategies. The appropriate option depends on your infrastructure, IP requirements, and tolerance for downtime.

Review the summary table below to determine which approach best fits your environment. Detailed descriptions of each strategy follow.

| Strategy | Downtime | IP changes | Complexity | Best for | Est. time |
|----------|----------|------------|------------|----------|-----------|
| Load balancer | None | No | High | Environments with an external load balancer | 4–8 hours (includes staged rollout and monitoring) |
| DNS switch | None (DNS propagation applies) | Yes | Low | Environments where IP changes are acceptable | 3–4 hours plus DNS propagation time (depends on your TTL setting) |
| Selector swap | None | No | Medium | Production environments with strict IP requirements | 4–6 hours |
| Direct replacement | 5–15 minutes | Yes | Low | Development and staging environments | 2–3 hours (including the downtime window) |

Recommendation

If unsure, use selector swap for production environments and direct replacement for development or staging.

Migration - part 1 (strategy independent)

Purpose: Prepare the new Ingress Controller and validate your converted Ingress configuration without changing production. You will deploy the new controller alongside the existing one, convert Ingress manifests on copies, and test them in a separate namespace. Part 1 is the same for every migration strategy.

Prerequisites

Before starting the migration, ensure the following components meet the minimum version requirements:

  • Kubernetes CLI (kubectl) – v1.25+

  • Helm – v3.10+

  • cURL – 7.x+ (usually pre-installed on Linux/macOS)

  • Basic shell utilities – grep, sed, awk

Additionally, you need these access and permissions:

  • Namespace administration – Can create, modify, and delete resources in target namespaces.

  • Wallarm API token – You can reuse the existing API token with the Node deployment/Deployment usage type from your current NGINX Ingress Controller deployment.

  • DNS management access (required for strategies B and D) – Ability to create/update A/CNAME records.

  • Load balancer access (required for strategy A) – Access to your external load balancer configuration.

  • Monitoring/metrics access – Ability to view logs and metrics.

Step 0. Collect current Ingress deployment details and validate environment

Before starting the migration, gather the following information from your existing Ingress Controller deployment and complete basic environment validations.

  1. Gather deployment information and save it - you will need it throughout the migration:

    # 1. Identify the namespace of the current Ingress Controller
    kubectl get pods --all-namespaces | grep ingress
    # Note the namespace name (usually 'ingress-nginx')
    
    # 2. Record the current LoadBalancer external IP
    kubectl get svc -n <ingress-namespace> -o wide
    # Note the value in the EXTERNAL-IP column
    
    # 3. List all domains and hostnames handled by Ingress
    kubectl get ingress --all-namespaces -o jsonpath='{.items[*].spec.rules[*].host}' | tr ' ' '\n' | sort -u
    # Save this list
    
    # 4. Identify the Wallarm API endpoint in use
    kubectl get configmap -n <ingress-namespace> -o yaml | grep -i wallarm
    # Typical values: us1.api.wallarm.com (US Cloud) or api.wallarm.com (EU Cloud)
    
    # 5. Determine the current Helm release name
    helm list -A | grep ingress
    # Note the release name (usually 'ingress-nginx')
    
  2. Perform pre-flight validations.

    Complete these checks to verify the cluster and environment are ready for migration:

    • Back up all Ingress resources:
    kubectl get ingress --all-namespaces -o yaml > backup-ingresses-$(date +%Y%m%d).yaml
    echo "Backup saved to: backup-ingresses-$(date +%Y%m%d).yaml"
    
    • Export current Helm configuration:
    helm list -A | grep ingress  # Find your release name
    helm get values <release-name> -n <namespace> > backup-helm-values-$(date +%Y%m%d).yaml
    
    • Document current load balancer IP (critical for rollback):
    kubectl get svc -n <ingress-namespace> -o jsonpath='{.items[?(@.spec.type=="LoadBalancer")].status.loadBalancer.ingress[0].ip}'
    
    • Verify cluster resources:
    kubectl top nodes
    # Check: CPU < 70%, Memory < 80% on all nodes
    
    • Reduce DNS TTL (critical for strategies B and D) - Lower TTL 24-48 hours before migration:
    # Check current TTL
    dig your-domain.com | grep -A 1 "ANSWER SECTION"
    # Look for the number before IN A (that's your TTL in seconds)
    
    # Recommended: Set TTL to 300 seconds (5 minutes) before migration
    # This allows faster DNS propagation during the migration
    # Update in your DNS provider (Route53, Cloudflare, etc.)
    
    # After migration is stable (48h+), you can increase TTL back to normal (3600 or higher)
    
  3. Identify a maintenance window:

    • Production: Prefer low-traffic period (e.g., weekends or off-hours)
    • Development and staging environments: Flexible, anytime is acceptable
  4. Notify stakeholders about the following:

    • Migration schedule
    • Expected duration
    • Potential risks
    • Rollback plan
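The DNS TTL check from the pre-flight validations above can be scripted. The sketch below uses a hypothetical `extract_ttl` helper that parses the answer section of `dig` output; the sample answer line is illustrative, so in practice you would pipe a real `dig your-domain.com` into it:

```shell
#!/bin/sh
# Hypothetical helper: pull the TTL of the first A record out of dig output.
# dig answer lines look like: "example.com.  3600  IN  A  203.0.113.10"
extract_ttl() {
  awk '$3 == "IN" && $4 == "A" { print $2; exit }'
}

# Illustrative sample; replace with: dig your-domain.com | extract_ttl
sample_answer='example.com.   3600   IN   A   203.0.113.10'
ttl=$(printf '%s\n' "$sample_answer" | extract_ttl)
echo "Current TTL: ${ttl}s"
if [ "$ttl" -gt 300 ]; then
  echo "Consider lowering the TTL to 300 before migration"
fi
```

After the migration has been stable for 48+ hours, the same check helps confirm the TTL was raised back to its normal value.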

Step 1: Review the new Ingress Controller documentation

  1. Read the comparison guide to understand the differences between the previous and new Ingress Controller implementations.

  2. Read the new Ingress Controller deployment guide and the configuration parameters.

    Key configuration areas include:

    • Wallarm API credentials (config.wallarm.api.host, config.wallarm.api.token)
    • API Firewall configuration (optional)
    • Resource limits and scaling
    • Metrics and monitoring endpoints

Step 2: Deploy the new Controller

  1. Deploy the new controller.

    Deploy the new Ingress Controller in your cluster using the provided values.yaml file:

    # Add the Wallarm Helm repository
    helm repo add wallarm https://charts.wallarm.com/
    helm repo update
    
    # Deploy the new Ingress Controller
    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0-rc1 \
      -n wallarm-ingress-new \
      --create-namespace \
      -f values.yaml
    

    IngressClass name

    Use a different IngressClass name (e.g., nginx-new) to run the new controller alongside the old one during migration.

    Example of the values.yaml file with the minimum configuration is below. See more configuration parameters.

    # US Cloud example:
    config:
      wallarm:
        enabled: true
        api:
          host: "us1.api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup
    
    # EU Cloud example:
    config:
      wallarm:
        enabled: true
        api:
          host: "api.wallarm.com"
          token: "<NODE_TOKEN>"
          # nodeGroup: defaultIngressGroup
    

    <NODE_TOKEN> is the API token generated for Wallarm Node deployment.

  2. Verify the Ingress Controller deployment in Kubernetes:

    # Check controller pods
    kubectl get pods -n wallarm-ingress-new
    
    # Check Wallarm WCLI logs for cloud connectivity and errors
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-controller \
      -c wallarm-wcli --tail=50 | grep -i "sync\|connect\|error"
    
    # Check Postanalytics logs
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-postanalytics --tail=50
    
  3. Verify the new Ingress Controller in the Wallarm Console.

    To do so, go to Wallarm Console → Settings → Nodes and check if the new Ingress Controller node appears. It should show up within 2–3 minutes of deployment.

Step 3. Prepare your Ingress resources

Collect all Ingress resources that use the old Ingress Controller:

  • Simple method (no jq required):

    # List all Ingress resources across all namespaces
    kubectl get ingress --all-namespaces
    
    # Export all Ingress resources to a backup file (for reference and rollback purposes)
    kubectl get ingress --all-namespaces -o yaml > old-ingresses-backup.yaml
    echo "Backup saved to: old-ingresses-backup.yaml"
    
    # Count the total number of Ingress resources
    kubectl get ingress --all-namespaces --no-headers | wc -l
    
  • Advanced method (requires jq – a JSON processor):

    # Filter only Ingress resources using the old controller
    # Breakdown:
    #   kubectl get ingress --all-namespaces -o json  → Get all Ingress as JSON
    #   jq '.items[]'                                  → Loop through each Ingress
    #   select(...)                                    → Filter by IngressClass
    kubectl get ingress --all-namespaces \
      -o json | jq '.items[] | select(
        .metadata.annotations["kubernetes.io/ingress.class"] == "nginx" or
        .spec.ingressClassName == "nginx" or
        (.metadata.annotations["kubernetes.io/ingress.class"] // "" | length == 0)
      )' > old-ingresses.json
    

Default Ingress Controller

Ingress resources without an explicit IngressClass may be using the default ingress controller. Verify which controller is set as default: kubectl get ingressclass -o yaml.

Use the exported files (e.g. old-ingresses-backup.yaml) as the source for creating working copies to convert in Step 4. Do not modify the live Ingress resources in the cluster until you apply validated changes in Part 2.
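Before converting copies, it can help to summarize which IngressClass each exported resource uses. A minimal sketch follows; the `count_classes` helper and the inline sample file are assumptions, with the sample standing in for old-ingresses-backup.yaml:

```shell
#!/bin/sh
# Hypothetical helper: count IngressClass usage in an exported YAML backup.
# Matches both spec.ingressClassName and the legacy annotation.
count_classes() {
  grep -E 'ingressClassName:|kubernetes.io/ingress.class:' "$1" |
    awk -F': ' '{ gsub(/"/, "", $2); print $2 }' | sort | uniq -c
}

# Illustrative sample standing in for old-ingresses-backup.yaml
cat > sample-backup.yaml <<'EOF'
    ingressClassName: nginx
    ingressClassName: nginx
      kubernetes.io/ingress.class: nginx
    ingressClassName: other
EOF
count_classes sample-backup.yaml
rm -f sample-backup.yaml
```

The per-class counts make it obvious how many resources Step 4 must convert, and whether any unexpected classes are present.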

Step 4. Convert annotations

Work with copies, not production

Do not modify existing production Ingress resources directly. Changing annotations or the IngressClass can affect live traffic. Instead, work with copies of your Ingress manifests: export or copy them, apply the conversions below to the copies, and test in a separate namespace (Step 5). Only in Part 2 will you apply the validated changes to your actual production Ingress resources.

Update your copied Ingress manifests to ensure compatibility with the new Ingress Controller, as follows.

  1. Change the IngressClass to match the new controller:

    # Old
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx
    
    # New
    metadata:
      annotations:
        kubernetes.io/ingress.class: nginx-new
    
  2. Update controller-specific annotations by replacing the old NGINX annotation prefix with the new one:

    # Old
    annotations:
      nginx.ingress.kubernetes.io/rewrite-target: /$2
      nginx.ingress.kubernetes.io/ssl-redirect: "true"
    
    # New
    annotations:
      nginx.org/rewrites: "serviceName=myservice rewrite=/$2"
      nginx.org/redirect-to-https: "true"
    
  3. Wallarm-specific annotations (e.g., wallarm-mode, wallarm-application) keep the same names and behavior with the new controller. Only the prefix changes from nginx.ingress.kubernetes.io/ to nginx.org/.

    annotations:
      nginx.org/wallarm-mode: "block"
      nginx.org/wallarm-application: "1"
      nginx.org/wallarm-parse-response: "on"
    
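The simple prefix swaps above can be applied to copied manifests mechanically. The sketch below is an assumption-laden example: the sed patterns only cover the one-to-one renames shown above (value-restructuring annotations such as rewrite-target to nginx.org/rewrites still need manual editing), and the inline manifest is illustrative:

```shell
#!/bin/sh
# Sketch: convert a *copy* of an Ingress manifest, never the live resource.
# Only simple one-to-one renames are handled; review the result by hand.
convert_manifest() {
  sed \
    -e 's|kubernetes.io/ingress.class: nginx$|kubernetes.io/ingress.class: nginx-new|' \
    -e 's|nginx.ingress.kubernetes.io/ssl-redirect:|nginx.org/redirect-to-https:|' \
    -e 's|nginx.ingress.kubernetes.io/wallarm-|nginx.org/wallarm-|' \
    "$1"
}

# Illustrative copy of a manifest
cat > copy-of-ingress.yaml <<'EOF'
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/wallarm-mode: "block"
EOF
converted=$(convert_manifest copy-of-ingress.yaml)
printf '%s\n' "$converted"
rm -f copy-of-ingress.yaml
```

Save the converted output to a new file (e.g. test-ingress-new.yaml) and validate it in the test namespace in Step 5 before touching production.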

Step 5. Test converted Ingress resources

Before migrating production traffic, verify your converted Ingress resources in a separate test namespace. Production Ingress resources remain unchanged at this stage. You are only validating the converted copies.

Example values

The domain names (e.g. test.example.com), resource names (e.g. test-ingress), and command outputs shown in Step 5 are for illustration only. Use the host defined in your own test Ingress manifest (the value under spec.rules[].host) in the verification commands, and expect your actual resource names and outputs to differ.

  1. Apply the manifest file containing the converted Ingress resource(s) from Step 4 — the test version with the new IngressClass and nginx.org/* annotations. You can use any filename (e.g., test-ingress-new.yaml). The name is for your reference only.

    kubectl apply -f test-ingress-new.yaml -n test-namespace  
    

    If this step was successful, you will see output similar to the following:

    ingress.networking.k8s.io/test-ingress created
    

    or configured if you are updating an existing Ingress. Your actual resource name and namespace will match your manifest.

  2. Check the NGINX configuration generated by the new controller. Use the host from your test Ingress in the grep pattern (the example below uses a placeholder host test.example.com):

    kubectl exec -n wallarm-ingress-new deployment/wallarm-ingress-controller \
      -- nginx -T | grep -A 20 "server_name test.example.com"
    

    The presence of your host in the server_name directive in the output confirms the domain is configured correctly.

  3. Get the new load balancer IP:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress-new wallarm-ingress-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
  4. Test HTTP or HTTPS connectivity. Use the host from your test Ingress in the Host header (example below uses a placeholder):

    # HTTP
    curl -H "Host: test.example.com" http://$NEW_LB_IP/
    # HTTPS
    curl -H "Host: test.example.com" https://$NEW_LB_IP/
    

    Expected outcome:

    • The HTTP response status is 200 OK.
    • Your application responds correctly, e.g.:
      • The homepage HTML is returned as expected.
      • API endpoints return the expected JSON or other responses.
  5. Test Wallarm protection.

    Simulate a malicious request using the host from your test Ingress in the Host header (example uses a placeholder):

    curl -H "Host: test.example.com" "http://$NEW_LB_IP/?id=1' OR '1'='1"
    

    Wait 2–3 minutes and verify in Wallarm Console → Events → Attacks that the request is detected/blocked.

Congratulations! You have completed the strategy-independent portion of the migration. So far you have worked only with copies of Ingress manifests and tested them in a test namespace. Production Ingress resources have not been changed yet.

Migration - part 2 (strategy dependent)

Purpose: Shift traffic from the old controller to the new one and apply the validated Ingress changes to your production Ingress resources. You choose one of four strategies (load balancer, DNS switch, selector swap, or direct replacement) depending on your environment.

Strategy A - load balancer (traffic splitting)

This method uses an external load balancer (F5, HAProxy, cloud ALB, etc.) to gradually shift traffic from the old controller to the new one.

Migration steps:

  1. Apply validated Ingress changes to production. Update your production Ingress resources with the new IngressClass and nginx.org/* annotations (as validated in Part 1, Steps 4–5). This ensures the new controller will serve those Ingresses when traffic is shifted to it. Use the same converted manifests you tested; apply them to the actual production namespaces (e.g. with kubectl apply -f <validated-manifests>.yaml per namespace, or your preferred rollout method).

  2. Configure your external load balancer to split traffic:

    # Example: NGINX upstream configuration
    upstream wallarm_ingress {
        server <old-lb-ip>:443 weight=100;  # Start: 100% to old
        server <new-lb-ip>:443 weight=0;    # Start: 0% to new
    }
    
  3. Gradually adjust traffic weights:

    Phase 1: 90/10 (old/new) → Monitor for 2-4 hours
    Phase 2: 75/25           → Monitor for 2-4 hours
    Phase 3: 50/50           → Monitor for 4-8 hours
    Phase 4: 25/75           → Monitor for 2-4 hours
    Phase 5: 0/100           → Complete migration
    
  4. Monitor during each phase:

    # Watch resource usage on both controllers
    kubectl top pods -n ingress-nginx
    kubectl top pods -n wallarm-ingress-new
    
    # Check for HTTP errors
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-controller \
      | grep -E "HTTP/[0-9.]+ (4|5)[0-9]{2}"
    
  5. Complete migration by removing the old controller.

    Once 100% of traffic is routed to the new controller and all metrics are stable, you can safely remove the old controller.
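The weight schedule in step 3 can be templated so each phase change is a single regenerate-and-reload of the upstream block. A minimal sketch, where the `render_upstream` helper, the IPs, and the weights are illustrative:

```shell
#!/bin/sh
# Sketch: render the NGINX upstream block for one phase of the traffic split.
render_upstream() {
  # $1 = old weight, $2 = new weight, $3 = old LB IP, $4 = new LB IP
  cat <<EOF
upstream wallarm_ingress {
    server $3:443 weight=$1;
    server $4:443 weight=$2;
}
EOF
}

# Phase 1 of the rollout: 90% to the old controller, 10% to the new one
render_upstream 90 10 198.51.100.1 203.0.113.10
```

Each monitoring phase then becomes: render the block with the next weights, drop it into the load balancer configuration, and reload.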

Strategy B - DNS switch

This method deploys the new controller with a new load balancer IP and updates DNS to point to it.

Migration steps:

  1. Get the new load balancer IP:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress-new wallarm-ingress-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
  2. Apply validated Ingress changes to production.

    Update your production Ingress resources so they use the new controller: apply the IngressClass and nginx.org/* annotation changes you validated in Part 1. For example, to update all Ingress resources in a target namespace to the new IngressClass:

    # Update all Ingress resources in the target namespace
    kubectl get ingress -n <namespace> -o yaml | \
      sed 's/kubernetes.io\/ingress.class: nginx/kubernetes.io\/ingress.class: nginx-new/' | \
      kubectl apply -f -
    

    Ensure all annotation prefixes and values match your validated manifests (Step 4). Repeat for each production namespace, or apply your validated manifest files with kubectl apply -f <validated-manifests>.yaml.

  3. Test the new setup:

    # Test HTTP connectivity directly against the new IP
    curl -H "Host: your-domain.com" http://$NEW_LB_IP/
    
    # Test with attack simulation
    curl -H "Host: your-domain.com" "http://$NEW_LB_IP/test?id=1' OR '1'='1"
    
  4. Verify in Wallarm Console → Events → Attacks that the attack was detected successfully.

  5. Update DNS records to point to the new IP:

    # Example using AWS Route53:
    aws route53 change-resource-record-sets \
      --hosted-zone-id <zone-id> \
      --change-batch "{
        \"Changes\": [{
          \"Action\": \"UPSERT\",
          \"ResourceRecordSet\": {
            \"Name\": \"your-domain.com\",
            \"Type\": \"A\",
            \"TTL\": 300,
            \"ResourceRecords\": [{\"Value\": \"$NEW_LB_IP\"}]
          }
        }]
      }"
    
  6. Monitor DNS propagation and traffic:

    # Check DNS resolution
    dig +short your-domain.com
    nslookup your-domain.com
    
    # Monitor logs and traffic on the new controller
    kubectl logs -n wallarm-ingress-new deployment/wallarm-ingress-controller -f
    
  7. Wait for DNS TTL to expire while monitoring the old controller for declining traffic.

  8. After 24–48 hours, remove the old Ingress Controller once all traffic has migrated.
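The propagation wait in steps 6–7 can be automated with a polling loop. The sketch below makes the resolver command injectable so the loop can be exercised without real DNS; in production you would omit the third argument so it falls back to `dig +short` (the helper names are assumptions):

```shell
#!/bin/sh
# Sketch: poll DNS until the domain resolves to the new LoadBalancer IP.
wait_for_dns() {
  domain=$1; expected=$2; resolver=${3:-dig_short}; attempts=${4:-30}
  i=1
  while [ "$i" -le "$attempts" ]; do
    if [ "$($resolver "$domain")" = "$expected" ]; then
      echo "OK: $domain resolves to $expected"
      return 0
    fi
    i=$((i + 1))
    sleep 10
  done
  echo "TIMEOUT: $domain did not resolve to $expected"
  return 1
}
dig_short() { dig +short "$1"; }

# Example with a stub resolver (use a real call after the DNS update):
stub_resolver() { echo 203.0.113.10; }
wait_for_dns your-domain.com 203.0.113.10 stub_resolver 3
```

A real invocation would look like `wait_for_dns your-domain.com "$NEW_LB_IP"`, run after step 5's record update.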

Strategy C - Selector Swap

This method preserves the existing load balancer IP by switching the Kubernetes service selector from the old controller pods to the new ones.

Recommended timing

Perform the migration during a low-traffic window (e.g., Saturday morning).

Migration steps:

  1. Get the current load balancer IP:

    OLD_LB_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$OLD_LB_IP" ]; then
      echo "ERROR: Could not retrieve current LoadBalancer IP"
      exit 1
    fi
    
    echo "Current LoadBalancer IP (must preserve): $OLD_LB_IP"
    
  2. Update your values file (e.g., values-same-namespace.yaml) with the following configuration:

    controller:
      podLabels:
        app.kubernetes.io/name: nginx-ingress-new
        app.kubernetes.io/instance: wallarm-new
        app.kubernetes.io/component: controller-new
    
      service:
        create: false  # Critical: Do not create a new LoadBalancer Service
    
      ingressClass: nginx-new
    
    config:
      wallarm:
        api:
          host: "api.wallarm.com"  # or us1.api.wallarm.com
          token: "YOUR_TOKEN"
    
  3. Deploy the new Ingress Controller in the same namespace as the old controller using the updated values file:

    helm install wallarm-ingress-new wallarm/wallarm-ingress \
      --version 7.0.0-rc1 \
      -n ingress-nginx \
      -f values-same-namespace.yaml
    
  4. Verify no new LoadBalancer service was created:

    kubectl get svc -n ingress-nginx
    # Only the OLD LoadBalancer Service should be visible
    
  5. Verify new pods are running:

    kubectl get pods -n ingress-nginx -l app.kubernetes.io/instance=wallarm-new
    
  6. Test new controller pods directly:

    NEW_POD=$(kubectl get pod -n ingress-nginx \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    
    kubectl port-forward -n ingress-nginx $NEW_POD 8080:80 &
    PF_PID=$!
    
    sleep 2  # Wait for port-forward to establish
    curl -H "Host: your-domain.com" http://localhost:8080/
    
    # Stop port-forward
    kill $PF_PID
    
  7. Apply validated Ingress changes to production.

    Update your production Ingress resources to use the new controller (IngressClass and nginx.org/* annotations as validated in Part 1). This ensures the new controller will serve them once the service selector is switched. For each production Ingress (or in bulk per namespace):

    kubectl patch ingress <ingress-name> -n <namespace> \
      -p '{"metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx-new"}}}'
    

    Apply the full set of annotation changes from your validated manifests if you use more than IngressClass (e.g. nginx.org/wallarm-mode, nginx.org/rewrites). Repeat for all production Ingress resources that will be served by the new controller.

    Important

    The next steps switch traffic from the old controller to the new one. Verify all previous steps before proceeding.

  8. Check the labels of the new controller pods. You will need them later to update the service selector:

    kubectl get pods -n ingress-nginx \
      -l app.kubernetes.io/instance=wallarm-new \
      --show-labels
    

    Example output:

    # NAME                           LABELS
    # wallarm-new-abc123             app.kubernetes.io/name=nginx-ingress-new,app.kubernetes.io/instance=wallarm-new,...
    
  9. Update the LoadBalancer service to point to the new pods:

    kubectl patch svc ingress-nginx-controller -n ingress-nginx -p '{
      "spec": {
        "selector": {
          "app.kubernetes.io/name": "nginx-ingress-new",
          "app.kubernetes.io/instance": "wallarm-new",
          "app.kubernetes.io/component": "controller-new"
        }
      }
    }'
    

    Where:

    • kubectl patch - Updates an existing resource without replacing it entirely.
    • svc ingress-nginx-controller - The service name (your existing load balancer).
    • -n ingress-nginx - The namespace where the service exists.
    • -p '{ "spec": { "selector": {...} } }' - JSON patch to update the selector.
    • The selector labels MUST match your new controller pod labels exactly.

    Expected outcome:

    service/ingress-nginx-controller patched
    

    This confirms that the service selector was updated successfully and traffic will now be routed to the new controller pods.

  10. Verify IP preservation:

    NEW_LB_IP=$(kubectl get svc -n ingress-nginx ingress-nginx-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    echo "LoadBalancer IP after switch: $NEW_LB_IP"
    
    if [ "$OLD_LB_IP" == "$NEW_LB_IP" ]; then
      echo "SUCCESS: IP address preserved - $NEW_LB_IP"
    else
      echo "CRITICAL: IP address changed from $OLD_LB_IP to $NEW_LB_IP - INITIATE ROLLBACK IMMEDIATELY"
      exit 1
    fi
    
  11. Monitor traffic on the new controller:

    # Watch logs on the new controller
    kubectl logs -n ingress-nginx -l app.kubernetes.io/instance=wallarm-new -f
    
    # Check metrics (find the correct deployment name first)
    DEPLOYMENT_NAME=$(kubectl get deployment -n ingress-nginx \
      -l app.kubernetes.io/instance=wallarm-new \
      -o jsonpath='{.items[0].metadata.name}')
    
    kubectl exec -n ingress-nginx \
      deployment/$DEPLOYMENT_NAME \
      -c wallarm-wcli -- wcli metric
    
  12. After validation period (24–48 hours), scale down the old controller:

    # Scale to zero (keeps configuration for potential rollback)
    kubectl scale deployment -n ingress-nginx \
      ingress-nginx-controller --replicas=0
    
  13. After an additional 24 hours of stable traffic, delete the old controller:

    helm uninstall ingress-nginx -n ingress-nginx
    

    Recommended cleanup

    After the migration has been stable for 30+ days, schedule a maintenance window to properly clean up the setup (see the steps below). This ensures the service is owned and managed by the new Helm chart. It also simplifies future upgrades and prevents operational confusion.

  14. Create a new LoadBalancer Service managed by the new Helm chart:

    The command below reuses the IP address captured in step 1 ($OLD_LB_IP).

    helm upgrade wallarm-ingress-new wallarm/wallarm-ingress \
      -n ingress-nginx \
      --set controller.service.create=true \
      --set controller.service.loadBalancerIP="$OLD_LB_IP" \
      -f values.yaml
    
  15. Wait for the new load balancer to be assigned the same IP (cloud provider dependent; typically takes 1–5 minutes).

    # Monitor the service status
    kubectl get svc -n ingress-nginx -w
    
  16. Verify the new service has the same IP, then delete the old service:

    # Verify new service has the correct IP
    NEW_SERVICE_IP=$(kubectl get svc -n ingress-nginx wallarm-ingress-new-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ "$NEW_SERVICE_IP" == "$OLD_LB_IP" ]; then
      echo "SUCCESS: New service has correct IP - safe to delete old service"
      kubectl delete svc ingress-nginx-controller -n ingress-nginx
    else
      echo "WARNING: IP mismatch. Review before deleting old service."
    fi
    
  17. Verify the configuration:

    kubectl get svc -n ingress-nginx
    
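A single mismatched label in the step 9 selector patch silently empties the Service's endpoint list, so it is worth confirming that every selector pair exists on the new pods before and after the switch. A sketch with a hypothetical `selector_matches_labels` helper operating on `key=value,...` strings (fill them from `kubectl get svc ... -o jsonpath='{.spec.selector}'` and `kubectl get pods ... --show-labels`; label values containing spaces are not handled):

```shell
#!/bin/sh
# Sketch: check that every Service selector pair is present in a pod's labels.
selector_matches_labels() {
  selector=$1 labels=$2
  for pair in $(printf '%s' "$selector" | tr ',' ' '); do
    case ",$labels," in
      *",$pair,"*) ;;
      *) echo "MISSING: $pair"; return 1 ;;
    esac
  done
  echo "MATCH: all selector pairs present"
}

# Illustrative values matching the selector swap above
selector='app.kubernetes.io/name=nginx-ingress-new,app.kubernetes.io/instance=wallarm-new'
pod_labels='app.kubernetes.io/name=nginx-ingress-new,app.kubernetes.io/instance=wallarm-new,pod-template-hash=abc123'
selector_matches_labels "$selector" "$pod_labels"
```

If the helper reports a missing pair, fix the selector patch before relying on the switched traffic.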

Strategy D - direct replacement

This method removes the old controller and deploys the new one in its place.

Migration steps:

Recommendation

For major cloud providers, we strongly recommend reserving the load balancer IP before migration to prevent IP changes. See steps 1 and 2 in the procedure below.

  1. Allocate or reserve a static public IP, depending on your cloud provider:

    # AWS (EKS): allocate an Elastic IP
    EIP_ALLOC=$(aws ec2 allocate-address --domain vpc --query 'AllocationId' --output text)
    EIP=$(aws ec2 describe-addresses --allocation-ids $EIP_ALLOC --query 'Addresses[0].PublicIp' --output text)
    echo "Reserved EIP: $EIP (Allocation: $EIP_ALLOC)"
    
    # GCP (GKE): reserve a regional static IP
    gcloud compute addresses create wallarm-ingress-ip \
      --region <your-region>
    
    STATIC_IP=$(gcloud compute addresses describe wallarm-ingress-ip \
      --region <your-region> --format="value(address)")
    echo "Reserved IP: $STATIC_IP"
    
    # Azure (AKS): create a static public IP
    az network public-ip create \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --sku Standard \
      --allocation-method Static
    
    STATIC_IP=$(az network public-ip show \
      --resource-group <resource-group> \
      --name wallarm-ingress-ip \
      --query ipAddress \
      --output tsv)
    echo "Reserved IP: $STATIC_IP"
    
    
  2. Deploy the new controller with the reserved IP:

    # AWS (EKS): in your values.yaml
    controller:
      service:
        annotations:
          service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
          service.beta.kubernetes.io/aws-load-balancer-eip-allocations: "<EIP_ALLOC>"
    
    # GCP (GKE): in your values.yaml
    controller:
      service:
        loadBalancerIP: "<STATIC_IP>"
        annotations:
          cloud.google.com/load-balancer-type: "External"    
    
    # Azure (AKS): in your values.yaml
    controller:
      service:
        loadBalancerIP: "<STATIC_IP>"
        annotations:
          service.beta.kubernetes.io/azure-load-balancer-resource-group: "<resource-group>"    
    
  3. Back up the current configuration:

    # Export all Ingress resources
    kubectl get ingress --all-namespaces -o yaml > ingress-backup.yaml
    
    # Export current controller Helm values
    helm get values ingress-nginx -n ingress-nginx > old-values-backup.yaml
    
  4. Prepare converted Ingress resources:

    • Update all IngressClass annotations as described in Step 4.
    • Update annotation prefixes.
    • Save the updated resources to new-ingresses.yaml.
  5. Schedule a maintenance window and notify users about the expected downtime.

  6. Delete the old controller:

    # Downtime begins here
    helm uninstall ingress-nginx -n ingress-nginx
    
    # Verify resources are removed
    kubectl get pods -n ingress-nginx
    
  7. Install the new controller immediately:

    helm install wallarm-ingress wallarm/wallarm-ingress \
      --version 7.0.0-rc1 \
      -n wallarm-ingress \
      --create-namespace \
      -f new-values.yaml
    
    # Wait for pods to become ready
    kubectl wait --for=condition=ready pod \
      -l app.kubernetes.io/name=nginx-ingress \
      -n wallarm-ingress \
      --timeout=300s
    
  8. Apply validated Ingress changes to production. Apply the converted Ingress resources (the validated manifests from Part 1, saved as new-ingresses.yaml) to your production namespaces. This is the step where production Ingress resources are updated; downtime is already in effect and the new controller is running.

    kubectl apply -f new-ingresses.yaml
    

    If your Ingress resources span multiple namespaces, apply the corresponding manifest to each namespace (or use kubectl apply -f new-ingresses.yaml -n <namespace> as needed).

  9. Verify that services are accessible:

    NEW_LB_IP=$(kubectl get svc -n wallarm-ingress wallarm-ingress-controller \
      -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
    
    if [ -z "$NEW_LB_IP" ]; then
      echo "ERROR: LoadBalancer IP not assigned yet. Wait and retry."
      exit 1
    fi
    
    echo "New LoadBalancer IP: $NEW_LB_IP"
    
    # Test each domain
    curl -H "Host: app1.example.com" http://$NEW_LB_IP/
    curl -H "Host: app2.example.com" http://$NEW_LB_IP/
    
  10. Update DNS records to point to the new load balancer IP (if not pre-reserved).

  11. Test attack detection:

    # Test attack detection
    curl -H "Host: app1.example.com" "http://$NEW_LB_IP/?id=1' OR '1'='1"
    
  12. Verify in Wallarm Console that attacks are detected.
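To close out the migration, every domain collected in Step 0 can be smoke-tested against the new LoadBalancer in one pass. The sketch below makes the HTTP checker injectable so the loop can be dry-run; the `http_code` wrapper and the host names are illustrative:

```shell
#!/bin/sh
# Sketch: smoke-test each migrated host against the new LoadBalancer IP.
http_code() {
  # $1 = Host header, $2 = LB IP; prints the HTTP status code
  curl -s -o /dev/null -w '%{http_code}' -H "Host: $1" "http://$2/"
}

smoke_test() {
  checker=$1; ip=$2; shift 2; fail=0
  for host in "$@"; do
    code=$($checker "$host" "$ip")
    if [ "$code" = "200" ]; then
      echo "PASS $host"
    else
      echo "FAIL $host (HTTP $code)"
      fail=1
    fi
  done
  return $fail
}

# Dry run with a stub; for the real check use: smoke_test http_code "$NEW_LB_IP" <hosts...>
stub_ok() { echo 200; }
smoke_test stub_ok 203.0.113.10 app1.example.com app2.example.com
```

Any FAIL line points at a host whose Ingress was not converted or applied correctly and should be investigated before the maintenance window closes.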