How to Migrate AWS ECS Services to Vultr Kubernetes Engine

Updated on 05 February, 2026
Migrate applications from AWS ECS to Vultr Kubernetes Engine with minimal downtime and reduced vendor lock-in.

Amazon Elastic Container Service (ECS) is a proprietary container orchestration platform that manages Docker containers using AWS-specific task definitions, services, and integrations. While ECS provides deep integration with AWS services, it creates vendor lock-in and limits portability. Kubernetes offers a standardized, cloud-agnostic approach to container orchestration with broader ecosystem support and platform independence.

This guide outlines the migration of containerized applications from AWS ECS to Vultr Kubernetes Engine (VKE). It covers the analysis of existing ECS task definitions and services, conversion to Kubernetes manifests, replication of networking and service discovery patterns, migration of secrets and configuration, and deployment strategies that minimize downtime.

Prerequisites

Before you begin, you need to:

  • Deploy a Vultr Kubernetes Engine (VKE) cluster.
  • Install and configure kubectl to access your VKE cluster.
  • Install and configure the AWS CLI with credentials that can read your ECS resources.
  • Install jq to parse JSON output in the analysis steps.

Analyze Your ECS Infrastructure

Before migrating, document your existing ECS configuration to understand the components that require conversion. This analysis identifies task definitions, services, load balancers, service discovery configurations, and dependencies.

  1. List all ECS clusters in your AWS account.

    console
    $ aws ecs list-clusters
    

    The output displays cluster ARNs for all ECS clusters in your account.

  2. List services running in your target cluster. Replace CLUSTER-NAME with your actual cluster name.

    console
    $ aws ecs list-services --cluster CLUSTER-NAME
    
  3. Describe each service to understand its configuration. Replace SERVICE-NAME and CLUSTER-NAME with your actual values.

    console
    $ aws ecs describe-services --cluster CLUSTER-NAME --services SERVICE-NAME > service-config.json
    

    This command exports the service configuration to a JSON file for reference during migration.

  4. Extract the task definition name from the service configuration.

    console
    $ jq -r '.services[0].taskDefinition' service-config.json
    

    The output displays the task definition ARN. Note the task definition family name and revision number for the next step.

  5. Retrieve the task definition details. Replace TASK-DEFINITION-ARN with the ARN from the previous step.

    console
    $ aws ecs describe-task-definition --task-definition TASK-DEFINITION-ARN > task-definition.json
    
  6. Extract key configuration details from the task definition.

    console
    $ jq '.taskDefinition | {family, cpu, memory, networkMode, containerDefinitions: [.containerDefinitions[] | {name, image, cpu, memory, portMappings, environment, secrets, mountPoints}]}' task-definition.json
    

    The output displays container images, resource allocations, port mappings, environment variables, secrets, and volume mounts. Save this information for converting to Kubernetes manifests.

  7. Document the service dependencies by examining the service configuration file.

    console
    $ jq '.services[0] | {loadBalancers, serviceRegistries, networkConfiguration}' service-config.json
    

    The output shows load balancer attachments, service discovery registrations, and networking settings. Note these configurations for replication in Kubernetes.

    Note
    ECS awsvpc mode maps each task to an ENI. Kubernetes uses a flat pod network model. If your ECS services relied on Security Groups for east-west isolation, replicate this behavior using Kubernetes NetworkPolicies.
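    As a sketch of that replication, the following NetworkPolicy allows ingress to pods labeled app: my-app only from pods labeled app: my-frontend on TCP port 80, similar to a Security Group rule that permits traffic from one service to another. The label names are illustrative, and NetworkPolicies are only enforced when the cluster's CNI plugin supports them.

    ```yaml
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-frontend-to-my-app
    spec:
      # Select the pods this policy protects
      podSelector:
        matchLabels:
          app: my-app
      policyTypes:
        - Ingress
      ingress:
        # Only pods with this label may connect, and only on port 80
        - from:
            - podSelector:
                matchLabels:
                  app: my-frontend
          ports:
            - protocol: TCP
              port: 80
    ```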

Create a Kubernetes Namespace

Create a dedicated namespace for your application to isolate resources and simplify management. This provides logical separation from other applications and enables namespace-level resource quotas and access controls.

  1. Create a namespace for your application.

    console
    $ kubectl create namespace my-app
    

    Replace my-app with a descriptive name for your application environment.

  2. Verify that the namespace is created.

    console
    $ kubectl get namespace my-app
    
  3. Set the namespace as the default context to avoid specifying -n my-app with every command.

    console
    $ kubectl config set-context --current --namespace=my-app
    

All subsequent kubectl commands in this guide will deploy resources to this namespace. To deploy to a different namespace, either change the context or add the -n namespace-name flag to each command.

Convert ECS Task Definitions to Kubernetes Deployments

ECS task definitions define container images, resource limits, environment variables, and volume mounts. Kubernetes Deployments serve the same purpose using a different YAML structure. Convert your task definitions to Kubernetes manifests by mapping ECS parameters to their Kubernetes equivalents.

The following table maps common ECS task definition parameters to their Kubernetes equivalents.

ECS Parameter  | Kubernetes Equivalent                     | Notes
-------------- | ----------------------------------------- | -----------------------------------------
desiredCount   | spec.replicas                             | Number of running tasks or pods
image          | spec.containers[].image                   | Container image reference
containerPort  | spec.containers[].ports[].containerPort   | Port exposed by the container
cpu (units)    | resources.requests.cpu                    | 1024 ECS units = 1 vCPU = 1000 millicores
memory (MiB)   | resources.requests.memory                 | Direct conversion to Mi or Gi
environment    | env                                       | Static environment variables
secrets        | env[].valueFrom.secretKeyRef              | References Kubernetes Secrets
healthCheck    | livenessProbe, readinessProbe             | Container health monitoring
mountPoints    | volumeMounts                              | Volume mounting configuration
volumes        | volumes                                   | Volume definitions
  1. Create a directory for Kubernetes manifests.

    console
    $ mkdir k8s-manifests
    $ cd k8s-manifests
    
  2. Create a Deployment manifest file. Replace my-app with your application name.

    console
    $ nano my-app-deployment.yaml
    
  3. Add the following Deployment configuration. Adjust values based on your ECS task definition.

    yaml
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
            - name: my-app
              image: your-registry/my-app:latest
              ports:
                - name: http
                  containerPort: 80
                  protocol: TCP
              env:
                - name: APP_ENV
                  value: "production"
                - name: LOG_LEVEL
                  value: "INFO"
              resources:
                requests:
                  cpu: "250m"
                  memory: "512Mi"
                limits:
                  cpu: "500m"
                  memory: "1Gi"
              livenessProbe:
                httpGet:
                  path: /health
                  port: http
                initialDelaySeconds: 30
                periodSeconds: 10
                timeoutSeconds: 5
                failureThreshold: 3
              readinessProbe:
                httpGet:
                  path: /ready
                  port: http
                initialDelaySeconds: 10
                periodSeconds: 5
                timeoutSeconds: 3
                failureThreshold: 3
    

    Save and close the file.

  4. Convert CPU values from ECS units to Kubernetes millicores. ECS uses 1024 units per vCPU, while Kubernetes uses 1000 millicores per CPU.

    Kubernetes CPU (millicores) = (ECS CPU units / 1024) × 1000
    
    Examples:
    * 256 ECS units = (256 / 1024) × 1000 = 250m
    * 512 ECS units = (512 / 1024) × 1000 = 500m
    * 1024 ECS units = (1024 / 1024) × 1000 = 1000m (1 vCPU)
  5. Convert memory values from MiB to Kubernetes memory format.

    ECS: 512 (MiB) → Kubernetes: 512Mi
    ECS: 1024 (MiB) → Kubernetes: 1Gi
    ECS: 2048 (MiB) → Kubernetes: 2Gi
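
    The two conversions above can be scripted with shell integer arithmetic. This is a minimal sketch; the function names are illustrative and not part of any tool.

    ```shell
    #!/bin/sh
    # Sketch: convert ECS task-definition values to Kubernetes resource strings.

    # 1024 ECS CPU units = 1 vCPU = 1000 millicores
    ecs_cpu_to_millicores() {
      echo "$(( $1 * 1000 / 1024 ))m"
    }

    # ECS memory is in MiB; use Gi when evenly divisible by 1024
    ecs_mem_to_k8s() {
      if [ $(( $1 % 1024 )) -eq 0 ]; then
        echo "$(( $1 / 1024 ))Gi"
      else
        echo "${1}Mi"
      fi
    }

    ecs_cpu_to_millicores 256    # prints 250m
    ecs_mem_to_k8s 512           # prints 512Mi
    ecs_mem_to_k8s 2048          # prints 2Gi
    ```

    Note that the CPU conversion truncates values that are not multiples of 1024, so round up manually if exact requests matter for scheduling.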

Migrate Secrets and Environment Variables

ECS stores secrets in AWS Secrets Manager or Systems Manager Parameter Store. Kubernetes uses native Secret and ConfigMap resources for sensitive and non-sensitive configuration data respectively.

  1. List secrets referenced in your ECS task definition.

    console
    $ jq '.taskDefinition.containerDefinitions[].secrets' task-definition.json
    

    The output displays secret ARNs from AWS Secrets Manager or Parameter Store.

  2. Retrieve secret values from AWS Secrets Manager. Replace SECRET-NAME with your secret identifier.

    console
    $ aws secretsmanager get-secret-value --secret-id SECRET-NAME --query SecretString --output text | jq .
    
  3. Create a Kubernetes Secret for sensitive application data.

    console
    $ kubectl create secret generic my-app-secrets \
        --from-literal=JWT_SECRET='YOUR_JWT_SECRET' \
        --from-literal=API_KEY='YOUR_API_KEY'
    

    Replace the keys and values with your actual secrets retrieved from AWS, and repeat this step for each secret you need to create.

  4. Create a ConfigMap for non-sensitive configuration.

    console
    $ nano my-app-config.yaml
    
  5. Add the ConfigMap configuration.

    yaml
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: my-app-config
    data:
      APP_ENV: "production"
      LOG_LEVEL: "INFO"
      API_ENDPOINT: "https://api.example.com"
      MAX_CONNECTIONS: "100"
    

    Save and close the file.

  6. Apply the ConfigMap to your cluster.

    console
    $ kubectl apply -f my-app-config.yaml
    
  7. Update the Deployment manifest to reference the Secret and ConfigMap.

    console
    $ nano my-app-deployment.yaml
    
  8. Replace the env section with envFrom references to the ConfigMap and Secret.

    yaml
    envFrom:
      - configMapRef:
          name: my-app-config
      - secretRef:
          name: my-app-secrets
    

    This configuration injects all key-value pairs from the referenced ConfigMap and Secret as environment variables inside the container.

    Save and close the file.
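
    If you prefer the per-variable style of ECS secrets entries over envFrom, individual keys can also be referenced with secretKeyRef, which maps one Secret key to one environment variable. The key names below match the example Secret created earlier.

    ```yaml
    env:
      # Equivalent of an ECS secrets entry: one variable, one secret key
      - name: JWT_SECRET
        valueFrom:
          secretKeyRef:
            name: my-app-secrets
            key: JWT_SECRET
      - name: API_KEY
        valueFrom:
          secretKeyRef:
            name: my-app-secrets
            key: API_KEY
    ```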

Replicate ECS Service Discovery with Kubernetes Services

ECS uses AWS Cloud Map for service discovery, allowing services to communicate using DNS names. Kubernetes provides built-in service discovery through Service resources backed by CoreDNS, which automatically create stable DNS records and virtual IPs for accessing pods.

  1. Create a Service manifest for internal communication.

    console
    $ nano my-app-service.yaml
    
  2. Add the Service configuration for the ClusterIP type, which exposes the application only inside the cluster.

    yaml
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
      labels:
        app: my-app
    spec:
      type: ClusterIP
      ports:
        - name: http
          port: 80
          targetPort: 80
          protocol: TCP
      selector:
        app: my-app
    

    In this configuration:

    • port is the Service port. Other pods use this port when connecting to the service.
    • targetPort is the container port exposed by the application inside each pod.
    • Kubernetes forwards traffic from the Service port to the container port on matching pods.
    • The selector matches pods labeled app: my-app, dynamically updating endpoints as pods scale or restart.

    Save and close the file.

  3. Apply the Service manifest.

    console
    $ kubectl apply -f my-app-service.yaml
    
  4. Verify that the Service is created and has assigned a cluster IP address.

    console
    $ kubectl get service my-app
    

    The output displays the Service details, including the CLUSTER-IP, which acts as a stable virtual IP for the application.

Note
Applications within the cluster can now access this service using the DNS name my-app.my-app.svc.cluster.local (following the pattern <service-name>.<namespace>.svc.cluster.local), or the short name my-app when accessed from pods in the same namespace. Kubernetes automatically maintains this DNS record as pods are created or destroyed.

Configure Load Balancing with Gateway API

ECS integrates with Application Load Balancers (ALB) and Network Load Balancers (NLB) for external traffic routing. VKE uses the Kubernetes Gateway API with Envoy Gateway for advanced traffic management, providing equivalent functionality to ALB with path-based and host-based routing, along with automated TLS certificate provisioning.

Before creating the Gateway resource for your application, install the required components by following these sections from the Gateway API with TLS Encryption guide:

Create the Gateway Resource

Deploy the Gateway resource to provision a Vultr Load Balancer and assign a public IP address.

  1. Create the Gateway manifest file.

    console
    $ nano app-gateway.yaml
    
  2. Add the following configuration. Replace myapp.example.com with your actual domain name.

    yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: Gateway
    metadata:
      name: app-gateway
      annotations:
        cert-manager.io/cluster-issuer: letsencrypt-prod
    spec:
      gatewayClassName: eg
      listeners:
        - name: http
          protocol: HTTP
          port: 80
          hostname: "myapp.example.com"
          allowedRoutes:
            namespaces:
              from: Same
        - name: https
          protocol: HTTPS
          port: 443
          hostname: "myapp.example.com"
          tls:
            mode: Terminate
            certificateRefs:
              - kind: Secret
                name: app-tls-secret
          allowedRoutes:
            namespaces:
              from: Same
    

    Save and close the file.

  3. Apply the Gateway manifest.

    console
    $ kubectl apply -f app-gateway.yaml
    
  4. Retrieve the Gateway external IP address.

    console
    $ kubectl get gateway app-gateway
    

    The output displays the Gateway status and assigned IP address. Note this IP address for DNS configuration.

  5. Update your domain's DNS A record to point to the assigned IP address.

Configure HTTPRoute for Application Routing

Create an HTTPRoute to define routing rules from the Gateway to your backend service.

  1. Create the HTTPRoute manifest file.

    console
    $ nano app-route.yaml
    
  2. Add the following configuration. Replace myapp.example.com with your domain name.

    yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: app-route
    spec:
      parentRefs:
        - name: app-gateway
          sectionName: https
      hostnames:
        - "myapp.example.com"
      rules:
        - matches:
            - path:
                type: PathPrefix
                value: /
          backendRefs:
            - name: my-app
              port: 80
    

    Save and close the file.

    For path-based routing to multiple services, add additional rules:

    yaml
    rules:
        - matches:
            - path:
                type: PathPrefix
                value: /api
          backendRefs:
            - name: my-api
              port: 8080
        - matches:
            - path:
                type: PathPrefix
                value: /
          backendRefs:
            - name: my-app
              port: 80
    
  3. Apply the HTTPRoute manifest.

    console
    $ kubectl apply -f app-route.yaml
    
  4. Verify the TLS certificate status.

    console
    $ kubectl get certificate
    

    The certificate may take a few minutes to provision. Wait until the READY column shows True.

Verify the Migration

After applying all Kubernetes manifests, verify that your application functions correctly on VKE.

  1. Monitor the deployment progress.

    console
    $ kubectl get pods -l app=my-app --watch
    

    The command displays real-time pod status updates. Wait until all pods show STATUS as Running and READY shows the expected ratio (e.g., 1/1). Press Ctrl + C to exit watch mode.

  2. Check the pod logs to verify the application started correctly.

    console
    $ kubectl logs -l app=my-app --tail=50
    
  3. Verify that the Service is routing traffic to the pods.

    console
    $ kubectl get endpoints my-app
    

    The output displays the pod IP addresses and ports that the Service routes traffic to. The number of endpoints should match your replica count.

  4. Test internal service connectivity from within the cluster.

    console
    $ kubectl run test-pod --image=curlimages/curl:latest --rm -it --restart=Never -- curl http://my-app
    

    This command creates a temporary pod that sends an HTTP request to your service. The output displays the HTTP response, confirming internal networking functions correctly.

  5. Verify the Gateway external IP address.

    console
    $ kubectl get gateway app-gateway
    

    Note the IP address assigned to the Gateway.

  6. Test external connectivity through the Gateway using your domain name.

    console
    $ curl https://myapp.example.com
    

    Replace myapp.example.com with your actual domain. The output displays your application's HTTP response, confirming external access works correctly with TLS encryption.

  7. If you configured a HorizontalPodAutoscaler (HPA) for the Deployment, monitor its behavior under load.

    console
    $ kubectl get hpa my-app-hpa --watch
    

    The HPA displays current CPU and memory utilization alongside current and desired replica counts. The autoscaler adjusts replicas automatically based on the metrics.
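
    This guide does not create an HPA earlier, so the step above only applies if you add one. A minimal manifest matching the my-app-hpa name, shown here as a sketch, scales the Deployment between 3 and 10 replicas based on CPU utilization. It requires a metrics source such as metrics-server in the cluster.

    ```yaml
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: my-app-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: my-app
      minReplicas: 3
      maxReplicas: 10
      metrics:
        # Scale out when average CPU utilization across pods exceeds 70%
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70
    ```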

Cut Over Traffic from ECS to VKE

After validating that the application is running correctly on Vultr Kubernetes Engine (VKE), perform a controlled cutover to move production traffic from Amazon ECS to the Kubernetes-based deployment.

  1. Ensure all pods are running, healthy, and passing readiness checks.
  2. Update your DNS records or traffic manager to point to the Vultr Load Balancer IP address serving the application on VKE.
  3. Monitor application logs, metrics, and error rates during the transition.
  4. Keep ECS services running temporarily to allow a quick rollback if required.

After traffic is fully served by VKE and the application is stable, scale ECS services to zero and keep the existing ECS resources available for rollback if needed.
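
    If you later need gradual rollouts within VKE (separate from the DNS-based cut over itself), the Gateway API supports weighted backends in an HTTPRoute. The sketch below sends 90% of traffic to my-app and 10% to a hypothetical second Service named my-app-v2; adjust the weights to shift traffic incrementally.

    ```yaml
    apiVersion: gateway.networking.k8s.io/v1
    kind: HTTPRoute
    metadata:
      name: app-route-weighted
    spec:
      parentRefs:
        - name: app-gateway
          sectionName: https
      hostnames:
        - "myapp.example.com"
      rules:
        - backendRefs:
            # Traffic splits proportionally to the weight values
            - name: my-app
              port: 80
              weight: 90
            - name: my-app-v2
              port: 80
              weight: 10
    ```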

Update CI/CD Pipelines

Update your continuous integration and deployment pipelines to deploy workloads to Vultr Kubernetes Engine (VKE) instead of Amazon ECS. This change replaces ECS service updates and task definition revisions with Kubernetes-native deployment workflows.

  1. Replace the ECS service update command with a Kubernetes rollout.

    console
    # Old ECS deployment
    # aws ecs update-service --cluster my-cluster --service my-service --force-new-deployment
    
    # New Kubernetes deployment
    $ kubectl set image deployment/my-app my-app=your-registry/my-app:${BUILD_TAG}
    $ kubectl rollout status deployment/my-app
    

    The kubectl set image command updates the container image for the Deployment, while kubectl rollout status monitors the rolling update until all pods are running the new version.

  2. Configure access to the VKE cluster in your CI/CD environment.

    • Add the VKE kubeconfig file to your CI/CD environment and set the KUBECONFIG environment variable.
    • Verify that the active Kubernetes context points to the target VKE namespace before running deployment commands.
Note
CI/CD implementations vary by platform and workflow. This example demonstrates one automated deployment approach. Adjust the steps as required to match your CI/CD tooling, security model, and environment.
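
As one concrete illustration, a GitHub Actions job might look like the sketch below. The workflow name, the VKE_KUBECONFIG repository secret, and the registry path are assumptions; substitute the equivalents for your platform.

```yaml
name: Deploy to VKE
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Write kubeconfig
        run: |
          # VKE_KUBECONFIG holds the cluster kubeconfig as a repository secret
          mkdir -p ~/.kube
          printf '%s' "${{ secrets.VKE_KUBECONFIG }}" > ~/.kube/config
      - name: Deploy and wait for rollout
        run: |
          kubectl set image deployment/my-app my-app=your-registry/my-app:${GITHUB_SHA}
          kubectl rollout status deployment/my-app --timeout=300s
```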

Depending on your application requirements, you may also reference the following topics, which are intentionally out of scope for this walkthrough:

Conclusion

You have successfully migrated containerized workloads from Amazon ECS to Vultr Kubernetes Engine by converting ECS task definitions into Kubernetes Deployments, replicating service discovery and load balancing, migrating configuration and secrets, and updating CI/CD pipelines. VKE provides a fully managed, standards-based Kubernetes platform that reduces vendor lock-in while offering flexible networking and production-ready orchestration. For advanced configurations and operational best practices, refer to the official Vultr Kubernetes Engine documentation.
