AWS offers two container orchestrators (ECS and EKS) and two compute modes (EC2 launch type and Fargate). Picking the right combination depends on your team's familiarity with Kubernetes and how much of the cluster you want to manage.
An ECS task definition (JSON) for a Fargate service running a Python API:
{
  "family": "api-service",
  "networkMode": "awsvpc",
  "requiresCompatibilities": ["FARGATE"],
  "cpu": "512",
  "memory": "1024",
  "executionRoleArn": "arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
  "taskRoleArn": "arn:aws:iam::123456789012:role/api-task-role",
  "containerDefinitions": [
    {
      "name": "api",
      "image": "123456789012.dkr.ecr.us-west-2.amazonaws.com/api:v1.4.2",
      "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
      "essential": true,
      "environment": [{"name": "LOG_LEVEL", "value": "INFO"}],
      "secrets": [
        {
          "name": "DB_PASSWORD",
          "valueFrom": "arn:aws:secretsmanager:us-west-2:123456789012:secret:prod/db-AbCdEf"
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/api-service",
          "awslogs-region": "us-west-2",
          "awslogs-stream-prefix": "api"
        }
      }
    }
  ]
}
Register and deploy with the AWS CLI:
aws ecs register-task-definition --cli-input-json file://taskdef.json
aws ecs update-service \
  --cluster prod \
  --service api-service \
  --task-definition api-service \
  --force-new-deployment
Choose EKS when the team already runs Kubernetes elsewhere, you need cloud portability, you want Kubernetes-native ecosystem tools (Helm, Argo CD, Istio, Karpenter, operators), or you have complex workload patterns (StatefulSets, DaemonSets, custom schedulers). Choose ECS when the team prefers a simpler control plane, you want tight AWS-native integration with minimal operational overhead, and you don't need K8s features.
Fargate removes node management — no patching, no scaling node groups, no right-sizing. You pay per task vCPU-second and GB-second, which is more expensive per unit than EC2 but eliminates idle capacity and node-operations cost. EC2 is cheaper at steady scale (especially with Spot or Savings Plans) and supports GPUs, privileged containers, and DaemonSets. A common pattern: Fargate for unpredictable or low-volume services, EC2 + Karpenter for production workloads at scale.
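To make the per-unit tradeoff concrete, here is a back-of-envelope sketch. The rates below are illustrative (approximate us-west-2 Linux/x86 Fargate pricing at one point in time) — always check current AWS pricing before deciding:

```python
# Illustrative Fargate rates (USD) -- example numbers, not a pricing source.
FARGATE_VCPU_HOUR = 0.04048   # per vCPU-hour
FARGATE_GB_HOUR = 0.004445    # per GB-hour

def fargate_monthly_cost(vcpu: float, gb: float, hours: float = 730) -> float:
    """Cost of keeping one Fargate task running for a month (~730 hours)."""
    return hours * (vcpu * FARGATE_VCPU_HOUR + gb * FARGATE_GB_HOUR)

# The 0.5 vCPU / 1 GB task from the task definition above,
# running continuously: roughly $18/month at these rates.
cost = fargate_monthly_cost(0.5, 1.0)
```

At low or bursty utilization this beats paying for an idle EC2 node; at sustained high utilization, a reserved or Spot EC2 instance packed with many tasks usually wins.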
On ECS, attach a Task Role to the task definition — the container's AWS SDK calls automatically pick up those credentials from the container credentials endpoint. On EKS, use IAM Roles for Service Accounts (IRSA): annotate a Kubernetes ServiceAccount with an IAM role ARN, and the pod's SDK exchanges its projected OIDC token for AWS credentials via sts:AssumeRoleWithWebIdentity. Both avoid embedding access keys.
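The ECS side is already shown above (`taskRoleArn` in the task definition). On the EKS side, IRSA is just an annotation on the ServiceAccount — a minimal sketch, where the namespace and role name are placeholders for your own:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: api
  namespace: prod                # assumed namespace
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/api-irsa-role  # hypothetical role
```

Pods that set `serviceAccountName: api` then receive a projected token, and the SDK's default credential chain handles the STS exchange with no application changes.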
Karpenter is an open-source Kubernetes node provisioner from AWS. Unlike Cluster Autoscaler, which scales pre-defined node groups, Karpenter looks at pending pods and provisions exactly the right instance type just-in-time — choosing across hundreds of instance shapes and Spot/On-Demand. Result: faster scale-up (often under a minute), better bin-packing, and lower cost.
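A sketch of what driving Karpenter looks like — a NodePool using the v1 API, allowing both Spot and On-Demand capacity. Field names follow the Karpenter docs at the time of writing, and the `default` EC2NodeClass is assumed to exist separately:

```yaml
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: general-purpose
spec:
  template:
    spec:
      requirements:
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot", "on-demand"]   # prefer Spot, fall back to On-Demand
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default                      # assumed to be defined elsewhere
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
```

Leaving instance types unconstrained is deliberate: the fewer requirements you pin, the more shapes Karpenter can bin-pack across, and the better the price it can find.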
Use rolling update (default) with a load-balancer health check and ECS Service Deployment Circuit Breaker — ECS launches the new task definition, waits for ALB target-group healthy status, then drains the old task. For more control, use Blue/Green deployments via CodeDeploy with two target groups; CodeDeploy shifts traffic at a controlled rate (linear, canary, or all-at-once) and supports automatic rollback on CloudWatch alarms.
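The circuit breaker is configured per service, not in the task definition — for example, in the `create-service`/`update-service` input:

```json
"deploymentConfiguration": {
  "deploymentCircuitBreaker": {"enable": true, "rollback": true},
  "maximumPercent": 200,
  "minimumHealthyPercent": 100
}
```

With `rollback: true`, a deployment whose tasks repeatedly fail to reach a healthy state is automatically rolled back to the last completed deployment instead of looping forever.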
The default VPC CNI assigns each pod a real VPC IP from the node's ENIs, capping pods at roughly ENIs × (IPs per ENI − 1) + 2, since each ENI's primary IP is reserved for the node itself. Workarounds: enable VPC CNI prefix delegation (assigns /28 prefixes per ENI slot, dramatically increasing density), or use a different CNI (Cilium, Calico) with overlay networking.
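The per-node cap, and the effect of prefix delegation, can be sketched with AWS's max-pods formula (ENI and IP limits here are the published m5.large numbers; the practical recommendation is to cap pods well below the prefix-delegation theoretical maximum):

```python
def max_pods(enis: int, ips_per_eni: int, prefix_delegation: bool = False) -> int:
    """VPC CNI pod capacity: each ENI's primary IP is reserved for the node,
    and +2 accounts for host-network pods (e.g. kube-proxy, aws-node)."""
    slots = enis * (ips_per_eni - 1)
    if prefix_delegation:
        slots *= 16  # each slot holds a /28 prefix = 16 pod IPs
    return slots + 2

# m5.large: 3 ENIs x 10 IPs per ENI
max_pods(3, 10)        # -> 29, the documented m5.large default limit
max_pods(3, 10, True)  # -> 434 theoretical; AWS suggests capping around 110
```

Prefix delegation turns the instance from IP-bound to memory/CPU-bound, which is usually what you want on small node types.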