EKS - Create and Import Cluster
Overview
This guide provides step-by-step instructions for creating and managing Amazon EKS clusters using the Kosmos CLI and AWS.
You will learn how to:
- Prepare prerequisites and access
- Set up IAM roles and policies
- Configure OIDC providers
- Create an EKS cluster
- Install Kosmos platform agents
- Validate and deploy workloads
Part 1: Prerequisites
Required tools:
- Kosmos CLI – Command-line interface for Kosmos operations (see Getting started with Kosmos CLI)
- AWS CLI – Interact with AWS services (see Install AWS CLI)
- Helm CLI – Kubernetes package manager (see Install Helm CLI)
- kubectl – Kubernetes command-line tool (see Install kubectl)
Required access:
- Kosmos Access Key – Authenticate with Kosmos console
- AWS IAM Account Access – Admin privileges required for creating roles, policies, and managing EKS
Network prerequisites:
Ensure the following AWS resources exist:
- A VPC with appropriate CIDR blocks
- At least 2 public and 2 private subnets in different availability zones
- Internet Gateway attached (for public clusters)
- Route tables and security groups configured
- Your IP whitelisted in security groups (for public access)
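A quick way to sanity-check these network prerequisites before proceeding; the VPC ID below is a placeholder you must replace with your own:

```shell
# Replace with your actual VPC ID before running
VPC_ID="vpc-0123456789abcdef0"

# List subnets with their availability zones (you need at least two AZs)
aws ec2 describe-subnets \
  --filters "Name=vpc-id,Values=${VPC_ID}" \
  --query 'Subnets[].[SubnetId,AvailabilityZone,MapPublicIpOnLaunch]' \
  --output table

# Confirm an Internet Gateway is attached (required for public clusters)
aws ec2 describe-internet-gateways \
  --filters "Name=attachment.vpc-id,Values=${VPC_ID}" \
  --query 'InternetGateways[].InternetGatewayId' \
  --output text
```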
EKS Cluster and Node Role prerequisites:
- EKS service role – Used by the control plane to manage AWS resources
- EKS node role – Used by EC2 worker nodes to interact with AWS resources
Example – Create EKS service and node role
Create the EKS service role (if it does not exist):
Explanation: The eksServiceRole allows the EKS control plane to manage AWS resources. It must be created before cluster creation.
cat > eks-service-role-trust.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": { "Service": "eks.amazonaws.com" },
"Action": "sts:AssumeRole"
}
]
}
EOF
aws iam create-role --role-name eksServiceRole \
--assume-role-policy-document file://eks-service-role-trust.json
aws iam attach-role-policy --role-name eksServiceRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
aws iam attach-role-policy --role-name eksServiceRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSServicePolicy
Create the EKS node role (if it does not exist):
The eksNodeRole allows EC2 nodes to register with the cluster, pull container images, and manage network interfaces.
cat > node-role-trust.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{ "Effect": "Allow", "Principal": { "Service": "ec2.amazonaws.com" }, "Action": "sts:AssumeRole" }
]
}
EOF
aws iam create-role \
--role-name eksNodeRole \
--assume-role-policy-document file://node-role-trust.json
aws iam attach-role-policy \
--role-name eksNodeRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy
aws iam attach-role-policy \
--role-name eksNodeRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy
aws iam attach-role-policy \
--role-name eksNodeRole \
--policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly
Create node instance profile:
aws iam get-instance-profile --instance-profile-name eksNodeInstanceProfile 2>/dev/null || {
echo "Creating eksNodeInstanceProfile..."
aws iam create-instance-profile --instance-profile-name eksNodeInstanceProfile
aws iam add-role-to-instance-profile \
--instance-profile-name eksNodeInstanceProfile \
--role-name eksNodeRole
}
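To double-check that all three managed policies landed on the node role, you can compare the attached-policy list against what this guide expects; `node-policies.json` is just a scratch file name used for the check:

```shell
# Save the attached policies for the node role
aws iam list-attached-role-policies --role-name eksNodeRole > node-policies.json

# Verify each expected policy is present
for p in AmazonEKSWorkerNodePolicy AmazonEKS_CNI_Policy AmazonEC2ContainerRegistryReadOnly; do
  if jq -e --arg p "$p" '.AttachedPolicies[] | select(.PolicyName == $p)' node-policies.json > /dev/null; then
    echo "OK: $p attached"
  else
    echo "MISSING: $p"
  fi
done
```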
Set environment variables
Gather the following values before starting:
| Variable | Description |
|---|---|
| ${ACCOUNT_ID} | AWS account ID |
| ${ACCESS_KEY} | Kosmos access key |
| ${FLEET_ID} | Kosmos fleet identifier |
| ${CLUSTER_NAME} | Desired cluster name |
| ${REGION} | Target AWS region (e.g., us-east-2) |
| ${ADMIN_TEAM} | Kosmos team name from the console (grants cluster-admin access) |
| ${OWNER} | Cluster owner identifier |
| ${EKS_SERVICE_ROLE} | EKS service role |
| ${EKS_Node_ROLE} | EKS node role |
export ACCOUNT_ID=<Your-AWS-account-ID>
export ACCESS_KEY=<Your-Kosmos-access-key>
export FLEET_ID=<Your-Kosmos-fleet-identifier>
export CLUSTER_NAME=<Desired-name-for-your-cluster>
export REGION=<Target-AWS-region>
export ADMIN_TEAM=<Your-team-name>
export OWNER=<Owner-identifier-for-the-cluster>
export EKS_SERVICE_ROLE=<EKS-service-role>
export EKS_Node_ROLE=<EKS-Node-role>
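Before continuing, it is worth catching unset or still-placeholder variables, since a leftover `<...>` value will silently produce broken role ARNs and cluster names later. A minimal bash check (the `check_env` helper is not part of the Kosmos CLI, just a local convenience):

```shell
# check_env: flag any variable that is unset or still a <placeholder>
check_env() {
  local var val missing=0
  for var in "$@"; do
    val="${!var}"
    if [ -z "$val" ] || [[ "$val" == \<* ]]; then
      echo "ERROR: $var is unset or still a placeholder ('$val')"
      missing=$((missing + 1))
    fi
  done
  return "$missing"
}

check_env ACCOUNT_ID ACCESS_KEY FLEET_ID CLUSTER_NAME REGION \
          ADMIN_TEAM OWNER EKS_SERVICE_ROLE EKS_Node_ROLE \
  && echo "All variables set." \
  || echo "Fix the variables above before continuing."
```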
Part 2: IAM role and policy setup
Step 2.1: Create trust entity for Kosmos role
Verify your AWS account ID:
aws sts get-caller-identity --query Account --output text
Create trust-entity.json:
cat > trust-entity.json <<EOF
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::${ACCOUNT_ID}:oidc-provider/console.kosmos.spcplatform.com/kosmos-oidc"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"console.kosmos.spcplatform.com/kosmos-oidc:sub": "fleet-${FLEET_ID}",
"console.kosmos.spcplatform.com/kosmos-oidc:aud": "kosmos-operator"
}
}
},
{
"Effect": "Allow",
"Principal": { "AWS": ["$(aws sts get-caller-identity --query Arn --output text)"] },
"Action": "sts:AssumeRole"
}
]
}
EOF
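Before creating the role, it can help to confirm the rendered file is valid JSON and that the shell actually expanded the variables (a quoted heredoc delimiter, for example, leaves literal `${...}` placeholders behind):

```shell
# Well-formed JSON?
if jq empty trust-entity.json 2>/dev/null; then
  echo "trust-entity.json is valid JSON"
else
  echo "trust-entity.json is missing or malformed"
fi

# Any unexpanded ${...} placeholders left behind?
if grep -q '\${' trust-entity.json 2>/dev/null; then
  echo "WARNING: unexpanded variables remain in trust-entity.json"
fi
```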
Step 2.2: Create Kosmos operator role
aws iam create-role \
--role-name kosmos-operator \
--assume-role-policy-document file://trust-entity.json
Step 2.3: Create and attach Kosmos operator policies
cat > kosmos-eks-policy.json <<'EOF'
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "EC2Permissions",
"Effect": "Allow",
"Action": [
"ec2:RunInstances",
"ec2:RevokeSecurityGroupIngress",
"ec2:RevokeSecurityGroupEgress",
"ec2:DescribeInstanceTypes",
"ec2:DescribeRegions",
"ec2:DescribeVpcs",
"ec2:DescribeTags",
"ec2:DescribeSubnets",
"ec2:DescribeSecurityGroups",
"ec2:DescribeRouteTables",
"ec2:DescribeLaunchTemplateVersions",
"ec2:DescribeLaunchTemplates",
"ec2:DescribeKeyPairs",
"ec2:DescribeInternetGateways",
"ec2:DescribeImages",
"ec2:DescribeAvailabilityZones",
"ec2:DescribeAccountAttributes",
"ec2:DeleteTags",
"ec2:DeleteLaunchTemplate",
"ec2:DeleteSecurityGroup",
"ec2:DeleteKeyPair",
"ec2:CreateTags",
"ec2:CreateSecurityGroup",
"ec2:CreateLaunchTemplateVersion",
"ec2:CreateLaunchTemplate",
"ec2:CreateKeyPair",
"ec2:AuthorizeSecurityGroupIngress",
"ec2:AuthorizeSecurityGroupEgress",
"sts:GetCallerIdentity"
],
"Resource": "*"
},
{
"Sid": "IAMPermissions",
"Effect": "Allow",
"Action": [
"iam:ListRoles",
"iam:ListRoleTags",
"iam:ListInstanceProfilesForRole",
"iam:ListInstanceProfiles",
"iam:ListAttachedRolePolicies",
"iam:GetRole",
"iam:GetInstanceProfile",
"iam:DetachRolePolicy",
"iam:DeleteRole",
"iam:CreateRole",
"iam:AttachRolePolicy",
"iam:TagRole"
],
"Resource": "*"
},
{
"Sid": "IAMPermissionsForPassRoleToEKS",
"Effect": "Allow",
"Action": [
"iam:PassRole"
],
"Resource": "*",
"Condition": {
"StringEquals": {
"iam:PassedToService": "eks.amazonaws.com"
}
}
},
{
"Sid": "CloudFormationPermissions",
"Effect": "Allow",
"Action": [
"cloudformation:ListStacks",
"cloudformation:ListStackResources",
"cloudformation:DescribeStacks",
"cloudformation:DescribeStackResources",
"cloudformation:DescribeStackResource",
"cloudformation:DeleteStack",
"cloudformation:CreateStackSet",
"cloudformation:CreateStack"
],
"Resource": "*"
},
{
"Sid": "KMSPermissions",
"Effect": "Allow",
"Action": "kms:ListKeys",
"Resource": "*"
},
{
"Sid": "EKSPermissions",
"Effect": "Allow",
"Action": [
"eks:UpdateNodegroupVersion",
"eks:UpdateNodegroupConfig",
"eks:UpdateClusterVersion",
"eks:UpdateClusterConfig",
"eks:UntagResource",
"eks:UpdateAddon",
"eks:TagResource",
"eks:ListUpdates",
"eks:ListTagsForResource",
"eks:ListNodegroups",
"eks:ListFargateProfiles",
"eks:ListClusters",
"eks:ListAddons",
"eks:ListIdentityProviderConfigs",
"eks:DescribeUpdate",
"eks:DescribeNodegroup",
"eks:DescribeFargateProfile",
"eks:DescribeCluster",
"eks:DescribeAddon",
"eks:DescribeAddonVersions",
"eks:DescribeAddonConfiguration",
"eks:DescribeIdentityProviderConfig",
"eks:DeleteNodegroup",
"eks:DeleteFargateProfile",
"eks:DeleteCluster",
"eks:DeleteAddon",
"eks:CreateNodegroup",
"eks:CreateFargateProfile",
"eks:CreateAddon",
"eks:CreateCluster",
"eks:AssociateIdentityProviderConfig"
],
"Resource": "*"
},
{
"Sid": "IAMPermissionsForServiceRoleCreation",
"Effect": "Allow",
"Action": [
"iam:AddRoleToInstanceProfile",
"iam:AttachRolePolicy",
"iam:CreateInstanceProfile",
"iam:CreateRole",
"iam:CreateOpenIDConnectProvider",
"iam:DeleteInstanceProfile",
"iam:DeleteRole",
"iam:DetachRolePolicy",
"iam:GetInstanceProfile",
"iam:GetRole",
"iam:ListAttachedRolePolicies",
"iam:ListInstanceProfiles",
"iam:ListInstanceProfilesForRole",
"iam:ListRoles",
"iam:ListRoleTags",
"iam:ListOpenIDConnectProviders",
"iam:RemoveRoleFromInstanceProfile",
"iam:TagRole",
"sts:AssumeRoleWithWebIdentity"
],
"Resource": "*"
},
{
"Sid": "IAMPermissionsForServiceLinkedRoleCreation",
"Effect": "Allow",
"Action": [
"iam:CreateServiceLinkedRole"
],
"Resource": "arn:aws:iam::*:role/aws-service-role/eks.amazonaws.com/AWSServiceRoleForAmazonEKS"
},
{
"Sid": "VPCPermissions",
"Effect": "Allow",
"Action": [
"ec2:ReplaceRoute",
"ec2:ModifyVpcAttribute",
"ec2:ModifySubnetAttribute",
"ec2:DisassociateRouteTable",
"ec2:DetachInternetGateway",
"ec2:DescribeVpcs",
"ec2:DeleteVpc",
"ec2:DeleteTags",
"ec2:DeleteSubnet",
"ec2:DeleteRouteTable",
"ec2:DeleteRoute",
"ec2:DeleteInternetGateway",
"ec2:CreateVpc",
"ec2:CreateSubnet",
"ec2:CreateSecurityGroup",
"ec2:CreateRouteTable",
"ec2:CreateRoute",
"ec2:CreateInternetGateway",
"ec2:AttachInternetGateway",
"ec2:AssociateRouteTable"
],
"Resource": "*"
}
]
}
EOF
CLI command to create and attach Kosmos operator policy and role:
aws iam create-policy \
--policy-name kosmos-operator-policy \
--policy-document file://kosmos-eks-policy.json
aws iam attach-role-policy \
--role-name kosmos-operator \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy
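To confirm the role and policy are wired up correctly, you can read the role back and check the attachment locally (`operator-policies.json` below is just a scratch file name used for the check):

```shell
# Confirm the role exists
aws iam get-role --role-name kosmos-operator --query 'Role.Arn' --output text

# Save the attached-policy list and check it locally
aws iam list-attached-role-policies --role-name kosmos-operator > operator-policies.json
if jq -e '.AttachedPolicies[] | select(.PolicyName == "kosmos-operator-policy")' operator-policies.json > /dev/null; then
  echo "OK: kosmos-operator-policy is attached"
else
  echo "MISSING: kosmos-operator-policy is not attached"
fi
```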
Part 3: Register OIDC provider in AWS
What is OIDC and why is it needed?
OpenID Connect (OIDC) is an identity layer built on top of OAuth 2.0 that allows one service to verify the identity of another through a trusted identity provider. Instead of sharing long-lived credentials, services exchange short-lived tokens that prove who they are.
How Kosmos uses OIDC to access AWS:
When Kosmos needs to manage resources in your AWS account (such as creating EKS nodes or updating cluster configurations), it generates a signed identity token. This token contains claims identifying the specific fleet making the request. AWS then validates this token against the OIDC provider you registered and, if trusted, issues temporary AWS credentials.
Why we use the SSL certificate thumbprint:
When you register Kosmos as an OIDC provider in AWS, you provide a certificate thumbprint. This thumbprint tells AWS: “Trust tokens signed by the service presenting this certificate.” It ensures AWS only accepts tokens from the legitimate Kosmos identity provider, preventing impersonation.
Benefits of this approach:
- No stored credentials — Kosmos never stores your AWS access keys
- Short-lived access — Each operation uses fresh, temporary credentials
- You control trust — Remove the OIDC provider anytime to revoke access
Register the OIDC provider
You have two options for registering the OIDC provider:
Option A: Using the AWS console UI (recommended)
- Log in to the AWS Console
- Navigate to IAM → Access Management → Identity Providers
- Click + Add provider
- Choose OpenID Connect and enter the following:
  - Provider URL: https://console.kosmos.spcplatform.com/kosmos-oidc
  - Audience: kosmos-operator
- Click Add provider. The identity provider will be created.
- Open your identity provider, go to the Endpoint verification tab, and copy and save your Thumbprint.
Option B: Manual configuration via the CLI

Get the certificate thumbprint. Save the following script as get_oidc_fingerprint.sh:

#!/bin/bash
# get_oidc_fingerprint.sh - Extract OIDC provider certificate fingerprint
# Usage: ./get_oidc_fingerprint.sh [OIDC_URL]
OIDC_URL="${1:-https://console.kosmos.spcplatform.com/kosmos-oidc}"
HOST=$(echo "$OIDC_URL" | sed -E 's|https?://([^/:]+).*|\1|')

# Get certificate chain (</dev/null prevents the command from hanging)
openssl s_client -servername "$HOST" -showcerts -connect "$HOST:443" </dev/null 2>/dev/null > certs_chain.txt

if [ ! -s certs_chain.txt ]; then
  echo "Failed to retrieve certificate chain."
  exit 1
fi

# Split the certificate chain into individual files
awk 'BEGIN {cert=""; count=0}
/BEGIN CERTIFICATE/ {cert=$0; next}
/END CERTIFICATE/ {cert=cert "\n" $0; filename=sprintf("cert_%02d.crt", count++); print cert > filename; cert=""}
{if (cert != "") cert=cert "\n" $0}' certs_chain.txt

# Find the certificate matching the domain
for cert_file in cert_*.crt; do
  subject=$(openssl x509 -in "$cert_file" -noout -subject 2>/dev/null)
  altnames=$(openssl x509 -in "$cert_file" -noout -text 2>/dev/null | grep -A 1 "Subject Alternative Name")
  if echo "$subject" | grep -q "CN.*$HOST" || echo "$altnames" | grep -q "$HOST"; then
    FINGERPRINT=$(openssl x509 -in "$cert_file" -fingerprint -sha1 -noout | sed 's/://g' | awk -F= '{print $2}')
    FINGERPRINT=$(echo "$FINGERPRINT" | tr '[:upper:]' '[:lower:]')
    echo "$FINGERPRINT"
    rm -f certs_chain.txt cert_*.crt
    exit 0
  fi
done

echo "No matching certificate found."
rm -f certs_chain.txt cert_*.crt
exit 1

Run the script:

chmod +x get_oidc_fingerprint.sh
./get_oidc_fingerprint.sh

This outputs a 40-character SHA-1 fingerprint (e.g., c940c37c6a3d3327385074008b7009cb5c8f84d6).

Create the OIDC provider configuration file (create-oidc-provider.json):

cat > create-oidc-provider.json <<'EOF'
{
  "Url": "https://console.kosmos.spcplatform.com/kosmos-oidc",
  "ThumbprintList": ["YOUR_THUMBPRINT_FROM_ABOVE"]
}
EOF

Create the OIDC provider:

aws iam create-open-id-connect-provider \
  --cli-input-json file://create-oidc-provider.json > oidc-provider-output.json
Save the returned OpenIDConnectProviderArn—you’ll need it next.
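To confirm the registration, you can read the ARN back from the saved output and inspect the provider (this assumes the create command's output was redirected to oidc-provider-output.json as shown above):

```shell
# Read the provider ARN back from the saved output
OIDC_ARN=$(jq -r '.OpenIDConnectProviderArn' oidc-provider-output.json)
echo "OIDC provider ARN: ${OIDC_ARN}"

# Show the registered URL, audience, and thumbprint list
aws iam get-open-id-connect-provider --open-id-connect-provider-arn "${OIDC_ARN}"
```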
Part 4: Creating an EKS cluster
Step 4.1: Log in to Kosmos
kosmos login console.kosmos.spcplatform.com --access-key ${ACCESS_KEY}
Step 4.2: Prepare cluster configuration
Tip: Use kosmos create eks --skeleton to see the correct YAML format.
- Kind: EKSCluster
- Kubernetes Version: Match the node group version exactly (e.g., 1.32)
- Roles: Use eksServiceRole for the cluster and eksNodeRole for the nodes
- Public/Private Access: Enable as required
Create eks-cluster-config.yaml:
Important: Carefully review the YAML configuration below and replace the placeholder values for subnets, publicAccessSources, and other environment-specific fields before creating the cluster.
cat > eks-cluster-config.yaml <<EOF
apiVersion: storage.kosmos.spcplatform.com/v1
kind: EKSCluster
metadata:
labels:
app.kubernetes.io/name: ${CLUSTER_NAME}
app.kubernetes.io/instance: ${FLEET_ID}
app.kubernetes.io/part-of: kosmos
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: kosmos
name: ${CLUSTER_NAME}
namespace: ${FLEET_ID}
spec:
description: "EKS cluster created via Kosmos CLI"
authorization:
adminUsers: [${OWNER}]
owner: ${OWNER}
eksConfig:
clusterRole: "${EKS_SERVICE_ROLE}" # Add your EKS service role ARN here
kubernetesVersion: "1.32" # Replace value with the actual K8S version
publicAccess: true
privateAccess: true
kosmosRoleArn: "arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator"
displayName: "${CLUSTER_NAME}" # Enter your Cluster name here
region: "${REGION}" # Enter your Regions here
loggingTypes:
["api", "audit", "authenticator", "controllerManager", "scheduler"]
secretsEncryption: false
tags:
ManagedBy: "kosmos"
subnets:
- "subnet-xxxxxxxxxxxxxxxxx" # Replace with your actual subnet ID
- "subnet-xxxxxxxxxxxxxxxxx" # Replace with your actual subnet ID
- "subnet-xxxxxxxxxxxxxxxxx" # Replace with your actual subnet ID
- "subnet-xxxxxxxxxxxxxxxxx" # Replace with your actual subnet ID
securityGroups: []
publicAccessSources:
- "10.0.0.1/32" # Replace with your actual public IP
- "10.0.0.2/32" # Replace with your actual public IP
ebsCSIDriver: true
imported: false
nodeGroups:
- nodegroupName: "${CLUSTER_NAME}-nodes" # Replace value with your actual node group name
nodeRole: "${EKS_Node_ROLE}" # Add your EKS Node role ARN here
resourceTags:
Environment: "prd"
diskSize: 50 # Replace value with your desired disk size
instanceType: "t3.medium" # Replace value with your desired instance type
version: "1.32" # This value must be the same as the kubernetesVersion above
minSize: 1
maxSize: 4
desiredSize: 1
gpu: false
subnets: []
tags:
NodeGroup: "primary"
labels:
workload: "general"
requestSpotInstances: false
EOF
Note: Do not forget to replace the placeholder values in the YAML file above:
- subnets – replace with your actual subnet IDs
- publicAccessSources – replace with your actual public IPs
- securityGroups – optional; list specific security groups if needed
Step 4.3: Create the cluster
- Create cluster:
kosmos create eks --file eks-cluster-config.yaml --fleet ${FLEET_ID}
- List clusters in Kosmos:
kosmos list eks --fleet ${FLEET_ID}
- Verify cluster in AWS:
aws eks list-clusters
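Cluster creation typically takes 10-15 minutes. Rather than polling by hand, you can block until the control plane reports ACTIVE (cluster.json below is a scratch file used for the local status check):

```shell
# Block until the control plane is ACTIVE (the wait command polls for you)
aws eks wait cluster-active --name "${CLUSTER_NAME}" --region "${REGION}"

# Save the cluster description and check the status field locally
aws eks describe-cluster --name "${CLUSTER_NAME}" --region "${REGION}" > cluster.json
STATUS=$(jq -r '.cluster.status' cluster.json)
echo "Cluster status: ${STATUS}"
```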
Step 4.4: Install platform agent and connect to Kosmos
Important: The cluster endpoint may default to private-only. Public access is required to install the agent from outside the VPC.
Step 4.4.1: Assume Kosmos Operator Role
aws sts assume-role \
--role-arn arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator \
--role-session-name kosmos-operator-session \
> assume-role-output.json
Export temporary credentials:
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' assume-role-output.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' assume-role-output.json)
export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' assume-role-output.json)
Verify role:
aws sts get-caller-identity
# Expected: Arn: arn:aws:sts::${ACCOUNT_ID}:assumed-role/kosmos-operator/kosmos-operator-session
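These temporary credentials expire (one hour by default for assumed roles). If agent installation fails later with authentication errors, check the expiry and re-run the assume-role step:

```shell
# Print the expiration time of the assumed-role credentials
EXPIRY=$(jq -r '.Credentials.Expiration' assume-role-output.json)
echo "Temporary credentials expire at: ${EXPIRY}"
```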
Step 4.4.2: Update kubeconfig
aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}
Step 4.4.3: Install Kosmos platform agent
kosmos join cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}
Verify agent:
kubectl get pods -n vcluster-platform
helm list -n vcluster-platform
Part 5: Importing an existing EKS cluster
- Make sure that you have completed the Prerequisites, IAM role and policy setup, and OIDC provider registration.
- Make sure you're logged in to the Kosmos CLI:
kosmos login console.kosmos.spcplatform.com --access-key ${ACCESS_KEY}
Step 5.1: Prepare import configuration
For existing clusters, create eks-import-config.yaml:
cat > eks-import-config.yaml <<EOF
apiVersion: storage.kosmos.spcplatform.com/v1
kind: EKSCluster
metadata:
labels:
app.kubernetes.io/name: ${CLUSTER_NAME}
app.kubernetes.io/instance: ${FLEET_ID}
app.kubernetes.io/part-of: kosmos
app.kubernetes.io/managed-by: kustomize
app.kubernetes.io/created-by: kosmos
name: ${CLUSTER_NAME}
namespace: ${FLEET_ID}
spec:
name: ${CLUSTER_NAME}
description: "Existing EKS cluster imported to Kosmos"
authorization:
adminTeams: [${ADMIN_TEAM}]
owner: ${OWNER}
eksConfig:
displayName: "${CLUSTER_NAME}-imported"
region: "${REGION}"
imported: true # Key difference for imports
kosmosRoleArn: "arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator"
publicAccessSources:
- "10.0.0.1/32" # Replace with your actual public IP
- "10.0.0.2/32" # Replace with your actual public IP
kubernetesVersion: "1.32" # Match existing cluster version
EOF
Step 5.2: Import the Cluster
Note: The --name parameter in CLI commands must match the displayName from your import configuration (e.g., ${CLUSTER_NAME}-imported), not the metadata.name. AWS uses the displayName as the actual cluster identifier.
Ensure you have the cluster name and region of the existing EKS cluster, as well as the kosmos-operator ARN created earlier in Step 2.3.
Verify cluster exists:
aws eks describe-cluster --name ${CLUSTER_NAME} --region ${REGION}
Import to Kosmos:
kosmos create eks --file eks-import-config.yaml
Check status:
kosmos list eks --fleet ${FLEET_ID}
Step 5.3: Connect imported cluster
# Assume the kosmos-operator
aws sts assume-role \
--role-arn arn:aws:iam::${ACCOUNT_ID}:role/kosmos-operator \
--role-session-name kosmos-operator-session \
> assume-role-output.json
# Export credentials
export AWS_ACCESS_KEY_ID=$(jq -r '.Credentials.AccessKeyId' assume-role-output.json)
export AWS_SECRET_ACCESS_KEY=$(jq -r '.Credentials.SecretAccessKey' assume-role-output.json)
export AWS_SESSION_TOKEN=$(jq -r '.Credentials.SessionToken' assume-role-output.json)
# Update kubeconfig for existing cluster
aws eks update-kubeconfig --name ${CLUSTER_NAME} --region ${REGION}
# Install agent to connect to Kosmos
kosmos join cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}
# Verify agent is running
kubectl get pods -n vcluster-platform
helm list -n vcluster-platform
Part 6: Validation and usage
Step 6.1: Validate cluster access
# Login to Kosmos
kosmos login https://console.kosmos.spcplatform.com/
# Switch context to cluster
kosmos use cluster ${CLUSTER_NAME} --fleet ${FLEET_ID}
# Test access
kubectl get namespaces
kubectl get nodes
kubectl get pods --all-namespaces
Step 6.2: Verify cluster health
kubectl get nodes -o wide
kubectl get pods -n kube-system
kubectl get pods -n vcluster-platform
kubectl cluster-info
Part 7: Test sample application
Step 7.1: Deploy nginx
kubectl create deployment nginx-hello --image=nginx --port=80
kubectl expose deployment nginx-hello --type=LoadBalancer --port=80
kubectl get deployment nginx-hello
kubectl get pods -l app=nginx-hello
kubectl get svc nginx-hello
kubectl get svc nginx-hello -w
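Once the EXTERNAL-IP column shows a hostname, a quick smoke test can confirm the service is reachable (the jsonpath expression below assumes an ELB hostname rather than a bare IP):

```shell
# Grab the load balancer hostname assigned to the service
LB_HOST=$(kubectl get svc nginx-hello -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "Load balancer: ${LB_HOST}"

# Fetch the page and pull out the <title> as a minimal health check
curl -s "http://${LB_HOST}" | grep -o '<title>.*</title>'
```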
Step 7.2: Clean up
kubectl delete deployment nginx-hello
kubectl delete service nginx-hello
Part 8: Troubleshooting
Common issues
Cluster stuck in “Connecting”
- Check OIDC provider configuration
- Validate IAM role trust relationships
- Verify network connectivity
- Confirm kosmos-operator and kosmos-client-role exist
Authentication errors
- Ensure correct role ARN in cluster YAML
- Verify fleet ID matches trust relationships
- Check temporary credentials expiration
Network access issues
- Your IP should be in publicAccessSources
- Security groups must allow required ports
- A VPN connection is required for private clusters
Node group creation failures
- Confirm the node role has the required policies attached
- Subnet availability zones match
- Disk size and instance type meet requirements
- eksNodeInstanceProfile exists and includes the node role

eksServiceRole errors
- Must be created before cluster creation
- Attach AmazonEKSClusterPolicy and AmazonEKSServicePolicy
- Trust policy allows eks.amazonaws.com to assume the role
Agent installation failures
- Must assume the kosmos-operator role
- Cluster must have public access enabled
- Verify network connectivity
- Ensure AWS credentials are exported
Part 9: Cleanup and Teardown
When you no longer need the cluster, follow these steps to clean up resources.
Step 9.1: Remove cluster from Kosmos
Delete the EKS cluster from Kosmos management:
kosmos delete eks --name ${CLUSTER_NAME} --fleet ${FLEET_ID}
Note: This removes the cluster from Kosmos management. If the cluster was created by Kosmos (not imported), this will also delete the EKS cluster and its resources in AWS.
Step 9.2: Delete IAM resources (optional)
If you created IAM resources specifically for this cluster and no longer need them:
# Detach and delete kosmos-operator policy
aws iam detach-role-policy \
--role-name kosmos-operator \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy
aws iam delete-policy \
--policy-arn arn:aws:iam::${ACCOUNT_ID}:policy/kosmos-operator-policy
# Delete kosmos-operator role
aws iam delete-role --role-name kosmos-operator
Step 9.3: Remove OIDC provider (optional)
Warning: Only remove the OIDC provider if no other clusters or fleets are using it. Removing it will break authentication for all resources that depend on it.
# List OIDC providers to find the ARN
aws iam list-open-id-connect-providers
# Delete the Kosmos OIDC provider
aws iam delete-open-id-connect-provider \
--open-id-connect-provider-arn arn:aws:iam::${ACCOUNT_ID}:oidc-provider/console.kosmos.spcplatform.com/kosmos-oidc
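If you prefer not to construct the provider ARN by hand, you can extract it from the list output before deleting (providers.json below is a scratch file used for the lookup):

```shell
# Save the provider list and pick out the Kosmos provider ARN
aws iam list-open-id-connect-providers > providers.json
KOSMOS_OIDC_ARN=$(jq -r '.OpenIDConnectProviderList[].Arn | select(contains("kosmos-oidc"))' providers.json)
echo "Found: ${KOSMOS_OIDC_ARN}"
```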
Notes
- Replace all ${VARIABLE} placeholders with real values
- The SPC platform uses AWS-compatible APIs
- VPN requirements may vary based on your organization
- Some features require additional permissions or licenses
- For VPC creation, refer to the MKS setup guide with helper scripts