Create GKE Cluster using Terraform

Introduction

This is a Terraform reference configuration for creating a Kosmos GKE cluster through the Kosmos provider that conforms to the Samsung Security Checklist (v2.0.2).

Alternative: If you prefer using the CLI instead of Terraform, see Create GKE Cluster using CLI.

Requirements

| Name | Version |
| --- | --- |
| Terraform CLI | >= 1.9 |
| Kosmos CLI | >= 4.3.9 |
| Google Cloud CLI | >= 517.0.0 |
| Kosmos Terraform Provider | >= 0.9.3 |

Artifacts

Download the Terraform module and provider from the Terraform Artifacts page:

| Artifact | Version |
| --- | --- |
| Kosmos Terraform Provider | v0.12.0 |
| GKE (Google Cloud Platform) Module | v4.1.2 |

Provider Configuration

Create two files: versions.tf for version constraints and provider.tf for provider configuration.

versions.tf

# Version Constraints for Kosmos GKE Terraform Configuration
terraform {
  required_version = ">= 1.9"

  required_providers {
    kosmos = {
      source  = "local/samsung/kosmos"
      version = ">= 0.9.3"
    }

    google = {
      source  = "hashicorp/google"
      version = ">= 5.0"
    }

    google-beta = {
      source  = "hashicorp/google-beta"
      version = ">= 5.0"
    }
  }
}

provider.tf

# Google Cloud Provider Configuration
# Authentication: Uses Application Default Credentials (ADC)
# Run: gcloud auth application-default login
provider "google" {
  project = var.gcp_project_id
  region  = var.gcp_region

  # Recommended when using gcloud CLI credentials
  user_project_override = true
}

provider "google-beta" {
  project = var.gcp_project_id
  region  = var.gcp_region

  user_project_override = true
}

# Kosmos Provider Configuration
provider "kosmos" {
  # Access key can be set via KOSMOS_ACCESS_KEY environment variable
  accesskey = var.kosmos_access_key

  # Kosmos API endpoint
  endpoint = var.kosmos_endpoint
}

# Data source to get current GCP project information
data "google_project" "current" {
  project_id = var.gcp_project_id
}

Getting started

Prerequisites

  1. Install the Kosmos provider following the Terraform provider guide. Install the version specified in versions.tf.

  2. Ensure you have the required Kosmos permissions: Cluster Creator or Cluster Admin role. If you don’t have these permissions, contact your Kosmos operator.

  3. Install the gcloud CLI following the official installation guide.

  4. Configure GCP Application Default Credentials (ADC) for Terraform:

    gcloud auth application-default login
    
  5. Enable the required GCP APIs in your project:

    gcloud services enable \
      compute.googleapis.com \
      container.googleapis.com \
      iam.googleapis.com \
      iamcredentials.googleapis.com \
      cloudresourcemanager.googleapis.com \
      cloudkms.googleapis.com \
      --project=YOUR_PROJECT_ID
    
  6. Ensure the GCP credentials have the required permissions.

  7. Find your public IP address (needed for gke_master_authorized_networks and bastion_authorized_networks):

    curl -s ifconfig.me
    
  8. Generate a Kosmos access key:

    1. Open Kosmos console
    2. Click your username (top-right) → Access Keys
    3. Click Create access key
  9. Ensure you have a Kosmos fleet. If you don’t have one, contact your Kosmos administrator or see the fleet management documentation.
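If you prefer to manage the API enablement from step 5 in Terraform rather than with gcloud, a minimal sketch using the standard google_project_service resource looks like this (the resource and local names are illustrative; keep it in a separate root module so destroying the cluster does not touch the APIs):

```hcl
# Enable the required GCP APIs via Terraform instead of `gcloud services enable`.
locals {
  required_apis = [
    "compute.googleapis.com",
    "container.googleapis.com",
    "iam.googleapis.com",
    "iamcredentials.googleapis.com",
    "cloudresourcemanager.googleapis.com",
    "cloudkms.googleapis.com",
  ]
}

resource "google_project_service" "required" {
  for_each = toset(local.required_apis)

  project = var.gcp_project_id
  service = each.value

  # Leave the APIs enabled when this resource is destroyed
  disable_on_destroy = false
}
```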

How to run

  1. Download the terraform-kosmos-gke module from the Artifacts table above

  2. Create a new file called terraform.tfvars inside the working directory. Refer to terraform.tfvars.example (included in the example download) for the required variables.

  3. Initialize the working directory and download Terraform providers and modules

    terraform init
    
  4. Apply the Terraform configuration. Review the resources to be created carefully, then type yes when prompted:

    terraform apply --var-file=terraform.tfvars
    
    
    1. Running terraform apply without flags also works, as Terraform automatically loads terraform.tfvars as variable input
    2. After the cluster’s state turns to connecting, you can connect the cluster to Kosmos by following the steps below
  5. To destroy all the resources, run the command below:

    terraform destroy --var-file=terraform.tfvars
    

Connecting the cluster to Kosmos

Prerequisites

  1. Ensure you have network access to the cluster’s control plane
    • This means the gke_master_authorized_networks variable must at least contain your outbound IP address
  2. Ensure you have the gke-gcloud-auth-plugin installed through gcloud; you can follow the instructions here to install it
  3. Ensure you have the kosmos CLI installed; you can follow the instructions here to install it
  4. Ensure you have the helm CLI installed; you can follow the instructions here to install it. This is required by the kosmos CLI to connect the cluster to Kosmos.

How to connect cluster to Kosmos

  1. Authenticate yourself in the gcloud CLI by running:

    gcloud auth login
    
  2. Authenticate yourself in the kosmos CLI by running:

    • $KOSMOS_ACCESS_KEY should be the same Kosmos access key generated in the Prerequisites
    kosmos login --access-key $KOSMOS_ACCESS_KEY https://console.kosmos.spcplatform.com
    
  3. Generate kubeconfig access to the created GKE cluster by running the command below (replace the variables with actual values):

    • $CLUSTER_NAME should be the name of the created GKE cluster, as set in the cluster_name variable
    • $CLUSTER_REGION should be the region where the GKE cluster is created, as set in the gcp_region variable
    gcloud container clusters get-credentials $CLUSTER_NAME \
    --location=$CLUSTER_REGION
    
  4. Connect the cluster to Kosmos by running:

    • $FLEET_NAME should be the name of the fleet where the cluster will be registered, as set in the fleet_name variable
    • $CLUSTER_NAME should be the name of the created GKE cluster, as set in the cluster_name variable
    kosmos join cluster --fleet $FLEET_NAME $CLUSTER_NAME
    
  5. After a while, the state of the cluster in Kosmos will turn to ready

Variables

| Variable | Description | Type | Example |
| --- | --- | --- | --- |
| fleet_name | Kosmos fleet name where the GKE cluster will be registered | string | "fleet1" |
| cluster_name | Name of the GKE cluster to be created | string | "kosmos-gke-cluster" |
| kosmos_access_key | Access key for Kosmos | string | "" |
| kosmos_user | Kosmos user ID to be registered as the owner of the GKE cluster in Kosmos | string | "kosmosuser" |
| gcp_region | GCP region where the resources will be created | string | "asia-southeast1" |
| gcp_project_id | GCP project where the GKE cluster will be created through Kosmos | string | "" |
| gcp_workload_identity_pool_id | Workload Identity Pool ID for Kosmos federation. Max 32 characters; the module concatenates this with the fleet name, so keep it short (e.g., kosmos-pool) | string | "kosmos-pool" |
| gcp_workload_identity_provider_id | Workload Identity Provider ID for Kosmos OIDC. Max 32 characters; keep it short to avoid length errors (e.g., kosmos-oidc) | string | "kosmos-oidc" |
| gke_version_major | Major version to be used as the GKE cluster's version | string | "1.31" |
| bastion_authorized_networks | CIDR blocks from which the bastion host is accessible | list(string) | ["210.94.41.89/32", "203.126.64.67/32"] |
| network_name | Name of the VPC network to be created and used for the GKE cluster and bastion host | string | "kosmos-gkecluster-network" |
| node_pools | Node pools to be added to the cluster (see Nested schema for spec.gke_config.node_pools). By default, a node pool with 1 node in each zone is created for testing convenience | list(object) | See node pool example |
| create_new_workload_identity_pool | Creates a new workload identity pool for Kosmos' assumed role if true | bool | true |
| bastion_subnet_range_ip | IP range for the bastion instance's subnet | string | "10.1.0.0/16" |
| bastion_ssh_port | Port used to SSH into the bastion host; do not use port 22, per security policy | number | 10910 |
| bastion_os_image | GCP OS image used for the bastion host | string | "ubuntu-os-cloud/ubuntu-2004-lts" |
| bastion_machine_type | Bastion instance's GCP VM size, e.g. e2-micro, n1-standard-1 | string | "e2-micro" |
| bastion_network_tag | Bastion instance's attached network tag, used to allow access in firewall rules | string | "bastion-server" |
| gke_cluster_ipv4cidr | IP range for the GKE cluster's internal Kubernetes networking | string | "172.16.0.0/16" |
| gke_master_authorized_networks | CIDR blocks from which the created GKE cluster is accessible. Must include Kosmos platform IPs (see below) | list(object) | [{cidr_block="X.X.X.X/32",display_name="Suwon Network IP"}] |
| gke_subnet_range_ip | IP range for the GKE nodes' subnet | string | "10.2.0.0/16" |
| gke_release_channel | GKE release channel used for Kubernetes minor version selection; check GCP's official docs for the list of available channels | string | "REGULAR" |
| kosmos_tier | Kosmos environment to be used; valid values are dev, stg, and null for the production tier | string | null |
| enable_cloud_nat | Whether to create a Cloud NAT for the GKE subnet; Cloud NAT is needed to connect the cluster to Kosmos | bool | true |
| cloud_nat_ip_address_name | Name of the reserved IP in GCP to be used as Cloud NAT's IP address; an ephemeral IP is used if set to null | string | "kosmos-whitelisted-ip" |
| create_vpc | Whether to create a new VPC with subnets and firewall rules, or use an existing VPC. If false, you must specify gke_subnet_name and bastion_subnet_name for existing subnets | bool | true |
| gke_subnet_name | (Required if create_vpc is false) Name of the existing subnet for the GKE cluster | string | "gke-subnet-1" |
| bastion_subnet_name | (Required if create_vpc is false) Name of the existing subnet for the bastion instance | string | "bastion-subnet-1" |
| add_suffix_on_workload_id_names | (Only used if create_new_workload_identity_pool is true) Adds a random string to the Workload Identity pool and provider names to prevent collisions due to the 30-day deletion period | bool | false |
| bastion_desired_status | Desired state of the bastion VM; defaults to TERMINATED (VM shut down). Possible values are RUNNING, SUSPENDED, and TERMINATED | string | "TERMINATED" |
| bastion_static_ip_name | Name of the reserved static IP to be attached to the bastion host; a new static IP is created if null | string | "reserved-static-ip-1" |
| additional_node_firewall_rules | Map of additional firewall rules applied to the GKE node pools' network tag; the format follows google_compute_firewall, check the docs for details | any | {} |
| enable_network_policy | Enables GKE network policy support for the cluster | bool | false |
| gke_maintenance_window | If specified, GKE will only perform maintenance in the 4-hour window starting from this timestamp in UTC, e.g. "14:55" | string | "14:55" |

Example configuration when using an existing VPC (create_vpc = false):

create_vpc          = false
gke_subnet_name     = "kosmos-gke-subnet"
bastion_subnet_name = "kosmos-bastion-subnet"
network_name        = "kosmos-vpc"
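The additional_node_firewall_rules variable follows the google_compute_firewall argument layout, but the exact keys depend on the module. The sketch below is illustrative only (rule name, ports, and tags are hypothetical); verify against the module documentation before use:

```hcl
# Hypothetical example: allow the bastion host to reach an app port on the nodes.
# Keys mirror google_compute_firewall arguments; not the module's verified schema.
additional_node_firewall_rules = {
  allow-bastion-to-nodes-8080 = {
    description = "Allow bastion access to app port on GKE nodes"
    direction   = "INGRESS"
    source_tags = ["bastion-server"]
    allow = [{
      protocol = "tcp"
      ports    = ["8080"]
    }]
  }
}
```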

Node pool example

[
    {
        name               = "node-pool-sample"
        initial_node_count = 1
        version            = "1.31.5-gke.1169000" # Can be omitted, will default to cluster's version
        config             = {}
        autoscaling = {
            enabled = false
        }
        max_pods_constraint = 110
        management          = {}
    }
]
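For an autoscaled pool, the same schema (per the node_pools type in the variables.tf example below) might look like the following; the machine size and node counts are illustrative:

```hcl
[
    {
        name               = "autoscaled-pool"
        initial_node_count = 1
        config = {
            machine_type = "e2-standard-4" # illustrative size
            disk_size_gb = 100
        }
        autoscaling = {
            enabled        = true
            min_node_count = 1
            max_node_count = 3
        }
        max_pods_constraint = 110
        management = {
            auto_repair  = true
            auto_upgrade = true
        }
    }
]
```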

Kosmos Platform IPs

The GKE cluster must allow API access from Kosmos platform IPs. Include these in your gke_master_authorized_networks variable:

| CIDR Block | Display Name |
| --- | --- |
| 3.208.49.242/32 | kosmos-1 |
| 35.172.72.171/32 | kosmos-2 |
| 34.214.188.139/32 | kosmos-3 |
| 35.165.241.214/32 | kosmos-4 |
| 13.209.31.187/32 | kosmos-5 |
| 15.165.61.114/32 | kosmos-6 |

Example configuration:

gke_master_authorized_networks = [
  { cidr_block = "3.208.49.242/32", display_name = "kosmos-1" },
  { cidr_block = "35.172.72.171/32", display_name = "kosmos-2" },
  { cidr_block = "34.214.188.139/32", display_name = "kosmos-3" },
  { cidr_block = "35.165.241.214/32", display_name = "kosmos-4" },
  { cidr_block = "13.209.31.187/32", display_name = "kosmos-5" },
  { cidr_block = "15.165.61.114/32", display_name = "kosmos-6" },
  { cidr_block = "YOUR.PUBLIC.IP/32", display_name = "my-ip" },
]
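To avoid repeating the platform list when adding your own IP, the value can be built with Terraform's concat() function; the local name below is illustrative:

```hcl
locals {
  # Kosmos platform IPs required for cluster management (from the table above)
  kosmos_platform_networks = [
    { cidr_block = "3.208.49.242/32", display_name = "kosmos-1" },
    { cidr_block = "35.172.72.171/32", display_name = "kosmos-2" },
    { cidr_block = "34.214.188.139/32", display_name = "kosmos-3" },
    { cidr_block = "35.165.241.214/32", display_name = "kosmos-4" },
    { cidr_block = "13.209.31.187/32", display_name = "kosmos-5" },
    { cidr_block = "15.165.61.114/32", display_name = "kosmos-6" },
  ]
}

module "gke_cluster" {
  # ... other variables ...

  # Platform IPs plus your own outbound IP
  gke_master_authorized_networks = concat(local.kosmos_platform_networks, [
    { cidr_block = "YOUR.PUBLIC.IP/32", display_name = "my-ip" },
  ])
}
```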

Quick Start Example

This is a complete working example to create a GKE cluster with Kosmos. You can either create each file manually or download the complete example below.

main.tf

# Main Terraform Configuration for Kosmos GKE Cluster

locals {
  project_number = data.google_project.current.number
}

# Download the module from the Artifacts table above
# Replace X.Y.Z with the actual version number
module "gke_cluster" {
  source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-gke-vX.Y.Z.tar.gz"

  # Required variables
  fleet_name   = var.fleet_name
  kosmos_user  = var.kosmos_user
  cluster_name = var.cluster_name

  # GCP configuration
  gcp_region     = var.gcp_region
  gcp_project_id = var.gcp_project_id

  # Workload Identity configuration
  # NOTE: IDs must be short - module concatenates with fleet name
  # Pool ID max: 32 chars, SA account_id max: 30 chars
  create_new_workload_identity_pool = true
  gcp_workload_identity_pool_id     = var.gcp_workload_identity_pool_id
  gcp_workload_identity_provider_id = var.gcp_workload_identity_provider_id
  workload_sa_name_prefix           = "kosmos"

  # Kubernetes version
  gke_version_major = var.gke_version_major

  # Network configuration
  network_name     = var.network_name
  create_vpc       = var.create_vpc
  enable_cloud_nat = var.enable_cloud_nat

  # Access configuration - Kosmos platform IPs required
  bastion_authorized_networks    = var.bastion_authorized_networks
  gke_master_authorized_networks = var.gke_master_authorized_networks

  # Node pools
  node_pools = var.node_pools

  # Bastion configuration
  bastion_desired_status = var.bastion_desired_status

  # Additional firewall rules
  additional_node_firewall_rules = var.additional_node_firewall_rules
}

variables.tf

# Variables for Kosmos GKE Cluster

#------------------------------------------------------------------------------
# Required Variables
#------------------------------------------------------------------------------

variable "fleet_name" {
  description = "Name of the Kosmos fleet where the cluster will be deployed"
  type        = string
}

variable "kosmos_user" {
  description = "Kosmos user ID as owner of the cluster"
  type        = string
}

variable "kosmos_access_key" {
  description = "Kosmos user access key for authentication"
  type        = string
  sensitive   = true
}

variable "cluster_name" {
  description = "Name for the GKE cluster"
  type        = string
}

variable "gcp_project_id" {
  description = "GCP project ID where the cluster will be created"
  type        = string
}

variable "gcp_region" {
  description = "GCP region for the GKE cluster"
  type        = string
  default     = "asia-southeast1"
}

variable "gcp_workload_identity_pool_id" {
  description = "Workload Identity Pool ID (max 32 chars)"
  type        = string
}

variable "gcp_workload_identity_provider_id" {
  description = "Workload Identity Provider ID (max 32 chars)"
  type        = string
}

variable "gke_version_major" {
  description = "Kubernetes major version (e.g., 1.31)"
  type        = string
  default     = "1.31"
}

#------------------------------------------------------------------------------
# Optional Variables - Kosmos Configuration
#------------------------------------------------------------------------------

variable "kosmos_endpoint" {
  description = "Kosmos API endpoint URL"
  type        = string
  default     = "https://console.kosmos.spcplatform.com"
}

#------------------------------------------------------------------------------
# Optional Variables - Network Configuration
#------------------------------------------------------------------------------

variable "network_name" {
  description = "Name for the VPC network"
  type        = string
  default     = "kosmos-gke-network"
}

variable "create_vpc" {
  description = "Whether to create a new VPC for the GKE cluster"
  type        = bool
  default     = true
}

variable "enable_cloud_nat" {
  description = "Whether to create Cloud NAT for private nodes"
  type        = bool
  default     = true
}

#------------------------------------------------------------------------------
# Optional Variables - Access Configuration
#------------------------------------------------------------------------------

variable "bastion_authorized_networks" {
  description = "List of CIDR blocks that can access the bastion host"
  type        = list(string)
  default     = []
}

variable "gke_master_authorized_networks" {
  description = "List of CIDR blocks that can access the GKE master API"
  type = list(object({
    cidr_block   = string
    display_name = string
  }))
  default = [
    # Kosmos platform IPs - required for cluster management
    { cidr_block = "3.208.49.242/32", display_name = "kosmos-1" },
    { cidr_block = "35.172.72.171/32", display_name = "kosmos-2" },
    { cidr_block = "34.214.188.139/32", display_name = "kosmos-3" },
    { cidr_block = "35.165.241.214/32", display_name = "kosmos-4" },
    { cidr_block = "13.209.31.187/32", display_name = "kosmos-5" },
    { cidr_block = "15.165.61.114/32", display_name = "kosmos-6" },
  ]
}

#------------------------------------------------------------------------------
# Optional Variables - Node Pool Configuration
#------------------------------------------------------------------------------

variable "node_pools" {
  description = "Configuration for GKE node pools"
  type = list(object({
    name               = string
    initial_node_count = optional(number, 1)
    config = optional(object({
      machine_type = optional(string, "e2-medium")
      disk_size_gb = optional(number, 50)
      disk_type    = optional(string, "pd-standard")
      image_type   = optional(string, "COS_CONTAINERD")
    }), {})
    autoscaling = optional(object({
      enabled        = optional(bool, false)
      min_node_count = optional(number, 1)
      max_node_count = optional(number, 3)
    }), {})
    max_pods_constraint = optional(number, 110)
    management = optional(object({
      auto_repair  = optional(bool, true)
      auto_upgrade = optional(bool, true)
    }), {})
  }))
  default = [{
    name               = "default-pool"
    initial_node_count = 1
    config = {
      machine_type = "e2-medium"
      disk_size_gb = 50
    }
    autoscaling = { enabled = false }
    max_pods_constraint = 110
    management = {
      auto_repair  = true
      auto_upgrade = true
    }
  }]
}

#------------------------------------------------------------------------------
# Optional Variables - Bastion Configuration
#------------------------------------------------------------------------------

variable "bastion_desired_status" {
  description = "State of the bastion instance (RUNNING or TERMINATED)"
  type        = string
  default     = "TERMINATED"
}

#------------------------------------------------------------------------------
# Optional Variables - Additional Firewall Rules
#------------------------------------------------------------------------------

variable "additional_node_firewall_rules" {
  description = "Map of additional firewall rules for GKE nodes"
  type        = any
  default     = {}
}

terraform.tfvars.example

# Copy to terraform.tfvars and fill in your values
# Find your public IP: curl -s ifconfig.me

#------------------------------------------------------------------------------
# Required Variables
#------------------------------------------------------------------------------

fleet_name                        = "your-fleet-name"
kosmos_user                       = "your-kosmos-username"
kosmos_access_key                 = "your-kosmos-access-key"  # Or set KOSMOS_ACCESS_KEY env var
cluster_name                      = "my-gke-cluster"
gcp_project_id                    = "your-gcp-project-id"
gcp_region                        = "asia-southeast1"
gcp_workload_identity_pool_id     = "kosmos-pool"      # Max 32 chars
gcp_workload_identity_provider_id = "kosmos-oidc"      # Max 32 chars

#------------------------------------------------------------------------------
# Optional - Access Configuration
#------------------------------------------------------------------------------

# Add your IP for bastion SSH access
bastion_authorized_networks = ["YOUR.PUBLIC.IP/32"]

# Add your IP for kubectl access (Kosmos IPs included by default)
# gke_master_authorized_networks = [
#   { cidr_block = "3.208.49.242/32", display_name = "kosmos-1" },
#   { cidr_block = "35.172.72.171/32", display_name = "kosmos-2" },
#   { cidr_block = "34.214.188.139/32", display_name = "kosmos-3" },
#   { cidr_block = "35.165.241.214/32", display_name = "kosmos-4" },
#   { cidr_block = "13.209.31.187/32", display_name = "kosmos-5" },
#   { cidr_block = "15.165.61.114/32", display_name = "kosmos-6" },
#   { cidr_block = "YOUR.PUBLIC.IP/32", display_name = "my-ip" },
# ]

outputs.tf

output "cluster_name" {
  description = "Name of the GKE cluster"
  value       = var.cluster_name
}

output "configure_kubectl_command" {
  description = "Command to configure kubectl"
  value       = "gcloud container clusters get-credentials ${var.cluster_name} --region ${var.gcp_region} --project ${var.gcp_project_id}"
}

output "kosmos_join_command" {
  description = "Command to join cluster to Kosmos"
  value       = "kosmos join cluster ${var.cluster_name} --fleet ${var.fleet_name}"
}

Usage examples

Basic usage

module "gke_cluster" {
  source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-gke-vX.Y.Z.tar.gz"

  fleet_name                        = "production-fleet"
  kosmos_user                       = "admin-user"
  gcp_region                        = "asia-southeast1"
  cluster_name                      = "prod-gke-cluster"
  gcp_project_id                    = "your-gcp-project-id"
  kosmos_access_key                 = "your-access-key"
  bastion_authorized_networks       = ["210.94.41.89/32"]
  network_name                      = "kosmos-gkecluster-network"
  gcp_workload_identity_pool_id     = "kosmos-workload-identity-pool"
  gcp_workload_identity_provider_id = "kosmos-workload-identity-provider"
  gke_version_major                 = "1.31"
  # ... other required variables
}

Advanced usage with custom VPC

module "gke_cluster" {
  source = "https://srin-s3-terraform-modules.s3.ap-southeast-1.amazonaws.com/terraform-kosmos-gke-vX.Y.Z.tar.gz"

  create_vpc          = false
  gke_subnet_name     = "existing-gke-subnet"
  bastion_subnet_name = "existing-bastion-subnet"
  network_name        = "existing-vpc"

  fleet_name                        = "staging-fleet"
  kosmos_user                       = "dev-team"
  gcp_region                        = "asia-southeast1"
  cluster_name                      = "staging-gke-cluster"
  gcp_project_id                    = "your-gcp-project-id"
  kosmos_access_key                 = "your-access-key"
  bastion_authorized_networks       = ["210.94.41.89/32"]
  gcp_workload_identity_pool_id     = "kosmos-workload-identity-pool"
  gcp_workload_identity_provider_id = "kosmos-workload-identity-provider"
  gke_version_major                 = "1.31"
  # ... other variables
}

Created resources

kosmos-gke

GKE

  • Bastion Instance
  • KMS
    • KMS Keyring
    • KMS CryptoKey
  • Kosmos Cluster
  • GKE Cluster
  • Service Account
  • VPC
    • Subnet
      • Bastion
      • GKE Cluster

Security checklist

Checklist items that conform to the Samsung security checklist

  • IAM

    • Service Account Management

      • Mandatory Description for purpose of use

        resource "google_service_account" "..." {
          ...
          description  = "Service account to manage GKE cluster"
        }
        
  • Computing

    • Bastion Host Management

      • Mandatory Label (Ensure that the required labels are set for bastion host)

        resource "google_compute_instance" "..." {
          ...
            labels = {
              sec_assets_gateway = "general"
            }
          }
        
      • Prohibition of use other than SSH/RDP services (Disable default SSH port 22 and use another port)

        resource "google_compute_instance" "..." {
          ...
          metadata = {
            startup-script = "perl -pi -e 's/^#?Port 22$/Port ${local.ssh_port}/' /etc/ssh/sshd_config; systemctl restart sshd || systemctl restart ssh"
          }
        }
        
      • Restriction of SSH/RDP Network Path (Ensure that all instances in the GCP are configured for SSH connection only through the Bastion Host)

        resource "google_compute_firewall" "..." {
          direction          = "EGRESS"
          target_tags        = ${all_node_tag}
          destination_ranges = ${network_range}
          allow {
            protocol = "tcp"
            ports    = ${ssh_port}
          }
        }
        
        resource "google_compute_firewall" "..." {
          direction   = "INGRESS"
          source_tags = ${all_node_tag}
          allow {
            protocol = "tcp"
            ports    = ${ssh_port}
          }
        }
        
        
      • Ingress Restriction (Ensure that the Bastion Host are restricted to connect only from the corporate network)

        resource "google_compute_firewall" "..." {
          direction     = "INGRESS"
          source_ranges = ${corp_network}
          allow {
            protocol = "tcp"
            ports    = ${ssh_port}
          }
        }
        
  • Serverless

    • Secret Encryption

      • Check that Application Layer secrets encryption is enabled

        resource "kosmos_gkeclusters" "..." {
          ...
          key_name = google_kms_crypto_key.${resource_name}.id
        }
        
    • Kubernetes Management

      • Enable Private Cluster

        resource "kosmos_gkeclusters" "..." {
          private_cluster_config = {
            enable_private_nodes    = true
          }
        }
        
      • Cluster API endpoint (Ensure that whether access control for Cluster Endpoint is set and managed)

        resource "kosmos_gkeclusters" "..." {
          master_authorized_networks = {
            enabled     = true
            cidr_blocks = ${authorized_network}
          }
        }
        
      • Dedicated GKE subnet

        module "vpc" {
          ...
           subnets = [
              {
                subnet_name           = ""
                subnet_ip             = ""
                subnet_region         = ""
                subnet_flow_logs      = "true"
                subnet_private_access = "true"
              }
            ]
        }
        
  • KMS

    • Separation of Key

      • Mandatory label (Key: “Sec_assets_kms”, Value: Descriptions for key usages)

        resource "google_kms_crypto_key" "..." {
          ...
          labels = {
            "sec_assets_kms" = "..."
          }
        }
        
    • Key Rotation

      • Check that Key Rotation Period is set to 90 days

        resource "google_kms_crypto_key" "..." {
          ...
          rotation_period = "${90 * 24 * 3600}s"
        }
        

Checklist items that do not conform to the Samsung security checklist

  • Network
    • VPC Management
      • NAT Association Management
        • Ensure that NAT is not connected to the private subnet
          • This violation is accepted because a NAT gateway is needed for the GKE cluster to connect to Kosmos' control plane and to pull container images
    • Firewall Management
      • Ingress/Egress Rule Management
        • “0.0.0.0/0” or “::/0” is registered in the IP (SourceRanges or DestinationRanges) value.
        • IP (SourceRanges or DestinationRanges) values cannot exceed a 24-bit mask.
      • Descriptions for Firewall rule
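When writing your own rules (for example via additional_node_firewall_rules), a rule that satisfies the firewall checks above would carry a description and keep source ranges narrower than a 24-bit mask. A minimal sketch with illustrative name, network, tag, and port values:

```hcl
# Illustrative rule that passes the firewall checks:
# - description is set
# - no 0.0.0.0/0 or ::/0, and the source mask is narrower than /24
resource "google_compute_firewall" "corp_ssh_example" {
  name        = "allow-corp-ssh"
  network     = "kosmos-gkecluster-network"
  description = "Allow SSH from the corporate network to the bastion host"

  direction     = "INGRESS"
  source_ranges = ["210.94.41.89/32"]
  target_tags   = ["bastion-server"]

  allow {
    protocol = "tcp"
    ports    = ["10910"]
  }
}
```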

Required permissions

Note: The permissions listed cover only the initial terraform apply to create everything from scratch and terraform destroy to clean up all resources

Compute engine permissions:

compute.disks.create
compute.disks.delete
compute.firewalls.create
compute.firewalls.delete
compute.images.get
compute.images.use
compute.instances.create
compute.instances.delete
compute.networks.create
compute.networks.delete
compute.networks.get
compute.networks.use
compute.routes.create
compute.routes.delete
compute.routers.create
compute.routers.delete
compute.routers.get
compute.routers.update
compute.subnetworks.create
compute.subnetworks.delete
compute.subnetworks.get
compute.subnetworks.use

KMS permissions:

cloudkms.cryptoKeys.create
cloudkms.cryptoKeys.delete
cloudkms.cryptoKeys.get
cloudkms.cryptoKeys.setIamPolicy
cloudkms.cryptoKeys.use
cloudkms.keyRings.create
cloudkms.keyRings.delete
cloudkms.keyRings.get
cloudkms.keyRings.use

GKE (Kubernetes engine) permissions:

container.clusters.create
container.clusters.delete
container.clusters.get
container.nodePools.create
container.nodePools.delete
container.operations.get
container.versions.get

IAM permissions:

iam.serviceAccounts.create
iam.serviceAccounts.delete
iam.serviceAccounts.get
iam.serviceAccounts.setIamPolicy
iam.workloadIdentityPoolProviders.create
iam.workloadIdentityPoolProviders.delete
iam.workloadIdentityPoolProviders.get
iam.workloadIdentityPools.create
iam.workloadIdentityPools.delete
iam.workloadIdentityPools.get

Resource manager permissions:

resourcemanager.projects.get
resourcemanager.projects.getIamPolicy
resourcemanager.projects.setIamPolicy

Schema

Required

Optional

  • name (String) Name of the GKECluster
  • namespace (String) object name and auth scope, such as for teams and projects

Nested schema for spec

Required:

  • authorization (Attributes) Optional. Configuration related to the cluster RBAC settings. (see below for nested schema )
  • gke_config (Attributes) Required. Configuration for the GKE operator. (see below for nested schema)
  • name (String) Cluster name

Optional:

  • binary_authorization (Attributes) Optional. Binary Authorization configuration for this cluster. (see below for nested schema )
  • description (String) Optional. A human readable description of this cluster. Cannot be longer than 255 UTF-8 encoded bytes.
  • display_name (String) If specified this name is displayed in the UI instead of the metadata name
  • logging_config (Attributes) Optional. Logging configuration for this cluster. (see below for nested schema )
  • monitoring_config (Attributes) Optional. Monitoring configuration for this cluster. (see below for nested schema )
  • owner (String)

Nested schema for spec.authorization

Optional:

  • admin_teams (List of String) Optional. Groups of users that can perform operations as a cluster admin. A managed ClusterRoleBinding will be created to grant the cluster-admin ClusterRole to the groups. Up to ten admin groups can be provided. For more info on RBAC, see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles
  • admin_users (List of String) Optional. Users that can perform operations as a cluster admin. A managed ClusterRoleBinding will be created to grant the cluster-admin ClusterRole to the users. Up to ten admin users can be provided. For more info on RBAC, see https://kubernetes.io/docs/reference/access-authn-authz/rbac/#user-facing-roles

Nested schema for spec.gke_config

Required:

  • cluster_name (String) ClusterName is the name of the GKE cluster.
  • project_id (String) ProjectID is the ID of the GCP project where the cluster is created.

Optional:

  • autopilot_config (Attributes) GKE Autopilot is a mode of operation in GKE in which Google manages your cluster configuration, including your nodes, scaling, security, and other preconfigured settings. (see below for nested schema )
  • cluster_addons (Attributes) ClusterAddons contains configuration for cluster add-ons. (see below for nested schema )
  • cluster_ipv4cidr (String) ClusterIpv4CidrBlock is the IPv4 CIDR block for the cluster.
  • delete_on_detachment (Boolean)
  • description (String) Description is a human-readable description of the cluster.
  • enable_kubernetes_alpha (Boolean) EnableKubernetesAlpha enables alpha features in Kubernetes.
  • google_credential_secret (String) GoogleCredentialSecret is the name of the secret containing Google credentials.
  • imported (Boolean) Imported indicates whether the cluster is imported.
  • ip_allocation_policy (Attributes) IPAllocationPolicy is the IP allocation policy for the cluster. (see below for nested schema )
  • key_name (String) KeyName: Name of CloudKMS key to use for the encryption of secrets in etcd. Ex. projects/my-project/locations/global/keyRings/my-ring/cryptoKeys/my-key
  • kubernetes_version (String) KubernetesVersion is the version of Kubernetes to use.
  • labels (Map of String) Labels are user-defined key-value pairs for the cluster.
  • locations (List of String) Locations are the regions/zones where the cluster is available.
  • logging_service (String) LoggingService is the logging service to use.
  • maintenance_window (String) MaintenanceWindow is the maintenance window configuration.
  • master_authorized_networks (Attributes) MasterAuthorizedNetworksConfig is the configuration for master authorized networks. (see below for nested schema )
  • monitoring_service (String) MonitoringService is the monitoring service to use.
  • network (String) Network is the name of the network to use, if specified.
  • network_policy_enabled (Boolean) NetworkPolicyEnabled enables network policy enforcement, if true.
  • node_pools (Attributes List) NodePools is a list of node pool configurations. (see below for nested schema )
  • private_cluster_config (Attributes) PrivateClusterConfig contains private cluster configuration. (see below for nested schema )
  • project_number (String) The project number, as returned by gcloud projects describe $(gcloud config get-value core/project) --format='value(projectNumber)'
  • region (String) Region is the GCP region where the cluster is located. Required if Zone is not set.
  • service_account (String) Service account email (SERVICE_ACCOUNT_NAME@PROJECT_ID.iam.gserviceaccount.com)
  • subnetwork (String) Subnetwork is the name of the subnetwork to use, if specified.
  • workload_identity_pool_id (String) Workload-identity pool-id.
  • workload_identity_provider_id (String) Workload-identity provider-id.
  • zone (String) Zone is the GCP zone where the cluster is located. Required if Region is not set.
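
A minimal gke_config showing the two required attributes plus some common optional settings might look like this (the resource type `kosmos_cluster` is a placeholder; all values are illustrative):

```hcl
# Sketch only: "kosmos_cluster" is a placeholder resource type.
resource "kosmos_cluster" "example" {
  spec = {
    gke_config = {
      # Required
      cluster_name = "demo-cluster"
      project_id   = "my-gcp-project"

      # Common optional settings
      region                 = "asia-northeast3" # set region OR zone, not both
      kubernetes_version     = "1.29"
      network                = "my-vpc"
      subnetwork             = "my-subnet"
      network_policy_enabled = true

      labels = {
        env = "dev"
      }
    }
  }
}
```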

Nested schema for spec.gke_config.autopilot_config

Optional:

  • enabled (Boolean)

Nested schema for spec.gke_config.cluster_addons

Optional:

  • horizontal_pod_autoscaling (Boolean) HorizontalPodAutoscaling indicates whether horizontal pod autoscaling is enabled.
  • http_load_balancing (Boolean) HTTPLoadBalancing indicates whether HTTP load balancing is enabled.
  • network_policy_config (Boolean) NetworkPolicyConfig indicates whether network policy configuration is enabled.

Nested schema for spec.gke_config.ip_allocation_policy

Optional:

  • cluster_ipv4cidr_block (String) ClusterIpv4CidrBlock is the IPv4 CIDR block for the cluster.
  • cluster_secondary_range_name (String) ClusterSecondaryRangeName is the name of the secondary range for cluster pods.
  • create_subnetwork (Boolean) CreateSubnetwork indicates whether to create a subnetwork for the cluster.
  • node_ipv4cidr_block (String) NodeIpv4CidrBlock is the IPv4 CIDR block for nodes.
  • services_ipv4cidr_block (String) ServicesIpv4CidrBlock is the IPv4 CIDR block for services.
  • services_secondary_range_name (String) ServicesSecondaryRangeName is the name of the secondary range for services.
  • subnetwork_name (String) SubnetworkName is the name of the subnetwork to use, if specified.
  • use_ip_aliases (Boolean) UseIPAliases indicates whether to use IP aliases.
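
For illustration, a VPC-native (IP alias) configuration that reuses pre-created secondary ranges could be sketched as follows; the range names are assumptions and must match ranges defined on your subnetwork:

```hcl
# Sketch only: range names are illustrative and must exist on the subnetwork.
ip_allocation_policy = {
  use_ip_aliases                = true
  cluster_secondary_range_name  = "pods-range"
  services_secondary_range_name = "services-range"
}
```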

Nested schema for spec.gke_config.master_authorized_networks

Required:

  • cidr_blocks (Attributes List) CidrBlocks is an array of CIDR blocks that are authorized for access to the master. (see below for nested schema )

Optional:

  • enabled (Boolean) Enabled indicates whether master authorized networks are enabled.

Nested schema for spec.gke_config.master_authorized_networks.cidr_blocks

Required:

  • cidr_block (String) CidrBlock is the CIDR block for the network.

Optional:

  • display_name (String) DisplayName is a display name for the CIDR block.
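
Putting the two levels together, a master authorized networks block restricting control-plane access to a single range might be sketched as (the CIDR and display name are illustrative):

```hcl
# Sketch only: CIDR value is illustrative.
master_authorized_networks = {
  enabled = true
  cidr_blocks = [
    {
      cidr_block   = "203.0.113.0/24"
      display_name = "office-network"
    }
  ]
}
```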

Nested schema for spec.gke_config.node_pools

Required:

  • initial_node_count (Number) InitialNodeCount is the initial number of nodes in the node pool.
  • name (String) Name is the name of the node pool.
  • version (String) Version is the Kubernetes version for the node pool.

Optional:

  • autoscaling (Attributes) Autoscaling specifies the autoscaling configuration for the node pool. (see below for nested schema )
  • config (Attributes) Config specifies the configuration for nodes in the node pool. (see below for nested schema )
  • management (Attributes) Management specifies the management configuration for the node pool. (see below for nested schema )
  • max_pods_constraint (Number) MaxPodsConstraint is the maximum number of pods that can run on a node in the node pool.

Nested schema for spec.gke_config.node_pools.autoscaling

Optional:

  • enabled (Boolean) Enabled indicates whether autoscaling is enabled for the node pool.
  • max_node_count (Number) MaxNodeCount is the maximum number of nodes in the node pool when autoscaling is enabled.
  • min_node_count (Number) MinNodeCount is the minimum number of nodes in the node pool when autoscaling is enabled.

Nested schema for spec.gke_config.node_pools.config

Optional:

  • disk_size_gb (Number) DiskSizeGb is the size of the node’s disk in gigabytes.
  • disk_type (String) DiskType is the type of disk to use for the node.
  • image_type (String) ImageType is the type of image to use for the node.
  • labels (Map of String) Labels are user-defined key-value pairs for the node.
  • local_ssd_count (Number) LocalSsdCount is the number of local SSDs to attach to the node.
  • machine_type (String) MachineType is the type of machine for the node.
  • oauth_scopes (List of String) OauthScopes are the OAuth scopes for the node.
  • preemptible (Boolean) Preemptible indicates whether the node is preemptible.
  • tags (List of String) Tags are the tags associated with the node.
  • taints (Attributes List) Taints are the taints applied to the node. (see below for nested schema )

Nested schema for spec.gke_config.node_pools.config.taints

Required:

  • effect (String) Effect is the effect of the taint: NoSchedule, PreferNoSchedule, or NoExecute.
  • key (String) Key is the key of the taint.
  • value (String) Value is the value of the taint.

Nested schema for spec.gke_config.node_pools.management

Optional:

  • auto_repair (Boolean) AutoRepair indicates whether auto repair is enabled for the node pool.
  • auto_upgrade (Boolean) AutoUpgrade indicates whether auto upgrade is enabled for the node pool.
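
The node pool schemas above (autoscaling, config, taints, and management) compose as in this sketch; names, machine type, and sizes are illustrative:

```hcl
# Sketch only: all values are illustrative.
node_pools = [
  {
    # Required
    name               = "default-pool"
    initial_node_count = 3
    version            = "1.29"

    autoscaling = {
      enabled        = true
      min_node_count = 3
      max_node_count = 6
    }

    config = {
      machine_type = "e2-standard-4"
      disk_size_gb = 100
      disk_type    = "pd-balanced"

      # Reserve these nodes for tolerating workloads only.
      taints = [
        {
          key    = "dedicated"
          value  = "batch"
          effect = "NoSchedule"
        }
      ]
    }

    management = {
      auto_repair  = true
      auto_upgrade = true
    }
  }
]
```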

Nested schema for spec.gke_config.private_cluster_config

Required:

  • master_ipv4cidr_block (String) MasterIpv4CidrBlock is the IPv4 CIDR block for the master.

Optional:

  • enable_private_endpoint (Boolean) EnablePrivateEndpoint enables the private endpoint for the cluster.
  • enable_private_nodes (Boolean) EnablePrivateNodes enables private nodes for the cluster.
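
A typical private-nodes setup keeps the public control-plane endpoint (gated by master authorized networks) while giving nodes internal IPs only. As a sketch, assuming an unused /28 range for the control plane:

```hcl
# Sketch only: the CIDR is illustrative; GKE requires a /28 for the control plane.
private_cluster_config = {
  master_ipv4cidr_block   = "172.16.0.0/28"
  enable_private_nodes    = true
  enable_private_endpoint = false # keep the public endpoint; restrict via authorized networks
}
```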

Nested schema for spec.binary_authorization

Optional:

  • evaluation_mode (String) Define binary authorization properties here

Nested schema for spec.logging_config

Optional:

  • component_config (Attributes) Parameters that describe the Logging configuration in a cluster. (see below for nested schema )

Nested schema for spec.logging_config.component_config

Optional:

  • enable_components (List of String)

Nested schema for spec.monitoring_config

Optional:

  • managed_prometheus_config (Attributes) Enable SPC Kosmos Managed Service for Prometheus in the cluster. (see below for nested schema )
  • managed_thanos_config (Attributes) Enable SPC Kosmos Managed Service for Thanos in the cluster. (see below for nested schema )

Nested schema for spec.monitoring_config.managed_prometheus_config

Optional:

  • enabled (Boolean)

Nested schema for spec.monitoring_config.managed_thanos_config

Optional:

  • enabled (Boolean)
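
Taken together, the logging and monitoring schemas above might be configured as in this sketch; the component names follow GKE's logging component naming and should be checked against your provider version:

```hcl
# Sketch only: component names assume GKE's standard logging components.
logging_config = {
  component_config = {
    enable_components = ["SYSTEM_COMPONENTS", "WORKLOADS"]
  }
}

monitoring_config = {
  managed_prometheus_config = {
    enabled = true
  }
  managed_thanos_config = {
    enabled = true
  }
}
```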

Download Example

Download a complete working example with all required Terraform files:

Download GKE Terraform Example

The example includes:

  • main.tf - Module invocation
  • provider.tf - Provider configuration
  • variables.tf - Variable definitions with Kosmos Platform IPs as defaults
  • versions.tf - Terraform and provider version constraints
  • outputs.tf - Useful outputs including kubectl and kosmos CLI commands
  • terraform.tfvars.example - Example variable values
  • README.md - Quick start instructions
  • .gitignore - Git ignore patterns
