Vault installation

Overview

HashiCorp Vault provides secure storage of, and tight control over access to, tokens, passwords, certificates, encryption keys, and other sensitive data, and makes them accessible via a UI, CLI, or HTTP API.

Kosmos users can install Vault using the Kosmos-provided Vault AppTemplate, which relies on the publicly available vault-helm Helm chart. When installing Vault in a target cluster, users can utilize this App. The AppTemplate allows you to provide configuration in three ways:

  1. Provide no configuration. This uses the default values and installs Vault in dev mode.

  2. Configure only the parameters of interest. This overrides the default values for those parameters.

  3. Provide the full configuration through a single parameter that accepts raw YAML (which must conform to the schema of the vault-helm chart's values.yaml).
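As a rough sketch, modes 2 and 3 correspond to parameter files like the following (parameter names come from the Parameters section below; the values are illustrative):

```shell
# Mode 1: pass no parameter file at all -> chart defaults (dev mode).

# Mode 2: override only the parameters of interest.
cat > vault-params-partial.yaml <<'EOF'
global_enabled: true
server_data_storage_size: 10Gi
EOF

# Mode 3: hand the entire values.yaml to the chart as raw YAML.
cat > vault-params-raw.yaml <<'EOF'
vault_helm_values_raw: |-
  global:
    enabled: true
    namespace: vault
EOF
```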

Prerequisites

Before beginning the installation process, ensure the following requirements are met:

  • Cluster Access

    • Fleet Cluster or
    • DevSpace vCluster access
  • Namespace

    • When working with Fleets, you must create the target namespace beforehand.
    • On a vCluster, the namespace is created automatically.

Installation process using CLI

Step 1: Login to Kosmos

Log in to the Kosmos console using the CLI:

kosmos login console.kosmos.spcplatform.com --access-key <YOUR_ACCESS_KEY>

Example output:

Successfully logged into Kosmos instance https://console.kosmos.spcplatform.com

Verify the logged-in user:

kosmos get currentuser

Step 2: Verify existing Kosmos app

List available application templates:

kosmos list apps

Ensure the vault app template exists.

Verify using:

kosmos get app --name vault

Example output:

NAME            DISPLAY NAME       DESCRIPTION
vault             Vault            Application template for installing Vault Helm chart.

Step 3: Check customizable parameters

Retrieve configurable parameters for the application:

kosmos get app --name vault -o json | jq '[.spec.parameters[] | {variable: .variable, type: .type, description: .description}]'
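The filter produces a JSON array of parameter objects. An illustrative fragment (the entries shown are taken from the Parameters section below; the exact field set in the output may differ):

```json
[
  {
    "variable": "global_enabled",
    "type": "boolean",
    "description": "Enable deployment of Vault components."
  },
  {
    "variable": "namespace",
    "type": "string",
    "description": "Namespace for vault resources"
  }
]
```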

For the full list of parameters and configurable items, see the Parameters section below.

Step 4: Configure application parameters

If no configuration is provided, Vault installs with the default parameters. At the other extreme, for granular control, you can use vault_helm_values_raw to provide YAML that is used verbatim as values.yaml.


Example Helm values override (note that image, updateStrategyType, and resources are server-level settings in the vault-helm chart, not global ones):

global:
  enabled: true
  namespace: "vault"
  imagePullSecrets:
    - name: image-pull-secret
  tlsDisable: true
  psp:
    enable: false
    annotations: |
      seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default,runtime/default
      apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
      seccomp.security.alpha.kubernetes.io/defaultProfileName:  runtime/default
      apparmor.security.beta.kubernetes.io/defaultProfileName:  runtime/default
  serverTelemetry:
    prometheusOperator: false
server:
  image:
    repository: "hashicorp/vault"
    tag: "1.21.2"
    pullPolicy: IfNotPresent
  updateStrategyType: "OnDelete"
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m

Create parameter file

Create a parameter file that lists one parameter_variable: <value> pair per line.

vault-params.yaml

Example 1: The following example sets the vault_helm_values_raw parameter; its multiline value is used as the Helm chart values override.

vault_helm_values_raw: |-
  global:
    enabled: true
    namespace: vault
  server:
    standalone:
      enabled: true
    dataStorage:
      enabled: true
      size: 10Gi
      mountPath: "/vault/data"

Example 2: The following example overrides selected parameters; anything left unspecified falls back to the default values.

server_enabled: true
server_resources: |-
  resources:
    requests:
      memory: 256Mi
      cpu: 250m
    limits:
      memory: 256Mi
      cpu: 250m
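Helm merges partial overrides onto the chart's default values key by key. The sketch below mimics that merge behaviour (illustrative only; this is not Helm's actual implementation, and the default values shown are made up for the demonstration):

```shell
python3 - <<'EOF'
# Deep merge: override keys win; unspecified keys keep their defaults.
defaults = {"server": {"enabled": True, "logLevel": "info",
                       "resources": {}}}
overrides = {"server": {"resources": {"requests": {"cpu": "250m",
                                                   "memory": "256Mi"}}}}

def merge(base, over):
    out = dict(base)
    for key, val in over.items():
        if isinstance(val, dict) and isinstance(base.get(key), dict):
            out[key] = merge(base[key], val)
        else:
            out[key] = val
    return out

merged = merge(defaults, overrides)
print(merged["server"]["logLevel"])                       # default kept
print(merged["server"]["resources"]["requests"]["cpu"])   # override applied
EOF
```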

Step 5: Install the application

Deploy the application using the Kosmos CLI:

kosmos install app \
  --name vault \
  --parameter-file vault-params.yaml \
  --release-name vault \
  --target-cluster mks-test-vault \
  --fleet qe-fleet \
  --target-namespace vault

Example output:

Successfully installed App 'vault'

Installation process using Kosmos Management Console

Step 1: Access the Kosmos console

Navigate to the Kosmos Management Console.

Kosmos console

Log in with your credentials.

Kosmos login

Step 2: Navigate to application

Go to your selected cluster and click the App/Helm tab.

  1. Click Install App
  2. Find and select Vault from the template list

Vault template selection

Step 3: Configure parameters

  1. Default Install: If you do not specify any parameters, the AppTemplate installs the Vault Helm chart with default values.
  2. Partial Overrides: To partially override Vault values, use the specific parameter input fields provided.
  3. Full Override: To supply the entire values.yaml content yourself, use the vault_helm_values_raw parameter.

An example of configuring the vault_helm_values_raw parameter is shown below:

Vault parameter

Once the installation completes, the Vault app appears in the “Apps” tab.

Vault app installed

Parameter usage and configuration

The Vault AppTemplate exposes a set of parameters that control how Vault is installed through the Kosmos application; see the Parameters section below.

Teardown and Cleanup

When you no longer need Vault in the target cluster, navigate to Apps, select the installed “Vault” app, and click “Uninstall”.

Vault delete app
You should see a message like:
Vault deleted message

Successfully deleted Helm Release   {"component": "task-runner", "namespace": "vault", "name": "vault"}

Parameters

Each parameter is listed as name (type, default value), followed by its description. Parameters without a listed default fall back to the vault-helm chart defaults.

vault_helm_values_raw (multiline)
  A YAML-formatted input that is used as the values.yaml for the vault-helm chart when installing this App.
  If provided, only this value is used as the YAML input and the rest of the field inputs are ignored.
  Must be valid YAML and match the vault-helm chart's values.yaml schema.
  Example:
    global:
      enabled: true
      namespace: vault
    server:
      standalone:
        enabled: true
      dataStorage:
        enabled: true
        size: 10Gi
        mountPath: "/vault/data"

global_enabled (boolean, default: true)
  Enable deployment of Vault components.

namespace (string, default: vault)
  Namespace for Vault resources.

global_image_pull_secrets (string)
  Image pull secret to use for registry authentication. Alternatively, the value may be specified as an array of strings.
  Example: image-pull-secret1,image-pull-secret2

disable_tls (boolean, default: true)
  Disable TLS for end-to-end encrypted transport.

global_external_vault_addr (string)
  External Vault server address for the injector and CSI provider to use. Setting this disables deployment of a Vault server.
  Example: https://myvaultserver:8200

global_openshift (boolean, default: false)
  Deploy to OpenShift.

global_psp_enabled (boolean, default: false)
  Create a PodSecurityPolicy for the pods.

global_psp_annotations (multiline)
  Annotations for the PodSecurityPolicy. Input should be a valid JSON or YAML map.
  Example:
    annotations:
      vaultproject.io/psp: 'privileged'

serverTelemetry_prometheusOperator (boolean, default: false)
  Enable integration with the Prometheus Operator. See the top-level serverTelemetry section of the chart values before enabling this feature.

injector_enabled (string, default: -)
  Enable deployment of the Vault Agent Injector component.

injector_replicas (number, default: 1)
  Number of replicas for the Vault Agent Injector deployment.

injector_port (number, default: 8080)
  Port for the Vault Agent Injector to listen on.

injector_leader_elector_enabled (boolean, default: false)
  If multiple replicas are specified, by default a leader is determined so that only one injector attempts to create TLS certificates.

injector_metrics_enabled (boolean, default: false)
  If true, enables a node exporter metrics endpoint at /metrics.

injector_image_repository (string, default: hashicorp/vault-k8s)
  Repository of the vault-k8s image used for the injector.

injector_image_tag (string, default: 1.7.2)
  Tag of the vault-k8s image to use for the injector.

injector_agent_image_repository (string, default: hashicorp/vault)
  Repository of the Vault image to use for the Vault Agent containers. This should be set to the official Vault image. Vault 1.3.1+ is required.

injector_agent_image_tag (string, default: 1.21.2)
  Tag of the Vault image to use for the Vault Agent containers.

injector_agent_defaults_cpu_limit (string, default: 500m)
  Default CPU limit for the injected Vault Agent containers.

injector_agent_defaults_cpu_request (string, default: 250m)
  Default CPU request for the injected Vault Agent containers.

injector_agent_defaults_mem_limit (string, default: 128Mi)
  Default memory limit for the injected Vault Agent containers.

injector_agent_defaults_mem_request (string, default: 64Mi)
  Default memory request for the injected Vault Agent containers.

injector_agent_defaults_ephemeral_limit (string, default: 128Mi)
  Default ephemeral storage limit for the injected Vault Agent containers.

injector_agent_defaults_ephemeral_request (string, default: 64Mi)
  Default ephemeral storage request for the injected Vault Agent containers.

injector_agent_defaults_template (string, default: map)
  Default template type for secrets when no custom template is specified. Possible values include 'json' and 'map'.

injector_agent_defaults_template_config_exitonretryfailure (boolean, default: false)
  Default value for the exit_on_retry_failure field in the template configuration for the injected Vault Agent containers. Controls whether the Vault Agent should exit if it encounters an error when trying to render a template, or keep retrying without exiting.

injector_agent_defaults_template_config_staticsecretrenderinterval (string)
  Agent default template config staticSecretRenderInterval. Controls the interval at which the Vault Agent renders static secrets.

injector_auth_path (string, default: auth/kubernetes)
  The path used to authenticate to Vault for the Vault Agent Injector. Should be set to the path of the Kubernetes auth method configured in Vault.

injector_log_level (string, default: info)
  Logging verbosity for the Vault Agent Injector. Supported log levels: trace, debug, info, warn, error.

injector_log_format (string, default: standard)
  Logging format for the Vault Agent Injector. Supported log formats: json, standard.

injector_revoke_on_shutdown (boolean, default: false)
  Configures all Vault Agent sidecars to revoke their token when shutting down.

server_enabled (boolean, default: true)
  If true, or '-' with global.enabled true, the Vault server will be installed.

server_image_repository (string, default: hashicorp/vault)
  Repository of the Vault image to use for the server.

server_image_tag (string, default: 1.21.2)
  Tag of the Vault image to use for the server.

server_log_level (string)
  Logging verbosity for the Vault server. Supported log levels: trace, debug, info, warn, error.

server_log_format (string)
  Logging format for the Vault server. Supported log formats: json, standard.

server_resources (multiline)
  Resource requests, limits, etc. for the server cluster placement. This should map directly to the value of the resources field for a PodSpec. By default no direct resource request is made.
  Example:
    resources:
      requests:
        memory: 256Mi
        cpu: 250m
      limits:
        memory: 256Mi
        cpu: 250m

server_ingress_enabled (boolean, default: false)
  Enable Vault ingress. Allows ingress services to be created so that Vault pods can be accessed from outside Kubernetes. To expose the service on OpenShift, use the route section of the chart values instead.

server_ingress_labels (multiline)
  Labels for the Vault server ingress. Ignored when deploying on OpenShift.
  Example:
    labels:
      ingress-label1: label-val1
      ingress-label2: label-val2

server_ingress_annotations (multiline)
  Annotations for the Vault server ingress. Ignored when deploying on OpenShift.
  Example:
    annotations:
      kubernetes.io/ingress.class: nginx
      kubernetes.io/tls-acme: 'true'

server_ingress_ingress_class_name (string)
  Ingress class name for the Vault server ingress. An alternative to specifying the ingress class through annotations.

server_ingress_path_type (string, default: Prefix)
  Ingress path type for the Vault server ingress. Supported values: ImplementationSpecific, Exact, Prefix.

server_ingress_active_service (boolean, default: true)
  When HA mode is enabled and Kubernetes service registration is being used, configure the ingress to point to the Vault active service.

server_ingress_hosts (multiline)
  Hosts to use for the Vault server ingress rules when using HA. Set to the hostnames used to access the active Vault instance through the ingress, typically the main Vault service when using the chart in HA mode.
  Example:
    hosts:
      - host: chart-example.local
        paths: []

server_ingress_extra_paths (multiline)
  Extra paths to use for the Vault server ingress rules when using HA.
  Example:
    extraPaths:
      - path: /
        backend:
          service:
            name: ssl-redirect
            port:
              number: use-annotation

server_ingress_tls (multiline)
  TLS settings for the Vault server ingress rules when using HA.
  Example:
    tls:
      - secretName: vault-tls
        hosts:
          - chart-example.local

server_ingress_host_aliases (multiline)
  hostAliases is a list of aliases to be added to /etc/hosts, specified as a YAML list.
  Example:
    hostAliases:
      - ip: '127.0.0.1'
        hostnames:
          - 'example.local'

server_auth_delegator_enabled (boolean, default: true)
  Enables a cluster role binding to be attached to the service account. This cluster role binding can be used to set up the Kubernetes auth method. See https://developer.hashicorp.com/vault/docs/auth/kubernetes

server_extra_init_containers (multiline)
  extraInitContainers is a list of init containers, specified as a YAML list. Useful if you need to run a script to provision TLS certificates or write out configuration files in a dynamic way.
  Example:
    extraInitContainers:
      - name: my-init-container
        image: busybox
        command: ['sh', '-c', 'echo Hello from the init container! && sleep 5']
        args:
          - cd /tmp &&
            wget https://github.com/puppetlabs/vault-plugin-secrets-oauthapp/releases/download/v1.2.0/vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64.tar.xz -O oauthapp.xz &&
            tar -xf oauthapp.xz &&
            mv vault-plugin-secrets-oauthapp-v1.2.0-linux-amd64 /usr/local/libexec/vault/oauthapp &&
            chmod +x /usr/local/libexec/vault/oauthapp
        volumeMounts:
          - name: plugins
            mountPath: /usr/local/libexec/vault

server_extra_containers (multiline)
  extraContainers is a list of additional containers to add to the Vault server StatefulSet, specified as a YAML list.
  Example:
    extraContainers:
      - name: my-extra-container
        image: busybox
        command: ['sh', '-c', 'echo Hello from the extra container! && sleep 5']
        volumeMounts:
          - name: plugins
            mountPath: /usr/local/libexec/vault

server_share_process_namespace (boolean, default: false)

server_extra_args (multiline)
  extraArgs is a string containing additional Vault server arguments.

server_extra_ports (multiline)
  extraPorts is a list of extra ports, specified as a YAML list. Useful if you need to add additional ports to the StatefulSet in a dynamic way.
  Example:
    extraPorts:
      - containerPort: 8300
        name: http-monitoring

server_termination_grace_period_seconds (number, default: 10)
  Optional duration in seconds the pod needs to terminate gracefully. See: https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/

server_pre_stop_sleep_seconds (number, default: 5)
  Sleep time during the preStop step, used when custom preStop commands are not set.

server_pre_stop_commands (multiline)
  Custom preStop exec commands to run before the pod is terminated. If not set, this defaults to:
    preStop:
      - '/bin/sh'
      - '-c'
      - 'sleep {{ .Values.server.preStopSleepSeconds }} && kill -SIGTERM $(pidof vault)'

server_post_start_commands (multiline)
  Can be used to automate processes such as initialization or bootstrapping auth methods.
  Example:
    postStart:
      - /bin/sh
      - -c
      - /vault/userconfig/myscript/run.sh

server_extra_environment_vars (multiline)
  extraEnvironmentVars is a list of extra environment variables to set on the StatefulSet. These could be used to include variables required for auto-unseal.
  Example:
    extraEnvironmentVars:
      GOOGLE_REGION: global
      GOOGLE_PROJECT: myproject
      GOOGLE_APPLICATION_CREDENTIALS: /vault/userconfig/myproject/myproject-creds.json

server_extra_secret_environment_vars (multiline)
  extraSecretEnvironmentVars is a list of extra environment variables to set on the StatefulSet, taking their values from existing Secret objects.
  Example:
    extraSecretEnvironmentVars:
      - envName: AWS_SECRET_ACCESS_KEY
        secretName: vault
        secretKey: AWS_SECRET_ACCESS_KEY

server_volumes (multiline)
  volumes is a list of volumes made available to all containers. These are rendered via toYaml rather than pre-processed like the extraVolumes value, to make it easy to share volumes between containers.
  Example:
    volumes:
      - name: plugins
        emptyDir: {}

server_volume_mounts (multiline)
  volumeMounts is a list of volumeMounts for the main server container. These are rendered via toYaml rather than pre-processed like the extraVolumeMounts value.
  Example:
    volumeMounts:
      - mountPath: /usr/local/libexec/vault
        name: plugins
        readOnly: true

server_affinity (multiline)
  Affinity settings. Commenting out or setting the affinity variable to empty allows deployment to single-node services such as Minikube. Should be YAML matching the PodSpec's affinity field.

server_topology_spread_constraints (multiline)
  Topology settings for server pods. Should be either a multi-line string or YAML matching the topologySpreadConstraints array in a PodSpec. Ref: https://kubernetes.io/docs/concepts/workloads/pods/pod-topology-spread-constraints/

server_tolerations (multiline)
  Toleration settings for server pods. Should be either a multi-line string or YAML matching the Toleration array in a PodSpec.

server_node_selector (multiline)
  nodeSelector labels for server pod assignment, formatted as a YAML map. Ref: https://kubernetes.io/docs/concepts/configuration/assign-pod-node/#nodeselector
  Example:
    nodeSelector:
      beta.kubernetes.io/arch: amd64

server_network_policy_enabled (boolean, default: false)
  Enables a network policy for server pods.

server_network_policy_egress (multiline)
  Egress rules for the server pods' network policy, formatted as a YAML list. Ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/#egress-rules
  Example:
    egress:
      - to:
          - ipBlock:
              cidr: 10.0.0.0/24
        ports:
          - protocol: TCP
            port: 443

server_network_policy_ingress (multiline)
  Ingress rules for the server pods' network policy, formatted as a YAML list. Ref: https://kubernetes.io/docs/concepts/services-networking/network-policies/#ingress-rules
  Example:
    ingress:
      - from:
          - namespaceSelector: {}
        ports:
          - port: 8200
            protocol: TCP
          - port: 8201
            protocol: TCP

server_priority_class_name (string)
  Priority class for server pods. Ref: https://kubernetes.io/docs/concepts/configuration/pod-priority-preemption/#priorityclass

server_extra_labels (multiline)
  Extra labels to attach to the server pods, as a YAML map.

server_annotations (multiline)
  Extra annotations to attach to the server pods. Can be YAML or a YAML-formatted multi-line templated string map.

server_include_config_annotation (boolean, default: false)
  Add an annotation, vaultproject.io/config-checksum, to the server ConfigMap and the StatefulSet pods that is a hash of the Vault configuration. Can be used together with an OnDelete deployment strategy to help identify which pods still need to be deleted during a deployment to pick up configuration changes.

server_service_enabled (boolean, default: true)

server_service_active_enabled (boolean, default: true)
  Enable or disable the vault-active service, which selects Vault pods that have labeled themselves as the cluster leader with vault-active: 'true'.

server_service_active_annotations (multiline)
  Extra annotations for the active service definition, as a JSON or YAML map.

server_service_standby_enabled (boolean, default: true)

server_service_standby_annotations (multiline)
  Extra annotations for the standby service definition. Can be YAML or a YAML-formatted multi-line templated string map.

server_service_instance_selector_enabled (boolean, default: true)

server_service_cluster_ip (string)
  Controls whether a cluster IP address is attached to the Vault service within Kubernetes. By default the Vault service is given a cluster IP address; set to None to disable. When disabled, Kubernetes creates a 'headless' service. Headless services can be used to communicate with pods directly through DNS instead of a round-robin load balancer.

server_service_type (string, default: ClusterIP)
  Configures the service type for the main Vault service. Can be ClusterIP or NodePort.

server_service_ip_family_policy (string)
  Sets the behaviour in a dual-stack environment. Omitting this value lets the service fall back to whatever defaults the CNI dictates. Only supported on Kubernetes >= 1.23.0. Can be one of:
    SingleStack: single-stack service; the control plane allocates a cluster IP using the first configured service cluster IP range.
    PreferDualStack: allocates IPv4 and IPv6 cluster IPs for the service.
    RequireDualStack: allocates .spec.ClusterIPs from both IPv4 and IPv6 address ranges.

server_service_ip_families (multiline)
  Sets the IP families that should be supported and the order in which they should be applied to ClusterIP. Can be IPv4 and/or IPv6.

server_service_publish_not_ready_addresses (boolean, default: true)
  Do not wait for pods to be ready before including them in the service's targets. Does not apply to the headless service, which is used for cluster-internal communication.

server_service_external_traffic_policy (string, default: Cluster)
  Can be set to either Cluster or Local; only valid for LoadBalancer and NodePort service types. Ref: https://kubernetes.io/docs/concepts/services-networking/service/#external-traffic-policy

server_service_node_port (number, default: 0)
  If the service type is NodePort, a specific nodePort value can be configured; random if left blank.

server_service_active_node_port (number, default: 0)
  When HA mode is enabled: if the service type is NodePort, a specific nodePort value for the active service; random if left blank.

server_service_standby_node_port (number, default: 0)
  When HA mode is enabled: if the service type is NodePort, a specific nodePort value for the standby service; random if left blank.

server_service_port (number)
  Port on which the Vault server is listening.

server_service_target_port (number)
  Target port to which the service should be mapped.

server_service_annotations (multiline)
  Extra annotations for the main Vault service definition, as a JSON or YAML map.

server_data_storage_enabled (boolean, default: true)
  Configures the Vault StatefulSet to create a PVC for data storage when using the file or raft backend storage engines. See https://developer.hashicorp.com/vault/docs/configuration/storage

server_data_storage_size (string)
  Size of the PVC created.

server_data_storage_mountPath (string, default: /vault/data)
  Location where the PVC will be mounted.

server_data_storage_storageClass (string)
  Name of the storage class to use. If null, the configured default storage class is used.

server_data_storage_accessMode (string, default: ReadWriteOnce)
  Access mode of the storage device backing the PVC.

server_data_storage_annotations (multiline)
  Annotations to apply to the PVC.
  Example:
    annotations:
      vaultproject.io/annotation-key: annotation-value

server_data_storage_labels (multiline)
  Labels to apply to the PVC.
  Example:
    labels:
      vaultproject.io/label-key: label-value

server_persistent_volume_claim_retention_policy (multiline)
  Persistent Volume Claim (PVC) retention policy. Ref: https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#persistentvolumeclaim-retention
  Example:
    persistentVolumeClaimRetentionPolicy:
      whenDeleted: Retain
      whenScaled: Retain

server_audit_storage_enabled (boolean, default: false)
  Configures the Vault StatefulSet to create a PVC for audit storage when using the file backend for audit logs. See https://developer.hashicorp.com/vault/docs/audit

server_audit_storage_size (string)
  Size of the PVC created for audit storage.

server_audit_storage_mountPath (string, default: /vault/audit)
  Location where the audit storage PVC will be mounted.

server_audit_storage_storageClass (string)
  Name of the storage class to use for audit storage. If null, the configured default storage class is used.

server_audit_storage_accessMode (string, default: ReadWriteOnce)
  Access mode of the storage device backing the audit PVC.

server_audit_storage_annotations (multiline)
  Annotations to apply to the audit PVC.

server_audit_storage_labels (multiline)
  Labels to apply to the audit PVC.

server_dev_enabled (boolean, default: false)
  Run the Vault server in dev mode.

server_dev_root_token (string, default: root)
  The root token to use when running in dev mode; ignored otherwise. Sets the VAULT_DEV_ROOT_TOKEN_ID value.

server_standalone_enabled (string)
  Run Vault in 'standalone' mode. This is the default mode and should be used for production deployments. In this mode, Vault manages its own storage and HA (if enabled) using the configured storage backend. See https://developer.hashicorp.com/vault/docs/concepts/ha

server_standalone_config (multiline)
  config is a raw string of default configuration when using a stateful deployment. The default is to use a PersistentVolumeClaim mounted at /vault/data and store data there. Only used with a replica count of 1 and a StatefulSet. Supported formats are HCL and JSON.
  Note: configuration files are stored in ConfigMaps, so sensitive data such as passwords should be mounted through extraSecretEnvironmentVars or through a Kube secret. For more information see: https://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
  Example:
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      # Enable unauthenticated metrics access (necessary for Prometheus Operator)
      #telemetry {
      #  unauthenticated_metrics_access = "true"
      #}
    }
    storage "file" {
      path = "/vault/data"
    }
    # Example configuration for using auto-unseal, using Google Cloud KMS. The
    # GKMS keys must already exist, and the cluster must have a service account
    # that is authorized to access GCP KMS.
    seal "gcpckms" {
      project = "vault-helm-dev"
      region = "global"
      key_ring = "vault-helm-unseal-kr"
      crypto_key = "vault-helm-unseal-key"
    }
    # Example configuration for enabling Prometheus metrics in your config.
    telemetry {
      prometheus_retention_time = "30s"
      disable_hostname = true
    }

server_ha_enabled (boolean, default: false)
  Run Vault in HA mode.

server_ha_replicas (number, default: 3)
  Number of server replicas when running in HA mode.

server_ha_api_addr (string)
  The api_addr configuration for Vault HA. See https://developer.hashicorp.com/vault/docs/configuration#api_addr
  If set to null, this will be set to the pod IP address.

server_ha_cluster_addr (string)
  The cluster_addr configuration for Vault HA. See https://developer.hashicorp.com/vault/docs/configuration#cluster_addr
  If set to null, defaults to https://$(HOSTNAME).{{ template "vault.fullname" . }}-internal:8201

server_ha_raft_enabled (boolean, default: false)
  Run Vault in HA mode with Raft integrated storage. This is an alternative to using Consul for HA storage and does not require an external storage backend.

server_ha_raft_set_node_id (boolean, default: false)
  Set the node ID for Vault HA Raft mode. Only used if HA Raft mode is enabled.

server_ha_raft_config (multiline)
  Raft configuration. Supported formats are HCL and JSON.
  Note: configuration files are stored in ConfigMaps, so sensitive data such as passwords should be mounted through extraSecretEnvironmentVars or through a Kube secret. For more information see: https://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
  Example:
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
      # Enable unauthenticated metrics access (necessary for Prometheus Operator)
      #telemetry {
      #  unauthenticated_metrics_access = "true"
      #}
    }
    storage "raft" {
      path = "/vault/data"
    }
    service_registration "kubernetes" {}

server_ha_config (multiline)
  HA configuration.
  Note: configuration files are stored in ConfigMaps, so sensitive data such as passwords should be mounted through extraSecretEnvironmentVars or through a Kube secret. For more information see: https://developer.hashicorp.com/vault/docs/platform/k8s/helm/run#protecting-sensitive-vault-configurations
  Example:
    ui = true
    listener "tcp" {
      tls_disable = 1
      address = "[::]:8200"
      cluster_address = "[::]:8201"
    }
    storage "consul" {
      path = "vault"
      address = "HOST_IP:8500"
    }
    service_registration "kubernetes" {}
    # Example configuration for using auto-unseal, using Google Cloud KMS. The
    # GKMS keys must already exist, and the cluster must have a service account
    # that is authorized to access GCP KMS.
    seal "gcpckms" {
      project = "vault-helm-dev-246514"
      region = "global"
      key_ring = "vault-helm-unseal-kr"
      crypto_key = "vault-helm-unseal-key"
    }
    # Example configuration for enabling Prometheus metrics.
    # If you are using Prometheus Operator you can enable a ServiceMonitor resource below.
    # You may wish to enable unauthenticated metrics in the listener block above.
    telemetry {
      prometheus_retention_time = "30s"
      disable_hostname = true
    }

server_disruption_budget_enabled (boolean, default: true)
  A disruption budget limits the number of pods of a replicated application that are down simultaneously from voluntary disruptions.

server_disruption_budget_max_unavailable (string)
  maxUnavailable defaults to (n/2)-1 where n is the number of replicas. For a custom value, specify an override here.

server_service_account_create (boolean, default: true)
  Create the service account used to run the Vault server.

server_service_account_name (string)
  The name of the service account to use. If not set and create is true, a name is generated using the fullname template.

server_service_account_create_secret (boolean, default: true)
  Create a Secret API object to store a non-expiring token for the service account. Prior to v1.24.0, Kubernetes generated this secret for each service account by default; Kubernetes now recommends using short-lived tokens from the TokenRequest API or projected volumes instead where possible. For details, see https://kubernetes.io/docs/concepts/configuration/secret/#service-account-token-secrets
  serviceAccount.create must be 'true' in order to use this feature.

server_service_account_annotations (multiline)
  Extra annotations for the serviceAccount definition. Can be YAML or a YAML-formatted multi-line templated string map.

server_service_account_labels (multiline)
  Extra labels for the serviceAccount definition, as a JSON or YAML map.
  Example:
    labels:
      app.kubernetes.io/name: name
      app.kubernetes.io/instance: instance-name
      component: server

server_service_discovery_enabled (boolean, default: true)
  Enable or disable a service account role binding with the permissions required for Vault's Kubernetes service_registration config option. See https://developer.hashicorp.com/vault/docs/configuration/service-registration/kubernetes

server_host_network (boolean, default: false)
  Whether to use the host network for the Vault server pods.

ui_publish_not_ready_addresses (boolean, default: true)
  Publish not-ready addresses for the UI service.

ui_active_vault_pod_only (boolean, default: false)
  The service should only contain selectors for the active Vault pod.

ui_service_type (string, default: ClusterIP)
  UI service type.

ui_external_port (number)
  Vault UI port.

ui_target_port (number)
  Target port to map to.

ui_service_ip_family_policy (string)
  Sets the behaviour in a dual-stack environment. Configures the UI service's supported IP family policy; can be SingleStack, PreferDualStack, or RequireDualStack (same semantics as server_service_ip_family_policy).

ui_service_ip_families (string)
  Sets the IP families that should be supported and the order in which they should be applied to ClusterIP. Can be IPv4 and/or IPv6.
