Best Practices

Apply multiple Policy Controller bundles

This section explains how to enable Policy Controller bundles.

For more detailed information about applying and using policy bundles, read the instructions for the bundle that you want to apply using the left navigation menu. For more information about policy bundles, see the Policy Controller bundles overview.

If you installed Policy Controller using the KOSMOS console, the Samsung Security Checklist bundle is installed by default, but you can enable more bundles.

Before you begin

Apply policy bundles

To apply one or more policy bundles on a cluster using the KOSMOS console, complete the following steps:

  1. In the KOSMOS console, go to the Policy page under the Fleet section.

  2. Under the Settings tab, in the cluster table, select Edit in the Edit configuration column.

  3. In the Add/Edit policy bundles menu, ensure the template library is toggled on.

  4. To enable all policy bundles, toggle Add all policy bundles on.

  5. To enable individual policy bundles, toggle on each policy bundle that you want to enable.

  6. Optional: To exempt a namespace from enforcement, expand the Show advanced settings menu. In the Exempt namespaces field, provide a list of valid namespaces.

[!TIP] Best practice: Exempt system namespaces to avoid errors in your environment. You can find instructions for exempting namespaces, and a list of common namespaces created by KOSMOS, on the Exclude namespaces page.

  7. Select Save changes.

You can view additional information about your policy coverage and violations using the Policy Controller dashboard.

Troubleshooting

You can’t modify policy bundles that are installed directly by using the instructions on this page. If you’re having issues with a policy bundle and need to make edits, install the bundle by using one of the methods on the individual policy bundle’s page. These methods pull the policy bundle from a Git repository, which lets you make changes. For example, if you want to edit the CIS Kubernetes Benchmark v1.5.1 bundle, follow the instructions on Use CIS Kubernetes Benchmark v1.5.1 policy constraints instead of this page.

What’s next


Use CIS Kubernetes benchmark v1.5.1 policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the CIS bundle to audit the compliance of your cluster against the CIS Kubernetes Benchmark v1.5.1. This benchmark is a set of recommendations for configuring Kubernetes to support a strong security posture.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, by providing and maintaining automation that audits or enforces those requirements.

This bundle of constraints addresses and enforces policies in the following domains:

  • RBAC and service accounts
  • Pod Security Policies
  • Network policies and CNI
  • Secrets management
  • General policies

[!NOTE] This bundle has not been certified by CIS.

CIS Kubernetes v1.5.1 policy bundle constraints

| Constraint name | Control description | Control ID |
| --- | --- | --- |
| cis-k8s-v1.5.1-no-secrets-as-env-vars | Prefer using Secrets as files over Secrets as environment variables | 5.4.1 |
| cis-k8s-v1.5.1-pods-require-security-context | Apply Security Context to your Pods and containers | 5.7.3 |
| cis-k8s-v1.5.1-prohibit-role-wildcard-access | Restricts the use of wildcards in Roles and ClusterRoles | 5.1.3 |
| cis-k8s-v1.5.1-psp-allow-privilege-escalation-container | Minimize the admission of containers with allowPrivilegeEscalation | 5.2.5 |
| cis-k8s-v1.5.1-psp-capabilities | Minimize the admission of containers with the NET_RAW capability<br>Minimize the admission of containers with added capabilities<br>Minimize the admission of containers with capabilities assigned | 5.2.7<br>5.2.8<br>5.2.9 |
| cis-k8s-v1.5.1-psp-host-namespace | Minimize the admission of containers wanting to share the host process ID namespace<br>Minimize the admission of containers wanting to share the host IPC namespace | 5.2.2<br>5.2.3 |
| cis-k8s-v1.5.1-psp-host-network-ports | Minimize the admission of containers wanting to share the host network namespace | 5.2.4 |
| cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot | Minimize the admission of root containers | 5.2.6 |
| cis-k8s-v1.5.1-psp-privileged-container | Minimize the admission of privileged containers | 5.2.1 |
| cis-k8s-v1.5.1-psp-seccomp-default | Ensure that the seccomp profile is set to docker/default in your Pod definitions | 5.7.2 |
| cis-k8s-v1.5.1-require-namespace-network-policies | Ensure that all namespaces have Network Policies defined | 5.3.2 |
| cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role | 5.1.1 |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit CIS Kubernetes v1.5.1 policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the CIS policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing the constraints on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
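Each constraint in the bundle is a Gatekeeper resource whose spec.enforcementAction field selects the mode. The following sketch shows the general shape of such a constraint with dryrun set; the field values are illustrative, not the bundle's exact manifest:

```yaml
# Illustrative Gatekeeper constraint in dry-run (audit) mode.
# Values are examples only; the bundle ships its own manifests.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer
metadata:
  name: cis-k8s-v1.5.1-psp-privileged-container
  labels:
    bundleName: cis-k8s-v1.5.1  # label matched by the kubectl -l selectors on this page
spec:
  enforcementAction: dryrun     # record violations without blocking admission
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
```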

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
The output is similar to the following:
k8snoenvvarsecrets.constraints.gatekeeper.sh/cis-k8s-v1.5.1-no-secrets-as-env-vars created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot created
k8spspcapabilities.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-capabilities created
k8spsphostnamespace.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-privileged-container created
k8spspseccomp.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-seccomp-default created
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/cis-k8s-v1.5.1-pods-require-security-context created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/cis-k8s-v1.5.1-prohibit-role-wildcard-access created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/cis-k8s-v1.5.1-require-namespace-network-policies created
k8srestrictrolebindings.constraints.gatekeeper.sh/cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings created
  3. Verify that the policy constraints are installed and check for violations across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/cis-k8s-v1.5.1
The output is similar to the following:
NAME                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8snoenvvarsecrets.constraints.gatekeeper.sh/cis-k8s-v1.5.1-no-secrets-as-env-vars   dryrun               0

NAME                                                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-allow-privilege-escalation   dryrun               0

NAME                                                                                       ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-capabilities   dryrun               0

NAME                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-namespace   dryrun               0

NAME                                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-host-network-ports   dryrun               0

NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-privileged-container   dryrun               0

NAME                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/cis-k8s-v1.5.1-psp-seccomp-default   dryrun               0

NAME                                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/cis-k8s-v1.5.1-pods-require-security-context   dryrun               0

NAME                                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/cis-k8s-v1.5.1-prohibit-role-wildcard-access   dryrun               0

NAME                                                                                                             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/cis-k8s-v1.5.1-require-namespace-network-policies   dryrun               0

NAME                                                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/cis-k8s-v1.5.1-restrict-clusteradmin-rolebindings   dryrun               0

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI using the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change CIS Kubernetes v1.5.1 policy bundle enforcement action

Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources from being applied to the cluster.

[!WARNING] Use the deny enforcement action with care, as it can block required changes and interrupt critical workloads or the cluster itself. It is strongly recommended that you use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=cis-k8s-v1.5.1

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [cis-k8s-v1.5.1-psp-pods-must-run-as-nonroot] Container wordpress is attempting to run without a required securityContext/runAsNonRoot or securityContext/runAsUser != 0
Warning: [cis-k8s-v1.5.1-psp-allow-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [cis-k8s-v1.5.1-psp-seccomp-default] Seccomp profile 'not configured' is not allowed for container 'wordpress'. Found at: no explicit profile found. Allowed profiles: {"RuntimeDefault", "docker/default", "runtime/default"}
Warning: [cis-k8s-v1.5.1-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["NET_RAW"] or "ALL"
Warning: [cis-k8s-v1.5.1-pods-require-security-context] securityContext must be defined for all Pod containers
pod/wp-non-compliant created
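The warnings map to missing securityContext fields. A compliant variant might look like the following sketch; the specific values (for example, the UID) are one way to satisfy the constraints, and note that the upstream wordpress image may not actually support running as a non-root user:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
        - containerPort: 80
          name: wordpress
      securityContext:
        runAsNonRoot: true               # psp-pods-must-run-as-nonroot
        runAsUser: 1000                  # hypothetical non-root UID
        allowPrivilegeEscalation: false  # psp-allow-privilege-escalation
        capabilities:
          drop: ["NET_RAW"]              # psp-capabilities
        seccompProfile:
          type: RuntimeDefault           # psp-seccomp-default
```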

Remove CIS Kubernetes v1.5.1 policy bundle

If needed, you can remove the CIS Kubernetes v1.5.1 policy bundle from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=cis-k8s-v1.5.1

Use NIST SP 800-190 policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the NIST SP 800-190 bundle, which implements controls listed in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-190, Application Container Security Guide. The bundle is intended to help organizations with application container security, including image security, container runtime security, network security, and host system security.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, by providing and maintaining automation that audits or enforces those requirements.

[!NOTE] This bundle has not been certified by NIST.

NIST SP 800-190 policy bundle constraints

| Constraint name | Constraint description | Control ID |
| --- | --- | --- |
| nist-sp-800-190-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | CM-3 Configuration Change Control |
| nist-sp-800-190-capabilities | Restricts additional Capabilities allowed for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-enforce-config-management | Requires Config Sync is running and Drift Prevention enabled with at least one RootSync object on the cluster. | CM-3 Configuration Change Control |
| nist-sp-800-190-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | CM-3 Configuration Change Control |
| nist-sp-800-190-host-network | Restricts containers from running with the hostNetwork flag set to true. | CM-3 Configuration Change Control |
| nist-sp-800-190-privileged-containers | Restricts containers with securityContext.privileged set to true. | CM-3 Configuration Change Control |
| nist-sp-800-190-proc-mount-type | Requires the default /proc masks for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | CM-3 Configuration Change Control |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | CM-3 Configuration Change Control |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | CM-3 Configuration Change Control |
| nist-sp-800-190-seccomp | Seccomp profile must not be explicitly set to Unconfined. | CM-3 Configuration Change Control |
| nist-sp-800-190-selinux | Restricts the SELinux configuration for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-sysctls | Restricts the allowed Sysctls for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-190-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-capabilities | Restricts additional Capabilities allowed for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | CM-7 Least Functionality |
| nist-sp-800-190-host-network | Restricts containers from running with the hostNetwork flag set to true. | CM-7 Least Functionality |
| nist-sp-800-190-privileged-containers | Restricts containers with securityContext.privileged set to true. | CM-7 Least Functionality |
| nist-sp-800-190-proc-mount-type | Requires the default /proc masks for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | CM-7 Least Functionality |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | CM-7 Least Functionality |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | CM-7 Least Functionality |
| nist-sp-800-190-seccomp | Seccomp profile must not be explicitly set to Unconfined. | CM-7 Least Functionality |
| nist-sp-800-190-selinux | Restricts the SELinux configuration for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-sysctls | Restricts the allowed Sysctls for Pods. | CM-7 Least Functionality |
| nist-sp-800-190-asm-peer-authn-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | SC-8 Transmission Confidentiality and Integrity |
| nist-sp-800-190-block-creation-with-default-serviceaccount | Restrict resource creation using a default service account. | IA-4 Identifier Management |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | IA-4 Identifier Management |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | CM-6 Configuration Settings |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | CM-6 Configuration Settings |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | CM-6 Configuration Settings |
| nist-sp-800-190-restrict-volume-types | Restricts the mountable volumes types to the allowed list. | CM-6 Configuration Settings |
| nist-sp-800-190-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-4 Information Flow Enforcement |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | AC-4 Information Flow Enforcement |
| nist-sp-800-190-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | AC-4 Information Flow Enforcement |
| nist-sp-800-190-cpu-and-memory-limits-required | Requires Pods specify cpu and memory limits. | SC-6 Resource Availability |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | SC-6 Resource Availability |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-190-nodes-have-consistent-time | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS (COS) or Ubuntu as the OS image. | AU-8 Time Stamps |
| nist-sp-800-190-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | AC-6 Least Privilege |
| nist-sp-800-190-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | AC-6 Least Privilege |
| nist-sp-800-190-restrict-repos | Restricts container images to an allowed repos list. | AC-6 Least Privilege |
| nist-sp-800-190-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | AC-6 Least Privilege |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-6 Least Privilege |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | CA-9 Internal System Connections |
| nist-sp-800-190-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | SC-4 Information in Shared Resources |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-2 Account Management |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-3 Access Enforcement |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | IA-2 Identification and Authentication (Organizational Users) |
| nist-sp-800-190-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | MA-4 Nonlocal Maintenance |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Configure your cluster and workload

  1. Container images are limited to an allowed repos list, which you can customize if required in nist-sp-800-190-restrict-repos.
  2. Nodes must use Container-Optimized OS (COS) or Ubuntu as their node image to satisfy nist-sp-800-190-nodes-have-consistent-time.
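If your images are hosted elsewhere, one way to adjust the allowed list is to edit the constraint's parameters after applying the bundle. The sketch below assumes the K8sAllowedRepos template from the default library; the repository prefix and exact match stanza are illustrative, not the bundle's shipped manifest:

```yaml
# Illustrative override of the allowed-repos list.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sAllowedRepos
metadata:
  name: nist-sp-800-190-restrict-repos
spec:
  enforcementAction: dryrun
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
  parameters:
    repos:
      - "registry.example.com/my-team/"  # hypothetical private registry prefix
```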

Audit NIST SP 800-190 policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads and their compliance with the NIST policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing the constraints on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-190
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-190

The output is similar to the following:

asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-190-asm-peer-authn-strict-mtls created
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-190-restrict-repos created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-190-block-creation-with-default-serviceaccount created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-190-block-secrets-of-type-basic-auth created
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-190-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-190-capabilities created
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-190-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-190-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-190-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-190-host-network created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-190-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-190-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-190-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-190-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-190-restrict-volume-types created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-190-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-190-nodes-have-consistent-time created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-190-require-namespace-network-policies created
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-190-require-managed-by-label created
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-190-cpu-and-memory-limits-required created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-190-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-190-restrict-clusteradmin-rolebindings created
  3. Verify that the policy constraints are installed and check for violations across the cluster:
kubectl get constraints -l bundleName=nist-sp-800-190

The output is similar to the following:

NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-190-apparmor   dryrun               0

NAME                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-190-restrict-rbac-subjects   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-190-sysctls   dryrun               0

NAME                                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-190-restrict-volume-types   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-190-host-network   dryrun               0

NAME                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-190-host-namespaces   dryrun               0

NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-190-restrict-role-wildcards   dryrun               0

NAME                                                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-190-block-creation-with-default-serviceaccount   dryrun               0

NAME                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-190-restrict-hostpath-volumes   dryrun               0

NAME                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-190-seccomp   dryrun               0

NAME                                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-190-asm-peer-authn-strict-mtls   dryrun               0

NAME                                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-190-cpu-and-memory-limits-required   dryrun               0

NAME                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-190-proc-mount-type   dryrun               0

NAME                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-190-block-secrets-of-type-basic-auth   dryrun               0

NAME                                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-190-require-namespace-network-policies   dryrun               0

NAME                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-190-capabilities   dryrun               0

NAME                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-190-require-managed-by-label   dryrun               0

NAME                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-190-privileged-containers   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-190-restrict-repos   dryrun               0

NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-190-selinux   dryrun               0

NAME                                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-190-restrict-clusteradmin-rolebindings   dryrun               0

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI using the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=nist-sp-800-190 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=nist-sp-800-190 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change NIST SP 800-190 policy bundle enforcement action

Once you’ve reviewed policy violations on your cluster, you can consider changing the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources from being applied to the cluster.

[!WARNING] Use the deny enforcement action with care, as it can block required changes and interrupt critical workloads or the cluster itself. It is strongly recommended that you use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraints -l bundleName=nist-sp-800-190 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nist-sp-800-190

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wp-non-compliant
spec:
  containers:
    - image: wordpress
      name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [nist-sp-800-190-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nist-sp-800-190-restrict-repos] container <wordpress> has an invalid image repo <wordpress>, allowed repos are ["gcr.io/gke-release/", "gcr.io/anthos-baremetal-release/", "gcr.io/config-management-release/", "gcr.io/kubebuilder/", "gcr.io/gkeconnect/", "gke.gcr.io/"]
pod/wp-non-compliant created
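Assuming the default allowed-repos list shown in the warning, the Pod would need both resource limits and an image from a permitted repository to clear these two violations. A sketch of the relevant spec changes (the image path after the gcr.io/gke-release/ prefix is hypothetical):

```yaml
# Sketch of a compliant container spec; image name is hypothetical,
# but its prefix is on the allowed-repos list above.
spec:
  containers:
    - name: wordpress
      image: gcr.io/gke-release/example-image:1.0
      resources:
        limits:
          cpu: 500m      # satisfies cpu-and-memory-limits-required
          memory: 512Mi
```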

Remove NIST SP 800-190 policy bundle

If needed, you can remove the NIST SP 800-190 policy bundle from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nist-sp-800-190

Use NIST SP 800-53 Rev. 5 policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the NIST SP 800-53 Rev. 5 bundle, which implements controls listed in National Institute of Standards and Technology (NIST) Special Publication (SP) 800-53 Rev. 5. The bundle can help organizations protect their systems and data from a variety of threats by applying out-of-the-box security and privacy policies.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

Important: This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation that audits or enforces those requirements.

Note: This bundle has not been certified by NIST.

NIST SP 800-53 Rev. 5 policy bundle constraints

In the following table, rows without a Control ID fall under the most recent Control ID listed above them.

| Constraint name | Constraint description | Control ID |
| --- | --- | --- |
| nist-sp-800-53-r5-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-3 Configuration Change Control |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | |
| nist-sp-800-53-r5-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-53-r5-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-53-r5-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-53-r5-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-53-r5-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volume types to the allowed list. | |
| nist-sp-800-53-r5-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-53-r5-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-53-r5-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-53-r5-apparmor | Restricts AppArmor profiles allowed for Pods. | CM-7 Least Functionality |
| nist-sp-800-53-r5-capabilities | Restricts additional Capabilities allowed for Pods. | |
| nist-sp-800-53-r5-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nist-sp-800-53-r5-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nist-sp-800-53-r5-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nist-sp-800-53-r5-proc-mount-type | Requires the default /proc masks for Pods. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volume types to the allowed list. | |
| nist-sp-800-53-r5-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nist-sp-800-53-r5-selinux | Restricts the SELinux configuration for Pods. | |
| nist-sp-800-53-r5-sysctls | Restricts the allowed Sysctls for Pods. | |
| nist-sp-800-53-r5-asm-peer-authn-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | SC-8 Transmission Confidentiality and Integrity |
| nist-sp-800-53-r5-block-creation-with-default-serviceaccount | Restricts resource creation using a default service account. | IA-4 Identifier Management |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SI-7 Software, Firmware, and Information Integrity |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | CM-6 Configuration Settings |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volume types to the allowed list. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SC-7 Boundary Protection |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volume types to the allowed list. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-4 Information Flow Enforcement |
| nist-sp-800-53-r5-require-binauthz | Requires the Binary Authorization Validating Admission Webhook. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | AC-16 Security and Privacy Attributes |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-block-secrets-of-type-basic-auth | Restricts the use of basic-auth type secrets. | SA-8 Security and Privacy Engineering Principles |
| nist-sp-800-53-r5-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nist-sp-800-53-r5-restrict-volume-types | Restricts the mountable volume types to the allowed list. | |
| nist-sp-800-53-r5-cpu-and-memory-limits-required | Requires Pods specify cpu and memory limits. | SC-6 Resource Availability |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-nodes-have-consistent-time | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS (COS) or Ubuntu as the OS image. | AU-8 Time Stamps |
| nist-sp-800-53-r5-require-av-daemonset | Requires the presence of an Anti-Virus daemonset. | SI-3 Malicious Code Protection |
| nist-sp-800-53-r5-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | |
| nist-sp-800-53-r5-restrict-repos | Restricts container images to an allowed repos list. | |
| nist-sp-800-53-r5-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | |
| nist-sp-800-53-r5-restrict-storageclass | Restricts StorageClass to a list of StorageClasses that encrypt by default. | |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | CA-9 Internal System Connections |
| nist-sp-800-53-r5-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | SC-4 Information in Shared Resources |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-2 Account Management |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | AC-3 Access Enforcement |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | IA-2 Identification and Authentication (Organizational Users) |
| nist-sp-800-53-r5-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | MA-4 Nonlocal Maintenance |
| nist-sp-800-53-r5-restrict-storageclass | Restricts StorageClass to a list of StorageClasses that encrypt by default. | SC-28 Protection of Information at Rest |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Configure your cluster and workload

  1. An antivirus solution is required. By default, this is a daemonset named clamav in the clamav namespace; the daemonset's name and namespace can be customized to match your implementation in the nist-sp-800-53-r5-require-av-daemonset constraint.
  2. Container images are limited to an allowed repos list, which can be customized if required in nist-sp-800-53-r5-restrict-repos.
  3. Nodes must use Container-Optimized OS (COS) or Ubuntu for their image in nist-sp-800-53-r5-nodes-have-consistent-time.
  4. Use of storage classes is limited to an allowed list, which can be customized to add additional classes with default encryption in nist-sp-800-53-r5-restrict-storageclass.
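
Each of these defaults lives in the corresponding constraint's spec.parameters, so once the bundle is applied you can inspect and adjust them with kubectl. For example (the resource kinds below are the constraint types this bundle creates; verify them against your installed constraint templates):

```shell
# Inspect the antivirus constraint's current parameters (daemonset name and namespace)
kubectl get k8srequiredaemonsets nist-sp-800-53-r5-require-av-daemonset -o yaml

# Edit the constraint in place to point at your own antivirus daemonset
kubectl edit k8srequiredaemonsets nist-sp-800-53-r5-require-av-daemonset

# The allowed-repos and storage-class lists can be adjusted the same way
kubectl edit k8sallowedrepos nist-sp-800-53-r5-restrict-repos
kubectl edit k8sstorageclass nist-sp-800-53-r5-restrict-storageclass
```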

Audit NIST SP 800-53 Rev. 5 policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To test your workloads' compliance with the NIST policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, to give yourself a chance to fix them before enforcing the constraints on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. Optional: Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-53-r5
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nist-sp-800-53-r5
The output is similar to the following:
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-53-r5-asm-peer-authn-strict-mtls created
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-repos created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-creation-with-default-serviceaccount created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-secrets-of-type-basic-auth created
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-53-r5-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-53-r5-capabilities created
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-53-r5-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-network created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-53-r5-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-53-r5-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-53-r5-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-53-r5-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-volume-types created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-53-r5-nodes-have-consistent-time created
k8srequiredaemonsets.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-av-daemonset created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-namespace-network-policies created
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-managed-by-label created
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-53-r5-cpu-and-memory-limits-required created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-clusteradmin-rolebindings created
k8sstorageclass.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-storageclass created
  3. Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=nist-sp-800-53-r5
The output is similar to the following:
NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nist-sp-800-53-r5-apparmor   dryrun               0

NAME                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-rbac-subjects   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/nist-sp-800-53-r5-sysctls   dryrun               0

NAME                                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-volume-types   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-network   dryrun               0

NAME                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nist-sp-800-53-r5-host-namespaces   dryrun               0

NAME                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredaemonsets.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-av-daemonset   dryrun               0

NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-role-wildcards   dryrun               0

NAME                                                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-creation-with-default-serviceaccount   dryrun               0

NAME                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-hostpath-volumes   dryrun               0

NAME                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nist-sp-800-53-r5-seccomp   dryrun               0

NAME                                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/nist-sp-800-53-r5-asm-peer-authn-strict-mtls   dryrun               0

NAME                                                                                        ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nist-sp-800-53-r5-cpu-and-memory-limits-required   dryrun               0

NAME                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/nist-sp-800-53-r5-proc-mount-type   dryrun               0

NAME                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nist-sp-800-53-r5-block-secrets-of-type-basic-auth   dryrun               0

NAME                                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-namespace-network-policies   dryrun               0

NAME                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nist-sp-800-53-r5-capabilities   dryrun               0

NAME                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sstorageclass.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-storageclass   dryrun               0

NAME                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/nist-sp-800-53-r5-require-managed-by-label   dryrun               0

NAME                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nist-sp-800-53-r5-privileged-containers   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sallowedrepos.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-repos   dryrun               0

NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nist-sp-800-53-r5-selinux   dryrun               0

NAME                                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nist-sp-800-53-r5-restrict-clusteradmin-rolebindings   dryrun               0

NAME                                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirecosnodeimage.constraints.gatekeeper.sh/nist-sp-800-53-r5-nodes-have-consistent-time   dryrun               0

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the UI using the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=nist-sp-800-53-r5 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=nist-sp-800-53-r5 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change NIST SP 800-53 Rev. 5 policy bundle enforcement action

Once you’ve reviewed policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on or denies non-compliant resources when they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care: it can block required changes and interrupt critical workloads or the cluster itself. On clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades, it is strongly recommended that you use only the warn or dryrun enforcement actions. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraints -l bundleName=nist-sp-800-53-r5 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nist-sp-800-53-r5

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: wp-non-compliant
spec:
  containers:
    - image: wordpress
      name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [nist-sp-800-53-r5-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nist-sp-800-53-r5-restrict-repos] container <wordpress> has an invalid image repo <wordpress>, allowed repos are ["gcr.io/gke-release/", "gcr.io/anthos-baremetal-release/", "gcr.io/config-management-release/", "gcr.io/kubebuilder/", "gcr.io/gkeconnect/", "gke.gcr.io/"]
pod/wp-non-compliant created
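
To clear both warnings, give the container resource limits and pull its image from one of the allowed repositories. A minimal compliant sketch (the image path is illustrative — any image under an allowed repo prefix works; the limit values are arbitrary examples):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wp-compliant
spec:
  containers:
    - name: wordpress
      image: gcr.io/gke-release/wordpress   # illustrative path under an allowed repo prefix
      resources:
        limits:
          cpu: "500m"       # satisfies nist-sp-800-53-r5-cpu-and-memory-limits-required
          memory: "512Mi"
```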

Remove NIST SP 800-53 Rev. 5 policy bundle

If needed, you can remove the NIST SP 800-53 Rev. 5 policy bundle from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nist-sp-800-53-r5

Use NSA CISA Kubernetes Hardening policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the National Security Agency (NSA) Cybersecurity and Infrastructure Security Agency (CISA) Kubernetes Hardening Guide v1.2 Policy bundle to evaluate the compliance of your cluster resources against some aspects of the NSA CISA Kubernetes Hardening Guide v1.2 .

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

Important: This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation that audits or enforces those requirements.

[!NOTE] This bundle has not been certified by NSA and CISA.

NSA CISA Kubernetes Hardening policy bundle constraints

In the following table, rows without a Control ID fall under the most recent Control ID listed above them.

| Constraint name | Constraint description | Control ID |
| --- | --- | --- |
| nsa-cisa-k8s-v1.2-apparmor | Restricts AppArmor profile for Pods. | CM-3 Configuration Change Control |
| nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod | Restricts Pods from using automountServiceAccountToken. | |
| nsa-cisa-k8s-v1.2-block-all-ingress | Restricts the creation of Ingress objects. | |
| nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth | Restricts the use of kubernetes.io/basic-auth type secrets. | |
| nsa-cisa-k8s-v1.2-capabilities | Containers must drop all capabilities, and are not permitted to add back any capabilities. | |
| nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required | All workload pods must specify cpu and memory limits. | |
| nsa-cisa-k8s-v1.2-host-namespaces | Restricts containers with hostPID or hostIPC set to true. | |
| nsa-cisa-k8s-v1.2-host-namespaces-hostnetwork | Sharing the host namespaces must be disallowed. | |
| nsa-cisa-k8s-v1.2-host-network | Restricts containers from running with the hostNetwork flag set to true. | |
| nsa-cisa-k8s-v1.2-hostport | Restricts containers from running with hostPort configured. | |
| nsa-cisa-k8s-v1.2-privilege-escalation | Restricts containers with allowPrivilegeEscalation set to true. | |
| nsa-cisa-k8s-v1.2-privileged-containers | Restricts containers with securityContext.privileged set to true. | |
| nsa-cisa-k8s-v1.2-readonlyrootfilesystem | Requires the use of a read-only root file system by pod containers. | |
| nsa-cisa-k8s-v1.2-require-namespace-network-policies | Requires that every namespace defined in the cluster has a NetworkPolicy. | |
| nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings | Restricts the use of the cluster-admin role. | CM-7 Least Functionality |
| nsa-cisa-k8s-v1.2-restrict-edit-rolebindings | Restricts the use of the edit role. | |
| nsa-cisa-k8s-v1.2-restrict-hostpath-volumes | Restricts the use of HostPath volumes. | |
| nsa-cisa-k8s-v1.2-restrict-pods-exec | Restricts the use of pods/exec in Roles and ClusterRoles. | |
| nsa-cisa-k8s-v1.2-running-as-non-root | Restricts containers from running as the root user. | |
| nsa-cisa-k8s-v1.2-seccomp | Seccomp profile must not be explicitly set to Unconfined. | |
| nsa-cisa-k8s-v1.2-selinux | Cannot set the SELinux type or set a custom SELinux user or role option. | |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit NSA CISA Kubernetes Hardening v1.2 policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To test your workloads' compliance with the NSA CISA Kubernetes Hardening Guide v1.2 policies outlined in the preceding table, you can deploy these constraints in “audit” mode to reveal violations and, more importantly, to give yourself a chance to fix them before optionally enforcing the constraints on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. Optional: Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nsa-cisa-k8s-v1.2
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/nsa-cisa-k8s-v1.2

The output is similar to the following:

k8sblockallingress.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-all-ingress created
k8sblockobjectsoftype.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-running-as-non-root created
k8spspapparmor.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-apparmor created
k8spspautomountserviceaccounttokenpod.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod created
k8spspcapabilities.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-capabilities created
k8spsphostfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-namespaces created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-network created
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-hostport created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privileged-containers created
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-readonlyrootfilesystem created
k8spspselinuxv2.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-selinux created
k8spspseccomp.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-seccomp created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-require-namespace-network-policies created
k8srequiredresources.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required created
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings created
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-edit-rolebindings created
k8srestrictrolerules.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-pods-exec created
  3. Verify that policy constraints have been installed and check if violations exist across the cluster:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2

The output is similar to the following:

NAME                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockallingress.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-all-ingress   dryrun               0

NAME                                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockobjectsoftype.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-block-secrets-of-type-basic-auth   dryrun               0

NAME                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-running-as-non-root   dryrun               0

NAME                                                                                                       ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privilege-escalation   dryrun               0

NAME                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-apparmor   dryrun               0

NAME                                                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspautomountserviceaccounttokenpod.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod   dryrun               0

NAME                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-capabilities   dryrun               0

NAME                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-hostpath-volumes   dryrun               0

NAME                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-namespaces   dryrun               0

NAME                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-host-network   dryrun               0
k8spsphostnetworkingports.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-hostport       dryrun               0

NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-privileged-containers   dryrun               0

NAME                                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-readonlyrootfilesystem   dryrun               0

NAME                                                                ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-seccomp   dryrun               0

NAME                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-selinux   dryrun               0

NAME                                                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredresources.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required   dryrun               0

NAME                                                                                                                ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-require-namespace-network-policies   dryrun               0

NAME                                                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-clusteradmin-rolebindings   dryrun               0
k8srestrictrolebindings.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-edit-rolebindings           dryrun               0

NAME                                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolerules.constraints.gatekeeper.sh/nsa-cisa-k8s-v1.2-restrict-pods-exec   dryrun               0

View policy violations

After the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

  kubectl get constraint -l bundleName=nsa-cisa-k8s-v1.2 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

  kubectl get constraint -l bundleName=nsa-cisa-k8s-v1.2 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
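
You can try these jq filters against static JSON before running them on a live cluster. The snippet below uses a hypothetical sample file that mimics the general shape of `kubectl get constraint -o json` output (names and messages are illustrative, not real cluster data):

```shell
# Hypothetical sample mimicking the JSON shape of `kubectl get constraint -o json`.
cat > /tmp/constraints-sample.json <<'EOF'
{"items": [
  {"metadata": {"name": "nsa-cisa-k8s-v1.2-apparmor"},
   "status": {"totalViolations": 1,
              "violations": [{"message": "sample violation message"}]}},
  {"metadata": {"name": "nsa-cisa-k8s-v1.2-selinux"},
   "status": {"totalViolations": 0}}
]}
EOF

# Summarize name and violation count per constraint (same filter as above).
jq -c '.items[]| [.metadata.name,.status.totalViolations]' /tmp/constraints-sample.json
# → ["nsa-cisa-k8s-v1.2-apparmor",1]
# → ["nsa-cisa-k8s-v1.2-selinux",0]

# List violation messages only for constraints that have violations.
jq -c '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]' /tmp/constraints-sample.json
# → ["nsa-cisa-k8s-v1.2-apparmor",{"message":"sample violation message"}]
```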

Change NSA CISA Kubernetes Hardening v1.2 policy bundle enforcement action

After you've reviewed the policy violations on your cluster, consider changing the enforcement action so that the admission controller either warns on or denies non-compliant resources before they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care: it can block required changes, interrupting critical workloads or the cluster itself. On clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades, it is strongly recommended to use only the warn or dryrun enforcement actions. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraints -l bundleName=nsa-cisa-k8s-v1.2

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [nsa-cisa-k8s-v1.2-automount-serviceaccount-token-pod] Automounting service account token is disallowed, pod: wp-non-compliant
Warning: [nsa-cisa-k8s-v1.2-running-as-non-root] Container wordpress is attempting to run without a required securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 65536, "min": 1000}], "rule": "MustRunAs"}
Warning: [nsa-cisa-k8s-v1.2-running-as-non-root] Container wordpress is attempting to run without a required securityContext/runAsUser
Warning: [nsa-cisa-k8s-v1.2-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [nsa-cisa-k8s-v1.2-cpu-and-memory-limits-required] container <wordpress> does not have <{"cpu", "memory"}> limits defined
Warning: [nsa-cisa-k8s-v1.2-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["ALL"] or "ALL"
Warning: [nsa-cisa-k8s-v1.2-readonlyrootfilesystem] only read-only root filesystem container is allowed: wordpress
pod/wp-non-compliant created
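
For comparison, a pod along the following lines would address most of the warnings above. This is a sketch, not a verified-compliant manifest: the user and group IDs must fall within the ranges configured in your constraints, and the stock wordpress image may need further changes to actually run as non-root with a read-only root filesystem:

```yaml
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-compliant
  labels:
    app: wordpress
spec:
  automountServiceAccountToken: false    # addresses automount-serviceaccount-token-pod
  containers:
    - image: wordpress
      name: wordpress
      ports:
        - containerPort: 80
          name: wordpress
      resources:
        limits:                          # addresses cpu-and-memory-limits-required
          cpu: "500m"
          memory: 512Mi
      securityContext:
        runAsUser: 1000                  # addresses running-as-non-root
        runAsGroup: 1000                 # must fall inside the allowed runAsGroup range
        allowPrivilegeEscalation: false  # addresses privilege-escalation
        readOnlyRootFilesystem: true     # addresses readonlyrootfilesystem
        capabilities:
          drop: ["ALL"]                  # addresses capabilities
```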

Remove NSA CISA Kubernetes Hardening v1.2 policy bundle

If needed, the NSA CISA Kubernetes Hardening v1.2 policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=nsa-cisa-k8s-v1.2

Use PCI-DSS v4.0 policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the PCI-DSS v4.0 bundle to evaluate the compliance of your cluster resources against some aspects of the Payment Card Industry Data Security Standard (PCI-DSS) v4.0.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, and who do so by providing and maintaining automation to audit or enforce those requirements.

Note: This bundle has not been certified by PCI.

PCI-DSS v4.0 policy bundle constraints

| Constraint Name | Constraint Description | Control IDs |
| --- | --- | --- |
| pci-dss-v4.0-require-apps-annotations | Requires that all apps in the cluster have a network-controls/date annotation. | 2.2.5 |
| pci-dss-v4.0-require-av-daemonset | Requires the presence of an Anti-Virus DaemonSet. | 5.2.1, 5.2.2, 5.2.3, 5.3.1, 5.3.2, 5.3.5 |
| pci-dss-v4.0-require-default-deny-network-policies | Requires that every namespace defined in the cluster have a default deny NetworkPolicy for egress. | 1.3.2, 1.4.4 |
| pci-dss-v4.0-require-managed-by-label | Requires all apps have a valid app.kubernetes.io/managed-by label. | 1.2.8, 2.2.6, 5.3.5, 6.3.2, 6.5.1 |
| pci-dss-v4.0-require-namespace-network-policies | Requires that every Namespace defined in the cluster has a NetworkPolicy. | 1.2.5, 1.2.6, 1.4.1, 1.4.4 |
| pci-dss-v4.0-require-peer-authentication-strict-mtls | Ensures PeerAuthentications cannot overwrite strict mTLS. | 2.2.7, 4.2.1, 8.3.2 |
| pci-dss-v4.0-require-valid-network-ranges | Restricts CIDR ranges permitted for use with ingress and egress. | 1.3.1, 1.3.2, 1.4.2, 1.4.4 |
| pci-dss-v4.0-resources-have-required-labels | Requires all apps to contain a specified label to meet firewall requirements. | 1.2.7 |
| pci-dss-v4.0-restrict-cluster-admin-role | Restricts the use of the cluster-admin role. | 7.2.1, 7.2.2, 7.2.5, 8.2.4 |
| pci-dss-v4.0-restrict-creation-with-default-serviceaccount | Restricts the creation of resources using a default service account. Has no effect during audit. | 2.2.2 |
| pci-dss-v4.0-restrict-default-namespace | Restricts pods from using the default namespace. | 2.2.3 |
| pci-dss-v4.0-restrict-ingress | Restricts the creation of Ingress objects. | 1.3.1, 1.4.2, 1.4.4 |
| pci-dss-v4.0-restrict-node-image | Ensures consistent and correct time on Nodes by allowing only Container-Optimized OS or Ubuntu as the OS image. | 10.6.1, 10.6.2, 10.6.3 |
| pci-dss-v4.0-restrict-pods-exec | Restricts the use of pods/exec in Roles and ClusterRoles. | 8.6.1 |
| pci-dss-v4.0-restrict-rbac-subjects | Restricts the use of names in RBAC subjects to permitted values. | 7.3.2, 8.2.1, 8.2.2, 8.2.4 |
| pci-dss-v4.0-restrict-role-wildcards | Restricts the use of wildcards in Roles and ClusterRoles. | 7.3.3, 8.2.4 |
| pci-dss-v4.0-restrict-storageclass | Restricts StorageClass to a list of StorageClasses which encrypt by default. | 3.3.2, 3.3.3 |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Configure your cluster’s workload for PCI-DSS v4.0

  1. All apps (ReplicaSet, Deployment, StatefulSet, DaemonSet) must include a network-controls/date annotation with the schema of YYYY-MM-DD.
  2. An antivirus solution is required. The default is the presence of a DaemonSet named clamav in the clamav Namespace; however, the DaemonSet's name and namespace can be customized to your implementation in the pci-dss-v4.0-require-av-daemonset constraint.
  3. Every Namespace defined in the cluster must have a default deny NetworkPolicy for egress. Permitted exceptions can be specified in pci-dss-v4.0-require-default-deny-network-policies.
  4. Every Namespace defined in the cluster must have a NetworkPolicy.
  5. If using Cloud Service Mesh, ASM PeerAuthentication must use strict mTLS: spec.mtls.mode: STRICT.
  6. Only permitted IP ranges can be used for ingress and egress. These can be specified in pci-dss-v4.0-require-valid-network-ranges.
  7. All apps (ReplicaSet, Deployment, StatefulSet, and DaemonSet) must include a pci-dss-firewall-audit label with the schema of pci-dss-[0-9]{4}q[1-4].
  8. The use of the cluster-admin ClusterRole is not permitted.
  9. Resources cannot be created using the default service account.
  10. The default Namespace cannot be used for pods.
  11. Only permitted Ingress objects (Ingress, Gateway, and Service types of NodePort and LoadBalancer) can be created. These can be specified in pci-dss-v4.0-restrict-ingress.
  12. All nodes must use Container-Optimized OS or Ubuntu as their OS image for consistent time.
  13. The use of the wildcard character or the pods/exec permission in Roles and ClusterRoles is not permitted.
  14. Only permitted subjects can be used in RBAC bindings; your domain name(s) can be specified in pci-dss-v4.0-restrict-rbac-subjects.
  15. A StorageClass that encrypts by default is required; permitted StorageClasses can be specified in pci-dss-v4.0-restrict-storageclass.
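
As an illustration of requirement 3, a default deny egress NetworkPolicy for a namespace might look like the following sketch (the namespace name is hypothetical; apply one per namespace in your environment):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: my-app   # hypothetical namespace; repeat per namespace
spec:
  podSelector: {}     # an empty selector matches all pods in the namespace
  policyTypes:
    - Egress          # no egress rules are listed, so all egress is denied
```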

Audit PCI-DSS v4.0 policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads' compliance with the PCI-DSS v4.0 policies outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pci-dss-v4.0
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pci-dss-v4.0
The output is similar to the following:
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/pci-dss-v4.0-require-peer-authentication-strict-mtls created
k8sblockallingress.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-ingress created
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-creation-with-default-serviceaccount created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-role-wildcards created
k8srequirecosnodeimage.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-node-image created
k8srequiredaemonsets.constraints.gatekeeper.sh/pci-dss-v4.0-require-av-daemonset created
k8srequiredefaultdenyegresspolicy.constraints.gatekeeper.sh/pci-dss-v4.0-require-default-deny-network-policies created
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/pci-dss-v4.0-require-namespace-network-policies created
k8srequirevalidrangesfornetworks.constraints.gatekeeper.sh/pci-dss-v4.0-require-valid-network-ranges created
k8srequiredannotations.constraints.gatekeeper.sh/pci-dss-v4.0-require-apps-annotations created
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-require-managed-by-label created
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-resources-have-required-labels created
k8srestrictnamespaces.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-default-namespace created
k8srestrictrbacsubjects.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-rbac-subjects created
k8srestrictrolebindings.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-cluster-admin-role created
k8srestrictrolerules.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-pods-exec created
k8sstorageclass.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-storageclass created
  3. Verify that the policy constraints have been installed and check whether violations exist across the cluster:
kubectl get constraints -l bundleName=pci-dss-v4.0
The output is similar to the following:
NAME                                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
asmpeerauthnstrictmtls.constraints.gatekeeper.sh/pci-dss-v4.0-require-peer-authentication-strict-mtls   dryrun               0

NAME                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockallingress.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-ingress   dryrun               0

NAME                                                                                                                             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sblockcreationwithdefaultserviceaccount.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-creation-with-default-serviceaccount   dryrun               0

NAME                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-role-wildcards   dryrun               0

NAME                                                                                ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirecosnodeimage.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-node-image   dryrun               0

NAME                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredaemonsets.constraints.gatekeeper.sh/pci-dss-v4.0-require-av-daemonset   dryrun               0

NAME                                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredannotations.constraints.gatekeeper.sh/pci-dss-v4.0-require-apps-annotations   dryrun               0

NAME                                                                                                             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredefaultdenyegresspolicy.constraints.gatekeeper.sh/pci-dss-v4.0-require-default-deny-network-policies   dryrun               0

NAME                                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-require-managed-by-label         dryrun               0
k8srequiredlabels.constraints.gatekeeper.sh/pci-dss-v4.0-resources-have-required-labels   dryrun               0

NAME                                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirenamespacenetworkpolicies.constraints.gatekeeper.sh/pci-dss-v4.0-require-namespace-network-policies   dryrun               0

NAME                                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srequirevalidrangesfornetworks.constraints.gatekeeper.sh/pci-dss-v4.0-require-valid-network-ranges   dryrun               0

NAME                                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictnamespaces.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-default-namespace   dryrun               0

NAME                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrbacsubjects.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-rbac-subjects   dryrun               0

NAME                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-cluster-admin-role   dryrun               0

NAME                                                                             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolerules.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-pods-exec   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sstorageclass.constraints.gatekeeper.sh/pci-dss-v4.0-restrict-storageclass   dryrun               0

View policy violations

After the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

  kubectl get constraint -l bundleName=pci-dss-v4.0 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

  kubectl get constraint -l bundleName=pci-dss-v4.0 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change PCI-DSS v4.0 policy bundle enforcement action

After you've reviewed the policy violations on your cluster, consider changing the enforcement action so that the admission controller either warns on or denies non-compliant resources before they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care: it can block required changes, interrupting critical workloads or the cluster itself. On clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades, it is strongly recommended to use only the warn or dryrun enforcement actions. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=pci-dss-v4.0 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pci-dss-v4.0

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [pci-dss-v4.0-restrict-default-namespace] <default> namespace is restricted
pod/wp-non-compliant created

Remove PCI-DSS v4.0 policy bundle

If needed, the PCI-DSS v4.0 policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pci-dss-v4.0

Use Pod Security Policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Policy bundle to achieve many of the same protections as Kubernetes Pod Security Policy (PSP), with the added ability to test your policies before enforcing them and to exclude specific resources from coverage.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, and who do so by providing and maintaining automation to audit or enforce those requirements.

The bundle includes the following constraints, whose parameters map to the corresponding Kubernetes Pod Security Policy (PSP) field names (control IDs):

| Constraint Name | Control ID | Type |
| --- | --- | --- |
| psp-v2022-psp-flexvolume-drivers | Allow specific FlexVolume drivers | allowedFlexVolumes |
| psp-v2022-psp-allow-privilege-escalation | Restricting escalation to root privileges | allowPrivilegeEscalation |
| psp-v2022-psp-apparmor | The AppArmor profile used by containers | annotations |
| psp-v2022-psp-capabilities | Linux capabilities | allowedCapabilities, requiredDropCapabilities |
| psp-v2022-psp-forbidden-sysctls | The sysctl profile used by containers | forbiddenSysctls |
| psp-v2022-psp-fsgroup | Allocating an FSGroup that owns the pod's volumes | fsGroup |
| psp-v2022-psp-host-filesystem | Usage of the host filesystem | allowedHostPaths |
| psp-v2022-psp-host-namespace | Usage of host namespaces | hostPID, hostIPC |
| psp-v2022-psp-host-network-ports | Usage of host networking and ports | hostNetwork, hostPorts |
| psp-v2022-psp-pods-allowed-user-ranges | The user and group IDs of the container | runAsUser, runAsGroup, supplementalGroups, fsGroup |
| psp-v2022-psp-privileged-container | Running of privileged containers | privileged |
| psp-v2022-psp-proc-mount | The Allowed Proc Mount types for the container | allowedProcMountTypes |
| psp-v2022-psp-readonlyrootfilesystem | Requiring the use of a read-only root file system | readOnlyRootFilesystem |
| psp-v2022-psp-seccomp | The seccomp profile used by containers | annotations |
| psp-v2022-psp-selinux-v2 | The SELinux context of the container | seLinux |
| psp-v2022-psp-volume-types | Usage of volume types | volumes |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit Pod Security Policy policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help test your workloads' compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode to reveal violations and, more importantly, give yourself a chance to fix them before enforcing them on your Kubernetes cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
The output is similar to the following:
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-v2022-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/psp-v2022-psp-pods-allowed-user-ranges created
k8spspapparmor.constraints.gatekeeper.sh/psp-v2022-psp-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/psp-v2022-psp-capabilities created
k8spspfsgroup.constraints.gatekeeper.sh/psp-v2022-psp-fsgroup created
k8spspflexvolumes.constraints.gatekeeper.sh/psp-v2022-psp-flexvolume-drivers created
k8spspforbiddensysctls.constraints.gatekeeper.sh/psp-v2022-psp-forbidden-sysctls created
k8spsphostfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-host-filesystem created
k8spsphostnamespace.constraints.gatekeeper.sh/psp-v2022-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-v2022-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-v2022-psp-privileged-container created
k8spspprocmount.constraints.gatekeeper.sh/psp-v2022-psp-proc-mount created
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-readonlyrootfilesystem created
k8spspselinuxv2.constraints.gatekeeper.sh/psp-v2022-psp-selinux-v2 created
k8spspseccomp.constraints.gatekeeper.sh/psp-v2022-psp-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/psp-v2022-psp-volume-types created
  3. Verify that the policy constraints have been installed and check whether violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/psp-v2022
The output is similar to the following:
NAME                                                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/psp-v2022-psp-allow-privilege-escalation   dryrun               0

NAME                                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/psp-v2022-psp-pods-allowed-user-ranges   dryrun               0

NAME                                                              ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/psp-v2022-psp-apparmor   dryrun               0

NAME                                                                      ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/psp-v2022-psp-capabilities   dryrun               0

NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspfsgroup.constraints.gatekeeper.sh/psp-v2022-psp-fsgroup   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspflexvolumes.constraints.gatekeeper.sh/psp-v2022-psp-flexvolume-drivers   dryrun               0

NAME                                                                               ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/psp-v2022-psp-forbidden-sysctls   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-host-filesystem   dryrun               0

NAME                                                                         ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/psp-v2022-psp-host-namespace   dryrun               0

NAME                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/psp-v2022-psp-host-network-ports   dryrun               0

NAME                                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/psp-v2022-psp-privileged-container   dryrun               0

NAME                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/psp-v2022-psp-proc-mount   dryrun               0

NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspreadonlyrootfilesystem.constraints.gatekeeper.sh/psp-v2022-psp-readonlyrootfilesystem   dryrun               0

NAME                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/psp-v2022-psp-selinux-v2   dryrun               0

NAME                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/psp-v2022-psp-seccomp   dryrun               0

NAME                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/psp-v2022-psp-volume-types   dryrun               0
  4. (Optional) Adjust the PSP field name parameters in the constraint files as required for your cluster environment. For more details, check the link for the specific PSP field name in the preceding table. For example, in psp-host-network-ports:
parameters:
  hostNetwork: true
  min: 80
  max: 9000
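
Applied as a full constraint, such a customization might look like the following sketch. The kind and apiVersion here are inferred from the resource names in the apply output above; verify them against your installed constraint templates before use:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNetworkingPorts        # inferred from k8spsphostnetworkingports in the apply output
metadata:
  name: psp-v2022-psp-host-network-ports
spec:
  enforcementAction: dryrun
  parameters:
    hostNetwork: true                  # permit hostNetwork
    min: 80                            # allowed hostPort range
    max: 9000
```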

View policy violations

After the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l policycontroller.gke.io/bundleName=psp-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l policycontroller.gke.io/bundleName=psp-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change Pod Security Policy policy bundle enforcement action

After you've reviewed the policy violations on your cluster, consider changing the enforcement action so that the admission controller either warns on or denies non-compliant resources before they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care: it can block required changes, interrupting critical workloads or the cluster itself. On clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades, it is strongly recommended to use only the warn or dryrun enforcement actions. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=psp-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=psp-v2022

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        name: wordpress
EOF

The admission controller should produce warnings listing the policies that this resource violates, as shown in the following example:

Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/fsGroup. Allowed fsGroup: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/runAsGroup. Allowed runAsGroup: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/runAsUser
Warning: [psp-v2022-psp-pods-allowed-user-ranges] Container wordpress is attempting to run without a required securityContext/supplementalGroups. Allowed supplementalGroups: {"ranges": [{"max": 200, "min": 100}], "rule": "MustRunAs"}
Warning: [psp-v2022-psp-allow-privilege-escalation] Privilege escalation container is not allowed: wordpress
Warning: [psp-v2022-psp-seccomp] Seccomp profile 'not configured' is not allowed for container 'wordpress'. Found at: no explicit profile found. Allowed profiles: {"RuntimeDefault", "docker/default", "runtime/default"}
Warning: [psp-v2022-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["must_drop"] or "ALL"
Warning: [psp-v2022-psp-readonlyrootfilesystem] only read-only root filesystem container is allowed: wordpress
pod/wp-non-compliant created

Remove Pod Security Policy policy bundle

If needed, the Pod Security Policy policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=psp-v2022

Use Pod Security Standards Baseline policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Standards Baseline bundle to achieve many of the same protections as the Kubernetes Pod Security Standards (PSS) Baseline policy, with the added ability to test your policies before enforcing them and to exclude specific resources from coverage.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements, and who do so by providing and maintaining automation to audit or enforce those requirements.

Pod Security Standards Baseline policy bundle constraints

| Constraint Name | Control Description | Type |
| --- | --- | --- |
| pss-baseline-v2022-apparmor | The AppArmor profile used by containers | AppArmor |
| pss-baseline-v2022-capabilities | Linux capabilities | Capabilities |
| pss-baseline-v2022-host-namespaces-host-pid-ipc | Usage of host namespaces | Host Namespaces |
| pss-baseline-v2022-host-namespaces-hostnetwork | Use of host networking | Host Namespaces |
| pss-baseline-v2022-host-ports | Usage of host ports | Host Ports (configurable) |
| pss-baseline-v2022-hostpath-volumes | Usage of the host filesystem | HostPath Volumes |
| pss-baseline-v2022-hostprocess | Usage of Windows HostProcess | HostProcess |
| pss-baseline-v2022-privileged-containers | Running of privileged containers | Privileged Containers |
| pss-baseline-v2022-proc-mount-type | The Allowed Proc Mount types for the container | /proc Mount Type |
| pss-baseline-v2022-seccomp | The seccomp profile used by containers | Seccomp |
| pss-baseline-v2022-selinux | The SELinux context of the container | SELinux |
| pss-baseline-v2022-sysctls | The sysctl profile used by containers | Sysctls |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit Pod Security Standards Baseline policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help you test your workloads' compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode. Audit mode reveals violations and, more importantly, gives you a chance to fix them before enforcement begins on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
The output is similar to the following:
k8spspapparmor.constraints.gatekeeper.sh/pss-baseline-v2022-apparmor created
k8spspcapabilities.constraints.gatekeeper.sh/pss-baseline-v2022-capabilities created
k8spsphostfilesystem.constraints.gatekeeper.sh/pss-baseline-v2022-hostpath-volumes created
k8spsphostnamespace.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-host-pid-ipc created
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-hostnetwork created
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/pss-baseline-v2022-privileged-containers created
k8spspprocmount.constraints.gatekeeper.sh/pss-baseline-v2022-proc-mount-type created
k8spspselinuxv2.constraints.gatekeeper.sh/pss-baseline-v2022-selinux created
k8spspseccomp.constraints.gatekeeper.sh/pss-baseline-v2022-seccomp created
k8spspforbiddensysctls.constraints.gatekeeper.sh/pss-baseline-v2022-sysctls created
  3. Verify that the policy constraints have been installed and check whether violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/pss-baseline-v2022
The output is similar to the following:
NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspapparmor.constraints.gatekeeper.sh/pss-baseline-v2022-apparmor   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/pss-baseline-v2022-capabilities   dryrun               0

NAME                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostfilesystem.constraints.gatekeeper.sh/pss-baseline-v2022-hostpath-volumes   dryrun               0

NAME                                                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-host-pid-ipc   dryrun               0

NAME                                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-namespaces-hostnetwork   dryrun               0
k8spsphostnetworkingports.constraints.gatekeeper.sh/pss-baseline-v2022-host-ports                    dryrun               0

NAME                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/pss-baseline-v2022-privileged-containers   dryrun               0

NAME                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprocmount.constraints.gatekeeper.sh/pss-baseline-v2022-proc-mount-type   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspselinuxv2.constraints.gatekeeper.sh/pss-baseline-v2022-selinux   dryrun               0

NAME                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/pss-baseline-v2022-seccomp   dryrun               0

NAME                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspforbiddensysctls.constraints.gatekeeper.sh/pss-baseline-v2022-sysctls   dryrun               0
  4. (Optional) Adjust the PSP Field Name parameters in the constraint files as required for your cluster environment. For more details, see the link for the specific PSP Field Name in the preceding table. For example, in psp-host-network-ports:
parameters:
  # A minimum restricted known list can be implemented here.
  min: 0
  max: 0
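Putting that snippet in context, an adjusted constraint might look like the following sketch. The kind matches the k8spsphostnetworkingports resources created earlier; the port range values are illustrative for an environment that permits a narrow band of host ports:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPHostNetworkingPorts
metadata:
  name: pss-baseline-v2022-host-ports
spec:
  enforcementAction: dryrun
  parameters:
    # Example range: permit only host ports 8080-8090 in this environment.
    min: 8080
    max: 8090
```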

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=pss-baseline-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=pss-baseline-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
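The jq pipeline above filters the constraint list for non-zero violation totals. If jq isn't available, the tabular output of `kubectl get constraint` can be filtered with awk instead. The following is a minimal sketch; the here-document stands in for real cluster output, and the constraint names and counts are illustrative:

```shell
# Simulated `kubectl get constraint -l bundleName=pss-baseline-v2022` output.
# In practice, pipe the real command into the awk filter instead.
sample_output() {
cat <<'EOF'
NAME                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
pss-baseline-v2022-host-ports   dryrun               2
pss-baseline-v2022-seccomp      dryrun               0
EOF
}

# Skip the header row and print only constraints reporting at least one violation.
sample_output | awk 'NR > 1 && $3 > 0 { print $1, $3 }'
```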

Change Pod Security Standards Baseline policy bundle enforcement action

Once you've reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources when they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care, because it can block required changes and interrupt critical workloads or the cluster itself. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=pss-baseline-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pss-baseline-v2022

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        hostPort: 80
        name: wordpress
EOF

The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:

Warning: [pss-baseline-v2022-host-ports] The specified hostNetwork and hostPort are not allowed, pod: wp-non-compliant. Allowed values: {"max": 0, "min": 0}
pod/wp-non-compliant created

Remove Pod Security Standards Baseline policy bundle

If needed, the Pod Security Standards Baseline policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pss-baseline-v2022

Use Pod Security Standards Restricted policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the Pod Security Standards Restricted bundle to achieve many of the same protections as the Kubernetes Pod Security Standards (PSS) Restricted policy, with the added ability to test your policies before enforcing them and to exclude specific resources from coverage.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.

The bundle includes these constraints which map to the following Kubernetes Pod Security Standards (PSS) Restricted policy controls:

| Constraint name | Control description | Type |
| --- | --- | --- |
| pss-restricted-v2022-capabilities | Linux capabilities | Capabilities |
| pss-restricted-v2022-privilege-escalation | Restricting escalation to root privileges | Privilege Escalation |
| pss-restricted-v2022-psp-volume-types | Usage of volume types | Volume Types |
| pss-restricted-v2022-running-as-non-root | The runAsNonRoot value of the container | Running as Non-root |
| pss-restricted-v2022-running-as-non-root-user | The user ID of the container | Running as Non-root user |
| pss-restricted-v2022-seccomp | The seccomp profile used by containers | Seccomp |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit Pod Security Standards Restricted policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help you test your workloads' compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode. Audit mode reveals violations and, more importantly, gives you a chance to fix them before enforcement begins on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
The output is similar to the following:
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/pss-restricted-v2022-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/pss-restricted-v2022-running-as-non-root created
k8spspcapabilities.constraints.gatekeeper.sh/pss-restricted-v2022-capabilities created
k8spspseccomp.constraints.gatekeeper.sh/pss-restricted-v2022-seccomp created
k8spspvolumetypes.constraints.gatekeeper.sh/pss-restricted-v2022-psp-volume-types created
  3. Verify that the policy constraints have been installed and check whether violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/anthos-bundles/pss-restricted-v2022
The output is similar to the following:
NAME                                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/pss-restricted-v2022-privilege-escalation   dryrun               0

NAME                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/pss-restricted-v2022-running-as-non-root   dryrun               0

NAME                                                                             ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/pss-restricted-v2022-capabilities   dryrun               0

NAME                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/pss-restricted-v2022-seccomp   dryrun               0

NAME                                                                                ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspvolumetypes.constraints.gatekeeper.sh/pss-restricted-v2022-psp-volume-types   dryrun               0

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=pss-restricted-v2022 -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=pss-restricted-v2022 -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change Pod Security Standards Restricted policy bundle enforcement action

Once you've reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources when they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care, because it can block required changes and interrupt critical workloads or the cluster itself. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=pss-restricted-v2022 -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=pss-restricted-v2022

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        hostPort: 80
        name: wordpress
EOF

The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:

Warning: [pss-restricted-v2022-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["ALL"] or "ALL"
pod/wp-non-compliant created

Remove Pod Security Standards Restricted policy bundle

If needed, the Pod Security Standards Restricted policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=pss-restricted-v2022

Policy Controller bundles

This page describes what Policy Controller bundles are and provides an overview of the available policy bundles.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.

About Policy Controller bundles

You can use Policy Controller to apply individual constraints to your cluster or to write your own custom policies. You can also use policy bundles, which let you audit your clusters without writing any constraints. A policy bundle is a group of constraints that helps you apply best practices, meet industry standards, or address regulatory requirements across your cluster resources.

You can apply policy bundles to your existing clusters to check whether your workloads are compliant. When you apply a policy bundle, it audits your cluster by applying constraints with the dryrun enforcement action, which lets you see violations without blocking your workloads. It is also recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

For example, one type of policy bundle is the CIS Kubernetes Benchmark bundle, which can help audit your cluster resources against the CIS Kubernetes Benchmark. This benchmark is a set of recommendations for configuring Kubernetes resources to support a strong security posture.
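Concretely, each constraint in a bundle is a Gatekeeper constraint resource labeled with the bundle name and applied with the dryrun enforcement action. The following sketch shows the general shape of one such constraint; the kind, name, and label value are illustrative, and the exact fields shipped in a given bundle may differ:

```yaml
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sPSPPrivilegedContainer        # a constraint template from the default library
metadata:
  name: example-bundle-privileged-containers   # illustrative name
  labels:
    bundleName: example-bundle         # bundle constraints are selected by this label
spec:
  enforcementAction: dryrun            # audit only; report violations without blocking
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
```

This label is what the `kubectl get constraint -l bundleName=...` commands on the bundle pages rely on.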

Available Policy Controller bundles

The following table lists the available policy bundles. Select the name of the policy bundle to read documentation on how to apply the bundle, audit resources, and enforce policies.

| Name and description | Bundle alias | Type |
| --- | --- | --- |
| CIS Kubernetes Benchmark: Audit compliance of your clusters against the CIS Kubernetes Benchmark v1.5, a set of recommendations for configuring Kubernetes to support a strong security posture. | cis-k8s-v1.5.1 | Kubernetes standard |
| Pod Security Policy: Apply protections based on the Kubernetes Pod Security Policy (PSP). | psp-v2022 | Kubernetes standard |
| Pod Security Standards Baseline: Apply protections based on the Kubernetes Pod Security Standards (PSS) Baseline policy. | pss-baseline-v2022 | Kubernetes standard |
| Pod Security Standards Restricted: Apply protections based on the Kubernetes Pod Security Standards (PSS) Restricted policy. | pss-restricted-v2022 | Kubernetes standard |
| Policy Essentials: Apply best practices to your cluster resources. | policy-essentials | Best practices |
| Samsung Security Checklist: Apply best practices to conform to Samsung Security Checklist items in your cluster resources. | samsung-security-checklist | Best practices |
| NIST SP 800-53 Rev. 5: Implements controls listed in NIST Special Publication (SP) 800-53, Revision 5. The bundle may help organizations protect their systems and data from a variety of threats by implementing out-of-the-box security and privacy policies. | nist-sp-800-53-r5 | Industry standard |
| NIST SP 800-190: Implements controls listed in NIST Special Publication (SP) 800-190, Application Container Security Guide. The bundle is intended to help organizations with application container security, including image security, container runtime security, network security, and host system security. | nist-sp-800-190 | Industry standard |
| NSA CISA Kubernetes Hardening Guide v1.2: Apply protections based on the NSA CISA Kubernetes Hardening Guide v1.2. | nsa-cisa-k8s-v1.2 | Industry standard |
| PCI-DSS v4.0: Apply protections based on the Payment Card Industry Data Security Standard (PCI-DSS) v4.0. | pci-dss-v4.0 | Industry standard |


Use Policy Essentials policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the Policy Essentials bundle to apply KOSMOS recommended best practices to your cluster resources.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.

This bundle of constraints addresses and enforces policies in the following domains:

  • RBAC and service accounts
  • Pod Security Policies
  • Container Network Interface (CNI)
  • Secrets management
  • General policies

Policy Essentials policy bundle constraints

| Constraint name | Constraint description |
| --- | --- |
| policy-essentials-no-secrets-as-env-vars | Prefer using Secrets as files over Secrets as environment variables |
| policy-essentials-pods-require-security-context | Apply Security Context to your Pods and containers |
| policy-essentials-prohibit-role-wildcard-access | Minimize the use of wildcards in Roles and ClusterRoles |
| policy-essentials-psp-allow-privilege-escalation-container | Minimize the admission of containers with allowPrivilegeEscalation |
| policy-essentials-psp-capabilities | Containers must drop the NET_RAW capability and aren't permitted to add back any capabilities |
| policy-essentials-psp-host-namespace | Minimize the admission of containers with hostPID or hostIPC set to true |
| policy-essentials-psp-host-network-ports | Minimize the admission of containers wanting to share the host network namespace |
| policy-essentials-psp-pods-must-run-as-nonroot | Minimize the admission of root containers |
| policy-essentials-psp-privileged-container | Minimize the admission of privileged containers |
| policy-essentials-psp-seccomp-default | Ensure that the seccomp profile is set to runtime/default or docker/default in your Pod definitions |
| policy-essentials-restrict-clusteradmin-rolebindings | Minimize the use of the cluster-admin role |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit Policy Essentials policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To help you test your workloads' compliance with the KOSMOS recommended best practices outlined in the preceding table, you can deploy these constraints in "audit" mode. Audit mode reveals violations and, more importantly, gives you a chance to fix them before enforcement begins on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.

  1. (Optional) Preview the policy constraints with kubectl:
kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
  2. Apply the policy constraints with kubectl:
kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
The output is similar to the following:
k8snoenvvarsecrets.constraints.gatekeeper.sh/policy-essentials-no-secrets-as-env-vars created
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/policy-essentials-psp-allow-privilege-escalation created
k8spspallowedusers.constraints.gatekeeper.sh/policy-essentials-psp-pods-must-run-as-nonroot created
k8spspcapabilities.constraints.gatekeeper.sh/policy-essentials-psp-capabilities created
k8spsphostnamespace.constraints.gatekeeper.sh/policy-essentials-psp-host-namespace created
k8spsphostnetworkingports.constraints.gatekeeper.sh/policy-essentials-psp-host-network-ports created
k8spspprivilegedcontainer.constraints.gatekeeper.sh/policy-essentials-psp-privileged-container created
k8spspseccomp.constraints.gatekeeper.sh/policy-essentials-psp-seccomp-default created
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/policy-essentials-pods-require-security-context created
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/policy-essentials-prohibit-role-wildcard-access created
k8srestrictrolebindings.constraints.gatekeeper.sh/policy-essentials-restrict-clusteradmin-rolebindings created
  3. Verify that the policy constraints have been installed and check whether violations exist across the cluster:
kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/policy-essentials
The output is similar to the following:
NAME                                                                                          ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8snoenvvarsecrets.constraints.gatekeeper.sh/policy-essentials-no-secrets-as-env-vars   dryrun               0

NAME                                                                                                                       ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowprivilegeescalationcontainer.constraints.gatekeeper.sh/policy-essentials-psp-allow-privilege-escalation   dryrun               0

NAME                                                                                                ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspallowedusers.constraints.gatekeeper.sh/policy-essentials-psp-pods-must-run-as-nonroot   dryrun               0

NAME                                                                                    ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspcapabilities.constraints.gatekeeper.sh/policy-essentials-psp-capabilities   dryrun               0

NAME                                                                                       ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnamespace.constraints.gatekeeper.sh/policy-essentials-psp-host-namespace   dryrun               0

NAME                                                                                                 ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spsphostnetworkingports.constraints.gatekeeper.sh/policy-essentials-psp-host-network-ports   dryrun               0

NAME                                                                                                   ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspprivilegedcontainer.constraints.gatekeeper.sh/policy-essentials-psp-privileged-container   dryrun               0

NAME                                                                                  ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spspseccomp.constraints.gatekeeper.sh/policy-essentials-psp-seccomp-default   dryrun               0

NAME                                                                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8spodsrequiresecuritycontext.constraints.gatekeeper.sh/policy-essentials-pods-require-security-context   dryrun               0

NAME                                                                                                            ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8sprohibitrolewildcardaccess.constraints.gatekeeper.sh/policy-essentials-prohibit-role-wildcard-access   dryrun               0

NAME                                                                                                           ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
k8srestrictrolebindings.constraints.gatekeeper.sh/policy-essentials-restrict-clusteradmin-rolebindings   dryrun               0

View policy violations

Once the policy constraints are installed in audit mode, you can view violations on the cluster in the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

kubectl get constraint -l bundleName=policy-essentials -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'

If violations are present, a listing of the violation messages per constraint can be viewed with:

kubectl get constraint -l bundleName=policy-essentials -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'

Change Policy Essentials policy bundle enforcement action

Once you've reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources when they are applied to the cluster.

[!WARNING] Use the deny enforcement action with care, because it can block required changes and interrupt critical workloads or the cluster itself. It is strongly recommended to use only the warn or dryrun enforcement actions on clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the policies' enforcement action to warn:
kubectl get constraint -l bundleName=policy-essentials -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
  2. Verify that the policy constraints' enforcement action has been updated:
kubectl get constraint -l bundleName=policy-essentials

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  namespace: default
  name: wp-non-compliant
  labels:
    app: wordpress
spec:
  containers:
    - image: wordpress
      name: wordpress
      ports:
      - containerPort: 80
        name: wordpress
EOF

The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:

Warning: [policy-essentials-psp-capabilities] container <wordpress> is not dropping all required capabilities. Container must drop all of ["NET_RAW"] or "ALL"
pod/wp-non-compliant created

Remove Policy Essentials policy bundle

If needed, the Policy Essentials policy bundle can be removed from the cluster.

  • Use kubectl to remove the policies:
kubectl delete constraint -l bundleName=policy-essentials

Use Samsung Security Checklist policy constraints

Policy Controller comes with a default library of constraint templates that can be used with the Samsung Security Checklist bundle to apply Samsung Security Checklist items to your cluster resources.

This page contains instructions for manually applying a policy bundle. Alternatively, you can apply policy bundles directly.

This page is for IT administrators and Operators who want to ensure that all resources running within the cloud platform meet organizational compliance requirements by providing and maintaining automation to audit or enforce.

This bundle of constraints addresses and enforces policies in the following domains:

  • RBAC and service accounts
  • Pod Security Policies
  • Container Network Interface (CNI)
  • Secrets management
  • General policies

Samsung security checklist policy bundle constraints

| Constraint name | Constraint description | Cluster type |
| --- | --- | --- |
| `samsung-security-checklist-app-gw-require-tls-version` | Requires that Azure Application Gateway apply a cipher policy that allows TLSv1.2 or higher | AKS |
| `samsung-security-checklist-lb-restrict-traffic-rules` | Ensures that the security group's inbound and outbound rules comply with the Samsung Security Checklist requirements | AKS |
| `samsung-security-checklist-aks-require-private-cluster` | Enables a private cluster to restrict worker node-to-API access | AKS |
| `samsung-security-checklist-aks-restrict-public-access-sources` | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | AKS |
| `samsung-security-checklist-alb-require-https-backend` | Requires that Application Load Balancer (ALB) target groups use the HTTPS protocol to encrypt communication | EKS |
| `samsung-security-checklist-alb-require-https-protocol` | Requires that Application Load Balancers (ALB) allow only encrypted protocols (HTTPS:443) | EKS |
| `samsung-security-checklist-alb-require-https-redirect` | Requires that Application Load Balancers (ALB) enable SSL redirect and specify port 443 as the redirect target | EKS |
| `samsung-security-checklist-alb-require-tls-version` | Requires that Application Load Balancers (ALB) apply a cipher policy that allows TLSv1.2 or higher | EKS |
| `samsung-security-checklist-alb-restrict-traffic-rules` | Ensures that the security group's inbound and outbound rules comply with the Samsung Security Checklist requirements | EKS |
| `samsung-security-checklist-nlb-require-tls-protocol` | Requires that Network and Classic Load Balancers allow only encrypted protocols (TLS:443) | EKS |
| `samsung-security-checklist-nlb-require-tls-version` | Requires that Network and Classic Load Balancers apply a cipher policy that allows TLSv1.2 or higher | EKS |
| `samsung-security-checklist-nlb-restrict-traffic-rules` | Ensures that the security group's inbound and outbound rules comply with the Samsung Security Checklist requirements | EKS |
| `samsung-security-checklist-eks-disable-ssh-access` | Disables SSH access to all node groups | EKS |
| `samsung-security-checklist-eks-require-logging` | Requires logging to be enabled to detect abnormal access to EKS cluster services and systems and to provide audit records | EKS |
| `samsung-security-checklist-eks-restrict-public-access-sources` | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | EKS |
| `samsung-security-checklist-alb-require-https-protocol` | Requires that Application Load Balancers (ALB) allow only encrypted protocols (HTTPS:443) | GKE |
| `samsung-security-checklist-alb-require-https-redirect` | Requires that Application Load Balancers (ALB) enable SSL redirect and specify port 443 as the redirect target | GKE |
| `samsung-security-checklist-alb-require-tls-version` | Requires that Application Load Balancers (ALB) apply a cipher policy that allows TLSv1.2 or higher | GKE |
| `samsung-security-checklist-gke-require-private-cluster` | Enables a private cluster to restrict worker node-to-API access | GKE |
| `samsung-security-checklist-gke-require-secrets-encryption` | Protects your secrets in etcd with a key that you manage in Cloud KMS | GKE |
| `samsung-security-checklist-gke-restrict-public-access-sources` | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | GKE |
| `samsung-security-checklist-alb-require-https-backend` | Requires that Application Load Balancer (ALB) target groups use the HTTPS protocol to encrypt communication | MKS |
| `samsung-security-checklist-alb-require-https-protocol` | Requires that Application Load Balancers (ALB) allow only encrypted protocols (HTTPS:443) | MKS |
| `samsung-security-checklist-alb-require-https-redirect` | Requires that Application Load Balancers (ALB) enable SSL redirect and specify port 443 as the redirect target | MKS |
| `samsung-security-checklist-alb-require-tls-version` | Requires that Application Load Balancers (ALB) apply a cipher policy that allows TLSv1.2 or higher | MKS |
| `samsung-security-checklist-alb-restrict-traffic-rules` | Ensures that the security group's inbound and outbound rules comply with the Samsung Security Checklist requirements | MKS |
| `samsung-security-checklist-nlb-require-tls-protocol` | Requires that Network and Classic Load Balancers allow only encrypted protocols (TLS:443) | MKS |
| `samsung-security-checklist-nlb-require-tls-version` | Requires that Network and Classic Load Balancers apply a cipher policy that allows TLSv1.2 or higher | MKS |
| `samsung-security-checklist-nlb-restrict-traffic-rules` | Ensures that the security group's inbound and outbound rules comply with the Samsung Security Checklist requirements | MKS |
| `samsung-security-checklist-mks-disable-ssh-access` | Disables SSH access to all node groups | MKS |
| `samsung-security-checklist-mks-require-logging` | Requires logging to be enabled to detect abnormal access to MKS cluster services and systems and to provide audit records | MKS |
| `samsung-security-checklist-mks-restrict-public-access-sources` | Restricts public access sources for API server endpoints to prevent unrestricted access from the internet | MKS |

Before you begin

  1. Install Policy Controller on your cluster with the default library of constraint templates.

Audit Samsung security checklist policy bundle

Policy Controller lets you enforce policies for your Kubernetes cluster. To test your workloads' compliance with the Samsung Security Checklist policies outlined in the preceding table, you can deploy these constraints in audit mode. Audit mode reveals violations and, more importantly, gives you a chance to fix them before enforcing the constraints on your cluster.

You can apply these policies with spec.enforcementAction set to dryrun using kubectl.
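The `dryrun` value is set with the same JSON Patch shape that this page later uses to switch the bundle to `warn`. As a minimal illustration of that payload, the following sketch builds and serializes the patch; the `enforcement_patch` helper is hypothetical, not part of any Policy Controller tooling:

```python
import json

def enforcement_patch(action: str) -> str:
    """Serialize a JSON Patch that sets a constraint's enforcement action.

    The path /spec/enforcementAction matches the field used by Policy
    Controller (Gatekeeper) constraints; "dryrun" audits without blocking.
    """
    if action not in ("dryrun", "warn", "deny"):
        raise ValueError(f"unknown enforcement action: {action}")
    patch = [{"op": "replace", "path": "/spec/enforcementAction", "value": action}]
    return json.dumps(patch)

# The serialized patch can be passed to `kubectl patch ... --type='json' -p=...`
print(enforcement_patch("dryrun"))
```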

  1. Optional: Preview the policy constraints with kubectl:

     ```shell
     kubectl kustomize https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
     ```

  2. Apply the policy constraints with kubectl:

     ```shell
     kubectl apply -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
     ```

     The output is the following:

     ```
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-backend created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-protocol created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-redirect created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-tls-version created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-restrict-traffic-rules created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-protocol created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-version created
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-restrict-traffic-rules created
     ```

  3. Verify that the policy constraints have been installed, and check whether violations exist across the cluster:

     ```shell
     kubectl get -k https://github.com/GoogleCloudPlatform/gke-policy-library.git/bundles/samsung-security-checklist
     ```

     The output is similar to the following:

     ```
     NAME                                                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-backend    dryrun               1
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-protocol   dryrun               1
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-https-redirect   dryrun               1
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-tls-version      dryrun               2
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-restrict-traffic-rules   dryrun               2
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-protocol     dryrun               0
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-version      dryrun               0
     mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-restrict-traffic-rules   dryrun               0
     ```
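If you capture that tabular output, a short script can flag the constraints that reported violations. The following is an illustrative sketch, not part of the bundle's tooling; the `SAMPLE` string is a trimmed copy of the output shown above:

```python
SAMPLE = """\
NAME                                                                                                     ENFORCEMENT-ACTION   TOTAL-VIOLATIONS
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-alb-require-tls-version      dryrun               2
mksrequiredannotations.constraints.gatekeeper.sh/samsung-security-checklist-nlb-require-tls-version      dryrun               0
"""

def constraints_with_violations(table: str) -> list[tuple[str, int]]:
    """Return (constraint name, violation count) for rows with violations > 0."""
    rows = table.strip().splitlines()[1:]  # skip the header row
    found = []
    for row in rows:
        name, _action, total = row.split()  # columns are whitespace-separated
        if int(total) > 0:
            # Keep only the short constraint name after the final slash.
            found.append((name.rsplit("/", 1)[-1], int(total)))
    return found

print(constraints_with_violations(SAMPLE))
# → [('samsung-security-checklist-alb-require-tls-version', 2)]
```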

View policy violations

After the policy constraints are installed in audit mode, you can view violations on the cluster in the UI using the Policy Controller dashboard.

You can also use kubectl to view violations on the cluster using the following command:

```shell
kubectl get constraint -l bundleName=samsung-security-checklist -o json | jq -cC '.items[]| [.metadata.name,.status.totalViolations]'
```

If violations are present, you can view a listing of the violation messages per constraint with:

```shell
kubectl get constraint -l bundleName=samsung-security-checklist -o json | jq -C '.items[]| select(.status.totalViolations>0)| [.metadata.name,.status.violations[]?]'
```
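If jq isn't available, the same filtering can be done in Python. This sketch applies the equivalent of the two jq expressions above to JSON shaped like the output of `kubectl get constraint -o json`; the inline sample mimics that structure and is not real cluster output:

```python
import json

# Sample shaped like `kubectl get constraint -l bundleName=... -o json` output.
SAMPLE = json.loads("""
{"items": [
  {"metadata": {"name": "samsung-security-checklist-alb-require-tls-version"},
   "status": {"totalViolations": 2,
              "violations": [{"message": "ALB is not using TLSv1.2 or higher"}]}},
  {"metadata": {"name": "samsung-security-checklist-nlb-require-tls-version"},
   "status": {"totalViolations": 0}}
]}
""")

# Equivalent of: jq '.items[]| [.metadata.name,.status.totalViolations]'
totals = [(c["metadata"]["name"], c["status"].get("totalViolations", 0))
          for c in SAMPLE["items"]]

# Equivalent of: jq '.items[]| select(.status.totalViolations>0)| ...'
messages = {c["metadata"]["name"]: [v["message"] for v in c["status"].get("violations", [])]
            for c in SAMPLE["items"]
            if c["status"].get("totalViolations", 0) > 0}

print(totals)
print(messages)
```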

Change Samsung security checklist policy bundle enforcement action

Once you’ve reviewed the policy violations on your cluster, you can change the enforcement mode so that the admission controller either warns on (warn) or blocks (deny) non-compliant resources from being applied to the cluster.

[!WARNING] The deny enforcement action should be used with care because it can block required changes, interrupting critical workloads or the cluster itself. On clusters with production workloads, when testing new constraints, or when performing migrations such as platform upgrades, it is strongly recommended to use only the warn or dryrun enforcement actions. For more information about enforcement actions, see Auditing using constraints.

  1. Use kubectl to set the constraints' enforcement action to warn:

     ```shell
     kubectl get constraint -l bundleName=samsung-security-checklist -o name | xargs -I {} kubectl patch {} --type='json' -p='[{"op":"replace","path":"/spec/enforcementAction","value":"warn"}]'
     ```

  2. Verify that the constraints' enforcement action has been updated:

     ```shell
     kubectl get constraint -l bundleName=samsung-security-checklist
     ```
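Rather than scanning the listing by eye, you can also assert programmatically that every constraint now reports warn. The following is an illustrative sketch over the JSON shape returned by `kubectl get constraint -o json`; the sample constraints and the `not_warning` helper are hypothetical:

```python
def not_warning(items: list[dict]) -> list[str]:
    """Return names of constraints whose enforcementAction is not 'warn'."""
    return [c["metadata"]["name"] for c in items
            if c.get("spec", {}).get("enforcementAction") != "warn"]

constraints = [
    {"metadata": {"name": "samsung-security-checklist-alb-require-tls-version"},
     "spec": {"enforcementAction": "warn"}},
    {"metadata": {"name": "samsung-security-checklist-nlb-require-tls-version"},
     "spec": {"enforcementAction": "dryrun"}},  # hypothetical straggler
]

# An empty list means the whole bundle was updated successfully.
print(not_warning(constraints))
# → ['samsung-security-checklist-nlb-require-tls-version']
```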

Test policy enforcement

Create a non-compliant resource on the cluster using the following command:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    alb.ingress.kubernetes.io/load-balancer-name: nginx
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
    alb.ingress.kubernetes.io/listen-ports: '[{"HTTPS": 443}]'
    alb.ingress.kubernetes.io/ssl-redirect: "443"
spec:
  ingressClassName: alb
  rules:
    - host: this.is.sample.host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: nginx
                port:
                  number: 443
EOF
```

The admission controller should produce a warning listing the policies that this resource violates, as shown in the following example:

```
Warning: [samsung-security-checklist-alb-require-tls-version] Application LoadBalancer (ALB) is not using TLSv1.2 or higher version.
ingress/nginx created
```

Remove Samsung security checklist policy bundle

If needed, you can remove the Samsung Security Checklist policy bundle from the cluster.

  • Use kubectl to remove the policies:

    ```shell
    kubectl delete constraint -l bundleName=samsung-security-checklist
    ```
