TKG 1.4 on Azure – Part 3: Create a workload cluster


When you deploy Tanzu Kubernetes (workload) clusters to Microsoft Azure, you must specify options in the cluster configuration file to connect to your Azure account and identify the resources that the cluster will use.

  • Create a YAML file (for example, wc-config.yaml) with the variables given in the template below.
#! ---------------------------------------------------------------------
#! Cluster creation basic configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
IDENTITY_MANAGEMENT_TYPE: oidc

#! ---------------------------------------------------------------------
#! Azure Configuration
#! ---------------------------------------------------------------------

AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID:
AZURE_SUBSCRIPTION_ID:
AZURE_CLIENT_ID:
AZURE_CLIENT_SECRET:
AZURE_LOCATION:
AZURE_SSH_PUBLIC_KEY_B64:


#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

ENABLE_AUDIT_LOGGING: true
ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
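
If you don't already have the Azure values handy, you can pull most of them with the Azure CLI. A minimal sketch, assuming you are already logged in with az login; tkg-sp is just an example service principal name:

# Tenant and subscription IDs for AZURE_TENANT_ID / AZURE_SUBSCRIPTION_ID
az account show --query tenantId -o tsv
az account show --query id -o tsv

# Create a service principal; "appId" maps to AZURE_CLIENT_ID and
# "password" to AZURE_CLIENT_SECRET
az ad sp create-for-rbac --name tkg-sp --role Contributor \
  --scopes /subscriptions/<subscription-id>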

Note: The example below is for reference only; change the values to match your environment.

Note: AZURE_SSH_PUBLIC_KEY_B64 is the base64-encoded public key from the SSH key pair that was created earlier. You can run a command similar to base64 /tmp/id_rsa.pub to get this value.
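
If you haven't created a key pair yet, a minimal sketch (the /tmp/id_rsa path is just an example; reading from stdin avoids flag differences between GNU and BSD/macOS base64):

# Generate an RSA key pair without a passphrase
ssh-keygen -t rsa -b 4096 -f /tmp/id_rsa -N ""

# Base64-encode the public key on a single line
base64 < /tmp/id_rsa.pub | tr -d '\n'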

reddye@reddye-a02 tkg % cat wc-config.yaml
#! ---------------------------------------------------------------------
#! Cluster creation basic configuration
#! ---------------------------------------------------------------------

# CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
AZURE_VNET_NAME: capv-vnet
AZURE_VNET_CIDR: 10.0.0.0/16
AZURE_CONTROL_PLANE_MACHINE_TYPE: Standard_D2as_v4
AZURE_NODE_MACHINE_TYPE: Standard_D2as_v4

#! ---------------------------------------------------------------------
#! Azure Configuration
#! ---------------------------------------------------------------------

AZURE_ENVIRONMENT: "AzurePublicCloud"
AZURE_TENANT_ID: b6ca-3c2e-4b4a-a4d6-cd862f0
AZURE_SUBSCRIPTION_ID: 82added431-f0b2-4e6b-aa9b-45wrong5244
AZURE_CLIENT_ID: c35f234b-1e51-hey-ac3e-f91wrongaf97c1
AZURE_CLIENT_SECRET: hZw7Q~jH~SlTq7XAmrvjUZ81gv
AZURE_LOCATION: centralindia
AZURE_RESOURCE_GROUP: capv-worker-ci
AZURE_SSH_PUBLIC_KEY_B64: c3NoLXJzYSBBQUFBQjNOemFDMXljMkVBQUFBREFRQUJBWNMYkoybElTcW12Qk84R2FGWjhGaUFkWU5Dd0NHZTZrS0xGb0pxczhoTjE0SUtPZi9xVkRndkNPbmRPamUxY2VjLzFZNEJpbzU3WkIyV0NuWHhKQkJGcGdLaXAwY2Rpa2c2eFc2MzhjY2FmVkNBcUp0WE1ZR2ZURkxZYVpSeUNFemNIdUllQlRNTnM2Z2hSamxLTXpWdHN3THFuVVM5UGNiZWN3MVpjVjdqbGRzYUZvZDkrRFNpNjVxb0tSYUFiUEQ0ZTZ6V2lMcFRHRjRqbiswSXZyN1FORkRjenl4OGJPR1A1ODVOYU5kWWNnUmtKSXArN1VzQWt6N2FYREw1dElYUSt2a2oyNWIxSGJNdUtQUjc2dGYyK2V2UHdCNlhybzczbk1maDRSK2VTQnljN212dkZKcmpCOEN0MUxuemdDdVp4RHlNd2U4ak5tQktLZFNvQklJTzhYLys5TmJ1K1BaYWNCeUdvV3g5dEFZSFJQUGhldDRwZGt0WkRqTU82dm1JQVlNK0hOOFM5NGJPNFlPMGI2cmZEbE84cnlwYmJjNkRuMjFieFpLalpRTUxWYnVkaXpweTBCS3M9IGVrbmF0aAo=

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m
ENABLE_MHC: "true"

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

ENABLE_AUDIT_LOGGING: true
ENABLE_DEFAULT_STORAGE_CLASS: true
CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13
  • When you’re ready, run a command similar to the following to kick off the deployment. Many other parameters are available to fine-tune the cluster; here I have used a very simple approach.
# capv-workload is the workload cluster name and wc-config.yaml is the file created from the template in the previous step.

reddye@reddye-a02 tkg % tanzu cluster create capv-workload -f wc-config.yaml
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'capv-workload'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...
Waiting for packages to be up and running...

Workload cluster 'capv-workload' created
  • Back in Azure, you should see a new resource group created whose name matches the name of your workload cluster.
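
You can confirm this from the command line as well; a quick check with the Azure CLI (the resource-group name below is just this example's cluster name):

az group show --name capv-workload -o table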

#Show workload clusters

reddye@reddye-a02 tkg % tanzu cluster list
  NAME           NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN
  capv-workload  default    running  1/1           1/1      v1.21.2+vmware.1  <none>  dev

#Get the credentials for the newly created cluster; the output shows the command to switch the context.

reddye@reddye-a02 tkg % tanzu cluster kubeconfig get capv-workload --admin
Credentials of cluster 'capv-workload' have been saved
You can now access the cluster by running 'kubectl config use-context capv-workload-admin@capv-workload'

#Change the context

reddye@reddye-a02 tkg % kubectl config use-context capv-workload-admin@capv-workload
Switched to context "capv-workload-admin@capv-workload".
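
At this point you can also verify that both nodes registered with the cluster; a quick check, not part of the original output:

kubectl get nodes -o wide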

#List the pods in the workload cluster
reddye@reddye-a02 tkg % kubectl get pods -A
NAMESPACE     NAME                                                         READY   STATUS    RESTARTS   AGE
kube-system   antrea-agent-2gbms                                           2/2     Running   0          13m
kube-system   antrea-agent-fzdr5                                           2/2     Running   0          13m
kube-system   antrea-controller-8576d95474-j8ldw                           1/1     Running   0          13m
kube-system   coredns-8dcb5c56b-bfwdh                                      1/1     Running   0          16m
kube-system   coredns-8dcb5c56b-s7xzn                                      1/1     Running   0          16m
kube-system   etcd-capv-workload-control-plane-4p2qw                       1/1     Running   0          16m
kube-system   kube-apiserver-capv-workload-control-plane-4p2qw             1/1     Running   0          16m
kube-system   kube-controller-manager-capv-workload-control-plane-4p2qw   1/1     Running   0          16m
kube-system   kube-proxy-q24k7                                             1/1     Running   0          15m
kube-system   kube-proxy-w89k9                                             1/1     Running   0          16m
kube-system   kube-scheduler-capv-workload-control-plane-4p2qw             1/1     Running   0          16m
kube-system   metrics-server-666c4ffc7c-4zxj4                              1/1     Running   0          13m
tkg-system    kapp-controller-574996f8b-rlxdf                              1/1     Running   0          16m
tkg-system    tanzu-capabilities-controller-manager-6ff97656b8-pktrd       1/1     Running   0          15m
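
As an optional smoke test, you could deploy a sample workload and expose it through an Azure load balancer to confirm cloud-provider integration; nginx here is just an illustrative image:

kubectl create deployment nginx --image=nginx
kubectl expose deployment nginx --port=80 --type=LoadBalancer

# Wait for the Azure cloud provider to assign an external IP
kubectl get service nginx --watch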

Refer to the VMware Tanzu Kubernetes Grid documentation for more information.