Tanzu Kubernetes Grid – Create workload cluster on vSphere

Hola! In this post I will demonstrate how to use Tanzu Kubernetes Grid (TKG) to deploy and manage Tanzu Kubernetes workload clusters in a vSphere environment. TKG provides commands and options to perform lifecycle management operations such as creating, deleting, and scaling Kubernetes workload clusters up and down.
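For reference, the basic lifecycle commands look like this (a quick sketch; the flags are as in the Tanzu CLI that ships with TKG 1.4, and <cluster-name> is a placeholder):

# Create a workload cluster with the dev plan
$ tanzu cluster create <cluster-name> --plan dev

# Scale an existing workload cluster to 3 control plane nodes and 5 workers
$ tanzu cluster scale <cluster-name> --controlplane-machine-count 3 --worker-machine-count 5

# Delete a workload cluster
$ tanzu cluster delete <cluster-name>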

Prerequisites

  • Before you create Tanzu Kubernetes workload clusters, you must have a TKG management cluster deployed and running in a healthy state, which you can verify with the commands below.
#To list all the clusters including management
$ tanzu cluster list --include-management-cluster
  NAME                             NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN
  tkg-cluster-1                    default     running  1/1           1/1      v1.21.2+vmware.1  <none>      dev
  tkg-mgmt-vsphere-20211117113036  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management  dev
  • Log in to vCenter and check for the management and workload VMs. The node counts from the output above (CONTROLPLANE and WORKERS) should match the VMs you see in vCenter.
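For a more detailed health view of the management cluster itself (its nodes and installed providers), the Tanzu CLI also offers:

# Detailed status of the management cluster, its nodes, and providers
$ tanzu management-cluster get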

Create Cluster

A workload cluster can be created with a single command: "tanzu cluster create <cluster-name> --plan dev". This deploys the minimal configuration (1 control plane node and 1 worker node), which is ideal for dev environments. If you want a custom setup, such as more control plane or worker nodes, change --plan dev to --plan prod, set the counts in the configuration file as shown below, or do a dry run using the command that follows.
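The node counts can also be set in the cluster configuration file instead of on the command line; a minimal sketch, assuming the standard TKG configuration variables (CLI flags override these values):

# Excerpt from cluster-config.yaml
CONTROL_PLANE_MACHINE_COUNT: 3
WORKER_MACHINE_COUNT: 3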

Dry-run command to review the config:
# tkg-captainv-demo is the name of the workload cluster; cluster-config.yaml is the config file created at the time of management cluster deployment.

tanzu cluster create tkg-captainv-demo --plan dev -f cluster-config.yaml --controlplane-machine-count 5 --worker-machine-count 10 --dry-run > review-config.yaml

# You can read review-config.yaml and make changes too if you wish:

cat review-config.yaml
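review-config.yaml is a long multi-document manifest, so it helps to jump straight to the objects you care about, for example:

# Locate the control plane and worker node objects in the manifest
$ grep -n -E 'kind: (KubeadmControlPlane|MachineDeployment)' review-config.yaml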

# Check below for control plane config in review-config.yaml:

apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
kind: KubeadmControlPlane
metadata:
  name: tkg-captainv-demo-control-plane
  namespace: default
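The snippet above shows only the object metadata; in the same KubeadmControlPlane object, spec.replicas should reflect the requested control plane count (an illustrative fragment, matching the --controlplane-machine-count 5 from the dry run):

# spec fragment of the KubeadmControlPlane in review-config.yaml
spec:
  replicas: 5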

# Check below for worker node config in review-config.yaml: 

apiVersion: cluster.x-k8s.io/v1alpha3
kind: MachineDeployment
metadata:
  labels:
    cluster.x-k8s.io/cluster-name: tkg-captainv-demo
  name: tkg-captainv-demo-md-0
  namespace: default
spec:
  clusterName: tkg-captainv-demo
  replicas: 10
  • I have provisioned a workload cluster with 3 control plane nodes and 3 worker nodes:
$ tanzu cluster create tkg-captainv-demo --plan prod -f cluster-config.yaml --controlplane-machine-count 3 --worker-machine-count 3
Validating configuration...
Warning: Pinniped configuration not found. Skipping pinniped configuration in workload cluster. Please refer to the documentation to check if you can configure pinniped on workload cluster manually
Creating workload cluster 'tkg-captainv-demo'...
Waiting for cluster to be initialized...
Waiting for cluster nodes to be available...
Waiting for addons installation...
Waiting for packages to be up and running...
Warning: Cluster is created successfully, but some packages are failing. Failure while waiting for packages to be installed: package reconciliation failed: kapp: Error: Timed out waiting after 30s

Workload cluster 'tkg-captainv-demo' created
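If you hit the package warning above, you can check which packages are still reconciling once you have switched to the workload cluster context (see the Commands section below); a sketch assuming the kapp-controller based packaging in TKG 1.4:

# List installed packages and their reconciliation status across namespaces
$ tanzu package installed list -A

# Or inspect the underlying kapp-controller App resources directly
$ kubectl get apps -A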

# Verify the cluster list
$ tanzu cluster list --include-management-cluster
  NAME                             NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES       PLAN
  tkg-captainv-demo                default     running  3/3           3/3      v1.21.2+vmware.1  <none>      prod
  tkg-cluster-1                    default     running  1/1           1/1      v1.21.2+vmware.1  <none>      dev
  tkg-mgmt-vsphere-20211117113036  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management  dev
  • You can also view the progress of VM creation in the vCenter Recent Tasks pane as the new VMs are deployed into your cluster.

Commands
# Get the credentials 

$ tanzu cluster kubeconfig get tkg-captainv-demo --admin
Credentials of cluster 'tkg-captainv-demo' have been saved
You can now access the cluster by running 'kubectl config use-context tkg-captainv-demo-admin@tkg-captainv-demo'

# Set the context to the newly created workload cluster:
$ kubectl config use-context tkg-captainv-demo-admin@tkg-captainv-demo
Switched to context "tkg-captainv-demo-admin@tkg-captainv-demo".

# List the available contexts and check that * points to the new cluster.
ubuntu@ubuntu-526:~$ kubectl config get-contexts
CURRENT   NAME                                                                    CLUSTER                           AUTHINFO                                NAMESPACE
*         tkg-captainv-demo-admin@tkg-captainv-demo                               tkg-captainv-demo                 tkg-captainv-demo-admin
          tkg-cluster-1-admin@tkg-cluster-1                                       tkg-cluster-1                     tkg-cluster-1-admin
          tkg-mgmt-vsphere-20211117113036-admin@tkg-mgmt-vsphere-20211117113036   tkg-mgmt-vsphere-20211117113036   tkg-mgmt-vsphere-20211117113036-admin

# List the nodes in the workload cluster
$ kubectl get nodes
NAME                                      STATUS   ROLES                  AGE   VERSION
tkg-captainv-demo-control-plane-4td6h     Ready    control-plane,master   22m   v1.21.2+vmware.1
tkg-captainv-demo-control-plane-smv9g     Ready    control-plane,master   20m   v1.21.2+vmware.1
tkg-captainv-demo-control-plane-ttjd9     Ready    control-plane,master   25m   v1.21.2+vmware.1
tkg-captainv-demo-md-0-5f569447f9-6zjd6   Ready    <none>                 24m   v1.21.2+vmware.1
tkg-captainv-demo-md-0-5f569447f9-ktqks   Ready    <none>                 23m   v1.21.2+vmware.1
tkg-captainv-demo-md-0-5f569447f9-qnpmb   Ready    <none>                 24m   v1.21.2+vmware.1

# Check all the pods running in the new workload cluster:
$ kubectl get pods -A
VMs created in vCenter

Deploy an application in the workload cluster

# Create a new namespace
$ kubectl create ns bba-news
namespace/bba-news created

# Create a new deployment named bbanews-deploy
$ kubectl create deploy bbanews-deploy -n bba-news --image eknath009/nginx-bbanews
deployment.apps/bbanews-deploy created

# Get the deployment status in the bba-news namespace
$ kubectl get deploy -n bba-news
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
bbanews-deploy   1/1     1            1           2m31s

# Expose the deployment by creating a new service of type LoadBalancer
$ kubectl expose deployment bbanews-deploy --type LoadBalancer --port 80 -n bba-news
service/bbanews-deploy exposed
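The EXTERNAL-IP will show <pending> until your load balancer provider (for example NSX Advanced Load Balancer) assigns an address; you can watch for it with the -w flag:

# Watch the service until an external IP is assigned
$ kubectl get svc -n bba-news -w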

# Get the external IP from the created LoadBalancer service

ubuntu@ubuntu-526:~$ kubectl get svc -n bba-news
NAME             TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
bbanews-deploy   LoadBalancer   100.71.42.150   10.212.195.10   80:31636/TCP   8s
  • Access the external IP (10.212.195.10 in this example) from a browser and the web page should load.
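You can also verify from the command line, using the external IP from the service output above:

# Confirm the service responds over HTTP
$ curl -I http://10.212.195.10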


Demo Video