TKG 1.4 on Azure – Part 2: Deploy management cluster

Reading Time: 5 mins

In this post, we will walk through the steps to create a management cluster through the UI.

# Run the command: 

tanzu management-cluster create --ui

# The command prints the following and automatically opens a browser window

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
  • Select Deploy under Microsoft Azure

  • Fill in the TENANT ID, CLIENT ID, CLIENT SECRET, and SUBSCRIPTION ID that you collected in the earlier post.
  • Select the Region where you would like the management cluster to be deployed.
  • Paste the SSH Public Key generated in the previous post. Hint: the file will have a .pub extension.
  • Create a new resource group or select an existing one. For a new setup, the recommendation is to create a new resource group.
  • Click the Next button.
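Before filling in the UI, it can help to sanity-check the service principal credentials from the bootstrap machine. A minimal sketch with the az CLI; the placeholder IDs below are hypothetical and should be replaced with the values collected in the earlier post:

```shell
# Hypothetical placeholder values -- substitute the IDs collected in the earlier post.
AZURE_TENANT_ID="00000000-0000-0000-0000-000000000000"
AZURE_CLIENT_ID="11111111-1111-1111-1111-111111111111"
AZURE_CLIENT_SECRET="replace-me"
AZURE_SUBSCRIPTION_ID="22222222-2222-2222-2222-222222222222"

# Only attempt the login if the az CLI is installed.
if command -v az >/dev/null 2>&1; then
  # A successful login here confirms the tenant/client/secret triple is valid.
  az login --service-principal \
    --username "$AZURE_CLIENT_ID" \
    --password "$AZURE_CLIENT_SECRET" \
    --tenant "$AZURE_TENANT_ID"

  # Confirm the subscription ID resolves and make it the active subscription.
  az account set --subscription "$AZURE_SUBSCRIPTION_ID"
fi
```

If the login fails here, the same values will fail in the installer UI, so this catches typos early.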

  • If you have an existing VNET you’d like to use, choose it from the dropdown. You’ll also need to choose appropriate control plane and worker node subnets. For this example, we’re letting the installer create a new VNET in the resource group created in the earlier step and leaving the subnet CIDRs at their default values.
  • Click Next.

  • Development vs. Production: This option decides the number of control plane nodes to be deployed (1 vs. 3). I’ve selected a Development deployment with the smallest node size available and have named the management cluster capv-mgmt.
  • Instance Type: This will dictate the compute and storage characteristics of the control plane virtual machines deployed to Azure. Before selecting a size, make sure you have sufficient vCPU quota available in that region.
  • Worker Node Instance Type: This will dictate the compute and storage characteristics of the worker virtual machines deployed to Azure.
  • Machine Health Checks: This will dictate whether ClusterAPI will monitor the health of the deployed virtual machines and recreate them if they are deemed unhealthy.
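Before picking the instance types above, you can check what sizes the region offers and how much vCPU quota remains in the subscription. A sketch using the az CLI; AZURE_REGION is a hypothetical placeholder for the region selected in the UI:

```shell
# Hypothetical region -- use the one you selected in the UI.
AZURE_REGION="eastus"

if command -v az >/dev/null 2>&1; then
  # List the VM sizes offered in the region (candidates for the instance type dropdowns).
  az vm list-sizes --location "$AZURE_REGION" --output table

  # Show current vCPU usage against the subscription's quota in that region.
  az vm list-usage --location "$AZURE_REGION" --output table | grep -i "vCPUs"
fi
```

If the quota lines show usage close to the limit, request a quota increase before deploying, since the installer will fail partway through otherwise.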

  • The Metadata page is entirely optional so complete it as you feel is appropriate.
  • To enable proxy settings, toggle the option on and fill in the values if you want to configure a proxy. In this case, I left it disabled.
  • Click Next

  • It is strongly recommended to implement identity management in production deployments. If you disable identity management, you can reenable it later. For instructions on how to reenable identity management, see Enable Identity Management in an Existing Deployment.

  • The OS Image drop-down menu lists the available OS images; select the latest one.

  • At the time of writing this post, registering a TKG 1.4.0 management cluster is not supported, and Tanzu Mission Control does not support cluster lifecycle management of 1.4.0 workload clusters, so I had to skip this.
  • Click Next.

  • Choose to participate in the CEIP or not.
  • Click Next.

  • Review Configuration and click on Deploy management cluster if everything looks good.
  • You’ll be able to follow the high-level progress in the UI.


  • Identify the kubeconfig path in the install logs as shown below

  • From your bootstrap machine, try running the command below to check the status of the bootstrap cluster that is deployed temporarily in kind.
# Change the kubeconfig path to the one reported in the UI logs.

reddye@reddye-a02 ~ % kubectl get nodes --kubeconfig /Users/reddye/.kube-tkg/tmp/config_ULpBrYbm
NAME                                          STATUS   ROLES                  AGE     VERSION
tkg-kind-c6f1fe7blarglh7hb8kg-control-plane   Ready    control-plane,master   7m40s   v1.21.2+vmware.1-360497810732255795

# Adding -w will watch the pods as their status changes
kubectl get pods -A --kubeconfig /Users/reddye/.kube-tkg/tmp/config_ULpBrYbm

# You might see a few pods in ContainerCreating state; wait a few minutes and they should go through:

NAMESPACE                           NAME                                                             READY   STATUS              RESTARTS   AGE
capi-kubeadm-bootstrap-system       capi-kubeadm-bootstrap-controller-manager-6494884869-9gmbs       0/2     ContainerCreating   0          3m27s
capi-kubeadm-control-plane-system   capi-kubeadm-control-plane-controller-manager-857d687b9d-h4rcd   0/2     ContainerCreating   0          3m26s
capi-system                         capi-controller-manager-778bd4dfb9-wdv82                         0/2     ContainerCreating   0          3m29s
capi-webhook-system                 capi-controller-manager-9995bdc94-225xw                          0/2     ContainerCreating   0          3m29s
capi-webhook-system                 capi-kubeadm-bootstrap-controller-manager-68845b65f8-nkdcv       0/2     ContainerCreating   0          3m28s
capi-webhook-system                 capi-kubeadm-control-plane-controller-manager-9847c6747-txg2m    0/2     ContainerCreating   0          3m27s
capi-webhook-system                 capz-controller-manager-85b5fb4dfc-pw24l                         0/2     ContainerCreating   0          3m23s
capz-system                         capz-controller-manager-7f94686c74-lgbw5                         0/2     ContainerCreating   0          3m23s
capz-system                         capz-nmi-xh52j                                                   0/1     ContainerCreating   0          3m23s
  • You should see more activity in the UI as the deployment progresses, especially as the bootstrap cluster is being instantiated.

# The two containers in each pod are manager and kube-rbac-proxy, which can be confirmed with the describe command:

# Change the kubeconfig path to the one reported in the UI logs

kubectl describe pod capi-controller-manager-778bd4dfb9-wdv82 -n capi-system --kubeconfig /Users/reddye/.kube-tkg/tmp/config_ULpBrYbm

kubectl logs capi-controller-manager-778bd4dfb9-wdv82 -n capi-system --kubeconfig /Users/reddye/.kube-tkg/tmp/config_ULpBrYbm manager -f

# capi-controller-manager-778bd4dfb9-wdv82 is the pod name I collected by running kubectl get pods -A --kubeconfig <path found in UI logs>
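Rather than re-running get pods manually, you can ask kubectl to block until the bootstrap pods report Ready. A sketch, assuming the same temporary kubeconfig path from the UI logs (shown here per namespace, using capi-system as the example):

```shell
# Path to the bootstrap cluster kubeconfig reported in the UI logs.
BOOTSTRAP_KUBECONFIG="/Users/reddye/.kube-tkg/tmp/config_ULpBrYbm"

if command -v kubectl >/dev/null 2>&1; then
  # Block until every pod in the capi-system namespace is Ready,
  # or give up after 10 minutes. Repeat per namespace as needed.
  kubectl wait pod --all \
    --namespace capi-system \
    --for=condition=Ready \
    --timeout=10m \
    --kubeconfig "$BOOTSTRAP_KUBECONFIG"
fi
```

A non-zero exit from kubectl wait means a pod never came up, at which point the describe and logs commands above are the next step.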
  • When the deployment is finished, you should be presented with a screen similar to the following in the UI:

  • Verify the management cluster status using the tanzu CLI:
reddye@reddye-a02 ~ % tanzu management-cluster get
NAME       NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
capv-mgmt  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management


Details:

NAME                                                          READY  SEVERITY  REASON  SINCE  MESSAGE
/capv-mgmt                                                    True                     3m9s
├─ClusterInfrastructure - AzureCluster/capv-mgmt              True                     3m13s
├─ControlPlane - KubeadmControlPlane/capv-mgmt-control-plane  True                     3m9s
│ └─Machine/capv-mgmt-control-plane-lvglk                     True                     3m12s
└─Workers
  └─MachineDeployment/capv-mgmt-md-0
    └─Machine/capv-mgmt-md-0-57db8c6b48-9t5s2                 True                     3m12s


Providers:

NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.23
capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.23
capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.23
capz-system                        infrastructure-azure   InfrastructureProvider  azure         v0.4.15

reddye@reddye-a02 ~ % kind get clusters
No kind clusters found.
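With the bootstrap kind cluster gone, any further kubectl work targets the management cluster itself. A sketch of pulling its admin credentials with the tanzu CLI; the context name follows the <cluster>-admin@<cluster> pattern seen in the output above:

```shell
# Name given to the management cluster in the UI.
MGMT_CLUSTER="capv-mgmt"

if command -v tanzu >/dev/null 2>&1; then
  # Merge the admin kubeconfig for the management cluster into ~/.kube/config.
  tanzu management-cluster kubeconfig get --admin
fi

if command -v kubectl >/dev/null 2>&1; then
  # Switch to the management cluster context and confirm the nodes are Ready.
  kubectl config use-context "${MGMT_CLUSTER}-admin@${MGMT_CLUSTER}"
  kubectl get nodes
fi
```

The installer already set this context for you (see the "Context set for management cluster" line below), so this is mainly useful on another workstation or after the temporary kubeconfig is gone.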
  • If you go back to the command line where you issued the tanzu management-cluster create --ui command, you should see output similar to the following:
reddye@reddye-a02 ~ % tanzu management-cluster create --ui

Validating the pre-requisites...
Serving kickstart UI at http://127.0.0.1:8080
Identity Provider not configured. Some authentication features won't work.
Validating configuration...
web socket connection established
sending pending 2 logs to UI
Using infrastructure provider azure:v0.4.15
Generating cluster configuration...
Setting up bootstrapper...
Bootstrapper created. Kubeconfig: /Users/reddye/.kube-tkg/tmp/config_ULpBrYbm
Installing providers on bootstrapper...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-azure" Version="v0.4.15" TargetNamespace="capz-system"
Start creating management cluster...
Saving management cluster kubeconfig into /Users/reddye/.kube/config
Installing providers on management cluster...
Fetching providers
Installing cert-manager Version="v1.1.0"
Waiting for cert-manager to be available...
Installing Provider="cluster-api" Version="v0.3.23" TargetNamespace="capi-system"
Installing Provider="bootstrap-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-bootstrap-system"
Installing Provider="control-plane-kubeadm" Version="v0.3.23" TargetNamespace="capi-kubeadm-control-plane-system"
Installing Provider="infrastructure-azure" Version="v0.4.15" TargetNamespace="capz-system"
Waiting for the management cluster to get ready for move...
Waiting for addons installation...
Moving all Cluster API objects from bootstrap cluster to management cluster...
Performing move...
Discovering Cluster API objects
Moving Cluster API objects Clusters=1
Creating objects in the target cluster
Deleting objects from the source cluster
Waiting for additional components to be up and running...
Waiting for packages to be up and running...
Context set for management cluster capv-mgmt as 'capv-mgmt-admin@capv-mgmt'.

Management cluster created!


You can now create your first workload cluster by running the following:

tanzu cluster create [name] -f [file]
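As a preview of that next step, the config file passed with -f is a flat YAML of TKG cluster variables. A minimal hypothetical sketch (my-workload and my-workload.yaml are placeholder names, and the two keys shown are just the best-known variables, not a complete config):

```shell
# Write a minimal, hypothetical workload cluster config.
cat > my-workload.yaml <<'EOF'
CLUSTER_NAME: my-workload
CLUSTER_PLAN: dev
EOF

if command -v tanzu >/dev/null 2>&1; then
  # Create the workload cluster from the config file.
  tanzu cluster create my-workload -f my-workload.yaml
fi
```

In practice you would start from a copy of the management cluster's generated config and adjust the Azure-specific variables; that is the topic of the next post in this series.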