TKG 1.4 on AWS – Part 2: Deploy management cluster

Reading Time: 4 mins

Overview

A management cluster is the first key component that you deploy when you create Tanzu Kubernetes Grid. The management cluster is a Kubernetes cluster that serves as the primary management and operational control plane for Tanzu Kubernetes Grid. It is where Cluster API runs to create the Tanzu Kubernetes (workload) clusters in which your application workloads run, and where you configure the shared and in-cluster services that those clusters use. The management cluster manages the full life-cycle of TKG workload clusters: creating, scaling, upgrading, and deleting them.

Deploy Management Cluster

# Launch the Tanzu installer UI from the bootstrap machine

$ tanzu mc create --ui --bind 0.0.0.0:8080

Validating the pre-requisites...
Serving kickstart UI at http://[::]:8080
unable to open browser: exec: "xdg-open": executable file not found in $PATH

In your local browser, access the installer interface at http://<bootstrap machine public ip>:8080
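If port 8080 on the bootstrap machine is not reachable from your workstation (for example, the security group only allows SSH), you can tunnel the UI over SSH instead. A minimal sketch, assuming an Ubuntu bootstrap host and the key pair used to create it (the user name and key path are assumptions):

# Forward local port 8080 to the installer UI running on the bootstrap machine
$ ssh -i ~/.ssh/<your-key>.pem -L 8080:localhost:8080 ubuntu@<bootstrap machine public ip>

# Then browse to http://localhost:8080 on your workstation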

  • On the Amazon EC2 tile, select Deploy

  • Select the credential profile created in the previous post
  • Select the appropriate region > Connect
  • Next

  • Select Create new VPC for AWS; the VPC CIDR is auto-filled as shown below
  • If you want to use an existing VPC to deploy the management cluster, refer to the documentation

  • Click here to check the various EC2 instance types
  • Provide the instance types for the management cluster control plane and worker nodes
  • Provide the management cluster name and the EC2 key pair created in the previous post
  • If this is your first deployment, check the box to create the AWS CloudFormation stack (a CLI alternative is sketched below this list)
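If you prefer to create the CloudFormation stack (the IAM instance profiles, roles, and policies TKG needs) ahead of time from the CLI rather than via the installer checkbox, the Tanzu CLI provides a command for it. A hedged sketch, assuming the AWS credentials and region from the previous post are already exported in the shell:

# Create the IAM resources that TKG needs via a CloudFormation stack
$ tanzu management-cluster permissions aws set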

  • Metadata (Optional) > Next
  • Kubernetes Network: You have the option to enable proxy settings. I have left them disabled here; refer to the documentation for more details (the equivalent configuration-file entries are sketched below) > Next
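For reference, the proxy choices on this screen correspond to variables in the cluster configuration file that the installer generates. A minimal sketch with hypothetical values (the proxy host and port are assumptions, not part of this deployment):

# Example proxy entries in the management cluster configuration file
TKG_HTTP_PROXY_ENABLED: "true"
TKG_HTTP_PROXY: "http://proxy.example.com:3128"
TKG_HTTPS_PROXY: "http://proxy.example.com:3128"
TKG_NO_PROXY: "localhost,127.0.0.1,169.254.0.0/16,.svc,.svc.cluster.local"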

  • Identity Management: Enabling this is recommended for production workloads; refer to the documentation for more details (see the configuration sketch below) > Next
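If you do enable it, the identity management settings end up as OIDC (or LDAP) variables in the generated configuration file. A hedged sketch for an OIDC provider; the issuer URL, client ID, and secret below are placeholders, not values from this deployment:

# Example OIDC identity management entries in the cluster configuration file
IDENTITY_MANAGEMENT_TYPE: oidc
OIDC_IDENTITY_PROVIDER_ISSUER_URL: https://idp.example.com
OIDC_IDENTITY_PROVIDER_CLIENT_ID: <client-id>
OIDC_IDENTITY_PROVIDER_CLIENT_SECRET: <client-secret>
OIDC_IDENTITY_PROVIDER_SCOPES: email,profile,groups
OIDC_IDENTITY_PROVIDER_USERNAME_CLAIM: email
OIDC_IDENTITY_PROVIDER_GROUPS_CLAIM: groups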

  • Select the OS Image > Next

  • Register with Tanzu Mission Control (Optional) > Next

  • CEIP Agreement > check the Participate in the Customer Experience Improvement Program box > Next
  • Review Configuration
  • Deploy Management Cluster
  • You can monitor the progress in the terminal; the whole process takes roughly 15 minutes to complete (the installer also saves its configuration to a file, as sketched below).
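The installer writes the selections you made to a cluster configuration file under ~/.config/tanzu/tkg/clusterconfigs/ on the bootstrap machine, so a later deployment can skip the UI entirely. A hedged example; the file name is randomly generated and will differ in your environment:

# Re-deploy from the saved configuration file instead of the UI
$ tanzu mc create --file ~/.config/tanzu/tkg/clusterconfigs/<generated-name>.yaml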
Management cluster creation output
# You can see this output in the terminal

Resource                  |Type                                                                 |Status
AWS::IAM::InstanceProfile |control-plane.tkg.cloud.vmware.com                                   |CREATE_COMPLETE
AWS::IAM::InstanceProfile |controllers.tkg.cloud.vmware.com                                     |CREATE_COMPLETE
AWS::IAM::InstanceProfile |nodes.tkg.cloud.vmware.com                                           |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::778018584600:policy/control-plane.tkg.cloud.vmware.com |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::778018584600:policy/nodes.tkg.cloud.vmware.com         |CREATE_COMPLETE
AWS::IAM::ManagedPolicy   |arn:aws:iam::778018584600:policy/controllers.tkg.cloud.vmware.com   |CREATE_COMPLETE
AWS::IAM::Role            |control-plane.tkg.cloud.vmware.com                                   |CREATE_COMPLETE
AWS::IAM::Role            |controllers.tkg.cloud.vmware.com                                     |CREATE_COMPLETE
AWS::IAM::Role            |nodes.tkg.cloud.vmware.com                                           |CREATE_COMPLETE
  • In the AWS console, navigate to CloudFormation > Stacks
  • You can see the list of resources created during management cluster creation (an AWS CLI equivalent is sketched below).
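The same stack can also be inspected from the AWS CLI. A minimal sketch; the stack name tkg-cloud-vmware-com is the TKG default and is an assumption here:

# List the resources in the TKG CloudFormation stack
$ aws cloudformation describe-stack-resources --stack-name tkg-cloud-vmware-com \
    --query 'StackResources[].[ResourceType,LogicalResourceId,ResourceStatus]' --output table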

Verify the cluster

  • Log in to the AWS console > EC2
  • Once the management cluster is created, you can see the EC2 VMs deployed and running as shown below (an AWS CLI equivalent is sketched after this list)
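You can also list the cluster's instances from the AWS CLI by filtering on the tag that Cluster API Provider AWS puts on every machine it creates. A hedged sketch, assuming the cluster name tkg-mgmt-aws used in this post:

# List running EC2 instances owned by the tkg-mgmt-aws cluster
$ aws ec2 describe-instances \
    --filters "Name=tag:sigs.k8s.io/cluster-api-provider-aws/cluster/tkg-mgmt-aws,Values=owned" \
              "Name=instance-state-name,Values=running" \
    --query 'Reservations[].Instances[].[InstanceId,InstanceType,PrivateIpAddress]' --output table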

  • Check the cluster status using the kubectl and tanzu commands:
# Check the context

$ kubectl config get-contexts
CURRENT   NAME                              CLUSTER        AUTHINFO             NAMESPACE
*         tkg-mgmt-aws-admin@tkg-mgmt-aws   tkg-mgmt-aws   tkg-mgmt-aws-admin

$ tanzu mc get
NAME          NAMESPACE   STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES
tkg-mgmt-aws  tkg-system  running  1/1           1/1      v1.21.2+vmware.1  management


Details:

NAME                                                             READY  SEVERITY  REASON  SINCE  MESSAGE
/tkg-mgmt-aws                                                    True                     23h
├─ClusterInfrastructure - AWSCluster/tkg-mgmt-aws                True                     23h
├─ControlPlane - KubeadmControlPlane/tkg-mgmt-aws-control-plane  True                     23h
│ └─Machine/tkg-mgmt-aws-control-plane-rrdzt                     True                     23h
└─Workers
  └─MachineDeployment/tkg-mgmt-aws-md-0
    └─Machine/tkg-mgmt-aws-md-0-557c4b66d7-kq79h                 True                     23h


Providers:

NAMESPACE                          NAME                   TYPE                    PROVIDERNAME  VERSION  WATCHNAMESPACE
capa-system                        infrastructure-aws     InfrastructureProvider  aws           v0.6.6
capi-kubeadm-bootstrap-system      bootstrap-kubeadm      BootstrapProvider       kubeadm       v0.3.23
capi-kubeadm-control-plane-system  control-plane-kubeadm  ControlPlaneProvider    kubeadm       v0.3.23
capi-system                        cluster-api            CoreProvider            cluster-api   v0.3.23

# To check the pods in the cluster

$ kubectl get pods -A
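If the management cluster context is not already present in your kubeconfig, you can fetch the admin credentials with the Tanzu CLI and switch to it before running the command above. A brief sketch using the cluster name from this post:

# Retrieve the admin kubeconfig for the management cluster and switch to its context
$ tanzu mc kubeconfig get --admin
$ kubectl config use-context tkg-mgmt-aws-admin@tkg-mgmt-aws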