TKG 1.4 on AWS – Part 3: Create workload cluster

Reading Time: 2 mins

Overview

In VMware Tanzu Kubernetes Grid, Tanzu Kubernetes clusters are the Kubernetes clusters in which your application workloads run. Tanzu Kubernetes Grid deploys workload clusters to the same platform on which the management cluster runs. For example, you cannot deploy clusters to Amazon EC2 or Azure from a management cluster that is running in vSphere, or the reverse. It is also not possible to use shared services across different providers.

When you deploy Tanzu Kubernetes (workload) clusters to AWS, you must specify options in the cluster configuration file to identify the resources that the cluster uses.

Deploy workload cluster

  • The template below includes all of the options that are relevant to deploying Tanzu Kubernetes clusters on Amazon EC2.
config yaml file
#! ---------------------------------------------------------------------
#! Cluster creation basic configuration
#! ---------------------------------------------------------------------

#! CLUSTER_NAME:
CLUSTER_PLAN: dev
NAMESPACE: default
CNI: antrea
IDENTITY_MANAGEMENT_TYPE: oidc

#! ---------------------------------------------------------------------
#! Node configuration
#! AWS-only MACHINE_TYPE settings override cloud-agnostic SIZE settings.
#! ---------------------------------------------------------------------

# SIZE:
# CONTROLPLANE_SIZE:
# WORKER_SIZE:
CONTROL_PLANE_MACHINE_TYPE: t3.2xlarge
NODE_MACHINE_TYPE: t3.large
CONTROL_PLANE_MACHINE_COUNT: 1
WORKER_MACHINE_COUNT: 1
# WORKER_MACHINE_COUNT_0:
# WORKER_MACHINE_COUNT_1:
# WORKER_MACHINE_COUNT_2:

#! ---------------------------------------------------------------------
#! AWS Configuration
#! ---------------------------------------------------------------------

AWS_REGION: ap-south-1
AWS_NODE_AZ: "ap-south-1a"
# AWS_NODE_AZ_1: ""
# AWS_NODE_AZ_2: ""
AWS_VPC_ID: "<vpc id>"
AWS_PRIVATE_SUBNET_ID: "<subnet id>"
AWS_PUBLIC_SUBNET_ID: "<subnet id>"
# AWS_PUBLIC_SUBNET_ID_1: ""
# AWS_PRIVATE_SUBNET_ID_1: ""
# AWS_PUBLIC_SUBNET_ID_2: ""
# AWS_PRIVATE_SUBNET_ID_2: ""
# AWS_VPC_CIDR: 10.0.0.0/16
# AWS_PRIVATE_NODE_CIDR: 10.0.0.0/24
# AWS_PUBLIC_NODE_CIDR: 10.0.1.0/24
# AWS_PRIVATE_NODE_CIDR_1: 10.0.2.0/24
# AWS_PUBLIC_NODE_CIDR_1: 10.0.3.0/24
# AWS_PRIVATE_NODE_CIDR_2: 10.0.4.0/24
# AWS_PUBLIC_NODE_CIDR_2: 10.0.5.0/24
AWS_SSH_KEY_NAME: <key pair name>
BASTION_HOST_ENABLED: false

#! ---------------------------------------------------------------------
#! Machine Health Check configuration
#! ---------------------------------------------------------------------

ENABLE_MHC:
ENABLE_MHC_CONTROL_PLANE: true
ENABLE_MHC_WORKER_NODE: true
MHC_UNKNOWN_STATUS_TIMEOUT: 5m
MHC_FALSE_STATUS_TIMEOUT: 12m

#! ---------------------------------------------------------------------
#! Common configuration
#! ---------------------------------------------------------------------

# TKG_CUSTOM_IMAGE_REPOSITORY: ""
# TKG_CUSTOM_IMAGE_REPOSITORY_CA_CERTIFICATE: ""

# TKG_HTTP_PROXY: ""
# TKG_HTTPS_PROXY: ""
# TKG_NO_PROXY: ""

ENABLE_AUDIT_LOGGING: true
ENABLE_DEFAULT_STORAGE_CLASS: true

CLUSTER_CIDR: 100.96.0.0/11
SERVICE_CIDR: 100.64.0.0/13

# OS_NAME: ""
# OS_VERSION: ""
# OS_ARCH: ""

#! ---------------------------------------------------------------------
#! Autoscaler configuration
#! ---------------------------------------------------------------------

ENABLE_AUTOSCALER: false
# AUTOSCALER_MAX_NODES_TOTAL: "0"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_ADD: "10m"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_DELETE: "10s"
# AUTOSCALER_SCALE_DOWN_DELAY_AFTER_FAILURE: "3m"
# AUTOSCALER_SCALE_DOWN_UNNEEDED_TIME: "10m"
# AUTOSCALER_MAX_NODE_PROVISION_TIME: "15m"
# AUTOSCALER_MIN_SIZE_0:
# AUTOSCALER_MAX_SIZE_0:
# AUTOSCALER_MIN_SIZE_1:
# AUTOSCALER_MAX_SIZE_1:
# AUTOSCALER_MIN_SIZE_2:
# AUTOSCALER_MAX_SIZE_2:

#! ---------------------------------------------------------------------
#! Antrea CNI configuration
#! ---------------------------------------------------------------------

# ANTREA_NO_SNAT: false
# ANTREA_TRAFFIC_ENCAP_MODE: "encap"
# ANTREA_PROXY: false
# ANTREA_POLICY: true
# ANTREA_TRACEFLOW: false
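
Before creating the cluster, you can optionally render the manifest that would be applied, as a sanity check on the configuration. This assumes your version of the tanzu CLI supports the --dry-run flag, and uses the cluster name and config file from the create command below:

# Optional: preview the generated cluster manifest without creating anything
tanzu cluster create tkg-workload-aws-1 -f wc-config.yaml --dry-run > wc-manifest.yaml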
# Command to create the workload cluster with the config yaml created above

# Syntax: tanzu cluster create <workload-cluster-name> -f <config-file>

tanzu cluster create tkg-workload-aws-1 -f wc-config.yaml

# List the available workload clusters

$ tanzu cluster list
  NAME                NAMESPACE  STATUS   CONTROLPLANE  WORKERS  KUBERNETES        ROLES   PLAN
  tkg-workload-aws-1  default    running  1/1           1/1      v1.21.2+vmware.1  <none>  dev
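
Cluster creation takes several minutes. For a more detailed view than cluster list (control plane readiness and individual node status), the tanzu CLI also provides cluster get; a minimal check using the cluster created above:

# Show detailed status for the workload cluster, including its nodes
tanzu cluster get tkg-workload-aws-1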

 

Create an application

  • Now that the workload cluster is deployed, let's deploy a simple application. I have used an image that is already present in my Docker registry.
Create a tag for the public subnets
# Create tag. The kubernetes.io/cluster/<workload cluster name>=shared tag lets the AWS
# cloud provider discover this subnet when it provisions load balancers for the cluster.

aws ec2 create-tags --resources <public subnet id created by TKG during management cluster creation> --tags Key=kubernetes.io/cluster/<workload cluster name>,Value=shared
  • Log in to the AWS console > AWS services > VPC > Subnets > select the public subnet created by TKG > Tags, and verify that the tag is present. A CLI check is shown below.
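
The same verification from the command line, assuming the subnet ID used in the create-tags command above:

# Verify the tag was applied to the subnet
aws ec2 describe-tags --filters "Name=resource-id,Values=<public subnet id>"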

Deploy an application
# Ensure the context is set to the workload cluster.

# Syntax: tanzu cluster kubeconfig get <workload cluster name> --admin

$ tanzu cluster kubeconfig get tkg-workload-aws-1 --admin
Credentials of cluster 'tkg-workload-aws-1' have been saved
You can now access the cluster by running 'kubectl config use-context tkg-workload-aws-1-admin@tkg-workload-aws-1'

$ kubectl config use-context tkg-workload-aws-1-admin@tkg-workload-aws-1
Switched to context "tkg-workload-aws-1-admin@tkg-workload-aws-1".

$ kubectl config get-contexts
CURRENT   NAME                                          CLUSTER              AUTHINFO                   NAMESPACE
          tkg-mgmt-aws-admin@tkg-mgmt-aws               tkg-mgmt-aws         tkg-mgmt-aws-admin
*         tkg-workload-aws-1-admin@tkg-workload-aws-1   tkg-workload-aws-1   tkg-workload-aws-1-admin


# Deploy an application. In the example below, spring-deploy is the deployment name and eknath009/tbs-spring-image:3 is the image.

$ kubectl create deployment spring-deploy --port=8080 --image=eknath009/tbs-spring-image:3 --replicas=3
deployment.apps/spring-deploy created
$ kubectl get deploy,pods
NAME                            READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/spring-deploy   3/3     3            3           9s

NAME                                 READY   STATUS    RESTARTS   AGE
pod/spring-deploy-844c6c7688-7bc5h   1/1     Running   0          9s
pod/spring-deploy-844c6c7688-sttnk   1/1     Running   0          9s
pod/spring-deploy-844c6c7688-t5l6b   1/1     Running   0          9s

# Expose the application

$ kubectl expose deployment spring-deploy --port=8080 --type=LoadBalancer
service/spring-deploy exposed

# Get the load balancer

$ kubectl get svc
NAME            TYPE           CLUSTER-IP     EXTERNAL-IP                                                                 PORT(S)          AGE
kubernetes      ClusterIP      100.64.0.1     <none>                                                                      443/TCP          20h
spring-deploy   LoadBalancer   100.70.45.90   a5fcf72eaa411442e8db72dce089fa1f-1555891019.ap-south-1.elb.amazonaws.com    8080:30516/TCP   5s
  • Log in to the AWS console and navigate to EC2 > Load Balancers.
  • Check for the newly created load balancer and verify its state and health checks. The same details are available from kubectl, as shown below.
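
To inspect the Service without leaving the terminal, using the same Service name as above:

# Describe the Service to see the ELB hostname, ports, and provisioning events
kubectl describe svc spring-deploy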

Access the Application

  • Open a browser and enter the load balancer DNS name followed by port 8080, i.e. http://<load balancer DNS name>:8080. A command-line check is shown below.
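
Alternatively, test from the command line with curl, using the EXTERNAL-IP hostname from the kubectl get svc output above (yours will differ):

# Hit the application endpoint through the load balancer
curl http://a5fcf72eaa411442e8db72dce089fa1f-1555891019.ap-south-1.elb.amazonaws.com:8080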

Cute puppies, right?! Thanks for reading.