TAP on EKS (beta-4): Part 1 – Prepare the setup

Reading Time: 6 mins

Tanzu Application Platform (TAP) is a packaged set of components that helps developers and operators build, deploy, and manage apps on Kubernetes. It is currently in beta (use for testing only).

Note: Please refer to the official VMware documentation for the latest updates.

Overview

In this post, I will walk through preparing the environment for TAP on Amazon EKS: creating the EKS cluster, installing kubectl, the AWS CLI, and the Tanzu CLI with its plugins, and setting up a Google Container Registry (GCR) repository for images.

Prerequisites

  • Tanzu Network account to download Tanzu Application Platform packages
  • A container image registry, such as Harbor, Docker Hub, or Google Container Registry, with at least 20 GB of available storage for application images, base images, and runtime dependencies. In this demo, I will be using gcr.io for storing images.
  • Network access to https://registry.tanzu.vmware.com
  • Access to Git infrastructure such as GitHub, GitLab, or Azure DevOps
  • Kubernetes cluster v1.19 or later running on any supported environment, including AKS, EKS, GKE, Kind, Minikube, TKG 1.4, or TCE, with at least 50 GB of disk per node. In this demo, we will deploy TAP on Amazon EKS.

Note: Refer to the official documentation for more details.

Create EKS Cluster

In this section, I will be taking you through the steps to create a Kubernetes cluster on Amazon EKS.

Create Cluster Service Role

  • Log in to the AWS Management Console > IAM > Access Management > Roles > Create Role
  • Select AWS service
  • Under Select a service to view its use cases, select EKS
  • Under Select your use case, choose EKS Cluster and click Next: Permissions
  • Leave AmazonEKSClusterPolicy selected by default and click Next: Tags
  • Add tags (optional) and click Next: Review
  • Give the role a name and click Create role. (An AWS CLI alternative is sketched just below this list.)
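
If you prefer the AWS CLI (installed later in this post), the same role can be created from the command line. This is a minimal sketch; the role name tap-eks-cluster-role and the trust-policy file name are placeholders I chose for illustration:

# Trust policy that lets the EKS service assume the role (file name is a placeholder)
cat > eks-cluster-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "eks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the role and attach the same managed policy the console wizard attaches
aws iam create-role --role-name tap-eks-cluster-role \
  --assume-role-policy-document file://eks-cluster-trust-policy.json
aws iam attach-role-policy --role-name tap-eks-cluster-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSClusterPolicy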

 

Create EKS Cluster

  • Log in to the AWS Management Console > Elastic Kubernetes Service > Add cluster > Create
  • Under Configure cluster, give the cluster a name, select the cluster service role created earlier, and click Next
  • Under Specify networking, leave the values at their defaults and click Next
  • Under Configure logging, leave the values at their defaults and click Next
  • Click Create

Cluster creation takes a while to complete; upon successful completion, the status should show Active.
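
The control plane can also be created from the CLI. A minimal sketch, assuming the AWS CLI is already configured; the account ID, subnet IDs, and role name are placeholders (use at least two subnets in different availability zones):

# Create the EKS control plane
aws eks create-cluster --name tap-demo-cluster \
  --role-arn arn:aws:iam::<account-id>:role/tap-eks-cluster-role \
  --resources-vpc-config subnetIds=subnet-aaaa1111,subnet-bbbb2222

# Wait until the cluster status turns ACTIVE
aws eks wait cluster-active --name tap-demo-cluster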

Create Node IAM Role

  • Log in to the AWS Management Console > IAM > Access Management > Roles > Create Role
  • Select AWS service > EC2 and click Next: Permissions
  • Under Attach permissions policies, select the policies below and click Next: Tags
        • AmazonEKSWorkerNodePolicy
        • AmazonEC2ContainerRegistryReadOnly
        • AmazonEKS_CNI_Policy
  • Add tags (optional) and click Next: Review
  • Give the role a name and click Create role. (An AWS CLI alternative is sketched just below this list.)
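
The node role can likewise be created from the CLI. A minimal sketch; the role name tap-eks-node-role and the trust-policy file name are placeholders:

# Trust policy that lets EC2 instances (the worker nodes) assume the role
cat > node-trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ec2.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

aws iam create-role --role-name tap-eks-node-role \
  --assume-role-policy-document file://node-trust-policy.json

# Attach the three managed policies listed above
for policy in AmazonEKSWorkerNodePolicy AmazonEC2ContainerRegistryReadOnly AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy --role-name tap-eks-node-role \
    --policy-arn "arn:aws:iam::aws:policy/$policy"
done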

Create Node group in EKS cluster

  • Click on the newly created EKS cluster > Configuration > Compute > Add node group
  • Name: give the node group a name
  • Node IAM role: select the node role created in the previous step

Note: There are various optional fields such as launch templates, labels, and taints. In this demo, I stuck with the default values.

  • Click Next

  • Node group compute configuration: To deploy all TAP packages, your cluster must have at least 8 GB of RAM available across all nodes, and at least 8 CPUs for i9 (or equivalent) or 12 CPUs for i7. For this EKS cluster, I chose t3.xlarge instances with a 30 GiB disk size.

  • Node group scaling configuration: I set both the minimum and maximum size to 2; you can certainly add more nodes based on your requirements.

  • Node Group update configuration: Leave as default and click Next
  • Node group network configuration: Leave as default (to be able to SSH into the worker nodes, enable the Configure SSH access to nodes option) and click Next
  • Review and click Create

Node creation takes 5-10 minutes depending on the region selected; upon successful completion, the status should turn Active.
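
For reference, the same node group can be created from the CLI. A minimal sketch mirroring the values chosen above (t3.xlarge, 30 GiB disk, 2 nodes); the node group name, account ID, and subnet IDs are placeholders:

# Create a managed node group in the cluster
aws eks create-nodegroup --cluster-name tap-demo-cluster \
  --nodegroup-name tap-demo-nodes \
  --node-role arn:aws:iam::<account-id>:role/tap-eks-node-role \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --instance-types t3.xlarge --disk-size 30 \
  --scaling-config minSize=2,maxSize=2,desiredSize=2

# Wait until the node group becomes ACTIVE
aws eks wait nodegroup-active --cluster-name tap-demo-cluster --nodegroup-name tap-demo-nodes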

In this demo, I am using an Ubuntu instance deployed on AWS as a jumpbox to perform the TAP install; you can do the same, or use your local workstation to connect to the EKS cluster and install TAP.

Install Tanzu CLI, plugins and kubectl

Install Kubectl

Kubectl
# Download the latest release with the command: 

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"

# Install kubectl
sudo install -o root -g root -m 0755 kubectl /usr/local/bin/kubectl

# Verify the installed client version
kubectl version --client
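
Optionally, you can also verify the downloaded binary against its published checksum (the Kubernetes release site publishes a .sha256 file alongside each binary):

# Download the checksum for the same release and validate the binary
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl.sha256"
echo "$(cat kubectl.sha256)  kubectl" | sha256sum --check
# Expected output: kubectl: OK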

Install AWS CLI

AWS CLI
# Download the latest awscli 

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"

# Install unzip if not already present
sudo apt install unzip

# Unzip the AWS CLI archive
unzip awscliv2.zip

# Install
sudo ./aws/install

# verify the awscli version
$ aws --version
aws-cli/2.4.8 Python/3.8.8 Linux/5.11.0-1022-aws exe/x86_64.ubuntu.20 prompt/off

# Configure the AWS CLI. Refer to https://docs.aws.amazon.com/powershell/latest/userguide/pstools-appendix-sign-up.html for the steps to fetch your AWS access key ID and secret.

$ aws configure
AWS Access Key ID [None]: <copy the access key ID>
AWS Secret Access Key [None]: <copy the secret>
Default region name [None]: ap-south-1
Default output format [None]:

# Optional: if you have an AWS session token, set it as well

$ aws configure set aws_session_token <Copy the token>

# Update the kubeconfig to connect to the EKS cluster. Syntax: aws eks update-kubeconfig --region <region> --name <EKS cluster name>

$ aws eks update-kubeconfig --region ap-south-1 --name tap-demo-cluster
Added new context arn:aws:eks:ap-south-1:778018584600:cluster/tap-demo-cluster to /home/ubuntu/.kube/config

# Get EKS cluster nodes

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE   VERSION
ip-172-31-31-70.ap-south-1.compute.internal   Ready    <none>   45m   v1.21.5-eks-bc4871b
ip-172-31-44-37.ap-south-1.compute.internal   Ready    <none>   45m   v1.21.5-eks-bc4871b

# Verify the context and ensure * is pointing to the correct cluster.

$ kubectl config get-contexts
CURRENT   NAME                                                            CLUSTER                                                         AUTHINFO                                                        NAMESPACE
*         arn:aws:eks:ap-south-1:778018584600:cluster/tap-demo-cluster   arn:aws:eks:ap-south-1:778018584600:cluster/tap-demo-cluster   arn:aws:eks:ap-south-1:778018584600:cluster/tap-demo-cluster

Install Tanzu CLI

Sign in to Tanzu Network and accept the required EULAs.

Sign in to Tanzu Network, download tanzu-cluster-essentials-linux-amd64-1.0.0.tgz (for Linux) onto your local machine, and copy it to the destination jumpbox using scp, or download it directly using the pivnet CLI.

scp command
Syntax: scp -i <key file (include path)> <file name (include path)> ubuntu@<ip>:/tmp

$ scp -i ~/.ssh/jumpbox-aws.pem tanzu-cluster-essentials-linux-amd64-1.0.0.tgz ubuntu@13.232.59.142:/tmp

ubuntu@ip-172-31-35-32:~$ ls
tanzu tanzu-cluster-essentials tanzu-cluster-essentials-linux-amd64-1.0.0.tgz

# Configure and run install.sh, which installs kapp-controller and secretgen-controller on your cluster:

$ mkdir $HOME/tanzu-cluster-essentials
$ tar -xvf tanzu-cluster-essentials-linux-amd64-1.0.0.tgz -C $HOME/tanzu-cluster-essentials
install.sh
imgpkg
kbld
kapp
ytt

export INSTALL_BUNDLE=registry.tanzu.vmware.com/tanzu-cluster-essentials/cluster-essentials-bundle@sha256:82dfaf70656b54dcba0d4def85ccae1578ff27054e7533d08320244af7fb0343
export INSTALL_REGISTRY_HOSTNAME=registry.tanzu.vmware.com
export INSTALL_REGISTRY_USERNAME=TANZU-NET-USER
export INSTALL_REGISTRY_PASSWORD=TANZU-NET-PASSWORD
cd $HOME/tanzu-cluster-essentials
./install.sh

Note: A Succeeded message at the end of the output indicates a successful installation.

# Install the kapp CLI onto your $PATH:

sudo cp $HOME/tanzu-cluster-essentials/kapp /usr/local/bin/kapp

# verify kapp version
$ kapp version
kapp version 0.42.0

Succeeded
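
Before moving on, it is worth confirming that the two controllers installed by install.sh are running. A quick check, assuming the default kapp-controller and secretgen-controller namespaces used by the cluster-essentials install script:

# Both namespaces should show pods in Running state
kubectl get pods -n kapp-controller
kubectl get pods -n secretgen-controller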

Sign in to Tanzu Network, open the tanzu-cli-0.12.0 folder, download tanzu-framework-bundle-linux (for Linux) onto your local machine, and copy it to the destination jumpbox using scp, or download it directly using the pivnet CLI.

# Create a directory named tanzu:

$ mkdir $HOME/tanzu

# Unpack the TAR file into the tanzu directory:

$ tar -xvf tanzu-framework-linux-amd64.tar -C $HOME/tanzu

# Install the CLI core:

cd $HOME/tanzu
sudo install cli/core/v0.12.0/tanzu-core-linux_amd64 /usr/local/bin/tanzu

$ tanzu version
version: v0.12.0
buildDate: 2021-11-25
sha: ff02d464

# Disable the context-aware CLI for plugins feature so that the downloaded plugins can be installed without errors:

$ tanzu config set features.global.context-aware-cli-for-plugins false

# Install the local versions of the plugins you downloaded:

$ tanzu plugin install --local cli all

# Check the plugin installation status:

$ tanzu plugin list

NAME                LATEST VERSION  DESCRIPTION                                                         REPOSITORY  VERSION  STATUS
accelerator                         Manage accelerators in a Kubernetes cluster                                     v0.5.0   installed
apps                                Applications on Kubernetes                                                      v0.3.0   installed
cluster             v0.14.0         Kubernetes cluster operations                                       core        v0.12.0  upgrade available
kubernetes-release  v0.14.0         Kubernetes release operations                                       core        v0.12.0  upgrade available
login               v0.14.0         Login to the platform                                               core        v0.12.0  upgrade available
management-cluster  v0.14.0         Kubernetes management cluster operations                            core        v0.12.0  upgrade available
package             v0.14.0         Tanzu package management                                            core        v0.12.0  upgrade available
pinniped-auth       v0.14.0         Pinniped authentication operations (usually not directly invoked)  core        v0.12.0  upgrade available
secret              v0.14.0         Tanzu secret management                                             core        v0.12.0  upgrade available
services                            Discover Service Types and manage Service Instances (ALPHA)                    v0.1.0   installed

Install Docker

  • Refer to the official Docker documentation for the detailed installation steps; a minimal sketch for the Ubuntu jumpbox follows below.
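
For reference, here is a minimal sketch of installing Docker on the Ubuntu jumpbox from the distro package (docker.io); the official Docker documentation covers the docker-ce repository method in more detail:

# Install Docker from the Ubuntu repositories
sudo apt update
sudo apt install -y docker.io

# Allow the current user to run docker without sudo (log out and back in for this to take effect)
sudo usermod -aG docker $USER

# Verify the installation
docker version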

Image Repo (GCR) – Optional

Note: If you have a Docker Hub Pro account, you can use it, since it does not impose pull rate limits. Otherwise, use gcr.io (which I will cover in this demo) or Harbor.

  • Log in to the Google Cloud console > IAM & Admin > Service Accounts > Create Service Account
  • Give it a name and click Create and Continue
  • Grant the roles required to push and pull images and click Done (a gcloud CLI alternative is sketched below)

  • Click on the newly created service account > Keys > Create new key
  • Select JSON as the key type and click Create. This downloads a JSON key file; keep it safe and secure.
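
If you prefer the gcloud CLI, the service account and key can be created as follows. A minimal sketch; the service account name, the project ID placeholder, and the roles/storage.admin role (which grants push access to the storage bucket backing gcr.io) are my own assumptions:

# Create the service account (names are placeholders)
gcloud iam service-accounts create tap-gcr-sa --project=<project-id>

# Grant it permission to push and pull images
gcloud projects add-iam-policy-binding <project-id> \
  --member="serviceAccount:tap-gcr-sa@<project-id>.iam.gserviceaccount.com" \
  --role="roles/storage.admin"

# Create and download a JSON key for the service account
gcloud iam service-accounts keys create tap-gcr-sa-key.json \
  --iam-account=tap-gcr-sa@<project-id>.iam.gserviceaccount.com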

Test Access

  • Copy the downloaded JSON key to the jumpbox or to whichever machine you plan to run the Tanzu commands from for the TAP install.
# Log in to gcr.io with the docker login command using the downloaded JSON key

$ docker login -u _json_key -p "$(cat eknath-se-cc6b9fe1ac86.json)" https://gcr.io
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /home/ubuntu/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

# Test the access

# Pull a test image and tag it for gcr.io
docker pull busybox
docker tag busybox gcr.io/eknath-se/test-repo/busybox:latest

# Push the image to gcr.io

$ docker push gcr.io/eknath-se/test-repo/busybox:latest
The push refers to repository [gcr.io/eknath-se/test-repo/busybox]
01fd6df81c8e: Layer already exists
latest: digest: sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee size: 527
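
To avoid the insecure-password warning shown above, you can pipe the key file to docker login via --password-stdin instead of passing it with -p; a small variation on the same command:

# The key is read from stdin and does not appear in the shell history or process list
cat eknath-se-cc6b9fe1ac86.json | docker login -u _json_key --password-stdin https://gcr.io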

Now we are all set to proceed with the TAP install on the EKS cluster using the GCR image repo.