Contents

Kubernetes Cluster Design

Kubernetes Cluster Install Tools

Considerations

Purpose

Education

  • Minikube
  • Single node cluster with kubeadm/GCP/AWS

Development & Testing

  • Multi-Node Cluster with a Single Master and Multiple Workers
  • Set up using the kubeadm tool, or quickly provision on GCP, AWS, or AKS

Hosting Production Applications

  • High Availability Multi-Node Cluster with multiple master nodes.
  • kubeadm or kOps on GCP or AWS or other supported platforms.
  • Up to 5,000 nodes
  • Up to 150,000 PODs in the cluster
  • Up to 300,000 total containers
  • Up to 100 PODs per node
Nodes     GCP Type         GCP Spec          AWS Type     AWS Spec
1~5       n1-standard-1    1 vCPU 3.75 GB    m3.medium    1 vCPU 3.75 GB
6~10      n1-standard-2    2 vCPU 7.5 GB     m3.large     2 vCPU 7.5 GB
11~100    n1-standard-4    4 vCPU 15 GB      m3.xlarge    4 vCPU 15 GB
101~250   n1-standard-8    8 vCPU 30 GB      m3.2xlarge   8 vCPU 30 GB
251~500   n1-standard-16   16 vCPU 60 GB     c4.4xlarge   16 vCPU 60 GB
over 500  n1-standard-32   32 vCPU 120 GB    c4.8xlarge   32 vCPU 120 GB

On-Premise or Cloud Service

  • Use kubeadm for On-Premise
  • GKE(Google Kubernetes Engine) for GCP(Google Cloud Platform)
  • kOps for AWS
  • AKS(Azure Kubernetes Service) for Azure

Storage

  • High Performance - SSD Backed Storage
  • Multiple Concurrent Connections - Network Based Storage
  • Persistent shared volumes for shared access across multiple PODs
  • Label nodes with specific disk types
  • Use Node Selectors to assign applications to nodes with specific disk types
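The label-plus-Node-Selector pattern above can be sketched as a Pod spec. The `disktype=ssd` label and the image here are hypothetical examples, not part of the original design:

```yaml
# First label a node (hypothetical node name):
#   kubectl label nodes node-1 disktype=ssd
apiVersion: v1
kind: Pod
metadata:
  name: fast-storage-app        # hypothetical Pod name
spec:
  nodeSelector:
    disktype: ssd               # schedule only onto nodes carrying this label
  containers:
    - name: app
      image: nginx              # placeholder image
```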

Nodes

  • Virtual or Physical Machines
  • Minimum of 4 Node Cluster (size based on workload)
  • Master vs Worker Nodes
  • Linux x86_64 Architecture
  • Master nodes can host workloads
  • Best practice is to not host workloads on Master nodes
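kubeadm enforces this best practice with a taint on Master nodes. A sketch of what appears in the node spec — note the taint key is version-dependent (recent releases use `node-role.kubernetes.io/control-plane`; older ones used `node-role.kubernetes.io/master`):

```yaml
# Taint placed on control-plane nodes so ordinary Pods are not scheduled there
spec:
  taints:
    - key: node-role.kubernetes.io/control-plane
      effect: NoSchedule
```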

Master Nodes Structure

Structure
Best practice is to run at least 3 ETCD instances and at least 2 Master nodes.
/kubernetes-cluster-design/design.png

Kubernetes Infrastructure

On-Premise vs Cloud Service

On-Premise
On-Premise means running the cluster in your own server room or data center.
Cloud Service
AWS, GCP, Azure, etc.

Our Choice

Our Laptop!
This is for studying Kubernetes, so our laptop is enough.

Linux vs Windows

Linux
Kubernetes runs on Linux systems.

Our Choice

Ubuntu
I’m going to use Ubuntu!

minikube vs Kubeadm

minikube
minikube is local Kubernetes, focusing on making it easy to learn and develop for Kubernetes.
Kubeadm
Kubeadm is a tool built to provide kubeadm init and kubeadm join as best-practice “fast paths” for creating Kubernetes clusters.

Our Choice

Kubeadm
I’m going to build a Kubernetes cluster with multiple nodes, so I’m going to use kubeadm.

Turnkey Solutions vs Hosted Solutions(Managed Solutions)

Turnkey Solutions

Turnkey Solution Responsibility
  • You provision VMs
  • You configure VMs
  • You use scripts to deploy cluster
  • You maintain VMs yourself
Turnkey Solution Products

Hosted Solutions(Managed Solutions)

Hosted Solutions Responsibility
  • Kubernetes As A Service
  • Provider provisions VMs
  • Provider installs Kubernetes
  • Provider maintains VMs
Hosted Solution Products

Our Choice

VirtualBox
I’m going to build VMs with VirtualBox.

Networking Solution

Network Solution Products

Our Choice

Weave Net
Weave Net (by Weaveworks) is our choice!

HA(High Availability) Kubernetes Cluster

HA
We need at least 2 Master nodes for HA, plus 2 or more Worker nodes.

Master Nodes

/kubernetes-cluster-design/design-2-master-nodes.png

API Server

/kubernetes-cluster-design/design-api-server.png

Controller Manager

kube-controller-manager --leader-elect true [other options]
                        --leader-elect-lease-duration 15s
                        --leader-elect-renew-deadline 10s
                        --leader-elect-retry-period 2s
/kubernetes-cluster-design/design-contoller-manager.png

Stacked Topology

/kubernetes-cluster-design/design-stacked-topology.png
  • Easier to setup
  • Easier to manage
  • Fewer servers
  • Risk during failures

External ETCD Topology

/kubernetes-cluster-design/design-external-etcd-topology.png
  • Less risky
  • Harder to setup
  • More servers

Config ETCD to kube-apiserver

--etcd-servers
Check --etcd-servers
spec:
  containers:
    - command:
        - kube-apiserver
        - --advertise-address=192.168.49.2
        - --allow-privileged=true
        - --authorization-mode=Node,RBAC
        - --client-ca-file=/var/lib/minikube/certs/ca.crt
        - --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota
        - --enable-bootstrap-token-auth=true
        - --etcd-cafile=/var/lib/minikube/certs/etcd/ca.crt
        - --etcd-certfile=/var/lib/minikube/certs/apiserver-etcd-client.crt
        - --etcd-keyfile=/var/lib/minikube/certs/apiserver-etcd-client.key
        - --etcd-servers=https://10.240.0.10:2379,https://10.240.0.11:2379
        - --insecure-port=0
        - --kubelet-client-certificate=/var/lib/minikube/certs/apiserver-kubelet-client.crt
        - --kubelet-client-key=/var/lib/minikube/certs/apiserver-kubelet-client.key
        - --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname
        - --proxy-client-cert-file=/var/lib/minikube/certs/front-proxy-client.crt
        - --proxy-client-key-file=/var/lib/minikube/certs/front-proxy-client.key
        - --requestheader-allowed-names=front-proxy-client
        - --requestheader-client-ca-file=/var/lib/minikube/certs/front-proxy-ca.crt
        - --requestheader-extra-headers-prefix=X-Remote-Extra-
        - --requestheader-group-headers=X-Remote-Group
        - --requestheader-username-headers=X-Remote-User
        - --secure-port=8443
        - --service-account-issuer=https://kubernetes.default.svc.cluster.local
        - --service-account-key-file=/var/lib/minikube/certs/sa.pub
        - --service-account-signing-key-file=/var/lib/minikube/certs/sa.key
        - --service-cluster-ip-range=10.96.0.0/12
        - --tls-cert-file=/var/lib/minikube/certs/apiserver.crt
        - --tls-private-key-file=/var/lib/minikube/certs/apiserver.key

Our Choice

Our Structure
2 stacked topology master nodes
2 worker nodes
1 ETCD on host

ETCD in HA

Design
ETCD is a distributed, reliable key-value store for the most critical data of a distributed system.

key-value store

JSON

{
  "name": "John Doe",
  "age": 43,
  "location": "New York"
}

YAML

name: "John Doe"
age: 43
location: "New York"

TOML

name = "John Doe"
age = 43
location = "New York"

distributed

Consistent

/kubernetes-cluster-design/design-consistent.png
Read
Reads are no problem: every node holds the same data, so any node can serve a read.
/kubernetes-cluster-design/design-etcd-read.png
Write
When two users write to the same key at the same time, there is a conflict. To avoid this, writes are handled only by the ETCD Leader.
/kubernetes-cluster-design/design-write-at-same-time.png
Leader Election - RAFT
Majority
More than half of the nodes: N/2 + 1 (integer division).
Quorum
The minimum number of nodes that must be available for the cluster to function: N/2 + 1 (integer division).
Quorum
Instances  Quorum  Fault Tolerance
1          1       0
2          2       0
3          2       1
4          3       1
5          3       2
6          4       2
7          4       3
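The Quorum = N/2 + 1 arithmetic behind these tables can be checked with a few lines of shell (integer division in `$(( ))` matches the formula):

```shell
#!/bin/sh
# Quorum = N/2 + 1 (integer division); fault tolerance = N - quorum
for n in 1 2 3 4 5 6 7; do
  quorum=$(( n / 2 + 1 ))
  echo "instances=$n quorum=$quorum fault_tolerance=$(( n - quorum ))"
done
# e.g. for n=3 this prints: instances=3 quorum=2 fault_tolerance=1
```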
Odd or Even?
Managers  Majority  Fault Tolerance
1         1         0
2         2         0
3         2         1
4         3         1
5         3         2
6         4         2
7         4         3
An even number of instances adds no fault tolerance over the next lower odd number, so an odd number of instances is recommended.

Getting Started

wget -q --https-only \
  "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
mkdir -p /etc/etcd /var/lib/etcd
cp ca.pem kubernetes-key.pem kubernetes.pem /etc/etcd/

etcd.service

--initial-cluster
--initial-cluster peer-1=https://${PEER1_IP}:2380,peer-2=https://${PEER2_IP}:2380
ExecStart=/usr/local/bin/etcd \\
  --name ${ETCD_NAME} \\
  --cert-file=/etc/etcd/kubernetes.pem \\
  --key-file=/etc/etcd/kubernetes-key.pem \\
  --peer-cert-file=/etc/etcd/kubernetes.pem \\
  --peer-key-file=/etc/etcd/kubernetes-key.pem \\
  --trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-trusted-ca-file=/etc/etcd/ca.pem \\
  --peer-client-cert-auth \\
  --client-cert-auth \\
  --initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-peer-urls https://${INTERNAL_IP}:2380 \\
  --listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
  --advertise-client-urls https://${INTERNAL_IP}:2379 \\
  --initial-cluster-token etcd-cluster-0 \\
  --initial-cluster peer-1=https://${PEER1_IP}:2380,peer-2=https://${PEER2_IP}:2380 \\
  --initial-cluster-state new \\
  --data-dir=/var/lib/etcd

etcdctl

Set ETCD API Version

export ETCDCTL_API=3

Data Control Commands

Put
etcdctl put name john
Get
etcdctl get name
name
john
Get List
etcdctl get --prefix --keys-only

Our Design

Design
I’m going to use the design below.
/kubernetes-cluster-design/design-cluster-design.png