Single master Kubernetes install with kubeadm


 

In this scenario I will deploy a single-master architecture with two worker nodes.

 

Before you begin

  • A compatible Linux host. The Kubernetes project provides generic instructions for Linux distributions based on Debian and Red Hat, and those distributions without a package manager.
  • 2 GB or more of RAM per machine (any less will leave little room for your apps).
  • 2 CPUs or more.
  • Full network connectivity between all machines in the cluster (public or private network is fine).
  • Unique hostname, MAC address, and product_uuid for every node (a quick way to check these is shown after this list).
  • Certain ports are open on your machines (for example, 6443 for the API server).
  • Swap disabled. You MUST disable swap in order for the kubelet to work properly.
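
A minimal sketch to check some of these prerequisites on each node (the MAC address and product_uuid reported here must be unique across all nodes):

# hostname, primary MAC address, and product_uuid must differ on every node
hostname
ip link show
sudo cat /sys/class/dmi/id/product_uuid

# memory and CPU count
free -h
nproc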

 

 

Run on All Nodes  

 

First things first, switch to the root user:

 sudo -i 


Make sure that the br_netfilter module is loaded

lsmod | grep br_netfilter

# if br_netfilter is not loaded, run:

 sudo modprobe br_netfilter 


Verify

Master node

lsmod | grep br_netfilter

 

ubuntu@master:~$ lsmod | grep br_netfilter 

br_netfilter 28672 0 

bridge 249856 1 br_netfilter

 

Node01

lsmod | grep br_netfilter

ubuntu@node01:~$ lsmod | grep br_netfilter

 br_netfilter 28672 0 

bridge 249856 1 br_netfilter

 

Node02

lsmod | grep br_netfilter

ubuntu@node02:~$ lsmod | grep br_netfilter

 br_netfilter 28672 0 

bridge 249856 1 br_netfilter


As a requirement for your Linux Node's iptables to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your sysctl config, e.g.


cat <<EOF | sudo tee /etc/modules-load.d/k8s.conf
br_netfilter
EOF

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sudo sysctl --system 


* Applying /etc/sysctl.d/10-console-messages.conf ...
kernel.printk = 4 4 1 7
* Applying /etc/sysctl.d/10-ipv6-privacy.conf ...
net.ipv6.conf.all.use_tempaddr = 2
net.ipv6.conf.default.use_tempaddr = 2
* Applying /etc/sysctl.d/10-kernel-hardening.conf ...
kernel.kptr_restrict = 1
* Applying /etc/sysctl.d/10-link-restrictions.conf ...
fs.protected_hardlinks = 1
fs.protected_symlinks = 1
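
To confirm the bridge settings were applied, you can read the values back directly; both should report 1:

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables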


Install a container runtime (Docker)

sudo apt-get update
sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

Add Docker's official GPG key

curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

 

Set up the stable repository

echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null


sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
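
A quick optional check that Docker installed correctly:

sudo docker --version

sudo docker run hello-world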


Configure the Docker daemon to use systemd as the cgroup driver

sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF

sudo systemctl enable docker 

sudo systemctl daemon-reload 

sudo systemctl restart docker 
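
You can verify that Docker is now using the systemd cgroup driver:

sudo docker info | grep -i "cgroup driver"

# expected output: Cgroup Driver: systemd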

 

Swap off 

swapoff -a

# keep swap off after reboot

sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
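
Verify that swap is off; free should report 0B of swap and swapon should print nothing:

free -h

swapon --show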


Install kubeadm, kubelet, and kubectl

sudo apt-get update 

sudo apt-get install -y apt-transport-https ca-certificates curl

 

sudo curl -fsSLo /usr/share/keyrings/kubernetes-archive-keyring.gpg https://packages.cloud.google.com/apt/doc/apt-key.gpg

echo "deb [signed-by=/usr/share/keyrings/kubernetes-archive-keyring.gpg] https://apt.kubernetes.io/ kubernetes-xenial main" | sudo tee /etc/apt/sources.list.d/kubernetes.list

 

sudo apt-get update
sudo apt-get install -y kubelet kubeadm kubectl
sudo apt-mark hold kubelet kubeadm kubectl
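
Confirm the tools are installed and pinned to matching versions:

kubeadm version

kubectl version --client

kubelet --version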

 

On the master node

Make sure that port 6443 is allowed on your network (the worker nodes must be able to reach the API server on this port).

--pod-network-cidr : the pod IP address pool

--apiserver-advertise-address : the master node IP address


kubeadm init --pod-network-cidr 10.25.0.0/16 --apiserver-advertise-address=172.31.20.217

 

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
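
At this point kubectl on the master should be able to reach the API server:

kubectl cluster-info

kubectl get nodes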

Copy the kubeadm join command printed at the end of the kubeadm init output into a script; you will run it on the worker nodes in a later step:

vi kubeadmjoin.sh


#!/bin/bash
kubeadm join 172.31.20.217:6443 --token t4urps.u8j8bdstc82u01i4 \
    --discovery-token-ca-cert-hash sha256:0f04f8d7aee85fcdd2e92e63699831f212ffd376fd3b3ae82c509bdfc4f3dfd9
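
If you lose the join command or the token expires (bootstrap tokens are valid for 24 hours by default), you can regenerate it on the master:

kubeadm token create --print-join-command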


Install the Calico CNI

kubectl create -f https://docs.projectcalico.org/manifests/tigera-operator.yaml

Optionally, if you plan to use the calicoctl command-line tool, create its configuration directory and copy your calicoctl.cfg into place (calicoctl reads /etc/calico/calicoctl.cfg by default):

sudo mkdir -p /etc/calico

sudo cp calicoctl.cfg /etc/calico

cat <<EOF > custom-resources.yaml
# This section includes base Calico installation configuration.
# For more information, see: https://docs.projectcalico.org/v3.20/reference/installation/api#operator.tigera.io/v1.Installation
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  # Configures Calico networking.
  calicoNetwork:
    # Note: The ipPools section cannot be modified post-install.
    ipPools:
    - blockSize: 26
      cidr: 10.25.0.0/16
      encapsulation: None
      natOutgoing: Enabled
      nodeSelector: all()
EOF


kubectl create -f custom-resources.yaml
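
Watch the Calico pods come up before continuing; the nodes will only move to Ready once Calico is running:

watch kubectl get pods -n calico-system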
 

Optionally, remove the control-plane taint so that pods can also be scheduled on the master node:

kubectl taint nodes --all node-role.kubernetes.io/master-

root@k8s-master:~# kubectl get node
NAME         STATUS     ROLES                  AGE     VERSION
k8s-master   NotReady   control-plane,master   3m51s   v1.22.4

The master will stay NotReady until the Calico pods are up and running.

Copy kubeadmjoin.sh to the worker nodes (node01 and node02) and run it on each of them:

vi kubeadmjoin.sh


#!/bin/bash
kubeadm join 172.31.20.217:6443 --token t4urps.u8j8bdstc82u01i4 \
    --discovery-token-ca-cert-hash sha256:0f04f8d7aee85fcdd2e92e63699831f212ffd376fd3b3ae82c509bdfc4f3dfd9

chmod +x kubeadmjoin.sh

bash kubeadmjoin.sh

Back on the master, watch the nodes join the cluster:

$ watch kubectl get nodes

NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   11m   v1.22.4
node01       Ready    <none>                 45s   v1.22.4
node02       Ready    <none>                 47s   v1.22.4

 

Now we can deploy Kubernetes workloads on this cluster.
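
As a quick smoke test (an illustrative example, not part of the original setup), you can deploy nginx and expose it with a NodePort service:

kubectl create deployment nginx --image=nginx

kubectl expose deployment nginx --port=80 --type=NodePort

kubectl get pods,svc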

 

Ref: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/

https://docs.projectcalico.org/getting-started/kubernetes/quickstart


 

 
