Deploying a highly available Kubernetes cluster with kubeadm (v1.14.2)

1. Introduction

This document describes building a highly available Kubernetes v1.14.2 cluster in a test environment using kubeadm.

2. Server version and architecture information

System version: CentOS Linux release 7.6.1810 (Core)
Kernel: 4.4.184-1.el7.elrepo.x86_64 (note: the kernel installed later in this document may be newer than this version)
Kubernetes: v1.14.2
Docker-ce: 18.06
Network component: calico
Hardware configuration: 16 cores, 64 GB RAM
Keepalived keeps the apiserver VIP highly available; HAProxy load-balances the apiserver instances.

3. Server role planning

Be sure to substitute your own server IPs and host names.

The master01/02 nodes run the kubelet, keepalived, haproxy, controller-manager, apiserver, scheduler, docker, kube-proxy, and calico components.

The master03 node runs the kubelet, controller-manager, apiserver, scheduler, docker, kube-proxy, and calico components.

The node01/node03 nodes run the kubelet, kube-proxy, docker, and calico components.

Apart from kubelet and docker, the control-plane components run as static pods (kube-proxy and calico run as DaemonSets).
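If you want to confirm this later, the static pod manifests live under /etc/kubernetes/manifests on each master (a quick check, assuming the default kubeadm layout):

ls /etc/kubernetes/manifests/
# On a master you would expect: etcd.yaml  kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml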

VIP (load balancer address): 192.168.4.110

node name   role    IP              installed software
master-01   master  192.168.4.129   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-02   master  192.168.4.130   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
master-03   master  192.168.4.133   kubeadm, kubelet, kubectl, docker
node-01     node    192.168.4.128   kubeadm, kubelet, kubectl, docker
node-03     node    192.168.4.132   kubeadm, kubelet, kubectl, docker

Service network segment: 10.209.0.0/16

4. Server initialization

4.1 Disable SELinux, firewalld and iptables (execute on all machines)

setenforce 0 && sed -i 's/^SELINUX=.*$/SELINUX=disabled/' /etc/selinux/config && getenforce

systemctl stop firewalld && systemctl daemon-reload && systemctl disable firewalld && systemctl daemon-reload && systemctl status firewalld

yum install -y iptables-services && systemctl stop iptables && systemctl disable iptables && systemctl status iptables

4.2 Add host resolution records on every server (execute on all machines)

cat >> /etc/hosts << EOF
192.168.4.129 master01
192.168.4.130 master02
192.168.4.133 master03
192.168.4.128 node01
192.168.4.132 node03
EOF

4.3 Switch to the Aliyun yum repositories (execute on all machines)

yum install wget -y
cp -r /etc/yum.repos.d /etc/yum.repos.d.bak
rm -f /etc/yum.repos.d/*.repo
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo && wget -O /etc/yum.repos.d/epel.repo http://mirrors.aliyun.com/repo/epel-7.repo
yum clean all && yum makecache

4.4 Set limits.conf (executed on all machines)

cat >> /etc/security/limits.conf << EOF
# End of file
* soft nproc 10240000
* hard nproc 10240000
* soft nofile 10240000
* hard nofile 10240000
EOF

4.5 Set sysctl.conf (execute on all machines)

[ ! -e "/etc/sysctl.conf_bk" ] && /bin/mv /etc/sysctl.conf{,_bk} && cat > /etc/sysctl.conf << EOF
fs.file-max=1000000
fs.nr_open=20480000
net.ipv4.tcp_max_tw_buckets = 180000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_rmem = 4096 87380 4194304
net.ipv4.tcp_wmem = 4096 16384 4194304
net.ipv4.tcp_max_syn_backlog = 16384
net.core.netdev_max_backlog = 32768
net.core.somaxconn = 32768
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_timestamps = 0
net.ipv4.tcp_fin_timeout = 20
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syncookies = 1
#net.ipv4.tcp_tw_len = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.ip_local_port_range = 1024 65000
#net.nf_conntrack_max = 6553500
#net.netfilter.nf_conntrack_max = 6553500
#net.netfilter.nf_conntrack_tcp_timeout_close_wait = 120
#net.netfilter.nf_conntrack_tcp_timeout_fin_wait = 120
#net.netfilter.nf_conntrack_tcp_timeout_time_wait = 120
#net.netfilter.nf_conntrack_tcp_timeout_established = 3600
EOF
sysctl -p

4.6 Configure time synchronization (execute on all machines)

ntpdate -u pool.ntp.org
# Add a cron job with crontab -e to sync time every 15 minutes:
*/15 * * * * /usr/sbin/ntpdate -u pool.ntp.org >/dev/null 2>&1

4.7 Configure k8s.conf (execute on all machines)

cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
# Load the bridge module and apply the changes
modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf

4.8 Close the swap partition (execute on all machines)

swapoff -a
yes | cp /etc/fstab /etc/fstab_bak
cat /etc/fstab_bak | grep -v swap > /etc/fstab

4.9 Upgrade the system kernel (execute on all machines)

yum update -y

rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-2.el7.elrepo.noarch.rpm; yum --enablerepo=elrepo-kernel install kernel-lt-devel kernel-lt -y

View the current default kernel:
grub2-editenv list

# Note: the following command will list multiple kernel versions
cat /boot/grub2/grub.cfg | grep "menuentry "
menuentry 'CentOS Linux (4.4.184-1.el7.elrepo.x86_64) 7 (Core)' --class centos --class gnu-linux --class gnu --class os --unrestricted $menuentry_id_option 'gnulinux-3.10.0-862.el7.x86_64-advanced-021a955b-781d-425a-8250-f39857437658'

Set the default kernel version. The version you set must already exist on the machine; take the exact entry name from the output of cat /boot/grub2/grub.cfg | grep "menuentry " on your own server rather than copying the example below:

grub2-set-default 'CentOS Linux (4.4.184-1.el7.elrepo.x86_64) 7 (Core)'

View the kernel modification result and check that the default kernel version is higher than 4.1; otherwise adjust the default boot entry:

grub2-editenv list

# Reboot for the new kernel to take effect
reboot

4.10 Load ipvs module (execute on all machines)

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

4.11 Add k8s yum source (executed on all machines)

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4.12 Install server prerequisite software

yum -y install wget vim iftop iotop net-tools nmon telnet lsof iptraf nmap httpd-tools lrzsz mlocate ntp ntpdate strace libpcap nethogs bridge-utils bind-utils nc nfs-utils rpcbind dnsmasq python python-devel tcpdump tree

5. Install keepalived and haproxy

5.1 Install keepalived and haproxy on master01 and master02

master01's priority is 250 and master02's is 200; the other settings are identical.

master01(192.168.4.129)

vim /etc/keepalived/keepalived.conf

Pay attention to the interface setting: it must be the name of your server's network interface. Do not paste it blindly.

! Configuration File for keepalived

global_defs {
router_id LVS_DEVEL
}

vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}

vrrp_instance VI_1 {
state MASTER
interface ens160
virtual_router_id 51
priority 250
advert_int 1
authentication {
auth_type PASS
auth_pass 35f18af7190d51c9f7f78f37300a0cbd
}
virtual_ipaddress {
192.168.4.110
}
track_script {
check_haproxy
}
}

master02(192.168.4.130)

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
router_id LVS_DEVEL
}

vrrp_script check_haproxy {
script "killall -0 haproxy"
interval 3
weight -2
fall 10
rise 2
}
vrrp_instance VI_1 {
state BACKUP
interface ens160
virtual_router_id 51
priority 200
advert_int 1
authentication {
auth_type PASS
auth_pass 35f18af7190d51c9f7f78f37300a0cbd
}
virtual_ipaddress {
192.168.4.110
}
track_script {
check_haproxy
}
}

5.2 haproxy configuration

The haproxy configuration on master01 and master02 is identical, so it is shown only once below. HAProxy listens on port 8443 (making the VIP endpoint 192.168.4.110:8443) because haproxy and the kube-apiserver run on the same servers, so listening on 6443 would conflict.

On both master01 (192.168.4.129) and master02 (192.168.4.130):

vim /etc/haproxy/haproxy.cfg

#---------------------------------------------------------------------

# Global settings

#--------------------------------------- ------------------------------
global
# to have these messages end up in /var/ log/haproxy.log you will
# need to:
#
# 1) configure syslog to accept network log events. This is done
# by adding the'-r' option to the SYSLOGD_OPTIONS in
# /etc/sysconfig/syslog
#
# 2) configure local2 events to go to the /var/log/haproxy.log
# file. A line like the following can be added to
# /etc/sysconfig/syslog
#
# local2.* /var/log/haproxy.log
#
#log 127.0.0.1 local2
log 127.0.0.1 local0 info

chroot /var/lib/haproxy
pidfile /var/run/haproxy .pid
maxconn 4000
user haproxy
group haproxy
daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will

# use if not designated in their block

#--- -------------------------------------------------- ----------------
defaults
mode http
log global
option httplog
option dontlognull
option http-server-close
option forwardfor except 127.0.0.0/8
option redispatch
retries 3
timeout http-request 10s
timeout queue 1m
timeout connect 10s
timeout client 1m
timeout server 1m
timeout http-keep-alive 10s
timeout check 10s
maxconn 3000

#---------------------------------------------------------------------

# kubernetes apiserver frontend which proxys to the backends

#------------- -------------------------------------------------- ------
frontend kubernetes-apiserver
mode tcp
bind *:8443
option tcplog
default_backend kubernetes-apiserver

#------------------------------------------------- --------------------

# round robin balancing between the various backends

#----- -------------------------------------------------- --------------
backend kubernetes-apiserver
mode tcp
balance roundrobin
server master01 192.168.4.129:6443 check
server master02 192.168.4.130:6443 check
server master03 192.168.4.133:6443 check

#---------------------------------------------------------------------

# collection haproxy statistics message

#--------------------------------------- ------------------------------
listen stats
bind *:1080
stats auth admin:awesomePassword
stats refresh 5s
stats realm HAProxy\ Statistics
stats uri /admin?stats

5.3 Set service startup sequence and dependencies (master01 and master02 operations)

vim /usr/lib/systemd/system/keepalived.service

[Unit]
Description =LVS and VRRP High Availability Monitor
After=syslog.target network-online.target haproxy.service
Requires=haproxy.service
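After editing the unit file, reload systemd so the new dependency is picked up (a small step assumed here; it is not shown in the original snippet):

systemctl daemon-reload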

5.4 Start the services

systemctl enable keepalived && systemctl start keepalived && systemctl enable haproxy && systemctl start haproxy && systemctl status keepalived && systemctl status haproxy
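A quick sanity check (hedged, assuming the interface name ens160 and VIP 192.168.4.110 from the configurations above): the VIP should be bound on master01, the higher-priority node, and haproxy should be listening on 8443 on both masters.

ip addr show ens160 | grep 192.168.4.110
ss -lntp | grep 8443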

6. Install docker

6.1 Install some necessary system tools (on all servers)

yum install -y yum-utils device-mapper -persistent-data lvm2

6.2 Add the docker software source (on all servers)

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo

yum list docker-ce --showduplicates | sort -r

yum -y install docker-ce-18.06.3.ce-3.el7

# Optional: allow a non-root user (here "bumblebee") to run docker
usermod -aG docker bumblebee

6.3 Configure the daemon.json file (on all servers)

mkdir -p /etc/docker/ && cat > /etc/docker/daemon.json << EOF
{
  "registry-mirrors": ["https://c6ai9izk.mirror.aliyuncs.com"],
  "max-concurrent-downloads": 3,
  "data-root": "/data/docker",
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "1"
  },
  "max-concurrent-uploads": 5,
  "storage-driver": "overlay2",
  "storage-opts": ["overlay2.override_kernel_check=true"]
}
EOF

6.4 Start and check the docker service

systemctl enable docker && systemctl restart docker && systemctl status docker

7 Use kubeadm to deploy kubernetes

7.1 Configure kubernetes.repo (configure on every machine)

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

7.2 Install prerequisite software (installed on all machines)

yum install -y kubelet-1.14.2 kubeadm-1.14.2 kubectl-1.14.2 ipvsadm ipset
# Enable kubelet on boot. Note: do not run systemctl start kubelet at this point; it will fail.
# kubelet starts automatically once kubeadm init/join succeeds.
systemctl enable kubelet

7.3 Modify the initial configuration

Use kubeadm config print init-defaults > kubeadm-init.yaml to print the default configuration, then modify it for your own environment.

You need to modify advertiseAddress, controlPlaneEndpoint, imageRepository, and serviceSubnet.

advertiseAddress is master01's IP, controlPlaneEndpoint is the VIP plus port 8443, imageRepository is switched to the Aliyun mirror, and serviceSubnet should be an IP range that is not used anywhere else on your network.

[root@master01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.4.129
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta1
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.4.110:8443"
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.14.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.209.0.0/16"
scheduler: {}

7.4 Pre-pull the images

[root@master01 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1

7.5 Initialization

[root@master01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master01 localhost] and IPs [192.168.4.129 127.0.0.1 ::1]
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.129 192.168.4.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 17.506253 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:

  kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
    --experimental-control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1

The kubeadm init phases explained:

  • [init]: Specify the version for initialization operation
  • [preflight]: Check before initialization and download the required Docker image file
  • [kubelet-start]: Generate the kubelet configuration file "/var/lib/kubelet/config.yaml". The kubelet cannot start without this file, which is why starting the kubelet before initialization fails.

  • [certificates]: Generate certificates used by Kubernetes and store them in the /etc/kubernetes/pki directory.
  • [kubeconfig]: Generate a KubeConfig file and store it in the /etc/kubernetes directory. Corresponding files are required for communication between components.
  • [control-plane]: Use the YAML file in the /etc/kubernetes/manifest directory to install the Master component.
  • [etcd]: Use /etc/kubernetes/manifest/etcd.yaml to install Etcd service.
  • [wait-control-plane]: Wait for the Master components deployed by control-plane to start.
  • [apiclient]: Check the service status of the Master component.
  • [uploadconfig]: Update configuration
  • [kubelet]: Use configMap to configure kubelet.
  • [patchnode]: Update CNI information to Node and record it by way of comments.
  • [mark-control-plane]: Tag the current node, with the role of Master, and the unschedulable tag, so that the Master node will not be used to run Pod by default.
  • [bootstrap-token]: Generate token and record it, and then use kubeadm join to add nodes to the cluster.
  • [addons]: Install additional components CoreDNS and kube-proxy

7.6 Preparing Kubeconfig file for kubectl

By default, kubectl looks for a file named config in the .kube directory under the current user's home directory. Here we copy the admin.conf generated in the [kubeconfig] step of initialization to .kube/config.

[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

This configuration file records the access address of the API server, so later kubectl commands can connect to the API server directly.
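As a quick check (a hedged example; the exact wording of the output depends on the kubectl version), kubectl cluster-info should now report the API server at the VIP:

kubectl cluster-info
# Kubernetes master is running at https://192.168.4.110:8443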

7.7 View component status

[root@master01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 NotReady master 4m20s v1.14.2

Currently there is only one node, with the master role and a NotReady status; it is NotReady because the network plugin has not been installed yet.
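If you want to see the reason directly, the node object records it (a hedged check; the exact message can vary between versions):

kubectl describe node master01 | grep -i 'network plugin'
# ... NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized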

7.8 Deploy the other masters (run these steps on master01)

Copy the certificate files from master01 to the master02 and master03 nodes.

# Copy the certificates to master02
USER=root
CONTROL_PLANE_IPS="master02"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
# Copy the certificates to master03
USER=root
CONTROL_PLANE_IPS="master03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Run the join on master02; note the --experimental-control-plane parameter.

[root@master02 ~]# kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
    --experimental-control-plane
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master02 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.130 192.168.4.110]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master02 localhost] and IPs [192.168.4.130 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master02 localhost] and IPs [192.168.4.130 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master02 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master02 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Note: tokens have a limited lifetime. If the old token has expired, you can create a new one with kubeadm token create --print-join-command.

mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config

Run the join on master03; note the --experimental-control-plane parameter.

[root@master03 ~]# kubeadm join 192.168.4.110:8443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1 \
    --experimental-control-plane
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks before initializing the new control plane instance
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master03 localhost] and IPs [192.168.4.133 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master03 localhost] and IPs [192.168.4.133 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master03 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.209.0.1 192.168.4.133 192.168.4.110]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[certs] Using the existing "sa" key
[kubeconfig] Generating kubeconfig files
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Using existing kubeconfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[check-etcd] Checking that the etcd cluster is healthy
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
[etcd] Announced new etcd member joining to the existing etcd cluster
[etcd] Wrote Static Pod manifest for a local etcd member to "/etc/kubernetes/manifests/etcd.yaml"
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[mark-control-plane] Marking the node master03 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master03 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Control plane (master) label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

    mkdir -p $HOME/.kube
    sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
    sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.
[root@master03 ~]# mkdir -p $HOME/.kube && cp -i /etc/kubernetes/admin.conf $HOME/.kube/config && chown $(id -u):$(id -g) $HOME/.kube/config
[root@master03 ~]# kubectl get nodes
NAME       STATUS     ROLES    AGE     VERSION
master01   NotReady   master   15m     v1.14.2
master02   NotReady   master   3m40s   v1.14.2
master03   NotReady   master   2m1s    v1.14.2

8. Deploy the worker nodes

Run the join on node01 and node03; note that there is no --experimental-control-plane parameter.

Note: tokens have a limited lifetime. If the old token has expired, create a new one on a master node with kubeadm token create --print-join-command, as shown below.
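For example (a hedged sketch; the token and hash shown are placeholders), regenerating the join command on master01 looks like this:

# On master01
kubeadm token create --print-join-command
# Prints a ready-to-use command of the form:
# kubeadm join 192.168.4.110:8443 --token <new-token> --discovery-token-ca-cert-hash sha256:<hash>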

Run the following command on node01 and node03:

kubeadm join 192.168.4.110:8443 --token lwsk91.y2ywpq0y74wt03tb     --discovery-token-ca-cert-hash sha256:0ca0a5fd28409faecba5d2f21aeb010a945a5dae42023fe361424d621708edc1

9. Deploy the calico network plugin

9.1 Download the calico.yaml file

wget -c https://docs.projectcalico.org/v3.6/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

9.2 Modify calico.yaml (adjust to your environment)

Change the value under CALICO_IPV4POOL_CIDR; the default is 192.168.0.0/16.

# The default IPv4 pool to create on startup if none exists. Pod IPs will be
# chosen from this range. Changing this value after installation will have
# no effect. This should fall within `--cluster-cidr`.
- name: CALICO_IPV4POOL_CIDR
value: "10.209.0.0/16"

9.3 Run kubectl apply -f calico.yaml

[root@master01 ~]# kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created

9.4 Check the node status

Before the network component was installed the nodes showed NotReady; after calico is installed they turn Ready, which means the cluster is up and you can move on to verifying that the build succeeded.

[root@master01 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master01 Ready master 23h v1.14.2
master02 Ready master 22h v1.14.2
master03 Ready master 22h v1.14.2
node01 NotReady 19m v1.14.2
node03 NotReady 5s v1.14.2
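If a node stays NotReady for a while after joining (as node01/node03 above, which have only just joined), it can help to confirm that its calico-node pod has started; a hedged check:

kubectl get pods -n kube-system -o wide | grep calico-node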

10. Enable ipvs for kube-proxy (run on a single master node)

10.1 Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs"

kubectl edit cm kube-proxy -n kube-system
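To confirm the change without reopening the editor (a small hedged check):

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'
# mode: "ipvs"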

10.2 Then restart the kube-proxy pods on every node:

[root@master01 ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-8fpjb" deleted
pod "kube-proxy-dqqxh" deleted
pod "kube-proxy-mxvz2" deleted
pod "kube-proxy-np9x9" de leted
pod "kube-proxy-rtzcn" deleted

10.3 Check the kube-proxy pod status

[root@master01 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-4fhpg                           1/1     Running   0          81s
kube-proxy-9f2x6                           1/1     Running   0          109s
kube-proxy-cxl5m                           1/1     Running   0          89s
kube-proxy-lvp9q                           1/1     Running   0          78s
kube-proxy-v4mg8                           1/1     Running   0          99s

10.4 Verify that ipvs is enabled

The log prints "Using ipvs Proxier", which shows that ipvs mode is enabled.

[root@master01 ~]# kubectl logs kube-proxy-4fhpg -n kube-system
I0705 07:53:05.254157 1 server_others.go:176] Using ipvs Proxier.
W0705 07:53:05.255130 1 proxier.go:380] clusterCIDR not specified, unable to distinguish between internal and external traffic
W0705 07:53:05.255181 1 proxier.go:386] IPVS scheduler not specified, use rr by default
I0705 07:53:05.255599 1 server.go:562] Version: v1.14.2
I0705 07:53:05.280930 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0705 07:53:05.281426 1 config.go:102] Starting endpoints config controller
I0705 07:53:05.281473 1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I0705 07:53:05.281523 1 config.go:202] Starting service config controller
I0705 07:53:05.281548 1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I0705 07:53:05.381724 1 controller_utils.go:1034] Caches are synced for endpoints config controller
I0705 07:53:05.381772 1 controller_utils.go:1034] Caches are synced for service config controller

11. Check the ipvs state

[root@master01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 10.209.0.1:443 rr
-> 192.168.4.129:6443 Masq 1 0 0
-> 192.168.4.130:6443 Masq 1 0 0
-> 192.168.4.133:6443 Masq 1 0 0
TCP 10.209.0.10:53 rr
-> 10.209.59.193:53 Masq 1 0 0
-> 10.209.59.194:53 Masq 1 0 0
TCP 10.209.0.10:9153 rr
-> 10.209.59.193:9153 Masq 1 0 0
-> 10.209.59.194:9153 Masq 1 0 0
UDP 10.209.0.10:53 rr
-> 10.209.59.193:53 Masq 1 0 0
-> 10.209.59.194:53 Masq 1 0 0

12. Test by running a container

[root@master01 ~]# kubectl run nginx --image=nginx:1.14 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created

12.1 Check the nginx pods

[root@master01 ~]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx-84b67f57c4-d9k8m 1/1 Running 0 59s 10.209.196.129 node01
nginx-84b67f57c4-zcrxn 1/1 Running 0 59s 10.209.186.193 node03

12.2 Test nginx with curl

[root@master01 ~]# curl 10.209.196.129
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.

[root@master01 ~]# curl 10.209.186.193
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
Seeing "Welcome to nginx" shows the pods are running normally, which in turn indicates the cluster is working.

13. Test DNS

After entering the container, run nslookup kubernetes.default.

[root@master01 ~]# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl:/ ]$ nslookup kubernetes.default
Server:    10.209.0.10
Address 1: 10.209.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.209.0.1 kubernetes.default.svc.cluster.local
# Output like this means DNS is working.

At this point the Kubernetes cluster deployment is complete.

14. Common errors

14.1 Image pull failure

Warning FailedCreatePodSandBox 3m44s (x17 over 28m) kubelet, node01 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: dial tcp 74.125.204.82:443: connect: connection timed out

Solution

First pull the corresponding image from another registry, then re-tag it with docker tag:

docker pull mirrorgooglecontainers/pause:3.1

docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
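Note that the pull and re-tag have to be done on every node that reports the error; a quick hedged check that the image is now present locally:

docker images | grep 'k8s.gcr.io/pause'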
