# Step 1: Disable the firewall

Reason: the iptables firewall filters and forwards network traffic. On an internal-network cluster it is usually disabled so that it cannot degrade network performance.

```shell
systemctl stop firewalld
systemctl disable firewalld
```

Disable SELinux, or put it in permissive mode with `setenforce 0`: disable it temporarily first, then make the change permanent. Reason: SELinux is an added Linux security component that many components are incompatible with, and SELinux must also be off for containers to access the host system.

```shell
setenforce 0                                                        # temporary
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config  # permanent
```

Disable swap, temporarily first and then permanently. Reason: when memory runs low, Linux automatically uses swap, which hurts performance and makes container OOM conditions undetectable.

```shell
swapoff -a                            # temporary
sed -ri 's/.*swap.*/#&/' /etc/fstab   # permanent
```

Load the `overlay` and `br_netfilter` kernel modules:

```shell
sudo modprobe overlay        # overlay is the filesystem used for layered container image storage
sudo modprobe br_netfilter   # needed for packet filtering on bridged networks

# Load both modules automatically at boot
cat <<EOF | sudo tee /etc/modules-load.d/containerd.conf
overlay
br_netfilter
EOF
```

Set the kernel network parameters:

```shell
cat <<EOF | sudo tee /etc/sysctl.d/99-kubernetes-cri.conf
# Let iptables rules process packets crossing iptables-managed bridges
net.bridge.bridge-nf-call-iptables = 1
# Enable IPv4 forwarding; required for the Kubernetes Pod network, since
# Pod-to-Pod traffic is forwarded through the nodes
net.ipv4.ip_forward = 1
# Same for ip6tables; required for IPv6 Kubernetes clusters
net.bridge.bridge-nf-call-ip6tables = 1
EOF

# Reload system-level kernel parameters and verify the settings above took effect
sudo sysctl --system
```

Install containerd:

```shell
sudo dnf config-manager --add-repo=https://download.docker.com/linux/centos/docker-ce.repo
sudo dnf update
sudo dnf install -y containerd
sudo mkdir -p /etc/containerd
sudo containerd config default | sudo tee /etc/containerd/config.toml
sudo vi /etc/containerd/config.toml
```

Make two changes in `/etc/containerd/config.toml`:

1. Find `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]` and change `SystemdCgroup` to `true`. This makes containerd use systemd as its cgroup manager; cgroups are the Linux kernel feature for limiting, accounting for, and isolating the resource usage (CPU, memory, disk I/O, etc.) of process groups.
2. Find `sandbox_image = "k8s.gcr.io/pause:3.6"` and change it to `sandbox_image = "registry.aliyuncs.com/google_containers/pause:3.6"`. The pause container is the parent container of every Pod and holds the network and IPC namespaces for the Pod's other containers. Switching from Google's registry to the Aliyun mirror keeps the image reachable.

```shell
sudo systemctl restart containerd
sudo systemctl enable containerd
```

Add the Kubernetes package repository:

```shell
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
# On arm64, use this baseurl instead:
# baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-aarch64
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
# Enable this repository
enabled=1
# Disable GPG signature checking (GPG signatures verify package integrity and origin)
gpgcheck=0
repo_gpgcheck=0
# Location of the GPG public keys
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
```

Install the Kubernetes tools:

```shell
sudo dnf update
sudo dnf install -y kubelet kubeadm kubectl
sudo systemctl enable kubelet
```

Set the hostnames:

```shell
# On the master node
sudo hostnamectl set-hostname "master-node"
# On the worker node
sudo hostnamectl set-hostname "worker-node1"
```

Add the cluster hosts to `/etc/hosts`:

```shell
cat >> /etc/hosts << EOF
192.168.1.15 master-node
192.168.1.14 worker-node1
192.168.1.13 worker-node2
192.168.1.16 worker-node3
EOF

cat >> /etc/hosts <<EOF
192.168.31.113 minio.dduping.com
EOF
```

Set up passwordless SSH login:

```shell
ssh-copy-id -i id_rsa.pub ming@192.168.31.132
```

Install ufw and open the required ports:

```shell
yum install epel-release
yum install --enablerepo="epel" ufw
ufw --version

# On the master machine:
sudo ufw allow 6443/tcp
sudo ufw allow 2379/tcp
sudo ufw allow 2380/tcp
sudo ufw allow 10250/tcp
sudo ufw allow 10251/tcp
sudo ufw allow 10252/tcp
sudo ufw allow 10255/tcp
sudo ufw reload

# On the worker machines:
sudo ufw allow 10251/tcp
sudo ufw allow 10255/tcp
sudo ufw reload
```

Initialize the cluster:

```shell
kubeadm init \
  --apiserver-advertise-address=YourMasterIP \
  --image-repository registry.aliyuncs.com/google_containers \
  --pod-network-cidr=10.244.0.0/16

# Backup command (regenerate the join token)
sudo kubeadm token create --print-join-command
```

Create the kubeconfig directory (the `mkdir -p $HOME/.kube` step must also be run on worker nodes):

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

Join the nodes to the cluster:

```shell
kubeadm join 192.168.1.15:6443 --token dav4g8.ozu40s6q3v3andxm --discovery-token-ca-cert-hash sha256:d091cd31d9ab6fe1601225f0c11b201b354c1f2759ca2bea37bc2d9ca17c8d32
```

Label the child node as a worker, and give worker nodes access to the cluster:

```shell
kubectl label node worker-node3 node-role.kubernetes.io/worker=worker
scp $HOME/.kube/config root@YourWorkerIP:~/.kube/config
```

Create a dashboard token:

```shell
kubectl -n kubernetes-dashboard create token admin-user
```
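The long `--discovery-token-ca-cert-hash` value in the join command is a SHA-256 digest of the cluster CA's public key, and can be recomputed if you lose the join command. The sketch below shows the derivation on a throwaway self-signed certificate, which stands in for the real `/etc/kubernetes/pki/ca.crt`; the temp-directory path and variable names are my own:

```shell
# Sketch: recompute a kubeadm-style discovery-token-ca-cert-hash.
# A throwaway self-signed cert stands in for /etc/kubernetes/pki/ca.crt.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout "$dir/ca.key" -out "$dir/ca.crt" \
  -subj "/CN=kubernetes" -days 1 2>/dev/null

# Extract the public key, DER-encode it, and hash it with SHA-256
hash=$(openssl x509 -pubkey -noout -in "$dir/ca.crt" \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | awk '{print $NF}')
echo "sha256:$hash"
```

On a real master node you would run the same pipeline against `/etc/kubernetes/pki/ca.crt`; `kubeadm token create --print-join-command` prints the same value.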
The command prints the dashboard token, for example:

```
eyJhbGciOiJSUzI1NiIsImtpZCI6Il8xTDQxZlRFamJEck5MTEtiRXdCUUN0dk5XS0ZzRzBxcGE1cC1ieHNuMFUifQ.eyJhdWQiOlsiaHR0cHM6Ly9rdWJlcm5ldGVzLmRlZmF1bHQuc3ZjLmNsdXN0ZXIubG9jYWwiXSwiZXhwIjoxNjgyNjUyMDY2LCJpYXQiOjE2ODI2NDg0NjYsImlzcyI6Imh0dHBzOi8va3ViZXJuZXRlcy5kZWZhdWx0LnN2Yy5jbHVzdGVyLmxvY2FsIiwia3ViZXJuZXRlcy5pbyI6eyJuYW1lc3BhY2UiOiJrdWJlcm5ldGVzLWRhc2hib2FyZCIsInNlcnZpY2VhY2NvdW50Ijp7Im5hbWUiOiJhZG1pbi11c2VyIiwidWlkIjoiZDVhNmFkMGYtZWFiMC00NTJlLTk0ZGUtMTZhM2I0ZDMzZGZhIn19LCJuYmYiOjE2ODI2NDg0NjYsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDprdWJlcm5ldGVzLWRhc2hib2FyZDphZG1pbi11c2VyIn0.duG9p12Anj3sCcLQMulnNCihJf9Nrx9NR78XNI2VFrovgkLq2ZXndKgeJ9J-vMT0pmqbrlLlF_0SjA1B6I8XmYSWAJe12wCs5q_P7N_QKBOJpBhUCc2fnvPOmCzjvfej49o44r1u2WOpolD67GmKQeF4qMaoenPD1vRMaUsTTihewicbzDFRAKTbC-SGFHttTI7CAv1h9ZtcWcvpsgXrjcZ7hUC54Nla5iwVRlMkR6PrKuVdPufnkyh03hTjGpUCze9jeaBkq7ynUFgL5zzX-v6i4UvXjGipm-6JVgxZ7NkkM8RAteaUHvZHwW6I4L63NQ9sjM_khcCuAA-nCqu9Yg
```

Add crictl support:

```shell
crictl config runtime-endpoint unix:///var/run/containerd/containerd.sock
crictl config image-endpoint unix:///var/run/containerd/containerd.sock
```

Mount the NFS share on the NAS:

```shell
dnf install nfs-utils
mkdir -p /nfs/data
# NFS server address
mount -t nfs 192.168.1.32:/KubernetesStorage /nfs/data/
systemctl enable nfs-client.target
systemctl enable rpcbind

# If NFS later reports "program not registered", restart NFS:
systemctl restart nfs
# If NFS still errors:
systemctl restart nfs-server.service
systemctl restart rpcbind
```

Inspect the exports and verify the mount:

```shell
showmount -e 192.168.1.32    # list the server's NFS exports
df -h                        # verify the mount succeeded
# 192.168.31.123:/K8storage  5.5T  2.1T  3.5T  38% /nfs/data
echo "testOK?" > /nfs/data/testOk.txt
# Check whether testOk.txt shows up on the NFS server; if it does, the mount works
```

Configure shell completion:

```shell
yum install -y bash-completion
echo 'source <(kubectl completion bash)' >> ~/.bashrc
source ~/.bashrc

vim ~/.bashrc
# add:
alias k='kubectl'
```

Install helm (master node) and add repositories for the NFS provisioner chart:

```shell
wget https://get.helm.sh/helm-v3.11.2-linux-amd64.tar.gz
tar -zxvf helm-v3.11.2-linux-amd64.tar.gz
mv linux-amd64/helm /usr/local/bin/helm

helm repo add azure http://mirror.azure.cn/kubernetes/charts/
helm repo add stable https://kubernetes.oss-cn-hangzhou.aliyuncs.com/charts
```

Check the master's taints and allow it to participate in scheduling:

```shell
kubectl describe node master-node | grep Taints
kubectl taint nodes master-node node-role.kubernetes.io/control-plane-
```

Linux proxy configuration. If you need to reach hosts outside the firewall, you can use your own proxy:

```shell
# IPs that bypass the proxy
export no_proxy="127.0.0.1,192.168.1.15,192.168.1.13,192.168.1.14"
export http_proxy="http://192.168.1.5:7891"
export https_proxy="https://192.168.1.5:7891"

# Remove the proxy
unset http_proxy
unset https_proxy

# Force all traffic in the current session through the proxy (adjust the LAN IP as needed)
export https_proxy=http://192.168.31.6:7890 http_proxy=http://192.168.31.6:7890 all_proxy=socks5://192.168.31.6:7890
```

Configure plain-HTTP registry access on every node. Since my local image registry has no HTTPS certificate and only internal machines can reach it, I enabled HTTP access:

```shell
vim /etc/containerd/config.toml
```

Under `[plugins."io.containerd.grpc.v1.cri".registry.mirrors]` add two sub-plugins:

```toml
[plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
  endpoint = ["https://registry.cn-hangzhou.aliyuncs.com"]

[plugins."io.containerd.grpc.v1.cri".registry.mirrors."192.168.1.32:55000"]
  endpoint = ["http://192.168.1.32:55000"]
```

Inspect the forwarding rules:

```shell
iptables -nL FORWARD
```

If the network is unreachable, the problem may be here: if the Kubernetes rules are missing, restart containerd.
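The proxy exports and `unset` commands shown earlier can be wrapped in a pair of small helper functions so a session can toggle the proxy quickly. This is a sketch: `proxy_on`/`proxy_off` are hypothetical names of my own, and the addresses are the ones from the notes above.

```shell
# Sketch: toggle the session proxy on and off.
# proxy_on/proxy_off are hypothetical helpers; IPs and ports copied from the notes above.
proxy_on() {
  export no_proxy="127.0.0.1,192.168.1.15,192.168.1.13,192.168.1.14"
  export http_proxy="http://192.168.1.5:7891"
  export https_proxy="https://192.168.1.5:7891"
}
proxy_off() {
  unset http_proxy https_proxy no_proxy
}

proxy_on
echo "proxy: ${http_proxy}"     # prints the configured proxy
proxy_off
echo "proxy: ${http_proxy:-none}"
```

Because the variables are exported, any command started from the session after `proxy_on` (curl, dnf, git, and so on) inherits the proxy settings.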