# K8S (4): Setting Up a Kubernetes Cluster with kubeadm on Ubuntu
By 小笨蛋 · Published September 20, 2021 · Updated January 8, 2022
> Environment: Ubuntu 18.04

### Install the Kubernetes master node

Run the following command to initialize the master node. It points kubeadm at the configuration file to use for initialization, and the `--experimental-upload-certs` flag uploads the control-plane certificates so they are distributed automatically when more nodes join later. The trailing `tee kubeadm-init.log` saves the output to a log file.

**Run:**

`kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log`
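The `kubeadm.yml` file itself was prepared earlier in this series and is not reproduced in this post. As a rough sketch only (the field values below are assumptions inferred from the sample output later in this post, not the author's actual file), a minimal config might look like:

```shell
# Hypothetical minimal kubeadm.yml; advertiseAddress, token, version, and
# podSubnet are assumptions, not values confirmed by this post.
cat > kubeadm.yml <<EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.141.130
bootstrapTokens:
  - token: "abcdef.0123456789abcdef"
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.1
networking:
  podSubnet: "10.244.0.0/16"
EOF
```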
**If you see the following error:**

> unknown flag: --experimental-upload-certs

replace `--experimental-upload-certs` with `--upload-certs`, i.e.:

`kubeadm init --config=kubeadm.yml --upload-certs | tee kubeadm-init.log`

------------

**If you see the following error:**

> The HTTP call equal to 'curl -sSL http://localhost:10255/healthz' failed with error: Get http://localhost:10255/healthz: dial tcp 127.0.0.1:10255: getsockopt: connection refused.

open the kubelet drop-in with `vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and add the line `Environment="KUBELET_SYSTEM_PODS_ARGS=--pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --fail-swap-on=false"`. Save the file, then run `systemctl daemon-reload` followed by `systemctl restart kubelet`.

------------

**If you see the following error:**

![](/uploads/images/20210920/110829-13d1977fd5f84c02bdcc4e73e9d6e7c2.png)

delete the existing manifests: `rm -rf /etc/kubernetes/manifests`

------------

**If you see the following error:**

![](/uploads/images/20210920/110956-bd072be136a649948713ba6753c54f2a.png)

the required ports are already in use. Run `kubeadm reset` to reset kubeadm.

------------

**If you see the following error:**

> dial tcp 127.0.0.1:10248: connect: connection refused
>
> [kubelet-check] It seems like the kubelet isn't running or healthy.
> [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get http://localhost:10248/healthz: dial tcp 127.0.0.1:10248: connect: connection refused.

this is a cgroup driver mismatch: the kubelet expects the `systemd` cgroup driver, while Docker defaults to `cgroupfs`. Change Docker's cgroup driver by creating the configuration file /etc/docker/daemon.json with the following content:

`{"exec-opts": ["native.cgroupdriver=systemd"]}`

Note: this overwrites any existing /etc/docker/daemon.json.
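In the original post this step is shown only as an image; a shell equivalent (an assumption, not the author's exact command) would be:

```shell
# Optional: confirm the current Docker cgroup driver first
# (prints e.g. "Cgroup Driver: cgroupfs")
docker info | grep -i 'cgroup driver'

# Write the config; this overwrites any existing /etc/docker/daemon.json
echo '{"exec-opts": ["native.cgroupdriver=systemd"]}' | sudo tee /etc/docker/daemon.json
```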
Then restart docker and the kubelet so the change takes effect:

```shell
systemctl daemon-reload
systemctl restart docker
systemctl restart kubelet
```

Now try to initialize the Kubernetes cluster again:

```shell
sudo kubeadm reset
sudo kubeadm init
```

------------

```shell
# On success the output looks like this
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
    [WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.141.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003326 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 2cd5b86c4905c54d68cc7dfecc2bf87195e9d5d90b4fff9832d9b22fc5e73f96
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

# Seeing this line means the installation succeeded
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

# Worker nodes will later join the cluster with the following command
kubeadm join 192.168.141.130:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:cab7c86212535adde6b8d1c7415e81847715cfc8629bb1d270b601744d662515
```

> Note: if the Kubernetes version you install does not match the image versions you downloaded, you will hit a "timed out waiting for the condition" error. If initialization fails partway through, or you want to change the configuration, run `kubeadm reset` to reset everything and then initialize again.

### Configure kubectl

```shell
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

# For non-root users
chown $(id -u):$(id -g) $HOME/.kube/config
```

### Verify the installation

```shell
kubectl get node

# If node information is printed, the setup succeeded
NAME                STATUS     ROLES    AGE     VERSION
kubernetes-master   NotReady   master   8m40s   v1.14.1
```

At this point the master node is fully configured.

### What kubeadm init does

- init: start initialization with the specified version
- preflight: run pre-initialization checks and download the required Docker images
- kubelet-start: generate the kubelet configuration file `/var/lib/kubelet/config.yaml`; the kubelet cannot start without this file, which is why the kubelet fails to start before initialization
- certificates: generate the certificates Kubernetes uses, stored under `/etc/kubernetes/pki` (see the inspection sketch after this list)
- kubeconfig: generate the KubeConfig files under `/etc/kubernetes`; components use these to communicate with each other
- control-plane: install the Master components from the YAML files under `/etc/kubernetes/manifests`
- etcd: install the etcd service from `/etc/kubernetes/manifests/etcd.yaml`
- wait-control-plane: wait for the Master components deployed in the control-plane step to start
- apiclient: check the health of the Master components
- uploadconfig: store the configuration that was used
- kubelet: configure the kubelet via a ConfigMap
- patchnode: record CNI information on the Node as annotations
- mark-control-plane: label the current node with the Master role and a no-schedule taint, so the Master node is not used to run Pods by default
- bootstrap-token: generate the token that `kubeadm join` will use later to add nodes to the cluster (see the closing note below)
- addons: install the CoreDNS and kube-proxy add-ons
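Several of these steps leave artifacts on disk or in the cluster, so they are easy to verify by hand. A short illustrative sketch (the paths and ConfigMap name come from the init output above):

```shell
# Certificates generated by the "certificates" step
ls /etc/kubernetes/pki

# Static Pod manifests used by the "control-plane" and "etcd" steps
ls /etc/kubernetes/manifests

# Configuration stored by the "uploadconfig" step
kubectl -n kube-system get configmap kubeadm-config -o yaml
```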
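One closing note on the bootstrap-token step: the token embedded in the `kubeadm join` command printed during init has a limited lifetime (24 hours by default), so a worker joining later may need a fresh one. A new join command can be generated on the master:

```shell
# Creates a new bootstrap token and prints the matching kubeadm join command
kubeadm token create --print-join-command
```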