
(1) Kubernetes (k8s) Installation and Getting Started


Machine preparation

Prepare 4 machines, each with at least 2 GB of RAM and at least 2 CPUs. If resources are limited, virtual machines will do. Install CentOS 7 or later; the version used in this walkthrough is CentOS Linux release 7.6.1810 (Core). The machines are planned as follows:

name     host                     ip               user
master   master, master.k8s.io    192.168.233.129  root
slave1   slave1, slave1.k8s.io    192.168.233.131  root
slave2   slave2, slave2.k8s.io    192.168.233.132  root
slave3   slave3, slave3.k8s.io    192.168.233.133  root

Pod network addresses are allocated from 10.244.0.0/16, and Service network addresses from 10.96.0.0/12.

Machine configuration (using master as the example; the other three slaves are configured the same way)

Configure a static IP

Edit the file /etc/sysconfig/network-scripts/ifcfg-ens33:

ONBOOT=yes
BOOTPROTO="static"
IPADDR=192.168.233.129
NETMASK=255.255.255.0
GATEWAY=192.168.233.2
PREFIX=24
DNS1=192.168.233.2
PEERDNS=no

Set the hostname

hostnamectl set-hostname master
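
On the slaves, set the corresponding names from the machine plan above:

hostnamectl set-hostname slave1    # on slave1
hostnamectl set-hostname slave2    # on slave2
hostnamectl set-hostname slave3    # on slave3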

Configure hosts

/etc/hosts

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.233.129 master master.k8s.io
192.168.233.131 slave1 slave1.k8s.io
192.168.233.132 slave2 slave2.k8s.io
192.168.233.133 slave3 slave3.k8s.io
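
A quick reachability check from the master once /etc/hosts is in place (hostnames as defined above):

for h in slave1 slave2 slave3; do ping -c 1 "$h" > /dev/null && echo "$h ok"; done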

System settings

systemctl command    Description
systemctl start      start a service
systemctl enable     enable a service to start at boot
systemctl status     check a service's status

Enable time synchronization

systemctl start chronyd.service
systemctl enable chronyd.service
systemctl status chronyd.service

Disable the firewall

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl status firewalld.service

Disable SELinux

# Switch SELinux to permissive mode for the current boot, then persist it
setenforce 0
getenforce    # should now print Permissive
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config

(screenshot: SELinux status)

Disable swap

swapoff -a

Edit /etc/fstab and comment out the swap line by prefixing it with #.
(screenshot: /etc/fstab with the swap line commented out)
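
The same change can be made non-interactively (a sketch; it assumes the swap entry is the only fstab line with a whitespace-delimited "swap" field):

sed -i '/\sswap\s/ s/^/#/' /etc/fstab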

The final state should look like this:

free -m

(screenshot: free -m output with swap totals at 0)

It is best to reboot the machine at this point, then move on to installing Docker.

Install Docker

(Reference: Docker CE installation guide for CentOS)

Remove old Docker packages

yum remove docker \
           docker-client \
           docker-client-latest \
           docker-common \
           docker-latest \
           docker-latest-logrotate \
           docker-logrotate \
           docker-engine

Set up docker-ce.repo

yum install -y yum-utils

yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo

Install docker-ce

yum install docker-ce
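
To pin a specific Docker version instead of whatever is latest (optional sketch; the version string below is illustrative):

yum list docker-ce --showduplicates | sort -r    # list available versions
yum install docker-ce-19.03.12                   # install a pinned version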

Configure Docker

mkdir /etc/docker

# Set up the daemon: systemd cgroup driver (must match the kubelet),
# json-file logging capped at 100m, and overlay2 storage.
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

# Keep the iptables FORWARD chain policy at ACCEPT whenever docker starts
sed -i "13i ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT" /usr/lib/systemd/system/docker.service

# Restart the network, then start Docker
service network restart
systemctl daemon-reload
systemctl enable docker
systemctl start docker
docker info
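
A quick check that the systemd cgroup driver took effect (the kubelet expects the two to match):

docker info | grep -i cgroup    # should report: Cgroup Driver: systemd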

Install Kubernetes

(Reference: the official kubeadm installation guide)

Configure kubernetes.repo

Google repo

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Aliyun mirror repo

touch /etc/yum.repos.d/kubernetes.repo
cat > /etc/yum.repos.d/kubernetes.repo <<EOF
[kubernetes]
name=Kubernetes Repo
enabled=1
gpgcheck=1
repo_gpgcheck=1
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Install the k8s packages

# --disableexcludes is needed with the Google repo above, which excludes these packages by default
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
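
Confirm the installed versions before proceeding (a simple sanity check):

kubeadm version
kubelet --version
kubectl version --client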

Configure network and routing settings

Make sure that the br_netfilter module is loaded. This can be done by running

lsmod | grep br_netfilter

To load it explicitly call

modprobe br_netfilter
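
To have the module loaded automatically on every boot (a common approach, using systemd's modules-load.d):

cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF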

Configure k8s.conf

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl --system
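
Spot-check that the bridge settings took effect:

sysctl net.bridge.bridge-nf-call-iptables    # should print: net.bridge.bridge-nf-call-iptables = 1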

enable ip_forward

echo 1 > /proc/sys/net/ipv4/ip_forward

Configure kubelet.service

systemctl enable kubelet.service

Everything up to this point is identical on the master and the slaves.

Download the k8s images

The k8s images cannot be downloaded from the default registry directly, so they must be pulled through a Docker Hub mirror.

Get the list of required image versions

kubeadm config images list

(screenshot: kubeadm config images list output)
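
For this walkthrough the list should contain the same v1.18.6 tags that are pulled below:

k8s.gcr.io/kube-apiserver:v1.18.6
k8s.gcr.io/kube-controller-manager:v1.18.6
k8s.gcr.io/kube-scheduler:v1.18.6
k8s.gcr.io/kube-proxy:v1.18.6
k8s.gcr.io/pause:3.2
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.7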

Download on the master node

docker pull mirrorgcrio/kube-apiserver:v1.18.6
docker pull mirrorgcrio/kube-controller-manager:v1.18.6
docker pull mirrorgcrio/kube-scheduler:v1.18.6
docker pull mirrorgcrio/kube-proxy:v1.18.6
docker pull mirrorgcrio/pause:3.2
docker pull mirrorgcrio/etcd:3.4.3-0
docker pull mirrorgcrio/coredns:1.6.7

docker tag mirrorgcrio/kube-apiserver:v1.18.6 k8s.gcr.io/kube-apiserver:v1.18.6
docker tag mirrorgcrio/kube-controller-manager:v1.18.6 k8s.gcr.io/kube-controller-manager:v1.18.6
docker tag mirrorgcrio/kube-scheduler:v1.18.6 k8s.gcr.io/kube-scheduler:v1.18.6
docker tag mirrorgcrio/kube-proxy:v1.18.6 k8s.gcr.io/kube-proxy:v1.18.6
docker tag mirrorgcrio/pause:3.2 k8s.gcr.io/pause:3.2
docker tag mirrorgcrio/etcd:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag mirrorgcrio/coredns:1.6.7 k8s.gcr.io/coredns:1.6.7

docker rmi mirrorgcrio/kube-apiserver:v1.18.6
docker rmi mirrorgcrio/kube-controller-manager:v1.18.6
docker rmi mirrorgcrio/kube-scheduler:v1.18.6
docker rmi mirrorgcrio/kube-proxy:v1.18.6
docker rmi mirrorgcrio/pause:3.2
docker rmi mirrorgcrio/etcd:3.4.3-0
docker rmi mirrorgcrio/coredns:1.6.7
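
The same pull/tag/rmi sequence can also be written as a loop (an equivalent sketch, using the mirrorgcrio mirror as above):

images="kube-apiserver:v1.18.6 kube-controller-manager:v1.18.6 kube-scheduler:v1.18.6 \
kube-proxy:v1.18.6 pause:3.2 etcd:3.4.3-0 coredns:1.6.7"
for img in $images; do
    docker pull "mirrorgcrio/$img"                      # pull via the mirror
    docker tag "mirrorgcrio/$img" "k8s.gcr.io/$img"     # retag to the name kubeadm expects
    docker rmi "mirrorgcrio/$img"                       # drop the mirror tag
done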

Final image list on the master node

docker images | grep "k8s.gcr.io"

(screenshot: docker images filtered to k8s.gcr.io)

The slave nodes only need kube-proxy:v1.18.6 and pause:3.2:

docker pull mirrorgcrio/kube-proxy:v1.18.6
docker pull mirrorgcrio/pause:3.2

docker tag mirrorgcrio/kube-proxy:v1.18.6 k8s.gcr.io/kube-proxy:v1.18.6
docker tag mirrorgcrio/pause:3.2 k8s.gcr.io/pause:3.2

docker rmi mirrorgcrio/kube-proxy:v1.18.6
docker rmi mirrorgcrio/pause:3.2

Make sure the images are downloaded on all machines ahead of time; otherwise k8s will fail to start later.

Start k8s

master

Run kubeadm init

# Initialize the control plane; the CIDRs match the network plan above
kubeadm init \
--kubernetes-version=v1.18.6 \
--pod-network-cidr=10.244.0.0/16 \
--service-cidr=10.96.0.0/12 \
--apiserver-advertise-address=0.0.0.0

# Make kubectl usable for the current user
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# Install the flannel CNI plugin for the pod network
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
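
If the join command printed by kubeadm init gets lost, it can be regenerated on the master at any time:

kubeadm token create --print-join-command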

slave

Copy /run/flannel/subnet.env from the master node to each slave node.

Run the kubeadm join command, where xxx and yyy come from the log that kubeadm init printed on the master:

kubeadm join 192.168.233.129:6443 --token xxx \
--discovery-token-ca-cert-hash yyy

Check the status

Run on the master node.

Cluster component status

Command:

kubectl get cs

(screenshot: kubectl get cs output)

Troubleshooting

Symptom: on v1.18.6, kubectl get cs reports scheduler and controller-manager as Unhealthy with "connection refused" on 127.0.0.1.

Edit the files /etc/kubernetes/manifests/kube-controller-manager.yaml and /etc/kubernetes/manifests/kube-scheduler.yaml, and comment out or delete the line containing --port=0.

Restart kubelet.service

systemctl restart kubelet.service

Cluster node status

Command:

kubectl get nodes

(screenshot: kubectl get nodes output)

Hello World

Run on the master node.

Create the Service

Create the file hello-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: hello-service
spec:
  selector:
    app: hello-k8s-demo
  ports:
  - protocol: TCP
    port: 81
    targetPort: 80
    nodePort: 30001
  type: NodePort

Run kubectl create. In the Service above, 80 is the nginx container port (targetPort), 81 is the Service port, and 30001 is the NodePort exposed outside the cluster:

kubectl create -f hello-service.yaml

Create the Deployment and Pods

Create the file hello-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-k8s-demo
  name: hello-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-k8s-demo
  template:
    metadata:
      labels:
        app: hello-k8s-demo
    spec:
      containers:
      - image: docker.io/nginx:latest
        name: hello-k8s
        ports:
        - containerPort: 80

Run kubectl create. With replicas set to 2, two slave nodes are chosen to run the nginx containers:

kubectl create -f hello-deployment.yaml
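
The replica count can later be changed without editing the file, for example to scale out to 3 pods:

kubectl scale deployment hello-deployment --replicas=3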

Query the status

kubectl get svc,pod,deployment -o wide

(screenshot: kubectl get svc,pod,deployment -o wide output)

Verify nginx

As shown in the screenshot above, hello-service has cluster IP 10.101.171.221 with internal port 81; the external endpoint is 192.168.233.129 (the master IP) on port 30001.

Verify from inside the cluster, on the master or slave nodes, by accessing http://10.101.171.221:81:

curl http://10.101.171.221:81

If you see the following content, it worked:

<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
body {
width: 35em;
margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif;
}
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

Verify from outside k8s by accessing http://192.168.233.129:30001:

curl http://192.168.233.129:30001

The result is the same nginx welcome page as above.


Summary

Using virtual machines, we can set up a k8s cluster with one master and three slaves; once k8s is up, we can run the nginx hello-world example.
